Re: Back up routines
On Tuesday 28 July 2009 05:35:20 Jon Dowland wrote:
> On Sun, Jul 26, 2009 at 08:51:31PM +0200, Johan Grönqvist wrote:
> > The homepage http://rdiff-backup.nongnu.org/ also mentions some graphical front ends that may be useful, but I have not tried any of them.
>
> I am in the process of packaging archfs, a FUSE-powered user filesystem tool that provides a view onto an rdiff-backup.

I had no idea such a program existed. I've been wanting to add such functionality to my dosbox frontend, but haven't had the time to fiddle with it myself. It's cool to learn something new every day! :)

Do you have an estimate of when it will appear in sid?

--
Thanks: Joseph Rawson

signature.asc
Description: This is a digitally signed message part.
Re: Maintaining personal backports
On Monday 20 July 2009 19:55:30 Kumar Appaiah wrote:
> > On Mon, Jul 20, 2009 at 10:19:29AM -0500, Kumar Appaiah wrote:
> >
> > If you are looking for a small private archive:
> > http://www.debian.org/doc/manuals/debian-reference/ch02.en.html#_small_public_package_archive
> > Also, the debi command in devscripts may reduce dpkg -i.
>
> Thanks for that. It'd be nice to automate the build process as well though, and combine it with these tools. So, I used Osamu's reference to create a small repository for myself, and then put together this piece of 5 minute shell jugglery for one-command building of packages to load in there. It's really not neat, but hey, it works well for a 5 minute effort.

I've had a pretty easy time using reprepro for making a personal repository. You can use it as a partial mirror of what you already have and also add an extra dist section (like lenny-backports) for the backported packages. You can still use dupload or dput to upload the packages by configuring an incoming directory for reprepro to watch. Or you can just add the .changes to reprepro explicitly by calling reprepro -b /path/to/repos include ${source}_${arch}.changes.

Using reprepro makes it easy to upload the new packages to an experimental dist for testing, then call reprepro -b /path/to/repos copysrc experimental lenny-backports $srcname. I had to learn this the hard way, because occasionally some backported packages don't work properly.

> Just run it with the sid source package as argument, and (assuming your directories are set up like mine) it should result in a backported package for you. I am looking to Wikify this, with the full procedure on how to set up the mirror, the pbuilder/cowbuilder build environment, and finally building packages. But before that, I'd appreciate it if others can suggest workarounds for the following kludges:
> 1. I download the Sources file from the mirror. It might become stale, so I'd have to remove it periodically.
> 2.
> I am parsing the output of grep-dctrl with certain assumptions. They might fail for some cases, and are not robust.
> 3. Judging the name of the changes file from the .dsc.
> 4. Checking for errors and bailing out.
>
> Thanks!
> Kumar
>
> #!/bin/sh
> if [ -z "$1" ]; then
>     echo "Usage: $(basename "$0") sid_package"
>     exit 1
> fi
> if [ ! -s Sources ]; then
>     wget ftp://ftp.utexas.edu/pub/debian/dists/unstable/main/source/Sources.bz2 -O - | bzcat > Sources
> fi
> FILE=$(grep-dctrl -X -S "$1" -s Directory,Files Sources | \
>     awk '/^Directory/ { url = $NF } /\.dsc$/ { url = url "/" $NF } END { print url }')
> echo "$FILE"
> dget -d "ftp://ftp.utexas.edu/pub/debian/$FILE"
> sudo cowbuilder --configfile ~/.pbuilderrc-lenny --build "$(basename "$FILE")"
> dupload -t aceslinc "/var/cache/pbuilder/result/$(basename "${FILE%.dsc}")_amd64.changes"

If you have a spare machine, or enough spare RAM to run VirtualBox, you may want to take a look at cowpoke (in the devscripts package). Cowpoke will run cowbuilder on a remote machine (or a VM if you use VirtualBox). Here you get the benefit of having a build log saved for you, having lintian run on the result (you may not care about this) and also having the .changes file signed (you may not care about this either).

On a related note, I've been spending the last week rebuilding lenny packages using alternative CFLAGS and -march options. I have a friend who's running Gentoo, and he keeps telling me that they have a better system for building packages with the options that you select. I decided to try to make my own quick, sloppy build system using multiple buildds, with cowpoke as an example. I've had mixed results, with some packages honoring those options and other packages ignoring them. It's been a very interesting experiment.

--
Thanks: Joseph Rawson
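Kludge #3 above (judging the name of the changes file from the .dsc) is mechanical enough to factor out of the shell juggling. A rough sketch in Python; dsc_to_changes is a hypothetical helper written for illustration, not part of devscripts or any existing tool, and it assumes the usual name_version.dsc to name_version_arch.changes convention:

```python
import os

def dsc_to_changes(dsc_path, arch="amd64"):
    """Derive the .changes filename a binary build of the given .dsc
    should produce (illustrative helper, not an existing tool)."""
    base = os.path.basename(dsc_path)
    stem, ext = os.path.splitext(base)
    if ext != ".dsc":
        raise ValueError("not a .dsc file: %s" % dsc_path)
    # Epochs (e.g. 1:1.5.20-2) never appear in file names, so the stem
    # of the .dsc can be reused unchanged.
    return "%s_%s.changes" % (stem, arch)

print(dsc_to_changes("pool/main/m/mutt/mutt_1.5.20-2.dsc"))
# mutt_1.5.20-2_amd64.changes
```

The script could compute this once instead of assuming $(basename ${FILE%.dsc})_amd64.changes at the dupload step.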
Re: Maintaining personal backports
On Wednesday 22 July 2009 08:02:09 Kumar Appaiah wrote:
> On Wed, Jul 22, 2009 at 09:33:36PM +0900, Osamu Aoki wrote:
>
> And Andrei suggested that I use apt-get source. But I still need to determine some file names using the version of the package, for which I need to parse the Sources file. I have to think of an elegant way to
>
> > Maybe it's easier to parse the output of 'apt-cache showsrc'.
>
> That would help. Thanks again, Andrei.
>
> > How about adding a deb-src line in the chroot pointing to the unstable archive? Then apt-get source any-binary will run in the chroot to get pertinent
>
> I am not sure how this would help, as I run apt-get source outside of the chroot, right?
>
> > I think adding some version using dch should help reduce version name confusion of the built package. dch is in devscripts.
>
> I was also thinking about an automated dch to increase the version to something like ${VER}~mybpo1, or some such thing. I leave it to you to suggest some sane method by which this can be achieved.

Using ~ decreases the version. Try this:

dpkg-source -x dscfile
pushd $src-$ver
dch -l mybpo
dpkg-buildpackage -S
(It's rare, but sometimes you may need a build-dep installed for this, such as po4a.)
popd
dupload ${src}_${newver}_source.changes
cowbuilder ${src}_${newver}.dsc (use -B for DEBBUILDOPTS in pbuilderrc)

> Thanks for all the help, and I hope people find this useful.
> Kumar

--
Thanks: Joseph Rawson
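The remark that using ~ decreases the version follows from dpkg's comparison rules: within a version string, ~ sorts before everything, even the end of the string, so 1.0-1~mybpo1 is older than 1.0-1. A simplified sketch of that ordering in Python (no epoch handling; written for illustration, not dpkg's actual code):

```python
def _order(c):
    # dpkg's character ordering: ~ before everything (even end of
    # string), then letters, then other characters.
    if c == "~":
        return -1
    if c.isalpha():
        return ord(c)
    if c:
        return ord(c) + 256
    return 0  # end of string

def debian_compare(a, b):
    """Return -1, 0 or 1, comparing two version strings the way dpkg
    compares the upstream/revision parts (simplified sketch)."""
    while a or b:
        # compare the leading runs of non-digit characters
        while (a and not a[0].isdigit()) or (b and not b[0].isdigit()):
            ca = _order(a[0]) if a and not a[0].isdigit() else _order("")
            cb = _order(b[0]) if b and not b[0].isdigit() else _order("")
            if ca != cb:
                return -1 if ca < cb else 1
            if a and not a[0].isdigit():
                a = a[1:]
            if b and not b[0].isdigit():
                b = b[1:]
        # then compare the leading runs of digits numerically
        na, nb = "", ""
        while a and a[0].isdigit():
            na, a = na + a[0], a[1:]
        while b and b[0].isdigit():
            nb, b = nb + b[0], b[1:]
        if int(na or "0") != int(nb or "0"):
            return -1 if int(na or "0") < int(nb or "0") else 1
    return 0

print(debian_compare("1.0-1~mybpo1", "1.0-1"))   # -1 (the backport is older)
print(debian_compare("1.0-1mybpo1", "1.0-1"))    # 1 (without ~ it would be newer)
```

This is why ${VER}~mybpo1 upgrades cleanly to the real ${VER} later, while ${VER}mybpo1 would not.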
Re: Maintaining personal backports
On Wednesday 22 July 2009 09:18:11 Kumar Appaiah wrote:
> On Wed, Jul 22, 2009 at 08:56:25AM -0500, Joseph Rawson wrote:
> > Using reprepro makes it easy to upload the new packages to an experimental dist for testing, then call reprepro -b /path/to/repos copysrc experimental lenny-backports $srcname. I had to learn this the hard way, because occasionally some backported packages don't work properly.
>
> Thanks for the tip. I'll note it down for reference.
>
> > If you have a spare machine, or enough spare RAM to run VirtualBox, you may want to take a look at cowpoke (in the devscripts package). Cowpoke will run cowbuilder on a remote machine (or a VM if you use VirtualBox). Here you get the benefit of having a build log saved for you, having lintian run on the result (you may not care about this) and also having the .changes file signed (you may not care about this either).
>
> True. But it's a personal machine, and only for a few packages.
>
> > On a related note, I've been spending the last week rebuilding lenny packages using alternative CFLAGS and -march options. I have a friend who's running Gentoo, and he keeps telling me that they have a better system for building packages with the options that you select. I decided to try to make my own quick, sloppy build system using multiple buildds, with cowpoke as an example. I've had mixed results, with some packages honoring those options and other packages ignoring them. It's been a very interesting experiment.
>
> This interests me a lot. I have been thinking for a long time about the Gentoo way, and wondering why it should be any different for Debian. Let me detail what my idea is, since you've pretty much been doing something similar.
>
> Suppose there is a Debian package which uses configure and supports several options using the many --enable-feature flags, or, alternately, disables some in a similar manner.
> If you want a custom package, you would have to do apt-get source pkg, and manually edit the rules file to enable or disable the options, or change the CFLAGS or compiler options. Not too difficult, but the method differs from package to package. Why not alter the rules file to provide default values, and alter itself according to the environment, or according to some settings in a file like Gentoo's /etc/make.conf?
>
> To firm up my description, consider the case of mutt, or elinks. Say you don't need mutt's IMAP support or SMTP support, or elinks' 256 colour support. It's not too tough to get the source package, modify one or two lines, and build it. But what I am hoping for is something like USE="-smtp -imap" debuild or the like, and other options such as compiler flags could also be specified. This is much less kludgey, and much more automated, like Gentoo. Granted, this would require modifying debian/rules files to be sensitive to the environment variables, but I was still hopeful that if we can formulate a standard to adhere to, we could propose this to some package maintainers for packages where it could make a difference (smaller executable sizes, faster/more optimized performance for number crunching, etc.).
>
> Do you think this is a good idea?

While I think it's a good idea, making a proposal that would be acceptable won't be easy. One reason is that each USE flag would have to be well specified or defined so that its meaning is clear. The actual use of those USE flags would only be for those people who would be building their own distribution based on the Debian sources. It would be unreasonable for Debian to try to distribute binaries for different combinations of those flags (or even a small subset of commonly expected combinations).
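For concreteness, the USE="-smtp -imap" debuild idea boils down to turning an environment variable into configure switches inside debian/rules. A minimal sketch in Python of just the parsing step (the helper and the flag names are hypothetical; nothing like this exists in the Debian toolchain):

```python
def use_flags(use_string, known_flags):
    """Map a Gentoo-style USE string (e.g. "-smtp -imap +gnutls") onto
    ./configure-style switches. known_flags lists the features this
    (imaginary) package's rules file understands."""
    args = []
    for token in use_string.split():
        enable = not token.startswith("-")
        name = token.lstrip("+-")
        if name not in known_flags:
            raise ValueError("unknown USE flag: %s" % name)
        args.append(("--enable-%s" if enable else "--disable-%s") % name)
    return args

print(use_flags("-smtp -imap +gnutls", ["smtp", "imap", "gnutls"]))
# ['--disable-smtp', '--disable-imap', '--enable-gnutls']
```

A rules file could splice the result into its ./configure invocation, with the defaults chosen so that an unset USE reproduces the official build.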
However, Debian already does ship binaries with a common USE combination, which is close to USE="this that +kitchen-sink" (at least mostly; some sources are split into multiple binaries that effectively use different USE flags). Things would have to be done in a way that discourages people who build packages using USE flags that diverge from the official builds from reporting bugs against those packages, as it would be way too difficult to determine where the bug is, what caused it, etc.

In many situations, not only would it be necessary to modify the rules file, but also the control file. On certain packages, it may even be required to modify some of the postinst and preinst scripts. On packages with *.install files in the debian/ directory, it may be difficult for the maintainer to know which files may or may not be present with respect to how the package was configured or built, based on the USE flags. In some cases, entire packages would have to be removed from the control file, as the USE flags wouldn't allow them to be built. This can possibly cause problems further down in the package tree, where another package depends on the package that was removed.

It's been taking me a while to think through this as I've been
Re: Maintaining personal backports
On Wednesday 22 July 2009 09:18:11 Kumar Appaiah wrote:
> [...]
>
> Do you think this is a good idea?
>
> Thanks.
> Kumar

BTW, I almost forgot. You may want to take a look at this: http://www.emdebian.org/

There are a lot of interesting ideas here about rebuilding packages for an embedded environment, and many of these ideas are useful for helping to make a customized distribution, regardless of whether the target is embedded or not.

--
Thanks: Joseph Rawson
Re: etckeeper - keeping /etc under version control
On Wednesday 08 July 2009 01:42:56 Scott Gifford wrote:
> Peter Jordan usernetw...@gmx.info writes:
> > Suno Ano, Wed Jun 17 2009 19:07:31 GMT+0200 (CEST):
> > > Metadata = data stored in .svn/ ?
> >
> > Yes. The problem is that Subversion does not have one directory, i.e. one .svn/ at the root of the project where it stores metadata; it scatters them all over the place, which is very annoying. Git only has one .git/ at the project root, and that is it. http://sunoano.name/ws/public_xhtml/scm.html#why_git
> >
> > svk does not store the metadata in the project path at all
>
> Being able to use something like etckeeper with svn (maybe via svk) would be very useful to me; has anybody tried this?
>
> Scott.

I wrote a program a few years ago that uses svn to help keep track of /etc. The name of the program, unsurprisingly, is etcsvn. I stopped using it around 2007, choosing to use paella to help handle some of this. I found that keeping track of /etc in its entirety was somewhat burdensome (this was at a time when packages were placing things in /etc that shouldn't have been there, such as gconf). I'm not sure how loosely you are using the phrase "like etckeeper", but I can give you a short synopsis of similarities and differences.

etckeeper:
- uses directories that are easy to add scripts to, which makes it very flexible (this could be done with etcsvn, with a little bit of work)
- hooks into apt to track changes to /etc made by upgrading packages (I never thought about this when I wrote etcsvn, but it would be a nice addition)

etcsvn:
- /etc is an export, not a working copy (I was concerned about keeping the / partition small, and a working copy is over twice the size)
- the working copy is kept in /var/lib/etcsvn (or somewhere under there, I can't remember now)
- the working copy is only readable by root
- etcsvn sets a umask of 077, so exports back to /etc can be done securely
- etcsvn uses svn properties to keep track of ownership, file permissions, and mtime (this could be extended to keep track of other metadata, including extended attributes; I knew nothing about xattr when I wrote this)
- since subversion can handle empty directories, etcsvn can do so as well
- since subversion can do a checkout of a subdirectory in a repository, you can keep /etc from multiple machines in the same repository (in these cases the repository was never accessible from those machines; I used to keep it on my laptop and use ssh port forwarding and agent forwarding to access the repository on those machines)
- etcsvn doesn't handle authentication to the repository (I normally used ssh to handle this)
- etcsvn may need some work to use https methods better
- etcsvn uses an etcsvn.conf file in its working copy, where you specify the directories and/or files to be tracked. This means that you can also track files and/or directories in /var or elsewhere, and keep ownership, permissions, and mtime straight.

I haven't touched the code in a long time. It may not work as it used to. After looking at it, the last thing I did was a few weeks ago, when I changed all the os.system calls to use subprocess instead (something I've been trying to do across the board with all my python code). I have also been thinking about using another strategy, instead of keeping all of /etc in subversion.
Basically like this:

    get package list through dpkg --get-selections
    conffiles = list()
    tracked_files = list()
    for package in packages:
        dpkg --status  # get the conffiles and add them to the list
    walk through /etc:
        if file in conffiles:
            check md5sum
            if md5sum differs:
                tracked_files.append(file)
        else:
            if file not in ignored_list:
                tracked_files.append(file)
        if file in track_anyway_list:
            tracked_files.append(file)
    add tracked_files to svn

This would keep the number of files being tracked down to a minimum. Anyway, if you are interested in it, look it over, maybe try it out and let me know. It may be outdated, but bringing it back up to date shouldn't be too difficult.

--
Thanks: Joseph Rawson
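The pseudocode above becomes runnable once the dpkg interaction is factored out. A sketch in Python, where conffiles maps each conffile path to the md5sum dpkg recorded (in practice collected from dpkg --status) and etc_files stands in for walking /etc; the helper name is illustrative:

```python
import hashlib

def files_to_track(etc_files, conffiles, ignored, track_anyway):
    """Decide which files under /etc are worth committing.
    etc_files: {path: file contents as bytes}
    conffiles: {path: pristine md5sum from dpkg --status}"""
    tracked = []
    for path, content in etc_files.items():
        md5 = hashlib.md5(content).hexdigest()
        if path in conffiles:
            if md5 != conffiles[path]:   # conffile modified locally
                tracked.append(path)
        elif path not in ignored:        # new file, not ignored
            tracked.append(path)
        if path in track_anyway and path not in tracked:
            tracked.append(path)
    return sorted(tracked)
```

Only locally modified conffiles and unignored new files end up in svn, which is what keeps the tracked set small.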
Re: etckeeper - keeping /etc under version control
On Wednesday 17 June 2009 09:28:36 Oliver Schneider wrote:
> I do not see how this solves the metadata issue if you use a version control system directly, without the smartness etckeeper brings to the table, e.g. by using its .gitignore settings. svk is an attempt to inject the notion of being a decentralized scm into a centralized one (which svn happens to be) ... that has nothing to do with putting /etc under version control.
>
> > metadata = data stored in .svn/ ?
>
> Yes. In this case, all the metadata stored by SVN. Certainly there is more metadata which is nowadays simply ignored by many VCSs, partially due to the discrepancies between different platforms and their implementations of, say, file permissions, partially out of negligence or lack of a *proper* solution.

I considered that svn used the *proper* solution with its properties system. Using subversion properties allows you to store arbitrary metadata on a per file/directory basis. This allows each svn client to handle that metadata in the most appropriate way, without having the application try to decide this for you. (It does make some assumptions for you, such as svn:executable, which doesn't make sense on Windows platforms, but is necessary for unixy systems.)

I solved most of these problems a few years ago by making a program, etcsvn, to handle this stuff for me. After using it for around a year or so, with many machines, I found that it was less hassle to keep from placing the complete /etc directory in subversion, and just track certain files (i.e. those that were changed or new).

> Also, as far as I understand SVK, it's using only one Subversion library, not the whole thing. It's not just a distributed SVN in that sense ... also see: http://svk.bestpractical.com/view/SVKAntiFUD
>
> // Oliver

--
Thanks: Joseph Rawson
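The property-based metadata idea is easy to sketch: stat the file and record ownership, permissions, and mtime as key/value pairs that could then be stored with svn propset. A sketch in Python; the property names below are illustrative, not the ones etcsvn actually uses:

```python
import os
import stat
import tempfile

def metadata_properties(path):
    """Collect the metadata an /etc tracker would need to restore a
    file on export (sketch; property names are made up)."""
    st = os.lstat(path)
    return {
        "metadata:mode": "%o" % stat.S_IMODE(st.st_mode),
        "metadata:uid": str(st.st_uid),
        "metadata:gid": str(st.st_gid),
        "metadata:mtime": str(int(st.st_mtime)),
    }

# quick demonstration on a temporary file
fd, name = tempfile.mkstemp()
os.close(fd)
os.chmod(name, 0o640)
props = metadata_properties(name)
os.unlink(name)
print(props["metadata:mode"])   # 640
```

On export, the stored values can be replayed with os.chmod/os.chown/os.utime, which is also where extended attributes could slot in later.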
Re: Unable to start postgresql on Debian
On Saturday 25 April 2009 03:26:31 Foss User wrote:
> On Sat, Apr 25, 2009 at 1:21 PM, Foss User foss...@gmail.com wrote:
> > I installed postgresql using the following commands:
> >
> > aptitude update
> > aptitude install postgresql
> >
> > I tried to start it:
> >
> > /etc/init.d/postgresql-8.3 start
> >
> > But I don't find any process with the 'sql' string in it in the ps list. Also, I am unable to connect using psql. There are no logs in the /var/log/postgresql directory. Could someone please help in troubleshooting this?
>
> I did a little bit of troubleshooting myself by putting echo statements in the /etc/init.d/postgresql-8.3 file and the script it calls: /usr/share/postgresql-common/init.d-functions. The script is looking for the /etc/postgresql/8.3 directory, but it does not exist on my Debian Squeeze. So, the init script fails. Could someone please tell me why this happened?

This probably means that you don't have a database cluster ready. This is usually done at the end of installing the postgresql package. This happens to me when I install postgresql with a preseed file in the debian installer, but I don't know why you have this problem. To fix this, try this command (as root):

pg_createcluster 8.3 main --start

> I checked all the postgres .deb files in my /var/cache/apt/archives directory. None of them have this /etc/postgresql/8.3 directory. Has something gone wrong with the deb packages in testing?

--
Thanks: Joseph Rawson
Re: debian and ubuntu - answer from user not pretending to be guru
On Saturday 02 May 2009 19:02:42 Christofer C. Bell wrote:
> There's nothing special about how Ubuntu does it. In fact, when you install Etch you can have the Ubuntu behavior at installation time (when it prompts for a root password, select Cancel, then in the installer menu, select the option for configuring user accounts and select No when it asks if you want to allow root to have a password). It's all pretty self-explanatory in the installer.
>
> > This option was removed in Lenny's installer.

Actually, it's still in the installer. The debconf priority was lowered, but you can still set the option in a preseed file, or by telling the installer to lower the priority of debconf, or by passing priority=low to the installer.

> Anyway, again, not criticizing your desire to have a root password, I'm simply pointing out that there's nothing special about what Ubuntu is doing, and if you want to have a root password on Ubuntu and use Ubuntu, you can.

I had to figure that out on my own, long ago. What they did do, and that wasn't always trivial, is modify many of the graphical su programs to use sudo instead of su, which helps bypass the need for a root password. Also, the default for Aptitude::Get-Root-Command on Debian is su, while it's sudo on Ubuntu. Also, the sudo on Ubuntu seems to have its authentication timestamps tied to the terminal/shell (I don't know which) that originally authenticated. So, if you are using sudo in one terminal, then quickly start another terminal and use sudo in that terminal, you will have to authenticate again.

--
Thanks: Joseph Rawson