Re: [dev] a suckless init system?
I'll just note that, regardless of code quality, etc., there's the question of what the end-user usability goals for an init system should be. Is it just to bring up the system, or is it to bring up the system fast enough to use in a quick-booting environment (5s off an SSD)? I'm very inclined towards the latter, but that's partly because I use net-tops in impromptu settings (and it seems like resume from hibernate (due to suspend-to-ram for 8 hours eating too much battery) is slower than a reboot from scratch).

On Thu, Aug 16, 2012 at 9:53 AM, Jens Staal staal1...@gmail.com wrote:
> torsdagen den 16 augusti 2012 06.59.45 skrev pancake:
> > Using mk makes sense as long as init scripts are a dependency based system. Please go on. That looks fun. Looks like doing suckless software implies surviving troll comments. Your software will be suckless when trolls stop throwing rocks at it.
> >
> > On Aug 15, 2012, at 6:02, Sam Watkins s...@nipl.net wrote:
> > > There are dependency based init systems, should use mk for it.
> > >
> > >     net: 1
> > >     inetd: net
> > >     2: getty inetd
> > >
> > >     mk 2    # go to runlevel 2
> > >     # inetd crashes
> > >     mk 2    # bring it back to life
> > >
> > > It would need some sort of procfs view with process names, where unlink sends a term signal, and some extra features for mk, to remove objects in various ways. That could be done in a separate program.
> > >
> > >     mk -rm inetd    # stop inetd (and anything that depends on it)
> > >     mk -rmdeps 1    # go back to just runlevel 1
> > >
> > > Ok, now I should install some sanity into my brain. I wonder if people get kicked off the list for posting stuff like this?
> > > Sam
>
> There is a mk-based init system that was initially presented here: http://9fans.net/archive/2009/10/375
> perhaps a start?

--
cheers, dave tweed__
high-performance computing and machine vision expert: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] a suckless init system?
Well, yes-and-no. The end user (who in the case of many linux desktops and laptops is also the sysadmin) may not be aware of how things are structured under the hood, but they can perceive "laptop X spends a lot of time doing stuff when I turn it on, while laptop Y is usable almost instantly". The only reason I mentioned it (I otherwise try and stay out of religiously tinted discussions) was that there was discussion about how to do it but no mention of what the important externally visible (if you don't like "end-user") goals should be.

On Thu, Aug 16, 2012 at 2:32 PM, Kurt H Maier khm-suckl...@intma.in wrote:
> On Thu, Aug 16, 2012 at 12:00:03PM +0100, David Tweed wrote:
> > I'll just note that, regardless of code quality, etc, there's the question of what the end-user usability goals for an init system should be.
>
> No. An end user should not even be aware init exists. The people an init system has to impress are systems administrators.

--
cheers, dave tweed__
high-performance computing and machine vision expert: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] a suckless init system?
On Thu, Aug 16, 2012 at 3:19 PM, Kurt H Maier khm-suckl...@intma.in wrote:
> On Thu, Aug 16, 2012 at 02:39:43PM +0100, David Tweed wrote:
> > Well, yes-and-no. The end user (who in the case of many linux desktops and laptops is also the sysadmin) may not be aware of how things are structured under the hood, but they can perceive "laptop X spends a lot of time doing stuff when I turn it on, while laptop Y is usable almost instantly". The only reason I mentioned it (I otherwise try and stay out of religiously tinted discussions) was that there was discussion about how to do it but no mention of what the important externally visible (if you don't like "end-user") goals should be.
>
> For init systems, speed is a natural consequence of correct design. Only an incompetent would have to explicitly list it as a design goal.

Maybe I have no claim to real competence..., but I always tend to find that if the software design goals aren't pretty concrete, listing even the obvious things, then either (a) someone else will consider something I find obvious to be a "why do you want that?" or (b) someone else's obvious is my "why on earth would you want that?". Anyway, here's a comment that I remembered reading the first time round: http://lwn.net/Articles/300955/

Note that the point isn't whether fast boot is an important enough goal to have an impact on other trade-offs (I think it is, others may think it's less important than simplicity), as much as that it's something where it's better to come to an explicit design goal decision.
Re: [dev] wmii falling out of favor
On Mon, Jan 2, 2012 at 7:02 AM, Patrick Haller 201009-suckl...@haller.ws wrote:
> On 2012-01-01 21:13, Suraj N. Kurapati wrote:
> > So I considered the trade-offs between SLOC minimalism, project and community activity, and my productivity in DWM vs. WMII and finally decided to switch back to WMII (which I used since six years prior).
>
> How often do people re-evaluate their toolsets? With my shell, I can examine shell history and do stuff like:
>
>     cd() {
>         dir=$1
>         test -f $1 && dir=`dirname $1`
>         builtin cd $dir
>         ls | sed 10q | fmt -w $COLUMNS
>     }
>
> With X11, do we screencast a day's work and watch it in fast-forward?

That's related to one of the reasons I tend to prefer doing stuff on the command line: we know how to record textual operations and search them relatively efficiently. On my machine each terminal's history file is given a unique name, and each command (command, not output) is stored as a (time, current directory, command) tuple in the file, and the files are stored forever (minus a couple of simple space savers like not storing incredibly frequent commands like pwd, df, ls, etc). Then months later I can often figure out something that I did from a vague memory (eg, "I'm sure I had to hack a symlink to a library to make something work a couple of months ago; which ln -s commands did I issue around the time my cwd was last trialProgSource?"). I don't do it often, but occasionally it comes up and saves me an hour or two of investigation. I'm not aware of any way of either storing or, more importantly, searching a user's interaction with the GUI apps on a computer system.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
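For anyone wanting to replicate that logging scheme, here's a minimal sketch, assuming bash. The file layout, tab-separated format and skip-list are illustrative choices, not necessarily the exact setup described above:

    # Log every interactive command as "epoch<TAB>cwd<TAB>command",
    # one log file per terminal (keyed by start time + shell PID).
    HISTLOG=~/.histlogs/$(date +%Y%m%d-%H%M%S).$$
    mkdir -p ~/.histlogs

    _loghist() {
        local cmd
        # take the most recent history entry, stripping its number
        cmd=$(HISTTIMEFORMAT= history 1 | sed 's/^ *[0-9]* *//')
        # don't re-log on an empty prompt line
        [ "$cmd" = "$_lastcmd" ] && return
        _lastcmd=$cmd
        case "$cmd" in
            pwd|df|ls) return ;;   # skip incredibly frequent commands
        esac
        printf '%s\t%s\t%s\n' "$(date +%s)" "$PWD" "$cmd" >> "$HISTLOG"
    }
    PROMPT_COMMAND=_loghist

Months later, the "which ln -s did I run around the time my cwd was that source tree?" query is just a grep:

    grep trialProgSource ~/.histlogs/* | grep 'ln -s'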
Re: [dev] what's your opinion on Go
On Tue, Dec 13, 2011 at 11:27 AM, Connor Lane Smith c...@lubutu.com wrote:
> Hey,
>
> 2011/12/13 Hadrian Węgrzynowski hadr...@hawski.com:
> > C is the king and Go is the prince. Go needs to be more stable/mature, then it will be the king.
>
> Maybe I'm biased, but I think the future is all about functional programming. C has its benefits by being very very low-level, but if you're going to include a whole bunch of new features, like GC and CSP, imo you may as well just go the whole hog and mix in some beautiful features like functional purity and type inference. I would think a functional language designed around efficiency could gain a lot from supercompilation, and would be easier to write correct programs in, too. (Like I say, I may be biased: my undergrad dissertation is on highly-optimised second-order reduction systems.)

[Standard preamble: different people write programs to do different kinds of tasks, and it's easy to fall into the trap of thinking the kinds of problems you tackle and trade-offs that make sense for you are the same ones everyone else has.]

Every month or so I look at the current state of Go, and my view at the moment is that it falls between the two stools that matter to me: it auto-manages certain things with the assumption that if you need much greater control/performance you'll actually write complete functions, including prologue/epilogue, by hand in assembly. On the other hand, it doesn't provide a lot of the things that I'd like in a higher-level language for tasks where the biggest difficulty is the problem complexity rather than getting the highest possible performance. (I suspect one of the responses from the Go designers would be that they've tackled writing performance code with their goroutines: just run the task on a bigger cluster of machines. Which is fine if you're at Google, maybe a less clear choice for everyone else...)

In some ways Scala looks like an interesting design for a new mix of styles of program structuring, while Go looks close to Python with full native compilation now (PyPy still not feature complete AIUI). But those are the issues that make sense for my types of usage. It's a shame Fortress appears to have terminally stalled without getting beyond the incomplete interpreter stage...

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] wmii + ruby 1.9.3 = no power woes!
On Tue, Nov 8, 2011 at 1:46 PM, Connor Lane Smith c...@lubutu.com wrote:
> On 08/11/2011, Suraj N. Kurapati sun...@gmail.com wrote:
> > I thought Suckless folks were enthusiastic about Plan9 technologies; has this changed? If so, why?
>
> Appreciative, not necessarily enthusiastic. Plan 9 technologies have their place, but it's very tempting to use them everywhere. I don't believe 9P belongs in an X window manager.

I know I subscribe to something slightly different to the current suckless viewpoint on software, but FWIW... My view is that the Plan 9 technologies are attractive if and only if they're used everywhere: if a pseudo-filesystem interface were pervasive, it would avoid the "learn another new language/technology/set of tricks/etc for this task" problem, and issues like combining responsiveness with low idle CPU usage would probably be solved. In contrast, there's little incentive to dig into 9P, and even less into IXP, precisely because so little stuff uses them.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] [dwm] sloppy focus
As my vote, I prefer to keep sloppy focus, at the very least as an option (the fact that sloppy focus doesn't seem to work properly on Windows means I'm forced back to click-to-focus at work, and the sheer volume of unnecessary clicking is driving me mad). Incidentally, I use the mouse a LOT with my PC; it's just that tiling window management is one area that doesn't benefit from mouse interaction (partly because it's already reduced the user input needed down to the bare minimum).

On Mon, Jul 4, 2011 at 2:10 AM, Connor Lane Smith c...@lubutu.com wrote:
> Hey,
>
> I don't know about all you, but I find dwm's sloppy focus can be really annoying at times -- focusing a window when I accidentally nudge my atrophying pointer -- and would rather click-to-focus. The great thing about dropping dwm's sloppy focus is it saves 20 lines of code! So how about we make dwm less mousy and a bit simpler, too?
>
> Thanks, cls

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] [dwm] sloppy focus
On Mon, Jul 4, 2011 at 4:14 PM, Connor Lane Smith c...@lubutu.com wrote:
> Interesting, those on IRC were very 'for' this idea. Different demographics? Oh you silly ML people!
>
> On 4 July 2011 06:51, garbeam garb...@gmail.com wrote:
> > No I totally disagree. Click to focus makes life unnecessarily harder. Doing this just for the rare corner case of touching your pointing device by accident doesn't sound like a very sound reason for it.
>
> Well, the thing is, I don't ever use the mouse for window management, but I sometimes move the mouse out of the way and in doing so accidentally focus a completely different window. I personally would rather dwm had no mouse support at all, but clearly that would be controversial... Still, at least with click-to-focus the mouse is completely dormant until you intentionally click something.

I guess psychologically for me it's the other way around: I can't remember the last time I nudged the mouse accidentally, so I tend to think "if I've moved the mouse over a window, clearly it's because I want to interact with the window, so why should I have to redundantly tell the computer that with an extra click?". (It may tangentially be relevant that I'm right-handed but use the mouse with my left hand, so when eg I write stuff on paper or pick up a drink it's on the other side of the desk to the mouse, so the risk of accidental nudging is incredibly low.)

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] Experimental editor
On Fri, Jun 17, 2011 at 9:46 AM, Nicolai Waniek roc...@rochus.net wrote:
> On 06/17/2011 10:37 AM, markus schnalke wrote:
> > For the same reason we want Unix's manifold toolchain and for the same reason we want several programming languages: Because ``One fits all'' is an illusion.
>
> Then try to figure out some basic tools that you can glue together to form a fully functional editor. 'Reinventing' an editor for every purpose and most probably copying some things on source level from one editor to the next is ridiculous.

Even more annoying is the way that the lack of an OS-level editor component means there's a tendency for any application that wants to provide a writing/editing capability to write its own, often poor, editing code. I entirely agree that "one interface fits all users" is a problem, but I'd like a system where there was one interface for editing in all circumstances for this user.

cheers, dave tweed
Re: [dev] Experimental editor
On Fri, Jun 17, 2011 at 9:51 AM, David Tweed david.tw...@gmail.com wrote:
> On Fri, Jun 17, 2011 at 9:46 AM, Nicolai Waniek roc...@rochus.net wrote:
> > On 06/17/2011 10:37 AM, markus schnalke wrote:
> > > For the same reason we want Unix's manifold toolchain and for the same reason we want several programming languages: Because ``One fits all'' is an illusion.
> >
> > Then try to figure out some basic tools that you can glue together to form a fully functional editor. 'Reinventing' an editor for every purpose and most probably copying some things on source level from one editor to the next is ridiculous.
>
> Even more annoying is the way that the lack of an OS-level editor component means there's a tendency for any application that wants to provide a writing/editing capability to write its own, often poor, editing code. I entirely agree that "one interface fits all users" is a problem, but I'd like a system where there was one interface for editing in all circumstances for this user.

To clarify: by "OS-level component" I mean at the "this is THE component applications use when they want editing" level, but which would be changeable by the user. (If you've seen things like the historical Oberon OS, that kind of thing.)

cheers, dave tweed

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] Experimental editor
On Wed, Jun 15, 2011 at 8:53 PM, Peter John Hartman peterjohnhart...@gmail.com wrote:
> > > A simple editor probably shouldn't have any more keybindings than, say, surf; in fact one or two less: page up/down, up/right/left/down, and find. One doesn't need modes for that. If you want to do something wacked out to your file (like go to the third word on the 4th sentence and delete every vowel), that should probably be done *outside* the editor.
> >
> > I've got a long comment queued up (restricted internet situation at work), but just to respond to the comment about moving stuff outside the editor. One big disadvantage of doing everything by hand is that such stuff isn't in an undo history that you can execute. I tend to use undo a certain amount, mostly immediately (which could be simulated by just keeping one copy) but a reasonable amount undoing several sets of changes.
>
> I can't wait.
>
> As to revision control, just use software made to handle revision control. The editor doesn't need to do this.

I'm going to assume that what you mean by "The editor doesn't need to do this" is "the computer user doesn't benefit from having undo in the editor rather than in a version control system"; I'm much less interested in what _needs_ to be done than in what is most _beneficial_ to do. (It's probably obvious from other posts that I don't subscribe to the common interpretation of the suckless philosophy. IMO, it's unfortunate that both GNOME/KDE/whatever and suckless developers seem more interested in doing stuff for reasons based on the underlying code -- it's cool to do that, it looks good in demos, it's minimal to do that, etc -- rather than from a consideration of what's most beneficial for the user.) I actually use relatively fine-grained version control as an additional workflow measure (as-I-develop bisection, etc), but there are some things for which firing up an existing VC and attempting to revert a minor change is overkill.

The comment I'm talking about isn't specifically about your email but a general post on the thread; ironically the reason the post is queued up is because I chose to write it in an effective editor rather than this crummy gmail edit box, and I can't send arbitrary files through this work machine.

cheers, dave tweed
Re: [dev] Experimental editor
On Thu, Jun 16, 2011 at 1:49 PM, Kurt H Maier karmaf...@gmail.com wrote:
> On Thu, Jun 16, 2011 at 4:15 AM, David Tweed david.tw...@gmail.com wrote:
> > I'm going to assume that what you mean by "The editor doesn't need to do this" is "the computer user doesn't benefit from having undo in the editor rather than in a version control system";
>
> invalid assumption. what he meant was 'the EDITOR doesn't need to do this; some other piece of software can do it FOR the editor'

The various subtleties in what he could have meant were precisely what I was trying to get a clarification of. (It seemed silly to wait for a round-trip delay before proceeding with the conversation.) It's not clear what "use software made to handle revision control" is meant to suggest: is it "you could implement something at the resolution of a typical editor undo buffer (individual character insertions/deletions)" or is it "you shouldn't need any finer resolution than you can achieve with a current revision control system"?

Incidentally, to be clear, I'm looking at things at the user-experience level: I care hardly at all about the difference between an editor which has an accessible-within-the-editor built-in fine-grained change buffer and one that achieves an accessible-within-the-editor fine-grained change buffer via an external program. But for me there's a huge difference between "to undo something on the current buffer, repeatedly invoke the undo command in the editor" and "open a terminal window, bring up a graphical rev control diff viewer, find the revision id corresponding to the desired undo point, rewind and check out that revision, go back to the editor and reload the file".

cheers, dave tweed
Re: [dev] Experimental editor
On Wed, Jun 15, 2011 at 1:12 PM, Peter John Hartman peterjohnhart...@gmail.com wrote:
> A simple editor probably shouldn't have any more keybindings than, say, surf; in fact one or two less: page up/down, up/right/left/down, and find. One doesn't need modes for that. If you want to do something wacked out to your file (like go to the third word on the 4th sentence and delete every vowel), that should probably be done *outside* the editor.

I've got a long comment queued up (restricted internet situation at work), but just to respond to the comment about moving stuff outside the editor: one big disadvantage of doing everything by hand is that such stuff isn't in an undo history that you can execute. I tend to use undo a certain amount, mostly immediately (which could be simulated by just keeping one copy) but a reasonable amount undoing several sets of changes.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] 2surf, an experiment in tiled browsing
On Sun, Jun 12, 2011 at 1:38 AM, Bjartur Thorlacius svartma...@gmail.com wrote:
> On 6/11/11, Peter John Hartman peterjohnhart...@gmail.com wrote:
> > Why not just utilize dwm's tile mode and have each link open in a new window?
>
> Presumably so you don't have to close a window after every article you examine, and resize the search results window. If you're going to resize the parent window every time before you use it, you could just as well hide it. A two-pane setup however mandates the common workflow of skimming the results, selecting an entry for examination, examining the linked article, and then either reading the article or returning to the results page for continued skimming. Opening a new window for every activated link would allow for the alternative workflow of continuous skimming and then reading articles one after another. But resizing a pager's window will

Note that whatever the implementation mechanism, this kind of streamlined workflow would also be very useful for reading hyperlinked tutorial documents/reference manuals (as I'm doing now). Quite often some section will refer to something that is explained somewhere else in the manual (eg, file handling might have limits due to index sizes, which might link to a discussion about integer type sizes in general). I agree that it would be nice to get "you don't have to explicitly close this window" behaviour, whether in the browser only or in combination with the tiling WM.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] TermKit
On Fri, May 20, 2011 at 10:06 AM, Nick suckless-...@njw.me.uk wrote:
> On Fri, May 20, 2011 at 12:23:39AM +0200, hiro wrote:
> > https://github.com/unconed/TermKit
> > no comment, only sorry.
>
> indeed. i read about it yesterday. makes me want to vomit.

Certainly the general implementation, the language and the architecture do seem nasty. OTOH, it always depresses me that it's kind of taken as a virtue that the interactive shell and the terminal know almost nothing about each other (at least for almost all modern computing devices; I can see that if you genuinely are using a 1970s dumb terminal you don't have the wiggle room for more). At the very least, it would be very productive to:

1. Have a terminal/shell combination that, upon resizing, actually redisplays text properly rather than just chopping off stuff in vanished/newly visible space.

2. Have the _option_ for shell history to pop up in another window, rather than _only_ being available as command output that scrolls other stuff you've been doing off the screen (see the sketch after this message).

3. Have more flexible context-sensitive cutting support. (Eg, let me somehow copy the text of a sequence of commands without including either the prompt or command output.)

Obviously it's not clear what the best way is to provide greater information flow between interactive shells and terminals, and it may or may not be that the Plan 9 or emacs-shell approach is the way to go, but it'd be nice if there was some increase in terminal productivity in the coming years. (Of course, the other problem is it's a large number of shells and terminals to change, and if additional metadata needs adding to commands that's a huge number of programs to change, so it's unlikely to happen...)

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
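On point (2), dmenu (suckless' own tool) can approximate this today by popping the history up in an overlay window instead of printing it into the terminal. A rough sketch, assuming bash 4's bind -x and a running X session; the key choice and flags are illustrative:

    # Ctrl-R: pick a history line in a dmenu overlay and put it on the
    # command line for editing, instead of scrolling the terminal.
    bind -x '"\C-r": READLINE_LINE=$(fc -ln 1 | sed "s/^[[:space:]]*//" | dmenu -l 20); READLINE_POINT=${#READLINE_LINE}'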
Re: [dev] TermKit
On Fri, May 20, 2011 at 1:26 PM, Connor Lane Smith c...@lubutu.com wrote:
> fwiw, I agree. TermKit appears to be a very glossy turd, but there are certainly outstanding issues in our terminals, which is why in Plan 9 they tried to fix them by pairing a plaintext-only Rio term with graphical windows which actually replace the spawning term. It's not perfect, but the plaintext / graphics dichotomy does make things simpler in some ways.

That's an interesting proposal. The areas for improvement I tend to see revolve more around text and the information loss going between the interactive shell and the terminal (so that this information is only accessible via typing shell commands). Eg, the shell knows which bits are the input, output and prompt, but by the time the terminal has displayed it, it's all one big block of text.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
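One concrete encoding of that missing information channel is to have the shell bracket each region with escape sequences the terminal can parse. The sketch below borrows the FinalTerm-style "semantic prompt" marks (OSC 133) purely as an illustration of the shape of a solution -- the terminals under discussion here don't interpret them -- and assumes bash:

    # A = start of prompt, B = start of user input, C = start of output.
    # A terminal that parses these marks could offer "copy commands
    # without prompts/output", "jump to previous prompt", etc.
    PS1='\[\e]133;A\a\]\u:\w\$ \[\e]133;B\a\]'
    trap 'printf "\e]133;C\a"' DEBUG   # crude: fires before each simple command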
Re: [dev][st] Approach to adding -bg colour option to st
On Tue, Apr 26, 2011 at 7:14 AM, Ethan Grammatikidis eeke...@fastmail.fm wrote:
> On 25 Apr 2011, at 11:03 am, David Tweed wrote:
> > (As background, I currently use a hacked aterm which changes the background colour according to the cwd.
>
> Sounds like you want to hack that code into st rather than add options, no? I think that's a curious and interesting feature in its own right.

Well, I suspect that that feature would be controversial (maybe it doesn't count as a suckless feature) for the main distribution. So I was thinking about making the changes so that colours are standard variables rather than #defines as part of an options patch. That would both be a start for me to see if I like st in general, and then make it easier to have a patch manipulating them for my personal st.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev][st] Approach to adding -bg colour option to st
On Mon, Apr 25, 2011 at 10:34 AM, Jan k...@unstable.nl wrote:
> > It is also worth considering whether to handle this via X Resources (the ~/.Xdefaults file) instead of command line options - since a terminal is not normally something you are going to start by invoking directly, but rather through a shortcut in your window manager.
>
> The only reason I see for having the bg-color changed at runtime is that I have special cases in which I want a red or green or so bg terminal. If I just want to set the default bg, it should imho be done in config.h.

Yes, the X resources route doesn't work for me, since I have a palette of 8 acceptable and different background colours and different terminals acquire a different colour depending on what they're doing. For my personal usage I'd prefer not to just overwrite dc.col[DefaultBG], since in the long term I'd like to be able to change the background colour of a running terminal, and if I'm reading the code correctly that's the only place, eg, "black" is stored. (As background, I currently use a hacked aterm which changes the background colour according to the cwd. This works for changing the colours, but the codebase and aterm's behaviour on resize are poor, so long term I'd like to move to a better terminal codebase where I can hack in my colour functionality.)

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
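For terminals that implement xterm's "dynamic colors" controls, the cwd-based recolouring can even be done from the shell with no terminal patching: OSC 11 changes the background of the running terminal. A sketch, assuming bash or zsh and an OSC 11-capable terminal; the palette and the cksum-based hashing are illustrative stand-ins for the scheme described above:

    # Re-colour the terminal background on every cd, picking one of 8
    # dark colours keyed off a hash of $PWD.
    cd() {
        builtin cd "$@" || return
        set -- '#101018' '#181010' '#101810' '#181808' '#101818' '#180818' '#0c0c0c' '#181818'
        shift "$(( $(printf %s "$PWD" | cksum | cut -d' ' -f1) % 8 ))"
        printf '\033]11;%s\007' "$1"
    }

The nice property of keying off a hash rather than a table is that any directory, even one never visited before, gets a stable colour.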
[dev][st] Approach to adding -bg colour option to st
Hi, I'm considering experimenting with st, but I'm incredibly habituated to having my terminal windows all with slightly different coloured backgrounds (so I can semi-subconsciously keep track of where the ones in various directories are). The obvious minimal change would be to convert DefaultBG and DefaultFG from #defines into full C variables with defaults, and add option parsing code to set them according to -fg and -bg. (The simplest thing would be to make the argument an array index rather than parsing a colour spec.) Before I start working on a patch, does that sound like the appropriate way to do this, or is there a different way of handling colour that's being envisaged?

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
[dev] fast-booting to text editor
Hi, one of those general suckless software questions: I'm in a position where I'll be both commuting a lot and needing to write a lot of text (review comments) over the coming months. I've got a spare old but very small, low-weight notebook PC I plan to try and use. The only requirements I have are that there be a decent text editor, a filesystem that can hold several files, and the ability to move files onto/off-of my permanent full-capacity PC. (I'd actually prefer not to have any other facilities.) The obvious thing to do would be to install a standard linux distro, try and remove as many unneeded services as possible, and then just keep hibernating it. However, my experience is that linux is not particularly snappy booting from a hibernate image, partly because there are so many programs that want to be paged back in and partly because it still needs to slowly start up any hardware it can find on the machine. I'm just wondering if anyone has any ideas/experience of a more minimal solution, or if I should just go with the original plan.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] fast-booting to text editor
On Sun, Mar 20, 2011 at 6:46 PM, Peter John Hartman peterjohnhart...@gmail.com wrote:
> On Sun, Mar 20, 2011 at 06:21:54PM +0000, David Tweed wrote:
> > it. However, my experience is that linux is not particularly snappy booting from a hibernate image, partly because there are so many programs that want to be paged back in and partly because it still needs to slowly start up any hardware it can find on the machine.
>
> booting from hibernate is perfectly fast nowadays.

Just tried an experiment using an ubuntu distribution that I HAVEN'T trimmed down, on the target hardware. To boot to the standard desktop using dwm as wm took 50s. Without starting any other programs and immediately hibernating, a restart from the hibernate image takes 37s to get to the password unlock screen. (I can get rid of the need for a password, but I don't imagine that particular program contributed much to the restart time.)

Many thanks for the suggestion about mincore, Yoshi: I'd heard about Damn Small Linux, but not mincore, which seems like a good option to experiment with.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] [OT] What's wrong with C++?
On Sat, Sep 11, 2010 at 10:40 AM, Paolo lordkran...@gmail.com wrote:
> > Why program in C++ when you can do it in C, making the program simpler and better?
>
> When you can't make the program simpler and better, or you need to do it faster than you can in C, just write C++ or whatever. This is just the place where people write about C, little overheads and simpler programs.

The point I was making was just that there are SOME problem domains where the features C provides fit what's needed and the C++ features aren't useful, in which case C will be simpler and better. And it's great to use it in those cases. But there are SOME other problem domains where some of the features C++ provides that aren't in C are incredibly useful in writing really clean, maintainable and more efficient code, and as such I don't think that a blanket statement "Why program in C++ when you can do it in C, making the program simpler and better?" is accurate. "Choose the language that's best for the particular problem you're solving at the time" is all I was saying.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] [OT] What's wrong with C++?
On Fri, Sep 10, 2010 at 7:19 PM, Paolo lordkran...@gmail.com wrote:
> Why program in C++ when you can do it in C, making the program simpler and better?

One of my maxims is that everyone mistakenly thinks that the kind of programs they write are the kind of programs everyone writes. There are some domains in which C is simpler; there are some domains in which C++ yields simpler programs, assuming you account carefully for all the complexity caused by macros and conventions which have to be ensured by the programmer. (Incidentally, I think the "I only use 10 per cent of the language, so it must be bloated" people are people who don't realise not everyone writes the kind of programs they do, and would presumably also object to a natural language which is big enough to be usable by both poets and lawyers. Now, complaints about the bad interactions between C++ features are more justified...)

My opinion on C++ is, basically: every major feature in C++ is in response to a real difficulty in programming that's worth attempting to ameliorate, but the solutions chosen by C++ are often suboptimal, and very often interact badly with other features. I'm also of the opinion that many of the worst elements of C++ are due to the design requirement that, in essence, a block of code which is valid C must have the same semantics as in C (and to some extent the desire to keep using object file/linker formats based in the '70s); that strongly constrains some important basics to annoying things. My biggest concern about the latest evolving standard, C++0x, is that it attempts to cram even more functionality into a design space that's already highly constrained by both C compatibility and existing C++ compatibility. Of course, Stroustrup argues that C++ wouldn't have become popular had it not constantly been presented as incremental evolution.

A lot of my work is writing numerical code that is quite performance critical. As such, I find it almost invaluable to be able to write a template function so that one source base can work on int8_t's, ..., floats, doubles, complex<float>, etc, with proper typechecking, rather than in C with kludges using macros that render debugging a nightmare. That combined with C++'s namespaces (which, whilst not a proper module system, are a godsend if you need to QUICKLY create a program which uses two existing libraries that happen to use the same name) is enough to mean that, FOR MY KIND OF PROGRAMMING, I'd rather use MY subset of C++ that doesn't have bad interactions than have to write in C doing lots of C++ stuff by hand. But I expect some people working on other kinds of problems have their own subset of C++ that they use, and some people working on other kinds of programming are best served by C.

So for me, C++ is basically a good idea with a botched implementation, and I think it's a bit of a shame about the alternatives: Java has semantics designed for a managed interpreter, D still appears to be primarily supported by a few developers, Go does not have any interest in efficient numerical computation, BitC appears to have only one developer, etc. To be honest, if it wasn't so heavily based on an ecosystem, and possibly legal issues, that are controlled by Microsoft, I might have tried moving to C#.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] [OT] Music?
On Wed, Sep 8, 2010 at 10:35 PM, Joel Davila 6336...@gmail.com wrote:
> On 8 September 2010 15:12, Nikhilesh S s.nikhil...@gmail.com wrote:
> > What kind of music do you listen to? Your favourite artists, genres, etc.?
>
> Interesting. Suckless music may be classics such as Beethoven, Brahms, Chopin, Ravel, Tchaikovsky...

I thought the only music that would count as suckless is John Cage's 4′33″. It's the only piece where there's just no bloat. Even that Nokia classic ringtone has 13 notes that unnecessarily inflate the NOM (notes of music) metric. [I'm sorry, this is cruel caricaturing but I couldn't resist.]

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] A language similar to Markdown that sucks less
On Sun, Aug 22, 2010 at 8:20 PM, Joseph Xu joseph...@gmail.com wrote:
> On 8/22/2010 12:47 PM, David J Patrick wrote:
> > On 10-08-22 12:37 PM, Alexander Teinum wrote:
> > > What doesn’t work well for me, is that I cannot easily extend Markdown. The design that I propose is simpler and more strict. All tags work the same way. The input is close to a data structure, and it doesn’t need complex parsing. The drawback is that tables and lists need more characters:
> >
> > pandoc extends markdown and has some table support,
> > djp
>
> The problem with all these Markdown extensions is that they come as packages with monolithic parsers, so if you like the pandoc syntax for one kind of entity but the PHP markdown extension's syntax for another, you're screwed. This is a problem with latex as well: all syntax for complex structures like tables must derive from the base tex syntax. Hence the source code for tables looks ridiculous in latex. The troff idea of multiple passes with different parsers translating each type of entity into a base representation solves this problem nicely and should be emulated more. I wonder why troff never really caught on in the academic community.

There's the obvious point that, being a mathematician, Knuth really understands how mathematicians think, and both TeX's basic mathematics notation and the quality of its output are noticeably better than eqn's. There are two slightly more minor reasons:

1. Knuth went to incredible pains to ensure that the output file from a given .tex file is absolutely identical regardless of the machine the program was run on (and has shouted loudly at anybody making changes which break this). Given that academic papers can remain relevant for at least 50 years, and that citations in other papers are occasionally very specific ("the 1st paragraph on page 4"), that may have been an important point.

2. Knuth really, really, Really, no REALLY, cares about his programs not misbehaving in the case of user errors (unlike some luminaries in the computing field). The work he did, basically trying incredibly convoluted language abuse to break TeX, means that it's almost unencounterably rare to have it silently produce corrupt output files or segfault. Admittedly part of this may be from him primarily working in an era when submitting jobs for batch-mode processing was one common way of doing things, so that you want useful logs at the end rather than relying on the user interactively spotting that something screwy is happening. Again, back in 1982 this attitude may have been relatively important.

(I've got to admit it's probably reading his amazing paper on the TRIP test for TeX that fired up my desire not to silently output corrupt files, or fail mysteriously when given corrupted/erroneous input, and above all to consider how you can diagnose errors in your program as just as important as considering normal processing during the design stage.)

Of course, it's possible that the fact TeX took off whilst ROFF descendants never did is purely historical accident.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] Suckless design in Games
On Wed, Aug 11, 2010 at 1:05 PM, Szabolcs Nagy n...@port70.net wrote:
> i experimented with various types of genericity for algorithms
>
> here is one approach: implement the algorithm in the simplest way for the most useful types etc. then when you need it for a specific task then copy the code and apply appropriate modifications in the source (often s/type1/type2/g)
>
> it turns out that with this approach it's often easier to change a component without screwing things up than by understanding all the gory details of a generic api like boost
>
> adapting an algorithm for your usecase usually takes less than an hour and the result is readable customized code, easy to debug, easy to edit

I spent a while working for a company that used that philosophy (albeit in Matlab). As someone who didn't write the original algorithms (so I didn't have the slowly accumulated knowledge from having been the one who wrote them), it was a complete nightmare, because there were various implementations of a basic algorithm, each with a different subset of the bugs that had ever existed during the algorithm's development, depending on when and from whom they'd been branched. Worse, once this had been done for a while, the using code sometimes grew to depend on some of the bugs (rather than biting the bullet, consolidating the implementations and dealing with the fallout), so you had to basically keep in mind what peculiarities this particular implementation of the algorithm had -- always from memory, because even in this case the original code writers frown on you putting comments about what bugs their code has. And of course, because of "if you can't see that the code has broken, don't poke it", when a bug was fixed in one implementation the other implementations weren't checked whilst memories were fresh. Unified implementations would have required more up-front work, but would I think have been less work overall. (The people who had actually written the rat's maze of code disagreed; it's difficult to see how much of that was because it was their baby.)

All of your points have some validity, but given the choice between one algorithm implementation with a complex BUT EXPLICIT API and multiple implementations of "some algorithm that does something along these lines" with simple APIs, personally I'd take the one implementation.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] Suckless design in Games
On Wed, Aug 11, 2010 at 1:09 AM, Alex Hutton highspeed...@gmail.com wrote:
> An idea I had the other day, and this is for dealing with data compartmentation in games, was to write a game in C and use sqlite for all the data. I've never used sqlite so I don't know how the performance would go, but it seems like a good idea to store all the data in a relational database as it makes it less likely that the data structures would have to be refactored during development and it allows me to avoid having to use 'objects'.

I don't know what granularity of data you're thinking of storing in the database. If it's very fine-grained data then I'd expect you're going to hit concurrent-writing lock-scaling problems (see, eg, http://www.sqlite.org/faq.html#q5).

Part of the reason why systems like git or hg don't use a standard database for their backends is that a general purpose database generally has limited means for expressing which subsets of the data are independent, and hence what the minimal locking is for writers (and for reading whilst something may be writing), whereas, eg, the git backend knows, due to the data structure and code design, that it can do _almost_ everything without locking. (That's part of why it scales to things like Ingo Molnar's automated "merge every single patch in every single remotely relevant patch queue into one mammoth kernel and try and boot it" regression testing.)

One of the big advantages of conceptual objects (whether or not they're actual language objects) is that since they can have different implementations, they can have different access semantics. As I mentioned in a different thread, if you look at a lot of praised old code it's perfectly well designed for the realities of computer architecture at the time it was written, but nowadays a lot of it is either middlingly effectively written or sometimes a "PLEASE, PLEASE don't write code that way" example. Just because new innovations aren't a magic bullet and are often overhyped doesn't mean they're worthless. Personally, I think it's a shame that there's no widespread language that has compiler/run-time support for immutable-once-set-up variables in this multicore age. (You can have a design in which a variable is treated as immutable in C, but there's no inherent support, eg, for detecting violations or cloning copies for cores with a different cache, etc.)

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] wrap: minimalist archiving tool
On Mon, Aug 9, 2010 at 12:48 PM, Connor Lane Smith c...@lubutu.com wrote:
> On 9 August 2010 04:54, David Tweed david.tw...@gmail.com wrote:
> > The one thing that leaps out at me is that there's no checksumming of either the individual files or the whole archive file performed, so if you give it a damaged archive you won't be able to tell or isolate the damaged files.
>
> I figure the archive doesn't need to be able to checksum. Many compression formats and transmission protocols already checksum - reading a gzipped tarball from the web can result in up to four checksums. So if you're worried about integrity, just compress. If you're forced to use raw wraps and you're worried about storage, not transmission, you can always include checksums as Szabolcs said.

I think it depends on the use case. I was thinking about actually archiving data, in which case, if something has gone wrong on your storage medium, you want to be able to recover as much of the data as you can, particularly the files which are undamaged. (It looks like once the extractor desynchronises you may not get the remaining files even if that part of the file is uncorrupted.) But there are more sophisticated archiving solutions, so that's probably not a use case worth worrying about.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
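The "include checksums yourself" option is cheap in any case: a per-file manifest stored alongside (or inside) the archive is enough to at least identify which files survived. A sketch with coreutils (wrap's own invocation is omitted, since its command-line interface isn't shown here):

    # before archiving: record a digest per file
    find . -type f -exec sha1sum {} + > MANIFEST

    # after extraction: list exactly which files are damaged or missing
    sha1sum -c MANIFEST | grep -v ': OK$'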
Re: [dev] The mysterious 31
On Wed, Aug 4, 2010 at 2:22 PM, John Yates j...@yates-sheets.org wrote:
> Here are two useful references:
>
> http://bretm.home.comcast.net/hash/
> http://burtleburtle.net/bob/hash/
>
> RE computation cost: Generalized integer multiplication and integer division both used to be expensive. On modern processors a multiplication is fully pipelined and comparable to an L1 data cache hit (~2 to 4 cycles). Most modern implementations of division run the divide circuit at a multiple greater than 1 of the cpu clock (typically 2x) and implement early out. Further, division is nearly always an autonomous, non-pipelined logic block that once initiated runs asynchronously until it is ready to deliver its result. Bottom line: on any contemporary out-of-order x86 chip from either Intel or AMD the cost of either of these instructions is negligible.

I'm not a great expert on microarchitecture, but isn't the fact that the implied division is the last explicit operation in a function, with nothing much after it to overlap with, going to serialise things though? (The fact that I'm currently developing for Intel Atoms and ARM chips, both in-order chips, probably does skew my ideas on performance though. An instruction reference claims that its latency is 50 instructions on an Atom, which is why I try and avoid unnecessary divisions.)

> bucket selector. Nearly universally this is a modulo operation taking the remainder of the digest divided by the number of buckets. A particularly poor approach is to have a power of 2 number of buckets so as to reduce the modulo operation to simple masking. The reason is that this discards all entropy in all masked-out bits. A _much_ better approach is to pick the number of buckets to be a prime number -- the remainder of division by a prime number will be influenced by _every_ bit in the digest. (When I need to implement a dynamic hash table I always include a statically initialized array of primes chosen to give me the growth pattern I desire.)

If your hash function has appropriate avalanching and funneling, so that you believe it is a good approximation to uniform, then there's no reason to believe that the entropy obtained by using 2^n buckets should be any better or worse than reducing modulo a prime. (Certainly if you look at most of the hash tables in Linux they're powers of 2, but I don't know if that means much.) That said, I haven't actually timed a full hash-table implementation to see if an integer division operation shows up in an optimised program using hash tables. My bigger point was just that lots of the code snippets in K&R were written for a different time and are inappropriate in 2010.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] The mysterious 31
On Wed, Aug 4, 2010 at 6:01 PM, David Tweed david.tw...@gmail.com wrote:
> On Wed, Aug 4, 2010 at 2:22 PM, John Yates j...@yates-sheets.org wrote:
>
> ... performance though. An instruction reference claims that its latency is 50 instructions on an Atom, which is why I try and avoid unnecessary divisions.)

Oops: make that 50 cycles.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] The mysterious 31
On Wed, Aug 4, 2010 at 2:53 AM, Jacob Todd jaketodd...@gmail.com wrote:
> In K&R, chapter 6, section 6, there is a function called hash that hashes a string, which will be stored in a linked list. The function in question is on page 144, but here it is for those of you who don't have K&R handy.
>
>     /* hash: form hash value for string s */
>     unsigned hash(char *s)
>     {
>         unsigned hashval;
>
>         for (hashval = 0; *s != '\0'; s++)
>             hashval = *s + 31 * hashval;
>         return hashval % HASHSIZE;
>     }
>
> So what is the purpose of the magic 31? The only thing that I think might be a reference to what it may be for is the paragraph prior, which states "The hashing function, ..., adds each character value in the string to a scrambled combination of the previous ones and returns the remainder modulo the array size." Does the magic 31 have to do with scrambling?

It's also worth remembering that K&R was written at a time many decades ago, when the performance aspects of computer architecture were a lot, lot simpler. Apparently they have #define HASHSIZE 101, which, given that there's no really efficient way of computing % for arbitrary numbers, is going to be quite slow (particularly if the strings are short); this is why hashes for modern machines use table sizes that avoid needing a mod. (There are other things that are slow on modern architectures that modern hash functions avoid.) I'd use K&R for the C syntax and some of the higher-level ideas of programming, but not try to understand good hashing technology from there.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] The mysterious 31
On Wed, Aug 4, 2010 at 5:01 AM, David Tweed david.tw...@gmail.com wrote:
> On Wed, Aug 4, 2010 at 2:53 AM, Jacob Todd jaketodd...@gmail.com wrote:
>
> It's also worth remembering that K&R was written at a time many decades ago, when the performance aspects of computer architecture were a lot, lot simpler. Apparently they have #define HASHSIZE 101, which, given that there's no really efficient way of computing % for arbitrary numbers, is going to be quite slow (particularly if the strings are short); this is why hashes for modern machines use table sizes that avoid needing a mod. (There are other things that are slow on modern architectures that modern hash functions avoid.) I'd use K&R for the C syntax and some of the higher-level ideas of programming, but not try to understand good hashing technology from there.

A minor clarification: looking at the gcc assembly, if you've hardcoded the table size it figures out some magic constants so that it takes around 8 bit-twiddling and subtraction instructions to do the mod operation, which is slow but not that slow. If you pass the table size as a variable, which you would in serious code, you get integer-division-based code, which is going to be quite slow. But the point about K&R being basically a syntax and high-level strategy book these days stands.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
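Anyone who wants to see the difference can do it from the shell, assuming gcc is installed (the function names here are arbitrary):

    # hardcoded size: gcc emits a multiply-and-shift "magic constant" sequence
    echo 'unsigned f(unsigned x){ return x % 101u; }' | gcc -O2 -S -x c - -o -

    # size passed as a variable: gcc has to emit a real division instruction
    echo 'unsigned g(unsigned x, unsigned n){ return x % n; }' | gcc -O2 -S -x c - -o -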
Re: [dev] Interesting post about X11
On Tue, Jun 15, 2010 at 2:56 AM, Will Light visi...@gmail.com wrote:
> but the notion of a browser-based terminal for a local machine just seems ridiculous...and that's a mild example! a browser-based music sequencer or video editor, for example, is so far off that it's just impractical.

Just to provide some context: this probably isn't the fully featured video editor that you were talking about, and it appears not to use NaCl but to do everything remotely, but clearly a web-based video editor exists: http://googlesystem.blogspot.com/2010/06/youtube-video-editor.html

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] Interesting post about X11
On Thu, Jun 17, 2010 at 2:04 AM, Will Light visi...@gmail.com wrote:
> yeah, I'm aware that the stuff exists. just earlier today I was doing quite a bit of fiddling around with the current version of audiotool (http://www.audiotool.com/), and it's pretty cool. the potential is definitely there, but the point I'm trying to make is that these things are constantly playing catch-up. the feature set that these browser-based apps are seeking to duplicate is the sort of stuff that was novel, say...10 years ago, but it's nothing groundbreaking from the standpoint of a professional music producer. perhaps these apps will end up replacing the entry-level stuff like garageband or iMovie, but I think they will be hard-pressed to unseat Cubase, ProTools, or even newcomers like REAPER.

I'd hope that progress will be a little faster once the application core can be implemented in any language that you can compile to obviously safe machine code (maybe even the same application-core codebase used in the standalone product). But I think you're probably going to be right, though for "utility to the expected user" reasons rather than for any technological problem. The interesting thing about browser applications is doing task X for those people who don't do X very much at all. The majority of these people will only benefit from functionality that you can present in easy-to-understand-and-use-immediately ways, in contrast to use-it-every-day people, who will both install the application locally and learn over time how to use subtle functionality. So I'd expect that web applications will acquire those advanced capabilities which are basically automatic, but won't acquire the stuff that requires sophisticated user understanding to use.

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] Interesting post about X11
On Thu, Jun 17, 2010 at 3:32 AM, Kurt H Maier karmaf...@gmail.com wrote:
> On Thu, Jun 17, 2010 at 2:27 AM, David Tweed david.tw...@gmail.com wrote:
> > obviously safe machine code
>
> hahahahahah

Would you care to elaborate on this? The compilation problem is asymmetric: there are going to be lots of code sequences that are in fact innocuous which the verifier can't show to be innocuous, but I don't see any reason why it's not possible to have a compiler that can compile some code that it can show to be innocuous into safe machine code. (Ie, the compiler may refuse to compile a given piece of code, but if it does compile, it's as safe as running interpreted code -- ie, in either case only things like undocumented chip errata could be exploited.)

I'm genuinely interested if there is a flaw in this reasoning, because I spend a lot of time writing numerical SIMD code that you can't access from languages whose semantics mean values need to be boxed. All I'm interested in are machine instructions for SIMD, plus enough scalar operations and known-address conditional jumps to implement marching through arrays of data. The compiler can refuse to compile any code containing any of the other instructions in the instruction set and I won't care. (Clearly I don't even need Turing completeness in the sections of the application that are compiled to native instructions.) Are you saying the NaCl-style verification approach cannot work in such a case?

--
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] Suckless operating system
On Tue, Jun 15, 2010 at 12:19 AM, Matthew Bauer mjbaue...@gmail.com wrote:
> I wish modern filesystems would allow some way of identifying a file type besides in the filename. It seems like that would make things more straightforward.

The other issue is providing a very-easy-to-type equivalent of globbing on filenames in shell/script expressions for whatever mechanism is used (i.e., for things like find . -name '*.(h|cpp|tcc)' | xargs ...).

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] Interesting post about X11
On Tue, Jun 15, 2010 at 2:56 AM, Will Light visi...@gmail.com wrote:
> i only take issue with the notion that web-based applications will somehow replace desktop apps entirely. for some use cases, sure... i mean, if somebody only uses facebook and gmail on their netbook, then yeah, why the hell do they have more than a web browser installed? but the notion of a browser-based terminal for a local machine just seems ridiculous... and that's a mild example! a browser-based music sequencer or video editor, for example, is so far off that it's just impractical.

Well... it's difficult to discern what will take off in the future, either technologically or sociologically. But the key use cases that I think might work on the web are those which are BOTH primarily data-based tasks (rather than getting your computer to do something) AND tasks you don't really do that often. This is because (a) there's a big class of data which, with appropriate security safeguards, you're happy to have on someone else's server. For example, I'm happy to have my home photos stored in the cloud, because in the worst case they're just a bit naff, and having someone else take care of making reliable backups works for me. And (b) for occasional applications I really prefer not to install the application on my PC (if nothing else, from the minimal-attack-surface security viewpoint). Ironically, I can't imagine why anyone would want to use a web-based word processor, because it's something you use so frequently that having a local version seems better for all sorts of reasons; likewise a browser-based terminal doesn't seem to make sense, because if you use one at all you use it all the time. But I can absolutely imagine using a _performant_ web-based music sequencer or video editor, just because I'd only use them once a year at most. (A professional musician would get more benefit from a locally installed app, but for a dilettante like me a _performant_ web-based application would be great.) Time will tell if Google's Native Client technology combined with intelligent caching will make web-based music sequencers and video editors feasible in the near future. I've actually spent a bit of time thinking about this, precisely because applications in my field tend to suffer from issue (b): with conventional installed software you have to be pretty sure you want to use the application before you install it (particularly since I generally worry the uninstall won't actually remove all the crap it installed, or will remove stuff shared with other programs), whilst with a web application you can try it, and even if it behaves heinously or isn't useful, the only thing you need to do is not visit that site again. In this respect I think that HALF of Google's Chrome OS programme is a good idea for users: making effective web applications available is a brilliant thing; it's the "web applications will be the only applications available on Chrome OS" part that I'm not sure about.

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] Suckless operating system
On Mon, Jun 14, 2010 at 12:38 AM, Connor Lane Smith c...@lubutu.com wrote:
> On 14 June 2010 00:16, David Tweed david.tw...@gmail.com wrote:
>> One of the issues to consider is that what computers are used for changes with time, and decisions that one may classify as the suckless way of doing things at one point in time may mean that it's not effectively usable in some future situations.
> If the system is sufficiently modular it should be relatively future-proof.

I meant to suggest that design decisions and architectures might need changing as new use cases come to light, rather than that a single design should be future-proof-ish, and that this is in fact desirable. However, that means that saying something is suckless has to be implicitly qualified with "for current needs". To pick a really simple example, consider the changes to booting that have happened since the arrival of netbooks. What was once a relatively rare process, with the corresponding suckless design being to keep things simple, has become something where sub-5s booting is wanted, which requires more complicated techniques. That's not to say that old-style booting was wrong for the time it was designed, but the criteria now are different, and consequently the most elegant solution is now different.

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] stderr: unnecessary?
On Sat, Jun 12, 2010 at 12:06 PM, Kris Maglione maglion...@gmail.com wrote:
> On Sat, Jun 12, 2010 at 12:53:27PM +0200, pancake wrote:
>> On Jun 12, 2010, at 9:27 AM, Connor Lane Smith c...@lubutu.com wrote:
>>> On 12 June 2010 08:00, Kris Maglione maglion...@gmail.com wrote:
>>> Except it can actually fetch as much data as is addressable in memory in a single call, if the kernel and library are tailored to.
>> That's what mmap is for. Using read is just stupid.
> mmap is silly. If you want that much data mapped, it's because you want fast access to it. If you just want random access to it, you read it as you need it. mmap doesn't offer any performance advantage. When you touch a page that wasn't already there, the kernel has to fault it in, which is already as expensive as the read system call, and even more so because of the coarse granularity. It needs to read in an entire page, even if all you need is a byte. And if you need a dword across a page boundary, you get two faults and two pages read in. There's really just no point.

I just know I'm going to regret getting involved in this, but... My understanding is that on Linux at least, reading causes the data to be moved into the kernel's page cache (which I believe has page-level granularity even if you read only a byte), and then a copy is made from the page cache into the process's memory space. Mmapping it means your process gets the page-cache page mapped into its address space, so the data is only in memory once, rather than an average of 1.x times where x depends on the page-cache discard policy. So IF you are genuinely moving unpredictably around a truly huge file, mmapping it means that you can fit more of it in memory, rather than having both your program and the page cache trying to figure out which bits to discard in an attempt to keep memory usage down. This effect is actually much more important with huge files than with smaller files, where the page-cache duplication doesn't have as much effect on system memory usage as a whole.

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
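To make the double-buffering point concrete, here is a minimal sketch of the two approaches (the filename is hypothetical and error handling is abbreviated; assumes a Linux-ish system). With read() the file's data ends up held twice, once in the page cache and once in buf; with mmap() the process addresses the page-cache pages directly, so the data is held only once:

    /* read() vs mmap() of the same file -- illustrative sketch only */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        struct stat st;
        int fd = open("huge.dat", O_RDONLY);   /* hypothetical file */
        if (fd < 0 || fstat(fd, &st) < 0)
            return 1;

        /* read(): the kernel copies from the page cache into our buffer,
         * so the data now exists in memory twice */
        char *buf = malloc(st.st_size);
        if (!buf || read(fd, buf, st.st_size) != st.st_size)
            return 1;

        /* mmap(): the page-cache pages are mapped straight into our
         * address space and faulted in on first touch -- one copy total */
        char *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED)
            return 1;

        /* ... access buf[i] or map[i] ... */
        munmap(map, st.st_size);
        free(buf);
        close(fd);
        return 0;
    }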
Re: [dev] Tiling windowmanager workflow (Was: [dvtm] Fibonacci layout patch)
On Tue, Jun 1, 2010 at 12:56 PM, Moritz Wilhelmy c...@wzff.de wrote:
> On Tue, Jun 01, 2010 at 01:27:07PM +0200, Mate Nagy wrote:
>> Using the vim splits may be cheating, but it sure is convenient. sorry for self-reply: I thought that maybe for maximum punishment, the fibonacci layout could support nmaster. (Also note that this is a 2560x1600 setup, that's why so much division (and nmaster) makes sense.)
> Ah, guess it's just my 1280x1024 screen then :) Actually tiling doesn't even make much sense on it; when I went with monocle on the netbook I grew used to it and use it everywhere now. Anyone else interested in sharing how they use their system? It seems like an interesting topic.

The typical usage where I have more than one window is that I have maybe two windows providing a view on an editor (e.g., showing a .h and a .cpp file, or two .cpp files when I'm trying to track down an inconsistency in usage), an xterm for running the program being developed, an xterm holding gnuplot, and one gnuplot window for looking at accuracy plots (I tend to work on machine learning stuff, where you do that a lot). So it's all associated with one human-level task, but it's using multiple computer applications. Likewise there are other situations, but they all tend to fall into that category: I don't tend to have separate human tasks on screen at the same time very much.

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] [sw] Suckless web-framework
On Mon, Apr 5, 2010 at 6:34 PM, Charlie Kester corky1...@comcast.net wrote:
> On Mon 05 Apr 2010 at 08:29:24 PDT Connor Lane Smith wrote:
>> On 5 April 2010 15:13, Uriel lost.gob...@gmail.com wrote:
>>> Actually, modern browsers parse HTML much faster than XHTML (yes, I was fooled by the XML scam once too, and it was not until recently that I discovered even the myth of it making parsing of webpages faster was totally bunk).
>> My point was not that we should write XHTML, but that we should write simple HTML, and that simple does not solely mean fewer characters. (Nor does it solely mean efficiency. I have a dog on my shelf telling me: simplicity, clarity, generality.) I was considering from the point of view of the author of a new, say, htmlfmt. To quote,
>> On Mon, Apr 5, 2010 at 1:38 PM, Connor Lane Smith c...@lubutu.com wrote:
>>> I'm not even sure how fewer characters equates as simpler: LOC is only an approximation of how suckless our code is. When given a trade-off between two simple lines or one complex one, write two. A paragraph makes sense as <p>text</p>: it opens, it closes. Quotes are nice too. I'm not saying it should validate as XHTML, but simplicity is more profound than wc.
> While pondering the import of your message, and thinking about how ordinary language uses quotation marks to both open and close a quote, it struck me that my email client was giving me an elegant example of how the need for a closing tag can be eliminated. See how the '>' character is used?

Regardless of the strengths and weaknesses of HTML, the '>' style has its own problems. Firstly, unless you've got an editor programmed for the syntax, it means you can't cut and paste just the content of a quoted region. Secondly, don't forget that you've got to figure out how to allow literal '>' characters at the start of a line if you want it to be able to work with absolutely any data someone wants to display in there (such as '8<' visual cut markers).

> As for paragraphs, separating them with blank lines always made more sense to me than <p> tags, and here again, no closing tag is required.

Of course, if there are other reasons why one might want to have elements within a semantic paragraph which are on separate lines, then one needs to come up with a syntax for having visually blank lines which aren't paragraph breaks (as witnessed by the use of comment "blank" lines in TeX/LaTeX:

preceding paragraph

start of paragraph content
%
$$displayedEqn$$
%
end of paragraph content

following paragraph

), which again spoils the simplicity. And note how I've just used some blank lines in order to present an example within what is semantically one paragraph. None of this is to say that a different markup mechanism couldn't work better than HTML, but it's easy to have an initially simple proposal that suddenly sprouts a lot of complexity when you add mechanisms for corner cases. Personally, the ONE, SINGLE thing XML (and its specialisations) had going for it was that, for all its annoyances, it was hoped to be dominant enough that you only had to learn techniques, common bugs and libraries for one syntax. Unfortunately the multiple ideas about how to do a better markup language mean that even that advantage has gone.

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] Why use Mercurial?
On 2/14/10, Kris Maglione maglion...@gmail.com wrote:
> 1) We've been using Mercurial since long before the advent of git.

As a purely factual matter, this can't be correct, as Matt Mackall started work on Mercurial after reading Linus Torvalds announce that he'd got the very initial bare bones of git working. (It all began as a difference of opinion about whether whole-file checkpointing (git) or tracking patches (Mercurial) was the better fundamental approach.) For all practical purposes they are the same age. You might have meant "since long before git became user-friendly enough", but that's a slightly different statement.

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] [OFFTOPIC] Recommended meta-build system
On Wed, Feb 3, 2010 at 4:15 AM, Noah Birnel nbir...@gmail.com wrote:
> On Mon, Feb 01, 2010 at 04:49:52PM +0100, sta...@cs.tu-berlin.de wrote:
>> ... a mobile phone with integrated camera, touch screen, 'apps' for learning languages, etc. is as much suckless as an axe with a door bell, toilet paper and nuclear power generator.
> At this point a mobile phone is a general purpose portable computer. The camera is no more out of line than the speakers hooked up to your home box.

I partly think it's perception shaped by marketing. You can still buy a mobile phone that only has voice functions. You can also buy a more general communications/entertainment node device, which has a host of hardware and software that's all appropriate to that usage, including, as one component, making voice calls. The only problem is that they're still marketed as "phones", which seems to cause cognition problems for some people, who seem to take it as gospel that the label marketers use must be right, and therefore that the device is wrong, rather than vice versa. (I've never subscribed to the philosophy that an entity should do one thing well, but rather that an entity should not have non-orthogonal capabilities. If you're into that sort of thing, I don't see any reason why you'd consider mobile photo-taking, internet browsing, casual entertainment games, etc., to be non-orthogonal to chatting to friends: they're all ways to entertain yourself while not at home.)

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] [OFFTOPIC] Recommended meta-build system
Thanks to everyone for all the help. I'm looking more at the development process than the distribution process, which means different issues are most important for me. The big issue I'm looking at is that I've got lots of programs which can be visualised as having conventional dependencies, with the twist that, supposing executable foo depends upon colourSegmentation.o: if the target processor has SSE3 instructions and there's a processor-optimised colourSegmentation.c in the SSE3 directory, compile and link against that; if it doesn't exist, compile and link against the version in the GENERIC_C directory. I don't think maintaining separate makefiles that are manually kept up to date in each case as new processor-optimised code gets written is going to be reliable in the longer term. I think I'll follow the general advice to maintain a single makefile that describes the non-processor-specific dependencies by hand, and then try some homebrew script to automatically infer and add appropriate paths to object files in each processor-capability makefile, depending on availability for each processor-capability set. (This is probably not a common problem.)

> I recommend mk from Plan 9; the syntax is clean and clearly defined (none of the "is it BSD make, is it GNU make, or is it some archaic Unix make?" problem). I found that all meta build systems suck in one way or another -- some do a good job at first glance, like scons, but they all hide what they really do, and in the end it's like trying to understand configure scripts if something goes wrong. make or mk are better choices in this regard.

Yeah. I don't mind powerful languages for doing stuff automatically; the problem is systems that aren't designed to be easily debuggable when they go wrong.

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] [OFFTOPIC] Recommended meta-build system
On Wed, Jan 27, 2010 at 6:25 AM, Uriel lost.gob...@gmail.com wrote:
> Why the fucking hell should the fucking build tool know shit about the OS it is running on?!?!?! If you need to do OS guessing, that is a clear sign that you are doing things *wrong* 99% of the time.

[In what follows, by "OS" I mean the kernel plus the userspace libraries that provide a higher-level interface to the hardware than what runs in the kernel.] It would be great if conceptual interfaces that are a decade or more old were universally standardised (so you don't have to worry about whether mkstemp() is provided, etc.), so that a lot of the configuration processing could go away, and maybe that's the situation for most text and filesystem applications. But there are, and will be in the future, new interfaces that haven't solidified into a common form yet -- e.g., webcam access, haptic input devices, accelerometers/GPS, cloud computing APIs, etc. -- for which figuring out what is provided will still be necessary in meta-build/configuration systems for years to come, for any software that will be widely distributed.

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
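For anyone who hasn't poked inside a configure script: the kind of probe at issue is just a tiny program whose successful compilation answers the question. A minimal sketch, assuming a POSIX-ish toolchain (the file name and the HAVE_MKSTEMP convention are illustrative, not any particular tool's):

    /* conftest.c -- if this compiles and links, the platform provides
     * mkstemp(); a configure-style script would key a HAVE_MKSTEMP
     * define off whether the compile succeeded */
    #include <stdlib.h>

    int main(void)
    {
        char tmpl[] = "/tmp/probeXXXXXX";
        return mkstemp(tmpl) == -1;
    }

The same machinery that today probes for mkstemp() is what ends up probing for each new wave of not-yet-standardised interfaces.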
[dev] [OFFTOPIC] Recommended meta-build system
Hi,

I'm wondering if anyone has had particularly good experiences with any meta-build system (cmake, etc.) in the following circumstances: I will have a large codebase which consists of some generic files and some processor-specific files. (I'm not worried about OS-environment stuff like "has vsnprintf?" that configure deals with.) In addition, it'd be nice to be able to have options like debugging, release, gprof-compiled, etc., similar to the processor specification. I need to be able to select the appropriate files for a given build and compile them together to form an executable. It would be preferable if all object files and executables could coexist (because it's a C++ template-heavy source base, which means individual files compile relatively slowly, so it'd be preferable only to recompile if the source has actually changed), using directories or naming conventions. I've been doing some reading about things like cmake and SCons, but most strike me as having built-in logic for their normal way of doing things and being relatively clunky if you specify something different. (Incidentally, when I say meta-build system I mean that I don't mind if it builds things directly or if it outputs makefiles that can be invoked.) Does anyone have any experiences of using any tool for this kind of purpose? (One option would be to just have a static makefile and then do some include-path hackery to select processor-specific directories, picking specific versions of files depending on options, and then rely on ccache to pick up the correct object file from the cache rather than recompiling. But that feels like a hack for avoiding having a more expressive build system.) Many thanks for sharing any experiences,

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] a suckless computer algebra system
On Fri, Nov 20, 2009 at 12:51 AM, Kris Maglione maglion...@gmail.com wrote:
> On Thu, Nov 19, 2009 at 02:23:35PM -0600, A.J. Gardner wrote:
>> I'm interested in math and CASs, but my opinions on available software are ill-formed and mostly ignorant. Does anyone else here have an interest in this topic, broadly speaking? If so, do you have any preferences for one package over another? Have you found any math software that seems to follow the worse-is-better design model?
> Don't be silly. There's nothing like a suckless CAS, at least nothing remotely approaching the simplicity of suckless.org software. Computer algebra and calculus are complex and computationally intensive. They can't (and arguably shouldn't) be simplified beyond a point.

There's a related issue: often the task you want to perform inherently has too high a computational complexity as a generic problem (at least, the complexity is too high if you're working on the problems that tend to come up), so you want multiple different algorithms (or different systems) whose approaches differ in which classes of problem they solve quickly. (For instance, the Fermat system (http://home.bway.net/lewis/) uses an algorithm for some polynomial manipulations that's not used by other CASes, and apparently succeeds on some problems the others run out of memory on, whilst failing on problems that the others can solve. So if you only care about achieving your mathematical task, and not about only using aesthetically righteous software, you might well try both systems.) Contrast this with, e.g., a typesetting system, where you can have one best algorithm that does everything. I have a lot of sympathy with you on the UI front: I dislike intensely the programmer-personal GUI choices each CAS I've used seems to make, and in an ideal world there would be a common GUI (with syntax modified for consistency where the system's semantic representation makes it possible). But I don't think anyone (me included) actually wants to do the work to do that. Incidentally, one system I haven't seen mentioned so far is Axiom: http://www.axiom-developer.org/ Not compellingly brilliant, but not bad either.

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] a suckless computer algebra system
On Fri, Nov 20, 2009 at 1:38 PM, Jukka Ruohonen jruoho...@iki.fi wrote:
> On Thu, Nov 19, 2009 at 02:23:35PM -0600, A.J. Gardner wrote:
>> Anyone know of any suckless math software out there in the tubes?
> As for algebra, the king of the hill is without doubt LAPACK. But since Fortran is nowadays seldom used, few people can tell if it sucks in the sense of suckless (?). But it is still the fastest language in its own area; this may or may not be important.

FWIW, my understanding is that a LAPACK library must have an API which conforms with the reference Fortran implementation, but there are various versions implemented in various languages (Fortran, C, CUDA, etc.). As for the code quality, I can see the code driving certain people on this list mad, because it deliberately doesn't compute things in the simplest way and fewest lines, in order to do things like achieve close to optimal cache blocking on modern multicore machines. A comparison of how much performance can vary depending on how the code is written can be glimpsed in the graphs in this paper: http://www.tudelft.nl/live/ServeBinary?id=608b3015-5195-438a-90fc-c30c2252a066&binary=/doc/blas_lapack.pdf

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
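To illustrate what "not computing things in the simplest way" buys, here is a toy sketch of cache blocking -- entirely my own illustration, not LAPACK or GotoBLAS code, which is far more involved. The naive triple loop streams whole rows of B through the cache for every row of C; the blocked version works on tiles small enough to stay cache-resident:

    /* blocked C += A*B for n x n row-major matrices -- toy sketch only */
    void matmul_blocked(double *c, const double *a, const double *b, int n)
    {
        const int bs = 64;   /* tile size: tuned to the cache in real code */
        for (int ii = 0; ii < n; ii += bs)
        for (int kk = 0; kk < n; kk += bs)
        for (int jj = 0; jj < n; jj += bs)
            /* multiply one tile; the bound tests handle the ragged edges */
            for (int i = ii; i < n && i < ii + bs; i++)
            for (int k = kk; k < n && k < kk + bs; k++) {
                double aik = a[i * n + k];
                for (int j = jj; j < n && j < jj + bs; j++)
                    c[i * n + j] += aik * b[k * n + j];
            }
    }

Judged by "simplest code" metrics, the plain three-loop version wins; judged by performance on a machine with a memory hierarchy, the blocked version can be several times faster, which is the sort of gap the graphs in the paper show.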
Re: [dev] a suckless computer algebra system
On Fri, Nov 20, 2009 at 2:15 PM, Jukka Ruohonen jruoho...@iki.fi wrote:
> On Fri, Nov 20, 2009 at 01:53:47PM +0000, David Tweed wrote:
>> FWIW, my understanding is that a LAPACK library must have an API which conforms with the reference Fortran implementation, but there are various versions implemented in various languages (Fortran, C, CUDA, etc.).
> This is true. But LAPACK itself is a stand-alone Fortran library; typically it is used via things like f2c.
>> As for the code quality, I can see the code driving certain people on this list mad, because it deliberately doesn't compute things in the simplest way and fewest lines, in order to do things like achieve close to optimal cache blocking on modern multicore machines. A comparison of how much performance can vary depending on how the code is written can be glimpsed in the graphs in this paper:
> This is again true, IMO. I'd say that in the sense of traditional software engineering, the code quality of numerical software is generally terrible.

I was pointing out more how simple-minded software metrics would condemn you to roughly the level of performance achieved by the reference LAPACK (the white bars) in the paper referenced, which to my mind suggests there's a flaw in the software metrics. I'd also query whether the code quality is terrible in most numerical software: what I'd say is that the authors have a task to achieve (i.e., using as much of the computing power as possible) and make the software as simple and maintainable as it can be given that task. (What they don't generally do is say "if we reduce what portion of the task we'll implement for users, we get wonderfully simple code".)

> But as for validity and reliability of numerical algorithms, the thing that really matters in this context, LAPACK is again without doubt the most respected library. In fact, it is intriguing to follow the history of numerical matrix algebra and its close correspondence with the development of ALGOL, LINPACK, and later LAPACK.

This is probably splitting hairs, but my understanding is that LAPACK (and the BLAS below it) are more specifications of library functionality (in the form of a reference implementation) than a single library. The development history is certainly interesting, particularly the adaptation from old-style computer assumptions ("memory is memory is memory") to taking active steps to optimise for the multi-level memory hierarchy in modern machines, starting around the time of Goto-BLAS.

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] a suckless computer algebra system
On Fri, Nov 20, 2009 at 3:20 PM, Jukka Ruohonen jruoho...@iki.fi wrote:
> On Fri, Nov 20, 2009 at 02:57:24PM +0000, David Tweed wrote:
>> I was pointing out more how simple-minded software metrics would condemn you to roughly the level of performance achieved by the reference LAPACK (the white bars) in the paper referenced, which to my mind suggests there's a flaw in the software metrics. I'd also query whether the code quality is terrible in most numerical software: what I'd say is that the authors have a task to achieve (i.e., using as much of the computing power as possible) and make the software as simple and maintainable as it can be given that task. (What they don't generally do is say "if we reduce what portion of the task we'll implement for users, we get wonderfully simple code".)
> I think there is no misunderstanding here; the suckless metrics do not apply here. But given that this is a suckless list, with "terrible code quality" I meant that numerical software is often terrible against conventional metrics of code quality: cryptic, hard to read and understand, full of little hacks, difficult to change and maintain, badly formatted, etc. Take the aspects of software quality mentioned in ISO 9126-1:
> * Functionality (suitability, accuracy, compliance, security, etc.)
> * Reliability (maturity, recoverability, fault tolerance, etc.)
> * Usability (learnability, understandability, operability, etc.)
> * Efficiency (time and space performance, etc.)
> * Maintainability (stability, analyzability, testability, etc.)
> * Portability (installability, replaceability, adaptability, etc.)
> Against these abstract concepts of general software quality, I'd say that numerical software is generally par excellence in some aspects, but terrible in others.

I think it's just a difference in when we'd use words like "terrible". It sounds like if code is difficult to read on some absolute scale, you'd call that terrible but say that it's justified in being terrible in order to achieve efficiency. I'd rate difficulty of reading relative to other ways of achieving _exactly the same_ programming goal (including efficiency-level targets), so if there's no simpler way of programming it, I'd rate it as having good readability. (Of course, if there is a simpler way of coding that achieves the exact goal, then it has poor readability.) Likewise, portability is relative to the goal: if the goal is to extract as much performance as possible from the particular processor the code is running on, then portability implies different things than for a portable implementation of wc, etc. It's a difference in when we would use particular terms rather than a real difference. The other thing is that the only code I've looked at in depth is ATLAS and Goto-BLAS, which both seem quite readable given their targets. Maybe there's a corpus of much worse numerical code I just haven't seen. (In case it confused people, the comment in my original mail about sending some people on this list mad was more a comment on those people's software values than a criticism of numerical code.)

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
Re: [dev] A lightwieight and working typesetting system.
On Wed, Sep 2, 2009 at 12:56 PM, QUINTIN Guillaume coincoin1...@gmail.com wrote:
> Hi, Do you guys know a (working) typesetting system other than latex? And a good soft to make presentations?

A key point is: do you need to typeset complicated mathematical expressions? I'm not aware of anything that has such good built-in support for automatically typesetting complicated mathematical expressions, and fonts containing the symbols, as the TeX family. (Using MathML is, I gather, capable of giving the same results, but is more complicated.) Any WYSIWYG thing based on selecting and fine-tuning character and operator positions with the mouse WILL drive you crazy when you decide to change notation in a large part of your document for consistency reasons. Most PowerPoint presentations that involve math tend to use programs that create graphic images of equations rendered by TeX code; I've never seen enough OpenOffice presentations to know if they do the same thing. So if you're writing mathematics you'll probably have TeX/LaTeX installed anyway, and the question is whether you want to add an additional typesetting system that you'll use for other typesetting or not. Of course, if you have no need to typeset mathematical expressions, the above consideration doesn't apply.

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
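The consistency point is where TeX-style markup really pays off: because notation is captured semantically in macros, one definition change restyles every occurrence. A minimal LaTeX sketch (the \vect macro name is my own, hypothetical):

    \documentclass{article}
    \usepackage{amsmath}

    % One macro captures the notation; change \mathbf to, say, \vec
    % here and every vector in the document is restyled consistently.
    \newcommand{\vect}[1]{\mathbf{#1}}

    \begin{document}
    The least-squares estimate is
    \[
      \hat{\vect{x}} = (\vect{A}^{\mathsf{T}}\vect{A})^{-1}\vect{A}^{\mathsf{T}}\vect{b} .
    \]
    \end{document}

Doing the same renotation in a mouse-positioned WYSIWYG equation editor means revisiting every equation by hand.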
Re: [dev] dwm in a window
On Tue, Jul 7, 2009 at 4:25 PM, Kurt H Maier karmaf...@gmail.com wrote:
> On Tue, Jul 7, 2009 at 9:24 AM, Anselm R Garbe garb...@gmail.com wrote:
>> a) on top of existing ones b) existing ones on top
>> I tend to a) atm, just because it would make porting to other platforms so much simpler.
> There is no point to running a window system on top of an existing window system, unless there is some religious abstraction method you're married to. Implementing a gui that runs on a gui ends up with crap like WINE. I can understand arguments that x11 needs to be replaced, and I can understand (some) arguments that x11 needs to be left alone, but the idea that x11 needs to be *supplemented* is amazing.

The advantage of running something on top of X during development is that users can experiment with it whilst still being able to run their existing applications, thus hopefully getting some people interested in doing development because they like using what's currently there. Otherwise, you end up with something like Berlin, the Y windowing system, the full display-PostScript compositing engine behind full GNUstep, and all those other new windowing systems that never got anywhere near completion, because the only thing one could do with them in their current state was development. (Before anyone asks, I'm unlikely to get involved in developing a new windowing system, precisely because I suspect it would be very difficult to defy the historical pattern in which a lot of code gets written but development stalls before a day-in-day-out usable system is completed. Mind you, I'm weird in that I tend to prefer existing software that I can use to eulogising about how in principle there's this great way of doing things whose current incarnation doesn't have any way of achieving the tasks I want to use my computer for today ;-) )

--
cheers, dave tweed
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot