Re: [Haskell-cafe] Children elements with HXT
That stuffed me up for a bit. I wrote some ugly Template Haskell a while back to automatically generate XmlPickler instances; I can send it to you if you want.

On Dec 23, 2009, at 7:55 AM, Tony Morris wrote:
> Adding (a_remove_whitespace, v_1) as a parser option when running solves it. Silly me.
>
> Tony Morris wrote:
>> I am trying to parse XML using HXT following
>> http://www.haskell.org/haskellwiki/HXT/Conversion_of_Haskell_data_from/to_XML
>>
>> Here is my XML file (way.xml):
>>
>>   <way id="27776903" visible="true" timestamp="2009-05-31T13:39:15Z"
>>        version="3" changeset="1368552" user="Matt" uid="70">
>>     <tag k="access" v="private"/>
>>     <tag k="highway" v="service"/>
>>   </way>
>>
>> The problem is that when parsing, reading the tag entries into the list
>> held by the Way data structure, I cannot get anything but an empty list.
>> Here is my parsing code:
>>
>>   import Text.XML.HXT.Arrow
>>
>>   newtype Way = Way { tags :: [Tag] } deriving (Eq, Show)
>>
>>   xpWay :: PU Way
>>   xpWay = xpElem "way" (xpWrap (Way, tags) (xpList xpTag))
>>
>>   data Tag = Tag { k :: String, v :: String } deriving (Eq, Show)
>>
>>   xpTag :: PU Tag
>>   xpTag = xpElem "tag" (xpWrap (uncurry Tag, \t -> (k t, v t))
>>                                (xpPair (xpAttr "k" xpText) (xpAttr "v" xpText)))
>>
>> When I run it, I get the following result:
>>
>>   *Main> runX (xunpickleDocument xpWay [] "way.xml")
>>   [Way {tags = []}]
>>
>> Why is the tags list empty instead of holding two entries?
>>
>> --
>> Tony Morris
>> http://tmorris.net/

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: install-dirs on Mac OS X
Tom Tobin wrote:
> Heinrich Apfelmus wrote:
>> Likewise, ~/Library/Haskell seems to be the best place for user installs.
>
> While I don't mind the /Library/Haskell path for global installs, I'm not
> sure how I feel about this for local installs. It usually drives me crazy
> when my more Unix-y tools stick stuff in my ~/Library/ directory; for
> instance, I had to actively fight with my copy of Aquamacs Emacs in order
> to get everything running from ~/.emacs.d/ rather than
> ~/Library/Preferences/Aquamacs Emacs/.

The reason I'm proposing this is consistency: the idea behind the two Library folders is that your Haskell installation is the union of the /Library/Haskell and the ~/Library/Haskell directories. The only difference between these two directories should be that the former is available to all users while the latter is available to just a single user.

On a related note, with the --prefix--/packages/ layout for packages in place, it would ideally be possible to mv packages from /Library to ~/Library and vice versa, making them available to all users or to just a single user with a simple file rename. At least, that's the OS X philosophy. Our main obstacle would be to figure out what to do with dependents and dependencies.

Regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] profiling problem
Hi, I guess I should report this? Previously, this program got a signal 11, and that happened somewhat later in the process, so I'm not sure how reproducible it is. Source and data are available, if it is of any interest. (This is using the GHC currently shipped with Ubuntu 9.10.)

  xml2x3prof: internal error: traverseWeakPtrList: not WEAK
      (GHC version 6.10.4 for x86_64_unknown_linux)
      Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug
  Command terminated by signal 6
  3577.55user 48.81system 1:00:27elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
  0inputs+0outputs (0major+1306970minor)pagefaults 0swaps

-k
--
If I haven't seen further, it is by standing in the footprints of giants

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] FGL/Haskell and Hierarchical Clustering/dendograms
Hi! I have some trouble implementing a single-linkage clustering algorithm using a minimum-spanning tree, so I would appreciate it if some of you could give me some advice.

My approach is to use minimum spanning trees for that task. I am using the FGL library (http://web.engr.oregonstate.edu/~erwig/fgl/haskell/), and I have managed to compute a minimum spanning tree from an arbitrary fully connected graph with 5 nodes. I get

  [ [(4,0)]
  , [(3,1), (4,0)]
  , [(1,1), (3,1), (4,0)]
  , [(2,3), (4,0)]
  , [(5,12), (2,3), (4,0)] ]

which is the root path tree of the minimum spanning tree created by the function msTreeAt. From that I would create a dendrogram. [(1,1), (3,1), (4,0)] tells us that nodes 1, 3 and 4 have the same cost, namely cost 1, therefore these are merged at level 1. At level 1 we now have 3 clusters: (1,3,4), 2 and 5. Now the second lowest should be merged, that is 2 and 4. BUT because 4 is already merged into the cluster (1,3,4), we should merge (1,3,4) and 2 at level 3 (because the cost is 3). Now at level 3 we have 2 clusters, (1,2,3,4) and 5. Finally we merge the last one at level 12: (1,2,3,4,5), and we are finished.

I find it very hard to see how this could be done efficiently without pointers (as in C). I have thought of just saving the nodes from the start of the root path and traversing it, but a lot of searching would have to be done all the time. Can you please give me some advice on that?

Kind regards
Nikolas Borrel-Jensen
Computer Science, University Of Copenhagen

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Children elements with HXT
Max Cantor wrote:
> That stuffed me up for a bit. I wrote some ugly template haskell a while
> back to automatically generate XmlPickler instances. can send to you if
> you want

I recall typLAB writing about generic XML picklers: http://blog.typlab.com/2009/11/writing-a-generic-xml-pickler/

They mention the package regular-xmlpickler: http://hackage.haskell.org/package/regular-xmlpickler

HTH, Martijn.

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Control.Parallel and the Reader monad
Hi, I'm facing the following problem. I've got some computation

  c :: a -> Reader e b

that I'm running on several a's:

  mapM c xs

A natural optimisation of this program would be to take advantage of Control.Parallel to run these computations in parallel, which seems sound since the Reader monad is commutative. Unfortunately, I didn't manage to do so. Has this situation been tackled before? Is there some library or function I've missed involving commutative monads and Control.Parallel?

Regards, Paul

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
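[Editorial sketch, not from the thread: one possible approach. Since Reader e b is essentially just a function e -> b, running each computation is pure function application, so the result list is an ordinary pure list whose elements can afterwards be forced in parallel with any strategy (e.g. parList from the 'parallel' package — that evaluation step is an assumption here). A minimal self-contained version, with a hand-rolled Reader to avoid library dependencies:]

  -- A minimal Reader, standing in for Control.Monad.Reader's type.
  newtype Reader e a = Reader { runReader :: e -> a }

  -- Run one Reader computation per list element, sharing the environment.
  -- Because the result is a pure list, each element can later be forced
  -- in parallel (e.g. with `parList rdeepseq`) without any monadic
  -- sequencing getting in the way.
  mapReader :: (a -> Reader e b) -> [a] -> Reader e [b]
  mapReader c xs = Reader $ \e -> map (\x -> runReader (c x) e) xs

[With mtl's Reader the same definition can be written via ask/runReader; only the parallel-evaluation step needs the 'parallel' package.]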
Re: [Haskell-cafe] FGL/Haskell and Hierarchical Clustering/dendograms
Nikolas Borrel-Jensen <nikolasbor...@gmail.com> writes:
> I find it very hard to see how this could be done efficiently without
> pointers (as in C). I have thought of just saving the nodes from the start
> of the root path and traversing it, but a lot of searching would have to
> be done all the time.

I must admit I didn't follow your examples. But when I implemented single-linkage clustering, I maintained a list of current clusters. Each cluster held a Set of its nodes, and, traversing the list of edges from least cost to greatest, the clusters containing the end points of each edge were identified and, if different, merged. It's probably possible to do it more efficiently, but I don't think it's too bad.

-k
--
If I haven't seen further, it is by standing in the footprints of giants

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
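[Editorial sketch: the scheme described above can be written in a few lines with Data.Set. This is a hedged illustration, not the poster's actual code; the Node/Edge representation is an assumption.]

  import Data.List (sortOn)
  import qualified Data.Set as Set

  type Node = Int
  type Edge = (Node, Node, Double)   -- (endpoint, endpoint, cost)

  -- Maintain a list of current clusters, each a Set of nodes. Traverse
  -- the edges from least cost to greatest; for each edge, find the
  -- clusters containing its end points and, if they differ, merge them.
  -- Each merge is reported together with its cost (the dendrogram level).
  singleLinkage :: [Node] -> [Edge] -> [(Double, [Set.Set Node])]
  singleLinkage nodes edges =
      go (map Set.singleton nodes) (sortOn (\(_, _, c) -> c) edges)
    where
      go _ [] = []
      go clusters ((a, b, c) : rest)
        | ca == cb  = go clusters rest            -- already in one cluster
        | otherwise = (c, merged) : go merged rest
        where
          ca     = head [s | s <- clusters, Set.member a s]
          cb     = head [s | s <- clusters, Set.member b s]
          merged = Set.union ca cb : filter (\s -> s /= ca && s /= cb) clusters

[The linear cluster lookup keeps the sketch simple; a union-find structure would make the merges near-constant time.]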
Re: [Haskell-cafe] ANN: Hemkay, the 100% Haskell MOD player
> I see no problem. I would generate a frequency and a volume control curve
> for each channel and apply this to the played instrument, then I would mix
> it.

Yes, that's basically what I do now: I flatten the song into a series of play states where for each active channel I store the pointer to the current sample, its frequency, volume and panning (none of these three parameters change within a chunk), and use that information to perform mixing. The mixing step is quite ad hoc, but at least it's simple enough, so it doesn't get out of hand.

> It is the strength of Haskell to separate everything into logical steps
> and let laziness do things simultaneously. Stream fusion can eliminate
> interim lists, and final conversion to storable vector using
> http://hackage.haskell.org/package/storablevector-streamfusion/ can
> eliminate lists at all.

But in my understanding that elimination is only possible if lists are not used as persistent containers, only to mimic control structures. Now I rely on samples being stored as lists, so I can represent looping samples with infinite lists and not worry about the wrap-around at all. So in order to have any chance for fusion I'd have to store samples as vectors and wrap them in some kind of unfold mechanism to turn them into lists that can potentially be fused away. In other words, besides a 'good consumer', I need a 'good producer' too.

However, there seems to be a conflict between the nature of mixing and stream processing when it comes to efficiency. As it turns out, it's more efficient to process channels one by one within a chunk instead of producing samples one by one. It takes a lot less context switching to first generate the output of channel 1, then generate channel 2 (and simultaneously add it to the mix) and so on, than to mix sample 1 of all channels, then sample 2 etc., since we can write much tighter loops when we only deal with one channel at a time.

On the other hand, stream fusion is naturally fit to generate samples one by one. It looks like the general solution requires a fusible transpose operation; otherwise we're back to hand-coding the mixer. Have you found a satisfying solution to this problem?

Gergely

--
http://www.fastmail.fm - A no graphics, no pop-ups email service

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
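[Editorial sketch: the channel-at-a-time mixing described above can be illustrated with plain lists. This is a simplified, hypothetical example — the real mixer works on play-state chunks with frequency, volume and panning applied first.]

  -- Mix one chunk of output by processing whole channels in turn: start
  -- from silence, then add each channel's samples into the accumulator
  -- with a tight zipWith loop. Channels may be infinite (looping
  -- samples), so we take only the chunk length from each.
  mixChunk :: Num a => Int -> [[a]] -> [a]
  mixChunk len = foldl addChannel (replicate len 0)
    where
      addChannel acc channel = zipWith (+) acc (take len channel)

[Mixing sample-by-sample would instead transpose the channel lists and sum each column — exactly the shape that is awkward to fuse.]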
Re: [Haskell-cafe] Infix instance headers
OK thanks, Bas

On Wed, Dec 23, 2009 at 10:50 AM, Simon Peyton-Jones simo...@microsoft.com wrote:
> Ah I see. It's an oversight. Thank you for pointing that out. I'll fix it,
> but it'll be in the HEAD, not 6.12.
>
> Simon
>
> | -----Original Message-----
> | From: Bas van Dijk [mailto:v.dijk@gmail.com]
> | Sent: 22 December 2009 10:52
> | To: Simon Peyton-Jones
> | Subject: Re: [Haskell-cafe] Infix instance headers
> |
> | On Tue, Dec 22, 2009 at 9:53 AM, Simon Peyton-Jones
> | simo...@microsoft.com wrote:
> | > works for me. what version of GHC are you using?
> |
> | ghc-6.12.1
> |
> | Sorry, maybe I wasn't totally clear. The example I posted works, but if
> | you write the class name in the instance header as infix I get a
> | "Malformed instance header" error.
> |
> | So the following doesn't work:
> |
> |   class (Monad pr, Monad cr) => pr `ParentOf` cr
> |
> |   instance Monad r => r `ParentOf` r
> |
> |   instance ( Monad cr
> |            , cr `TypeCast2` RegionT resource s pcr
> |            , pr `ParentOf` pcr
> |            )
> |         => pr `ParentOf` cr
> |
> | regards,
> |
> | Bas

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] pointfree-trouble
i dont know any calculus-thingy, this is what i did:

  reMatr a = Matr . a . unMatr
  reMatr a = Matr . (. unMatr) a
  reMatr a = Matr . (flip (.) unMatr) a
  reMatr = Matr . (flip (.) unMatr)

as Daniel pointed out (http://old.nabble.com/pointfree-trouble-td26881661.html#a26889388), this doesnt work because (flip (.) unMatr) takes two arguments. i'm really interested in this calculus stuff, looking it up now :)

Kim-Ee Yeoh wrote:
> There you have it: fully- and semi-pointfree versions of reMatr. A heads
> up: aggressively pursuing pointfreeness without type signatures guarantees
> a courtesy call from the monomorphism restriction, pace ()-garlic
> aficionados. As for your question on why the original code doesn't
> typecheck: if you explain how you arrived at it, perhaps we can figure out
> where you tripped up. Daniel Fischer, for instance, *calculated* for you
> the right answer. Habeas calculus and all that.
>
> slemi wrote:
>> thanks, that's a really neat syntactic sugar :) however, my original
>> question was how to make the reMatr function pointfree, as
>>
>>   reMatr = Matr . (flip (.) unMatr)
>>
>> is not working. any ideas/explanation why it doesnt work?
>>
>> Kim-Ee Yeoh wrote:
>>> Here's another way of writing it:
>>>
>>>   data Matrix a = Matr {unMatr :: [[a]]} | Scalar a deriving (Show, Eq)
>>>   -- RealFrac constraint removed
>>>
>>>   reMatr :: RealFrac a => ([[a]] -> [[a]]) -> (Matrix a -> Matrix a)
>>>   reMatr f = Matr . f . unMatr  -- this idiom occurs a lot, esp. with newtypes
>>>
>>> Affixing constraints to type constructors is typically deprecated.
>>>
>>> slemi wrote:
>>>> i have trouble making a function pointfree:
>>>>
>>>>   data RealFrac a => Matrix a = Matr [[a]] | Scalar a deriving (Show, Eq)
>>>>
>>>>   unMatr :: RealFrac a => Matrix a -> [[a]]
>>>>   unMatr = (\(Matr a) -> a)
>>>>
>>>>   reMatr :: RealFrac a => ([[a]] -> [[a]]) -> (Matrix a -> Matrix a)
>>>>   reMatr a = Matr . (flip (.) unMatr) a
>>>>
>>>> this works fine, but if i leave out the 'a' in the last function's
>>>> definition like this:
>>>>
>>>>   reMatr = Matr . (flip (.) unMatr)
>>>>
>>>> it gives an error. can anybody tell me why? (i'm using ghci)

--
View this message in context: http://old.nabble.com/pointfree-trouble-tp26881661p26902326.html
Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com.

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] ANN: hnn-0.1, a haskell neural network library
Hi, I just released the first version of my Haskell neural network library. It provides the very minimal features anyone would need in a neural network library. It has yet to be completed (regarding the features) and there is some room for optimization. Still, I'd be very glad to hear from Haskellers about this library, get feedback, etc.

Some links to get started with HNN:

  http://www.haskell.org/haskellwiki/HNN
  http://hackage.haskell.org/package/hnn -- hackage page
  http://mestan.fr/haskell/hnn/ -- online documentation
  http://alpmestan.wordpress.com/2009/12/23/hnn-0-1-has-been-released/

Please don't hesitate to try it, play with it, and give as much feedback as you want! Haskelly yours.

--
Alp Mestan
http://alpmestan.wordpress.com/
http://alp.developpez.com/

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] pointfree-trouble
Am Mittwoch 23 Dezember 2009 14:40:46 schrieb slemi:
> i dont know any calculus-thingy, this is what i did:
>
>   reMatr a = Matr . a . unMatr
>   reMatr a = Matr . (. unMatr) a
>   reMatr a = Matr . (flip (.) unMatr) a
>   reMatr = Matr . (flip (.) unMatr)

You need to be aware of the implicit parentheses, that is

  Matr . ((flip (.) unMatr) a)

or

  (.) Matr ((flip (.) unMatr) a)
    = ((.) Matr) ((flip (.) unMatr) a)
    = f (g x)

with

  f = (.) Matr
  g = flip (.) unMatr
  x = a

Now f (g x) = (f . g) x and you're done. But as Kim-Ee Yeoh pointed out, if you're pointfreeing, give type signatures, or the monomorphism restriction is going to surprise you some time.

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
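[Editorial sketch: putting Daniel's calculation together, with the Matrix type simplified to drop the datatype context, the point-free form works out like this:]

  data Matrix a = Matr { unMatr :: [[a]] } | Scalar a deriving (Eq, Show)

  -- f (g x) = (f . g) x, with f = (.) Matr and g = flip (.) unMatr:
  reMatr :: ([[a]] -> [[a]]) -> Matrix a -> Matrix a
  reMatr = ((.) Matr) . (flip (.) unMatr)

  -- The same thing written with operator sections:
  reMatr' :: ([[a]] -> [[a]]) -> Matrix a -> Matrix a
  reMatr' = (Matr .) . (. unMatr)

[Both unfold to reMatr f = Matr . f . unMatr when applied to an argument.]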
[Haskell-cafe] Missing symbol when trying to use OpenCLRaw
Can anyone tell me what command line options I need to build a simple example using OpenCLRaw on MacOSX (ghc 6.10.4)? Using -framework OpenCL reduces the missing symbols down to just _clGetProgramInfo but I can't see how to eliminate that last one. Has anyone else played with OpenCLRaw? -- Dan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Generating Haskell From a XSD
I would like to generate Haskell data types and XML serialization code from an XSD. I know of DtdToHaskell, but unfortunately I have yet to be able to generate a valid DTD from my XSD. Is there a way to generate Haskell from an XSD?

-Jonathan

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Generating Haskell From a XSD
I don't think such a tool exists. I think it would be a great contribution if someone decides to create one. Best Keith On Wed, Dec 23, 2009 at 12:34 PM, jonathangfisch...@gmail.com wrote: I would like to generate Haskell data types and xml serialization code from xsd. I know of DtdToHaskell but unfortunately I yet to be able to generate a valid dtd from my xsd. Is there a way to generate Haskell from a xsd? -Jonathan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe -- keithsheppard.name ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: Re: [Haskell-cafe] Generating Haskell From a XSD
Any interest in helping build the tool? -Jonathan On Dec 23, 2009 10:00am, Keith Sheppard keiths...@gmail.com wrote: I don't think such a tool exists. I think it would be a great contribution if someone decides to create one. Best Keith On Wed, Dec 23, 2009 at 12:34 PM, jonathangfisch...@gmail.com wrote: I would like to generate Haskell data types and xml serialization code from xsd. I know of DtdToHaskell but unfortunately I yet to be able to generate a valid dtd from my xsd. Is there a way to generate Haskell from a xsd? -Jonathan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe -- keithsheppard.name ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: Re: [Haskell-cafe] Generating Haskell From a XSD
I'd like to help... I'm not an expert in Haskell, but I guess I could help somehow...

Hector Guilarte

-----Original Message-----
From: jonathangfisch...@gmail.com
Date: Wed, 23 Dec 2009 18:11:21
To: Keith Sheppard <keiths...@gmail.com>; jonathangfisch...@gmail.com
Cc: haskell-cafe@haskell.org
Subject: Re: Re: [Haskell-cafe] Generating Haskell From a XSD

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Generating Haskell From a XSD
Yes, I'm interested in helping too. It's hard for me to know how much time I'll have, but my other side project is starting to wind down now. Maybe a wiki planning page and a patch-tag (or any other repo site, really) workspace is a good starting point?

Sent from my iPhone

On Dec 23, 2009, at 1:19 PM, Hector Guilarte <hector...@gmail.com> wrote:
> I'd like to help... I'm not an expert in Haskell, but I guess I could help
> somehow...
>
> Hector Guilarte

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Haskell in image processing
Disclaimer: this is an Xmas gift as opposed to serious stuff. http://professor-fish.blogspot.com/2009/12/major-breakthrough-in-image-processing.html Greetings, Ralf ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: Re: [Haskell-cafe] Generating Haskell From a XSD
Cool, glad to hear there is interest. I'm talking to a few other people who are expressing interest too. I still don't have access to the haskell wiki, but putting a page up there seems like a start. Any other suggestions for wiki page sites?

-Jonathan

On Dec 23, 2009 11:57am, Keith Sheppard keiths...@gmail.com wrote:
> Yes I'm interested in helping too. It's hard for me to know how much time
> I'll have but my other side proj is starting to wind down now. Maybe a wiki
> planning page and a patch-tag (or any other repo site really) workspace is
> a good starting point?

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Instances of `IsString`.
Hi Jason,

I believe the original purpose of IsString was to enable writing of DSLs, much like described in this paper: http://portal.acm.org/citation.cfm?id=1411236

As such, you might find far more uses of IsString inside DSLs, some of which are likely to remain private. It was never designed to be a feature for everyday use, but if you need it (as you do for Paradise) then it's hard to live without.

Thanks, Neil

On Sun, Dec 20, 2009 at 10:39 PM, Jason Dusek jason.du...@gmail.com wrote:
> 2009/12/20 Brandon S. Allbery KF8NH allb...@ece.cmu.edu:
>> On Dec 20, 2009, at 17:09, Jason Dusek wrote:
>>> A quick check on Hayoo! and in my interpreter shows that there are
>>> basically no instances of `IsString`. Is it really so little used?
>>
>> The only 2 instances I'm aware of are String and lazy and strict
>> ByteStrings. It's not clear to me that there need to be any more (well,
>> maybe the packed Unicode string package on hackage).
>
> It could be easily abused, that is for sure.
>
> --
> Jason Dusek

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
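[Editorial sketch of the DSL use mentioned above: a toy, hypothetical example (not from Paradise) showing how OverloadedStrings lets string literals build values of any IsString type.]

  {-# LANGUAGE OverloadedStrings #-}

  import Data.String (IsString (..))

  -- A toy DSL type: names that can be written directly as string literals.
  newtype Name = Name String deriving (Eq, Show)

  instance IsString Name where
    fromString = Name

  greet :: Name -> String
  greet (Name n) = "Hello, " ++ n

  -- With OverloadedStrings enabled, the literal "world" is elaborated to
  -- fromString "world" :: Name, so greet applies to it directly.
  example :: String
  example = greet "world"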
[Haskell-cafe] simple type?
Hello; I am writing some vectorized code, and I wondered exactly which types can be used in DPH. Is it true that one cannot use data constructors and recursive types? Thanks, -Jamie

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Is it just me... or is cabal/hackage a little broken?
On Wed, 2009-12-23 at 00:14 +0100, Bardur Arantsson wrote:
> Hi all,
>
> Sorry about the inflammatory title, but I just got this message from an
> uploaded package (hums):
>
>   Warning: This package indirectly depends on multiple versions of the
>   same package. This is highly likely to cause a compile failure.
>
> The thing is, I got the same message while trying to compile locally, and
> it turned out that all I had to do was to
>
>   $ cabal install PKG-X
>
> on all the packages that cabal complained about.

Right. I wrote a simple constraint solver for cabal-install to solve this problem (the diamond dependency problem).

> So why doesn't hackage do this automagically when I upload a package?

The single hackage buildbot does not do anything smart. It does not use the cabal-install solver, so it regularly runs into the diamond dep problem.

> How am I supposed to know which versions of my package's dependencies (or
> their dependencies) are the most recently compiled by hackage?

You're not. My suggestion is don't worry about it not building on hackage. The quality of the build results there just is not that good. For the new hackage server we're using a different design that will generate good build results.

> For the record: I did do a Check package upload first. It didn't complain.

Right. It's not a problem with the package itself.

> Is this an intractable problem? Am I being overly demanding (probably)?

No, not intractable. The cabal-install solver can handle it most of the time.

Duncan

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] ANNOUNCE: repr-0.3.2
Hello,

Some months ago I uploaded a little package called 'repr' to hackage. I've now updated the package to work with ghc-6.12.1 and its new base library 4.2.0.0. Back then I forgot to make a proper announcement, so I will do that now:

'repr' allows you to render overloaded expressions to their textual representation. For example:

  *Repr> let rd = 1.5 + 2 + (3 + (-4) * (5 - pi / sqrt 6)) :: Repr Double
  *Repr> show rd
  "fromRational (3 % 2) + 2 + (3 + negate 4 * (5 - pi / sqrt 6))"

See: http://hackage.haskell.org/package/repr-0.3.2

regards, Bas

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] ANNOUNCE: explicit-iomodes-0.1.2
Hello, I just released a minor update to my explict-iomodes package: http://hackage.haskell.org/package/explicit-iomodes-0.1.2 Changes: * Tested with ghc-6.12.1 and base-4.2.0.0 * Re-exported 'isEOF' from System.IO which I forgot in the previous version. * Internal changes: Explicit imports of all used symbols and Unicode syntax and symbols everywhere. regards, Bas ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] ANNOUNCE: usb-safe-0.4.1
Hello, I just released usb-safe-0.4.1: http://hackage.haskell.org/package/usb-safe-0.4.1

Changes compared to 0.3 (I didn't make an announcement for 0.4):

* Tested with ghc-6.12.1 and base-4.2.0.0.

* It turned out that an Interface can also be regarded as a scarce resource: it needs to be claimed before use and it needs to be released when you're done with it. So just as with USB devices I had to create a new region monad that supported claiming of interfaces. So now I had two region monads: a DeviceRegion and an InterfaceRegion, which were implemented exactly the same and supported the exact same operations. So the next thing I did was to generalize the region code and put it in its own package: 'regions'. This package will be announced separately.

* Removed the FilteredEndpoint type. Specific endpoints can now be retrieved from the alternate directly using 'getEndpoints'.

* Added missing language extension: OverlappingInstances. (The package was unusable without it.)

* Added convenience functions 'setConfigWhich', 'withInterfaceWhich' and 'setAlternateWhich' for quickly setting the first setting that satisfies the given predicate on its descriptor.

* Hid the ReadEndpoint and WriteEndpoint type classes so users can't write their own instances for them and thereby potentially mess up my safety guarantees.

* Lots of little refactorings (is that a real word?): documentation updates, explicit imports of all used symbols, Unicode syntax and symbols (using the base-unicode-symbols package).

* Finally, I packaged up some example code in: darcs get http://code.haskell.org/~basvandijk/code/usb-safe-examples/

regards, Bas

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] ANNOUNCE: regions-0.1.0.1
Hello,

As explained in my previous email, I generalized the DeviceRegion monad transformer from my usb-safe package and put it in its own package: http://hackage.haskell.org/package/regions-0.1.0.1

Hackage does not seem to build documentation at the moment, so if you're interested please look at the source code, which you can get using:

  darcs get http://code.haskell.org/~basvandijk/code/regions

The main goal of regions is to allow opening of scarce resources and operating on them, but to prevent operating on closed resources. The primary technique used in this package is called "Lightweight monadic regions", which was invented by Oleg Kiselyov and Chung-chieh Shan. See: http://okmij.org/ftp/Haskell/regions.html#light-weight

The package provides the monad transformer:

  newtype RegionT resource s (pr ∷ * → *) α

And the following operation for opening scarce resources (like files, memory pointers, database connections, USB devices, etc.):

  open ∷ (Resource resource, MonadCatchIO pr)
       ⇒ resource
       → RegionT resource s pr (RegionalHandle resource (RegionT resource s pr))

So you give it the resource you wish to open, and it yields a computation in the region monad that returns a handle to the opened resource. Note that this handle is parametrized by the region itself.

Finally there's the ability to run the region:

  runRegionT ∷ (Resource resource, MonadCatchIO pr)
             ⇒ (∀ s. RegionT resource s pr α) → pr α

'runRegionT' runs the given region and finally closes all opened resources automatically. Note that the 's' is quantified over the region but not over the return value. So all values that have this 's' in their type can not be returned from the given region. Remember that the handle to a resource, which we got from 'open', was parametrized by the region and so has this 's' in it. This prevents us from returning a handle to a closed resource from this function. So it's never possible to accidentally perform I/O with a closed resource.
There's also the ability to concurrently run a region inside another region using:

  forkTopRegion ∷ (Resource resource, MonadIO pr)
                ⇒ TopRegion resource s () → RegionT resource s pr ThreadId

Also note the packages:

  http://hackage.haskell.org/package/regions-monadsfd-0.1.0.1
  http://hackage.haskell.org/package/regions-monadstf-0.1.0.1

which define instances for the monad classes in monads-fd and monads-tf respectively.

Finally, if you want to open your own scarce resource in a region, the only thing you have to do is define an instance for:

  class Resource resource where
    data Handle resource ∷ *
    openResource  ∷ resource → IO (Handle resource)
    closeResource ∷ Handle resource → IO ()

That's it! I'm looking forward to seeing instances for file Handles, memory Ptrs, database connections, etc.

regards, Bas

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Performance of functional priority queues
On Tuesday 16 June 2009 23:50:45 Richard O'Keefe wrote:
> I've now done some benchmarks myself in C, Java, and Smalltalk, comparing
> imperative versions of leftist heaps with functional ones. For what it's
> worth, on a 2.16GHz Intel Core 2 Duo Mac, the coefficient in front of the
> log(n) part was
>
>                  C   Java   ST(A)   ST(B)
>   Imperative    40     70     150    1123
>   Functional   240    126     290    1895
>
> where ST(A) was a native-code Smalltalk and ST(B) a byte-code one. The
> C/Functional case used the Boehm collector, untuned. Times are in
> nanoseconds. Values of n ranged from 2 to 100; the correspondent was
> saying that small sizes were important. It seems that a factor of 2 for
> *this* problem is credible; a factor of 10 is not.

And your results above indicate that the fastest imperative heap is over 3x faster than the fastest functional heap?

--
Dr Jon Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?e

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] ANNOUNCE: repr-0.3.2
Where do you make use of it? :) Andrey -- View this message in context: http://old.nabble.com/ANNOUNCE%3A-repr-0.3.2-tp26908749p26909005.html Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Cabal's License type: MIT commented out?
I noticed on Hackage that packages that are MIT licensed show up as OtherLicense. I took a peek inside the Cabal code, and noticed that the License type has lines for MIT, but commented out [1]:

  --  -- | The MIT license, similar to the BSD3. Very free license.
  --  | MIT

Why is this? Both BSD3 and BSD4 get entries in this module. It would be nice to know up front that a license is indeed liberal (BSD/MIT) rather than having to open up the package and look at the LICENSE file. (I also use the MIT license for my code, and would like to avoid confusing people once I start distributing Haskell libraries.)

[1] http://hackage.haskell.org/packages/archive/Cabal/latest/doc/html/src/Distribution-License.html#License

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] ANN: Hemkay, the 100% Haskell MOD player
Patai Gergely wrote:

It is the strength of Haskell to separate everything into logical steps and let laziness do things simultaneously. Stream fusion can eliminate interim lists, and final conversion to a storable vector using http://hackage.haskell.org/package/storablevector-streamfusion/ can eliminate lists altogether. But in my understanding that elimination is only possible if lists are not used as persistent containers, only to mimic control structures.

Now I rely on samples being stored as lists, so I can represent looping samples with infinite lists and not worry about the wrap-around at all. So in order to have any chance for fusion I'd have to store samples as vectors and wrap them in some kind of unfold mechanism to turn them into lists that can be potentially fused away. In other words, besides a 'good consumer', I need a 'good producer' too.

Right. The conversion from storablevector to stream-fusion:Stream is such a good producer.

However, there seems to be a conflict between the nature of mixing and stream processing when it comes to efficiency. As it turns out, it's more efficient to process channels one by one within a chunk instead of producing samples one by one. It takes a lot less context switching to first generate the output of channel 1, then generate channel 2 (and simultaneously add it to the mix) and so on, than to mix sample 1 of all channels, then sample 2 etc., since we can write much tighter loops when we only deal with one channel at a time.

Yes, I would also do it this way. So in the end you will have some storablevectors as intermediate data structures.
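The channel-at-a-time mixing order described above can be sketched in a few lines. This is a toy illustration of the loop structure only (plain lists of Doubles, equal-length channels assumed), not Hemkay's actual mixer:

```haskell
import Data.List (foldl')

-- Mix a chunk by folding whole channels, one at a time, into an
-- accumulator, instead of touching every channel for each output
-- sample. Each addChannel pass is one tight loop over one channel.
mixChunk :: [[Double]] -> [Double]
mixChunk = foldl' addChannel (repeat 0)
  where
    addChannel acc ch = zipWith (+) acc ch
```

The per-sample alternative would transpose the channel list and sum each column, forcing the inner loop to hop between all channels on every output sample.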
Re: Re: [Haskell-cafe] Generating Haskell From a XSD
On Wed, Dec 23, 2009 at 1:27 PM, jonathangfisch...@gmail.com wrote: Cool, glad to hear there is interest. I'm talking to a few other people who are expressing interest too. I still don't have access to the haskell wiki, but putting a page up there seems like a start. Any other suggestions for wiki page sites?

I'm fond of Google Code. The vcs / wiki / bug-tracker integration is nice.

--Rogan

-Jonathan

On Dec 23, 2009 11:57am, Keith Sheppard keiths...@gmail.com wrote: Yes, I'm interested in helping too. It's hard for me to know how much time I'll have, but my other side project is starting to wind down now. Maybe a wiki planning page and a patch-tag (or any other repo site, really) workspace is a good starting point? Sent from my iPhone

On Dec 23, 2009, at 1:19 PM, Hector Guilarte hector...@gmail.com wrote: I'd like to help... I'm not an expert in Haskell, but I guess I could help somehow... Hector Guilarte

-----Original Message-----
From: jonathangfisch...@gmail.com
Date: Wed, 23 Dec 2009 18:11:21
To: Keith sheppardkeiths...@gmail.com; jonathangfisch...@gmail.com
Cc: haskell-cafe@haskell.org
Subject: Re: Re: [Haskell-cafe] Generating Haskell From a XSD
Re: Re: [Haskell-cafe] Generating Haskell From a XSD
Ok here we go http://code.google.com/p/xsdtohaskell/

On Wed, Dec 23, 2009 at 4:25 PM, Rogan Creswick cresw...@gmail.com wrote:
Re: [Haskell-cafe] Space Leak with semi-implicit parallelization and the nasty Garbage collector
On Thursday, 24 December 2009, 02:14:51, Michael Lesniak wrote:

Hello haskell-cafe (and merry christmas!), I have a strange problem with the garbage collector / memory which I'm unable to find a solution for. I think the source of my problems has to do with lazy evaluation, but currently I'm unable to find it. Using the attached program and threadscope I see that the GC is using a lot of time and the program comes (more or less) to a halt (see exa-1.png). When I increase the heap the program takes much longer and the GC is running more or less all the time (see exa-2.png). Some more detailed information:

* I can see the described behaviour under both GHC 6.10.4 and GHC 6.12.1
* Linux kernel 2.6.31-16 on a dual core
* Program compiled with ghc --make -O2 -threaded -eventlog Example.hs -o exa
* Started with exa +RTS -ls and one of { -N, -N1, -N2 }

Any help (pointing in the right direction, mentioning possibly helpful blog articles or papers, constructive criticism in general) is appreciated!

I can't reproduce that (ghc-6.12.1, Linux linux-mkk1 2.6.27.39-0.2-pae #1 SMP 2009-11-23 12:57:38 +0100 i686 i686 i386 GNU/Linux, dual core):

$ time ./exa +RTS -ls -N2 -sstderr
./exa +RTS -ls -N2 -sstderr
1 3 0 0 9 3 6 9 1 8 8 9 2 5 5 8 6 5 7 8 4

  72,499,126,908 bytes allocated in the heap
      45,280,708 bytes copied during GC
         177,504 bytes maximum residency (10 sample(s))
         183,844 bytes maximum slop
               4 MB total memory in use (1 MB lost due to fragmentation)

  Generation 0: 131527 collections, 131526 parallel,  7.18s,  3.88s elapsed
  Generation 1:     10 collections,     10 parallel,  0.00s,  0.00s elapsed

  Parallel GC work balance: 1.16 (10931266 / 9433437, ideal 2)

                       MUT time (elapsed)   GC time (elapsed)
  Task  0 (worker) :  115.10s (126.56s)     3.34s ( 1.84s)
  Task  1 (worker) :  124.21s (126.56s)     3.84s ( 2.04s)
  Task  2 (worker) :    0.09s (126.56s)     0.00s ( 0.00s)
  Task  3 (worker) :    0.00s (126.56s)     0.00s ( 0.00s)

  SPARKS: 21 (21 converted, 0 pruned)

  INIT  time    0.00s (  0.13s elapsed)
  MUT   time  238.05s (126.56s elapsed)
  GC    time    7.19s (  3.88s elapsed)
  EXIT  time    0.00s (  0.00s elapsed)
  Total time  244.46s (130.57s elapsed)

  %GC time       2.9%  (3.0% elapsed)

  Alloc rate    305,559,453 bytes per MUT second

  Productivity  97.1% of total user, 181.7% of total elapsed

  gc_alloc_block_sync: 151252
  whitehole_spin: 0
  gen[0].steps[0].sync_large_objects: 75620
  gen[0].steps[1].sync_large_objects: 9662
  gen[1].steps[0].sync_large_objects: 0

244.45user 2.06system 2:10.58elapsed 188%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+35736outputs (0major+2426minor)pagefaults 0swaps

Garbage collection isn't even visible in the threadscope profile. With -N1:

  71,999,280,108 bytes allocated in the heap
      20,729,380 bytes copied during GC
          92,492 bytes maximum residency (2 sample(s))
         190,872 bytes maximum slop
               3 MB total memory in use (0 MB lost due to fragmentation)

  Generation 0: 130901 collections, 0 parallel, 2.64s, 2.68s elapsed
  Generation 1:      2 collections, 0 parallel, 0.00s, 0.00s elapsed
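Space leaks of the kind described usually come from lazy accumulation rather than from parallelism itself. A minimal, generic illustration of the fix (this is not the poster's Example.hs, which is only attached to the original mail): force the accumulator at every step so thunks cannot pile up and the GC has nothing extra to walk.

```haskell
{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

-- A classic source of "GC runs all the time": a lazy accumulator builds
-- a long chain of thunks. foldl' only forces the pair to weak head
-- normal form; the bang patterns force both components at each step,
-- keeping residency flat.
mean :: [Double] -> Double
mean xs = s / fromIntegral n
  where
    (s, n) = foldl' step (0, 0 :: Int) xs
    step (!acc, !len) x = (acc + x, len + 1)
```

With plain `foldl` and lazy pairs, the same function over a long list would retain O(n) thunks until the final division demands them.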
Re: [Haskell-cafe] FGL/Haskell and Hierarchical Clustering/dendograms
Hi Nikolas,

Interesting problem. I'd do something like the following, where the initial spanning tree from your example (re-tree-ified) is:

{-
ghci> :t t
t :: Tree (Id, Cost)
ghci> ppT t
(4,0)
|
+- (3,1)
|  |
|  `- (1,1)
|
`- (2,3)
   |
   `- (5,12)
-}

and which results in the tree:

{-
ghci> let s = agglom fst snd t
ghci> :t s
s :: Tree (Cost, [Id])
ghci> ppT s
(0,[4])
|
+- (1,[3,1])
|
`- (3,[2])
   |
   `- (12,[5])
-}

which can then be flattened/etc as needed by further steps of the algo. The code for `agglom':

-

import Data.Tree
import Data.List

type Id   = Int
type Cost = Int

t :: Tree (Id,Cost)
t = Node (4,0) [Node (3,1) [Node (1,1) []]
               ,Node (2,3) [Node (5,12) []]]

ppT :: Show a => Tree a -> IO ()
ppT = putStrLn . drawTree . fmap show

-- | Compress the incoming @Tree a@ with @accu...@.
agglom :: Eq cost => (a -> b) -> (a -> cost) -> Tree a -> Tree (cost,[b])
agglom proj cost = go
  where accum = accumEq proj cost
        go (Node a []) = Node (cost a,[proj a]) []
        go (Node a ts) =
          let b = proj a
              c = cost a
              (bs,ss) = accum c ts
          in Node (c,b:bs) (fmap go ss)

-- | Repeatedly @splitEq@, and return a pair
-- whose /first/ element is a list of the projected
-- @b...@s from those root values along paths from
-- the roots of the trees in the incoming @[Tree a]@
-- which have @cost@ equal to the third function parameter,
-- and whose /second/ element is the (concatenation of the)
-- list(s) gotten from each of the @splitEq@ calls.
accumEq :: Eq cost => (a -> b) -> (a -> cost)
        -> cost -> [Tree a] -> ([b],[Tree a])
accumEq proj cost c = go [] []
  where split ts = splitEq proj cost c ts
        go xs ys [] = (xs,ys)
        go xs ys ts =
          let (eqs,neqs) = split ts
          in case eqs of
               [] -> ([],ts)
               _  -> let (bs,tss) = unzip eqs
                     in go (bs++xs) (neqs++ys) (concat tss)

-- | Split the incoming trees into
-- (1) a @[(b,Tree a)]@ of each @b@ is the
-- @p...@ected value from an @a@ where
-- the @cost@ of that @a@ is equal to
-- the third function parameter, and (2)
-- the members of the incoming @[Tree a]@
-- whose roots' costs are /not/ equal to
-- the third function parameter.
splitEq :: Eq cost => (a -> b) -> (a -> cost)
        -> cost -> [Tree a] -> ([(b,[Tree a])],[Tree a])
splitEq proj cost c = foldl' go ([],[])
  where go (!eqs,!neqs) t@(Node a ts)
          | c == cost a = ((proj a,ts):eqs,neqs)
          | otherwise   = (eqs,t:neqs)

-

Cheers, Matt

On 12/23/09, Nikolas Borrel-Jensen nikolasbor...@gmail.com wrote: Hi! I have some trouble implementing a single-linkage clustering algorithm by using a minimum spanning tree, so I would appreciate if some of you could give me some advice. I am implementing a single-linkage clustering algorithm, and my approach is to use minimum spanning trees for that task. I am using the library FGL ( http://web.engr.oregonstate.edu/~erwig/fgl/haskell/), and I have managed to compute a minimum spanning tree from an arbitrary fully connected graph with 5 nodes. I get [ [(4,0) ] , [ (3,1) , (4,0) ] , [ (1,1) , (3,1) , (4,0) ] , [ (2,3) , (4,0) ] , [ (5,12) , (2,3) , (4,0) ] ], which is the root path tree of the minimum spanning tree created by the function msTreeAt. From that I would create a dendrogram. [ (1,1) , (3,1) , (4,0) ] is telling that node 1, 3 and 4 have the same cost, namely cost 1. Therefore these are merged at level 1. At level 1 we now have 3 clusters: (1,3,4), 2 and 5. Now the second lowest should be merged, that is 2 and 4. BUT because 4 is already merged in the cluster (1,3,4), we should merge (1,3,4) and 2 at level 3 (because the cost is 3). Now at level 3 we have 2 clusters, (1,2,3,4) and 5. Now we merge the last one at level 12: (1,2,3,4,5), and we are finished. I find it very hard to see how this could be done efficiently without pointers (as in C). I have thought of just saving the nodes from the start of the root path and traversing it, but a lot of searching would have to be done all the time. Can you please give me some advice on that? Kind regards Nikolas Borrel-Jensen Computer Science University Of Copenhagen
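On the "without pointers" worry in the question above: the cluster-merging bookkeeping can also be done with a small persistent union-find over a Map, no mutation needed. This is a generic sketch under the assumption that the MST edges arrive already sorted by cost (the names `root`, `union`, `singleLink` are mine, not from FGL):

```haskell
import qualified Data.Map as M
import Data.List (foldl')

-- Parent links stored in a Map; a node absent from the Map is its own root.
type UF = M.Map Int Int

-- Follow parent links until a fixed point (no path compression; fine
-- for small graphs like the 5-node example).
root :: UF -> Int -> Int
root uf x = case M.lookup x uf of
  Just p | p /= x -> root uf p
  _               -> x

-- Point the root of a's cluster at the root of b's cluster.
union :: Int -> Int -> UF -> UF
union a b uf = M.insert (root uf a) (root uf b) uf

-- Single-linkage skeleton: given MST edges (a, b, cost) sorted by
-- increasing cost, merge the endpoint clusters edge by edge.  A real
-- dendrogram builder would also record the cost at each merge.
singleLink :: [(Int, Int, Int)] -> UF
singleLink = foldl' (\uf (a, b, _) -> union a b uf) M.empty
```

Running it on the example's edges, e.g. `singleLink [(1,3,1),(3,4,1),(2,4,3),(5,2,12)]`, ends with all five nodes sharing one root, matching the final (1,2,3,4,5) cluster.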
Re: [Haskell-cafe] FGL/Haskell and Hierarchical Clustering/dendograms
For completeness, you might then do the actual clustering something like:

import Data.Tree
import Data.List
import Data.Function

-- ... code from before ...

cluster :: Ord cost => (a -> b) -> (a -> cost) -> Tree a -> Cluster (cost,[b])
cluster proj cost t =
  -- List can't be empty since Tree can't.
  let o:os = sortBy (compare `on` fst) . flatten . agglom proj cost $ t
  in foldl' cons (One o) os

data Cluster a
  = One a
  | Many [Cluster a]
  deriving (Eq,Ord,Read,Show)

instance Functor Cluster where
  fmap f (One a)   = One (f a)
  fmap f (Many cs) = Many ((fmap . fmap) f cs)

cons :: Cluster a -> a -> Cluster a
cons c a = Many [c,One a]

{-
ghci> let c = cluster fst snd t
ghci> :t c
c :: Cluster (Cost, [Id])
ghci> c
Many [Many [Many [One (0,[4]),One (1,[3,1])],One (3,[2])],One (12,[5])]
ghci> :t fmap snd c
fmap snd c :: Cluster [Id]
ghci> fmap snd c
Many [Many [Many [One [4],One [3,1]],One [2]],One [5]]
ghci> :t fmap fst c
fmap fst c :: Cluster Cost
ghci> fmap fst c
Many [Many [Many [One 0,One 1],One 3],One 12]
-}

--- Matt