On Wednesday, 13 January 2010, 10:43:42, Heinrich Apfelmus wrote:
> I wonder whether it's really the liveness of pair in
>
> mergeSP (a,b) pair
> = let sm = spMerge b (fst pair)
> in (a ++ fst sm, merge (snd sm) (snd pair))
>
> that is responsible for the space leak, for chances are th
Daniel Fischer wrote:
> Heinrich Apfelmus wrote:
>>
>> It is exactly because of these troubles that I'm advocating the original
>> VIP data structure that buries the dorks (that name is awesome :D) deep
>> inside the structure. :)
In fact, your transformation that fixes the space leaks pretty much
em
On Tuesday, 12 January 2010, 11:30:07, Heinrich Apfelmus wrote:
> Tricky stuff. It is known that pairs/records are prone to unwanted
> retention, see for example the recent thread
>
> http://thread.gmane.org/gmane.comp.lang.haskell.cafe/66903/focus=67871
>
> or
>
> Jan Sparud. Fixing some space
Daniel Fischer wrote:
> Why has
>
>   mergeSP (a,b) ~(c,d)
>       = let (bc,b') = spMerge b c in (a ++ bc, merge b' d)
>
> a memory leak, but
>
>   mergeSP (a,b) ~(c,d)
>       = let (bc,m) = spMerge' b c d in (a ++ bc, m)
>
> not?
>
> Well, looking at the core for mergeSP, the fog clears somewhat. The
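The definitions of spMerge and spMerge' are not shown in this excerpt. A self-contained reconstruction of the two variants, under the assumption that each pair splits an ordered list into a finite known prefix and a (possibly infinite) rest; the leak itself only shows up in compiled code on large inputs:

```haskell
-- Reconstruction (assumption): spMerge merges b with the finite list c;
-- when c runs out, the unconsumed remainder of b is returned separately.
spMerge :: Ord a => [a] -> [a] -> ([a], [a])
spMerge u [] = ([], u)
spMerge [] w = ([], w)
spMerge u@(x:xt) w@(y:yt) = case compare x y of
  LT -> let (ps, r) = spMerge xt w  in (x:ps, r)
  EQ -> let (ps, r) = spMerge xt yt in (x:ps, r)
  GT -> let (ps, r) = spMerge u  yt in (y:ps, r)

-- spMerge' additionally takes d, so the trailing merge happens inside it
-- and the result no longer retains a reference to the whole (c,d) pair.
spMerge' :: Ord a => [a] -> [a] -> [a] -> ([a], [a])
spMerge' u [] d = ([], merge u d)
spMerge' [] w d = ([], merge w d)
spMerge' u@(x:xt) w@(y:yt) d = case compare x y of
  LT -> let (ps, r) = spMerge' xt w  d in (x:ps, r)
  EQ -> let (ps, r) = spMerge' xt yt d in (x:ps, r)
  GT -> let (ps, r) = spMerge' u  yt d in (y:ps, r)

-- plain ordered merge, discarding duplicates
merge :: Ord a => [a] -> [a] -> [a]
merge u@(x:xt) w@(y:yt) = case compare x y of
  LT -> x : merge xt w
  EQ -> x : merge xt yt
  GT -> y : merge u yt
merge u [] = u
merge [] w = w

-- the two variants under discussion
mergeSP1, mergeSP2 :: Ord a => ([a],[a]) -> ([a],[a]) -> ([a],[a])
mergeSP1 (a,b) ~(c,d) = let (bc,b') = spMerge b c in (a ++ bc, merge b' d)
mergeSP2 (a,b) ~(c,d) = let (bc,m) = spMerge' b c d in (a ++ bc, m)
```

Both compute the same result; the difference under discussion is only what each result pair keeps alive.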
On Saturday, 9 January 2010, 08:04:20, Will Ness wrote:
> Daniel Fischer writes:
> > On Friday, 8 January 2010, 19:45:47, Will Ness wrote:
> > > Daniel Fischer writes:
> >
> > It's not tail-recursive, the recursive call is inside a celebrate.
>
> It is (spMerge that is).
No.
"In co
Daniel Fischer writes:
>
>
> On Thursday, 7 January 2010, 11:43:44, Heinrich Apfelmus wrote:
> > Will Ness wrote:
> > > Heinrich Apfelmus writes:
>
> The below code is now a well-behaved memory citizen (3MB for the 100
> millionth prime, about the same as the PQ code). It still is cons
Heinrich Apfelmus writes:
>
> Will Ness wrote:
> > But I get the impression that GHC isn't working through equational
> > reasoning?..
> > I see all this talk about thunks etc.
>
> Sure it does. Concerning the thunks, they're part of the implementation
> of the reduction mode
Daniel Fischer writes:
>
> On Friday, 8 January 2010, 19:45:47, Will Ness wrote:
> > Daniel Fischer web.de> writes:
>
>
> It's not tail-recursive, the recursive call is inside a celebrate.
It is (spMerge that is). It calls tail-recursive celebrate in a tail position.
What you've don
On Friday, 8 January 2010, 19:45:47, Will Ness wrote:
> Daniel Fischer writes:
> >
> >   mergeSP :: Integral a => People a -> People a -> People a
> >   mergeSP p1@(P a _) p2 = P (a ++ vips mrgd) (dorks mrgd)
> >     where
> >       mrgd = spMerge (dorks p1) (vips p2) (dorks p2)
> >
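The People type and its helpers don't appear in this excerpt. A minimal reconstruction consistent with the quoted accessors and with the celebrate function discussed elsewhere in the thread (the field semantics are my assumption: vips is the already-known prefix, dorks the ordered rest):

```haskell
-- Reconstruction (assumption): "VIPs" are the already-produced prefix,
-- "dorks" the not-yet-inspected remainder of an ordered list.
data People a = P { vips :: [a], dorks :: [a] }

-- prepend a known element to the VIP section
celebrate :: a -> People a -> People a
celebrate x p = P (x : vips p) (dorks p)

-- merge p1's dorks with p2, splitting the result where p2's VIP
-- section runs out (following the quoted mergeSP shape)
mergeSP :: Integral a => People a -> People a -> People a
mergeSP p1@(P a _) p2 = P (a ++ vips mrgd) (dorks mrgd)
  where
    mrgd = spMerge (dorks p1) (vips p2) (dorks p2)
    spMerge u [] d = P [] (merge u d)
    spMerge [] w d = P [] (merge w d)
    spMerge u@(x:xt) w@(y:yt) d = case compare x y of
      LT -> celebrate x (spMerge xt w  d)   -- the call "inside a celebrate"
      EQ -> celebrate x (spMerge xt yt d)
      GT -> celebrate y (spMerge u  yt d)

-- ordered merge, discarding duplicates
merge :: Ord a => [a] -> [a] -> [a]
merge u@(x:xt) w@(y:yt) = case compare x y of
  LT -> x : merge xt w
  EQ -> x : merge xt yt
  GT -> y : merge u yt
merge u [] = u
merge [] w = w
```

This also makes the later "the recursive call is inside a celebrate" exchange concrete: spMerge is not tail-recursive, but celebrate only conses onto the VIP list.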
Will Ness writes:
>
>
> That might be why Daniel's structure is better: it plunges down faster than
> mine.
>
> "treefold" structure was:
>   (2+4) + ( (4+8) + ( (8+16) + ( (16+32) + ( (32+64) + ...
> depths: 3 4 4 5 5 6 6 7 7 8
this sho
Daniel Fischer writes:
> roll = scanl (+)
> wheel = 2:4:2:4:6:2:6:4:2:4:6:6:2:6:4:2:6:4:6:8:4:2:4:2:
>         4:8:6:4:6:2:4:6:2:6:6:4:2:4:6:2:6:4:2:4:2:10:2:10:wheel
> wheel11 = res
> where
> snms = scanl (+) 11 (take 47 wheel)
> nums = tail $ scanl (
Daniel Fischer writes:
>
>
> The below code is now a well-behaved memory citizen (3MB for the 100
> millionth prime, about the same as the PQ code). It still is considerably
> slower than the PQ code.
> In terms of MUT times as reported by +RTS -sstderr - as well as (MUT+GC)
> times - (mea
Daniel Fischer writes:
>
>
> On Thursday, 7 January 2010, 11:43:44, Heinrich Apfelmus wrote:
> > Will Ness wrote:
> >
> > Hm? In my world view, there is only reduction to normal form and I don't
> > see how "allocate its own storage" fits in there. Okasaki having shown
> > something to
On Thursday, 7 January 2010, 17:13:38, Daniel Fischer wrote:
> compos :: [a] -> [a]
> compos = vips . itfold mergeSP . multip
Sigh! That's what I get for editing the code in the mail editor. I decided
to change the really stupid 'itfold' to 'smartfold' and forgot this
occurrence.
Will Ness wrote:
> Heinrich Apfelmus writes:
>
>> Concerning lists as producer/consumer, I think that's exactly what lazy
>> evaluation is doing. Neither filter, map nor span evaluates and
>> stores more list elements than strictly necessary.
>
> I always suspected as much, but was once told th
Daniel Fischer writes:
>
>
> On Wednesday, 6 January 2010, 00:09:07, Will Ness wrote:
> > Daniel Fischer writes:
> > > On Monday, 4 January 2010, 22:25:28, Daniel Fischer wrote:
> > > Fix rfold:
> > >
> > > rfold f [x] = x
> > > rfold f xs = rfold f (pairwise f xs)
> > >
> > > and
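The companion pairwise is cut off above; the quoted rfold runs with this definition (the clause for a leftover odd element is my guess at the obvious completion):

```haskell
-- Balanced reduction: combine neighbours pairwise, then recurse,
-- giving a combining tree of depth O(log n) instead of a linear chain.
rfold :: (a -> a -> a) -> [a] -> a
rfold _ [x] = x
rfold f xs  = rfold f (pairwise f xs)

-- combine adjacent elements; an odd trailing element passes through
pairwise :: (a -> a -> a) -> [a] -> [a]
pairwise f (x:y:t) = f x y : pairwise f t
pairwise _ t       = t
```

For an associative f this computes the same value as foldr1 f, only with a balanced shape.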
Will Ness writes:
>
> Daniel Fischer writes:
>
> > On Tuesday, 5 January 2010, 14:49:58, Will Ness wrote:
> > >
> > > euler ks@(p:rs) = p : euler (rs `minus` map (*p) ks)
> > > primes = 2:euler [3,5..]
> > >
> > >
>
> Re-write:
>
> primes = euler $ rollFrom [2] 1
>
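The quoted euler two-liner runs as-is once minus (ordered set difference) is supplied; a minimal self-contained sketch, correct but quite slow for large n:

```haskell
-- ordered set difference of two sorted lists
minus :: Ord a => [a] -> [a] -> [a]
minus u@(x:xt) w@(y:yt) = case compare x y of
  LT -> x : minus xt w
  EQ ->     minus xt yt
  GT ->     minus u  yt
minus u _ = u

-- Euler's sieve as quoted: remove p's multiples of the *remaining*
-- candidates ks, so each composite is removed exactly once
euler :: Integral a => [a] -> [a]
euler ks@(p:rs) = p : euler (rs `minus` map (*p) ks)

primes :: [Integer]
primes = 2 : euler [3,5..]
```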
Daniel Fischer writes:
>
>
> On Monday, 4 January 2010, 22:25:28, Daniel Fischer wrote:
> > memory still grows, but much slower, in my tests, due to the much smaller
> > GC time, it's a bit faster than the version with the original tfold.
>
> Not for larger inputs (but not so large tha
Daniel Fischer writes:
> So we must make sure that the list of composites that primes' consumes is
> not the same as that which primes'' consumes.
Yes, that is what I had done too. Duplicated everything. Turns out, it works
exactly as you said it would when using the compiler switch, -f
Daniel Fischer writes:
>
>
> On Tuesday, 5 January 2010, 14:49:58, Will Ness wrote:
> > > ... There are two attempts to eliminate 45.
> >
> > I would say there are two requests to not have 45 in the output.
> >
> There are many possible ways to phrase it.
>
> >
> > Your solution is inde
On Tuesday, 5 January 2010, 14:49:58, Will Ness wrote:
> > ... There are two attempts to eliminate 45.
>
> I would say there are two requests to not have 45 in the output.
>
There are many possible ways to phrase it.
> > > I don't see any problem here. As Melissa (and yourself, I think) have
> > >
Daniel Fischer writes:
>
>
> On Monday, 4 January 2010, 19:16:32, Will Ness wrote:
> > Daniel Fischer writes:
> > > On Monday, 4 January 2010, 13:25:47, Will Ness wrote:
> > > > Euler's sieve is
> > > >
> > > > sieve (p:ps) xs = h ++ sieve ps (t `minus` map (p*) [p,p+2..])
> > >
On Tuesday, 5 January 2010, 10:33:19, Will Ness wrote:
> Such a
> system would probably have to distinguish, at the type level, between
> [1..m] ; cycle [1..m] ; take n [1..m] ; etc. These would all be not just
> functions, but parts of a type's (here list) behaviour with automatically
> deduced sem
Daniel Fischer writes:
>
>
> On Monday, 4 January 2010, 16:30:18, Will Ness wrote:
> >
> > For me, a real smart compiler is one that would take in e.g. (sum $ take n
> > $ cycle $ [1..m]) and spill out a straight up math formula, inside a few
> > ifs maybe (just an aside).
>
> Such thin
On Monday, 4 January 2010, 22:25:28, Daniel Fischer wrote:
> compos ps = fst (tfold mergeSP $ nwise 1 mergeSP (pairwise mergeSP (multip ps)))
>
> tfold f (a: ~(b: ~(c:xs)))
>     = (a `f` (b `f` c)) `f` tfold f xs
>
> nwise k f xs = let (ys,zs) = splitAt k xs in rfold f ys : nwise
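Daniel's multip and the rest of nwise are cut off here. The same tree-shaped merging idea can be sketched with the standard productive formulation (joinT, following the haskellwiki naming, is not Daniel's exact code): peeling the head of the first list before recursing plays the role the lazy patterns play in tfold above.

```haskell
-- ordered union, discarding duplicates
union :: Ord a => [a] -> [a] -> [a]
union u@(x:xt) w@(y:yt) = case compare x y of
  LT -> x : union xt w
  EQ -> x : union xt yt
  GT -> y : union u  yt
union u [] = u
union [] w = w

-- ordered set difference
minus :: Ord a => [a] -> [a] -> [a]
minus u@(x:xt) w@(y:yt) = case compare x y of
  LT -> x : minus xt w
  EQ ->     minus xt yt
  GT ->     minus u  yt
minus u _ = u

-- tree-shaped union of infinitely many sorted lists whose heads are
-- strictly increasing; emitting x before recursing keeps it productive
joinT :: Ord a => [[a]] -> [a]
joinT ((x:xs):t) = x : union xs (joinT (pairs t))
  where pairs ((y:ys):zs:r) = (y : union ys zs) : pairs r

-- self-feeding tree-merging sieve over the odd primes' multiples
primes :: [Integer]
primes = 2 : 3 : minus [5,7..] composites
  where composites = joinT [[p*p, p*p+2*p ..] | p <- tail primes]
```

This plain self-feeding version is correct but can retain memory; the thread's separate-feeder variants exist precisely to avoid that.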
On Monday, 4 January 2010, 16:30:18, Will Ness wrote:
> Heinrich Apfelmus writes:
> >
> > (I haven't followed the whole thread, but hopefully I have enough grasp
> > of it to make a useful remark. :))
> >
> > Concerning lists as producer/consumer, I think that's exactly what lazy
On Monday, 4 January 2010, 19:16:32, Will Ness wrote:
> Daniel Fischer writes:
> > On Monday, 4 January 2010, 13:25:47, Will Ness wrote:
> > > Euler's sieve is
> > >
> > > sieve (p:ps) xs = h ++ sieve ps (t `minus` map (p*) [p,p+2..])
> > > where (h,t) = span (< p*p) xs
On Monday, 4 January 2010, 18:08:42, Will Ness wrote:
> Daniel Fischer writes:
> > On Sunday, 3 January 2010, 09:54:37, Will Ness wrote:
> > > The question of a memory blowup with the treefolding merge still
> > > remains. For some reason using its second copy for a feeder doesn't
> > > redu
Emil Axelsson writes:
>
> > For me, a real smart compiler is one that would take in e.g. (sum $
> > take n $
> > cycle $ [1..m]) and spill out a straight up math formula, inside a few ifs
> > maybe (just an aside).
>
> (Also an aside, I couldn't resist...)
>
> Then I'm sure yo
Daniel Fischer writes:
>
>
> On Monday, 4 January 2010, 13:25:47, Will Ness wrote:
>
> > Euler's sieve is
> >
> > sieve (p:ps) xs = h ++ sieve ps (t `minus` map (p*) [p,p+2..])
> > where (h,t) = span (< p*p) xs
>
> Not quite. That's a variant of the postponed fil
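For reference, Will's quoted one-liner runs as-is once minus is supplied; whatever one calls it, each prime's removal work is postponed until the candidates reach p*p:

```haskell
-- ordered set difference of two sorted lists
minus :: Ord a => [a] -> [a] -> [a]
minus u@(x:xt) w@(y:yt) = case compare x y of
  LT -> x : minus xt w
  EQ ->     minus xt yt
  GT ->     minus u  yt
minus u _ = u

-- emit everything below p*p untouched, then start removing p's
-- odd multiples from p*p onward
sieve :: Integral a => [a] -> [a] -> [a]
sieve (p:ps) xs = h ++ sieve ps (t `minus` map (p*) [p, p+2 ..])
  where (h, t) = span (< p*p) xs

primes :: [Integer]
primes = 2 : sieve primes [3,5..]
```

Note that, unlike a true Euler sieve, the multiples here are taken from the full arithmetic progression rather than from the remaining candidates, which is the distinction being drawn above.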
Daniel Fischer writes:
>
> On Sunday, 3 January 2010, 09:54:37, Will Ness wrote:
> >
> > The question of a memory blowup with the treefolding merge still remains.
> > For some reason using its second copy for a feeder doesn't reduce the
> > memory (as reported by standalone compiled progr
On Monday, 4 January 2010, 13:25:47, Will Ness wrote:
> Daniel Fischer writes:
> > On Sunday, 3 January 2010, 09:54:37, Will Ness wrote:
> > > Daniel Fischer writes:
> > > > But there's a lot of list construction and deconstruction necessary
> > > > for the Euler sieve.
> > >
> >
> For me, a real smart compiler is one that would take in e.g. (sum $ take n $
> cycle $ [1..m]) and spill out a straight up math formula, inside a few ifs
> maybe (just an aside).

(Also an aside, I couldn't resist...)

Then I'm sure you'd say that Feldspar [1] has a smart compiler :)
The above exp
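As an aside on the aside: this particular example does reduce to a one-line formula, roughly what such a compiler would have to emit (sumTakeCycle is my naming; m is assumed positive):

```haskell
-- closed form for  sum (take n (cycle [1..m])):
-- q full cycles, each summing to m(m+1)/2, plus a partial run 1..r
sumTakeCycle :: Integer -> Integer -> Integer
sumTakeCycle n m = q * tri m + tri r
  where
    (q, r) = n `divMod` m
    tri k  = k * (k + 1) `div` 2
```

It agrees with the direct computation, e.g. sumTakeCycle 10 3 and sum (take 10 (cycle [1..3])) are both 19.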
Heinrich Apfelmus writes:
>
> Will Ness wrote:
> >
> > I keep thinking that storage duplication with span, filter etc. is not
> > really
> > necessary, and can be replaced with some pointer chasing - especially when
> > there's only one consumer present for the generated valu
Will Ness wrote:
>
> I keep thinking that storage duplication with span, filter etc. is not really
> necessary, and can be replaced with some pointer chasing - especially when
> there's only one consumer present for the generated values.
>
> What I mean is thinking of lists in terms of produce/
Daniel Fischer writes:
>
>
> On Sunday, 3 January 2010, 09:54:37, Will Ness wrote:
>
> > Daniel Fischer web.de> writes:
> >
> > > But there's a lot of list constructuion and deconstruction necessary for
> > > the Euler sieve.
> >
> > yes. Unless, of course, a smart compiler recognize
On Sunday, 3 January 2010, 09:54:37, Will Ness wrote:
>
> Exactly the point I tried to make. :)
>
>
> again, yes. :)
>
>
> yes.
>
>
> yes, that's what I meant - the cost of calling all the functions that - we
> know in advance will - have nothing to do eventually.
>
8-)
> >
> > But there's a lo
Will Ness writes:
>
> ... It was a big STOP sign on the way to
> Postponed Filters - Euler's - Bird's merged multiples - tree-merging (with
> wheel) road of little steps, and used as a justification for her to make a
> big leap across the chasm towards the PQ code.
correction: "ac
On Wed, 2009-12-30 at 11:09 -0500, haskell-cafe-requ...@haskell.org
wrote:
> On Wednesday, 30 December 2009, 01:23:32, Will Ness wrote:
> > Daniel Fischer writes:
> No, it's my own code. Nothing elaborate, just sieving numbers 6k±1,
> twice as fast as the
> haskellwiki code (here) and
On Tuesday, 29 December 2009, 20:16:59, Daniel Fischer wrote:
> > especially the claim that going by primes squares
> > is "a pleasing but minor optimization",
>
> Which it is not. It is a major optimisation. It reduces the algorithmic
> complexity *and* reduces the constant factors significantly.
Daniel Fischer writes:
>
No, it's my own code. Nothing elaborate, just sieving numbers 6k±1, twice as
fast as the haskellwiki code (here) and uses only 1/3 the memory. For the
record:
thanks! will need to sift through it thoroughly... :) :)
> >
> > BTW I think a really sma
Daniel Fischer writes:
>
>
> On Wednesday, 30 December 2009, 01:04:34, Will Ness wrote:
> >
> > > While I haven't detected that with the primes code, I find that in my
> > > ghci your code is approximately 2.5 times faster than ONeill or Bayer
> > > when interpreted (no difference in scal
On Wednesday, 30 December 2009, 01:23:32, Will Ness wrote:
> Daniel Fischer writes:
> > Gee, seems my mail server reads your posts very thoroughly today :)
>
> I hope it's not a bad thing. :)
>
It means, twenty minutes after I replied to the previous one, I got your
hours-old post which menti
On Wednesday, 30 December 2009, 01:04:34, Will Ness wrote:
>
> > While I haven't detected that with the primes code, I find that in my
> > ghci your code is approximately 2.5 times faster than ONeill or Bayer
> > when interpreted (no difference in scaling observed), while when compiled
> > with -O2,
Daniel Fischer writes:
>
>
> Gee, seems my mail server reads your posts very thoroughly today :)
I hope it's not a bad thing. :)
>
> On Tuesday, 29 December 2009, 14:58:10, Will Ness wrote:
> >
> > If I realistically needed primes generated in a real life setting, I'd
> > probably ha
Daniel Fischer writes:
>
>
> On Tuesday, 29 December 2009, 14:34:03, Will Ness wrote:
> > Daniel Fischer writes:
> > > On Tuesday, 29 December 2009, 04:38:21, Will Ness wrote:
> > > > Now _this_, when tested as interpreted code in GHCi, runs about 2.5x
> > > > times faster than
Gee, seems my mail server reads your posts very thoroughly today :)
On Tuesday, 29 December 2009, 14:58:10, Will Ness wrote:
> Eugene Kirpichov gmail.com> writes:
> > 2009/12/29 Will Ness yahoo.com>:
> > > Daniel Fischer web.de> writes:
> > >> Am Dienstag 29 Dezember 2009 04:38:21 schrieb Will
Daniel Fischer writes:
>
>
> On Tuesday, 29 December 2009, 04:38:21, Will Ness wrote:
> > Now _this_, when tested as interpreted code in GHCi, runs about 2.5
> > times faster than Priority Queue based code from Melissa O'Neill's ZIP
> > package mentioned at the haskellwiki/Prime_Number
Will Ness writes:
>
>
> wheelSums = roll 0 wdiffs
> roll = scanl (+)
> wheel = wdiffs ++ wheel
> wdiffs = 2:4:2:4:6:2:6:4:2:4:6:6:2:6:4:2:6:4:6:8:4:2:4:2:
>          4:8:6:4:6:2:4:6:2:6:6:4:2:4:6:2:6:4:2:4:2:10:2:10:wdiffs
>
Apparently that works too,
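Rolling those differences from 11 gives the 2·3·5·7 wheel's candidate stream; a runnable check that the 48 gaps do what they claim (candidates is my naming; cycle replaces the self-referential wdiffs):

```haskell
-- the 48 gaps between consecutive numbers coprime to 2, 3, 5 and 7,
-- repeating with period 2*3*5*7 = 210
wdiffs :: [Integer]
wdiffs = cycle
  [ 2,4,2,4,6,2,6,4,2,4,6,6,2,6,4,2,6,4,6,8,4,2,4,2
  , 4,8,6,4,6,2,4,6,2,6,6,4,2,4,6,2,6,4,2,4,2,10,2,10 ]

roll :: Integer -> [Integer] -> [Integer]
roll = scanl (+)

-- all numbers > 7 not divisible by 2, 3, 5 or 7
candidates :: [Integer]
candidates = roll 11 wdiffs
```

One full turn of the wheel advances by 210, so candidates !! 48 is 11 + 210 = 221.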