Is fixing GHC arrays a big research job or is it
something that someone can straightforwardly
handle if my site actually gets enough traffic to
warrant it?
-Alex-
On Thu, 26 Feb 2004, Simon Peyton-Jones wrote:
| But in managing this tradeoff, what is faster:
| * constructing/destructing e.g. 16 trees (for a 65000 item table)
| * 2 memcpy of 256 item arrays (perhaps after your primop?)
|
| If the latter is not dramatically slower, then I
| will bias towards more arrayness.
I doubt the latter is dramatically slower.
I know that array copy is much more expensive than
a single update. But when you say:
On Wed, 25 Feb 2004, Simon Peyton-Jones wrote:
> Haskell's current story is to use O(log n) structures such as trees --
Yes, I got that. The question is how I trade off
between reads and writes.
G'day all.
Quoting "S. Alexander Jacobson" <[EMAIL PROTECTED]>:
> Isn't the following more efficient than Data.FiniteMap?
[deletia]
TernaryTrie is basically this, minus the Radix type class, and using
balanced binary trees for each radix level instead of arrays.
Cheers,
Andrew Bromage
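For concreteness, here is a minimal sketch of the ternary-trie shape being described (names and representation are my own guess, not the actual TernaryTrie module): each node branches three ways on one key element, so the per-node 256-slot array is replaced by O(1)-space comparison nodes.

```haskell
-- Hypothetical ternary search trie sketch: split element, value
-- ending at this node, then (<), (=), (>) subtrees.
data TST k v = Empty
             | TNode k (Maybe v) (TST k v) (TST k v) (TST k v)

tstLookup :: Ord k => [k] -> TST k v -> Maybe v
tstLookup _  Empty = Nothing
tstLookup [] _     = Nothing                  -- only non-empty keys stored
tstLookup (k:ks) (TNode c mv lo eq hi)
  | k < c     = tstLookup (k:ks) lo
  | k > c     = tstLookup (k:ks) hi
  | null ks   = mv
  | otherwise = tstLookup ks eq

tstInsert :: Ord k => [k] -> v -> TST k v -> TST k v
tstInsert []     _ t     = t                  -- ignore empty keys
tstInsert (k:ks) v Empty =
  tstInsert (k:ks) v (TNode k Nothing Empty Empty Empty)
tstInsert (k:ks) v (TNode c mv lo eq hi)
  | k < c     = TNode c mv (tstInsert (k:ks) v lo) eq hi
  | k > c     = TNode c mv lo eq (tstInsert (k:ks) v hi)
  | null ks   = TNode c (Just v) lo eq hi
  | otherwise = TNode c mv lo (tstInsert ks v eq) hi
```

Inserting rebuilds only the nodes on one root-to-leaf path, which is why this avoids the array-copy cost discussed elsewhere in the thread.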
| > (//) is very slow
| Is that inherent in Haskell (or laziness) or is it
| just an artifact of the current GHC
| implementation? Would the problem be solved by
| making my arrays strict or by using Unboxed
| arrays? Is there a faster array implementation
| around?
It's slow because it costs O(n): the old array must remain valid
after the update, so (//) copies the entire array even for a
single changed element.
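A tiny demonstration of why (the example values are mine): `(//)` on an immutable `Data.Array` builds a whole new array, because the old one may still be in use.

```haskell
import Data.Array

-- (//) cannot overwrite in place: both versions stay usable,
-- so each update batch allocates and copies a fresh O(n) array.
old :: Array Int Int
old = listArray (0, 4) [10, 20, 30, 40, 50]

new :: Array Int Int
new = old // [(2, 99)]   -- full copy of all 5 slots, then one write
```

`old ! 2` is still 30 afterwards, which is exactly why the copy is unavoidable for a persistent structure.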
On Tue, 24 Feb 2004, Hal Daume III wrote:
> > It seems like updates could be very fast because I
> > assume // is implemented with a fast memcpy
> (//) is very slow
Is that inherent in Haskell (or laziness) or is it
just an artifact of the current GHC
implementation? Would the problem be solved by
making my arrays strict or by using Unboxed
arrays? Is there a faster array implementation
around?
[Rewrote prior code to be cleaner]
Isn't the following more efficient than Data.FiniteMap?
class Ix a => Radix a where maxRange :: (a, a)
class Radix a => HashKey b a where hashKey :: b -> [a]
instance Radix Char where maxRange = (chr 0, chr 255)
instance Radix a => HashKey [a] a where hashKey x = x
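Filling in the structure those classes seem to aim at (everything below is my reconstruction, not code from the original mail): each node holds an array spanning the radix range (`maxRange` above), a lookup does one array index per key element, and every insert copies one node-sized array per level via `(//)`.

```haskell
import Data.Array

-- One node per key position; the node's array spans the whole
-- radix range, so lookup is pure indexing with no comparisons.
data Trie a v = Tip | Node (Maybe v) (Array a (Trie a v))

blank :: Ix a => (a, a) -> Array a (Trie a v)
blank rng = listArray rng (repeat Tip)

lookupTrie :: Ix a => [a] -> Trie a v -> Maybe v
lookupTrie _      Tip          = Nothing
lookupTrie []     (Node mv _)  = mv
lookupTrie (k:ks) (Node _ arr) = lookupTrie ks (arr ! k)

insertTrie :: Ix a => (a, a) -> [a] -> v -> Trie a v -> Trie a v
insertTrie rng ks v Tip = insertTrie rng ks v (Node Nothing (blank rng))
insertTrie _   []     v (Node _ arr)  = Node (Just v) arr
insertTrie rng (k:ks) v (Node mv arr) =
  -- (//) rebuilds the whole node array: one O(rangeSize rng) copy
  -- per key element, which is the memcpy cost debated in this thread.
  Node mv (arr // [(k, insertTrie rng ks v (arr ! k))])
```

With `maxRange = (chr 0, chr 255)` every insert copies a 256-slot array per level, so reads get cheaper while writes pay for the copies.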
> It seems like updates could be very fast because I
> assume // is implemented with a fast memcpy
(//) is very slow
Ok. I just looked more carefully at FiniteMap and
the Data.HashTable documentation and coded what I
had incorrectly imagined would be there. Isn't
the following more efficient than FiniteMap
without requiring the IO Monad?
as a tangent, for absurdly fast and optimal (optimized to the cache
line) maps see
http://judy.sourceforge.net/
the algorithms behind them are very cool.
see http://judy.sourceforge.net/downloads/10minutes.htm
for a quick discussion of why they are fast (there is a
few-hundred-page book too)
heh.
> I believe FiniteMap works by representing the data
> in binary trees. It is therefore O(log2(n)) to
> read and update.
>
> However, if my application reads many more times
> than it writes, then perhaps I can get a
> substantial performance boost by increasing the
> branch factor on the tree.
"S. Alexander Jacobson" <[EMAIL PROTECTED]> writes:
> However, if my application reads many more times than it writes,
> then perhaps I can get a substantial performance boost by increasing
> the branch factor on the tree. For example, if each node was an
> array of 256 elements, reads would be O(log_256 n)
> rather than O(log_2 n).
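The arithmetic behind the branch-factor claim is easy to check: with branching factor b the depth is ceil(log_b n), so a ~65000-item table needs 2 levels at b = 256 versus 16 at b = 2. A throwaway calculation (names are mine, not from the thread):

```haskell
-- Smallest d with b^d >= n, i.e. ceiling (log_b n):
-- count the powers 1, b, b^2, ... that are still below n.
depth :: Integer -> Integer -> Int
depth b n = length (takeWhile (< n) (iterate (* b) 1))
```

So reads drop from 16 node visits to 2, but each write now copies a 256-slot array at every level it touches, which is the trade-off under discussion.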
S. Alexander Jacobson wrote:
Does FiniteMap use Arrays, Lists, or Algebraic Types?
when in doubt, RTFC:
http://cvs.haskell.org/cgi-bin/cvsweb.cgi/fptools/libraries/base/Data/FiniteMap.hs?rev=1.17
data FiniteMap key elt
  = EmptyFM
  | Branch key elt                -- Key and elt stored here
      IF_GHC(Int#,Int{-STRICT-})  -- Size >= 1
      (FiniteMap key elt)         -- Children
      (FiniteMap key elt)
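Stripped of the CPP size field, lookup over that shape is ordinary binary search, one comparison per level. A simplified sketch (my names, not the real source):

```haskell
-- Simplified FiniteMap shape: a size-balanced binary search tree,
-- with the IF_GHC size macro reduced to a plain Int.
data FM key elt = E | B key elt Int (FM key elt) (FM key elt)

lookupFM :: Ord key => FM key elt -> key -> Maybe elt
lookupFM E _ = Nothing
lookupFM (B k e _ l r) q
  | q < k     = lookupFM l q    -- all smaller keys live in the left subtree
  | q > k     = lookupFM r q    -- all larger keys live in the right subtree
  | otherwise = Just e          -- found after O(log2 n) comparisons
```

The stored size is what lets the real implementation rebalance on insert, keeping that O(log2 n) bound.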
I believe FiniteMap works by representing the data
in binary trees. It is therefore O(log2(n)) to
read and update.
However, if my application reads many more times
than it writes, then perhaps I can get a
substantial performance boost by increasing the
branch factor on the tree. For example, if each
node was an array of 256 elements, reads would be
O(log_256 n).