Re: [Haskell-cafe] upgrading regex in GHC 6.8.2

2007-12-20 Thread Alex Jacobson

Searchpath already does recursive module chasing across the internet.
If your module is available at a URL in an unpacked module hierarchy or 
in a tgz file, or if it is exposed in a darcs/svn/cvs etc. repo, 
searchpath can retrieve it and put it on your local import path.


The main limitations on using searchpath are that the packages you need 
may not yet have been added to the searchpath directory, and that it does 
not currently run cabal, so if the modules you import need an interesting 
build process you will have to handle that manually.


The directory issue could be solved if someone were to write a small 
patch to hackage so that it exposes the database in the correct format. 
The cabal issue, I think, requires only a small modification to the 
searchpath code, but I don't know cabal well enough to do it.


-Alex-

Duncan Coutts wrote:

On Fri, 2007-12-21 at 13:58 +1030, Michael Mounteney wrote:
Hello, I have an application that uses/used Text.Regex and have just updated 
GHC from 6.6.1 to 6.8.2 and it seems that Text.Regex is gone, so I'm trying 
to install the replacement from Hackage.


First of all, the procedure is quite tedious as one has to install the 
hierarchy of dependencies manually but apparently there are moves to automate 
this process.


Yes. You can try cabal-install now if you like:
http://haskell.org/cabal/code.html

though be prepared to report bugs and limitations:
http://hackage.haskell.org/trac/hackage

That said, I use it all the time now. It's much quicker than manually
downloading and configuring everything.


The procedure stalled on regex-base-0.92.


None of the 0.9x versions have been updated for the base-3 library that
comes with ghc-6.8 now. Instead try using:

regex-base-0.72.0.1
regex-posix-0.72.0.2
regex-compat-0.71.0.1

These versions work with ghc-6.8 and earlier.

These would be the "latest" versions if it were not for the 0.9x series.
We need some way to tell hackage or cabal-install that the latest
version is not necessarily the best or recommended version.
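
For anyone following along, a minimal sketch (not from the original thread) 
of the kind of Text.Regex code this affects; the module left GHC's boot 
libraries with 6.8 and now comes from regex-compat, but the calls 
themselves are unchanged:

  import Text.Regex (mkRegex, matchRegex)

  main :: IO ()
  main = print (matchRegex (mkRegex "([0-9]+)\\.([0-9]+)") "ghc-6.8.2")
  -- prints Just ["6","8"], the parenthesised groups of the first match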

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: SearchPath v0.9

2007-12-19 Thread Alex Jacobson
SearchPath v0.9 does recursive module chasing across the internet using 
a combination of map files you provide and the default map file, caching 
the downloaded modules in a local directory.  SearchPath can handle 
modules in module hierarchies based at a URL, in tgz archives accessible 
via URL, and in accessible darcs/svn/cvs etc. repos at particular tags.

New in v0.9

* handling tagged darcs/svn repos
* handling .tgz archives
* massive code cleanup
* substantially faster
* easier/better command line options
* better usage documentation
* handling non-local haskell files

Check it out at http://searchpath.org.

-Alex-
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] IO is a bad example for Monads

2007-12-11 Thread Alex Jacobson
It might help to point out that it's easy to end up with memory/space 
leaks in Java/Python/Ruby/Perl too, and that stack overflows are easy to 
hit as well.  You can also get into really deep badness if you do 
anything interesting with concurrency, because of the global interpreter 
lock etc.


As far as pickling goes, HAppS-Data makes it trivial to pickle almost 
anything into XML or name/value pairs, so that is no longer a valid 
complaint.


-Alex-


Dan Weston wrote:

Hans van Thiel wrote:

On Tue, 2007-12-11 at 16:56 +0100, Wolfgang Jeltsch wrote:
Maybe there are also patient people in the outside world so that we 
can still expose Haskell to the outside world while not trying to 
attract quick-and-dirty hackers. ;-) 

But who are those people? And what harm can they possibly do, assuming
they fit the derogatory description?


I fear those people can do vast amounts of damage. :(

When inept programming yields the wrong result, it is clear (even to the 
inept) that the program is bad.


When the result is correct but there are egregious time or space leaks, 
it is "clear" to everyone but the Haskell guru that it "must" be the 
programming language that is deficient, and will be duly flamed far and 
wide. This perception will be impossible to reverse when it gains 
traction (and nothing ever goes away on the Internet).


Seeming "deus ex machina" code changes (perhaps helpfully offered on 
haskell-cafe) to minimize or correct the undesirable runtime behavior 
appear even to many Haskellites to be black magic, accompanied by the 
runes of profile dumps (like knowing what generation 0 and generation 1 
garbage collection is).


Haskell is not a quick-and-dirty language but quite the opposite.  
Haskell’s unique selling propositions are features like type classes, 
higher order functions and lazy evaluation which make life easier in 
the long term.  The downside of these features is that they might 
make life harder in the short term.

I don't know. In a sense Haskell is easier than, for example, C, because
the concept of a function definition is more natural than that of
assignments and loops. The idea that x = 5; x = x + 7 makes sense
requires a complete new way of thinking. OK, once you've been doing it
for a few years switching back to x = 5 + 7 is hard.


I would limit that to say that *denotational* semantic intuition is easy 
to wield in Haskell. Operational semantic intuition in Haskell is very 
non-obvious to the imperative (and many functional) programmers.


Making matters worse, the first is an advantage well-hyped by 
functionistas, the second hurdle is rarely admitted to.


That said, I definitely think that we should make learning the 
language as easy as possible.  But our ultimate goal should be to 
primarily show newcomers the Haskell way of problem solving, not how 
to emulate Python or Java programming in Haskell.

Again, is there a danger of that happening?


Yes. Those absent the necessary humility to approach haskell-cafe with 
open mind and flame-retardant dialog will fall back on what they know: 
transliterated Java/Python with a morass of do blocks and IO monads, 
then (rightly) bash how "ugly" Haskell syntax is when used in this way.


This type of programmer looking to use Haskell casually should sign a 
"benefit of the doubt" contract whereby they assume that any runtime 
suboptimalities derive from their own coding and not from Haskell's 
defects. This is the innate assumption of the curious, the 
self-motivated, the clever. This is not typically the starting 
assumption of the "I'm an expert at Joe-imperative language" hacker who 
took 10 years to perfect his Java skills and expects thereby to jump to 
at least year 5 of Haskell without effort.


I do strongly believe in stimulating the curiosity of all comers, just 
not in giving the false impression that a quick read-through of a few 
tutorials will let you write lightning-fast code, or know when to 
abandon [Char] for something more clever, or where to insert those bangs 
and fold left instead of right, and how ad hoc and parametric 
polymorphism differ, and what Rank-n and existential means (and why you 
can just pickle any object in Python but need to know a half dozen 
abstract things including who Peano was to do the same in Haskell), and 
what the heck an infinite type is, and on and on.


Haskell has definitely been teaching me some serious humility! Possibly 
it is best that those not ready for that lesson might better stick with 
Python.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: atom 2007.12

2007-12-03 Thread Alex Jacobson
This sounds like a really interesting piece of software.  That said, 
another significant use of the name Atom these days is as the identifier 
of a feed format.


  http://en.wikipedia.org/wiki/Atom_(standard)

You may find it easier to advertise and promote this project with a more 
distinctive name.


-Alex-

Tom Hawkins wrote:

Hello,

Atom is a language embedded in Haskell for describing reactive
software, primarily for realtime control applications.  Based on
conditional term rewriting, an atom
description is composed of a set of state transition rules.  The name
"atom" comes from the atomic behavior of rules: if a rule is selected
to fire, all its transitions occur or none at all.  A hallmark of the
language, rule atomicity greatly simplifies design reasoning.

This release of atom is a major redirection.  Atom is no longer a
hardware description language (I changed jobs.  I'm now in software.).
 Much of the frontend language and backend generators have changed,
though rule scheduling remains nearly the same.  On the frontend,
atom's Signal datatypes have been replaced with Terms and Vars, which
leverage Haskell's GADTs.  The 4 supported Term and Var types include
Bool, Int, Float, and Double.  At the backend, atom generates C and
Simulink models.  The Verilog and VHDL generators have been dropped,
but they may reappear in the future.
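
As a rough illustration of what a GADT-indexed Term type restricted to 
those four element types can look like (a made-up sketch, not the actual 
atom API):

  {-# LANGUAGE GADTs #-}

  -- illustrative only: a Term indexed by its result type
  data Term a where
    TBool   :: Bool   -> Term Bool
    TInt    :: Int    -> Term Int
    TFloat  :: Float  -> Term Float
    TDouble :: Double -> Term Double
    Add     :: Num a => Term a -> Term a -> Term a
    Not     :: Term Bool -> Term Bool

  -- evaluation is total and type-safe thanks to the GADT index
  eval :: Term a -> a
  eval (TBool b)   = b
  eval (TInt i)    = i
  eval (TFloat f)  = f
  eval (TDouble d) = d
  eval (Add x y)   = eval x + eval y
  eval (Not b)     = not (eval b)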

Enjoy!

http://funhdl.org/
darcs get http://funhdl.org/darcs/atom

-Tom
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] is there a more concise way to generate helper functions for a datatype built on records?

2007-11-24 Thread Alex Jacobson
There is a simplified version of HList style functionality inside 
HAppS-Data because I found Oleg's repo too hard to understand.


-Alex-



Stuart Cook wrote:

On 11/25/07, Thomas Hartman <[EMAIL PROTECTED]> wrote:

I think I'm running into more or less the same issue discussed at

http://bloggablea.wordpress.com/2007/04/24/haskell-records-considered-grungy/

Just wondering if I missed anything, or if any of the ideas
considering better records setter/getters have been implemented in the
meantime.


Have a look at [http://code.haskell.org/category], and the associated blog posts

http://twan.home.fmf.nl/blog/haskell/overloading-functional-references.details
(http://tinyurl.com/2ustba)

and

http://twan.home.fmf.nl/blog/haskell/References-Arrows-and-Categories.details
(http://tinyurl.com/2v8het)

which discuss "functional references" (similar to Luke's), and include
Template Haskell code for deriving more flexible accessors from a
record declaration.
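
The core idea of those "functional references" can be sketched in a few 
lines (my own minimal version, not the code from the posts): a getter 
paired with a setter, which composes in a way plain record selectors 
cannot.

  data FRef s a = FRef { get :: s -> a, set :: a -> s -> s }

  -- update a field through a reference
  update :: FRef s a -> (a -> a) -> s -> s
  update ref f s = set ref (f (get ref s)) s

  -- references compose, so nested fields are reachable with one value
  compose :: FRef b c -> FRef a b -> FRef a c
  compose inner outer = FRef
    { get = get inner . get outer
    , set = \c a -> update outer (set inner c) a
    }

  -- example record and a hand-written reference for one of its fields
  data Point = Point { px :: Int, py :: Int } deriving Show

  pxRef :: FRef Point Int
  pxRef = FRef px (\x p -> p { px = x })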


Also check out HList [http://homepages.cwi.nl/~ralf/HList/], which can
do some interesting things, provided you're willing to abandon the
built-in record system.


Stuart
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: does the order of splice definitions matter in template haskell, or is this a bug?

2007-10-31 Thread Alex Jacobson
Order matters: GHC type-checks the declarations before a top-level splice 
as a separate group, so code above the splice cannot see names the splice 
generates.  But I hope people are transitioning to using mkCommand 
instead of expose, as it provides more functionality.


-Alex-

Thomas Hartman wrote:


I have a situation where

... stuff...

$(expose ['setState, 'getState])
f = SetState

compiles but

f = SetState
$(expose ['setState, 'getState])

doesn't compile, with error: Not in scope: data constructor 'SetState.

Is this a bug?

expose is defined in HAppS.State.EventTH

t,.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] is there a way to run tests at compile time using TH

2007-08-26 Thread Alex Jacobson

I'd like to have code not compile if it doesn't pass the tests.

Is there a way to use TH to generate compiler errors if the tests don't 
pass?
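
One minimal sketch of how this can work, assuming a failing top-level 
splice is an acceptable way to abort the build (assertAtCompileTime is a 
made-up helper, not an existing library function):

  {-# LANGUAGE TemplateHaskell #-}
  module CompileTimeTest where

  import Language.Haskell.TH (Dec, Q)

  -- evaluate a pure Bool while compiling; 'fail' in Q aborts compilation
  assertAtCompileTime :: String -> Bool -> Q [Dec]
  assertAtCompileTime label ok
    | ok        = return []   -- no declarations generated, build continues
    | otherwise = fail ("compile-time test failed: " ++ label)

  -- usage, in a module that imports CompileTimeTest:
  --   $(assertAtCompileTime "reverse is involutive on a sample"
  --       (reverse (reverse [1,2,3]) == [1,2,3 :: Int]))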


-Alex-
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: HAppS-Data 0.9: XML, Pairs, HList, deriveAll

2007-08-14 Thread Alex Jacobson
We've refactored the HAppS repos and are now going to release components 
of HAppS as individual useful packages.  HAppS-Data is the first one.  
Don't pull a tag; pull the most recent stuff in the repos.


---
HAppS-Data v0.9: XML, Name/Value Pairs, HList, deriveAll

* toXml and fromXml transform your haskell values to and from XML.
Declare your own instances of class Xml to customize the Xml
representation.

* toPairs and fromPairs transform haskell values to and from
name-value pairs (e.g. for url-encoded data).  Pair names are xpath
expressions.  Use toPairsX if you want a conversion without the top
level constructor; fromPairs can handle that as long as your type has
only one top level constructor.

* toHTMLForms to produce an HTML forms representation of your data
that can be consumed by fromPairs in a urldecoding context.  toHTMLForms
uses toPairsX for shorter input field names.

* $(deriveAll) to batch-derive Default as well as the standard derivable
classes, without all the boring per-datatype "deriving" declarations.

* Default missing values by declaring your own instances of class Default,
or have default values derived automatically.

* Normalize your values by declaring your own instance of class Normalize.

* Type-safe, easy-to-use heterogeneous collections.  t1 .&. t2 .&. t3
is a heterogeneous list of values.  (HasT hlist t) is a class
constraint that the hlist contains a particular type.  (x hlist)::t
obtains a value of type t from inside the hlist.  (u hlist v) updates
the hlist with the value v if the hlist has that type.  x and u produce
compile-time errors if the type is not inside the hlist.  fromPairs is
currently broken for hlist.
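
For readers unfamiliar with the idea, here is a small sketch of such a 
type-indexed heterogeneous list, written against plain type classes 
purely for illustration; it is not the HAppS-Data implementation, and 
HNil, HCons and hGet are made-up names:

  {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

  data HNil      = HNil
  data HCons e l = HCons e l

  infixr 5 .&.
  (.&.) :: e -> l -> HCons e l
  (.&.) = HCons

  -- "the hlist contains a particular type"
  class HasT l t where
    hGet :: l -> t

  instance {-# OVERLAPPING #-} HasT (HCons t l) t where
    hGet (HCons x _) = x

  instance HasT l t => HasT (HCons e l) t where
    hGet (HCons _ xs) = hGet xs

  -- pulling values out by type; asking for a type that is not present
  -- is rejected at compile time
  example :: (Int, String)
  example = (hGet xs, hGet xs)
    where xs = (1 :: Int) .&. "hello" .&. True .&. HNil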

darcs get http://happs.org/HAppS/HAppS-Data

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] xpath types for SYB/generic haskell type system?

2007-08-13 Thread Alex Jacobson
The SYB papers provide really powerful functions for accessing and 
manipulating values in arbitrarily shaped containers.


The cost of this capability appears to be loss of type checking.  For 
example gfindtype x returns a maybe y.


Given that the type checker actually has to know whether or not x 
contains a y inside of it, is there a way to annotate a gfindtype sort 
of function so that it just returns a value, and applying it at the 
wrong type is a compiler-enforced error?


It may not be in this version of haskell, but it seems like there is no 
technical reason you could not have partial type annotations that 
describe the traversal strategies described in SYB.  Perhaps it is a 
type version of an xpath expression e.g


  myFindType::(.//y) x => x->y

The (.//y) x says that y is a type nested somewhere in x.

Note, since this is happening at compile time, this capability will 
still not prevent you from doing a (fromJust Nothing), but it still 
seems super valuable if you are doing generic haskell type stuff?


Is there a mathematical reason why this wouldn't work?

-Alex-
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to make haskell faster than python at finding primes?

2007-08-06 Thread Alex Jacobson

Paulo Tanimoto wrote:

The challenge was to implement the modcount algorithm, not to calculate
primes per se.
(see e.g. http://jjinux.blogspot.com/2005/11/io-comparison.html).


Can you show us the Python code?


Note this is Python for the naive accumulate-and-do-modulus version, not 
for modcount.  See below for the OCaml version of modcount.


Having slept a few hours, I still think the modcount version should be 
faster than the naive version because you don't have to recalculate a 
full modulus operation for each new value.  You just increment and check 
equality.  So I would love to get a short, fast Haskell modcount.
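
For what it is worth, one short Haskell modcount along those lines (a 
sketch written for this compilation, not code from the thread):

  -- keep (prime, candidate `mod` prime) pairs and bump each counter by
  -- the step size (2) instead of recomputing a full mod per prime
  primes :: [Int]
  primes = 2 : go 3 []
    where
      go n counts
        | all ((/= 0) . snd) counts = n : go (n + 2) (bump ((n, 0) : counts))
        | otherwise                 = go (n + 2) (bump counts)
        where
          bump = map step
          step (p, c) = let c' = c + 2 in (p, if c' >= p then c' - p else c')

  main :: IO ()
  main = print (take 1000 primes)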


-Alex-



---

#!/usr/bin/env python -OO

"""Find prime numbers.  See usage() for more information.

Author: JJ Behrens
Date: Sun Dec 30 03:36:58 PST 2001
Copyright: (c) JJ Behrens
Description:

Find prime numbers.  See usage() for more information.  The algorithm used
to determine if a given number, n, is prime is to keep a list of prime
numbers, p's, less than n and check if any p is a factor of n.

"""

import sys

"""Output usage information to the user.

mesg -- If this is not NULL, it will be output first as an error message.

"""
def usage(mesg):
  if mesg: sys.stderr.write("Error: %s\n" % mesg)
  sys.stderr.write("Usage: %s NUMBER_OF_PRIMES\n" % sys.argv[0])
  sys.stderr.write("Print out the first NUMBER_OF_PRIMES primes.\n")
  if mesg: sys.exit(1)
  else: sys.exit(0)

"""Output a prime number in a nice manner."""
def printPrime(p): print p

"""Is numCurr prime?

primeRecList -- This is the list of primes less than num_curr.

"""
def isPrime(numCurr, primeRecList):
  for p in primeRecList:
    if not numCurr % p: return 0
  else: return 1

"""Print out the first numPrimes primes.

numPrimes must be positive, of course.

"""
FIRST_PRIME = 2
def findPrimes(numPrimes):
  numCurr = FIRST_PRIME - 1
  primeRecList = []
  while numPrimes > 0:
    numCurr += 1
    if isPrime(numCurr, primeRecList):
      numPrimes -= 1
      printPrime(numCurr)
      primeRecList.append(numCurr)

if len(sys.argv) != 2: usage("missing NUMBER_OF_PRIMES")
try:
  numPrimes = int(sys.argv[1])
  if numPrimes < 1: raise ValueError
except ValueError: usage("NUMBER_OF_PRIMES must be a positive integer")
findPrimes(numPrimes)



(* Author: JJ Behrens
   Date: Sun Nov  4 02:42:42 PST 2001
   Copyright: (c) JJ Behrens
   Description:

   Find prime numbers.  See usage() for more information.  The algorithm
   used to determine if a given number, n, is prime is to keep a list of
   tuples (p, mc) where each p is a prime less than n and each mc is
   n % p.  If n is prime, then no mc is 0.  The effeciency of this
   algorithm is wholly determined by how efficiently one can maintain this
   list.  mc does not need to be recalculated using a full % operation
   when moving from n to n + 1 (increment and then reset to 0 if mc = p).
   Furthermore, another performance enhancement is to use lazy evaluation
   of mc (i.e. collect multiple increments into one addition and one
   modulo--this avoids a traversal of the entire list for values of n that
   are easy to factor).  As far as I know, I'm the inventor of this
   algorithm. *)

(* We'll contain a list of [prime_rec]'s that replace the simple list of
   primes that are used in simple algorithms.

   [prime] This is the prime, as before.

   [count] Given [n], [count] = [n] % [prime].

   [updated] One way to keep [count] up to date is to update it for each
 new [n].  However, this would traversing the entire list of
 [prime_rec]'s for each new value of [n].  Hence, we'll only update
 [count] each time that [prime] is considered as a possible factor
 of [n].  When we do update [count], we'll set [updated] to [n].
 E.g., if [count] has not been updated since [n1] and [n] is now [n2],
 then [updated] will be [n1].  If [prime] is now considered as a
 factor of [n2], then we'll set [updated] to [n2] and [count] to
 [count] + [n2] - [n1] % [prime].  If [count] is now 0, [prime] is
 indeed a factor of [n2].
*)
type prime_rec =
  { prime : int;
mutable count: int;
mutable updated: int }

(* Output usage information to the user.  If [mesg] is provided, it will
   be output first as an error message. *)
let usage ?(mesg = "") () =
  if not (mesg = "") then Printf.fprintf stderr "Error: %s\n" mesg;
  Printf.fprintf stderr "Usage: %s NUMBER_OF_PRIMES\n" Sys.argv.(0);
  prerr_string "Print out the first NUMBER_OF_PRIMES primes.\n";
  if mesg = "" then exit 0 else exit 1

(* Output a prime number in a nice manner. *)
let print_prime p =
  Printf.printf "%d\n" p

(* Find [numerator] % [divisor] quickly by assuming that [numerator] will
   usually be less than [opt_tries] * [divisor].  Just leave [opt_tries]
   to its default value unless you plan on doing some tuning. *)
let rec fast_mod ?(opt_tries = 2) numerator divisor =
  match opt_tries with
0 -> numerator mod divisor
  | _ -> begin
if numerator < divisor then nume

Re: [Haskell-cafe] how to make haskell faster than python at finding primes?

2007-08-06 Thread Alex Jacobson
Thought perhaps the problem is that modcount is just a slower algorithm. 
 ... nevermind.  Thanks.

-Alex-

Alex Jacobson wrote:
The challenge was to implement the modcount algorithm, not to calculate 
primes per se.

(see e.g. http://jjinux.blogspot.com/2005/11/io-comparison.html).

-Alex-

Donald Bruce Stewart wrote:

alex:
This implementation of calculating 1 primes (compiled with GHC 
-O2) is 25% slower than the naive python implementation.  Shouldn't 
it be faster?  Am I doing something obviously stupid?


Why not just:

primes = sieve [2..]

sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p > 0]

main   = print (take 1000 primes)

That's super naive, and seems to be around 5x faster than the code you 
were

trying. (So make sure you're doing the same thing as the python version)
-- Don 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to make haskell faster than python at finding primes?

2007-08-06 Thread Alex Jacobson
The challenge was to implement the modcount algorithm, not to calculate 
primes per se.

(see e.g. http://jjinux.blogspot.com/2005/11/io-comparison.html).

-Alex-

Donald Bruce Stewart wrote:

alex:
This implementation of calculating 1 primes (compiled with GHC -O2) 
is 25% slower than the naive python implementation.  Shouldn't it be 
faster?  Am I doing something obviously stupid?


Why not just:

primes = sieve [2..]

sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p > 0]

main   = print (take 1000 primes)

That's super naive, and seems to be around 5x faster than the code you were
trying. (So make sure you're doing the same thing as the python version)

-- Don 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] how to make haskell faster than python at finding primes?

2007-08-06 Thread Alex Jacobson
This implementation of calculating 1 primes (compiled with GHC -O2) 
is 25% slower than the naive python implementation.  Shouldn't it be 
faster?  Am I doing something obviously stupid?


  primes = map (\(x,_,_)->x) $ filter (\(_,isP,_)->isP) candidates
  candidates = (2,True,[]):(3,True,[]): map next (tail candidates)

  next (candidate,isP,modCounts) =
let newCounts = map incrMod2 modCounts in -- accumulate mods
(candidate+2 -- we only need to bother with odd numbers
,(isPrime newCounts) -- track whether this one is prime
,if isP then (candidate,2):newCounts else newCounts) -- add if prime
  isPrime = and .  map ((/=0).snd)
  incrMod2 (p,mc) = let mc' = mc+2 in
    if mc'<p then (p,mc') else if mc'>p then (p,1) else (p,0)

Note: It is shorter than the python, but I would have assumed that GHC 
could deliver faster as well.


-Alex-
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: HDBC or HSQL

2007-08-03 Thread Alex Jacobson
Have you looked at HAppS.DBMS.IxSet?  It gives you a type-safe way 
to query indexed collections.


-Alex-

Isto Aho wrote:

Hi,

I'd like to store small matrices into a db.  The number of rows and 
columns may vary in a way not known in advance.  One might use a relation 
(matrixId, col, row, value) or something like that, but if it is possible 
to put a matrix into the db in one command, some queries will be easier. 
E.g., one relation can store several matrices and it would be easy to 
query how many matrices are stored currently.  With the four-tuple above 
you can find out the number of unique matrixId's too, but it is not as 
easy as with matrices.

Anyhow, now I'm not sure if I should stick with HSQL any more... Earlier 
comments on this
thread made me think that maybe it would be a better idea to try to 
learn enough HDBC.


This would be used in a server application. Is HAppS applicable here?

e.g. after some tweaking the following works with HSQL:

addRows = do
    dbh <- connect server database user_id passwd
    intoDB dbh ([555,111, 50, 1000]::[Int]) ([21.0,22.0,23.0,24.0]::[Double])
    intoDB dbh ([556,111, 50, 1000]::[Int]) ([21.0,22.0,23.0,24.0]::[Double])
    intoDB dbh ([]::[Int]) ([]::[Double])
  where
    intoDB dbh i_lst d_lst =
        catchSql
          (do let cmd = "INSERT INTO trial (intList, dList) VALUES ("
                        ++ toSqlValue i_lst ++ "," ++ toSqlValue d_lst ++ ")"
              execute dbh cmd)
          (\e -> putStrLn $ "Problem: " ++ show e)


Similarly, queries can handle matrices and I like that it is now
possible to select those columns or rows from the stored matrix that
are needed.  E.g.

retrieveRecords2 :: Connection -> IO [[Double]]
retrieveRecords2 c = do
    -- query c "select dList[1:2] from trial" >>= collectRows getRow
    query c "select dList from trial" >>= collectRows getRow
  where
    getRow :: Statement -> IO [Double]
    getRow stmt = do
        lst <- getFieldValue stmt "dList"
        return lst

readTable2 = do
    dbh <- connect server database user_id passwd
    values <- retrieveRecords2 dbh
    putStrLn $ "dLists are : " ++ (show values)


br,
Isto


2007/8/1, Alex Jacobson <[EMAIL PROTECTED] 
<mailto:[EMAIL PROTECTED]>>:


Out of curiosity, can I ask what you are actually trying to do?

I am asking because I am trying to make HAppS a reasonable replacement
for all contexts in which you would otherwise use an external relational
database except those in which an external SQL database is a specific
requirement.

-Alex-



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: HDBC or HSQL

2007-08-03 Thread Alex Jacobson
Will be pushing out the refactored happs repos in the next 2 weeks.  The 
gist is:


* HAppS.IxSet provides efficient query operations on haskell sets.
* HAppS.State provides ACID, replicated, and soon sharded access to your
  application state.
* HAppS.Network will provide server side HTTP functionality from which 
to access your replicated state.


-Alex-

Bulat Ziganshin wrote:

Hello Alex,

Wednesday, August 1, 2007, 8:34:23 AM, you wrote:


I am asking because I am trying to make HAppS a reasonable replacement
for all contexts in which you would otherwise use an external relational
database except those in which an external SQL database is a specific 
requirement.


where i can read about such usage?




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] OS swapping and haskell data structures

2007-08-01 Thread Alex Jacobson
Ok, so for low throughput applications, you actually need a disk 
strategy.  Got it.

Ok, is there a standard interface to Berkeley DB or some other disk-based 
store?


-Alex-




Duncan Coutts wrote:

On Wed, 2007-08-01 at 11:31 -0700, Bryan O'Sullivan wrote:

Alex Jacobson wrote:
If you create a Data.Map or Data.Set larger than fits in physical 
memory, will OS level swapping enable your app to behave reasonably or 
will things just die catastrophically as you hit a memory limit?
Relying on the OS to page portions of your app in and out should always 
be the fallback of last resort.  You are fairly guaranteed to get 
terrible performance because the VM subsystem can't anticipate your 
app's memory access patterns, and catastrophic death of either your app 
or other system processes is a strong possibility (Google for "OOM 
killer" if you want some horror stories).  In many cases, you can't even 
rely on paging being possible.


Furthermore, as I understand it, GC does not interact well with paging:
since the GC has to traverse the data structures on major GCs, it'll
force it all to be kept in memory.

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] OS swapping and haskell data structures

2007-07-31 Thread Alex Jacobson
If you create a Data.Map or Data.Set larger than fits in physical 
memory, will OS level swapping enable your app to behave reasonably or 
will things just die catastrophically as you hit a memory limit?



-Alex-
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: HDBC or HSQL

2007-07-31 Thread Alex Jacobson

Out of curiosity, can I ask what you are actually trying to do?

I am asking because I am trying to make HAppS a reasonable replacement 
for all contexts in which you would otherwise use an external relational 
database except those in which an external SQL database is a specific 
requirement.


-Alex-




Isto Aho wrote:

Hi,

I was also wandering between these different db-libs and thanks for your 
information.


I tried several (HDBC, HSQL, HaskellDB) and made only small trials.
HaskellDB has quite many examples on wiki that gave a quick start to 
further trials.
But, I wasn't able to tell that some of the fields have default values 
and then it

was already time to move on to the HSQL and HDBC trials.

Is it possible to use sql-array-types with HDBC with postgresql?  I don't 
remember whether this was the reason why I eventually tried HSQL - anyhow, 
it was rather difficult to get started with HDBC, but the src test cases 
helped here.  One example in a wiki would do miracles :)


HSQL didn't have the array-types but it took only a couple of hours to 
add "a sort of" support for those.  There are some problems though... 
(indexed table queries returning some nulls is not yet working, and ghci 
seems to be allergic to this.)  I was even wondering whether I should 
propose a patch for this in the near future.

But if HDBC can handle those sql-arrays, or if you can give a couple of 
hints on how to proceed in order to add them there, given your view below, 
I'd be willing to try to help / to try to use HDBC.


br,
Isto

2007/7/30, John Goerzen <[EMAIL PROTECTED] 
>:


On 2007-07-25, George Moschovitis <[EMAIL PROTECTED]
> wrote:
 > I am a Haskell newbie and I would like to hear your suggestions
regarding a
 > Database conectivity library:
 >
 > HSQL or HDBC ?
 >
 > which one is better / more actively supported?

I am the author of HDBC, so take this for what you will.

There were several things that bugged me about HSQL, if memory serves:

1) It segfaulted periodically, at least with PostgreSQL

2) It had memory leaks

3) It couldn't read the result set incrementally.  That means that if
you have a 2GB result set, you better have 8GB of RAM to hold it.

4) It couldn't reference colums in the result set by position, only by
name

5) It didn't support pre-compiled queries (replacable parameters)

6) Its transaction handling didn't permit enough flexibility

I initially looked at fixing HSQL, but decided it would be easier to
actually write my own interface from scratch.

HDBC is patterned loosely after Perl's DBI, with a few thoughts from
Java's JDBC, Python's DB-API, and HSQL mixed in.

I believe it has fixed all of the above issues.  The HDBC backends that
I've written (Sqlite3, PostgreSQL, and ODBC) all use Haskell's C memory
management tools, which *should* ensure that there is no memory
leakage.

I use it for production purposes in various applications at work,
connecting to both Free and proprietary databases.  I also use it in my
personal projects.  hpodder, for instance, stores podcast
information in
a Sqlite3 database accessed via HDBC.  I have found HDBC+Sqlite3 to be a
particularly potent combination for a number of smaller projects.
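
To make one of the points above concrete (pre-compiled queries with 
replaceable parameters), here is a small HDBC+Sqlite3 sketch of my own; 
the table and values are invented:

  import Database.HDBC
  import Database.HDBC.Sqlite3 (connectSqlite3)

  main :: IO ()
  main = do
    conn <- connectSqlite3 "test.db"
    _ <- run conn "CREATE TABLE IF NOT EXISTS podcasts (id INTEGER, title TEXT)" []
    -- a pre-compiled query with replaceable parameters, executed twice
    stmt <- prepare conn "INSERT INTO podcasts (id, title) VALUES (?, ?)"
    mapM_ (execute stmt) [ [toSql (1 :: Int), toSql "Haskell Cafe"]
                         , [toSql (2 :: Int), toSql "HAppS news"] ]
    commit conn
    rows <- quickQuery' conn "SELECT id, title FROM podcasts" []
    mapM_ print rows
    disconnect conn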

http://software.complete.org/hdbc/wiki/HdbcUsers
 has a list of some
programs that are known to use HDBC.  Feel free to add yours to it.

-- John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org 
http://www.haskell.org/mailman/listinfo/haskell-cafe




--
br,
Isto




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A Query Language for Haskell Terms

2007-06-27 Thread Alex Jacobson

Titto,

The usual tradeoff is between efficiency and queryability.  It is really 
easy to optimize graph traversal.  It is really hard to get performance 
out of the Logic model.  The traditional sweet spot has been the 
relational model, but it breaks down at very large scale.  A lot of very 
large scale web sites implement some form of relational database 
sharding, which basically means partitioning the database, doing a bit 
of graph traversal to decide on the database, running a relational query 
within that database, and then merging the results.


-Alex-

Pasqualino 'Titto' Assini wrote:

On Wednesday 27 June 2007 09:32:16 Alex Jacobson wrote:
  

Titto,

Have you looked at HAppS.DBMS.IxSet?  Right now it provides a generic
way to query indexed sets.

If you want to take a shot at making the queries serializable, I don't
think it would be that difficult (but I have not tried so YMMV).



Hi Alex, thanks for remininding me about that. It is a very nice back-end and 
as you say, it should not be too hard to design a SQL-like query language  on 
top of it. 



I am still wondering, however, what meta-model is more appropriate to 
represent the info held in a Web app.


Unfortunately there seem to be at least 3 different ones (without considering 
mixed approaches like F-Logic):
 


1) Graph

This is really the native Haskell way of representing information: the model 
is defined using classes or data types and types are connected by direct 
uni-directional links.


So for example the Sale/Item model (from the HAppS DBMS Examples) might be 
written like:


data Item = Item {stock::Int,description::String,price::Cents}
            deriving (Ord,Eq,Read,Show)


data Sale = Sale {date::CalendarTime
                 ,soldItem::Item -- NOTE: uni-directional link to Item
                 ,qty::Int
                 ,salePrice::Cents}
            deriving (Ord,Eq,Read,Show)


or in more abstract form using classes:

class Item i where
 description :: i -> String
 price :: i -> Cents

class Sale s where
 soldItem :: Item i => s -> i

This is also very much the Web-way: information is a graph of resource linked 
via uni-directional links.


Information is queried by path traversal (REST-style):

Assuming that the root "/" represents the collection of all sales then:

HTTP GET /elemAt[2345]/soldItem/description.json 

might return the JSON representation of the description of the item sold as 
part of sale 2345.



2) Relational

Information is represented as tables, that can be joined up via keys, as 
implemented  in HAppS DBMS or in any relational database.


The model becomes:

data Item = Item {itemId::Id  -- NOTE: primary key
                 ,stock::Int,description::String,price::Cents}
            deriving (Ord,Eq,Read,Show)

data Sale = Sale {date::CalendarTime
                 ,soldItemId::Id -- NOTE: foreign key
                 ,qty::Int,salePrice::Cents}
            deriving (Ord,Eq,Read,Show)


Plus the appropriate indexes definitions.

Information can be queried via a SQL-like language.


3) Logic

This is the "Semantic Web" way: information is broken down into assertions, 
that in their simplest form are just triples: subject predicate object, the 
model then becomes something like:


Item hasDescription String
Item hasPrice Cents
Sale hasItem Item

It can be populated piecemeal with assertions like:

item0 hasDescription "indesit cooker 34BA"
item0 hasPrice 3.5
Sale0 hasSoldItem item0

It can be queried using a logic-oriented query language (e.g SPARQL):
sale2345 hasItem ?item
?item hasDescription ?description



Moving from Graph to Relational to Logic, the meta-model becomes simpler 
and more flexible.  The flip-side is that the model (and the queries) 
become more verbose.  It is not clear where the sweet spot is.
 

What do people think?



Best,

titto








  


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A Query Language for Haskell Terms

2007-06-27 Thread Alex Jacobson

Titto,

Have you looked at HAppS.DBMS.IxSet?  Right now it provides a generic 
way to query indexed sets.


If you want to take a shot at making the queries serializable, I don't 
think it would be that difficult (but I have not tried so YMMV).


-Alex-

Pasqualino 'Titto' Assini wrote:

Hi,

I am writing a Web application using HAppS.

As all HAppS apps, it represents its internal state as a Haskell term (HAppS 
automagically provides persistence and transactions).


It is a neat and efficient solution: you can write your data model entirely 
in Haskell and, at least for read-only transactions (queries), it will 
operate as fast as possible since all data is in memory (if your 
transactions modify the application state, they have to be recorded on 
disk to make them persistent, but this is pretty fast too).


One major component, however, seems to be missing: if we are effectively 
using Haskell as an in-memory database, where is the "SQL for Haskell", a 
generic query language for Haskell terms?


There are three basic functions that every web app has to provide, and all of 
them could be provided by a generic "Haskell SQL":

-- query the application state
-- transform (possibly monadically) the application state : the result of the 
query is the new state
-- access control:  what a user can see is what is returned by an internal 
access control query


The availabilty of such a language would be a major boost for Haskell-based 
web applications as every application could be accessed via the same API, the 
only difference being the underlying application-specific data model.



So my question is: what ready-made solutions are there in this space, if any?

And if there are none, how would you proceed to design/implement such a 
language?




The basic requirements, in decreasing order of importance, are:

-- Safe, it must be possible to guarantee that a query:
--- cannot cause a system crash
--- completes within a fixed amount of time
--- uses a 'reasonable' amount of space
--- cannot perform any unsafe operation (IO, or any unallowed read/write of 
the application state)


-- Expressive (simple queries should be simple, complex queries should be 
possible)


-- Simple to implement 


-- Efficient:
--- Repeated queries should be executed efficiently time-wise (it is 
acceptable for queries to be executed inefficiently the first time) and all 
should be space-efficient, so it should not do unnecessary copying. 


-- User friendly:
--- Simple to use for non-haskeller
--- Short queries

Ah, I almost forgot, it should also be able to make a good espresso.


The problem can be broken in two parts:

1) How to implement generic queries on nested terms in Haskell?

2) How to map the queries, written as a string, to the internal Haskell query


Regarding the first point, I am aware of the following options:
- SYB (Data.Generics..)
- Oleg's Zipper 
- (Nested) list comprehensions (that are being extended with SQL-like order by 
and group by operators)
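
To make option 1 concrete, a tiny sketch of an SYB-style generic query 
over the Sale/Item model from earlier in the message (mine, not from the 
thread; the Store wrapper and the Cents newtype are assumptions):

  {-# LANGUAGE DeriveDataTypeable #-}
  import Data.Generics (Data, Typeable, listify)

  newtype Cents = Cents Int deriving (Eq, Ord, Show, Data, Typeable)

  data Item  = Item  { description :: String, price :: Cents }
               deriving (Show, Data, Typeable)
  data Sale  = Sale  { soldItem :: Item, qty :: Int, salePrice :: Cents }
               deriving (Show, Data, Typeable)
  data Store = Store { sales :: [Sale] }
               deriving (Show, Data, Typeable)

  -- collect every Cents value anywhere inside the state, however nested
  allPrices :: Store -> [Cents]
  allPrices = listify (\(Cents _) -> True)

  -- allPrices (Store [Sale (Item "cooker" (Cents 350)) 1 (Cents 300)])
  --   == [Cents 350, Cents 300]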


Being rather new to Haskell, I find all these options rather unfamiliar, 
so I would appreciate any advice on what should be preferred and why.




Regarding the second point: 

The simplest solution would be to avoid the problem entirely by using Haskell 
directly as the query language.


This is the LambdaBot way: queries are Haskell 
expression, compiled in a limited environment (a module with a fixed set of 
imports, no TH).


Lambdabot avoids problems by executing the expression on a separate process in 
a OS-enforced sandbox that can be as restrictive as required (especially 
using something like SELinux).


However, to get the query to execute efficiently it would probably have to be 
executed in a GHC thread and I am not sure how safe that would be.


Looking at the discussion at  
http://haskell.org/haskellwiki/Safely_running_untrusted_Haskell_code 
 it seems clear that there are many open issues.


For example, how would I enforce limits on the space used by the query?

So, it would probably be better to define a separate query 
language that is  less expressive but more controllable than full Haskell, but 
what form should that take?




Any suggestion/tip/reference is very welcome,

 titto








___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
  


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] class MonoidBreak?

2007-06-08 Thread Alex Jacobson

Ok how about this class:

  class (Monoid m) => MonoidBreak m where
  mbreak::m->m->m

And the condition is

  mappend (mbreak y z) y == z


-Alex-

Dan Piponi wrote:

On 6/7/07, Alex Jacobson <[EMAIL PROTECTED]> wrote:

Is there a standard class that looks something like this:

class (Monoid m) => MonoidBreak m where
 mbreak::a->m a->(m a,m a)


I think you have some kind of kind issue going on here. If m is a
Monoid I'm not sure what m a means. Looks like you're trying to factor
elements of monoids in some way. Maybe you mean

class (Monoid m) => MonoidBreak m where
   mbreak::a->m->(m,m)

Though I'm not sure what the relationship between m and a is intended 
to be.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] class MonoidBreak?

2007-06-07 Thread Alex Jacobson

Is there a standard class that looks something like this:

class (Monoid m) => MonoidBreak m where
mbreak::a->m a->(m a,m a)

and it should follow some law like:

m == uncurry mappend $ mbreak x m
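
As a concrete instance of that law (a sketch of mine, not part of the 
original message): with the list monoid, Prelude's break fits the shape 
and satisfies it.

  import Data.Monoid (mappend)

  mbreakList :: Eq a => a -> [a] -> ([a], [a])
  mbreakList x = break (== x)

  -- law check:  m == uncurry mappend $ mbreak x m
  propBreak :: Eq a => a -> [a] -> Bool
  propBreak x m = m == (uncurry mappend $ mbreakList x m)

  -- propBreak 'c' "abcde" == True   -- "abcde" == "ab" ++ "cde"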

-Alex-

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] RE: [Haskell] boilerplate boilerplate

2007-06-01 Thread Alex Jacobson
I suppose a deriveAll command from template haskell would work.  Is that 
really possible?


-Alex-
Neil Mitchell wrote:

Hi Alex,


The problem with Data.Derive is that I now have a pre-processor cycle as
part of my build process. Automatic and universal Data and Typeable
instance deriving should just be built into Haskell.


Not if you use the template haskell support. We don't currently have a
"deriveAll" command, but I'm sure it could be added; it would apply a
given derivation to every data type in that module.

Thanks

Neil


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] RE: [Haskell] boilerplate boilerplate

2007-06-01 Thread Alex Jacobson

Claus Reinke wrote:
Actually, standalone deriving doesn't really solve the boilerplate 
boilerplate problem.  My original complaint here is that I don't want 
to explicitly declare a deriving (Data,Typeable) for every type used 
somewhere inside a larger type I am using.  In this particular case, 
I am using SYB to autogenerate/autoparse XML.  The Atom/RSS datatype 
has a lot of elements in it.  Having to do an explicit declaration of 
deriving (Data,Typeable) for each of them is just a tremendous 
beat-down no matter where I do it.


A simple solution to the problem is just to have the compiler 
automatically derive Data and Typeable for all types. 


it still sounds as if you might want to follow Neil's suggestions 
about using

Data.Derive, which does seem to have a derive-this-for-all feature.

The problem with Data.Derive is that I now have a pre-processor cycle as 
part of my build process. Automatic and universal Data and Typeable 
instance deriving should just be built into Haskell.  I totally agree 
that, if you don't have this deriving by default, it is totally useful 
to be able to derive in a location different from where the type is 
declared, but that is a separate issue.  My main point is that Data and 
Typeable should always be there without extra futzing on the part of the 
programmer.


Speaking of which, any thoughts on fixing my SYB code in the other thread?

-Alex-

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] how do I pass customization items in syb code?

2007-06-01 Thread Alex Jacobson
I'm looking at the XML SYB example 
http://www.cs.vu.nl/boilerplate/testsuite/xmlish/Main.hs


I'd like to find a way to pass other type customizations as arguments to 
data2content and content2data.


I modified data2Content as follows:

  data2content f = element
   `ext1Q` list
   `extQ`  string 
   `extQ`  f


But I can't figure out how to give f a type such that 
 
   (data2Content $ myFooElem `extQ` myBarElem) 

operates on both Foo and Bar elements. Right now I am giving f the type 
f::(a->[Content]) which compiles but causes the wrong behavior.


Any recommendations on how to fix this?

-Alex-

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] RE: [Haskell] boilerplate boilerplate

2007-05-31 Thread Alex Jacobson
Actually, standalone deriving doesn't really solve the boilerplate 
boilerplate problem.  My original complaint here is that I don't want to 
explicitly declare a deriving (Data,Typeable) for every type used 
somewhere inside a larger type I am using.  In this particular case, I 
am using SYB to autogenerate/autoparse XML.  The Atom/RSS datatype has a 
lot of elements in it.  Having to do an explicit declaration of deriving 
(Data,Typeable) for each of them is just a tremendous beat-down no 
matter where I do it.
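
To make the complaint concrete, a small sketch (Feed, Entry and Link are 
invented stand-ins for the real Atom/RSS types): every type reachable 
from the top-level one needs its own deriving clause before SYB can 
traverse it.

  {-# LANGUAGE DeriveDataTypeable #-}
  import Data.Generics (Data, Typeable)

  data Feed  = Feed  { feedTitle :: String, entries :: [Entry] }
               deriving (Show, Data, Typeable)
  data Entry = Entry { summary :: String, links :: [Link] }
               deriving (Show, Data, Typeable)
  data Link  = Link  { href :: String }
               deriving (Show, Data, Typeable)
  -- ...and so on for every nested element type, which is the beat-down
  -- being described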


A simple solution to the problem is just to have the compiler 
automatically derive Data and Typeable for all types.  Perhaps 
initially there could be some compiler flag like -fSYB. 

Slightly more elegant would be to not automatically derive if the user 
has done so explicitly and to add syntax to the deriving clause like 
"deriving ... not (Data,Typeable)" to tell the compiler that these 
instances should be unavailable for this type.


A substantially more general/elegant solution would be for the compiler 
to derive instances automatically for any classes whose methods I use 
and for which there has been no explicit contradictory declaration.  But 
I assume that is harder and having Data and Typeable means you can use 
SYB and not worry so much about deriving instances anymore.


Most elegant would be for the user to be able to add derivable classes 
via import declarations, but again simply having Data and Typeable is a 
95% solution and the perfect should not be the enemy of the good.


-Alex-



Simon Peyton-Jones wrote:

| does that help to keep it on the radar?-)
| claus

Indeed!  But please modify the wiki.  Email has a half life of about 1 day!

S
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
  


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Network.HTTP+ByteStrings Interface--Or: How to shepherd handles and go with the flow at the same time?

2007-05-31 Thread Alex Jacobson
The HAppS HTTP code basically delivers the first 64k and a handle to 
acquire the rest.  In 99% or more of cases the document fits in memory, 
so the 64k bound is fine.  If you have something bigger, the user is 
going to have to decide how to handle that on a case-by-case basis.


Note: chunk-encoding means that there is no theoretical limit to how big 
an HTTP request or response may be.


-Alex-

Jules Bean wrote:

I've been having something of a discussion on #haskell about this but
I had to go off-line and, in any case, it's a complicated issue, and I
may be able to be more clear in an email.

The key point under discussion was what kind of interface the HTTP
library should expose: synchronous, asynchronous? Lazy, strict?

As someone just pointed out, "You don't like lazy IO, do you?". Well,
that's a fair characterisation. I think unsafe lazy IO is a very very
cute hack, and I'm in awe of some of the performance results which
have been achieved, but I think the disadvantages are underestimated.

Of course, there is a potential ambiguity in the phrase 'lazy IO'. You
might interpret 'lazy IO' quite reasonably to refer any programming
style in which the IO is performed 'as needed' by the rest of the
program. So, to be clear, I'm not raising a warning flag about that
practice in general, which is a very important programming
technique. I'm raising a bit of a warning flag over the particular
practice of achieving this in a way which conceals IO inside thunks
which have no IO in their types: i.e. using unsafeInterleaveIO or even
unsafePerformIO.

Why is this a bad idea? Normally evaluating a haskell expression can
have no side-effects. This is important because, in a lazy language,
you never quite know[*] when something's going to be evaluated. Or if
it will. Side-effects, on the other hand, are important things (like
launching nuclear missiles) and it's rather nice to be precise about
when they happen. One particular kind of side effect which is slightly
less cataclysmic (only slightly) is the throwing of an exception. If
pure code, which is normally guaranteed to "at worst" fail to
terminate can suddenly throw an exception from somewhere deep in its
midst, then it's extremely hard for your program to work out how far
it has got, and what it has done, and what it hasn't done, and what it
should do to recover. On the other hand, no failure may occur, but the
data may never be processed, meaning that the IO is never 'finished'
and valuable system resources are locked up forever. (Consider a naive
program which reads only the first 1000 bytes of an XML document
before getting an unrecoverable parse failure. The socket will never
be closed, and system resources will be consumed permanently.)

Trivial programs may be perfectly content to simply bail out if an
exception is thrown. That's very sensible behaviour for a small
'pluggable' application (most of the various unix command line
utilities all bail out silently or nearly silently on SIGPIPE, for
example). However this is not acceptable behaviour in a complex
program, clearly. There may be resources which need to be released,
there may be data which needs saving, there may be reconstruction to
be attempted on whatever it was that 'broke'.

Error handling and recovery is hard. Always has been. One of the
things that simplifies such issues is knowing "where" exceptions can
occur. It greatly simplifies them. In haskell they can only occur in
the IO monad, and they can only occur in rather specific ways: in most
cases, thrown by particular IO primitives; they can also be thrown
'To' you by other threads, but as the programmer, that's your
problem!.

Ok. Five paragraphs of advocacy is plenty. If anyone is still reading
now, then they must be either really interested in this problem, or
really bored. Either way, it's good to have you with me! These issues
are explained rather more elegantly by Oleg in [1].


So, where does that leave the HTTP library? Well here are the kinds of
interface I can imagine. I'm deliberately ignoring all the stuff about
request headers, request content, and imagining that this is all about
URL -> ByteString. Here are the options that occur to me:

A. Strict, synchronous GET
   sSynGET :: URL -> IO ByteString

   Quite simply blocks the thread until the whole data has
   arrived. Throws some kind of exception on failure, presumably. This
   is a simple primitive, appropriate for relatively small files
   (files which fit comfortably in your memory) and simple
   programs. It's also great for programs which want to take their own
   control over the degree of asynchrony; they can just fork as many
   threads as they choose to GET with.

B. Strict, asynchronous GET
   sAsynGET :: URL -> IO (MVar ByteString)

   Download the entire data, but do it in a separate thread. Give me
   an MVar so I can synchronise on the data's arrival in whichever way
   suits my program best. Suitable for small files which fit
   conveniently in memory. 
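
Option B can be built directly on top of option A; a minimal sketch, 
assuming the sSynGET primitive named above:

  import Control.Concurrent (forkIO)
  import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar)
  import qualified Data.ByteString as B

  type URL = String

  -- the strict, synchronous primitive from option A, left abstract here
  sSynGET :: URL -> IO B.ByteString
  sSynGET = undefined

  -- option B: run the synchronous GET in its own thread and hand back
  -- an MVar the caller can block on whenever it chooses
  sAsynGET :: URL -> IO (MVar B.ByteString)
  sAsynGET url = do
    box <- newEmptyMVar
    _ <- forkIO (sSynGET url >>= putMVar box)
    return box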

[Haskell-cafe] where do I point the type annotations

2007-05-17 Thread Alex Jacobson
I am playing with using SYB to make generic indexed collections.  The 
current code is this:


   data Syb = Syb [Dynamic] -- list of [Map val (Set a)]

   empty item = Syb $ gmapQ (toDyn . emp item) item
     where
       emp :: x -> y -> Map.Map y (Set.Set x)
       emp x y = Map.empty

   insert x (Syb indices) = Syb $ zipWith f indices (gmapQ toDyn x)
     where
       f dynIndex dynAttr =
           toDyn $ Map.insert attr
                     (maybe (Set.singleton x) (Set.insert x) $
                        Map.lookup attr index)
                     index
         where
           index = fromJust $ fromDynamic dynIndex
           attr  = fromJust $ fromDynamic dynAttr

   e  = empty i where i = i :: Test
   t1 = Test "foo" 2
   c1 = insert t1 e

 

Which causes the following error (the line numbers are wrong because 
there was other code in the original):


 Ambiguous type variable `a' in the constraints:
   `Ord a' arising from use of `insert' at Main.hs:113:5-15
 `Typeable a' arising from use of `insert' at Main.hs:113:5-15
 Probable fix: add a type signature that fixes these type variable(s)

What am I doing wrong?  Is there a better way to define insert that does 
not have this problem?
Note: I also tried doing this so that each attribute tried to find a 
matching index using fromDynamic, but that gave me an error involving 
gmapQ not having an Ord constraint.


-Alex-

 




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] HaskellForge?

2007-01-10 Thread Alex Jacobson

Let's be really specific about what we want to have in this regard:

1. repo hosting
2. repo searching
3. A shared/federated name space mapping module names to the URLs of 
repos that implement those modules
4. A dev system that uses the name space to download and import chase 
the modules necessary for your program to run
5. A build system that takes a program name and builds/installs it on 
your system


For #1, web hosting with ssh access is really cheap and easy.  I've seen 
hosting as low as $1/month.  If people here are really unsatisfied with 
all the hosting out there, I am happy to provide simple darcs hosting 
on one of my servers for a pittance.


For #2, google works pretty well.  Is there functionality we need that 
it misses?


For #3, I made a start with .  If 
people want me to add other packages to that namespace let me know.  
Conceptually, the namespace is federated because you can combine that 
file with other files to build a composite namespace.


For #4, I made a start with SearchPath (available at 
http://searchpath.org).  Searchpath looks at the source of the haskell 
module passed on on the command line and then does recursive import 
chasing accross the internet using the set of module maps passed on the 
command line.  Currently, is does not handle well modules that use cpp 
to import code and it doesn't handle at all modules that have 
dependencies on C libraries that are not already local and on the path.


For #5, the platform specific package managers seem like the correct 
solution.


Perhaps something important is gained from integrating some subset of 
1-5.  Perhaps the particular implementations of #1-5 are lacking in some 
manner that is not apparent. Perhaps we just need community acceptance 
for a particular version of each of these.


-Alex-






Michael T. Richter wrote:

On Mon, 2007-08-01 at 18:19 +0100, Sven Panne wrote:

> For example, if I want to install Rails (ruby web-app framework), I just
> type:
>   gem install rails
> It's pretty slick.

  
How does this work with the native packaging mechanism on your platform 
(RPM, ...)? Does it work "behind it's back" (which would be bad)? 



It doesn't.  It is its own Ruby-specific packaging mechanism.

Let's 
assume that "rails" needs "foo" and "bar" as well, which are not yet on your 
box. Does "gem install" transitively get/update all dependencies of "rails"?



Well rails needs dozens of libraries, seemingly, and it can be told to 
collect all dependencies.  (If it isn't told one way or another, it 
asks.)  This is true, however, iff the dependencies are all gems 
themselves.  If you need a third-party library installed that's not 
wrapped in a gem, you have to use your usual packaging system to get 
it.  (Being an Ubuntu user, I use aptitude.)


Overall, I like the gems approach.  The Ruby packages for 
debian-alikes are almost invariably out of date and building a lot of 
these Ruby enhancements is a pain in the posterior.  If I want a 
stable version of a given component, I'll use aptitude (or RPM or 
whatever) and live with it being out of date.  If I want the latest 
and greatest, however, I'll stick to the gems.  Since gems can be 
installed and deleted just like aptitude's packages can be (and just 
as cleanly) it really isn't that hard an approach.


--
*Michael T. Richter*
/Email:/ [EMAIL PROTECTED], [EMAIL PROTECTED]
/MSN:/ [EMAIL PROTECTED], [EMAIL PROTECTED]; /YIM:/ 
michael_richter_1966; /AIM:/ YanJiahua1966; /ICQ:/ 241960658; 
/Jabber:/ [EMAIL PROTECTED]


/"[Blacks] ... are inferior to the whites in the endowments both of 
body and mind."/ *--Thomas Jefferson*




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
  


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] SYB and/or HList for XML, deserialization and collections

2006-12-28 Thread Alex Jacobson
I'd really rather factor out the template haskell.  It does not leave me 
feeling good.


At the specific level, TemplateHaskell doesn't solve the problem of 
getting good XML element names.  For example, HList lets me annotate 
labels with information about whether they are attributes or elements.


At the usage level, it does not traverse other modules easily and forces 
you to do weird things with code order in order to compile.


At the concept level, I find it way too hard to think in terms of the 
abstract syntax of Haskell.  I'd rather be thinking in terms of 
application semantics.


At the implementation level, it forces the generation of standard 
accessor names e.g. withFoo for every foo, rather than supporting a 
general syntax for access or update.


-Alex-



Bulat Ziganshin wrote:

Hello S.,

Wednesday, December 27, 2006, 2:24:00 AM, you wrote:

  
Having just done a major refactor of the HAppS HTTP API to make it 
much much easier to use, I am now thinking about simplifying the 
current boilerplate associated with XML serialization and state 
deserialization.



have you considered using Template Haskell to do it? at least it is used for
automatic generation of class instances for binary serialization


  


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe