RE: [Haskell-cafe] Unable to hc-build ghc 6.6.1

2008-06-10 Thread Re, Joseph (IT)
 From: Ian Lynagh [mailto:[EMAIL PROTECTED] 
 You're likely to find it easier to bootstrap by installing a 
 bindist onto the machine, unless that is impossible for some reason.

Unfortunately it is, due to policy.

  GenApply.o(.text+0x13a55): In function `s5cr_info':
  : undefined reference to `base_DataziList_zdsintersperse_info'
 
 Nothing comes to mind. Were the libraries rebuilt after 
 building GenApply on the machine on which the hc files were generated?

Nope. After building the hc tarball with a new copy of ghc 6.6.1 on the
host machine, I pulled it along with a new copy of the source tarball
from http://www.haskell.org/ghc onto the target machine. Untarred the hc
files, changed mk/build.mk and fixed mk/bootstrap.mk, and ran
distrib/hc-build.


NOTICE: If received in error, please destroy and notify sender. Sender does not 
intend to waive confidentiality or privilege. Use of this email is prohibited 
when received in error.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Unable to hc-build ghc 6.6.1

2008-06-10 Thread Re, Joseph (IT)
 -Original Message-
  From: Ian Lynagh [mailto:[EMAIL PROTECTED]
  Nothing comes to mind. Were the libraries rebuilt after building 
  GenApply on the machine on which the hc files were generated?
 
 Nope. After building the hc tarball with a new copy of ghc 
 6.6.1 on the host machine, I pulled it along with a new copy 
 of the source tarball from http://www.haskell.org/ghc onto 
 the target machine. Untarred the hc files, changed 
 mk/build.mk and fixed mk/bootstrap.mk, and ran distrib/hc-build.

Just wanted to note that I get the same error with ghc 6.4.2 (after
patching ghc/Makefile's ordering of the SUBDIRS line as noted in
http://hackage.haskell.org/trac/ghc/ticket/841, adding -lncurses to
mk/bootstrap.mk, and removing ghc 6.6.1 to prevent conflicts).




[Haskell-cafe] Unable to hc-build ghc 6.6.1

2008-06-09 Thread Re, Joseph (IT)
I'm trying to do a registered hc-build on linux2.4 x86 with ghc 6.6.1.
After fixing mk/bootstrap.mk to include -lncurses in HC_BOOT_LIBS to
get past an undefined reference to tputs et al, I've gotten stuck with
the following undefined reference error:


== gmake all -wr;
 in /tmp/ghc-6.6.1/utils/runstdtest

gmake[1]: Nothing to be done for `all'.
Finished making all in runstdtest: 0

== gmake all -wr;
 in /tmp/ghc-6.6.1/utils/genapply

gcc -o genapply  -fno-defer-pop -fomit-frame-pointer -fno-builtin
-DDONT_WANT_WIN32_DLL_SUPPORT -D__GLASGOW_HASKELL__=606  -O
-I/tmp/ghc-6.6.1/includes -I/tmp/ghc-6.6.1/libraries/base/include
-I/tmp/ghc-6.6.1/libraries/unix/include
-I/tmp/ghc-6.6.1/libraries/parsec/include
-I/tmp/ghc-6.6.1/libraries/readline/include-L/tmp/ghc-6.6.1/rts
-L/tmp/ghc-6.6.1/libraries/base -L/tmp/ghc-6.6.1/libraries/base/cbits
-L/tmp/ghc-6.6.1/libraries/haskell98 -L/tmp/ghc-6.6.1/libraries/parsec
-L/tmp/ghc-6.6.1/libraries/regex-base
-L/tmp/ghc-6.6.1/libraries/regex-compat
-L/tmp/ghc-6.6.1/libraries/regex-posix -L/tmp/ghc-6.6.1/libraries/Cabal
-L/tmp/ghc-6.6.1/libraries/template-haskell
-L/tmp/ghc-6.6.1/libraries/readline -L/tmp/ghc-6.6.1/libraries/unix
-L/tmp/ghc-6.6.1/libraries/unix/cbits -u
base_GHCziBase_Izh_static_info -u base_GHCziBase_Czh_static_info -u
base_GHCziFloat_Fzh_static_info -u base_GHCziFloat_Dzh_static_info
-u base_GHCziPtr_Ptr_static_info -u base_GHCziWord_Wzh_static_info
-u base_GHCziInt_I8zh_static_info -u base_GHCziInt_I16zh_static_info
-u base_GHCziInt_I32zh_static_info -u
base_GHCziInt_I64zh_static_info -u base_GHCziWord_W8zh_static_info
-u base_GHCziWord_W16zh_static_info -u
base_GHCziWord_W32zh_static_info -u base_GHCziWord_W64zh_static_info
-u base_GHCziStable_StablePtr_static_info -u
base_GHCziBase_Izh_con_info -u base_GHCziBase_Czh_con_info -u
base_GHCziFloat_Fzh_con_info -u base_GHCziFloat_Dzh_con_info -u
base_GHCziPtr_Ptr_con_info -u base_GHCziStable_StablePtr_con_info -u
base_GHCziBase_False_closure -u base_GHCziBase_True_closure -u
base_GHCziPack_unpackCString_closure -u
base_GHCziIOBase_stackOverflow_closure -u
base_GHCziIOBase_heapOverflow_closure -u
base_GHCziIOBase_NonTermination_closure -u
base_GHCziIOBase_BlockedOnDeadMVar_closure -u
base_GHCziIOBase_Deadlock_closure -u
base_GHCziWeak_runFinalizzerBatch_closure -u __stginit_Prelude
GenApply.o -lHSreadline -lreadline -lHStemplate-haskell -lHSunix
-lHSunix_cbits -lHSCabal -lHShaskell98 -lHSregex-compat -lHSregex-posix
-lHSregex-base -lHSbase -lHSbase_cbits -lHSparsec -lHSrts -lgmp -lm
-lncurses  -ldl -lrt
GenApply.o(.text+0x13a55): In function `s5cr_info':
: undefined reference to `base_DataziList_zdsintersperse_info'
GenApply.o(.text+0x14c11): In function `s58p_info':
: undefined reference to `base_DataziList_zdsintersperse_info'
GenApply.o(.text+0x17d79): In function `s54x_info':
: undefined reference to `base_DataziList_zdsintersperse_info'
collect2: ld returned 1 exit status
gmake[1]: *** [genapply] Error 1
Failed making all in genapply: 1
gmake: *** [all] Error 1
gmake: Leaving directory `/tmp/ghc-6.6.1/utils'

Does anyone know why this might be occurring? The only reference to
base_DataziList_zdsintersperse_info on Google didn't seem to have an
answer.  For reference, my mk/build.mk is:

SRC_HC_OPTS = -H32m -O -fvia-C -Rghc-timing -keep-hc-files
GhcLibHcOpts= -O
GhcLibWays  =
SplitObjs   = NO

just like the example given on
http://hackage.haskell.org/trac/ghc/wiki/Building/Porting.
Also, in case this is related, I moved AutoApply_thr.thr_hc to
AutoApply_thr.hc (and likewise for debug, thr_debug, and thr_p).

Thanks,
Joseph Re




RE: [Haskell-cafe] Re: Ubuntu and ghc

2008-06-04 Thread Re, Joseph (IT)
Not sure about its current state, but a friend was working on this
until he graduated recently: http://www.acm.uiuc.edu/projects/Wipt

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ketil Malde

 Aren't there any usable third-party package managers for windoze?

The most usable one I've seen is Steam from Valve, IIRC.  It'd be cool
if Haskell packages were provided this way.

-k




RE: [Haskell-cafe] Looking for final year project - using Haskell, or another functional language

2007-07-22 Thread Re, Joseph (IT)
If you meant just non-graphics uses of a GPU, I don't know much in that
area, but I do know of a very neat graphics-related (but irregular on
standard GPUs) topic: ray tracing.
 
First off, a basic idea of what ray tracing is:
http://en.wikipedia.org/wiki/Ray_tracing
 
Second, for great reasons why ray tracing is useful and probably
necessary for the future, Dr. Kavita Bala gave a talk in 2005
(abstract:
http://www.acm.uiuc.edu/conference/2005/speakers.php#KavitaBalaAbstract
video:
http://www.acm.uiuc.edu/conference/2005/video/UIUC-ACM-RP05-Kavita-Bala.wmv).
I don't recall if she explicitly says it, but applying her
feature-emphasis approach to procedurally generated textures (which
should scale in resolution perfectly) would be a way for a modern engine
to really shine.
 
During the same conference Dr. Peter Shirley gave the talk "Real Time
Ray Tracing on the Desktop: When and How?" (abstract:
http://www.acm.uiuc.edu/conference/2005/speakers.php#PeteShirleyAbstract
video:
http://www.acm.uiuc.edu/conference/2005/video/UIUC-ACM-RP05-Peter-Shirley.wmv).
Shirley decides in that video that a hardware-based solution is
what he should shoot for, and his group joins the Saarland University
Graphics Group.
 
[Note: if you are willing to watch both of these I recommend watching
Dr. Shirley's first. Also, note that some videos have delayed starts.]
 
Shortly thereafter, their groups made a number of advancements in ray
tracing: they created a working RPU
(http://graphics.cs.uni-sb.de/SaarCOR/) on an FPGA board that is capable
of rendering basic scenes in realtime, and they came out with a
software solution that does roughly the same, called OpenRT
(http://www.openrt.de/).  Both implementations scale across multiple
processors very well due to the inherent parallelism in ray tracing.  My
guess is that a realtime engine (and a good game) using software ray
tracing will help it gain popularity, but then dedicated boards made
from the FPGA designs will take over shortly thereafter ("High end
graphics cards from NVIDIA provide 23 times more programmable floating
point performance and 100 times more memory bandwidth as our
prototype").
 
Concurrently, there has also been work on doing ray tracing on existing
GPUs.  Here's a neat paper from someone at my university on structures
for using a GPU for ray tracing, "Fast GPU Ray Tracing of Dynamic Meshes
using Geometry Images"
(http://graphics.cs.uiuc.edu/geomrt/geomrt2006.pdf).  I'm unaware whether
he applies any of the breakthrough techniques learned from OpenRT's
research (since it's very recent), but if not, that would certainly be a
viable topic to explore within the realm of GPGPU-ish work.
 
Getting a really performance-optimized Haskell implementation of OpenRT,
or bindings to an existing library
(http://liris.cnrs.fr/~bsegovia/yacort/ or http://xfrt.sourceforge.net/),
would be very cool, especially when combined with a shader DSL, a
procedural generation library+DSL, an overall graphics engine, and *dreams
on*.  At that point, someone could easily write the world's most
beautiful nethack with little artistic skill.
 
-
 
Non-graphics related GPGPU uses:
 
ATI's Stream Computing:
http://ati.amd.com/technology/streamcomputing/index.html
NVIDIA's CUDA: http://developer.nvidia.com/object/cuda.html
 
Both of these allow you to code in the standard C language on top of a
low-level assembly language layer and driver interface.  A number of open
source GPGPU libraries (BrookGPU
http://graphics.stanford.edu/projects/brookgpu/index.html and Sh
http://libsh.org/) shoot for C++.  Given how well programming GPUs lends
itself to a functional language, one would think they must clearly be
appealing to the lowest common denominator, but their language for the
hardcore is some god-awful assembly language...
 
[defense]
I get that they're performance nuts, but when's the last time you wrote
a large and meaningful application in assembly (not a single inlined
assembly function) by hand that beat using your compiler (in terms of
developer+system execution time)?  If you can answer, my guess is either
you're working on a compiler, or yours is broken.
[/defense]
 
Writing a real language for general programming on GPUs would be pretty
cool, although you'd then have to think of something to do with it, as
otherwise it might be a short project (a rough guess, considering the
existing work in the area).
 
 
Did either of these help?
 
-- Joseph Re



From: wp [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 12, 2007 6:41 PM
To: Re, Joseph (IT)
Subject: Re: [Haskell-cafe] Looking for final year project - using
Haskell,or another functional language


Hi Joseph,

no, I don't mind if you cc the list.
I just skimmed the two papers on Vertigo and Renaissance. Very
interesting ... and not just that. I have been following in the last
months/years the advances of GPGPU. Surely a lot of people think this is
nothing of practical usage, especially if it comes to serious

RE: [Haskell-cafe] Speedy parsing

2007-07-20 Thread Re, Joseph (IT)
Ah, I thought I might have to resort to one of the ByteString modules.
I've heard of them but was more or less avoiding them due to some
complexities with installing extra libraries in my current dev
environment.  I'll try to work that out with the sysadmins and try it
out.

Thanks for the advice

-Original Message-
From: Tillmann Rendel [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 19, 2007 8:48 PM
To: Re, Joseph (IT)
Cc: haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] Speedy parsing

Re, Joseph (IT) wrote:
 At this point I'm out of ideas, so I was hoping someone could identify

 something stupid I've done (I'm still novice of FP in general, let 
 alone for high performance) or direct me to a 
 guide,website,paper,library, or some other form of help.

Two ideas about your approaches:

(1) try to avoid explicit recursion by using some standard library
functions instead. It's easier (once you've learned the library) and may
be faster (since the library may be written in an easy-to-optimize style).

(2) try lazy ByteStrings, they should be faster.

   http://www.cse.unsw.edu.au/~dons/fps.html

As an example, sorting the individual lines of a CSV file by key:
csv parses the CSV format, uncsv produces it. These functions can't
handle '=' in the key or ',' in the key or value. treesort sorts by
inserting everything into a map and removing it in ascending order:

 import System.Environment
 import qualified Data.ByteString.Lazy.Char8 as B
 import qualified Data.Map as Map
 import Control.Arrow (second)

 csv = (map $ map $ second B.tail . B.break (== '=')) .
       (map $ B.split ',') .
       (B.split '\n')

 uncsv = (B.join $ B.pack "\n") .
         (map $ B.join $ B.pack ",") .
         (map $ map $ \(key, val) -> B.concat [key, B.pack "=", val])

 treesort = Map.toAscList . Map.fromList

 main = B.interact $ uncsv . map treesort . csv

   Tillmann




RE: [Haskell-cafe] Speedy parsing

2007-07-20 Thread Re, Joseph (IT)
Oh, I didn't realize how much of a difference it would make.  Thanks a
lot!

-- Joseph Re 

-Original Message-
From: Salvatore Insalaco [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 20, 2007 12:21 PM
To: Tillmann Rendel
Cc: Re, Joseph (IT); haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] Speedy parsing

2007/7/20, Tillmann Rendel [EMAIL PROTECTED]:
 Re, Joseph (IT) wrote:
  At this point I'm out of ideas, so I was hoping someone could 
  identify something stupid I've done (I'm still novice of FP in 
  general, let alone for high performance) or direct me to a 
  guide,website,paper,library, or some other form of help.

I think that your problem is simply an excess of laziness :).
foldr is too lazy: you pass huge structures to the fold that you
evaluate anyway.
Try importing Data.List and using foldl' (not foldl), which is strict.
Being a left fold, you'll probably need to rework your functions a bit (I
tried to simply flip them, but the results of the program were flipped
too), but the time (and memory) difference is unbelievable (foldl' is
constant in memory allocation).
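A minimal sketch of the strictness point, for reference (the module and
function names here are my own, not from the thread):

```haskell
import Data.List (foldl')

-- foldl' forces the accumulator at every step, so summing a million
-- elements runs in constant space; foldr (+) 0 would first build a
-- million-deep chain of thunks before any addition happens.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumStrict [1 .. 1000000])  -- prints 500000500000
```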

Salvatore




[Haskell-cafe] Speedy parsing

2007-07-19 Thread Re, Joseph (IT)
I was hoping someone could direct me to material on how I might go about
optimizing a small parsing program I wrote.  It is used for calculating
differences between two files that store a hash of hashes (each line is
a hash, made of key=value pairs separated by commas).  The bottleneck
seems to be getting the data into memory (in a usable structure)
quickly, as I'm working with extremely large files (250,000-line, 100MB+
files are normal in size, some are quite a bit larger, and I have to
process a large number of them often).
 
Extracting just the parsing elements of my current python
implementation:
FILE: compare.py
 import sys
 def parse(lines):
    print "Parsing",len(lines),"lines..."
    failed = 0
    hashes = {}
    for line in lines:
       try:
          hash = {}
          parts = line.split(",")
          for part in parts:
             k,v = part.split("=")
             hash[k] = v
          hashes[ hash["LogOffset"] ] = hash
       except:
          if line != "":
             failed += 1
    if failed > 0:
       print "[ERROR] Failed to parse:",failed,"lines"
    print "...Parsing resulted in",len(hashes),"unique hashes"
    return hashes

 def normalize(lines):
    lineset = set()
    for line in lines:
       if "Type=Heartbeat" == line[0:14]: pass
       elif line == "": pass
       else: lineset.add( line ) #use set to get only uniques
    return lineset

 def main():
    hashes = parse( normalize ( open(sys.argv[1]).readlines() ))

 if __name__ == '__main__': main()

$ time python compare.py 38807lineFile
Removed 52 bad lines
Parsing 38807 lines...
...Parsing resulted in 38807 unique hashes

real0m3.807s
user0m3.330s
sys 0m0.470s

$ time python compare.py 255919lineFile
Removed 0 bad lines
Parsing 255919 lines...
...Parsing resulted in 255868 unique hashes

real0m30.889s
user0m23.970s
sys 0m2.900s

*note: profiling shows over 7.1 million calls to the split() function,
to give you a good idea of the number of pairs the file contains.

Once you factor in much-increased file sizes, actually performing the
analysis, and running it on a few dozen files, my tests started to
become quite time-consuming (not to mention it takes 1GB of memory for
just the 250K-line files, although execution time is still much more
important at the moment, thanks to RAM being cheap).

Thusly, I figured I'd rewrite it in C, but first I wanted to give
Haskell a shot, if only to see how it compared to python (hoping maybe I
could convince my boss to let me use it more often if the results were
good).  The first thing I wanted to check was if parsec was a viable
option.  Without even putting the data into lists/maps, I found it too
slow.

FILE: compare_parsec.hs
 {-# OPTIONS_GHC -O2 #-}
 module Main where
 import System.Environment (getArgs)
 import Text.ParserCombinators.Parsec

 csv = do x <- record `sepEndBy` many1 (oneOf "\n")
          eof
          return x
 record = field `sepBy` char ','
 field = many (anyToken)

 main = do
    ~[filename] <- getArgs
    putStrLn "Parsing log..."
    res <- parseFromFile csv filename
    case res of
       Left err -> print err
       Right xs -> putStrLn "...Success"

$ time ./compare_parsec 38807lineFile
Parsing log...
...Success

real0m13.809s
user0m11.560s
sys 0m2.180s

$ time ./compare_parsec 255919lineFile
Parsing log...
...Success

real1m28.983s
user1m8.480s
sys 0m9.530s

This, sadly, is significantly worse than the Python code above.  Perhaps
someone here can offer advice on more efficient use of parsec?
Unfortunately, I don't have profiling libraries for parsec available on
this machine, nor have I had any luck finding material on the web.

After this, I tried doing my own parsing, since the format is strict and
regular.  I tried pushing it to lists (fast, not very usable) and maps
(much easier for the analysis stage and what the python code does, but
much worse speedwise).

FILE: compare_lists.hs
 {-# OPTIONS_GHC -O2 -fglasgow-exts #-}
 module Main where
 import System.Environment (getArgs)
 type Field = (String,String)
 type Record = [Field]
 type Log = [Record]

 main = do
    ~[filename1] <- getArgs
    file1 <- readFile filename1
    putStrLn "Parsing file1..."
    let data1 = parseLog file1
    print data1
    putStrLn "...Done"

 -- parse file
 parseLog :: String -> Log
 parseLog log = foldr f [] (lines log)
    where f "" a = a
          f "\n" a = a
          f x a = (parseRecord x):a
 -- parse record
 parseRecord :: String -> Record
 parseRecord record = foldr (\x a -> (parseField x):a) [] (split ',' record)
 -- parse field
 -- no error detection/handling now
 parseField :: String -> Field
 parseField s = (takeWhile isntCharEq s, tail $ dropWhile isntCharEq s)

 isntCharEq :: Char -> Bool
 isntCharEq '=' = False
 isntCharEq _ = True

 split :: Eq a => a -> [a] -> [[a]]
 split delim = foldr f [[]]
    where
       f x rest@(r:rs)
         | x == delim = [] : rest
         | otherwise = (x:r) : rs
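As a quick sanity check of the split helper, a self-contained sketch (the
sample record is my own, not from the post): delimiters are dropped, and
empty fields are preserved.

```haskell
-- The split helper from the post: a right fold that starts a new field
-- at each delimiter and conses other characters onto the current field.
split :: Eq a => a -> [a] -> [[a]]
split delim = foldr f [[]]
  where
    f x rest@(r:rs)
      | x == delim = [] : rest
      | otherwise  = (x : r) : rs

main :: IO ()
main = print (split ',' "LogOffset=1,Type=Foo,")
-- prints ["LogOffset=1","Type=Foo",""] -- the trailing comma yields ""
```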

I wasn't sure the best way to force evaluation on this, so I opt'd 

RE: [Haskell-cafe] Haskell shootout game

2007-07-16 Thread Re, Joseph (IT)
Interestingly enough, we're doing something very similar for [EMAIL PROTECTED]'s
2007 MechMania XIII contest (http://en.wikipedia.org/wiki/Mechmania), an
AI competition hosted during our annual Reflections Projections
conference.
 
  I can't release too many details until the day of the contest (Oct
13), but it's a tactical, grid-based combat game where you get one day to
write an AI (while testing in a pre-arena of sorts that is rendered to a
video wall in the middle of the conference building's atrium), and then
the next morning we run a (usually double-elimination) tournament and
display all the simulations on a giant projector.  You can look at
screenshots / client API docs from 2006 as an example until we post the
details the night of the contest.
 
  After the contest we post results, (hopefully) clean up the code, and
release it for people to play with.  We're not professionals, nor do we
mainly write games, but it should be clean enough for someone to modify
and play around with.
 
   I guess it goes without saying that you can actually enter the
contest proper by coming to the conference if you happen to live in the
middle of nowhere (Champaign-Urbana, IL USA).  Registration will be up
(www.acm.uiuc.edu/conference/) towards the end of summer.



From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Hugh Perkins
Sent: Sunday, July 15, 2007 2:47 PM
To: haskell-cafe
Subject: [Haskell-cafe] Haskell shootout game


Had an idea: a real shootout game for Haskell.

The arena itself comprises:
- a 2d grid, of a certain size (or maybe variable size) 
- each grid cell can be a wall, or one of the opponents
- the boundaries of the grid are walls
- random blocks of wall are placed around the grid

This can run on a hosted webserver, probably, because each match is part
of a webpage request and lasts a maximum of about a second, so it
shouldn't be terminated prematurely by cpu-monitoring scripts.




RE: [Haskell-cafe] Looking for final year project - using Haskell, or another functional language

2007-07-13 Thread Re, Joseph (IT)
I actually meant that simply as beyond opengl bindings, but added
'better' to make reference to Hugh's suggestion.  The website sure could
be better though ;)

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Claus Reinke
Sent: Thursday, July 12, 2007 7:23 PM
To: haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] Looking for final year project - using
Haskell,or another functional language



Building on what Hugh was getting at, beyond better opengl bindings,

i'm curious: just what do you think is missing in haskell's opengl
binding?

just be sure to ignore http://www.haskell.org/HOpenGL/ , which should be
moved to the wiki or to /dev/null. instead, look at the implementation,
mailing list and and api docs (which need to be read side-by-side with
the specs):

http://darcs.haskell.org/packages/OpenGL
http://www.haskell.org/mailman/listinfo/hopengl

 
http://www.haskell.org/ghc/docs/latest/html/libraries/OpenGL/Graphics-Rendering-OpenGL-GL.html

the mailing list has occasional progress info like this

http://www.haskell.org/pipermail/hopengl/2006-November.txt

Implement the entire opengl 1.3 interface specifications in Haskell.

 
http://www.haskell.org/ghc/docs/latest/html/libraries/OpenGL/Graphics-Rendering-OpenGL-GL-BasicTypes.html

This module corresponds to section 2.3 (GL Command Syntax)
of the OpenGL 2.1 specs.

claus





RE: [Haskell-cafe] Maintaining the community

2007-07-13 Thread Re, Joseph (IT)
Perhaps I haven't found the amazing treasure trove of open NNTP servers
you appear to have, but in my experience I've yet to find a single NNTP
server that is both good (read: access to most groups, and quick about
it) and free (read: not from my ISP, employer, or university, whose
servers, if provided at all, have always been quite limited in which
groups they serve), so I can completely understand where the others are
coming from in this regard.  Perhaps those of you who have found good,
free NNTP servers would care to share these well-kept secrets?

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andrew Coppin
Sent: Friday, July 13, 2007 5:04 PM
To: haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] Maintaining the community

Mark T.B. Carroll wrote:
 Andrew Coppin [EMAIL PROTECTED] writes:

   
 ...and when you view a web page, your web browser has to connect to a

 web server somewhere.

 I don't see your point...
 

 Very many news servers will only serve news to people on the network 
 of whoever's running the server: i.e. a rather restricted 'customer'
base.
   

Really? Most web servers will accept a connection from anybody. (Unless
it's *intended* to be an Intranet.) I'm not quite sure why somebody
would configure their NNTP server differently...





RE: [Haskell-cafe] Looking for final year project - using Haskell, or another functional language

2007-07-12 Thread Re, Joseph (IT)
Building on what Hugh was getting at, beyond better opengl bindings, I'd
be interested in what a modern real-time graphics engine would look like
in Haskell; not a game engine, just a very flexible and well-made
universal graphics engine.  I think a lot of ground has already been
broken here, with a practical example of Yampa via Frag,
http://aegisknight.org/papers/Renaissance%202005.pdf, and
http://conal.net/papers/Vertigo/ for purely functional programming of
shaders, etc.
 
At the same time, however, there's still a decent amount of work to be
explored outside that core:
* Representation of objects - internal scene-graph description and
optimization for different types of scenes, such as indoor (bsp?) and
landscapes (octree?), as well as issues wrt a scene or collection of
scenes' actual definition.
* Perhaps questions relating to collections of objects (hierarchical
issues).
 
* Procedural texture and model generation - some interesting work with
Pan and derivatives, but certainly nothing incorporated into a 3d engine
afaik.  That being said, it's important to be able to design it
separately besides just having the engine render it, but the popular
demoscene (http://www.werkkzeug.com/), professional
(http://www.profxengine.com/), and open source (http://www.fxgen.org/)
tools all use artist oriented design methods (linking literal function
boxes with arrows or stacking them upon each other) and thus are
inherently crippled in functionality (and they become incomprehensible
with any large size).  A proper DSL incorporating some of Pan's features
with the larger math libraries of the 3 examples above would allow a
superior tool by simply combining [text editor of your choice] and a
small app using the engine's procedural generation libraries to compile
your MaterialDescriptionLanguage code and provide a preview window.
 
* Somewhat related matters such as plugin based texture rendering (ie
rendering a video to texture via external video decoding plugin). 
 
* Automatically generated LOD meshes and detecting when to apply them
optimally.  Haven't personally read anything on this, but a quick search
on citeseer gives a large number of promising papers. Beyond the
graphics aspect there are also somewhat related networking issues
(simulation visualizers, multiplayer games) if you're more interested in
that.
 
* Animation - I know little about this.  (I'm told) Yampa could be of
great use, but I'm not sure how it ties in with standard animation
techniques with key frames, IK bones, and whatnot.
 
* Effects such as particle systems, post-processing, and cloth
simulation seem like a great place to exploit the easy concurrency
inherent to purity (see "Particle animation and rendering using data
parallel computation" for a start), although post-processing would be
very simple if you incorporated a good shader DSL similar to Vertigo, as
noted above.
 
* Ability to query the scene, for integration with other code: object
picking (that is, translating 2d->3d to figure out what the user
clicked) for apps, AI if used with a simulation or game of sorts (ie
accurate response to shadows cast by other actors), etc.  You might be
able to prevent this from forcing the rendering to pause, but nothing
comes to mind.  AFAIK, if STM retried your query you would get the next
frame's data (or later), which may still be ok, but the delay might be
visible to the user.
 
* Resource management for large asset collections.  The real trick here
is you need to stay real-time and so lazy methods simply won't work.  I
assume you could apply a good deal of techniques from
preemptive/speculative evaluation and garbage collection.  If you did
something with scene management above you could do a static analysis of
all the scenes that need to be processed before the user is willing to
wait (ie game-level, simulation-large time chunk?) to optimize when
you perform the loads/clears.  Dynamically generated data might pose a
few extra difficulties.
   Games such as Halo have hard coded hallways and elevators as times
for the graphics engine to load data, but I think a general heuristic
for figuring out something similar is feasible with 1-2 passes over a
(quasi?)4D scene graph (add [Tree] of the data required to render a
collection of scenes over time).


Just a few *very* rough ideas I'm thinking of at the moment; certainly
more (and more depth) to consider.  I apologize for the archaic
formatting; I'll try to TeX/wiki up a formal list of questions soon.

Hope you enjoy whichever project you end up choosing,
Joseph Re



From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Hugh Perkins
Sent: Tuesday, July 10, 2007 4:46 PM
To: wp; haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] Looking for final year project - using
Haskell,or another functional language


rpc layer, like .Net Remoting or ICE (but preferably without needing
configuration/interface files)

Course, if you know what you're