Talking about GBs makes me wonder whether, even if the image allowed more than
500 MB, that would be a wise move.

Smalltalk is, after all, one of the slower programming languages in terms of
execution speed. In Python, which is as slow if not slower, the usual attitude
is to move big data into C libraries and then interface with those C libraries
from Python.

When I see even Morphic at times take up to 50% of my dual-core 2.0 GHz CPU, it
makes me wonder whether cramming 1 GB or more into the image would end up
bringing Pharo to a crawl while processing that data.

So what Pharo may need is something NativeBoost is already doing: mapping to C
types and access to fast C functions for processing and manipulating big data.
Just make it easier.
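
As a rough sketch (from memory of the NativeBoost examples, untested here; the
class you install the method on and the exact pragma details are assumptions),
a binding to the C library's fabs() looks roughly like this:

    fabs: aFloat
        "Call the C library's fabs() directly; NativeBoost marshals the
         Smalltalk Float to a C double and back."
        <primitive: #primitiveNativeCall module: #NativeBoostPlugin>
        ^ self nbCall: #( double fabs (double aFloat) )
            module: NativeBoost CLibrary

The point being that the heavy number crunching happens on the C side, and the
image only keeps the handles and the results.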

For example, Python has NumPy (mostly C code, extra array types, etc.), and
nowadays Blaze is also emerging as a future candidate:

https://github.com/ContinuumIO/blaze

just a thought. 


