[Pharo-users] Fwd: [Esug-list] 2nd Visualization Contest with Roassal
Begin forwarded message: > From: Alexandre Bergel > Subject: [Esug-list] 2nd Visualization Contest with Roassal > Date: 9 Apr 2014 23:46:52 GMT+2 > To: ESUG Mailing list > > Dear colleagues and friends, > > We are happy to announce the Second Visualization Contest with Roassal. > > What can I win? > - 150 euros, sponsored by ObjectProfile > - an über-cool ObjectProfile T-shirt and some wonderful stickers > - maximum publicity of your work > - a nice award certificate, delivered during ESUG > > How can I win? > - you have to produce a visualization with Roassal. You can use the Pharo, > VisualWorks or VAST version of Roassal. Here is an example of what we expect: > http://bit.ly/sunburstDemo or http://bit.ly/GraphET > - making a video is mandatory. The video will weigh the most in our > decision. The video can be of any length and has to include your name and > say that it was made (partly or completely) with Roassal. No need to talk, > just show off! > - making your code available and easy to install will help you get more points > > How can I submit? > - send the links to your video and other material (if needed) to > i...@objectprofile.com Every email you send to this address will be > acknowledged. If you do not receive an 'Ok' from us, it means we haven't read > it; in that case, send your email again after a few days. > - the deadline for submitting is August 11, 2014 > > Mini FAQ > - Is the ObjectProfile team allowed to participate? No > - Should my visualization or code be open source? No need for this; whatever > license is fine. However, your video has to be public. > - How can I get more information? Just comment on Facebook > https://www.facebook.com/ObjectProfile or use Twitter @ObjectProfile or > send email to i...@objectprofile.com or the Pharo or Moose mailing list > - Can I submit two different videos? Yes, no problem with that > - Who will judge the videos? Both the ESUG community and the ObjectProfile > team. 
The ESUG community will have 30% of the final grade, ObjectProfile the > remaining 70% > > Cheers, > The profilers > > -- > _,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;: > Alexandre Bergel http://www.bergel.eu > ^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;. > > > ___ > Esug-list mailing list > esug-l...@lists.esug.org > http://lists.esug.org/mailman/listinfo/esug-list_lists.esug.org
Re: [Pharo-users] Zinc HTTP server seems to convert always ...
On 10.04.2014 03:09, Paul DeBruicker wrote: > +1 to Sven's comment and Marten - you should post this on the GemStone list > as one of their guys will be able to help you with the encoding issues. It's ok for me. Due to Sven's software I actually started using Gemstone - just to give the honor back to him. Marten -- Marten Feldtmann
Re: [Pharo-users] [Pharo-dev] Pharo Consortium: New Gold Member Lam Research
Great! Doru -- www.tudorgirba.com "Every thing has its own flow." > On 09.04.2014, at 11:38, Marcus Denker wrote: > > The Pharo Consortium is very happy to announce that Lam Research has joined > the Consortium as a Gold Industrial Member. > > More about > - Lam Research: http://lamrc.com > - Pharo Consortium: http://consortium.pharo.org > > > The goal of the Pharo Consortium is to allow companies to support the ongoing > development and future of Pharo. > > Individuals can support Pharo via the Pharo Association: > http://association.pharo.org
Re: [Pharo-users] Zinc HTTP server seems to convert always ...
+1 to Sven's comment and Marten - you should post this on the GemStone list as one of their guys will be able to help you with the encoding issues. Paul Sven Van Caekenberghe-2 wrote > On 09 Apr 2014, at 20:54, > itlists@ > wrote: > >> And now the additional information: I'm working under Gemstone and I >> noticed quite some differences between Pharo and its Gemstone port >> of Zinc in this area ... I have to take a closer look here. >> >> Marten > > I already expected that much. Yes, Zinc on Gemstone is seriously behind > the original. Furthermore, I have never seen or worked with it. And > although I have some sympathy for other Smalltalk implementations out > there, I find it difficult to give free support for an expensive, closed > source, commercial product. > > Sven -- View this message in context: http://forum.world.st/Zinc-HTTP-server-seems-to-convert-always-tp4753613p4753780.html Sent from the Pharo Smalltalk Users mailing list archive at Nabble.com.
Re: [Pharo-users] NeoJSON Parsing Nested Objects
It's basic but it works. :) Thanks Sean! Esteban A. Maringolo 2014-04-09 18:07 GMT-03:00 Sean P. DeNigris : > Esteban A. Maringolo wrote >>> I'm wrapping the Digital Ocean API. >> Cool ! > +1 > > The embryo is alive. See documentation at > http://smalltalkhub.com/#!/~SeanDeNigris/DigitalOcean > > A few supported operations: > DoDroplet allActive. > DoDroplet allActive detect: [ :e | e name = 'mycooldomain.org' ]. > DoDropletSize all. > DoDropletSize named: '512MB' > > > > - > Cheers, > Sean > -- > View this message in context: > http://forum.world.st/NeoJSON-Parsing-Nested-Objects-tp4753695p4753749.html > Sent from the Pharo Smalltalk Users mailing list archive at Nabble.com. >
Re: [Pharo-users] NeoJSON Parsing Nested Objects
Esteban A. Maringolo wrote >>> I'm wrapping the Digital Ocean API. >> Cool ! > +1 The embryo is alive. See documentation at http://smalltalkhub.com/#!/~SeanDeNigris/DigitalOcean A few supported operations: DoDroplet allActive. DoDroplet allActive detect: [ :e | e name = 'mycooldomain.org' ]. DoDropletSize all. DoDropletSize named: '512MB' - Cheers, Sean -- View this message in context: http://forum.world.st/NeoJSON-Parsing-Nested-Objects-tp4753695p4753749.html Sent from the Pharo Smalltalk Users mailing list archive at Nabble.com.
Re: [Pharo-users] Zinc HTTP server seems to convert always ...
On 09 Apr 2014, at 20:54, itli...@schrievkrom.de wrote: > And now the additional information: I'm working under Gemstone and I > noticed quite some differences between Pharo and its Gemstone port > of Zinc in this area ... I have to take a closer look here. > > Marten I already expected that much. Yes, Zinc on Gemstone is seriously behind the original. Furthermore, I have never seen or worked with it. And although I have some sympathy for other Smalltalk implementations out there, I find it difficult to give free support for an expensive, closed source, commercial product. Sven
Re: [Pharo-users] Zinc HTTP server seems to convert always ...
And now the additional information: I'm working under Gemstone and I noticed quite some differences between Pharo and its Gemstone port of Zinc in this area ... I have to take a closer look here. Marten -- Marten Feldtmann
Re: [Pharo-users] NeoJSON Parsing Nested Objects
On 09 Apr 2014, at 19:28, Sean P. DeNigris wrote: > Sean P. DeNigris wrote >> I'd like to convert them to DropletSize objects > > Duh :-P > > reader for: DoResponse do: [ :m | > m mapInstVar: #status. > (m mapInstVar: #contents to: #sizes) valueSchema: > #ArrayOfDropletSizes ]. BTW, I did some recent enhancements to NeoJSON mapping that could be interesting. http://www.smalltalkhub.com/#!/~SvenVanCaekenberghe/Neo/commits like #nextPut:as:
Re: [Pharo-users] Zinc HTTP server seems to convert always ...
On 09 Apr 2014, at 19:35, itli...@schrievkrom.de wrote: > Ok, forget the JSON stuff - it has nothing to do with the "problem". > > Other way round: > > My whole database and internal processing is done in UTF8. This is the > most important point here to mention. Why ? This means you forgo almost all String functionality, since UTF8 is a variable-length encoding not really suitable for character-by-character processing. > Now the request comes into Zinc as mentioned below (the content of the > request is a JSON string only): > > HTML-Request (charset=UTF-8) =(sends)=> ZINC HTTP > > Now Zinc sees the content of the body, knows that it is coded in UTF8 > and creates a ZnStringEntity with UTF8Encoder. > > Zinc HTTP =(builds)=> ZnStringEntity (with UTF8Encoder) > > The instance of ZnRequest and its entity value is an instance of > ZnStringEntity (with its encoder attribute set to an instance of > ZnUTF8Encoder). Yes, of course, UTF-8 (a variable-length binary encoding) is converted into native Pharo Strings (possibly WideStrings) containing Characters, each of which is encoded using a Unicode code point value. > I checked the content of the string attribute of the ZnStringEntity and > this string is NOT encoded in UTF8 any more, but in either ISO8859-? > or WIN1252. Here you lose me (again) ;-) > I think that this is ok for almost all people, because they work with > some CodePages - but my internal processing assumes UTF8. No, nobody works with code pages or any encoding, just native [Wide]Strings in pure Unicode. > I just fixed this for me by changing ZnStringEntity>>initializeEncoder > to ALWAYS set the encoder attribute to ZnNullEncoder and now everything > is ok again. This means, of course, that all applications running with > that source code work in UTF-8 only ... OK, I think I understand, you want UTF-8 to remain UTF-8. What you did is one solution, but I think it is wrong to use a String to represent bytes. 
This case is actually already implemented server side for Seaside:

ZnZincServerAdaptor>>#configureServerForBinaryReading
	"Seaside wants to do its own text conversions"
	server reader: [ :stream | ZnRequest readBinaryFrom: stream ]

The #reader: option is used here to read everything binary, without decoding to Strings. You will get ZnByteArrayEntity objects back, containing the original binary representation. BTW, I think this is an interesting discussion. Regards, Sven > Marten > > On 09.04.2014 18:42, Sven Van Caekenberghe wrote: >> Marten, >> >> On 09 Apr 2014, at 18:25, itli...@schrievkrom.de wrote: >> >>> Ok, if the browser sends POST/PUT request with a JSON structure it also >>> sends charset = utf8 (in my case). That's ok, because for JSON this is >>> more or less the default charset. >>> >>> Zinc now seems to notice, that UTF8 charset is needed and creates a >>> ZnStringEntity with an UTF8Encoder. >>> >>> Now when my application tries to get the JSON string of that >>> ZnStringEntity and builds the structure out of that string - and the >>> strings are NOT UTF8, but converted to (?) ISO8859 ? >> >> (NeoJSONReader fromString: >> (ZnEntity with: (NeoJSONWriter toString: { #message -> 'An der schönen >> blauen Donau' } asDictionary))) >> at: #message. >> >> You must be doing something possibly wrong when you <<try to get the JSON string of that ZnStringEntity and build the structure out of that string>> (how do >> you do that, BTW), so please write some code that demonstrates what is not >> right according to you. >> >> Sven >> > > > -- > Marten Feldtmann >
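[Editorial note] The encode/decode round trip being discussed can be made concrete in a workspace; a small sketch using Zinc's ZnUTF8Encoder (the example string is taken from the thread):

```smalltalk
"A Pharo String holds Unicode code points; UTF-8 only exists at the byte level.
Encoding 'schönen' yields one extra byte because ö (U+00F6) takes two bytes."
string := 'An der schönen blauen Donau'.
bytes := ZnUTF8Encoder new encodeString: string.   "a ByteArray, one byte longer than the String"
decoded := ZnUTF8Encoder new decodeBytes: bytes.   "back to the original String"
decoded = string.                                  "true"
```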
Re: [Pharo-users] NeoJSON Parsing Nested Objects
2014-04-09 14:56 GMT-03:00 Sven Van Caekenberghe : > > On 09 Apr 2014, at 19:18, Sean P. DeNigris wrote: > >> I'm wrapping the Digital Ocean API. > > Cool ! +1 I use it for my Pharo hosting too (plus a JIRA instance on another droplet). Anybody using Supervisord or any similar tool to "manage" Pharo instances? Regards!
Re: [Pharo-users] NeoJSON Parsing Nested Objects
On 09 Apr 2014, at 19:18, Sean P. DeNigris wrote: > I'm wrapping the Digital Ocean API. Cool !
Re: [Pharo-users] Error: No factory specified
After renaming Cache to AbstractCache (or whatever else; it has no global references) I was able to load Glorp and have it running with the NativePostgresDriver. However, there is a mix of things that are unnecessarily interdependent. For instance: * Running the tests, the default Glorp login doesn't have any encoding strategy, but one is required by the Glorp session. So many tests fail. * The Encoders are in the DBXTalk packages :) I'm willing to help here, but what approach should we take? I look forward to advice and guidance to accomplish this. Regards, Esteban A. Maringolo 2014-04-09 6:22 GMT-03:00 Sven Van Caekenberghe : > Thanks, Stephan. > > I will have a look and try to load/use Glorp myself towards the end of the > week. > > On 09 Apr 2014, at 11:08, Stephan Eggermont wrote: > >> Ok, there is a bug in the ConfigurationOfDBXDriver: >> the baseline has a versionString entry. That should be removed. >> >> After removing that (and renaming Cache) it loads. >> >> Glorp loads already after renaming Cache. >> I'm not sure the isKindOf: on ProtoObject is safe. >> There is a renaming issue after loading Glorp (13188) >> >> Stephan > >
Re: [Pharo-users] Socket Handles to C
On 9 April 2014 16:59, Sean P. DeNigris wrote: > How do Socket handles map to C socket IDs? I'm wrapping libssh2 via Native > Boost and I'd much rather create sockets via Smalltalk than C, but I'm not > sure how to pass the handle to a C function expecting a C socket ID. Maybe > I'm missing something really simple? Thanks. > > The socket plugin uses its own data structure for socket handles; it includes different kinds of internal information, including the OS-specific socket handle:

typedef struct {
  int sessionID;
  int socketType;          /* 0 = TCP, 1 = UDP */
  void *privateSocketPtr;
} SQSocket, *SocketPtr;

Extracting the OS-specific handle could be problematic, since it is in that opaque void *privateSocketPtr field. Then, in sqUnixSocket.c:

typedef struct privateSocketStruct {
  int s;                      /* Unix socket */
  int connSema;               /* connection io notification semaphore */
  int readSema;               /* read io notification semaphore */
  int writeSema;              /* write io notification semaphore */
  int sockState;              /* connection + data state */
  int sockError;              /* errno after socket error */
  union sockaddr_any peer;    /* default send/recv address for UDP */
  socklen_t peerSize;         /* dynamic sizeof(peer) */
  union sockaddr_any sender;  /* sender address for last UDP receive */
  socklen_t senderSize;       /* dynamic sizeof(sender) */
  int multiListen;            /* whether to listen for multiple connections */
  int acceptedSock;           /* a connection that has been accepted */
} privateSocketStruct;

So, to extract the OS-level socket handle *on Unix*, you have to do:

handle = ((privateSocketStruct *) sqsocket.privateSocketPtr)->s;

where sqsocket is an SQSocket. 
There are even macros for that:

/*** Accessors for private socket members from a Squeak socket pointer ***/
#define _PSP(S)            (((S)->privateSocketPtr))
#define PSP(S)             ((privateSocketStruct *)((S)->privateSocketPtr))
#define SOCKET(S)          (PSP(S)->s)
#define SOCKETSTATE(S)     (PSP(S)->sockState)
#define SOCKETERROR(S)     (PSP(S)->sockError)
#define SOCKETPEER(S)      (PSP(S)->peer)
#define SOCKETPEERSIZE(S)  (PSP(S)->peerSize)

> > - > Cheers, > Sean > -- > View this message in context: > http://forum.world.st/Socket-Handles-to-C-tp4753619.html > Sent from the Pharo Smalltalk Users mailing list archive at Nabble.com. > > -- Best regards, Igor Stasenko.
Re: [Pharo-users] Zinc HTTP server seems to convert always ...
Ok, forget the JSON stuff - it has nothing to do with the "problem". Other way round: My whole database and internal processing is done in UTF8. This is the most important point here to mention. Now the request comes into Zinc as mentioned below (the content of the request is a JSON string only): HTML-Request (charset=UTF-8) =(sends)=> ZINC HTTP Now Zinc sees the content of the body, knows that it is coded in UTF8 and creates a ZnStringEntity with UTF8Encoder. Zinc HTTP =(builds)=> ZnStringEntity (with UTF8Encoder) The instance of ZnRequest and its entity value is an instance of ZnStringEntity (with its encoder attribute set to an instance of ZnUTF8Encoder). I checked the content of the string attribute of the ZnStringEntity and this string is NOT encoded in UTF8 any more, but in either ISO8859-? or WIN1252. I think that this is ok for almost all people, because they work with some CodePages - but my internal processing assumes UTF8. I just fixed this for me by changing ZnStringEntity>>initializeEncoder to ALWAYS set the encoder attribute to ZnNullEncoder and now everything is ok again. This means, of course, that all applications running with that source code work in UTF-8 only ... Marten On 09.04.2014 18:42, Sven Van Caekenberghe wrote: > Marten, > > On 09 Apr 2014, at 18:25, itli...@schrievkrom.de wrote: > >> Ok, if the browser sends POST/PUT request with a JSON structure it also >> sends charset = utf8 (in my case). That's ok, because for JSON this is >> more or less the default charset. >> >> Zinc now seems to notice, that UTF8 charset is needed and creates a >> ZnStringEntity with an UTF8Encoder. >> >> Now when my application tries to get the JSON string of that >> ZnStringEntity and builds the structure out of that string - and the >> strings are NOT UTF8, but converted to (?) ISO8859 ? > > (NeoJSONReader fromString: > (ZnEntity with: (NeoJSONWriter toString: { #message -> 'An der schönen > blauen Donau' } asDictionary))) > at: #message. 
> > You must be doing something possibly wrong when you <<try to get the JSON string of that ZnStringEntity and build the structure out of that string>> (how do you > do that, BTW), so please write some code that demonstrates what is not right > according to you. > > Sven > -- Marten Feldtmann
Re: [Pharo-users] NeoJSON Parsing Nested Objects
Sean P. DeNigris wrote > I'd like to convert them to DropletSize objects Duh :-P

reader for: DoResponse do: [ :m |
    m mapInstVar: #status.
    (m mapInstVar: #contents to: #sizes) valueSchema: #ArrayOfDropletSizes ].

- Cheers, Sean -- View this message in context: http://forum.world.st/NeoJSON-Parsing-Nested-Objects-tp4753696.html Sent from the Pharo Smalltalk Users mailing list archive at Nabble.com.
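[Editorial note] For the snippet above to work, the #ArrayOfDropletSizes value schema it references still has to be registered on the reader. A hedged sketch of the complete mapping, using NeoJSON's schema-mapping selectors (check them against your NeoJSON version; the DoDropletSize instance variable names are assumed from the JSON keys in the original question):

```smalltalk
"Register a custom list schema, map the element class, then map the response."
reader := NeoJSONReader on: jsonString readStream.
reader for: #ArrayOfDropletSizes customDo: [ :m |
    m listOfElementSchema: DoDropletSize ].
reader for: DoDropletSize do: [ :m |
    m mapInstVars: #(id name slug memory cpu disk) ].
reader for: DoResponse do: [ :m |
    m mapInstVar: #status.
    (m mapInstVar: #contents to: #sizes) valueSchema: #ArrayOfDropletSizes ].
response := reader nextAs: DoResponse.
```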
[Pharo-users] NeoJSON Parsing Nested Objects
I'm wrapping the Digital Ocean API. This particular response has a status, and an array of droplet sizes. For example: '{"status":"OK","sizes":[{"id":66,"name":"512MB","slug":"512mb","memory":512,"cpu":1,"disk":20,"cost_per_hour":0.00744,"cost_per_month":"5.0"},{"id":63,"name":"1GB","slug":"1gb","memory":1024,"cpu":1,"disk":30,"cost_per_hour":0.01488,"cost_per_month":"10.0"}]}' As a start, I did:

reader := NeoJSONReader on: jsonString readStream.
reader for: DoResponse customDo: [ :m |
    m decoder: [ :dict |
        DoResponse new
            status: (dict at: 'status');
            contents: (dict at: 'sizes') ] ].
response := reader nextAs: DoResponse.
response isOk ifFalse: [ self error: 'Query failed!' ].
^ response contents.

However, the size objects are still plain dictionaries. I'd like to convert them to DropletSize objects. And I'd rather leverage NeoJSON than implement a custom DropletSize fromDictionary: if possible. What's the best way to handle this? Thanks. - Cheers, Sean -- View this message in context: http://forum.world.st/NeoJSON-Parsing-Nested-Objects-tp4753695.html Sent from the Pharo Smalltalk Users mailing list archive at Nabble.com.
Re: [Pharo-users] Zinc HTTP server seems to convert always ...
Marten, On 09 Apr 2014, at 18:25, itli...@schrievkrom.de wrote: > Ok, if the browser sends POST/PUT request with a JSON structure it also > sends charset = utf8 (in my case). That's ok, because for JSON this is > more or less the default charset. > > Zinc now seems to notice, that UTF8 charset is needed and creates a > ZnStringEntity with an UTF8Encoder. > > Now when my application tries to get the JSON string of that > ZnStringEntity and builds the structure out of that string - and the > strings are NOT UTF8, but converted to (?) ISO8859 ? (NeoJSONReader fromString: (ZnEntity with: (NeoJSONWriter toString: { #message -> 'An der schönen blauen Donau' } asDictionary))) at: #message. You must be doing something possibly wrong when you <<try to get the JSON string of that ZnStringEntity and build the structure out of that string>> (how do you do that, BTW), so please write some code that demonstrates what is not right according to you. Sven
Re: [Pharo-users] Zinc HTTP server seems to convert always ...
Ok, if the browser sends POST/PUT request with a JSON structure it also sends charset = utf8 (in my case). That's ok, because for JSON this is more or less the default charset. Zinc now seems to notice, that UTF8 charset is needed and creates a ZnStringEntity with an UTF8Encoder. Now when my application tries to get the JSON string of that ZnStringEntity and builds the structure out of that string - and the strings are NOT UTF8, but converted to (?) ISO8859 ? Marten -- Marten Feldtmann
Re: [Pharo-users] Memory profiling: usage per class - instance in old memory
On 09 Apr 2014, at 18:01, Thomas Bany wrote: > Thanks ! > > That's exactly what I was looking for. There is a compare method I don't quite > understand but I think I found what is going on. > > I failed to grasp that an Array replies to #sizeInMemory with its own size, > without the sizes of its references. A single position object weighs 96 > bytes, which makes the whole Array weigh around 8 MB, and the 32 objects > around 250 MB. Yes, #sizeInMemory is confusing; a recursive version would be nice, but also dangerous as it could get into a loop. Here is something related: http://www.humane-assessment.com/blog/traversal-enabled-pharo-objects/ If you want to use a huge data structure, you have to think carefully about your representations. There are tricks you can use to conserve memory: use more primitive types (SmallIntegers, bit flags, Symbols), use shared instances, use alternatives like ZTimestamp which is half the size of DateAndTime, or use your own integer time, sparse data structures, and so on - and you can hide these optimisations behind your standard API. > I'm not sure I can get around that aspect since the computation is costly > and I need its output multiple times. > > I will do further testing to see why the memory is not released at the end > of the execution. Good luck. > Thanks again ! > > > > 2014-04-09 15:19 GMT+02:00 Sven Van Caekenberghe : > Hi Thomas, > > Fixing memory consumption problems is hard, but important: memory efficient > code is automatically faster in the long run as well. > > Your issue sounds serious. However, I would start by trying to figure out > what is happening at your coding level: somehow you (or something you use) > must be holding on to too much memory. Questioning low level memory management > functionality should be the last resort, not the first. > > There is SpaceTally that you could use before and after running part of your > code. Once something unexpected survives GC, there is the PointerFinder > functionality (Inspector > Explore Pointers) to find what holds onto objects. > But no matter what, it is hard. > > If you have some public code that you could share to demonstrate your > problem, then we could try to help. > > Sven > > On 09 Apr 2014, at 12:54, Thomas Bany wrote: > > > Hi, > > > > My app is a parser/filter for binary files, that produces a bunch of ASCII > > files. > > > > At the beginning of the parsing, the filtering step involves the storage of > > the positions of 32 objects, each second for a whole day. So that's 32 > > Arrays with 86400 elements each. > > > > During this step, the memory used by my image grows from 50Mb to ~500Mb. I > > find it far too large since I'm pretty sure my arrays are the largest > > objects I create and only weigh something like 300kb. > > > > The profiling of the app shows that the footprint of the "old memory" went > > up by 350Mb. Which I'm pretty sure is super bad. Maybe as a consequence, > > after the parsing is finished, the memory footprint of the image stays at > > ~500Mb > > > > What are the tools I have to find where precisely the memory usage explodes > > ? For example, is it possible to browse the "old memory" objects to see > > which one fails to get GC'ed ? > > > > Thanks in advance, > > > > Thomas. > -- Sven Van Caekenberghe Proudly supporting Pharo http://pharo.org http://association.pharo.org http://consortium.pharo.org
Re: [Pharo-users] Zinc HTTP server seems to convert always ...
On 09 Apr 2014, at 17:29, itli...@schrievkrom.de wrote: > The browser sends UTF8 data and in my application code I get instances > of ZnStringEntity and the contained string is converted to (?) ISO8859-1 > (?) or CP-1252 (?). This seems to be due to the fact, that the entity > instance always has a ZnUTF8Encoder to do the conversion. I still don't understand the problem, but consider this: ZnServer startDefaultOn: 1701. ZnClient new url: 'http://localhost:1701/echo'; entity: (ZnEntity with: 'An der schönen blauen Donau'); post. ZnClient new url: 'http://localhost:1701/echo'; entity: (ZnEntity with: 'An der schönen blauen Donau' type: (ZnMimeType textPlain charSet: #'iso-8859-1'; yourself)); post; yourself. In the first case, a UTF-8 encoded string is POST-ed and correctly returned (in a UTF-8 encoded response). In the second case, an ISO-8859-1 encoded string is POST-ed and correctly returned (in a UTF-8 encoded response). In both cases the decoding was done correctly, using the specified charset (if that is missing, the ZnNullEncoder is used). Now, ö is not a perfect test example because its encoding value in Unicode, 246 decimal, U+00F6 hex, still fits in 1 byte and hence survives null encoding/decoding. That is why the following still works, although it is wrong to drop the charset. ZnClient new url: 'http://localhost:1701/echo'; entity: (ZnEntity with: 'An der schönen blauen Donau' type: (ZnMimeType textPlain clearCharSet; yourself)); post; yourself. HTH, Sven -- Sven Van Caekenberghe http://stfx.eu Smalltalk is the Red Pill
Re: [Pharo-users] Memory profiling: usage per class - instance in old memory
Thanks ! That's exactly what I was looking for. There is a compare method I don't quite understand, but I think I found what is going on. I failed to grasp that an Array replies to #sizeInMemory with its own size, without the sizes of its references. A single position object weighs 96 bytes, which makes the whole Array weigh around 8 MB, and the 32 objects around 250 MB. I'm not sure I can get around that aspect, since the computation is costly and I need its output multiple times. I will do further testing to see why the memory is not released at the end of the execution. Thanks again ! 2014-04-09 15:19 GMT+02:00 Sven Van Caekenberghe : > Hi Thomas, > > Fixing memory consumption problems is hard, but important: memory > efficient code is automatically faster in the long run as well. > > Your issue sounds serious. However, I would start by trying to figure out > what is happening at your coding level: somehow you (or something you use) > must be holding on to too much memory. Questioning low level memory management > functionality should be the last resort, not the first. > > There is SpaceTally that you could use before and after running part of > your code. Once something unexpected survives GC, there is the > PointerFinder functionality (Inspector > Explore Pointers) to find what > holds onto objects. But no matter what, it is hard. > > If you have some public code that you could share to demonstrate your > problem, then we could try to help. > > Sven > > On 09 Apr 2014, at 12:54, Thomas Bany wrote: > > > Hi, > > > > My app is a parser/filter for binary files, that produces a bunch of > ASCII files. > > > > At the beginning of the parsing, the filtering step involves the storage > of the positions of 32 objects, each second for a whole day. So that's 32 > Arrays with 86400 elements each. > > > > During this step, the memory used by my image grows from 50Mb to ~500Mb. > I find it far too large since I'm pretty sure my arrays are the largest > objects I create and only weigh something like 300kb. > > > > The profiling of the app shows that the footprint of the "old memory" > went up by 350Mb. Which I'm pretty sure is super bad. Maybe as a > consequence, after the parsing is finished, the memory footprint of the > image stays at ~500Mb > > > > What are the tools I have to find where precisely the memory usage > explodes ? For example, is it possible to browse the "old memory" objects > to see which one fails to get GC'ed ? > > > > Thanks in advance, > > > > Thomas. > > >
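[Editorial note] The back-of-envelope arithmetic above can be checked directly in a workspace:

```smalltalk
"96 bytes per position object, 86400 samples (one per second for a day),
32 tracked objects. The Array itself only reports its header plus slots."
86400 * 96.        "8294400 -- about 8 MB for one day of one object"
86400 * 96 * 32.   "265420800 -- about 250 MB for all 32 objects"
(Array new: 86400) sizeInMemory.  "header + one pointer slot per element only;
                                   the referenced position objects are not counted"
```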
Re: [Pharo-users] Zinc HTTP server seems to convert always ...
On 09.04.2014 17:29, itli...@schrievkrom.de wrote: > On 09.04.2014 15:29, Sven Van Caekenberghe wrote: >> Hi Marten, >> >> I will need (much) more detail: what are you trying to do that is not >> working according to you ? >> >> As far as I know Zinc HTTP Components does the right thing and can be used >> (configured) to do almost anything you want. It mostly depends on your mime >> types and their charset options. >> > > The browser sends UTF8 data and in my application code I get instances > of ZnStringEntity and the contained string is converted to (?) ISO8859-1 > (?) or CP-1252 (?). This seems to be due to the fact that the entity > instance always has a ZnUTF8Encoder to do the conversion. > > I would like to have UTF8 everywhere ... without all these conversions ... > There are no automatic conversions in Zinc. So Zinc is one of the pieces of software I know that do not assume stupid defaults :) Did you specify a proper Content-Type header including the charset information? Otherwise Zinc has no chance of knowing what to use, and the default NullEncoder will make your string as wrong as in your example above. Norbert
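[Editorial note] Norbert's point can be illustrated with a sketch; the URL is made up, and the ZnEntity/ZnMimeType selectors are the ones used elsewhere in this thread. Declaring the charset in the Content-Type is what lets the receiving side pick the right decoder:

```smalltalk
"Declare the charset explicitly so the server decodes with ZnUTF8Encoder
instead of falling back to the null encoder. The URL is hypothetical."
ZnClient new
    url: 'http://localhost:8080/handler';
    entity: (ZnEntity
        with: '{"message":"An der schönen blauen Donau"}'
        type: (ZnMimeType applicationJson charSet: 'utf-8'; yourself));
    post.
```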
Re: [Pharo-users] [ANN] WIP iStoa
LOL. So you have no choice but to prepare an archive with your compiled VM for me, so I will use it ;-) (my system uses glibc 2.15 anyway) Hilaire On 09/04/2014 14:29, Bernat Romagosa wrote: > Ouch! I think it was myself who compiled it... I'll check again tomorrow! > -- Dr. Geo http://drgeo.eu
Re: [Pharo-users] Zinc HTTP server seems to convert always ...
On 09.04.2014 15:29, Sven Van Caekenberghe wrote: > Hi Marten, > > I will need (much) more detail, what are you trying to do that is not working > according to you ? > > As far as I know Zinc HTTP Components does the right thing and can be used > (configured) to do almost anything you want. It mostly depends on your mime > types and their charset options. > The browser sends UTF8 data and in my application code I get instances of ZnStringEntity and the contained string is converted to (?) ISO8859-1 (?) or CP-1252 (?). This seems to be due to the fact that the entity instance always has a ZnUTF8Encoder to do the conversion. I would like to have UTF8 everywhere ... without all these conversions ... Marten -- Marten Feldtmann
[Pharo-users] Socket Handles to C
How do Socket handles map to C socket IDs? I'm wrapping libssh2 via Native Boost and I'd much rather create sockets via Smalltalk than C, but I'm not sure how to pass the handle to a C function expecting a C socket ID. Maybe I'm missing something really simple? Thanks. - Cheers, Sean -- View this message in context: http://forum.world.st/Socket-Handles-to-C-tp4753619.html Sent from the Pharo Smalltalk Users mailing list archive at Nabble.com.
Re: [Pharo-users] [ANN] BabyMock 2
Thanks Guillaume. It works well. With BabyMock2, the syntax is => aBlock Le 9 avr. 2014 à 11:13, Guillaume Larcheveque a écrit : > I think you can have your mock send an exception by using #answers:aBlock and > signal the exception in the block > > > 2014-04-08 15:58 GMT+02:00 Christophe Demarey : > Hello, > > Thanks. Very nice library! > > I have a question: is it possible to expect a method to throw an Exception? > > I would like something like: > protocol describe > once: mock recv: #aMethod ; > signal: anError. > I did not find anything in the documentation. > > I don't want to test that the mock signals an error but I want to test a code > that needs to take into account an exception. > > Best regards, > Christophe. > > Le 11 mars 2014 à 12:30, Attila Magyar a écrit : > > > I'm pleased to announce the 2.0 version of BabyMock. BabyMock is a visual > > mock object library that supports test-driven development. > > > > This version has a new syntax which is incompatible with the old version. > > Therefore it has a new repository > > http://smalltalkhub.com/#!/~zeroflag/BabyMock2 > > (BabyMock 1 is still available at its old location, but in the future I'd to > > focus on the development of BabyMock2, so don't expect too many changes > > regarding the old version). > > > > Changes in 2.0 > > > > - A new, extensible DSL (no more should/can) > > - Improved error messages, history of messages, detailed information about > > argument mismatches > > - An improved, Spec based GUI > > - Clicking on a mock opens an inspector on the expectations > > - Clicking on a message opens an inspector on the message > > - Object methods can be mocked by defaults > > - Blocks can be executed after receiving a message by the mock. The block > > has access to the arguments of the incoming message. > > - Any argument matcher > > - Cleanups and simplifications in the code > > > > I hope you don't mind the changes regarding the syntax. 
Personally I think > > it has a lot more pros than cons. > > > > More information > > > > http://smalltalkhub.com/#!/~zeroflag/BabyMock2 > > > > p.s. > > It needs Pharo 3.0. > > > > Attila > > > > > > > > -- > > View this message in context: > > http://forum.world.st/ANN-BabyMock-2-tp4748530.html > > Sent from the Pharo Smalltalk Users mailing list archive at Nabble.com. > > > > > > > -- > Guillaume Larcheveque > smime.p7s Description: S/MIME cryptographic signature
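Guillaume's suggestion can be sketched concretely. This is a hedged illustration only: the `=> aBlock` action syntax and the `once:recv:` selectors come from the messages above, while `mock`, `protocol`, `objectUnderTest` and the error are placeholders, so treat the exact API as an assumption:

```smalltalk
"Sketch: make the mock signal an error when #aMethod is received,
using the => aBlock action syntax mentioned in this thread."
protocol describe
	once: mock recv: #aMethod;
	=> [ :args | Error signal: 'simulated failure' ].

"The code under test can then be exercised against the raised error:"
[ objectUnderTest runWith: mock ]
	on: Error
	do: [ :ex | "assert the recovery behaviour here" ].
```

Note that this tests the handling code itself, not the mock, which matches Christophe's stated intent.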
Re: [Pharo-users] Zinc HTTP server seems to convert always ...
Hi Marten, I will need (much) more detail: what exactly are you trying to do that is not working for you? As far as I know, Zinc HTTP Components does the right thing and can be configured to do almost anything you want. It mostly depends on your mime types and their charset options. Sven On 09 Apr 2014, at 15:20, itli...@schrievkrom.de wrote: > Hey, > > it seems to me that Zinc - out of the box - converts from/to > UTF-8. How can I tell Zinc to do NO conversion, so that ZnStringEntity > leaves its strings as they are? I am fighting here with German > Umlauts. > > Marten > > -- > Marten Feldtmann >
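One way to steer the conversion Sven mentions, sketched under the assumption that Zinc derives its character encoder from the mime type's charset parameter (class and selector names are from Zinc as I recall them; verify against your version):

```smalltalk
"Sketch: declare latin-1 explicitly on the entity's mime type so Zinc
does not fall back to its utf-8 default when encoding the body."
| mimeType entity |
mimeType := ZnMimeType textPlain.
mimeType charSet: 'iso-8859-1'.
entity := ZnEntity with: 'Grüße, Marten' type: mimeType.
```

The same idea applies on the server side: whatever charset the mime type declares is what Zinc should use when reading and writing the entity's string.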
[Pharo-users] Zinc HTTP server seems to convert always ...
Hey, it seems to me that Zinc - out of the box - converts from/to UTF-8. How can I tell Zinc to do NO conversion, so that ZnStringEntity leaves its strings as they are? I am fighting here with German Umlauts. Marten -- Marten Feldtmann
Re: [Pharo-users] Memory profiling: usage per class - instance in old memory
Hi Thomas, Fixing memory consumption problems is hard, but important: memory-efficient code is automatically faster in the long run as well. Your issue sounds serious. However, I would start by trying to figure out what is happening at your coding level: somehow you (or something you use) must be holding on to too much memory. Questioning low-level memory management functionality should be the last resort, not the first. There is SpaceTally, which you could use before and after running part of your code. Once something unexpected survives GC, there is the PointerFinder functionality (Inspector > Explore Pointers) to find what holds onto objects. But no matter what, it is hard. If you have some public code that you could share to demonstrate your problem, then we could try to help. Sven On 09 Apr 2014, at 12:54, Thomas Bany wrote: > Hi, > > My app is a parser/filter for binary files that produces a bunch of ASCII > files. > > At the beginning of the parsing, the filtering step involves the storage of > the positions of 32 objects, each second for a whole day. So that's 32 Arrays > with 86400 elements each. > > During this step, the memory used by my image grows from 50MB to ~500MB. I > find it far too large since I'm pretty sure my arrays are the largest objects > I create and only weigh something like 300KB. > > The profiling of the app shows that the footprint of the "old memory" went up > by 350MB. Which I'm pretty sure is super bad. Maybe as a consequence, after > the parsing is finished, the memory footprint of the image stays at ~500MB. > > What are the tools I have to find precisely where the memory usage explodes? > For example, is it possible to browse the "old memory" objects to see which > ones fail to get GC'ed? > > Thanks in advance, > > Thomas.
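The first tool Sven names can be tried from a playground; `SpaceTally` and `Smalltalk garbageCollect` are stock Pharo, though the exact report format and location vary by version:

```smalltalk
"Force a full garbage collection, then dump a per-class space analysis
(instance counts and bytes) that can be compared before and after the
suspect code runs."
Smalltalk garbageCollect.
SpaceTally new printSpaceAnalysis.
```

Taking one snapshot before the parsing step and one after shows which classes account for the extra memory; surviving suspects can then be chased with Explore Pointers.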
Re: [Pharo-users] Memory profiling: usage per class - instance in old memory
I meant a single array weighs something like 300KB, with the 32 of them weighing around 10MB. I tried to look closely at the way the memory (with VirtualMachine>>#memoryEnd) was incrementing, and it follows this pattern: - The memory-costly function is definitely the one storing my position arrays: #setPositionECEFOn: - Throughout the computation of the 32 daily positions, the memory used by the VM goes from 52MB to 363MB. Is this normal behaviour? On a side note, the computation of a single epoch (which is done 32*24*3600 times) uses 19 local variables. Not sure it is relevant. 2014-04-09 13:10 GMT+02:00 Stephan Eggermont : > Your calculation seems to be off. > 32 * 86400 objects = 2.8 million objects. A shortint = 4 bytes, making > 10.6 MB > Everything else (except value objects) is larger. > > Stephan >
Re: [Pharo-users] [ANN] WIP iStoa
Ouch! I think it was me who compiled it... I'll check again tomorrow! 2014-04-09 14:00 GMT+02:00 Sergi Reyner : > 2014-04-09 8:25 GMT+01:00 Hilaire Fernandes : > > So it looks like the VM I used was compiled against GNU C Lib version >> 2.15 (stock Pharo VM from a few months ago), but more recent stock Pharo >> VMs are compiled with older GNU C lib dependencies. And this is nice. >> > > +10 for the person who made it compile against an older glibc. There's > life beyond Ubuntu! > > Cheers, > Sergi > -- Bernat Romagosa.
Re: [Pharo-users] Error: No factory specified
On 09.04.2014 00:51, Stephan Eggermont wrote: > That would be nice but creates problems. There is existing third- > party code we want to use. We can be much more precise in naming, and > should not use generic names for specific implementations. > "When #totalWeight is no longer below #maximumWeight, the least > recently used item of the cache is evicted (removed) to make room." > is not part of the Cache behavior, but of LRUCache. Sorry for the noise, but I couldn't resist: "There are only two hard problems in computer science: cache invalidation and naming things." -- Phil Karlton
Re: [Pharo-users] [ANN] WIP iStoa
2014-04-09 8:25 GMT+01:00 Hilaire Fernandes : > So it looks like the VM I used was compiled against GNU C Lib version > 2.15 (stock Pharo VM from a few months ago), but more recent stock Pharo > VMs are compiled with older GNU C lib dependencies. And this is nice. > +10 for the person who made it compile against an older glibc. There's life beyond Ubuntu! Cheers, Sergi
Re: [Pharo-users] Memory profiling: usage per class - instance in old memory
Your calculation seems to be off. 32 * 86400 objects = 2.8 million objects. A shortint = 4 bytes, making 10.6 MB. Everything else (except value objects) is larger. Stephan
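Stephan's estimate can be checked with a one-liner (assuming the elements are SmallIntegers, which are 4-byte immediates on the 32-bit VM):

```smalltalk
"32 arrays of 86400 four-byte slots, ignoring the small per-array
header overhead."
| bytes |
bytes := 32 * 86400 * 4.      "11059200 bytes"
bytes / (1024 * 1024.0)       "about 10.5 MB, matching the estimate"
```

Floats or other boxed objects in the arrays would multiply this, but nowhere near the ~450MB Thomas observes, which is why the arrays themselves are probably not the culprit.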
[Pharo-users] Memory profiling: usage per class - instance in old memory
Hi, My app is a parser/filter for binary files that produces a bunch of ASCII files. At the beginning of the parsing, the filtering step involves the storage of the positions of 32 objects, each second for a whole day. So that's 32 Arrays with 86400 elements each. During this step, the memory used by my image grows from 50MB to ~500MB. I find it far too large since I'm pretty sure my arrays are the largest objects I create and only weigh something like 300KB. The profiling of the app shows that the footprint of the "old memory" went up by 350MB. Which I'm pretty sure is super bad. Maybe as a consequence, after the parsing is finished, the memory footprint of the image stays at ~500MB. What are the tools I have to find precisely where the memory usage explodes? For example, is it possible to browse the "old memory" objects to see which ones fail to get GC'ed? Thanks in advance, Thomas.
Re: [Pharo-users] Error: No factory specified
Thanks, Stephan. I will have a look and try to load/use Glorp myself towards the end of the week. On 09 Apr 2014, at 11:08, Stephan Eggermont wrote: > Ok, there is a bug in the ConfigurationOfDBXDriver: > the baseline has a versionString entry. That should be removed. > > After removing that (and renaming Cache) it loads. > > Glorp already loads after renaming Cache. > I'm not sure the isKindOf: on ProtoObject is safe. > There is a renaming issue after loading Glorp (13188) > > Stephan
Re: [Pharo-users] [ANN] BabyMock 2
I think you can have your mock send an exception by using #answers: aBlock and signal the exception in the block 2014-04-08 15:58 GMT+02:00 Christophe Demarey : > Hello, > > Thanks. Very nice library! > > I have a question: is it possible to expect a method to throw an Exception? > > I would like something like: > protocol describe > once: mock recv: #aMethod ; > signal: anError. > I did not find anything in the documentation. > > I don't want to test that the mock signals an error, but I want to test > code that needs to take an exception into account. > > Best regards, > Christophe. > > On 11 March 2014 at 12:30, Attila Magyar wrote: > > > I'm pleased to announce the 2.0 version of BabyMock. BabyMock is a visual > > mock object library that supports test-driven development. > > > > This version has a new syntax which is incompatible with the old version. > > Therefore it has a new repository > > http://smalltalkhub.com/#!/~zeroflag/BabyMock2 > > (BabyMock 1 is still available at its old location, but in the future > I'd like to > > focus on the development of BabyMock2, so don't expect too many changes > > regarding the old version). > > > > Changes in 2.0 > > > > - A new, extensible DSL (no more should/can) > > - Improved error messages, history of messages, detailed information > about > > argument mismatches > > - An improved, Spec-based GUI > > - Clicking on a mock opens an inspector on the expectations > > - Clicking on a message opens an inspector on the message > > - Object methods can be mocked by default > > - Blocks can be executed after receiving a message by the mock. The block > > has access to the arguments of the incoming message. > > - Any-argument matcher > > - Cleanups and simplifications in the code > > > > I hope you don't mind the changes regarding the syntax. Personally I > think > > it has a lot more pros than cons. > > > > More information > > > > http://smalltalkhub.com/#!/~zeroflag/BabyMock2 > > > > p.s. > > It needs Pharo 3.0. 
> > > > Attila > > > > > > > > -- > > View this message in context: > http://forum.world.st/ANN-BabyMock-2-tp4748530.html > > Sent from the Pharo Smalltalk Users mailing list archive at Nabble.com. > > > > -- *Guillaume Larcheveque*
Re: [Pharo-users] Error: No factory specified
Ok, there is a bug in the ConfigurationOfDBXDriver: the baseline has a versionString entry. That should be removed. After removing that (and renaming Cache) it loads. Glorp already loads after renaming Cache. I'm not sure the isKindOf: on ProtoObject is safe. There is a renaming issue after loading Glorp (13188). Stephan
[Pharo-users] How to remap a keyboard?
I need to develop something on Linux at the moment. I use VirtualBox with Linux on my Mac laptop. It is basically ok to work with, but the keyboard mapping is not ok for Pharo. I configured the keyboard in Linux to be German (Macintosh), but the special keys are all wrong. Being a long-time Linux/open source guy, I know it will be a major pain to tweak it in Linux (I think I last did that with xkeymap 15 years ago). On the other hand, I have the advantage of learning something new if I tweak the mapping in Pharo instead of the operating system. So what would be the entry points in Pharo if I want to remap my keyboard? Meaning rearranging the association of a bunch of keycodes to the interpreted character. thanks, Norbert
Re: [Pharo-users] Pharo Consortium: New Gold Member Lam Research
On 09 Apr 2014, at 10:38, Marcus Denker wrote: > The Pharo Consortium is very happy to announce that Lam Research has joined > the Consortium as a Gold Industrial Member. > > More about > - Lam Research: http://lamrc.com > - Pharo Consortium: http://consortium.pharo.org Again, great news ! > The goal of the Pharo Consortium is to allow companies to support the ongoing > development and future of Pharo. > > Individuals can support Pharo via the Pharo Association: > http://association.pharo.org
[Pharo-users] Pharo Consortium: New Gold Member Lam Research
The Pharo Consortium is very happy to announce that Lam Research has joined the Consortium as a Gold Industrial Member. More about - Lam Research: http://lamrc.com - Pharo Consortium: http://consortium.pharo.org The goal of the Pharo Consortium is to allow companies to support the ongoing development and future of Pharo. Individuals can support Pharo via the Pharo Association: http://association.pharo.org
Re: [Pharo-users] open file in append mode
On 09.04.2014 at 10:17, Sven Van Caekenberghe wrote: > Norbert, > > Just use #setToEnd once you opened the file stream. > Thank you. I didn't find this one. I wasn't thinking it through and added upToEnd: and wondered why my image explodes on startup :) Norbert > HTH, > > Sven > > On 09 Apr 2014, at 10:08, Norbert Hartl wrote: > >> How can I open a file in append mode in order to start writing at the end of >> the file? >> >> thanks, >> >> Norbert > >
Re: [Pharo-users] open file in append mode
Hi! I use #setToEnd. Cheers 2014-04-09 10:08 GMT+02:00 Norbert Hartl : > How can I open a file in append mode in order to start writing at the end > of the file? > > thanks, > > Norbert >
Re: [Pharo-users] open file in append mode
Norbert, Just use #setToEnd once you opened the file stream. HTH, Sven On 09 Apr 2014, at 10:08, Norbert Hartl wrote: > How can I open a file in append mode in order to start writing at the end of > the file? > > thanks, > > Norbert
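Sven's tip can be sketched with the classic FileStream API (the file name is a placeholder, and the file is assumed to exist already):

```smalltalk
"Open an existing file without truncating it, position the write
pointer at the end with #setToEnd, then append."
| stream |
stream := FileStream oldFileNamed: 'server.log'.
[ stream setToEnd; nextPutAll: 'appended line'; cr ]
	ensure: [ stream close ].
```

The key point is using #oldFileNamed: (open existing) rather than #newFileNamed: or #forceNewFileNamed:, which would truncate or replace the file before #setToEnd can do anything useful.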
Re: [Pharo-users] [ANN] WIP iStoa
Sorry Hilaire, I just got to work and was about to check it. Nice to see you've already found the cause though :) 2014-04-09 9:25 GMT+02:00 Hilaire Fernandes : > Ok I got it, well I hope so. Debian Wheezy's GNU C lib is version 2.13. > So it looks like the VM I used was compiled against GNU C Lib version > 2.15 (stock Pharo VM from a few months ago), but more recent stock Pharo > VMs are compiled with older GNU C lib dependencies. And this is nice. > > If I am wrong, I will be happy to read a clarification. > > Thanks > > Hilaire > > On 07/04/2014 at 10:01, Bernat Romagosa wrote: > > Just FYI, I had to replace the shipped vm and plugins by the latest ones > > to get it to work on my Debian Wheezy machine. > > > > -- > Dr. Geo http://drgeo.eu > > > -- Bernat Romagosa.
[Pharo-users] open file in append mode
How can I open a file in append mode in order to start writing at the end of the file? thanks, Norbert
Re: [Pharo-users] [ANN] WIP iStoa
Where did you get your VM? The one on the Pharo web site does not have the same libc version dependencies. Thanks Hilaire On 08/04/2014 at 10:48, Bernat Romagosa wrote: > And this is the output of ldd -v on the vm I used to run iStoa: > -- Dr. Geo http://drgeo.eu
Re: [Pharo-users] [ANN] WIP iStoa
Ok I got it, well I hope so. Debian Wheezy's GNU C lib is version 2.13. So it looks like the VM I used was compiled against GNU C Lib version 2.15 (stock Pharo VM from a few months ago), but more recent stock Pharo VMs are compiled with older GNU C lib dependencies. And this is nice. If I am wrong, I will be happy to read a clarification. Thanks Hilaire On 07/04/2014 at 10:01, Bernat Romagosa wrote: > Just FYI, I had to replace the shipped vm and plugins by the latest ones > to get it to work on my Debian Wheezy machine. > -- Dr. Geo http://drgeo.eu
Re: [Pharo-users] What about a Pharo day in May-June?
On 09 Apr 2014, at 08:25, Pharo4Stef wrote: > Hi guys > > I think that it would be great to organize a gathering/talks/show me your > stuff/networking day before spring. > What do you think of this idea? > We could host it at Lille. > > The question is whether we shouldn't just do it at ESUG… people cannot travel twice. Marcus
Re: [Pharo-users] [Pharo-dev] What about a Pharo day in May-June?
There is already a Pharo sprint just after the MOOSEDAY organized in Paris in June. Sent from my iPhone > On 9 Apr 2014 at 08:25, Pharo4Stef wrote: > > Hi guys > > I think that it would be great to organize a gathering/talks/show me your > stuff/networking day before spring. > What do you think of this idea? > We could host it at Lille. > > Stef