> On 10.11.2016 at 10:42, Tudor Girba <tu...@tudorgirba.com> wrote:
> 
> Hi Igor,
> 
> I am happy to see you getting active again. The next step is to commit code 
> at the rate you reply to emails. I'd be even happier :).
> 
+1

> To address your point: of course it would be great to have more people work 
> on automated support for swapping data in and out of the image. That was the 
> original idea behind the Fuel work. I have seen a couple of cases on the 
> mailing lists where people are actually using Fuel for caching purposes. I 
> have done this a couple of times, too. But at this point these are dedicated 
> solutions, and it would be interesting to see this expand further.
> 
And still it would be too general. The only thing you can say is that swapping 
in/out makes things slower, so you usually don't want swapping to happen at all. 
It is comparable to swap space in operating systems: in many scenarios, relying 
on swap at all is an architectural design failure. So before resources actually 
get scarce, there are good reasons not to care too much. And if you do want to 
do it, there is no general solution. How do you swap out a partial graph with 
Fuel? How do you load back only a small part of the graph you swapped out? Do we 
need to reify object references into objects in order to make that smart?
It is understandable from a developer's perspective: you have a real problem you 
should solve, but then you make up all sorts of technical problems that you 
think you need to solve instead of the original one. That is one prominent way 
projects fail.
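
For what it's worth, a minimal sketch of the naive "swap out / swap in" idea 
with Fuel (someBigObject is just a placeholder for whatever root you want to 
evict; it assumes Fuel's class-side convenience methods as shipped with Pharo 
at the time):

  "swap out: serialize the graph reachable from the root, then drop the reference"
  FLSerializer serialize: someBigObject toFileNamed: 'bigObject.fuel'.
  someBigObject := nil.
  Smalltalk garbageCollect.

  "swap in: materialize it again only when it is actually needed"
  someBigObject := FLMaterializer materializeFromFileNamed: 'bigObject.fuel'.

Note that this serializes the whole graph reachable from that root; the 
partial-graph questions above are exactly what this naive approach does not 
answer.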

> However, your assumption is that the best design is one that deals with small 
> chunks of data at a time. This made a lot of sense when memory was expensive 
> and small. But these days the cost is going down very rapidly, 128+ GB of RAM 
> is nowadays quite cheap, and there are strong signs that super large 
> non-volatile memories are becoming increasingly accessible. Software design 
> should take advantage of what the hardware offers, so it is not unreasonable 
> to want a GC that can deal with large sizes.
> 
Be it small chunks of data or not, a statement that general is most likely to 
be wrong, so the best response might be to ignore it. Indeed, you are right 
that hardware has become cheap. Even more important is the fact that hardware 
is almost always cheaper than personnel costs. Solving all those technical 
problems instead of the real ones, and not trying to act economically, ruins a 
lot of companies out there. You can ignore economic facts (or any other facts), 
but that doesn't make you really smart!

my 2 cents,

Norbert


> We should always challenge the assumptions behind our designs, because the 
> world keeps changing and we risk becoming irrelevant, a syndrome that is not 
> foreign to Smalltalk aficionados.
> 
> Cheers,
> Doru
> 
> 
>> On Nov 10, 2016, at 9:12 AM, Igor Stasenko <siguc...@gmail.com> wrote:
>> 
>> 
>> On 10 November 2016 at 07:27, Tudor Girba <tu...@tudorgirba.com> wrote:
>> Hi Igor,
>> 
>> Please refrain from speaking down to people.
>> 
>> 
>> Hi, Doru!
>> I just wanted to hear you :)
>> 
>> If you have a concrete solution for how to do things, please feel free to 
>> share it with us. We would be happy to learn from it.
>> 
>> 
>> Well, there are so many solutions that I don't even know what to offer, and 
>> given the potential of Smalltalk, I wonder why you are not employing any. 
>> But overall it is a question of storing most of your data on disk and only a 
>> small portion of it in the image (in the most optimal case, only the portion 
>> that the user sees/operates on).
>> As I said to you before, you will hit this wall inevitably, no matter how 
>> much memory is available.
>> So, what stops you from digging in that direction?
>> Because even if you can fit all the data in memory, consider how much time 
>> it takes for the GC to scan 4+ GB of memory, compared to 100 MB or less.
>> I don't think you'll find it convenient to work in an environment where you 
>> have 2-3 second pauses between mouse clicks.
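
(For anyone who wants to put a rough number on that, a quick probe, not a 
benchmark; figures vary wildly per machine and image, and timeToRun answers 
milliseconds in older images and a Duration in newer ones:)

  | before graph after |
  before := [ Smalltalk garbageCollect ] timeToRun.
  "allocate a few million pointer objects so the GC marker has real work to do"
  graph := (1 to: 2000000) collect: [ :i | i -> i printString ].
  after := [ Smalltalk garbageCollect ] timeToRun.
  graph := nil.
  { before. after }  "full-GC time before vs. after inflating the heap"
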
>> So, of course, my tone is not acceptable, but it is painful to see how 
>> people remain helpless without even thinking about doing what they need. We 
>> have had Fuel for how many years now? It could hardly be easier: just 
>> serialize the data and purge it from the image until it is required again. 
>> Sure, it will require some effort, but that is nothing compared to the 
>> day-to-day pain you have to tolerate because of the lack of a solution.
>> 
>> Cheers,
>> Tudor
>> 
>> 
>>> On Nov 10, 2016, at 4:11 AM, Igor Stasenko <siguc...@gmail.com> wrote:
>>> 
>>> Nice progress, indeed.
>>> Now I hope that, at the end of the day, the guys doing data 
>>> mining/statistical analysis will finally shut up and happily be able to 
>>> work with more bloat, without needing to learn ways to properly manage 
>>> memory & resources and finally implement them.
>>> But I guess the silence won't last long before they again start screaming 
>>> in despair: please help, my bloat doesn't fit into memory... :)
>>> 
>>> On 9 November 2016 at 12:06, Sven Van Caekenberghe <s...@stfx.eu> wrote:
>>> OK, I am quite excited about the future possibilities of 64-bit Pharo. So I 
>>> played a bit more with the current test version [1], trying to push the 
>>> limits. In the past, it was only possible to safely allocate about 1.5GB of 
>>> memory even though a 32-bit process' limit is theoretically 4GB (the OS and 
>>> the VM need space too).
>>> 
>>> Allocating a couple of 1GB ByteArrays is one way to push memory use, but it 
>>> feels a bit silly. So I loaded a bunch of projects (including Seaside) to 
>>> push the class/method counts (7K classes, 100K methods) and wrote a script 
>>> [2] that basically copies part of the class/method metadata, including 2 
>>> copies of each method's source code as well as its AST (bypassing the 
>>> cache, of course). This feels more like a real object graph.
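
(For reference, the "silly" way looks roughly like this, assuming a 64-bit 
image and enough free RAM; keep the result referenced, e.g. in an inspector, or 
the next GC simply reclaims it:)

  | slabs |
  "five slabs of 1 GB of raw bytes, roughly 5 GB of real memory"
  slabs := (1 to: 5) collect: [ :i | ByteArray new: 1024 * 1024 * 1024 ].
  slabs
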
>>> 
>>> I had to create no less than 7 (SEVEN) copies (each kept open in an 
>>> inspector) to break through the mythical 4GB limit (real allocated & used 
>>> memory).
>>> 
>>> <Screen Shot 2016-11-09 at 11.25.28.png>
>>> 
>>> I also have the impression that the image shrinking problem is gone 
>>> (closing everything frees memory; saving the image returns it to its 
>>> original size, 100MB in this case).
>>> 
>>> Great work, thank you. Bright future again.
>>> 
>>> Sven
>>> 
>>> PS: Yes, GC is slower; No, I did not yet try to save such a large image.
>>> 
>>> [1]
>>> 
>>> VM here: http://bintray.com/estebanlm/pharo-vm/build#files/
>>> Image here: http://files.pharo.org/get-files/60/pharo-64.zip
>>> 
>>> [2]
>>> 
>>> | meta |
>>> ASTCache reset.
>>> meta := Dictionary new.
>>> Smalltalk allClassesAndTraits do: [ :each | | classMeta methods |
>>>  (classMeta := Dictionary new)
>>>    at: #name put: each name asSymbol;
>>>    at: #comment put: each comment;
>>>    at: #definition put: each definition;
>>>    at: #object put: each.
>>>  methods := Dictionary new.
>>>  classMeta at: #methods put: methods.
>>>  each methodsDo: [ :method | | methodMeta |
>>>    (methodMeta := Dictionary new)
>>>      at: #name put: method selector;
>>>      at: #source put: method sourceCode;
>>>      at: #ast put: method ast;
>>>      at: #args put: method argumentNames asArray;
>>>      at: #formatted put: method ast formattedCode;
>>>      at: #comment put: (method comment ifNotNil: [ :str | str withoutQuoting ]);
>>>      at: #object put: method.
>>>    methods at: method selector put: methodMeta ].
>>>  meta at: each name asSymbol put: classMeta ].
>>> meta.
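
(Not part of Sven's script: to hand the memory back afterwards, which is 
roughly what closing the inspectors achieves, dropping the last strong 
reference and forcing a full GC should be enough, assuming meta is still bound, 
e.g. as a playground variable:)

  meta := nil.               "drop the only strong reference to the copied graph"
  Smalltalk garbageCollect.  "force a full GC"
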
>>> 
>>> 
>>> 
>>> --
>>> Sven Van Caekenberghe
>>> Proudly supporting Pharo
>>> http://pharo.org
>>> http://association.pharo.org
>>> http://consortium.pharo.org
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> --
>>> Best regards,
>>> Igor Stasenko.
>> 
>> --
>> www.tudorgirba.com
>> www.feenk.com
>> 
>> "We can create beautiful models in a vacuum.
>> But, to get them effective we have to deal with the inconvenience of 
>> reality."
>> 
>> 
>> 
>> 
>> 
>> -- 
>> Best regards,
>> Igor Stasenko.
> 
> --
> www.tudorgirba.com
> www.feenk.com
> 
> "Not knowing how to do something is not an argument for how it cannot be 
> done."
> 
> 

