This is strange, because I'm actually working on a single-threaded ICENode and
don't have any memory leaks. The output port is a single structure of type
Vec3 in the 0D context.
I can run it on a mesh with 3 million points for 10,000 frames without any
memory issues.
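
For reference, the declaration is roughly the following shape; this is a
simplified sketch written from memory, with wizard-style placeholder names and
IDs rather than my real ones:

    #include <climits>              // ULONG_MAX
    #include <xsi_icenodedef.h>
    #include <xsi_status.h>
    using namespace XSI;

    // Wizard-style port IDs (placeholders)
    enum IDs { ID_OUT_result = 200, ID_UNDEF = ULONG_MAX };

    // Called from the wizard-generated Register callback, where nodeDef
    // comes from Application().GetFactory().CreateICENodeDef(...)
    CStatus DeclarePorts( ICENodeDef& nodeDef )
    {
        // Single-threaded evaluation, as in my node
        CStatus st = nodeDef.PutThreadingModel( siICENodeSingleThreading );
        st.AssertSucceeded();

        // One output: a single Vec3 per 0D component (point)
        st = nodeDef.AddOutputPort( ID_OUT_result,
                                    siICENodeDataVector3,
                                    siICENodeStructureSingle,
                                    siICENodeContextComponent0D,
                                    L"result", L"result",
                                    ID_UNDEF, ID_UNDEF, ID_UNDEF );
        st.AssertSucceeded();
        return st;
    }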

I'm using 2013 QFE7 btw.




On Tue, Oct 1, 2013 at 2:57 PM, Mathias N <mdawn...@gmail.com> wrote:

> Thanks for the reply.
>
> The trial version I'm using is 2014 SP2.
> Testing is being done in an empty scene with nothing but the newly created
> pointcloud and a simple ICE tree. There are no attribute display properties
> present.
>
> To rule out any other possible culprits I created two identical nodes that
> simply output the float input (0D component), one single-threaded and the
> other multi-threaded. Both were created via the wizard; the only manual
> change was to make each node pass the value through instead of printing it
> as the sample code does.
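>
> For reference, a minimal sketch of the changed Evaluate callback (from
> memory, assuming the wizard's default float ports and placeholder IDs):
>
>     #include <xsi_icenodecontext.h>
>     #include <xsi_dataarray.h>
>     #include <xsi_indexset.h>
>     #include <xsi_status.h>
>     using namespace XSI;
>
>     // Wizard-style port IDs (placeholders)
>     enum IDs { ID_IN_value = 0, ID_OUT_result = 200 };
>
>     SICALLBACK PassThroughFloat_Evaluate( ICENodeContext& in_ctxt )
>     {
>         switch( in_ctxt.GetEvaluatedOutputPortID() )
>         {
>             case ID_OUT_result:
>             {
>                 CDataArrayFloat outData( in_ctxt );                // output buffer
>                 CDataArrayFloat valueData( in_ctxt, ID_IN_value ); // input buffer
>
>                 // Copy each 0D component's value straight through
>                 CIndexSet indexSet( in_ctxt );
>                 for( CIndexSet::Iterator it = indexSet.Begin(); it.HasNext(); it.Next() )
>                     outData[ it ] = valueData[ it ];
>             }
>             break;
>         }
>         return CStatus::OK;
>     }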
>
> Switching between the two, the multi-threaded node barely affected memory
> usage, while the single-threaded node resulted in a steady increase in
> memory usage that was never released.
>
> It does not appear to have been fixed in SP2 :(
>
> Given that I am only passing a relatively modest number of components
> through it, I can imagine this would be quite a showstopper for larger sets
> of data.
>
>
>
> On Tue, Oct 1, 2013 at 7:25 PM, Guillaume Laforge <
> guillaume.laforge...@gmail.com> wrote:
>
>> Hi,
>>
>> Indeed, there have been some memory leak issues in the past. If you were
>> testing in 2014, you should try 2014 SP2, as it fixed some of the
>> ICE-related memory issues.
>> If 2014 SP2 doesn't fix the problem, you can also check whether you've got
>> any attribute display properties on your objects. They could be the culprit.
>>
>> Guillaume
>>
>>
>> On Tue, Oct 1, 2013 at 12:24 PM, Mathias N <mdawn...@gmail.com> wrote:
>>
>>> Please excuse any errors; mailing lists are confusing.
>>>
>>> While wrapping up work on a system that includes 5 custom ICE nodes, I
>>> noticed that memory usage was increasing by about 1 MB per second during
>>> playback.
>>>
>>> I eventually narrowed it down to a per-point-array node, the only
>>> single-threaded node in the setup. In a test scene I have it reading about
>>> 30 turbulized mesh point positions and spitting them into a single
>>> per-object array, and after leaving it for 15-30 minutes memory usage had
>>> increased from 400 MB to 1.5 GB. The memory is not released until the
>>> program is closed.
>>>
>>> Being at a loss as to what might be causing it, I removed everything but
>>> the absolutely necessary pieces of the code (port declarations etc.), only
>>> to find that the problem persisted.
>>>
>>> Next I created a new ICE node using the wizard, adding one input and one
>>> output port with the default settings, and setting it to single-threaded.
>>> This also slowly increases memory consumption when executed. Using an 8x8x8
>>> grid as input, it increases memory usage by about 150 KB per second.
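>>>
>>> For clarity, "single threaded" here refers to the threading model in the
>>> wizard-generated Register callback, which as far as I recall comes down to
>>> roughly this line:
>>>
>>>     CStatus st = nodeDef.PutThreadingModel( XSI::siICENodeSingleThreading );
>>>     st.AssertSucceeded();   // siICENodeMultiThreading would be the multi-threaded setting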
>>>
>>> If the input is singular (object context) there is no perceptible
>>> increase in memory usage. The rate of increase appears to scale with the
>>> number of input components.
>>>
>>> I primarily use Softimage 2011, but running it in the 2014 trial
>>> produces the same result. Were it a legitimate bug, it would seem unlikely
>>> that it would persist for so long.
>>>
>>> Could anyone shed some light on what is going on here? Is this normal?
>>>
>>> Thanks
>>>
>>>
>>
>>
>>
>
>
>
--------------------------
To unsubscribe: mail softimage-requ...@listproc.autodesk.com with subject 
"unsubscribe" and reply to the confirmation email.
