Thanks for replying and for the help. I tried increasing worker.heap.memory.mb
to 2048, but it didn't work; DRPC stopped working. I don't understand why it
stops, since the data set I used is smaller than the first one. My data set
consists of tweets that I'm processing. I also tried local mode, but that
didn't work either; the result stalls at 57.7 KB. Is there anything I should
share?
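
For reference, here is roughly where the two settings fit, as a minimal
sketch (assuming the Storm 1.x Java API; the topology name, class name and
the spout/bolt wiring are just placeholders):

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class HeapConfigSketch {
    public static void main(String[] args) throws Exception {
        // Cluster side: in storm.yaml I set
        //   worker.heap.memory.mb: 2048
        // Topology side: the per-worker heap cap suggested below.
        Config conf = new Config();
        conf.put(Config.TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB, 2048);

        TopologyBuilder builder = new TopologyBuilder();
        // ... DRPC spout/bolt wiring for the tweet processing goes here ...
        StormSubmitter.submitTopology("tweets-drpc-topology", conf, builder.createTopology());
    }
}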

On Wed, Jun 28, 2017 at 2:56 PM, Navin Ipe <[email protected]>
wrote:

> @Sam: You've provided very little information for us to help you. Prima
> facie, if you have allocated very little memory to your topologies, Storm is
> obviously running out of memory and the spouts and bolts are restarting, which
> causes the "Connection reset by peer" error.
> The solution is to allow Storm to use more RAM (assuming there is more
> RAM). For example:
>
> int RAM_IN_MB = 2048;
> stormConfig.put(Config.TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB, RAM_IN_MB);
>
> If you provide more details about the error, when it happens, and what your
> program is trying to accomplish, the others on this forum will be able to
> help you better.
>
>
> On Wed, Jun 28, 2017 at 3:30 PM, sam mohel <[email protected]> wrote:
>
>> Is there any help, please?
>>
>> On Wednesday, June 28, 2017, sam mohel <[email protected]> wrote:
>> > I submitted two topologies in production mode. The first one has a
>> > data set of 215 MB; it worked well and gave me the results. The second
>> > topology has a data set of 170 MB with the same configuration, but it
>> > stopped working after some time and didn't complete its results.
>> > The error I got in the DRPC log file is:
>> >     TNonblockingServer [WARN] Got an IOException in internalRead!
>> >     java.io.IOException: Connection reset by peer
>> > I couldn't figure out where the problem is, since it was supposed to
>> > work well given that the second data set is smaller.
>>
>
>
>
> --
> Regards,
> Navin
>
