You can use 3772 as long as it is not already in use. Try running netstat
-ant | grep 3772 before you start the topology; there should be no references
to it. After you submit the topology you should see a LISTEN entry if it
started correctly.
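If you'd rather check from the JVM side, here is a minimal sketch (nothing Storm-specific is assumed; PortCheck and isListening are names I made up for illustration):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if something is already listening on host:port,
    // false if the connection is refused or times out.
    static boolean isListening(String host, int port) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 1000);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Before submitting: expect false. After the DRPC server starts: expect true.
        System.out.println(isListening("127.0.0.1", 3772));
    }
}
```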

But also, I think storm.yaml supports only a single DRPC port, so I doubt you
can submit two topologies using different DRPC ports. I would think all active
topologies have to use a common port.
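For reference, a sketch of what the cluster-side storm.yaml would carry (the address is the placeholder used in this thread; 3772 is the conventional DRPC port):

```yaml
drpc.servers:
    - "192.168.x.x"
drpc.port: 3772
```

As far as I know, topologies sharing that server are then told apart by the DRPC function name they register, not by port.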

On Wed, Jul 12, 2017 at 10:00 AM, sam mohel <[email protected]> wrote:

> Yes, I tried another topology using DRPC and it worked; the one I'm trying
> now is the exception. It worked in local mode, but the same distributed
> configuration that worked for the first topology does not work with this
> one. I will try 127.0.0.1. I used 3772 with the first topology and it
> worked; can I use it again with the second, or should I change it to 3774?
>
> I really appreciate your time.
>
> On Wed, Jul 12, 2017 at 3:49 PM, J.R. Pauley <[email protected]> wrote:
>
>> you really have 192.168.x.x literally? I would think that would never
>> work. Have you tried 127.0.0.1 to see if it makes any diff? Have you
>> verified the port is open and listening?
>>
>> On Wed, Jul 12, 2017 at 9:16 AM, sam mohel <[email protected]> wrote:
>>
>>> I set it in the code:
>>> Config conf = new Config();
>>> List<String> drpcServers = new ArrayList<String>();
>>> drpcServers.add("192.168.x.x");
>>> conf.put(Config.DRPC_SERVERS, drpcServers);
>>> conf.put(Config.DRPC_PORT, 3772);
>>>
>>> LocalDRPC drpc = null;
>>> StormSubmitter.submitTopology(args[0], conf, buildTopology(drpc));
>>> DRPCClient client = new DRPCClient("192.168.x.x", 3772);
>>>
>>>
>>> On Wed, Jul 12, 2017 at 2:31 PM, J.R. Pauley <[email protected]> wrote:
>>>
>>>> How come there is no drpc.servers config shown in storm.yaml?
>>>>
>>>> On Wed, Jul 12, 2017 at 8:22 AM, sam mohel <[email protected]> wrote:
>>>>
>>>>> Thanks for replying, Nazar. But how can I solve my problem,
>>>>> "DRPCExecutionException(msg:Request timed out)"?
>>>>> Should I increase or decrease the timeout of the DRPC server or client?
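For what it's worth, the server-side wait is governed by a storm.yaml setting; a sketch (600 seconds is, I believe, the 0.9.x default, so raising it only helps if the topology eventually answers):

```yaml
drpc.request.timeout.secs: 600
```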
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Jul 12, 2017 at 11:16 AM, Nazar Kushpir <
>>>>> [email protected]> wrote:
>>>>>
>>>>>> Sam,
>>>>>> I don't see any GC-related info in your errors, but there is a
>>>>>> "java.io.IOException: Connection reset by peer" message, which can be
>>>>>> related to network problems.
>>>>>> As for GC and more RAM: Storm has an option
>>>>>> "topology.max.spout.pending" which lets you limit the number of tuples
>>>>>> in the topology at any point in time (by default there is no limit).
>>>>>> It helped me overcome "out of memory" errors, so it may help you too,
>>>>>> since it isn't set in your "storm.yaml" file.
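A sketch of that setting in storm.yaml (500 is an illustrative value, not a recommendation; tune it for your topology):

```yaml
topology.max.spout.pending: 500
```

As far as I know, the same limit can also be set per topology in code via Config.setMaxSpoutPending.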
>>>>>>
>>>>>> On Tue, Jul 11, 2017 at 10:20 PM, sam mohel <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>> How can I figure out the problem? Is there any other place I can post
>>>>>>> my question?
>>>>>>>
>>>>>>> On Tuesday, July 11, 2017, sam mohel <[email protected]> wrote:
>>>>>>> > Is there any help please?
>>>>>>> >
>>>>>>> > On Tuesday, July 11, 2017, sam mohel <[email protected]> wrote:
>>>>>>> >> Thanks for replying and for this clarification.
>>>>>>> >> Here is my full error; I hope someone can help.
>>>>>>> >> I'm using apache-storm-0.9.6 with JDK 1.7 and zookeeper-3.4.6. I
>>>>>>> >> submitted my Trident topology but got this in the terminal:
>>>>>>> >> [main] INFO  backtype.storm.StormSubmitter - Finished submitting topology: top
>>>>>>> >> Exception in thread "main" DRPCExecutionException(msg:Request timed out)
>>>>>>> >> at backtype.storm.generated.DistributedRPC$execute_result.read(DistributedRPC.java:904)
>>>>>>> >> at org.apache.thrift7.TServiceClient.receiveBase(TServiceClient.java:78)
>>>>>>> >> at backtype.storm.generated.DistributedRPC$Client.recv_execute(DistributedRPC.java:92)
>>>>>>> >> at backtype.storm.generated.DistributedRPC$Client.execute(DistributedRPC.java:78)
>>>>>>> >> at backtype.storm.utils.DRPCClient.execute(DRPCClient.java:71)
>>>>>>> >> at trident.mytopology.main(mytopology.java:319)
>>>>>>> >>
>>>>>>> >> In the drpc.log file:
>>>>>>> >> [INFO] Starting Distributed RPC servers...
>>>>>>> >> 2017-07-11T08:10:44.139+0200 o.a.t.s.TNonblockingServer [WARN] Got an IOException in internalRead!
>>>>>>> >> java.io.IOException: Connection reset by peer
>>>>>>> >> at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_121]
>>>>>>> >> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.7.0_121]
>>>>>>> >> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.7.0_121]
>>>>>>> >> at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.7.0_121]
>>>>>>> >> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384) ~[na:1.7.0_121]
>>>>>>> >> at org.apache.thrift7.transport.TNonblockingSocket.read(TNonblockingSocket.java:141) ~[storm-core-0.9.6.jar:0.9.6]
>>>>>>> >> at org.apache.thrift7.server.TNonblockingServer$FrameBuffer.internalRead(TNonblockingServer.java:669) [storm-core-0.9.6.jar:0.9.6]
>>>>>>> >> at org.apache.thrift7.server.TNonblockingServer$FrameBuffer.read(TNonblockingServer.java:458) [storm-core-0.9.6.jar:0.9.6]
>>>>>>> >> at org.apache.thrift7.server.TNonblockingServer$SelectThread.handleRead(TNonblockingServer.java:359) [storm-core-0.9.6.jar:0.9.6]
>>>>>>> >> at org.apache.thrift7.server.TNonblockingServer$SelectThread.select(TNonblockingServer.java:304) [storm-core-0.9.6.jar:0.9.6]
>>>>>>> >> at org.apache.thrift7.server.TNonblockingServer$SelectThread.run(TNonblockingServer.java:243) [storm-core-0.9.6.jar:0.9.6]
>>>>>>> >> 2017-07-11T08:19:39.090+0200 b.s.d.drpc [WARN] Timeout DRPC request id: 1 start at 1499753374
>>>>>>> >> The nimbus.log file didn't contain any errors or warnings.
>>>>>>> >> The supervisor.log file contains "still hasn't started" messages.
>>>>>>> >> My storm.yaml:
>>>>>>> >> storm.zookeeper.servers:
>>>>>>> >>     - "192.168.x.x"
>>>>>>> >>
>>>>>>> >> nimbus.host: "192.168.x.x"
>>>>>>> >> storm.local.dir: "/var/storm"
>>>>>>> >>
>>>>>>> >> supervisor.childopts: "-Xmx1024m -XX:MaxPermSize=512m"
>>>>>>> >> worker.childopts: "-Xmx2048m -XX:MaxPermSize=512m"
>>>>>>> >> nimbus.childopts: "-Xmx2048m -XX:MaxPermSize=512m"
>>>>>>> >> ui.port: 8080
>>>>>>> >> storm.zookeeper.session.timeout: 40000
>>>>>>> >> storm.zookeeper.connection.timeout: 30000
>>>>>>> >> nimbus.task.timeout.secs: 600
>>>>>>> >>
>>>>>>> >>
>>>>>>> >> On Tue, Jul 4, 2017 at 7:42 AM, Navin Ipe <
>>>>>>> [email protected]> wrote:
>>>>>>> >>>
>>>>>>> >>> :-) There are no demands here, dear Sam. Just requests for
>>>>>>> information so that we can help you better.
>>>>>>> >>> I haven't had to deal with DRPC, so I couldn't help you with that,
>>>>>>> >>> which is why I also mentioned that the others on this forum would
>>>>>>> >>> be able to help you if you provided more info.
>>>>>>> >>> This is the general procedure in every forum, and in real life
>>>>>>> >>> too. The help you get is directly proportional to the amount of
>>>>>>> >>> quality info you provide.
>>>>>>> >>>
>>>>>>> >>> To overcome a GC problem you either have to minimise the amount of
>>>>>>> >>> memory your program uses (consider restructuring your data
>>>>>>> >>> structures or using primitive datatypes), or you increase the
>>>>>>> >>> amount of RAM and alter Storm's configuration so it can recognize
>>>>>>> >>> the newly available RAM. That is, if it is indeed a memory
>>>>>>> >>> allocation problem.
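For the "more RAM plus config" route, a sketch: the worker heap is raised via worker.childopts in storm.yaml (4096m is illustrative; it has to fit in the machine's actual free RAM):

```yaml
worker.childopts: "-Xmx4096m -XX:MaxPermSize=512m"
```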
>>>>>>> >>>
>>>>>>> >>> On Tue, Jul 4, 2017 at 2:08 AM, sam mohel <[email protected]>
>>>>>>> wrote:
>>>>>>> >>>>
>>>>>>> >>>> In an earlier post you asked me to write my configuration and
>>>>>>> >>>> some details, and I wrote them, but you didn't help!! So thanks
>>>>>>> >>>> for that, because other people helped and replied. Now, this post
>>>>>>> >>>> asks in general: how can I overcome a GC problem? What are the
>>>>>>> >>>> things I should concentrate on? Again, my question is general.
>>>>>>> >>>>
>>>>>>> >>>> On Monday, July 3, 2017, Navin Ipe <
>>>>>>> [email protected]> wrote:
>>>>>>> >>>> > Think about this Sam. If some stranger wrote what you wrote,
>>>>>>> and you tried understanding their problem looking at only what they 
>>>>>>> wrote,
>>>>>>> would you be able to figure out anything at all?
>>>>>>> >>>> > You haven't provided enough information for us to help you.
>>>>>>> >>>> > What is the exact error you are encountering? Paste it here.
>>>>>>> >>>> > How much memory have you allocated for storm (in the
>>>>>>> settings)?
>>>>>>> >>>> > How much of memory is available for storm to use (other apps
>>>>>>> also use memory)?
>>>>>>> >>>> > Are you trying to submit two topologies, each of which need
>>>>>>> 6GB RAM on an 8GB laptop?
>>>>>>> >>>> > Have you considered using online servers?
>>>>>>> >>>> >
>>>>>>> >>>> > Plenty of other things like this. Tell us exactly what the
>>>>>>> >>>> > problem is, what you tried in order to solve it, and what you
>>>>>>> >>>> > googled to resolve it before asking us. This is how forums
>>>>>>> >>>> > work. If you want people to help you, give them the right info
>>>>>>> >>>> > and respect their time.
>>>>>>> >>>> >
>>>>>>> >>>> >
>>>>>>> >>>> > On Sat, Jul 1, 2017 at 4:40 AM, sam mohel <
>>>>>>> [email protected]> wrote:
>>>>>>> >>>> >>
>>>>>>> >>>> >> I hope I can find some help, and many thanks for that.
>>>>>>> >>>> >> I have a problem with GC (the "garbage collector"), and this
>>>>>>> >>>> >> is the second time I have faced it. My laptop had 6 GB of RAM
>>>>>>> >>>> >> and that didn't work with my topology, so I increased the RAM
>>>>>>> >>>> >> to 8 GB to overcome this, and it was fixed. Now I want to
>>>>>>> >>>> >> submit another topology, but the RAM is not enough. How can I
>>>>>>> >>>> >> overcome this problem? Can I fix it without increasing the RAM?
>>>>>>> >>>> >
>>>>>>> >>>> >
>>>>>>> >>>> > --
>>>>>>> >>>> > Regards,
>>>>>>> >>>> > Navin
>>>>>>> >>>
>>>>>>> >>>
>>>>>>> >>> --
>>>>>>> >>> Regards,
>>>>>>> >>> Navin
>>>>>>> >>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
