Sam

It seems like you don't actually know that memory is the issue. Have you had
ANY DRPC topology working in your environment? If not, it may be best to start
with a minimal test implementation just to confirm or eliminate memory as the
cause. Sorry I don't have a better idea.
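
For what it's worth, a minimal test could be a bare "echo" DRPC topology,
along the lines of the BasicDRPCTopology example in storm-starter. This is
only a sketch against the 0.9.x (backtype.storm) API; the class names and the
IP are placeholders for your setup:

import java.util.Arrays;

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.drpc.LinearDRPCTopologyBuilder;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class MinimalDrpcTest {

    // Echoes the DRPC argument back. Field 0 of the input tuple is the
    // request id, field 1 is the argument string from the client.
    public static class EchoBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            collector.emit(new Values(tuple.getValue(0), "echo: " + tuple.getString(1)));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("id", "result"));
        }
    }

    public static void main(String[] args) throws Exception {
        LinearDRPCTopologyBuilder builder = new LinearDRPCTopologyBuilder("echo");
        builder.addBolt(new EchoBolt(), 1);

        Config conf = new Config();
        // Placeholder IP: point this at the box running the DRPC server.
        conf.put(Config.DRPC_SERVERS, Arrays.asList("192.168.x.x"));
        conf.put(Config.DRPC_PORT, 3772);

        StormSubmitter.submitTopology("drpc-echo-test", conf, builder.createRemoteTopology());
    }
}

and then from the client box:

DRPCClient client = new DRPCClient("192.168.x.x", 3772);
System.out.println(client.execute("echo", "hello"));

If "hello" comes straight back, the DRPC wiring and memory are fine and the
problem is in your topology; if even this times out, the problem is in the
cluster/DRPC setup rather than your code.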

On Sun, Jul 2, 2017 at 2:13 AM, sam mohel <[email protected]> wrote:

> Is there any help? I tried to create a swap file to increase the available
> memory, but it didn't fix the problem. Is there any other way I can try?
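> For reference, the swap file was set up with the standard Linux steps,
> roughly like this (the 2G size and /swapfile path are just examples):
>
> sudo fallocate -l 2G /swapfile   # allocate the backing file
> sudo chmod 600 /swapfile         # owner-only permissions
> sudo mkswap /swapfile            # format it as swap
> sudo swapon /swapfile            # enable it immediately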
>
> On Sat, Jul 1, 2017 at 4:50 PM, sam mohel <[email protected]> wrote:
>
>> I don't have any errors in the log files except drpc.log, where I have this:
>> 2017-07-01T07:34:02.660+0200 b.s.d.drpc [INFO] Starting Distributed RPC servers...
>> 2017-07-01T07:34:47.267+0200 o.a.t.s.TNonblockingServer [WARN] Got an IOException in internalRead!
>> java.io.IOException: Connection reset by peer
>> at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_121]
>> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.7.0_121]
>> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.7.0_121]
>> at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.7.0_121]
>> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384) ~[na:1.7.0_121]
>> at org.apache.thrift7.transport.TNonblockingSocket.read(TNonblockingSocket.java:141) ~[storm-core-0.9.6.jar:0.9.6]
>> at org.apache.thrift7.server.TNonblockingServer$FrameBuffer.internalRead(TNonblockingServer.java:669) [storm-core-0.9.6.jar:0.9.6]
>> at org.apache.thrift7.server.TNonblockingServer$FrameBuffer.read(TNonblockingServer.java:458) [storm-core-0.9.6.jar:0.9.6]
>> at org.apache.thrift7.server.TNonblockingServer$SelectThread.handleRead(TNonblockingServer.java:359) [storm-core-0.9.6.jar:0.9.6]
>> at org.apache.thrift7.server.TNonblockingServer$SelectThread.select(TNonblockingServer.java:304) [storm-core-0.9.6.jar:0.9.6]
>> at org.apache.thrift7.server.TNonblockingServer$SelectThread.run(TNonblockingServer.java:243) [storm-core-0.9.6.jar:0.9.6]
>> 2017-07-01T07:44:17.639+0200 b.s.d.drpc [WARN] Timeout DRPC request id: 1 start at 1498887255
>>
>> And the Storm UI shows numbers in all columns except transferred, capacity,
>> and process latency.
>> I faced something like this before with another topology, and increasing
>> the RAM fixed the problem. I think that is the reason here as well with this
>> topology, but I couldn't increase the RAM any further.
>>
>> On Sat, Jul 1, 2017 at 11:09 AM, J.R. Pauley <[email protected]> wrote:
>>
>>> Sam
>>> I would guess timeout is not your problem, as the defaults seem generous. The
>>> little I've learned over the last couple of days tells me to look at drpc.log and
>>> the topology logs for some issue. I also learned from Bobby that I can't test this
>>> sort of thing in LocalDRPC mode; even with the DRPC client on the same box it
>>> won't work, it has to be a regularly submitted topology. The other thing I found was
>>> that the DRPC client version may matter. I reverted to 0.9.6 because everything was
>>> easier to configure. In 1.0.2 you have to give the client a config param
>>> map with lots of params that have to match the server config, and that
>>> sounds like an easy place to have a mismatch.
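>>>
>>> For example, in 1.0.2 the client construction looks roughly like this
>>> (a sketch only; in 0.9.6 the plain two-argument constructor is enough):
>>>
>>> Map conf = Utils.readStormConfig();  // defaults.yaml plus your storm.yaml
>>> DRPCClient client = new DRPCClient(conf, "192.168.x.x", 3772);
>>>
>>> so any transport/auth params in that map have to line up with what the
>>> server was started with.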
>>>
>>> On Fri, Jun 30, 2017 at 6:13 PM, sam mohel <[email protected]> wrote:
>>>
>>>> I searched and found this: "The timeout on DRPC requests within the
>>>> DRPC server. Defaults to 10 minutes. Note that requests can also timeout
>>>> based on the socket timeout on the DRPC client, and separately based on the
>>>> topology message timeout for the topology implementing the DRPC function."
>>>>
>>>> Should I change drpc.request.timeout.secs only, or something else?
>>>> I will appreciate any help.
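>>>>
>>>> For reference, the server-side and topology timeouts mentioned there are
>>>> set in storm.yaml, roughly like this (the values shown are the defaults,
>>>> not recommendations):
>>>>
>>>> drpc.request.timeout.secs: 600        # DRPC server request timeout (10 min)
>>>> topology.message.timeout.secs: 30     # per-tuple timeout inside the topology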
>>>>
>>>> On Fri, Jun 30, 2017 at 7:16 PM, sam mohel <[email protected]> wrote:
>>>>
>>>>> Sorry, the full error I got is:
>>>>> Exception in thread "main" DRPCExecutionException(msg:Request timed out)
>>>>> at backtype.storm.generated.DistributedRPC$execute_result.read(DistributedRPC.java:904)
>>>>> at org.apache.thrift7.TServiceClient.receiveBase(TServiceClient.java:78)
>>>>> at backtype.storm.generated.DistributedRPC$Client.recv_execute(DistributedRPC.java:92)
>>>>> at backtype.storm.generated.DistributedRPC$Client.execute(DistributedRPC.java:78)
>>>>> at backtype.storm.utils.DRPCClient.execute(DRPCClient.java:71)
>>>>>
>>>>>
>>>>> On Fri, Jun 30, 2017 at 3:36 PM, sam mohel <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> @jim yes, I ran ./storm drpc on machine A with nimbus, and got the error
>>>>>>
>>>>>> DRPCExecutionException(msg:Request failed)
>>>>>>
>>>>>> in the console some time after submitting the topology.
>>>>>>
>>>>>> On Friday, June 30, 2017, Pauley, Jim <[email protected]>
>>>>>> wrote:
>>>>>> > Sam
>>>>>> > Re Bobby's comment that your DRPC was not even running: did you
>>>>>> > manually start the DRPC server from the console and get it running? You
>>>>>> > can check whether it is listening with netstat -ant | grep 3772
>>>>>> > ________________________________
>>>>>> > From: sam mohel [[email protected]]
>>>>>> > Sent: Thursday, June 29, 2017 9:45 PM
>>>>>> > To: [email protected]
>>>>>> > Subject: [EXTERNAL] Re: DRPC problem
>>>>>> >
>>>>>> > Excuse me. I tried to submit another topology and it worked well with
>>>>>> > the same configuration. So why does this topology have a problem? How
>>>>>> > can I figure out where the problem is?
>>>>>> > I'll really appreciate any help.
>>>>>> > On Thu, Jun 29, 2017 at 8:35 PM, sam mohel <[email protected]>
>>>>>> wrote:
>>>>>> >>
>>>>>> >> Thanks for replying; I'm really going to lose my mind over this error.
>>>>>> >> Can I change the IP address, or what should I do with the DRPC server?
>>>>>> >> On Thu, Jun 29, 2017 at 8:29 PM, Bobby Evans <[email protected]>
>>>>>> wrote:
>>>>>> >>>
>>>>>> >>> Your error message shows that your DRPC server isn't even
>>>>>> >>> running, so you don't need to worry about the amount of memory until
>>>>>> >>> you actually have the processes up and running.
>>>>>> >>> Once you have them running, you can look at the GC metrics to
>>>>>> >>> see whether it looks like you need to give it more heap. This is
>>>>>> >>> generic to Java and has little to do with Storm in particular.
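>>>>>> >>> (One quick way to check, assuming you know the DRPC server's pid:
>>>>>> >>> jstat -gcutil <pid> 1000 prints heap occupancy and cumulative GC time
>>>>>> >>> once a second; an old generation pinned near 100% with GC time
>>>>>> >>> climbing steadily usually means the heap is too small.)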
>>>>>> >>>
>>>>>> >>>
>>>>>> >>> - Bobby
>>>>>> >>>
>>>>>> >>>
>>>>>> >>> On Thursday, June 29, 2017, 1:18:36 PM CDT, sam mohel <
>>>>>> [email protected]> wrote:
>>>>>> >>>
>>>>>> >>> Thanks, but how can I know whether the memory is sufficient for my
>>>>>> >>> project or not?
>>>>>> >>> On Thu, Jun 29, 2017 at 8:10 PM, Bobby Evans <[email protected]>
>>>>>> wrote:
>>>>>> >>>
>>>>>> >>> No, it has nothing to do with GC. It means that the command line
>>>>>> >>> confused the JVM, which thought you wanted to run the main method of
>>>>>> >>> a class called _JAAS_PLACEHOLDER.
>>>>>> >>>
>>>>>> >>>
>>>>>> >>> - Bobby
>>>>>> >>>
>>>>>> >>>
>>>>>> >>> On Thursday, June 29, 2017, 12:45:16 PM CDT, sam mohel <
>>>>>> [email protected]> wrote:
>>>>>> >>>
>>>>>> >>> Excuse me. Does this line
>>>>>> >>>
>>>>>> >>> drpc.childopts: "-Xmx768m _JAAS_PLACEHOLDER -Xloggc:/var/log/storm/drpc-gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
>>>>>> >>>
>>>>>> >>> mean checking whether there's a problem with the garbage collector or not?
>>>>>> >>> And about what I got:
>>>>>> >>>
>>>>>> >>> Could not find or load main class _JAAS_PLACEHOLDER
>>>>>> >>>
>>>>>> >>> Does that mean it's an error I should fix, and that I have a problem with GC?
>>>>>> >>>
>>>>>> >>>
>>>>>> >>>
>>>>>> >>>
>>>>>> >>> > Delete _JAAS_PLACEHOLDER.  It is there as something you should
>>>>>> replace if you want to have security for your DRPC server.
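>>>>>> >>> > In other words, without security the line would look roughly like:
>>>>>> >>> >
>>>>>> >>> > drpc.childopts: "-Xmx768m -Xloggc:/var/log/storm/drpc-gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"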
>>>>>> >>> >
>>>>>> >>> >
>>>>>> >>> > - Bobby
>>>>>> >>> >
>>>>>> >>> >
>>>>>> >>> > On Thursday, June 29, 2017, 10:25:24 AM CDT, sam mohel <
>>>>>> [email protected]> wrote:
>>>>>> >>> >
>>>>>> >>> > I really appreciate your time, Bobby. So the error I got when I
>>>>>> >>> > added this line to drpc.childopts:
>>>>>> >>> > "-Xmx768m _JAAS_PLACEHOLDER -Xloggc:/var/log/storm/drpc-gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
>>>>>> >>> > was: Error: Could not find or load main class _JAAS_PLACEHOLDER.
>>>>>> >>> > How can I fix it?
>>>>>> >>> >
>>>>>> >>> > On Thursday, June 29, 2017, Bobby Evans <[email protected]>
>>>>>> wrote:
>>>>>> >>> >> I don't see issues with your configs on the surface.
>>>>>> >>> >>
>>>>>> >>> >>
>>>>>> >>> >> - Bobby
>>>>>> >>> >>
>>>>>> >>> >>
>>>>>> >>> >> On Thursday, June 29, 2017, 10:02:37 AM CDT, sam mohel <
>>>>>> [email protected]> wrote:
>>>>>> >>> >>
>>>>>> >>> >> Are my configurations right?
>>>>>> >>> >> Some time after submitting the topology, I got in the terminal:
>>>>>> >>> >> DRPCExecutionException(msg:Request timed out)
>>>>>> >>> >> In the drpc log file I got:
>>>>>> >>> >> [INFO] Starting Distributed RPC servers...
>>>>>> >>> >> [WARN] Timeout DRPC request id: 200 start at 1498657923
>>>>>> >>> >> The worker log file doesn't show any error message.
>>>>>> >>> >> But I tried to add drpc.childopts: "-Xmx768m _JAAS_PLACEHOLDER -Xloggc:/var/log/storm/drpc-gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps" to storm.yaml to check whether there is a problem with RAM, and restarted and submitted again.
>>>>>> >>> >> In the drpc terminal I got:
>>>>>> >>> >> Error: Could not find or load main class _JAAS_PLACEHOLDER
>>>>>> >>> >> And in the terminal when I submitted:
>>>>>> >>> >> Exception in thread "main" java.lang.RuntimeException: org.apache.thrift7.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
>>>>>> >>> >> at backtype.storm.utils.DRPCClient.<init>(DRPCClient.java:42)
>>>>>> >>> >> at backtype.storm.utils.DRPCClient.<init>(DRPCClient.java:47)
>>>>>> >>> >> at trident.FirstStoryDetection.main(FirstStoryDetection.java:308)
>>>>>> >>> >> Caused by: org.apache.thrift7.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
>>>>>> >>> >>
>>>>>> >>> >>
>>>>>> >>> >> On Thursday, June 29, 2017, Bobby Evans <[email protected]>
>>>>>> wrote:
>>>>>> >>> >>> You are going to need to look at the logs for your topology
>>>>>> and the logs for the drpc server to see if there is anything in there 
>>>>>> that
>>>>>> indicates what is happening.
>>>>>> >>> >>>
>>>>>> >>> >>>
>>>>>> >>> >>> - Bobby
>>>>>> >>> >>>
>>>>>> >>> >>>
>>>>>> >>> >>> On Wednesday, June 28, 2017, 11:21:14 PM CDT, sam mohel <
>>>>>> [email protected]> wrote:
>>>>>> >>> >>>
>>>>>> >>> >>> I submitted the topology in local mode without any problem, but in
>>>>>> >>> >>> production mode I couldn't; as you can see in the UI, all columns
>>>>>> >>> >>> show zero values except the execute columns.
>>>>>> >>> >>> After some time I got in the terminal:
>>>>>> >>> >>> DRPCExecutionException(msg:Request timed out)
>>>>>> >>> >>> My configuration uses Machine A and Machine B.
>>>>>> >>> >>> storm.yaml on Machine A:
>>>>>> >>> >>> storm.zookeeper.servers:
>>>>>> >>> >>>      - "192.168.x.x"
>>>>>> >>> >>>
>>>>>> >>> >>>  nimbus.host: "192.168.x.x"
>>>>>> >>> >>>  supervisor.childopts: "-Xmx4g"
>>>>>> >>> >>>  worker.childopts: "-Xmx4g"
>>>>>> >>> >>> storm.yaml on Machine B:
>>>>>> >>> >>> storm.zookeeper.servers:
>>>>>> >>> >>>      - "192.168.x.x"
>>>>>> >>> >>>
>>>>>> >>> >>>  nimbus.host: "192.168.x.x"
>>>>>> >>> >>>  supervisor.childopts: "-Xmx4g"
>>>>>> >>> >>>  worker.childopts: "-Xmx4g"
>>>>>> >>> >>> I set DRPC in the code:
>>>>>> >>> >>> Config conf = new Config();
>>>>>> >>> >>> List<String> drpcServers = new ArrayList<String>();
>>>>>> >>> >>> drpcServers.add("192.168.x.x");
>>>>>> >>> >>> conf.put(Config.DRPC_SERVERS, drpcServers);
>>>>>> >>> >>> conf.put(Config.DRPC_PORT, 3772);
>>>>>> >>> >>> // distributed mode
>>>>>> >>> >>> Config conf = createTopologyConfiguration(prop, true);
>>>>>> >>> >>> LocalDRPC drpc = null;
>>>>>> >>> >>> StormSubmitter.submitTopology(args[0], conf, buildTopology(drpc));
>>>>>> >>> >>> client = new DRPCClient("192.168.x.x", 3772);
>>>>>> >>> >>> I used the same IP address for storm.zookeeper.servers, nimbus.host,
>>>>>> >>> >>> drpcServers, and DRPCClient. Is that wrong?
>>>>>> >>> >>> I ran nimbus, drpc, and ui on Machine A, and the supervisor on
>>>>>> >>> >>> Machine B.
>>>>>> >>> >>> I appreciate any help, thanks.
>>>>>> >>> >>>
>>>>>> >>> >>>
>>>>>> >>
>>>>>> >
>>>>>> >
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
