[ https://issues.apache.org/jira/browse/BEAM-975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15730202#comment-15730202 ]

Reza Nouri commented on BEAM-975:
---------------------------------

Hey [~jbonofre],

Here is the mongod log output just before the failure:

2016-12-08T09:42:36.346+1100 I NETWORK  [initandlisten] Listener: accept() returns -1 errno:24 Too many open files
2016-12-08T09:42:36.346+1100 E NETWORK  [initandlisten] Out of file descriptors. Waiting one second before trying to accept more connections.
2016-12-08T09:42:37.743+1100 E STORAGE  [thread1] WiredTiger (24) [1481150557:743579][1523:0x700005b05000], log-server: data/db/journal: directory-list: opendir: Too many open files
2016-12-08T09:42:37.743+1100 E STORAGE  [thread1] WiredTiger (24) [1481150557:743728][1523:0x700005b05000], log-server: journal: directory-list, prefix "WiredTigerPreplog": Too many open files
2016-12-08T09:42:37.743+1100 E STORAGE  [thread1] WiredTiger (24) [1481150557:743750][1523:0x700005b05000], log-server: log pre-alloc server error: Too many open files
2016-12-08T09:42:37.743+1100 E STORAGE  [thread1] WiredTiger (24) [1481150557:743768][1523:0x700005b05000], log-server: log server error: Too many open files
2016-12-08T09:42:47.005+1100 W FTDC     [ftdc] Uncaught exception in 'FileNotOpen: Failed to open interim file data/db/diagnostic.data/metrics.interim.temp' in full-time diagnostic data capture subsystem. Shutting down the full-time diagnostic data capture subsystem.
2016-12-08T09:43:27.758+1100 I NETWORK  [initandlisten] Listener: accept() returns -1 errno:24 Too many open files
2016-12-08T09:43:27.758+1100 E NETWORK  [initandlisten] Out of file descriptors. Waiting one second before trying to accept more connections.
2016-12-08T09:43:28.635+1100 W NETWORK  [HostnameCanonicalizationWorker] Failed to obtain address information for hostname dyn: nodename nor servname provided, or not known
2016-12-08T09:43:28.759+1100 I NETWORK  [initandlisten] Listener: accept() returns -1 errno:24 Too many open files
2016-12-08T09:43:28.759+1100 E NETWORK  [initandlisten] Out of file descriptors. Waiting one second before trying to accept more connections.
2016-12-08T09:43:29.021+1100 E STORAGE  [thread2] WiredTiger (24) [1481150609:20956][1523:0x700005c8e000], file:WiredTiger.wt, WT_SESSION.checkpoint: data/db/WiredTiger.turtle: handle-open: open: Too many open files
2016-12-08T09:43:29.021+1100 E STORAGE  [thread2] WiredTiger (24) [1481150609:21326][1523:0x700005c8e000], checkpoint-server: checkpoint server error: Too many open files
2016-12-08T09:43:29.021+1100 E STORAGE  [thread2] WiredTiger (-31804) [1481150609:21355][1523:0x700005c8e000], checkpoint-server: the process must exit and restart: WT_PANIC: WiredTiger library panic
2016-12-08T09:43:29.021+1100 I -        [thread2] Fatal Assertion 28558
2016-12-08T09:43:29.021+1100 I -        [thread2]

***aborting after fassert() failure

2016-12-08T09:43:29.029+1100 F -        [thread2] Got signal: 6 (Abort trap: 6).
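
The errno:24 entries above mean mongod exhausted its file-descriptor limit. As a first check (commands are a sketch and assume a Unix-like host with pgrep and lsof available), you can compare the limit against the number of descriptors mongod actually holds; if the count climbs with every REST request, the descriptors are being leaked on the client side rather than the limit simply being too low:

```shell
# Show the per-process open-file limit inherited by processes started
# from this shell; errno:24 fires once open descriptors reach it.
ulimit -n

# If mongod is running, count the descriptors it currently holds.
# Re-run after a batch of requests: a steadily growing count points
# at connections that are opened but never closed.
if pid=$(pgrep -x mongod); then
  lsof -p "$pid" | wc -l
fi
```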

After that, the Beam pipeline throws a connection timeout exception:

SEVERE: Servlet.service() for servlet [Curation] in context with path [] threw exception [org.apache.beam.sdk.Pipeline$PipelineExecutionException: com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]] with root cause
com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]
        at com.mongodb.connection.BaseCluster.createTimeoutException(BaseCluster.java:369)
        at com.mongodb.connection.BaseCluster.selectServer(BaseCluster.java:101)
        at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.<init>(ClusterBinding.java:75)
        at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.<init>(ClusterBinding.java:71)
        at com.mongodb.binding.ClusterBinding.getReadConnectionSource(ClusterBinding.java:63)
        at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:89)
        at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:84)
        at com.mongodb.operation.CommandReadOperation.execute(CommandReadOperation.java:55)
        at com.mongodb.Mongo.execute(Mongo.java:772)
        at com.mongodb.Mongo$2.execute(Mongo.java:759)
        at com.mongodb.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:130)
        at com.mongodb.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:124)
        at com.mongodb.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:114)
        at org.apache.beam.sdk.io.mongodb.MongoDbIO$BoundedMongoDbSource.getEstimatedSizeBytes(MongoDbIO.java:226)
        at org.apache.beam.runners.direct.BoundedReadEvaluatorFactory$InputProvider.createInitialSplits(BoundedReadEvaluatorFactory.java:145)
        at org.apache.beam.runners.direct.BoundedReadEvaluatorFactory$InputProvider.getInitialInputs(BoundedReadEvaluatorFactory.java:136)
        at org.apache.beam.runners.direct.RootProviderRegistry.getInitialInputs(RootProviderRegistry.java:64)
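
For what it's worth, the bottom frames (MongoDbIO$BoundedMongoDbSource.getEstimatedSizeBytes under each pipeline run) suggest a client is created per request; if those clients are never closed, each one pins sockets until the server's fd limit is hit. Below is a minimal sketch of the scoping pattern that avoids this, not necessarily what MongoDbIO does today: FakeClient is a hypothetical stand-in for com.mongodb.MongoClient, which also exposes close().

```java
// Sketch: scope each client with try-with-resources so its sockets
// (file descriptors) are released when the request finishes, even on
// exceptions. FakeClient is a stand-in for a real MongoClient.
public class ClientLifecycle {
    public static int openSockets = 0;

    static class FakeClient implements AutoCloseable {
        FakeClient() { openSockets++; }                   // socket/fd acquired
        long estimatedSizeBytes() { return 42L; }
        @Override public void close() { openSockets--; }  // socket/fd released
    }

    static long getEstimatedSizeBytes() {
        // close() is guaranteed to run here, so repeated calls never
        // accumulate open descriptors.
        try (FakeClient client = new FakeClient()) {
            return client.estimatedSizeBytes();
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            getEstimatedSizeBytes();
        }
        System.out.println("open sockets after 10000 calls: " + openSockets);
        // prints: open sockets after 10000 calls: 0
    }
}
```

Without the try-with-resources (or an equivalent close() in a finally block), the same loop would leave 10,000 descriptors open, which matches the errno:24 pattern in the mongod log.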

I hope the information above is helpful.

Thanks


> Issue with MongoDBIO
> --------------------
>
>                 Key: BEAM-975
>                 URL: https://issues.apache.org/jira/browse/BEAM-975
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-java-extensions
>            Reporter: Reza Nouri
>            Assignee: Jean-Baptiste Onofré
>             Fix For: 0.4.0-incubating
>
>
> It appears that there is an issue with MongoDbIO. I am using Apache Beam in a
> REST service that reads data from MongoDB. After a number of requests,
> MongoDbIO throws the following exception:
> com.mongodb.MongoSocketReadException: Prematurely reached end of stream
>       at com.mongodb.connection.SocketStream.read(SocketStream.java:88)
>       at com.mongodb.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:491)
>       at com.mongodb.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:221)
>       at com.mongodb.connection.CommandHelper.receiveReply(CommandHelper.java:134)
>       at com.mongodb.connection.CommandHelper.receiveCommandResult(CommandHelper.java:121)
>       at com.mongodb.connection.CommandHelper.executeCommand(CommandHelper.java:32)
>       at com.mongodb.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:83)
>       at com.mongodb.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:43)
>       at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:115)
>       at com.mongodb.connection.UsageTrackingInternalConnection.open(UsageTrackingInternalConnection.java:46)
>       at com.mongodb.connection.DefaultConnectionPool$PooledConnection.open(DefaultConnectionPool.java:381)
>       at com.mongodb.connection.DefaultConnectionPool.get(DefaultConnectionPool.java:96)
>       at com.mongodb.connection.DefaultConnectionPool.get(DefaultConnectionPool.java:82)
>       at com.mongodb.connection.DefaultServer.getConnection(DefaultServer.java:72)
>       at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.getConnection(ClusterBinding.java:86)
>       at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:237)
>       at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:212)
>       at com.mongodb.operation.FindOperation.execute(FindOperation.java:482)
>       at com.mongodb.operation.FindOperation.execute(FindOperation.java:79)
>       at com.mongodb.Mongo.execute(Mongo.java:772)
>       at com.mongodb.Mongo$2.execute(Mongo.java:759)
>       at com.mongodb.OperationIterable.iterator(OperationIterable.java:47)
>       at com.mongodb.FindIterableImpl.iterator(FindIterableImpl.java:143)
>       at org.apache.beam.sdk.io.mongodb.MongoDbIO$BoundedMongoDbReader.start(MongoDbIO.java:359)
>       at org.apache.beam.runners.direct.BoundedReadEvaluatorFactory$BoundedReadEvaluator.processElement(BoundedReadEvaluatorFactory.java:99)
>       at org.apache.beam.runners.direct.TransformExecutor.processElements(TransformExecutor.java:154)
>       at org.apache.beam.runners.direct.TransformExecutor.run(TransformExecutor.java:121)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> I suppose there must be a problem with the MongoDB connection handling that
> causes this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)