I ran the import with --create-collection false and created the two
collections in the Web UI, but it still failed, as follows:
2017-02-17T08:50:44Z [1] ERROR {cluster} ClusterComm::performRequests: got
no answer from shard:s27742101:/_db/_system/_api/document?collection=
s27742101&waitForSync=false&returnNew=false&returnOld=false with error 4
2017-02-17T08:52:34Z [1] ERROR {cluster} cannot create connection to server
'PRMR-b3440596-4484-4075-b05c-7c1d7c0a0758' at endpoint
'tcp://10.2.0.140:1026'
2017-02-17T08:52:34Z [1] ERROR {cluster} ClusterComm::performRequests: got
BACKEND_UNAVAILABLE or TIMEOUT from shard:s27742101:/_db/_system/_api/
collection/s27742101/count
2017-02-17T08:52:34Z [1] ERROR {cluster} cannot create connection to server
'PRMR-b3440596-4484-4075-b05c-7c1d7c0a0758' at endpoint
'tcp://10.2.0.140:1026'
root@e54d7cbde575:/mnt# arangoimp --server.endpoint tcp://10.2.0.141:1026
--server.connection-timeout 50 --server.request-timeout 12000 --file
relations.tsv --type tsv --collection "relations"
Please specify a password:
Connected to ArangoDB 'http+tcp://10.2.0.141:1026', version 3.1.10, database
: '_system', username: 'root'
----------------------------------------
database: _system
collection: relations
create: no
source filename: relations.tsv
file type: tsv
separator:
connect timeout: 50
request timeout: 12000
----------------------------------------
Starting TSV import...
2017-02-17T08:45:13Z [243] INFO processed 31096832 bytes (3%) of input file
2017-02-17T08:45:42Z [243] INFO processed 62193664 bytes (6%) of input file
2017-02-17T08:46:26Z [243] INFO processed 93290496 bytes (9%) of input file
2017-02-17T08:47:04Z [243] INFO processed 124387328 bytes (12%) of input
file
2017-02-17T08:47:45Z [243] INFO processed 155484160 bytes (15%) of input
file
2017-02-17T08:48:34Z [243] INFO processed 186580992 bytes (18%) of input
file
2017-02-17T08:49:10Z [243] INFO processed 217645056 bytes (21%) of input
file
2017-02-17T08:49:33Z [243] INFO processed 248741888 bytes (24%) of input
file
2017-02-17T08:49:58Z [243] INFO processed 279838720 bytes (27%) of input
file
2017-02-17T08:50:44Z [243] ERROR error message: timeout in cluster
operation
I see in the documentation that error 1457 (timeout in cluster operation)
will be raised when a coordinator in a cluster runs into a timeout for some
cluster-wide operation.
1. Maybe I should add more DBServers or memory? (A quick way to check the
current shard count is sketched below.)
2. Is there any way to configure the timeout parameter?
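
For what it's worth, one way to check how many shards the Web-UI-created
collections actually got is to look at their properties from arangosh on a
coordinator (just a sketch; the collection name is the one from this import):

  // arangosh, connected to a coordinator
  db._useDatabase("_system");
  var props = db._collection("relations").properties();
  print(props.numberOfShards);   // if this is 1, all documents go to a single DBServer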
On Friday, February 17, 2017 at 4:37:05 PM UTC+8, [email protected] wrote:
>
> I suspect that the import has created only one shard per collection (as is
> the default in arangoimp when using --create-collection true). Therefore,
> all data for each collection will have ended up on the same DBserver. I
> suspect that you have run into an out of memory situation on one of the
> DBservers. Since you work without replication, this is fatal for the import.
> What are the resource limits for your DBServer nodes (configured in the
> Mesos launch of the framework)?
> You should create the two collections beforehand with as many shards as
> you have DBServers and then run the import with --create-collection false.
>
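
Following that suggestion, creating the collections up front with one shard
per DBServer could look like this in arangosh (a sketch; the shard count 3 is
only an assumed number of DBServers and should be adjusted to the actual
cluster):

  // arangosh, connected to a coordinator
  db._useDatabase("_system");
  db._create("relations", { numberOfShards: 3 });   // 3 = assumed DBServer count
  // create the second collection the same way, with the same numberOfShards

After that, the import can be rerun with --create-collection false exactly as
in the arangoimp call above, so the documents are distributed across all
DBServers instead of ending up on a single one.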