Hi Cong
are you using OrientGraphFactory?
http://orientdb.com/docs/last/Graph-Factory.html
2015-09-18 4:33 GMT+02:00 Cong Sun :
> We are trying to do a multithread graph traversal by using a connection
> pool. Currently, each thread has a new connection and we close each
> connection when it is
We are trying to do a multithread graph traversal by using a connection pool.
Currently, each thread has a new connection and we close each connection when
it is done. If we do not explicitly close the connection, the number of
connections will reach the maximum capacity in a very short time.
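A common shape for the fix discussed in this thread is to borrow a connection from the pool, do the work, and release it in a `finally` block so open connections never accumulate. This is only a sketch with a stub pool; `Pool`, `acquire`, `release`, and `withConnection` are hypothetical names, not OrientDB APIs:

```javascript
// Minimal stub pool illustrating the borrow/use/release pattern.
// All names here are hypothetical, not part of any OrientDB driver.
class Pool {
  constructor(max) {
    this.max = max;
    this.inUse = 0;
  }
  acquire() {
    if (this.inUse >= this.max) throw new Error('pool exhausted');
    this.inUse++;
    return { query: (q) => `ran: ${q}` }; // stand-in for a real connection
  }
  release() {
    this.inUse--;
  }
}

// Always release in `finally`, even if the traversal throws.
function withConnection(pool, work) {
  const conn = pool.acquire();
  try {
    return work(conn);
  } finally {
    pool.release();
  }
}

const pool = new Pool(2);
// Many sequential units of work reuse the same two slots instead of
// leaving one open connection behind per unit of work.
for (let i = 0; i < 100; i++) {
  withConnection(pool, (conn) => conn.query('select from V'));
}
console.log(pool.inUse); // 0: every connection was returned
```

With this pattern the pool size stays bounded no matter how many threads or iterations run, which is the behavior the question above is after.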
Hi, in case it's worth anything, I was having the same problem, and the new
object solution wasn't working. In the end, the solution was to
add graphFactory.getDatabase().reload(); between the cluster creation and
the object insertion.
On Tuesday, May 12, 2015 at 2:21:55 AM UTC-5, Zaraka wrote:
>
> If
Hi OrientDB Users
Could anyone help me with the following error?
I exported a database from one MacBook and am importing it to another MacBook.
Importing database DATABASE some_data -merge=true...
Started import of database 'remote:localhost/some' from some_data.json...
Importing database info...OK
I
For your first solution, I don't see the point in using the time series
structure at all. You're just using an indexed timestamp field and next
pointers to traverse the range. The solution I outlined doesn't need any
index or next links. Finding the first date is very fast and, no matter how
bi
I sent the schema privately.
On Monday, September 14, 2015 at 2:11:37 AM UTC-5, Giulia Brignoli wrote:
>
> Hi Alexander,
>
> can you send me your schema?
>
> Regards,
> Giulia
>
--
You received this message because you are subscribed to the Google Groups
"OrientDB" group.
Thanks Curtis,
we were thinking of something similar, but then thought that we could simplify
the search algorithm with the following additions (using documents because we
are currently using the Document API):
"find all documents between timestamp_1 and timestamp_2"
1. Add a link between adjacent documents
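The idea in step 1 (explicit next links between adjacent documents, traversed from the first document in range) can be compared on plain objects. The document shape below is illustrative, not the poster's actual schema:

```javascript
// Hypothetical documents sorted by timestamp, linked via a `next` index.
const docs = [
  { ts: 10, next: 1 },
  { ts: 20, next: 2 },
  { ts: 30, next: 3 },
  { ts: 40, next: null },
];

// "find all documents between timestamp_1 and timestamp_2":
// locate the first document in range, then follow next pointers
// until a timestamp past the upper bound is reached.
function rangeByLinks(docs, t1, t2) {
  let i = docs.findIndex((d) => d.ts >= t1);
  const out = [];
  while (i !== null && i !== -1 && docs[i].ts <= t2) {
    out.push(docs[i].ts);
    i = docs[i].next;
  }
  return out;
}

console.log(rangeByLinks(docs, 15, 35)); // [ 20, 30 ]
```

Note that only the entry point needs to be found quickly; once inside the range, the links make each step O(1), which is the trade-off against a plain sorted-index scan being debated in this thread.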
Hi Melvin,
I'm seeing the same behavior on version 2.1.2, running on 2 servers in
distributed mode. Did you manage to solve it?
Thanks
On Friday, April 10, 2015 at 5:28:03 AM UTC+2, Melvin Yam wrote:
>
> Continuing from the above, I went on to connect to the second server.
>
> orientdb> connect
I have a REST API. Should I always use it like below, or do I not need to
open and close the connections on every hit? Maybe only the database?
orientdb.connect();
await orient.query('insert into User content '+JSON.stringify(validUser));
orient.disconnect();
export function connect() {
server
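One common answer to the question above is to open the connection once at startup and reuse it on every request, closing it only on shutdown. This is a sketch under assumed names (`createConnection`, `getConnection`, `handleRequest` are hypothetical, not the orientjs API):

```javascript
// Hypothetical module-level singleton: connect once, reuse on every hit.
// `createConnection` stands in for whatever driver call opens the session.
let conn = null;
let opens = 0;

function createConnection() {
  opens++;
  return { query: (q) => `ok: ${q}` };
}

function getConnection() {
  if (!conn) conn = createConnection(); // lazy, opened only once
  return conn;
}

// Each request handler borrows the shared connection instead of
// opening and closing its own.
function handleRequest(payload) {
  return getConnection().query(
    `insert into User content ${JSON.stringify(payload)}`
  );
}

handleRequest({ name: 'a' });
handleRequest({ name: 'b' });
console.log(opens); // 1: the connection was opened once, not per request
```

Per-request open/close adds a full handshake to every hit; the singleton (or a pool, as discussed earlier in this digest) avoids that cost.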
Hi Bryan,
probably the problem arises because the system has to expand all these big
collections in memory.
I'd suggest changing your model a bit, adding a reverse link from
log_event to log (or, even better, an edge), so that you can refactor your
query as follows:
select from log_event