Hi,

It would be interesting to plug the Tokutek storage engine into ArangoDB ;)

https://en.wikipedia.org/wiki/TokuMX
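
On the arangoimp question quoted below: one interim workaround, assuming the dump is line-delimited JSON, is to split the file and import it in chunks so no single arangoimp run has to push the whole 1TB at once. A minimal sketch (file names, collection, and endpoint are placeholders; the sample data is a tiny stand-in):

```shell
# Create a tiny line-delimited JSON sample (stand-in for the real dump).
printf '{"value":%d}\n' 1 2 3 4 5 > data.jsonl

# Split it into fixed-size chunks; use something like -l 1000000 for real data.
split -l 2 data.jsonl chunk_

# Import each chunk separately. --batch-size (in bytes) caps the data sent
# per request. The import loop is skipped if arangoimp is not on the PATH.
for f in chunk_*; do
  if command -v arangoimp >/dev/null; then
    arangoimp --file "$f" --type json --collection mycoll \
              --server.endpoint tcp://127.0.0.1:8529 --batch-size 1048576
  fi
done
```

This does not remove the storage-engine memory pressure on the server side, but it keeps each client run bounded and lets a failed chunk be retried without redoing the whole import.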

On Tuesday, April 25, 2017 at 13:31:08 UTC+3, jan.stuecke wrote:
>
> Hi Luke,
>
> Jan from ArangoDB here.
>
> That does indeed sound like the limitations our current storage engine has 
> with really huge datasets. We will release the next alpha of 3.2 with 
> RocksDB this week. 
>
> The 3.2 beta follows next week, including 
>
>    - an overhauled version of our current storage engine (in-memory 
>    optimized, with vastly improved behavior on large datasets)
>    - a pluggable storage engine (RocksDB; work with as much data as fits on 
>    disk/SSD)
>
> I would suggest waiting a few days and giving it a spin; we would highly 
> appreciate any feedback you can provide.
>
> Best, Jan
>
>
>
>
>
> On Sunday, April 23, 2017 at 00:15:42 UTC+2, Luke Yang wrote:
>>
>> We have a couple hundred million records, 1TB of data in total, to be 
>> imported into a single collection. When we ran "arangoimp" on a 144GB 
>> Windows server, it slowed to a crawl after using up all 144GB of RAM. Is 
>> there a different approach to make the import faster? 
>>
>> Looking at other relevant Q&A, it seems your 3.2 release may offer a 
>> better solution for big data sets. Do we need to wait for that in order to 
>> handle our task? 
>>
>> Thanks,
>>
>> Luke
>>
>
