*Hi,* I want to simulate massive insertion for a log ecosystem on my laptop. Following http://orientdb.com/docs/2.1/Performance-Tuning.html and http://orientdb.com/docs/2.1/Performance-Tuning-Document.html, I tried writing some code to reach the highest number of insertions per second, and I have some questions about my results. These are the numbers I got on my laptop (Lenovo Flex2b, Core i7, 8 GB RAM, 5400 rpm HDD).
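For context, the tuning guides linked above suggest keeping the JVM heap small and giving most of the remaining RAM to OrientDB's off-heap disk cache (`storage.diskCache.bufferSize`, in MB), and disabling the WAL for pure bulk loads. A hedged sketch of how such a launch could look — the jar name, heap size, and cache size here are illustrative, not my actual command; the property keys appear in the configuration dump below:

```shell
# Illustrative launch: small heap, large off-heap disk cache, WAL disabled.
# Values must be adapted to the machine (8 GB RAM here); benchmark.jar is a placeholder.
java -Xms512m -Xmx512m \
     -XX:MaxDirectMemorySize=6g \
     -Dstorage.diskCache.bufferSize=5596 \
     -Dstorage.useWAL=false \
     -jar benchmark.jar
```

Note that `storage.useWAL=false` trades crash safety for speed, so it is only appropriate for a reproducible load test.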
First of all, my documents are created in a loop. Average insertions per second: 987 records. JVM CPU time during the test: 90% (and memory keeps growing after a few minutes).

My test conditions for the database:

1394/10/01 11:43:55 -> Database created - get server configuration
environment.dumpCfgAtStartup : false
environment.concurrent : true
environment.allowJVMShutdown : true
script.pool.maxSize : 20
memory.useUnsafe : true
memory.directMemory.safeMode : true
memory.directMemory.onlyAlignedMemoryAccess : true
jvm.gc.delayForOptimize : 600
storage.diskCache.bufferSize : 5596
storage.diskCache.writeCachePart : 15
storage.diskCache.writeCachePageTTL : 86400
storage.diskCache.writeCachePageFlushInterval : 25
storage.diskCache.writeCacheFlushInactivityInterval : 60000
storage.diskCache.writeCacheFlushLockTimeout : -1
storage.diskCache.diskFreeSpaceLimit : 100
storage.diskCache.diskFreeSpaceCheckInterval : 5
storage.configuration.syncOnUpdate : true
storage.compressionMethod : nothing
storage.useWAL : false
storage.wal.syncOnPageFlush : true
storage.wal.cacheSize : 3000
storage.wal.maxSegmentSize : 128
storage.wal.maxSize : 4096
storage.wal.commitTimeout : 1000
storage.wal.shutdownTimeout : 10000
storage.wal.fuzzyCheckpointInterval : 300
storage.wal.reportAfterOperationsDuringRestore : 10000
storage.wal.restore.batchSize : 50000
storage.wal.readCacheSize : 1000
storage.wal.fuzzyCheckpointShutdownWait : 600
storage.wal.fullCheckpointShutdownTimeout : 600
storage.wal.path :
storage.makeFullCheckpointAfterCreate : true
storage.makeFullCheckpointAfterOpen : true
storage.makeFullCheckpointAfterClusterCreate : true
storage.diskCache.pageSize : 64
storage.lowestFreeListBound : 16
storage.cluster.usecrc32 : false
storage.lockTimeout : 0
storage.record.lockTimeout : 2000
storage.useTombstones : false
record.downsizing.enabled : true
object.saveOnlyDirty : false
db.pool.min : 1
db.pool.max : 150
db.pool.idleTimeout : 0
db.pool.idleCheckDelay : 0
db.mvcc.throwfast : false
db.validation : true
nonTX.recordUpdate.synch : false
nonTX.clusters.sync.immediately : manindex
tx.trackAtomicOperations : false
index.embeddedToSbtreeBonsaiThreshold : 40
index.sbtreeBonsaiToEmbeddedThreshold : -1
hashTable.slitBucketsBuffer.length : 1500
index.auto.synchronousAutoRebuild : true
index.auto.lazyUpdates : 10000
index.flushAfterCreate : true
index.manual.lazyUpdates : 1
index.durableInNonTxMode : false
index.txMode : FULL
index.cursor.prefetchSize : 500000
sbtree.maxDepth : 64
sbtree.maxKeySize : 10240
sbtree.maxEmbeddedValueSize : 40960
sbtreebonsai.bucketSize : 2
sbtreebonsai.linkBagCache.size : 100000
sbtreebonsai.linkBagCache.evictionSize : 1000
sbtreebonsai.freeSpaceReuseTrigger : 0.5
ridBag.embeddedDefaultSize : 4
ridBag.embeddedToSbtreeBonsaiThreshold : 40
ridBag.sbtreeBonsaiToEmbeddedToThreshold : -1
collections.preferSBTreeSet : false
file.trackFileClose : false
file.lock : true
file.deleteDelay : 10
file.deleteRetry : 50
jna.disable.system.library : true
network.maxConcurrentSessions : 1000
network.socketBufferSize : 32768
network.lockTimeout : 15000
network.socketTimeout : 15000
network.requestTimeout : 3600000
network.retry : 5
network.retryDelay : 500
network.binary.loadBalancing.enabled : false
network.binary.loadBalancing.timeout : 2000
network.binary.maxLength : 32736
network.binary.readResponse.maxTimes : 20
network.binary.debug : false
network.http.maxLength : 1000000
network.http.charset : utf-8
network.http.jsonResponseError : true
network.http.jsonp : false
oauth2.secretkey : utf-8
network.http.sessionExpireTimeout : 300
profiler.enabled : true
profiler.config :
profiler.autoDump.interval : 0
profiler.maxValues : 200
log.console.level : info
log.file.level : fine
command.timeout : 0
query.scanThresholdTip : 50000
query.limitThresholdTip : 10000
client.channel.maxPool : 100
client.connectionPool.waitTimeout : 5000
client.channel.dbReleaseWaitTimeout : 10000
client.ssl.enabled : false
client.ssl.keyStore :
client.ssl.keyStorePass :
client.ssl.trustStore :
client.ssl.trustStorePass :
client.session.tokenBased : false
server.channel.cleanDelay : 5000
server.cache.staticFile : false
server.log.dumpClientExceptionLevel : FINE
server.log.dumpClientExceptionFullStackTrace : false
distributed.crudTaskTimeout : 3000
distributed.commandTaskTimeout : 10000
distributed.commandLongTaskTimeout : 86400000
distributed.deployDbTaskTimeout : 1200000
distributed.deployChunkTaskTimeout : 15000
distributed.deployDbTaskCompression : 7
distributed.queueTimeout : 5000
distributed.asynchQueueSize : 0
distributed.asynchResponsesTimeout : 15000
distributed.purgeResponsesTimerDelay : 15000
distributed.queueMaxSize : 100
distributed.backupDirectory : ../backup/databases
distributed.concurrentTxMaxAutoRetry : 10
distributed.concurrentTxAutoRetryDelay : 100
db.makeFullCheckpointOnIndexChange : true
db.makeFullCheckpointOnSchemaChange : true
db.document.serializer : ORecordSerializerBinary
lazyset.workOnStream : true
db.mvcc : true
db.use.distributedVersion : false
mvrbtree.timeout : 0
mvrbtree.nodePageSize : 256
mvrbtree.loadFactor : 0.7
mvrbtree.optimizeThreshold : 100000
mvrbtree.entryPoints : 64
mvrbtree.optimizeEntryPointsFactor : 1.0
mvrbtree.entryKeysInMemory : false
mvrbtree.entryValuesInMemory : false
mvrbtree.ridBinaryThreshold : -1
mvrbtree.ridNodePageSize : 64
mvrbtree.ridNodeSaveMemory : false
tx.commit.synch : false
tx.autoRetry : 1
tx.log.fileType : classic
tx.log.synch : false
tx.useLog : true
index.auto.rebuildAfterNotSoftClose : true
client.channel.minPool : 1
storage.keepOpen : true
cache.local.enabled : true
Connected To Database

*The first question: how can I improve the average insertion rate?*
*The second question: is there any sample or test for massive insertion?*
*The third question: is there any way to insert over 50,000 records per second?*

Many thanks.
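For the insertion loop itself, the 2.1 document-tuning guide's main advice is to open the database over `plocal` (no network round-trips), declare `OIntentMassiveInsert`, and reuse a single `ODocument` instance instead of allocating one per record — the per-record allocation is also a likely cause of the steadily growing memory I observed. A minimal sketch of that pattern, assuming a hypothetical `LogEntry` class and database path (not my actual benchmark code):

```java
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.intent.OIntentMassiveInsert;
import com.orientechnologies.orient.core.record.impl.ODocument;

public class MassiveLogInsert {
    public static void main(String[] args) {
        // plocal = embedded storage, no client/server network overhead
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:/tmp/logdb");
        if (db.exists()) db.open("admin", "admin"); else db.create();

        // Tell the engine to optimize internal settings for bulk writes
        db.declareIntent(new OIntentMassiveInsert());
        try {
            ODocument doc = new ODocument();
            for (int i = 0; i < 1_000_000; i++) {
                doc.reset();                      // reuse one instance; avoids per-record allocation
                doc.setClassName("LogEntry");     // reset() clears the class, so set it again
                doc.field("seq", i);
                doc.field("msg", "sample log line " + i);
                doc.save();
            }
        } finally {
            db.declareIntent(null);               // restore normal settings
            db.close();
        }
    }
}
```

Even with this pattern, a single thread on a 5400 rpm HDD will be bound by disk flushes, so 50,000 records/second may only be reachable with an SSD or with the data set fitting in the disk cache.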
