Re: What happens if Primary Node fails during the Commit Phase

2018-02-12 Thread John Wilson
I got the answer for #3 here
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Durable+Memory+-+under+the+hood#IgniteDurableMemory-underthehood-Pages&links.
I will post the remaining questions in a separate thread.

On Mon, Feb 12, 2018 at 8:03 PM, John Wilson 
wrote:

> You're always helpful Val. Thanks!
>
>
> I have a question regarding Optimistic Locking
>
>
>1. The documentation here,
>https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Key-Value+Transactions+Architecture,
>states that locks, for optimistic locking, are acquired during the
>"prepare" phase. But the graphic depicted there and the tutorial here,
>https://www.gridgain.com/resources/blog/apache-ignite-transactions-architecture-concurrency-modes-and-isolation-levels,
>clearly indicate that locks are acquired during the commit phase, with
>version information passed along with the key by the coordinator to the
>primary nodes. Can you please explain the discrepancy?
>
> And three questions regarding pages and page locking:
>
>1. Does Ignite use a lock-free algorithm for its B+ tree structure?
>Looking at the source code and the use of a tag field to avoid the ABA
>problem suggests that.
>2. Ignite documentation talks about entry-level locks and page locks.
>When exactly is a page locked and released? Also, when an entry is
>inserted/modified in a page, is the page locked, forbidding other threads
>from inserting other entries in the page, or is only the entry's offset
>locked, allowing other threads to insert other entries in the page?
>3. What is the difference between a directCount and indirectCount
>for a page?
>
> Thanks
>
> On Mon, Feb 12, 2018 at 7:33 PM, vkulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
>> Hi John,
>>
>> 1. True.
>>
>> 2. The blog actually provides the answer:
>>
>> When the Backup Nodes detect the failure, they will notify the Transaction
>> coordinator that they committed the transaction successfully. In this
>> scenario, there is no data loss because the data are backed up and can
>> still be accessed and used by applications.
>>
>> In other words, if the primary node fails, backups will not wait for a
>> message, but instead will commit right away and send an ack to the
>> coordinator. Once the coordinator gets all required acks, the transaction
>> completes.
>>
>> -Val
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>


Re: What happens if Primary Node fails during the Commit Phase

2018-02-12 Thread John Wilson
You're always helpful Val. Thanks!


I have a question regarding Optimistic Locking


   1. The documentation here,
   https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Key-Value+Transactions+Architecture,
   states that locks, for optimistic locking, are acquired during the
   "prepare" phase. But the graphic depicted there and the tutorial here,
   https://www.gridgain.com/resources/blog/apache-ignite-transactions-architecture-concurrency-modes-and-isolation-levels,
   clearly indicate that locks are acquired during the commit phase, with
   version information passed along with the key by the coordinator to the
   primary nodes. Can you please explain the discrepancy?

And three questions regarding pages and page locking:

   1. Does Ignite use a lock-free algorithm for its B+ tree structure?
   Looking at the source code and the use of a tag field to avoid the ABA
   problem suggests that.
   2. Ignite documentation talks about entry-level locks and page locks.
   When exactly is a page locked and released? Also, when an entry is
   inserted/modified in a page, is the page locked, forbidding other threads
   from inserting other entries in the page, or is only the entry's offset
   locked, allowing other threads to insert other entries in the page?
   3. What is the difference between a directCount and indirectCount
   for a page?

Thanks

On Mon, Feb 12, 2018 at 7:33 PM, vkulichenko 
wrote:

> Hi John,
>
> 1. True.
>
> 2. The blog actually provides the answer:
>
> When the Backup Nodes detect the failure, they will notify the Transaction
> coordinator that they committed the transaction successfully. In this
> scenario, there is no data loss because the data are backed up and can
> still be accessed and used by applications.
>
> In other words, if the primary node fails, backups will not wait for a
> message, but instead will commit right away and send an ack to the
> coordinator. Once the coordinator gets all required acks, the transaction
> completes.
>
> -Val
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


What happens if Primary Node fails during the Commit Phase

2018-02-12 Thread John Wilson
Hi,

Assume the Prepare phase has completed and that the primary node has
received a commit message from the coordinator.

Two questions:

   1. A primary node commits a transaction before it forwards a commit
   message to the backup nodes. True?
   2. What happens if a Primary Node fails while it is committing but
   before the commit message is sent to backup nodes? Do the backup nodes
   commit after some timeout or will they send a fail message to the
   coordinator?

The doc below provides a nice description but doesn't exactly answer my
question.

https://www.gridgain.com/resources/blog/apache-ignite-transactions-architecture-failover-and-recovery
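The recovery rule described in the reply above (after a successful prepare, backups that detect primary failure commit on their own and ack the coordinator) can be modeled roughly as follows. All class and method names here are illustrative, not Ignite internals:

```java
import java.util.ArrayList;
import java.util.List;

public class CommitRecoverySketch {
    enum TxState { PREPARED, COMMITTED }

    static class Backup {
        TxState state = TxState.PREPARED;

        // Invoked when the backup detects primary failure after prepare.
        // Prepare succeeded, so the backup already holds all tx data and
        // can commit unilaterally instead of waiting for a commit message.
        TxState onPrimaryFailed() {
            state = TxState.COMMITTED;
            return state;
        }
    }

    // The coordinator completes the transaction once every required
    // ack has arrived and all of them report a successful commit.
    static boolean coordinatorComplete(List<TxState> acks, int required) {
        return acks.size() == required
            && acks.stream().allMatch(s -> s == TxState.COMMITTED);
    }

    public static void main(String[] args) {
        Backup b1 = new Backup(), b2 = new Backup();
        List<TxState> acks = new ArrayList<>();
        acks.add(b1.onPrimaryFailed());   // primary dies mid-commit
        acks.add(b2.onPrimaryFailed());
        System.out.println(coordinatorComplete(acks, 2)); // true: no data loss
    }
}
```

The key point of the sketch is that the prepare phase already replicated the data, so a unilateral backup commit is safe.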

Thanks,


Why does Ignite de-allocate memory regions ONLY during shutdown?

2018-01-08 Thread John Wilson
Hi,

I was looking at the UnsafeMemoryProvider and it looks like allocated
direct memory regions are deallocated only during shutdown.

https://github.com/apache/ignite/blob/c5a04da7103701d4ee95910d90ba786a6ea5750b/modules/core/src/main/java/org/apache/ignite/internal/mem/unsafe/UnsafeMemoryProvider.java#L80

https://github.com/apache/ignite/blob/c5a04da7103701d4ee95910d90ba786a6ea5750b/modules/core/src/main/java/org/apache/ignite/internal/mem/unsafe/UnsafeMemoryProvider.java#L63

My question:

If a memory region has been allocated and, during execution, all data pages
that are in the region are removed, then why isn't the memory region
de-allocated?

Thanks,


Re: How do I register a class for a use by the BinaryMarshaller

2017-12-12 Thread John Wilson
This doesn't work but I'm looking for an equivalent of this, which is
described here:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteBinary.html

<property name="binaryConfiguration">
    <bean class="org.apache.ignite.configuration.BinaryConfiguration">
        <property name="typeConfigurations">
            <list>
                <bean class="org.apache.ignite.binary.BinaryTypeConfiguration">
                    <property name="typeName" value="org.apache.ignite.examples.client.binary.Employee"/>
                </bean>
            </list>
        </property>
    </bean>
</property>

On Tue, Dec 12, 2017 at 11:25 AM, John Wilson 
wrote:

> Hi,
>
> How do I register a class, in the XML config file, to be used by the
> Binary Marshaller?
>
> Assume I have an IgniteCache holding Point values; I want the XML config
> equivalent for:
>
> binaryMarsh.context().descriptorForClass(Point.class, false)
>
>
> Thanks,
>


How do I register a class for a use by the BinaryMarshaller

2017-12-12 Thread John Wilson
Hi,

How do I register a class, in the XML config file, to be used by the Binary
Marshaller?

Assume I have an IgniteCache holding Point values; I want the XML config
equivalent for:

binaryMarsh.context().descriptorForClass(Point.class, false)


Thanks,


Quick Question on Check-pointing and Partition Files

2017-11-06 Thread John Wilson
Hi,

Ignite documentation states that during check-pointing dirty pages will be
written to partition files. I have two questions based on this:


   1. What exactly is a partition file? What determines the number of
   partitions for a Cache?
   2. Are partition files immutable, or do dirty pages that are being
   written during a checkpoint overwrite previously existing pages in the
   partition files?

Thanks,
John


Quick question on Atomic Mode

2017-11-02 Thread John Wilson
Hi,

I'm in atomic mode and I do a put operation on my cache, and a power failure
happens in the middle of the put process... does Ignite use the WAL to
roll back a partial write? How does it guarantee that the atomic put either
succeeds completely or does nothing?

Thanks,


Re: Is WAL a memory-mapped file?

2017-11-01 Thread John Wilson
Excellent! Thanks!

On Wed, Nov 1, 2017 at 12:19 PM, Alexey Kukushkin  wrote:

> John,
>
> The default mode is the slowest of the 3 WAL write modes available. The
> other two are OS buffered write "LOG_ONLY" and Ignite buffered write
> "BACKGROUND".
>
> You might need some benchmarking in your real environment (hardware and
> software) with your specific task to understand if it is "slow". Often you
> can significantly speed things up with native persistence performance
> tuning like adjusting page size, using separate disk for WAL, SSD
> over-provisioning, etc. Please read about some tricks here
> 
> .
>
>


Re: Is WAL a memory-mapped file?

2017-11-01 Thread John Wilson
With WALMode = DEFAULT, which does a full-sync disk write, individual
put operations on a cache are then written to disk. That would make it
slow, right?

Thanks.

On Wed, Nov 1, 2017 at 10:58 AM, Dmitry Pavlov 
wrote:

> Hi John,
>
> No, the WAL consists of several constant-sized, append-only files
> (segments).
> Segments are rotated and deleted after (WAL_History_size) checkpoints.
> WAL is common for all caches.
>
> If you are interested in low-level details of the implementation, you can
> see it here in a draft wiki page for Ignite developers:
> https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-WALstructure
>
> Sincerely,
> Dmitriy Pavlov
>
> ср, 1 нояб. 2017 г. в 20:36, John Wilson :
>
>> Hi,
>>
>> Is the WAL a memory mapped file? Is it defined per cache?
>>
>> Thanks.
>>
>


Is WAL a memory-mapped file?

2017-11-01 Thread John Wilson
Hi,

Is the WAL a memory mapped file? Is it defined per cache?

Thanks.


Quick questions on replication and write synchronization mode

2017-10-30 Thread John Wilson
Hi,

1. Assume I write data item X with a FULL_ASYNC write synchronization mode,
what happens if I immediately attempt to read X? Will I read an old value
or do I wait till the previous writes are completed?
2. If the write mode is PRIMARY_SYNC, will immediate read operations on X
be answered by just the primary node, or will I read an old value or wait
for the previous write to complete?
3. During backups, do the backup nodes get data from the client or from the
primary node?

Thanks,


What is the purpose of the tag field in a lock state structure?

2017-10-17 Thread John Wilson
Hi,

I'm not clear on what the 2-byte TAG field in the lock state structure is
used for. Please explain.

https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/util/OffheapReadWriteLock.java#L26
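As guessed elsewhere in this archive, a tag of this kind guards CAS-based lock words against the ABA problem: if the lock word is recycled back to a previously seen value, the bumped tag still makes a stale compare-and-swap fail. The layout below (low 16 bits = tag, upper bits = count) is invented for illustration; Ignite's real OffheapReadWriteLock packs its state differently:

```java
import java.util.concurrent.atomic.AtomicLong;

public class TagLockSketch {
    // Invented layout: low 16 bits hold the tag, the rest a lock count.
    static long pack(long count, int tag) { return (count << 16) | (tag & 0xFFFF); }
    static int tag(long word)   { return (int) (word & 0xFFFF); }
    static long count(long word) { return word >>> 16; }

    public static void main(String[] args) {
        AtomicLong lockWord = new AtomicLong(pack(0, 7));
        long seen = lockWord.get();       // thread A reads the word

        // Meanwhile the word is reused: the count returns to the same
        // value, but the tag is bumped to mark the new "generation".
        lockWord.set(pack(1, 7));
        lockWord.set(pack(0, 8));         // same count, new tag

        // Thread A's CAS fails even though the count looks unchanged,
        // because the tag no longer matches: no ABA.
        boolean casOk = lockWord.compareAndSet(seen, pack(1, tag(seen)));
        System.out.println(casOk);        // false
    }
}
```

Without the tag bits, the final CAS would have succeeded against a logically different lock state.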

Thanks,
John


Atomicity Mode and WAL

2017-10-03 Thread John Wilson
Hi,

What is the purpose and difference in the use of the WAL in atomic mode vs.
in transaction mode?

Thanks


A quick question on Cache Data Consistency

2017-09-29 Thread John Wilson
Hi,

The documentation here,
https://apacheignite.readme.io/docs/primary-and-backup-copies, states that
regardless of write synchronization mode, cache data will always remain
fully consistent across all participating nodes. Yet, the Ignite book,
states that AP (of CAP theorem) is guaranteed under ATOMIC mode
(sacrificing consistency). I'm confused, please explain.

Thanks,
John


A quick question on Ignite's B+ tree implementation

2017-09-22 Thread John Wilson
Hi,

The internal nodes of a B+ tree, by definition, store only keys while the
leaf nodes store (or hold pointer to) the actual data.

The documentation here,
https://apacheignite.readme.io/docs/memory-architecture, states that each
index node (including internal nodes) stores information to access the data
page and the offset for the key in question (not just the leaf nodes).

Why call it a B+ tree?

Thanks,


Does Ignite write READ operations of Txs to WAL?

2017-09-19 Thread John Wilson
Hi,

Does Ignite write READ operations of transactions (e.g. for future auditing
purposes) in the WAL?

Thanks,


A quick question on WAL and Transaction

2017-09-14 Thread John Wilson
Hi,

Is an Ignite transaction, *with a recovery guarantee against power loss
(WALMode.DEFAULT),* considered committed only after its WAL log file has
been successfully *full-sync* written to disk? If so, doesn't this incur a
major slowdown?

Thanks,

WALMode:
https://github.com/apache/ignite/blob/15613e2af5e0a4a0014bb5c6d6f6915038b1be1a/modules/core/src/main/java/org/apache/ignite/configuration/WALMode.java
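The full-sync discipline being asked about can be sketched as: append the commit record, then force it to disk before acknowledging the commit. The file name, record format, and class below are invented for illustration; only the append-then-fsync ordering is the point:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class WalFsyncSketch {
    private FileChannel ch;

    WalFsyncSketch(Path file) {
        try {
            ch = FileChannel.open(file, StandardOpenOption.CREATE,
                    StandardOpenOption.WRITE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Returns only once the record is durable on disk: this per-commit
    // fsync is exactly the latency cost the question is about.
    void commit(byte[] record) {
        try {
            ch.write(ByteBuffer.wrap(record));
            ch.force(true); // the expensive part: one disk sync per commit
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Helper: write the given records through a temp WAL segment and
    // return the number of bytes durably appended.
    static long demo(String... records) {
        try {
            Path f = Files.createTempFile("wal", ".seg");
            WalFsyncSketch wal = new WalFsyncSketch(f);
            for (String r : records) wal.commit(r.getBytes());
            long n = wal.ch.size();
            wal.ch.close();
            Files.delete(f);
            return n;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo("tx1", "tx2")); // 6 bytes durably written
    }
}
```

The usual mitigation is batching: many commit records are grouped under one fsync, trading a little latency for much higher throughput.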


Re: Java-level locks on cache entries

2017-09-12 Thread John Wilson
That one is for locking pages while a checkpoint process is going on.

Thanks,

On Tue, Sep 12, 2017 at 6:57 AM, Konstantin Dudkov  wrote:

> Hi,
>
> Ignite uses page-level locks, see
> https://github.com/apache/ignite/blob/43be051cd33f0e35a5bf05fa3dbe73660d2dcdd2/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/util/PageHandler.java#L245
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Java-level locks on cache entries

2017-09-11 Thread John Wilson
Hi,

The code path of a cache operation, e.g. cache.put(key, value), involves
locking the entry (entries) at java-level:

https://github.com/apache/ignite/blob/788adc0d00e0455e06cc9acabfe9ad425fdcd65b/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicCache.java#L2845

Also, the lock is released only after the serialization and the ultimate
writing of the cache entry (entries) into the off-heap data page.

My question: if the locking is at the Java level, how do we avoid conflicts
between many threads writing to the same data page offset? Or is it the case
that, before a lock is acquired, the offset within the data page that the
entry will be written to is already known?

Thanks,


Re: Why is PageSize limited to just 16kB?

2017-09-08 Thread John Wilson
Thanks Dmitry!

On Fri, Sep 8, 2017 at 4:57 AM, Dmitry Pavlov  wrote:

> Hi John,
>
>
>
> A very long page would require Ignite to spend a lot of time loading the
> page from disk or writing it back during checkpointing. Ignite is able to
> change a field value pointwise within a page in case of an update. If too
> long a page were selected, one field update of, for example, 1 byte would
> require Ignite to checkpoint 16K+ of data at the next checkpoint.
>
> Going deeper into the technical details: the 16K limitation comes from the
> internal 2-byte addressing of data within a page. There is an internal
> offset named 'item' which is 2 bytes in length and has 2 flag bits in it.
> 2^14 = 16384.
>
>
>
> Sincerely,
>
> Dmitriy Pavlov
>
>
> пт, 8 сент. 2017 г. в 0:22, John Wilson :
>
>> Hi,
>>
>> Ignite sets the maximum possible size for a page to 16KB. Why? What are
>> the drawbacks of having bigger page sizes?
>>
>> https://github.com/apache/ignite/blob/bd7bd226d959fbc686f6104a048106b7b944347b/modules/core/src/main/java/org/apache/ignite/configuration/MemoryConfiguration.java#L179
>>
>> Thanks,
>>
>


Why is PageSize limited to just 16kB?

2017-09-07 Thread John Wilson
Hi,

Ignite sets the maximum possible size for a page to 16KB. Why? What are the
drawbacks of having bigger page sizes?

https://github.com/apache/ignite/blob/bd7bd226d959fbc686f6104a048106b7b944347b/modules/core/src/main/java/org/apache/ignite/configuration/MemoryConfiguration.java#L179
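The 16 KB ceiling falls out of the addressing arithmetic explained in the reply above: a 2-byte item offset with 2 bits reserved for flags leaves 14 usable bits. A one-liner makes the arithmetic concrete (constant names here are illustrative):

```java
public class PageAddressingSketch {
    // Per the reply above: items addressing data within a page are
    // 2 bytes (16 bits), of which 2 bits are flags, leaving 14 bits
    // of offset. 2^14 = 16384 addressable bytes, i.e. a 16 KB page cap.
    static final int ITEM_BITS = 16;
    static final int FLAG_BITS = 2;

    static int maxPageSize() {
        return 1 << (ITEM_BITS - FLAG_BITS);
    }

    public static void main(String[] args) {
        System.out.println(maxPageSize()); // 16384
    }
}
```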


Thanks,


Re: quick question on dirty pages and check pointing

2017-09-07 Thread John Wilson
Thanks!

On Thu, Sep 7, 2017 at 7:44 AM, Dmitry Pavlov  wrote:

> Hi John,
>
> The checkpointing process causes all dirty pages to be written to disk. As
> a result they become clean (non-dirty).
>
> When checkpointing is not running, a dirty page can't be evicted (if the
> persistent data storage mode is enabled). Only clean pages may be evicted
> from memory.
>
> At the same time, too many dirty pages (for example 75%) will trigger the
> checkpoint process.
>
> Sincerely,
> Dmitriy Pavlov
>
> чт, 7 сент. 2017 г. в 1:45, John Wilson :
>
>> Hi,
>>
>> Can old dirty pages be purged from memory to make more room? Or can only
>> clean old pages (pages that have already been written to disk by the
>> periodic checkpointing) be purged?
>>
>> If old dirty pages which have not been check-pointed can be purged, then
>> it means the purging process (in addition to checkpointing) will write
>> pages to disk?
>>
>> Thanks,
>>
>
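The rules Dmitriy states above (only clean pages are evictable; a checkpoint writes dirty pages out and marks them clean; a high dirty ratio such as 75% triggers a checkpoint) can be captured in a tiny model. The 75% figure is from the reply; everything else here is a hypothetical layout, not Ignite's code:

```java
import java.util.ArrayList;
import java.util.List;

public class DirtyPageSketch {
    static class Page {
        boolean dirty;
        Page(boolean d) { dirty = d; }
    }

    // Only clean pages may be evicted from memory.
    static boolean canEvict(Page p) { return !p.dirty; }

    // Too high a dirty ratio triggers a checkpoint.
    static boolean checkpointNeeded(List<Page> pages, double threshold) {
        long dirty = pages.stream().filter(p -> p.dirty).count();
        return (double) dirty / pages.size() >= threshold;
    }

    // A checkpoint writes every dirty page to disk, making it clean.
    static void checkpoint(List<Page> pages) {
        for (Page p : pages) p.dirty = false;
    }

    public static void main(String[] args) {
        List<Page> pages = new ArrayList<>();
        for (int i = 0; i < 4; i++) pages.add(new Page(i < 3)); // 3 of 4 dirty
        System.out.println(checkpointNeeded(pages, 0.75)); // true
        checkpoint(pages);
        System.out.println(canEvict(pages.get(0)));        // true: now clean
    }
}
```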


quick question on dirty pages and check pointing

2017-09-06 Thread John Wilson
Hi,

Can old dirty pages be purged from memory to make more room? Or can only
clean old pages (pages that have already been written to disk by the
periodic checkpointing) be purged?

If old dirty pages which have not been check-pointed can be purged, then it
means the purging process (in addition to checkpointing) will write pages
to disk?

Thanks,


With onHeapCacheEnabled = false, BinaryOnHeapOutputStream is still used, why?

2017-09-05 Thread John Wilson
Hi,

I'm running the CacheAPIExample below with no on-heap caching, locally
using IntelliJ.

The stack frame shows that the entry I put is written on-heap (using
BinaryHeapOutputStream) and not off-heap (using BinaryOffheapOutputStream).
What's going on?

try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
    System.out.println();
    System.out.println(">>> Cache API example started.");

    CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>();
    cfg.setOnheapCacheEnabled(false);
    cfg.setCacheMode(CacheMode.PARTITIONED);
    cfg.setName(CACHE_NAME);

    // Auto-close cache at the end of the example.
    try (IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg)) {
        // Demonstrate atomic map operations.
        cache.put(999, "777");
    }
    finally {
        // Distributed cache could be removed from cluster only by #destroyCache() call.
        ignite.destroyCache(CACHE_NAME);
    }
}

Stack Frame:

 at org.apache.ignite.internal.util.GridUnsafe.putByte(GridUnsafe.java:394)
 *at org.apache.ignite.internal.binary.streams.BinaryHeapOutputStream.unsafeWriteByte(BinaryHeapOutputStream.java:142)*
 at org.apache.ignite.internal.binary.BinaryWriterExImpl.writeIntFieldPrimitive(BinaryWriterExImpl.java:999)
 at org.apache.ignite.internal.binary.BinaryClassDescriptor.write(BinaryClassDescriptor.java:554)
 at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:206)
 at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:147)
 at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:134)
 at org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:251)
 at org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshal(CacheObjectBinaryProcessorImpl.java:732)
 at org.apache.ignite.internal.processors.cache.KeyCacheObjectImpl.valueBytes(KeyCacheObjectImpl.java:78)
 at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1682)
 - locked <0xfc5> (a org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry)
 at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2462)
 at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:1944)
 at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1797)
 at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1689)
 at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
 at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:480)
 at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:440)
 at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
 at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1170)
 at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:659)
 at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2334)
 at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2311)
 at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1005)
 at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:872)
 at org.apache.ignite.examples.datagrid.CacheApiExample.main(CacheApiExample.java:56)


Re: Data Page Locking

2017-09-05 Thread John Wilson
Thanks Mikhail! If I may ask two additional questions:


   1. Is there any difference between data page eviction and checkpointing
   (dirty pages being written to disk) when the persistent store is enabled?
   My understanding is yes, there is a difference: checkpointing is a
   periodic process of writing dirty pages, while data page eviction evicts
   pages to make more room in memory. Am I right?
   2. Data page eviction works by going through each entry and evicting
   entries which are not locked in active transactions. However, when we
   write out pages to disk in a checkpoint process, do we write out the
   entire page as a chunk, or do we write out all entries one-by-one?

Thanks

On Tue, Sep 5, 2017 at 11:03 AM, mcherkasov  wrote:

> Hi John,
>
> it's for internal use only. For example, a page can be locked for a
> checkpoint, to avoid writes to the page while we are writing it to disk.
>
> These bytes are used by the OffheapReadWriteLock class; you can look at
> its usage if you want to learn more about this.
>
> Thanks,
> Mikhail.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Data Page Locking

2017-09-04 Thread John Wilson
Hi,

Ignite documentation describes how and when entry-based locks are obtained,
both in atomic and transactional atomicity modes.

I was wondering why and when locks on data pages are required/requested --
PageMemoryImp.java shows that data pages have 8 bytes reserved for LOCK:

https://github.com/apache/ignite/blob/15613e2af5e0a4a0014bb5c6d6f6915038b1be1a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageMemoryImpl.java#L92

Thanks,


Re: Quick questions on Evictions

2017-09-04 Thread John Wilson
I appreciate the nice explanation. I got a few more questions:


   1. For the case where on-heap caching and persistence are both disabled,
   why does Ignite throw out outdated pages from off-heap? Why not throw an
   OOM error, since the outdated pages are not backed by a persistent store
   and throwing them away results in data loss?
   2. For off-heap eviction with the persistent store enabled, will entries
   evicted from data pages be written to disk (in case they are dirty) or
   will they be thrown away (which would imply that entries eligible for
   eviction must be clean and have already been written to disk by
   checkpointing)?
   3. Checkpointing works by locating dirty pages and writing them out. If
   a single entry in a data page is dirty (has been updated since the last
   checkpoint), will checkpointing write the entire data page (all entries)
   to the partition files or just the dirty entry?

Thanks!

On Mon, Sep 4, 2017 at 8:17 AM, dkarachentsev 
wrote:

> Hi,
>
> Assume you have disabled the on-heap cache and disabled persistence. In
> that case you may configure only the data page eviction mode; then
> outdated pages will be thrown away when no free memory is available for
> Ignite. Also you cannot configure per-entry eviction.
>
> OK, if you enable the on-heap cache, then Ignite will store on heap every
> entry that was read from off-heap (or disk). Subsequent reads of it will
> not require off-heap reads, and every update will write to off-heap. To
> limit the size of the on-heap cache you may set
> CacheConfiguration.setEvictionPolicy(), but it will not evict off-heap
> entries.
>
> So, off-heap eviction may be controlled with DataPageEvictionMode only,
> and, as you suggested, it clears entries one-by-one from a page, checking
> for current locks (transaction locks as well). If an entry is locked, it
> won't be evicted.
>
> Thanks!
> -Dmitry.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
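The scan-and-skip behavior Dmitry describes (eviction walks a page's entries and removes only those not locked by an active transaction) can be sketched with a plain map standing in for a data page. The data structure is purely illustrative:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class PageEvictionSketch {
    // Walk the "page" entry by entry; evict only unlocked entries and
    // return how many were removed. Locked entries survive the pass.
    static int evict(Map<String, Boolean> entries /* key -> locked? */) {
        int evicted = 0;
        Iterator<Map.Entry<String, Boolean>> it = entries.entrySet().iterator();
        while (it.hasNext()) {
            if (!it.next().getValue()) { // not locked -> evictable
                it.remove();
                evicted++;
            }
        }
        return evicted;
    }

    public static void main(String[] args) {
        Map<String, Boolean> page = new LinkedHashMap<>();
        page.put("k1", false);
        page.put("k2", true);  // locked in an active transaction
        page.put("k3", false);
        System.out.println(evict(page));   // 2
        System.out.println(page.keySet()); // [k2]
    }
}
```

This also illustrates why a page may not become fully empty after an eviction pass: locked entries are simply skipped.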


Quick questions on Evictions

2017-09-01 Thread John Wilson
Hi,

I have been reading through Ignite doc and I still have these questions. I
appreciate your answer.

Assume my Ignite native persistence is *not *enabled:


   1. if the on-heap cache is also not enabled, then there are no
   entry-based evictions, right?
   2. if the on-heap cache is now enabled, does a write to the on-heap
   cache also result in write-through or write-behind behavior to the
   off-heap entry?
   3. If the on-heap cache is not enabled but data page eviction mode is
   enabled, then where do evicted pages from off-heap go / get written to?

and I need confirmation on how data page eviction is implemented:

4. when data page eviction is initiated, Ignite works by iterating
through each entry in the page and evicting entries one by one. It may
happen that certain entries are involved in active transactions and hence
may not be evicted at all.


Thanks,


Ignite Locking Mechanisms

2017-08-30 Thread John Wilson
Hi all,

When a key is inserted/updated, does the data page the key-value goes into
get locked? If yes, how is this efficient, since insertions/updates of
other key-values that map to the same data page have to wait?

I appreciate a detailed explanation.

Thanks,


Quick questions on Virtual Memory and Cache Memory Modes

2017-08-03 Thread John Wilson
A few quick questions:

   1. Are the cache memory modes: OFFHEAP_TIERED, OFFHEAP_VALUES, and
   ONHEAP_TIERED, deprecated in Ignite 2.1?
   2. In version 2.1, if I don't enable persistence store, do all data and
   indexes get stored on heap, possibly causing OOM errors?
   3. Is the virtual memory used when persistence store is not enabled?

Thanks


Ignite Benchmark has very low CPU utilization, why?

2017-08-02 Thread John Wilson
Hi,

I was running the Yardstick-Ignite benchmark,
https://github.com/apacheignite/yardstick-ignite, on a 3-node cluster. I
used one node as the client node and the others as server nodes.

I ran the IgnitePutBenchmark and found that the CPU utilization on the
server nodes is very low (~20%). Can you suggest a configuration to push
the CPU utilization to the max? [All my nodes have 36 logical cores.]

On the published benchmark results that compare against Hazelcast, what was
the CPU utilization?

benchmark.properties:

# JVM options.
JVM_OPTS=${JVM_OPTS}" -DIGNITE_QUIET=false"
# Uncomment to enable concurrent garbage collection (GC) if you encounter long GC pauses.
JVM_OPTS=${JVM_OPTS}" \
-Xms6g \
-Xmx6g \
-XX:+UseParNewGC \
-XX:+UseConcMarkSweepGC \
"
# List of default probes.
# Add DStatProbe or VmStatProbe if your OS supports it (e.g. if running on Linux).
BENCHMARK_DEFAULT_PROBES=ThroughputLatencyProbe,PercentileProbe,DStatProbe
# Packages where the specified benchmark is searched by reflection mechanism.
BENCHMARK_PACKAGES=org.yardstickframework,org.apache.ignite.yardstick
# Deploy binaries to remote nodes.
AUTO_COPY=true
# Restart server after each benchmark.
RESTART_SERVERS=true
# Probe point writer class name.
# BENCHMARK_WRITER=
# Comma-separated list of the hosts to run BenchmarkServers on. 2 nodes on local host are enabled by default.
SERVER_HOSTS=server-node-1,server-node-2
# Comma-separated list of the hosts to run BenchmarkDrivers on. 1 node on local host is enabled by default.
DRIVER_HOSTS=localhost
# Remote username.
# REMOTE_USER=
# Number of nodes, used to wait for the specified number of nodes to start.
nodesNum=$((`echo ${SERVER_HOSTS} | tr ',' '\n' | wc -l` + `echo ${DRIVER_HOSTS} | tr ',' '\n' | wc -l`))
# Ignite version.
ver="ignite-1.9-"
# Backups count.
b=1
# Warmup.
w=60
# Duration.
d=300
# Threads count.
t=36
# Run configuration which contains all benchmarks.
# Note that each benchmark is set to run for 300 seconds (5 mins) with warm-up set to 60 seconds (1 minute).
CONFIGS="\
-cfg ${SCRIPT_DIR}/../config/ignite-localhost-config.xml -nn ${nodesNum} -b ${b} -w ${w} -d ${d} -t ${t} -sm PRIMARY_SYNC --client -dn IgnitePutBenchmark -sn IgniteNode -ds ${ver}atomic-put-${b}-backup,\
"


Re: Persistence Store and sun.misc.UNSAFE

2017-08-02 Thread John Wilson
I have both the server and client in debug mode and none of my breakpoints
were hit. Which Java classes and methods (running on the server node) do
the actual off-heap store operations for the data they get from clients?

Thanks,

On Wed, Aug 2, 2017 at 3:24 PM, vkulichenko 
wrote:

> Sami,
>
> Streamer is running on a client and it doesn't store any data. It just
> accumulates updates in local queues, batches them and sends to server
> nodes.
> The cache will then store the data on server side on off-heap memory.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Persistence-Store-and-sun-misc-UNSAFE-tp15920p15921.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Persistence Store and sun.misc.UNSAFE

2017-08-02 Thread John Wilson
Hi,

The link here, https://apacheignite.readme.io/v2.1/docs/durable-memory,
states that when the persistence feature is enabled, all data and indexes
are stored off-heap.

To verify, I ran
org.apache.ignite.examples.persistence.PersistenceStoreExample on Intellij
and followed the code path and it looks like every addData operation is
updating some on-heap buffer.

My question:

1. When we do a streamer.addData or cache.put operation, does the key-value
get stored immediately in an off-heap data page managed by the virtual
memory?
2. My assumption is that every off-heap read/write would use
sun.misc.Unsafe, but I don't see streamer.addData hitting the classes that
use sun.misc.Unsafe.putXYZ.

Thanks,
Sami


B+ Tree, Index Pages and Memory Region

2017-07-27 Thread John Wilson
I'm trying to understand the purpose of the B+ tree and index pages in
Apache Ignite, as described here:
https://apacheignite.readme.io/docs/page-memory

I have a few questions:


   1. What exactly does an index page contain? An ordered list of hash code
   values for the keys that fall into the index page, plus "other"
   information used to locate and index into the data page to store/get the
   key-value pair?
   2. Since hash codes are used in the index pages, what happens if a
   collision occurs?
   3. For a "typical" application, do we expect the number of data pages to
   be much higher than the number of index pages? (Since data pages contain
   key-value pairs.)
   4. What type of relation exists between a distributed cache that we
   create using ignite.getOrCreateCache(name) and a memory region? 1-to-1,
   Many-to-1, 1-to-Many, or Many-to-Many?
   5. Consider the following pseudo-code that starts two server nodes:

Ignite ignite = Ignition.start("two_server_node_config");
IgniteCache<Integer, String> cache = ignite.getOrCreateCache("my_cache");
cache.put(7, "abcd");

   1. How does Ignite determine the node to put the key into?
   2. Once the node is determined, how does Ignite locate the specific
   memory region the key belongs to?
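On question 1, the general shape of the mapping is two steps: key -> partition (a stable hash modulo the partition count) and partition -> node (an affinity assignment over the current topology; Ignite's default is a rendezvous-style affinity function). The sketch below only illustrates that shape; the method names and the trivial modulo node assignment are invented, not Ignite's actual algorithm:

```java
public class AffinitySketch {
    // Step 1: key -> partition. A stable function of the key's hash,
    // independent of which nodes are currently alive.
    static int partitionFor(Object key, int partitions) {
        int h = key.hashCode();
        return (h == Integer.MIN_VALUE ? 0 : Math.abs(h)) % partitions;
    }

    // Step 2: partition -> node. Placeholder round-robin assignment;
    // the real affinity function balances partitions across the
    // topology and also picks backup nodes.
    static String nodeFor(int partition, String[] nodes) {
        return nodes[partition % nodes.length];
    }

    public static void main(String[] args) {
        String[] nodes = {"node-1", "node-2"};
        int part = partitionFor(7, 1024);
        System.out.println(part);                 // 7
        System.out.println(nodeFor(part, nodes)); // node-2
    }
}
```

Because step 1 depends only on the key, any node can compute the owner of a key locally, without a directory lookup.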


Thanks