Re: read-through tutorial for a big table

2020-09-23 Thread vtchernyi
Hi Alex,

I have some good news.

>> experience and materials you mentioned in this thread

My tutorial is finally published:
https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api

Hope that helps,
Vladimir

12:52, 22 June 2020, Alex Panchenko:

> Hello Vladimir,
>
> I'm building a high-load service to handle intensive read-write operations
> using Apache Ignite. I need exactly the same thing: "loading big tables from
> an RDBMS (Postgres) and creating cache entries based on table info".
>
> Could you please share the experience and materials you mentioned in this
> thread? It would be much appreciated. I think it would help me and other
> Ignite users.
>
> BTW:
> "This approach was tested in production and showed good timing when paired
> with MSSQL, on tables from tens to hundreds of millions of rows."
>
> Is it possible to see some results of testing and/or performance metrics
> from before and after using Ignite?
>
> Thanks!
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/

--
Sent from the Yandex.Mail mobile app

Remote Cluster ServiceGrid debug

2020-09-23 Thread marble.zh...@coinflex.com
Hi,

I have a cluster running on a cloud server and have run into some issues. Is
it possible to connect a local client to the cluster to debug the service? If
so, how do I do that?

Thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Planning to pick Ignite 13006

2020-09-23 Thread Hemambara
Hi, I am interested in picking up
https://issues.apache.org/jira/browse/IGNITE-13006 and working on it. Please
let me know of any constraints on picking it up. I see the fix version on this
as 2.10, but I do not see any branch created. Could you please let me know if
we are planning to start 2.10 and whether this is the right time to pick up
this JIRA? If so, which branch should I take as a base to fork locally and
start working from?





Re: Planning to pick Ignite 13006

2020-09-23 Thread Ilya Kasnacheev
Hello!

You can open your pull request against the master branch. Once 2.10 is forked,
your change will be included.

Regards,
-- 
Ilya Kasnacheev


Wed, 23 Sep 2020 at 14:08, Hemambara:

> Hi, I am interested in picking up
> https://issues.apache.org/jira/browse/IGNITE-13006 and working on it. Please
> let me know of any constraints on picking it up. I see the fix version on
> this as 2.10, but I do not see any branch created. Could you please let me
> know if we are planning to start 2.10 and whether this is the right time to
> pick up this JIRA? If so, which branch should I take as a base to fork
> locally and start working from?
>
>
>
>


Re: Remote Cluster ServiceGrid debug

2020-09-23 Thread aealexsandrov
Hi,

There is too little information about your case. What kind of debugging are
you going to do?

Ignite has several monitoring tools. For example, you can use the Web
Console:

https://apacheignite-tools.readme.io/v2.8/docs/ignite-web-console

It may cover your needs.

BR,
Andrei







OutOfMemoryException with Persistence and Eviction Enabled

2020-09-23 Thread Mitchell Rathbun (BLOOMBERG/ 731 LEX)
We currently have a cache whose key is a record type and whose value is a map
from field id to field. To update this cache, which has persistence enabled,
we need to atomically load the value map for a key, add to that map, and write
the map back to the cache. This can be done using invokeAll and a
CacheEntryProcessor. However, when I test with a higher load (100k records
with 50 fields each), I run into an OOM exception that I will post below. The
cause of the exception is reported to be the failure to find a page to evict.
However, even when I set the DataRegion's eviction threshold to 0.5 and the
page eviction mode to RANDOM_2_LRU, I still get the same error. I have two
main questions about this:

1. Why is it failing to evict a page even with a lower threshold and eviction 
enabled? Is it failing to reach the threshold somehow? Are non-data pages like 
metadata and index pages taken into account when determining if the threshold 
has been reached?

2. We don't have this issue when using IgniteDataStreamer to write large
amounts of data to the cache; we just can't get transactional support at the
same time. Why is this OOME an issue with regular cache puts but not with
IgniteDataStreamer? I would think that any issues with checkpointing and
eviction would also occur with IgniteDataStreamer.
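[Editor's note] For reference, the data-region setup described above (eviction
threshold 0.5, RANDOM_2_LRU, persistence enabled) would look roughly like the
following in Spring XML. This is a hedged sketch: the region name is taken
from the exception in the follow-up message, while the sizes are illustrative
assumptions, not values confirmed in this thread.

```xml
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <!-- Region name as it appears in the exception; sizes are illustrative. -->
    <property name="name" value="customformulacalcrts"/>
    <property name="initialSize" value="#{190L * 1024 * 1024}"/>
    <property name="maxSize" value="#{190L * 1024 * 1024}"/>
    <property name="persistenceEnabled" value="true"/>
    <!-- Eviction settings as described in the question -->
    <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
    <property name="evictionThreshold" value="0.5"/>
</bean>
```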

Re: OutOfMemoryException with Persistence and Eviction Enabled

2020-09-23 Thread Mitchell Rathbun (BLOOMBERG/ 731 LEX)
Here is the exception:

Sep 22, 2020 7:58:22 PM java.util.logging.LogManager$RootLogger log
SEVERE: Critical system error detected. Will be handled accordingly to 
configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, 
super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, 
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
[type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException: Out of 
memory in data region [name=customformulacalcrts, initSize=190.7 MiB, 
maxSize=190.7 MiB, persistenceEnabled=true] Try the following:
  ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
  ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
  ^-- Enable eviction or expiration policies]]
class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Out of memory 
in data region [name=customformulacalcrts, initSize=190.7 MiB, maxSize=190.7 
MiB, persistenceEnabled=true] Try the following:
  ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
  ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
  ^-- Enable eviction or expiration policies
	at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.allocatePage(PageMemoryImpl.java:607)
	at org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList.allocateDataPage(AbstractFreeList.java:464)
	at org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList.insertDataRow(AbstractFreeList.java:491)
	at org.apache.ignite.internal.processors.cache.persistence.freelist.CacheFreeListImpl.insertDataRow(CacheFreeListImpl.java:59)
	at org.apache.ignite.internal.processors.cache.persistence.freelist.CacheFreeListImpl.insertDataRow(CacheFreeListImpl.java:35)
	at org.apache.ignite.internal.processors.cache.persistence.RowStore.addRow(RowStore.java:103)
	at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.createRow(IgniteCacheOffheapManagerImpl.java:1691)
	at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.createRow(GridCacheOffheapManager.java:1910)
	at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5701)
	at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5643)
	at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:3719)
	at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.access$5900(BPlusTree.java:3613)
	at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1895)
	at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1872)
	at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1779)
	at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1638)
	at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1621)
	at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.invoke(GridCacheOffheapManager.java:1935)
	at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:428)
	at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:4248)
	at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:4226)
	at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdateLocal(GridCacheMapEntry.java:2106)
	at org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.updateAllInternal(GridLocalAtomicCache.java:929)
	at org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.access$100(GridLocalAtomicCache.java:86)
	at org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache$6.call(GridLocalAtomicCache.java:776)
	at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6817)
	at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
	at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: 
Failed t

Ignite hangs getting cache size

2020-09-23 Thread arw180
I'm running a 5-node Ignite cluster, version 2.8.1, with persistence enabled
and a small number of partitioned caches, ranging from a few thousand records
to one cache with over a billion records. No SQL use.

When I run a Java client app and connect to the cluster (with clientMode =
true), I connect fine and can retrieve the names of all caches on the
cluster quickly. However, attempting to get the size of a cache via
ignite.getOrCreateCache("existingCacheName").size() just hangs. This happens
regardless of which cache I try to get the size of.

I do see a suspicious error after a minute or so: WARNING: Node FAILED:
TcpDiscoveryNode[...] - it appears to reference my client node, so perhaps
that's the problem. That said, I don't know why the node failed, what to do
about it, or why it seems to happen so frequently. There are no relevant logs
coming from any of the Ignite server nodes either.
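[Editor's note] Client nodes that pause (for example, during a long GC) can be
dropped from the topology with exactly this kind of "Node FAILED" warning. One
hedged thing to try, not confirmed as the cause in this thread, is raising the
failure detection timeouts on the client's IgniteConfiguration; the values
below are illustrative, not recommendations.

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Extra headroom before the cluster declares a slow client failed
         (values are illustrative) -->
    <property name="clientFailureDetectionTimeout" value="60000"/>
    <property name="failureDetectionTimeout" value="20000"/>
</bean>
```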

Thanks for your help!

Alan







Re: [DISCUSSION] Renaming Ignite's product category

2020-09-23 Thread Denis Magda
Adam,

You described GigaSpaces as a true in-memory computing platform. What does a
true platform mean to you?


-
Denis


On Fri, Sep 18, 2020 at 7:02 AM Carbone, Adam wrote:

> So when I came across Ignite, it was described as an in-memory data grid.
>
> So one way to look at this is: who do you see Ignite as competing
> against?
>
> Are you competing against Redis and Aerospike - in-memory databases?
>
> Or are you more competing with
>
> GigaSpaces - a true in-memory compute platform?
>
> And then you have the likes of
>
> Hazelcast, which started as a distributed hash and has gained some
> features...
>
> One thing that I think is a differentiator that isn't being highlighted,
> but is a unique feature of Ignite, and the primary reason we ended up
> here: the integration with Spark and its distributed/shared
> Datasets/DataFrames.
>
> I don't know; for me, the "in-memory data grid" label fits what Ignite
> is...
>
> Regards
>
> ~Adam
>
> Adam Carbone | Director of Innovation – Intelligent Platform Team |
> Bottomline Technologies
> Office: 603-501-6446 | Mobile: 603-570-8418
> www.bottomline.com
>
>
>
> On 9/17/20, 11:45 AM, "Glenn Wiebe"  wrote:
>
> I agree with Stephen about "database" devaluing what Ignite can do
> (though
> it probably hits the majority of existing use cases). I tend to go with
> "massively distributed storage and compute platform"
>
> I know, I didn't take sides, I just have both.
>
> Cheers,
>   Glenn
>
> On Thu., Sep. 17, 2020, 7:04 a.m. Stephen Darlington, <
> stephen.darling...@gridgain.com> wrote:
>
> > I think this is a great question. Explaining what Ignite does is
> always a
> > challenge, so having a useful “tag line” would be very valuable.
> >
> > I’m not sure what the answer is but I think calling it a “database”
> > devalues all the compute facilities. "Computing platform” may be too
> vague
> > but it at least says that we do more than “just” store data.
> >
> > On 17 Sep 2020, at 06:29, Valentin Kulichenko <
> > valentin.kuliche...@gmail.com> wrote:
> >
> > My vote is for the "distributed memory-first database". It clearly
> states
> > that Ignite is a database (which is true at this point), while still
> > emphasizing the in-memory computing power endorsed by the platform.
> >
> > The "in-memory computing platform" is an ambiguous term and doesn't
> really
> > reflect what Ignite is, especially in its current state.
> >
> > -Val
> >
> > On Wed, Sep 16, 2020 at 3:53 PM Denis Magda 
> wrote:
> >
> >> Igniters,
> >>
> >> Throughout the history of our project, we could see how the
> addition of
> >> certain features required us to reassess the project's name and
> category.
> >>
> >> Before Ignite joined the ASF, it supported only compute APIs
> resembling
> >> the
> >> MapReduce engine of Hadoop. Those days, it was fair to define
> Ignite as "a
> >> distributed in-memory computing engine". Next, at the time of the
> project
> >> donation, it already included key-value/SQL/transactional APIs, was
> used
> >> as
> >> a distributed cache, and significantly outgrew the "in-memory
> computing
> >> engine" use case. That's how the project transitioned to the product
> >> category of in-memory caches and we started to name it as an
> "in-memory
> >> data grid" or "in-memory computing platform" to differentiate from
> >> classical caching products such as Memcached and Redis.
> >>
> >> Nowadays, the project outgrew its caching use case, and the
> classification
> >> of Ignite as an "in-memory data grid" or "in-memory computing
> platform"
> >> doesn't sound accurate. We rebuilt our storage engine by replacing a
> >> typical key-value engine with a B-tree engine that spans across
> memory and
> >> disk tiers. And it's not surprising to see more deployments of
> Ignite as a
> >> database on its own. So, it feels like we need to reconsider Ignite
> >> positioning again so that a) application developers can discover it
> easily
> >> via search engines and b) the project can stand out from in-memory
> >> projects
> >> with intersecting capabilities.
> >>
> >> To the point, I'm suggesting to reposition Ignite in one of the
> following
> >> ways:
> >>
> >>1. Ignite is a "distributed X database". We are indeed a
> distributed
> >>partitioned database where X can be "multi-tiered" or
> "memory-first" to
> >>emphasize that we are more than an in-memory database.
> >>2. Keep defining Ignite as "an in-memory computing platform" but
> name
> >>our storage engine uniquely as "IgniteDB" to highlight that the
> >> platform is
> >>powered by a "distributed multi-tiered/memory-first database".
> >>
> >> What is your thinking?
> >>
> >>
> >> (Also, regardless of a selected name, Ignite still will be used as
> a ca