Re: Error Codes

2021-01-04 Thread Michael Cherkasov
Hi Ilya,

It's about logs only; I don't think we need this at the API level. Error
codes will make the solutions more searchable.
Plus, we can build troubleshooting guides based on them, which will help us
gather information from the user list and StackOverflow.

Even a solution for trivial cases will be helpful. Once I was asked to
join a call late in the evening because Ignite failed to copy a WAL file,
and there was simply no space left on the disk.
While the error was obvious to me, it's not obvious to all users.

Let's start with something simple: just assign error codes to
absolutely all exceptions first. So in the next year or two the user list
will be full of error codes and solutions for them.

Maybe it's a change for Ignite 3.0? @Val, I think you can help with this
question.

Any thoughts/comments?

Thanks,
Mike.

Sat, Jan 2, 2021 at 12:18, Ilya Kasnacheev:

> Hello!
>
> I don't think there's a direct link between an exception thrown in depths
> of Ignite code, and specific error which may be reported to user.
>
> A notorious example is CorruptedTreeException which is known to be thrown
> due to incorrect field type in binary object or bad SQL cast. So we could
> document it "If you get IGN13 error this means your persistence is
> corrupted beyond repair. This, or you have a typo in your SQL." - of course
> it will not help anyone.
>
> This means we can't get to the desired result by application of 1.
>
> There's got to be a different plan. First of all, we need to decide what's
> our target. Is it log, or is it API?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, Jan 1, 2021 at 02:07, Michael Cherkasov:
>
>> Hi folks,
>>
>> I was thinking about how we can simplify troubleshooting of Ignite
>> clusters. The best case, of course, is if the cluster can do self-healing,
>> like transaction cancellation if a tx blocks exchange, or node restart on
>> OOM error. However, sometimes those mechanisms don't work well or user
>> interaction is required.
>> Not all errors are obvious to users, and it's not clear what actions are
>> required to restore the cluster.
>> If you google exceptions or error messages, the results can be ambiguous
>> and uncertain, because different errors can produce similar exceptions and
>> you need to analyze the stack trace to distinguish them. So googling isn't
>> a straightforward and easy process in this case.
>> Almost all major DBs have error codes [1][2][3].
>> Let's do the same for Ignite: error codes are easy to google, so the
>> user/dev list will be significantly more useful. We can have documentation
>> with an error code registry and solutions for the errors.
>>
>> To implement this we need to do the following:
>> 1. all error messages/exceptions must have a unique error code(so, all
>> new PR must NOT be accepted if any exceptions/errors don't have error
>> codes.)
>> 2. to avoid error code duplication, all error codes will be stored as
>> files under some folder.
>> 3. those files can be a source of documentation for this error code.
>>
>> All these files can be empty at first, but later, if an exception appears
>> on the user list and someone finds a solution, then, first, other people
>> can easily google it by error code, and second, we can build documentation
>> for this error code based on the user-list thread/StackOverflow/other
>> sources.
>>
>> Any thoughts?
>>
>> [1] Mysql
>> https://dev.mysql.com/doc/refman/8.0/en/error-message-elements.html
>> [2] OracleDB https://docs.oracle.com/pls/db92/db92.error_search
>> [3] PostgreSQL https://www.postgresql.org/docs/10/errcodes-appendix.html
>>
>> Thanks,
>> Mike.
>>
>


Error Codes

2020-12-31 Thread Michael Cherkasov
Hi folks,

I was thinking about how we can simplify troubleshooting of Ignite
clusters. The best case, of course, is if the cluster can do self-healing,
like transaction cancellation if a tx blocks exchange, or node restart on
OOM error. However, sometimes those mechanisms don't work well or user
interaction is required.
Not all errors are obvious to users, and it's not clear what actions are
required to restore the cluster.
If you google exceptions or error messages, the results can be ambiguous
and uncertain, because different errors can produce similar exceptions and
you need to analyze the stack trace to distinguish them. So googling isn't
a straightforward and easy process in this case.
Almost all major DBs have error codes [1][2][3].
Let's do the same for Ignite: error codes are easy to google, so the
user/dev list will be significantly more useful. We can have documentation
with an error code registry and solutions for the errors.

To implement this we need to do the following:
1. all error messages/exceptions must have a unique error code(so, all new
PR must NOT be accepted if any exceptions/errors don't have error codes.)
2. to avoid error code duplication, all error codes will be stored as files
under some folder.
3. those files can be a source of documentation for this error code.

All these files can be empty at first, but later, if an exception appears on
the user list and someone finds a solution, then, first, other people can
easily google it by error code, and second, we can build documentation for
this error code based on the user-list thread/StackOverflow/other sources.
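
For illustration, here is a minimal sketch of what such a coded exception
could look like (the class name and code format are hypothetical, not an
agreed design):

public class IgniteCodedException extends RuntimeException {
    // Unique, stable code, e.g. "IGN-0042"; never reused once assigned.
    private final String errCode;

    public IgniteCodedException(String errCode, String msg, Throwable cause) {
        // Prefixing the message with the code makes every log line searchable.
        super("[" + errCode + "] " + msg, cause);
        this.errCode = errCode;
    }

    public String errorCode() {
        return errCode;
    }
}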

Any thoughts?

[1] Mysql
https://dev.mysql.com/doc/refman/8.0/en/error-message-elements.html
[2] OracleDB https://docs.oracle.com/pls/db92/db92.error_search
[3] PostgreSQL https://www.postgresql.org/docs/10/errcodes-appendix.html

Thanks,
Mike.


Issue with 2.8.1

2020-10-14 Thread Michael Weiss
I'm having a problem with starting a client node of Ignite in 2.8.1 (server
also on 2.8.1); it works fine with the server and client both on 2.7. Shortly
after it connects to my Ignite server, it dies on creating the bean for
QuartzDataSourceInitializer in its init method. Sorry, I can't really post
much in the way of logs, I don't have an easy way to do that. The "Caused by"
is "org.h2.jdbc.JdbcSQLException: Function "LOCK_MODE" not found; SQL
statement: CALL LOCK_MODE() [90022-197]".

Appreciate any help on troubleshooting this.


Re: ClusterGroupEmptyException: Topology projection is empty.

2020-09-11 Thread Michael Cherkasov
Visor, WebConsole and ControlCenter use the same mechanisms for cluster
monitoring, so if you use any of them and see this kind of error, you can
ignore it; it is harmless for your cluster.

Thu, Sep 10, 2020 at 06:42, kay:

> Hello,
>
> I don't use Visor for cluster monitoring.. then Why that log is showing
> up..?
>
>
> Thank you so much!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Segmentation Behaviour

2020-09-11 Thread Michael Cherkasov
BTW, you can try ZooKeeper discovery, I think it's the easiest way to
resolve the split-brain problem:
https://www.gridgain.com/docs/latest/developers-guide/clustering/zookeeper-discovery
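
A minimal sketch of wiring it up (the ZooKeeper connection string is a
placeholder, and the ignite-zookeeper module must be on the classpath):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi;

public class ZkDiscoveryStart {
    public static void main(String[] args) {
        ZookeeperDiscoverySpi zkSpi = new ZookeeperDiscoverySpi();
        // Placeholder addresses - point these at your ZooKeeper ensemble.
        zkSpi.setZkConnectionString("zk1:2181,zk2:2181,zk3:2181");
        zkSpi.setSessionTimeout(30_000);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(zkSpi);

        Ignition.start(cfg);
    }
}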

Fri, Sep 11, 2020 at 14:13, Michael Cherkasov:

> Make sure first you stop all nodes in one segment and only then start
> them, rolling restart might not fix cluster segmentation.
>
>
> Fri, Sep 11, 2020 at 09:08, Denis Magda:
>
>> Hi Samuel,
>>
>> With the current behavior, the segments will not rejoin automatically.
>> Once the network is recovered from a network partitioning event, you need
>> to restart all the nodes of one of the segments. Those nodes will join the
>> other nodes and the cluster will become fully operational.
>>
>> Let me know if you have any other questions or guidance with this.
>>
>> -
>> Denis
>>
>>
>> On Fri, Sep 11, 2020 at 7:38 AM Samuel Ueltschi <
>> samuel.uelts...@bsi-software.com> wrote:
>>
>>> Hi
>>>
>>>
>>>
>>> I've been testing Ignite (2.8.1) and it's behaviour under network
>>> segmentation.
>>>
>>> According to the docs, Ignite nodes should be able to detect network
>>> segmentation and apply the configured SegmentationPolicy.
>>>
>>>
>>>
>>> However the segmentation handling didn't trigger as I would have
>>> expected it to do.
>>>
>>> For my tests, I set up three cluster nodes c1, c2 and c3 running in
>>> docker containers, all competing for a shared IgniteLock instance in a loop.
>>>
>>> Then I used iptables in container c2 to drop all incoming and outgoing
>>> packets on that node.
>>>
>>> After a few seconds I got the following events:
>>>
>>>
>>>
>>> c1:
>>>
>>> - EVT_NODE_FAILED for c2
>>>
>>>
>>>
>>> c2:
>>>
>>> - EVT_NODE_FAILED for c1
>>>
>>> - EVT_NODE_FAILED for c3
>>>
>>>
>>>
>>> c3:
>>>
>>> - EVT_NODE_FAILED for c2
>>>
>>>
>>>
>>> Then I reset the iptables rules expecting that c2 would rejoin the
>>> cluster and detect segmentation.
>>>
>>> However this didn't happen, c2 just keeps running as a second standalone
>>> cluster instance.
>>>
>>> Only after restarting c2 it rejoined the cluster.
>>>
>>>
>>>
>>> Eventually I was able to trigger the EVT_NODE_SEGMENTED event by pausing
>>> the c2 container for 1 minute. After resuming, c2 detects the segmentation
>>> and runs the segmentation policy as expected.
>>>
>>>
>>>
>>> Is this behaviour correct? Shouldn't the Ignite cluster be able to
>>> recover from the first scenario?
>>>
>>> During a network segmentation no packets would be able to move between
>>> nodes, so the iptables approach should be realistic in my opinion.
>>>
>>>
>>>
>>> Maybe I have some wrong assumptions about network segmentation so any
>>> feedback would be greatly appreciated.
>>>
>>>
>>>
>>> Cheers Sam
>>>
>>>
>>>
>>> --
>>> Software Engineer
>>> BSI Business Systems Integration AG
>>> Erlachstrasse 16B, CH-3012 Bern
>>> Telefon +41 31 850 12 06
>>>
>>> www.bsi-software.com
>>>
>>>
>>>
>>


Re: Ignite Segmentation Behaviour

2020-09-11 Thread Michael Cherkasov
Make sure first you stop all nodes in one segment and only then start them,
rolling restart might not fix cluster segmentation.


Fri, Sep 11, 2020 at 09:08, Denis Magda:

> Hi Samuel,
>
> With the current behavior, the segments will not rejoin automatically.
> Once the network is recovered from a network partitioning event, you need
> to restart all the nodes of one of the segments. Those nodes will join the
> other nodes and the cluster will become fully operational.
>
> Let me know if you have any other questions or guidance with this.
>
> -
> Denis
>
>
> On Fri, Sep 11, 2020 at 7:38 AM Samuel Ueltschi <
> samuel.uelts...@bsi-software.com> wrote:
>
>> Hi
>>
>>
>>
>> I've been testing Ignite (2.8.1) and it's behaviour under network
>> segmentation.
>>
>> According to the docs, Ignite nodes should be able to detect network
>> segmentation and apply the configured SegmentationPolicy.
>>
>>
>>
>> However the segmentation handling didn't trigger as I would have expected
>> it to do.
>>
>> For my tests, I set up three cluster nodes c1, c2 and c3 running in docker
>> containers, all competing for a shared IgniteLock instance in a loop.
>>
>> Then I used iptables in container c2 to drop all incoming and outgoing
>> packets on that node.
>>
>> After a few seconds I got the following events:
>>
>>
>>
>> c1:
>>
>> - EVT_NODE_FAILED for c2
>>
>>
>>
>> c2:
>>
>> - EVT_NODE_FAILED for c1
>>
>> - EVT_NODE_FAILED for c3
>>
>>
>>
>> c3:
>>
>> - EVT_NODE_FAILED for c2
>>
>>
>>
>> Then I reset the iptables rules expecting that c2 would rejoin the
>> cluster and detect segmentation.
>>
>> However this didn't happen, c2 just keeps running as a second standalone
>> cluster instance.
>>
>> Only after restarting c2 it rejoined the cluster.
>>
>>
>>
>> Eventually I was able to trigger the EVT_NODE_SEGMENTED event by pausing
>> the c2 container for 1 minute. After resuming, c2 detects the segmentation
>> and runs the segmentation policy as expected.
>>
>>
>>
>> Is this behaviour correct? Shouldn't the Ignite cluster be able to
>> recover from the first scenario?
>>
>> During a network segmentation no packets would be able to move between
>> nodes, so the iptables approach should be realistic in my opinion.
>>
>>
>>
>> Maybe I have some wrong assumptions about network segmentation so any
>> feedback would be greatly appreciated.
>>
>>
>>
>> Cheers Sam
>>
>>
>>
>> --
>> Software Engineer
>> BSI Business Systems Integration AG
>> Erlachstrasse 16B, CH-3012 Bern
>> Telefon +41 31 850 12 06
>>
>> www.bsi-software.com
>>
>>
>>
>


Re: Ignite 2.8.1 ContinuousQuery: ClassNotFoundException on a client node

2020-09-10 Thread Michael Cherkasov
Hello Ivan,

Right now a CQ deploys its remote filter on all nodes, even on client nodes
that don't store any data, and this doesn't make sense; I think Ignite
should not deploy it at all on non-affinity nodes. I filed a ticket for
this: https://issues.apache.org/jira/browse/IGNITE-13432

Regarding the warning: the CQ is already deployed, but if you try to deploy
a new one while the client is online, it will fail. Just make sure that you
deploy all CQs while clients are offline, or put the remote filter on the
client nodes' classpath too; btw, peer class loading can help with this.

Thanks,
Mike.
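
P.S. For context, a minimal sketch of a continuous query with a remote
filter (cache and class names are hypothetical); the filter class is the
part that must be available on every node unless peer class loading is
enabled:

import javax.cache.configuration.FactoryBuilder;
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheEntryEventSerializableFilter;
import org.apache.ignite.cache.query.ContinuousQuery;

public class CqSketch {
    // This filter runs remotely; nodes that can't load this class produce
    // the "Failed to unmarshal continuous query remote filter" warning.
    public static class EvenKeys implements CacheEntryEventSerializableFilter<Integer, String> {
        @Override public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> evt) {
            return evt.getKey() % 2 == 0;
        }
    }

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
        qry.setLocalListener(evts ->
            evts.forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue())));
        qry.setRemoteFilterFactory(FactoryBuilder.factoryOf(EvenKeys.class));

        // Keep the returned cursor open for as long as the CQ should run.
        cache.query(qry);
    }
}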
Thu, Sep 10, 2020 at 10:14, :

> Hi everyone!
>
> I am getting ClassNotFoundException on a client node when a server node
> registers a new ContinuousQuery with a remote filter class that is not in
> the classpath of a client node. Could someone please clarify if this is the
> expected behavior and all the client nodes must have all remove filters in
> their classpath? I wonder if it is correct that client nodes are
> participating in continuous query execution at all?
>
> According to the ticket (
> https://issues.apache.org/jira/browse/IGNITE-11907) it looks like a
> client node can even break some functionality on a server node that tries
> to initialize a new continuous query.
>
> At the same time I see the following message when a new client node enters
> the topology:
>
> 55
> 10.09.20 18:55:00.407 [tcp-client-disco-msg-worker-#4]  WARN
> tcp.TcpDiscoverySpi-Failed to unmarshal continuous query remote filter on
> client node. Can be ignored.
> 10.09.20 18:55:00.410 [tcp-client-disco-msg-worker-#4]  WARN
> tcp.TcpDiscoverySpi-Failed to unmarshal continuous query remote filter on
> client node. Can be ignored.
>
> So in case when a new client node joins all the running queries continue
> to work as expected (at least it is written in the log).
>
> Best regards,
> Ivan Fedorenkov
>


Re: Ignite Node shutdown..

2020-09-08 Thread Michael Cherkasov
Hi, you don't have enough off-heap space for your data, so I'm wondering
what kind of tests you are trying to do?
Maybe you can enable persistence?
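
For reference, a rough sketch of giving that region more room, following the
hints from the error message below (the sizes are illustrative, not a
recommendation):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RegionSizing {
    public static void main(String[] args) {
        DataRegionConfiguration region = new DataRegionConfiguration();
        region.setName("FW_PSF_REGION"); // region name taken from the log below
        region.setInitialSize(64L * 1024 * 1024);  // 64 MiB
        region.setMaxSize(1024L * 1024 * 1024);    // raise the 100 MiB cap that was hit
        // Alternatively, let pages spill to disk instead of failing:
        region.setPersistenceEnabled(true);

        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setDataRegionConfigurations(region);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);

        Ignition.start(cfg);
    }
}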

Mon, Sep 7, 2020 at 23:26, kay:

> Hello, I was testing what happens if I use the max heap size..
>
> I got this error and node was shut down..
>
> rmtAddr's client socket port is allocated by OS and it is randomized.
>
>
>
>
>  [1;31m[2020.09.08 14:27:04.251] [ERROR] [] Critical system error detected.
> Will be handled accordingly to configured handler
> [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
> super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
> [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
> failureCtx=FailureContext [type=CRITICAL_ERROR, err=class
> o.a.i.i.mem.IgniteOutOfMemoryException: Out of memory in data region
> [name=FW_PSF_REGION, initSize=15.0 MiB, maxSize=100.0 MiB,
> persistenceEnabled=false] Try the following:
>^-- Increase maximum off-heap memory size
> (DataRegionConfiguration.maxSize)
>^-- Enable Ignite persistence
> (DataRegionConfiguration.persistenceEnabled)
>^-- Enable eviction or expiration policies]]
>   [m org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Out of
> memory in data region [name=FW_PSF_REGION, initSize=15.0 MiB, maxSize=100.0
> MiB, persistenceEnabled=false] Try the following:
>^-- Increase maximum off-heap memory size
> (DataRegionConfiguration.maxSize)
>^-- Enable Ignite persistence
> (DataRegionConfiguration.persistenceEnabled)
>^-- Enable eviction or expiration policies
>   at
>
> org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.ensureFreeSpaceForInsert(IgniteCacheDatabaseSharedManager.java:1063)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.persistence.RowStore.addRow(RowStore.java:106)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.createRow(IgniteCacheOffheapManagerImpl.java:1746)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.update(GridCacheMapEntry.java:6409)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:6160)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:5849)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:3817)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.access$5700(BPlusTree.java:3711)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1955)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1839)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1696)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1679)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:441)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2314)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2636)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:2097)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1914)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1714)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:300)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:486)
> [ignite-core-2.8.0.jar:2.8.0]
>   at
>
> 

Re: Increase the indexing speed while loading the cache from an RDBMS

2020-09-04 Thread Michael Cherkasov
>I've checked the RDBMS perf metrics. I do not see any spike in the CPU or
memory usage.
>Is there anything in particular that could cause a bottleneck from the DB
perspective ? The DB in use is a Postgres DB.

If you use a cache store, each node will read the full table you want to
load, but only the records that belong to that node will be saved (based on
data affinity), so Ignite will do N parallel reads of the same table, where
N is the number of nodes.
So if you have 50 nodes, you should use the data streamer instead, so that
one node reads the data and streams it to the cluster.
However, if you have a relatively small number of nodes, it's fine to use
a CacheStore to load data.

>Is there a sample/best practice reference code for Using Datastreamers to
directly query the DB to preload the cache ?
> My intention is to use Spring Data via IgniteRepositories. In order to
use an Ignite repository, I would have to preload the cache.
>And now, to preload the cache, I would have to query the DB. A bit of a
chicken and egg problem.
I would say, use JDBC and the Ignite Data Streamer; it's a one-time action
that you don't need to repeat often, and you can even write a separate Java
app to do this.
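
For illustration, a minimal sketch of that one-off loader (the connection
string, table and cache names are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class JdbcPreload {
    public static void main(String[] args) throws Exception {
        Ignite ignite = Ignition.start("client-config.xml"); // hypothetical client config

        try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("myCache");
             Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://db-host:5432/mydb", "user", "secret")) {

            conn.setAutoCommit(false); // Postgres needs this for cursor-based fetching
            try (Statement stmt = conn.createStatement()) {
                stmt.setFetchSize(10_000); // stream rows instead of materializing the table
                try (ResultSet rs = stmt.executeQuery("SELECT id, name FROM items")) {
                    while (rs.next())
                        streamer.addData(rs.getInt("id"), rs.getString("name"));
                }
            }
        } // closing the streamer flushes the remaining buffered entries
    }
}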

Thanks,
Mike.

Thu, Aug 27, 2020 at 14:01, Srikanta Patanjali:

> Thanks Anton & Mikhail for your responses.
>
> I'll try to disable the WAL and test the cache preload performance. Also
> will execute a JFR to capture some info.
>
> @Mikhail I've checked the RDBMS perf metrics. I do not see any spike in
> the CPU or memory usage. Is there anything in particular that could cause a
> bottleneck from the DB perspective ? The DB in use is a Postgres DB.
>
> @Anton Is there a sample/best practice reference code for Using
> Datastreamers to directly query the DB to preload the cache ? My intention
> is to use Spring Data via IgniteRepositories. In order to use an Ignite
> repository, I would have to preload the cache. And now, to preload the
> cache, I would have to query the DB. A bit of a chicken and egg problem.
>
> Regards,
> Srikanta
>
> On Thu, Aug 27, 2020 at 7:54 PM Mikhail Cherkasov 
> wrote:
>
>> btw, might be bottleneck is your RDBMS, it can just stream data slowly,
>> slower then ignite can save, if make sense to check this version too.
>>
>> On Thu, Aug 27, 2020 at 10:46 AM akurbanov  wrote:
>>
>>> Hi,
>>>
>>> Which API are you using to load the entries and what is the node
>>> configuration? I would recommend to share the configs and try utilizing
>>> the
>>> data streamer.
>>>
>>> https://apacheignite.readme.io/docs/data-streamers
>>>
>>> I would recommend recording a JFR to find where the VM spends most time.
>>>
>>> Best regards,
>>> Anton
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>
>>
>> --
>> Thanks,
>> Mikhail.
>>
>


Re: Ignite's memory consumption

2020-08-27 Thread Michael Cherkasov
512M of heap, plus 512M of off-heap, plus some space for Java metaspace; if
you don't set it, it is unlimited, so set it with: -XX:MaxMetaspaceSize=256m

so, in total it will consume up to 1.25GB.
But even with what you described, I don't see any problem: 760MB is less
than the limits you set for heap + off-heap.

Wed, Aug 26, 2020 at 09:17, Dana Milan:

> Hi all Igniters,
>
> I am trying to minimize Ignite's memory consumption on my server.
>
> Some background:
> My server has 16GB RAM, and is supposed to run applications other than
> Ignite.
> I use Ignite to store a cache. I use the TRANSACTIONAL_SNAPSHOT mode and I
> don't use persistence (configuration file attached). To read and update the
> cache I use SQL queries, through ODBC Client in C++ and through an
> embedded client-mode node in C#.
> My data consists of a table with 5 columns, and I guess around tens of
> thousands of rows.
> Ignite metrics tell me that my data takes 167MB ("CFGDataRegion region
> [used=167MB, free=67.23%, comm=256MB]", This region contains mainly this
> one cache).
>
> At the beginning, when I didn't tune the JVM at all, the Apache.Ignite
> process consumed around 1.6-1.9GB of RAM.
> After I've done some reading and research, I use the following JVM options
> which have brought the process to consume around 760MB as of now:
> -J-Xms512m
> -J-Xmx512m
> -J-Xmn64m
> -J-XX:+UseG1GC
> -J-XX:SurvivorRatio=128
> -J-XX:MaxGCPauseMillis=1000
> -J-XX:InitiatingHeapOccupancyPercent=40
> -J-XX:+DisableExplicitGC
> -J-XX:+UseStringDeduplication
>
> Currently Ignite is up for 29 hours on my server. When I only started the
> node, the Apache.Ignite process consumed around 600MB (after my data
> insertion, which doesn't change much after), and as stated, now it consumes
> around 760MB. I've been monitoring it every once in a while and this is not
> a sudden rise, it has been rising slowly but steadily ever since the node
> has started.
> I used DBeaver to look into node metrics system view
> , and I turned on the
> garbage collector logs. The garbage collector log shows that heap is
> constantly growing, but I guess this is due to the SQL queries and their
> results being stored there. (There are a few queries in a second, the
> results normally contain one row but can contain tens or hundreds of rows).
> After every garbage collection the heap usage is between 80-220MB. This is
> in accordance to what I see under HEAP_MEMORY_USED system view metric.
> Also, I can see that NONHEAP_MEMORY_COMITTED is around 102MB and
> NONHEAP_MEMORY_USED is around 98MB.
>
> My question is, what could be causing the constant growth in memory usage?
> What else consumes memory that doesn't appear in these metrics?
>
> Thanks for your help!
>


Re: Ignite client node raises "Sequence was removed from cache" after Ignite Server node restarts

2020-08-27 Thread Michael Cherkasov
Hi, as I remember, IgniteAtomicLong uses the default data region as storage,
so if you don't have persistence enabled for it, it will be lost after a
restart.
Try to enable persistence and check again.
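
A minimal sketch of what that could look like (assuming the atomic lives in
the default data region, as described above):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicLong;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistentSequence {
    public static void main(String[] args) {
        DataStorageConfiguration storage = new DataStorageConfiguration();
        // Persist the default data region, where atomics are kept by default.
        storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);

        Ignite ignite = Ignition.start(cfg);
        ignite.cluster().active(true); // persistent clusters need explicit activation

        IgniteAtomicLong userSeq = ignite.atomicLong("userSeq", 0, true);
        System.out.println(userSeq.incrementAndGet());
    }
}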

Wed, Aug 26, 2020 at 18:08, xmw45688:

> *USE CASE *- use IgniteAtomicLong for table sequence generation (may not be
> correct approach in a distributed environment).
>
> *Ignite Server *(start Ignite as server mode) -
> apache-ignite-2.8.0.20190215
> daily build
> *Ignite Service* (start Ignite as client mode) - use Ignite Spring to
> initialize the sequence, see code snippet below.
> *code snippet*
>
> IgniteAtomicLong userSeq;
>
> @Autowired
> UserRepository userRepository;
>
> @Autowired
> Ignite igniteInstance;
>
> @PostConstruct
> @Override
> public void initSequence() {
> Long maxId = userRepository.getMaxId();
> if (maxId == null)
>
> { maxId = 0L; }
> LOG.info("Max User id: {}", maxId);
> userSeq = igniteInstance.atomicLong("userSeq", maxId, true);
> userSeq.getAndSet(maxId);
> }
>
> @Override
> public Long getNextSequence() {
> return userSeq.incrementAndGet();
> }
>
> *Exception*
> This code works well until the Ignite Server is restarted (the Ignite
> Service was not restarted). It raised "Sequence was removed from cache"
> after the Ignite Server node restarted.
>
> 020-08-11 16:14:46 [http-nio-8282-exec-3] ERROR
> c.p.c.p.service.PersistenceService - Error while saving entity:
> java.lang.IllegalStateException: Sequence was removed from cache: userSeq
> at
>
> org.apache.ignite.internal.processors.datastructures.AtomicDataStructureProxy.removedError(AtomicDataStructureProxy.java:145)
> at
>
> org.apache.ignite.internal.processors.datastructures.AtomicDataStructureProxy.checkRemoved(AtomicDataStructureProxy.java:116)
> at
>
> org.apache.ignite.internal.processors.datastructures.GridCacheAtomicLongImpl.incrementAndGet(GridCacheAtomicLongImpl.java:94)
>
> *Tried to reinitialize when the server node is down. But raises another
> exception - "cannot start/stop cache within lock or transaction"*
>
> How to solve such issues?  Any suggestions are appreciated.
>
> @Override
> public Long getNextSequence() {
> if (useSeq == null || userSeq.removed())
>
> { initSeqence(); }
> return userSeq.incrementAndGet();
> }
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ignite partition mode

2020-08-27 Thread Michael Cherkasov
I'm not sure how you determined that only one node stores data, but if so,
it means only one node is in the baseline topology:
https://www.gridgain.com/docs/latest/developers-guide/baseline-topology#baseline-topology
Add the second node to the baseline and it will store data too.
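
This can be done with control.sh (the --baseline commands) or
programmatically; a minimal sketch of the latter:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ExtendBaseline {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Reset the baseline to the current topology version, i.e. include
        // every server node that is online right now.
        ignite.cluster().setBaselineTopology(ignite.cluster().topologyVersion());
    }
}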

Thu, Aug 27, 2020 at 15:08, Denis Magda:

> Explain the steps in detail
> -
> Denis
>
>
> On Wed, Aug 26, 2020 at 5:13 AM itsmeravikiran.c <
> itsmeravikira...@gmail.com> wrote:
>
>> By using ignite web console
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Increase the indexing speed while loading the cache from an RDBMS

2020-08-27 Thread Michael Cherkasov
Hi Srikanta,
Have you tried to load the data without indexes, to compare the time?

I think disabling the WAL during the data load can save you some hours:
https://www.gridgain.com/docs/latest/developers-guide/persistence/native-persistence#disabling-wal

Thanks,
Mike.

On Wed, Aug 26, 2020, 8:47 AM Srikanta Patanjali 
wrote:

> Currently I'm using Apache Ignite v2.8.1 to preload a cache from the
> RDBMS. There are two tables with each 27M rows. The index is defined on a
> single column of type String in 1st table and Integer in the 2nd table.
> Together the total size of the two tables is around 120GB.
>
> The preloading process (triggered using loadCacheAsync() from within a
> Java app) takes about 45hrs. The cache is persistence enabled and a
> common EBS volume (SSD) is being used for both the WAL and other locations.
>
> I'm unable to figure out the bottleneck for increasing the speed.
>
> Apart from defining a separate path for WAL and the persistence, is there
> any other way to load the cache faster (with indexing enabled) ?
>
>
> Thanks,
> Srikanta
>


Re: How to confirm that disk compression is in effect?

2020-08-27 Thread Michael Cherkasov
Could you please share your benchmark code? I believe the effect of
compression depends on the data you write; if it is fully random, it's
difficult to compress.
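
For reference, a minimal sketch of enabling disk page compression (assuming
Ignite 2.8 APIs, the ignite-compress module on the classpath, and a
placeholder cache name):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.DiskPageCompression;
import org.apache.ignite.configuration.IgniteConfiguration;

public class CompressionSketch {
    public static void main(String[] args) {
        DataStorageConfiguration storage = new DataStorageConfiguration();
        // The page must be larger than the file system block size (usually
        // 4K), otherwise compacted pages can't occupy fewer blocks on disk.
        storage.setPageSize(8192);
        storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        CacheConfiguration<Integer, byte[]> cacheCfg = new CacheConfiguration<>("test");
        cacheCfg.setDiskPageCompression(DiskPageCompression.ZSTD);
        cacheCfg.setDiskPageCompressionLevel(10);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);
        cfg.setCacheConfiguration(cacheCfg);

        Ignition.start(cfg);
    }
}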

On Wed, Aug 26, 2020, 8:26 PM 38797715 <38797...@qq.com> wrote:

> Hi,
>
> We turned on disk compression to observe the trend in execution time and
> disk space.
>
> Our expectation was that after disk compression is turned on, although more
> CPU is used, less disk space is occupied. Because more data is written per
> unit time, the overall execution time should be shortened in the case of
> insufficient memory.
>
> However, we found that the execution time and disk consumption do not
> change significantly. We tested diskPageCompressionLevel values of 0,
> 10 and 17 respectively.
>
> Our test method is as follows:
> The ignite-compress module has been introduced.
>
> The configuration of ignite is as follows:
> [The Spring XML configuration was stripped by the mail archive; what
> remains shows a standard Spring beans file declaring an IgniteConfiguration
> bean with a CacheConfiguration, but the individual property elements were
> lost.]
>


Re: Select clause is quite more slow than same Select on MySQL

2020-08-27 Thread Michael Cherkasov
Hi Manuel,

Ignite is a grid solution; it is supposed to scale horizontally. MySQL can
be more efficient in some cases, but it cannot scale well. Ignite will win
when you cannot fit the data on one machine and need to scale.
So, load more data, use more machines, and at some point Apache Ignite will
show better performance.

Thanks,
Mike.

On Thu, Aug 27, 2020, 4:51 AM manueltg89 
wrote:

> Hi!,
>
> I have a table with 4 million records, with only 1 node in Apache
> Ignite. I run a simple query to get all records:
>
> select * from items;
>
> I run the query in Apache Ignite and it takes 83 seconds; the same query
> in MySQL takes 33 seconds.
>
> Is it normal?
>
> Thanks in advance.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: NullPointerException while cluster startup

2020-03-13 Thread Michael Cherkasov
Hi Sumit,

Did you have a chance to verify the build from the link I sent to you?

Maybe you can share a reproducer so I can verify it on my own?

Thanks,
Mike.

Mon, Mar 9, 2020 at 09:26, Mikhail:

> Hello,
>
> Looks like it was fixed in GridGain Community edition:
> https://www.gridgain.com/resources/download
>
> Could you please verify the 8.7.12 release? I bet I've found the required
> fix, but it wasn't donated to Apache yet; as soon as you confirm that
> 8.7.12 works fine, I'll create a ticket to donate the fix to Apache Ignite.
>
> Thanks,
> Mike.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How much data is actually in RAM?

2019-06-14 Thread Michael Cherkasov
Hi John,

If you have native persistence enabled, then Ignite will evict data to
disk when there's no space left in RAM.
BUT maxSize still defines how much RAM Ignite may use! With persistence,
Ignite effectively has no limit on the total data size, it's just limited
by your disk; without persistence you will get an IgniteOOM at some point.

So your current configuration should fail, because Ignite will try to
allocate 72 GB of memory on start.

>1- If I get this right... Each node is using about 1GB on-heap JVM memory
each for the ignite process to run. Correct? Should this be set a bit
higher or that's ok?
Ignite stores data in off-heap, but heap will be used for everything else,
like message processing, marshalling and unmarshalling data, etc.
If you use SQL, Ignite will load data from off-heap to on-heap to send
records as a response to the client. Usually SQL is the main consumer of
heap, so you need to tune your heap size based on your needs.

>2- There is total 216GB off-heap so about 72GB off-heap per node. So how
much of that is stored in actual RAM in the remainder
>of what is left from 8GB of the physical host? Is that the 20% value
indicated in the docs? Or should maxSize be set to 6L and not 72L giving
2GB free to the OS and ignite process?
As I said, this option defines not the max size of the storage, but the max
size of the storage in RAM! Ignite will always use the RAM you allocate to
it for data storage; only if there is not enough memory for the data will it
evict data to disk.

Thanks,
Mike.


Fri, Jun 14, 2019 at 14:02, John Smith:

> Hi, so I have 3 machines with 8GB RAM and 96GB disk each.
>
> I have configured the persistence as
>
> <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>   <property name="persistenceEnabled" value="true"/>
>   <property name="maxSize" value="#{72L * 1024 * 1024 * 1024}"/>
> </bean>
>
> Looking at the logs:
> Topology snapshot [ver=3, locNode=xx, servers=3, clients=0,
> state=INACTIVE, CPUs=12, offheap=216.0GB, heap=3.0GB]
>
> 1- If I get this right... Each node is using about 1GB on-heap JVM memory
> each for the ignite process to run. Correct? Should this be set a bit
> higher or that's ok?
> 2- There is total 216GB off-heap so about 72GB off-heap per node. So how
> much of that is stored in actual RAM in the remainder of what is left from
> 8GB of the physical host? Is that the 20% value indicated in the docs? Or
> should maxSize be set to 6L and not 72L giving 2GB free to the OS and
> ignite process?
>
>
>
>
>


Re: Toxiproxy

2019-05-30 Thread Michael Cherkasov
Hi Delian,

I used it to test timeouts for client nodes; I attached the example, and I
think it can be adapted for your purposes.

>I've now set up Toxiproxy with a proxy (per node) for discovery,
>communication, shared mem and timeserver as the config file for each node
>allows me to explicitly set ports for these.
Ignite uses only communication and discovery by default. Shared memory
can't be proxied and isn't used by default, and the time server is an
obsolete configuration that isn't used anymore.

So you need to proxy only Discovery and Communication. In the attached
example I created a server node that listens on port 47600 for discovery
and 47200 for communication.
The client node has the following address in its discovery list:
"localhost:47500", so at port 47500 we have Toxiproxy. Also note that the
very first server node needs to connect to itself, so I
left "localhost:47600" in the discovery list.
Okay, now the client node will connect to the server node via Toxiproxy,
but we can't specify a particular address/port for the client to
communicate with the server, because Ignite uses autodiscovery and all
nodes send their communication address on node join via the discovery
protocol. And here's the trick:
to server node I added address resolver, it is used for heterogeneous
networks, when clients can't directly connect to servers and need to use
another set of addresses:

@NotNull private static AddressResolver getRslvr(String s) {
    return new AddressResolver() {
        @Override public Collection<InetSocketAddress> getExternalAddresses(
            InetSocketAddress addr) throws IgniteCheckedException {
            // Map the real communication port to the proxied one.
            List<InetSocketAddress> res = Collections.singletonList(
                new InetSocketAddress(addr.getHostName(),
                    addr.getPort() == 0 ? 0 : addr.getPort() - 100)
            );

            System.out.println(Thread.currentThread().getName() + " "
                + s + "resolve: " + addr + " ->" + res);
            return res;
        }
    };
}

So what does this code mean? The server listens on port 47200, but before
sending it to the client, it will ask the AddressResolver to convert its
address to something new; in this case, I just reduce the port number by
100. So the client will get the address localhost:47100 for communication.
However, even with the new address localhost:47100, the server node will
also send its original address localhost:47200 to the client; to get rid of
this original address I added these lines:

Map<String, Collection<InetSocketAddress>> userAttr =
    Collections.singletonMap("TcpCommunicationSpi.comm.tcp.addrs",
        Collections.emptyList());

igniteCfg.setUserAttributes(userAttr);

So now the client will get an empty list instead of the original
communication address and will have to use the address returned by the
AddressResolver, which is localhost:47100, where we have Toxiproxy.

I think this example can be adapted to your case.
If you have any further questions feel free to mail me; also, I would
appreciate it if you would share the result of your work.

Thanks,
Mike.

Wed, May 29, 2019 at 15:37, Delian:

> Is anyone aware whether Toxiproxy can be set up to sit between Ignite nodes
> in order to look at how things behave under various network conditions ? I
> am new to Ignite and am wondering whether my thinking is flawed.
>
> I have a simple 3 node cluster using the static TCP IP finder - all fine.
> I've now set up Toxiproxy with a proxy (per node) for discovery,
> communication, shared mem and timeserver as the config file for each node
> allows me to explicitly set ports for these.  Finally, the ip finders in
> the
> node configs point to the cluster nodes going through ToxiProxy - not
> direct.
>
> Nodes fire up but don't cluster. I'm seeing a lot of activity in the
> Toxiproxy console whereby nodes are sending requests on ports other than
> the above (in some cases incrementing, so I assume a range is being
> attempted). As I have not explicitly set these up in Toxiproxy, the
> requests seem to get routed to the upstream node on 47500 (service disco),
> which is obviously wrong in some cases. I see a number of open ports for
> the process - some of which I have set but some not, and they are not the
> same across the nodes.
>
> 1) Can I statically set all these ports (even if I knew what they were) so
> I
> can create proxies for them with the hope that allows me to cluster up ?
>
> 2) I believe a ring topology is in play - are the hosts/ip's set up in the
> service disco config always used, i.e. so everything goes through Toxiproxy
> or is there the possibility they will connect direct and bypass ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


ToxiproxyTest.java
Description: application/ms-java


Re: Can't use loadCache with IgniteBiPredicate

2019-04-17 Thread Michael Hubbard
I seem to have found the answer.  Changing the storeKeepBinary field didn't
seem to help, but I was able to change things by going to a generic
IgniteBiPredicate and casting inside the apply method.

public void initFoo(int key){ 
  IgniteCache myCache = getCache(Foo.class); 
  if (myCache != null){ 
myCache.loadCache(new IgniteBiPredicate() { 
  @Override 
  public boolean apply(Object integer, Object foo){ 
return (Integer)integer == key; 
  } 
}, null); 
  } 
} 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Can't use loadCache with IgniteBiPredicate

2019-04-17 Thread Michael Hubbard
This is the stacktrace for the exception:

Caused by: java.lang.ClassCastException:
org.apache.ignite.internal.binary.BinaryObjectImpl cannot be cast to
com.sms.Foo
at
com.sms.flexnet.maintenance.data.IgniteFooDAO$3.apply(IgniteFooDAO.java:164)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.loadEntry(GridDhtCacheAdapter.java:639)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.access$600(GridDhtCacheAdapter.java:99)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter$5.apply(GridDhtCacheAdapter.java:612)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter$5.apply(GridDhtCacheAdapter.java:608)
at
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter$3.apply(GridCacheStoreManagerAdapter.java:536)
at
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore$1.call(CacheAbstractJdbcStore.java:470)
at
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore$1.call(CacheAbstractJdbcStore.java:434)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)


For reference, the line in IgniteFooDAO that's listed in there is the
declaration of the IgniteBiPredicate apply method:

@Override
public boolean apply(Integer integer, Foo foo){





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Can't use loadCache with IgniteBiPredicate

2019-04-17 Thread Michael Hubbard
I'm trying to selectively initialize pieces of my IgniteCache that is
backed  by a postgres database.  When I use loadCache() with no arguments,
the entire cache loads fine.  However, the table I'm loading is very large,
and I want to use an IgniteBiPredicate to only load the pieces I care
about.  Whenever I try this, I get ClassCastExceptions.  Here is an example
of what I'm trying to do:

public void initFoo(int key){
  IgniteCache<Integer, Foo> myCache = getCache(Foo.class);
  if (myCache != null){
    myCache.loadCache(new IgniteBiPredicate<Integer, Foo>() {
  @Override
  public boolean apply(Integer integer, Foo foo){
return integer == key;
  }
}, null);
  }
}

Whenever I try this, I get ClassCastExceptions on the apply method of this
bipredicate, saying that BinaryObjectImpl cannot be cast to Foo--why is
what's coming out of the cache a BinaryObjectImpl?  How am I supposed to
use the IgniteBiPredicate?


Re: Binary Serialization of Enums By Name

2019-02-04 Thread Michael Cherkasov
Hi Stuart,

I think you can use the Binarylizable interface: you can implement your own
serialization for your enum.

Thanks,
Mike.
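
P.S. A rough sketch of the idea (class and field names are made up): make
the owning object Binarylizable and write the enum's name() instead of its
ordinal:

import org.apache.ignite.binary.BinaryObjectException;
import org.apache.ignite.binary.BinaryReader;
import org.apache.ignite.binary.BinaryWriter;
import org.apache.ignite.binary.Binarylizable;

public class EnumByNameHolder implements Binarylizable {
    public enum Status { NEW, ACTIVE, CLOSED }

    private Status status = Status.ACTIVE;

    @Override public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
        // Write the stable name rather than the ordinal, so inserting new
        // constants in the middle of the enum doesn't break old data.
        writer.writeString("status", status.name());
    }

    @Override public void readBinary(BinaryReader reader) throws BinaryObjectException {
        status = Status.valueOf(reader.readString("status"));
    }
}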

Mon, Feb 4, 2019 at 15:12, Stuart Macdonald:

> Igniters,
>
> I have some cache objects which contain enum fields, which when persisted
> through Ignite binary persistence are persisted using their enum ordinal
> position. However these enums are often modified whereby new values are
> inserted in the middle of the existing enum values, which breaks
> deserialization by ordinal. Does anyone know of a way to have Ignite
> serialize enums by name (ie. standard java enum serialization), or to allow
> for custom serialization routines for enums?
>
> Many thanks,
> Stuart.
>


Re: Spring Cloud + Jdbc client driver

2019-02-04 Thread Michael Cherkasov
Hi,
Could you please describe your question in more detail?
I think it's not clear what you are trying to ask.

Thanks,
Mike.

Mon, Feb 4, 2019 at 16:44, nikfuri:

> Hi! Is there any way to pass properties obtained through the config server
> to the JDBC config file?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: BufferUnderflowException on GridRedisProtocolParser

2018-10-24 Thread Michael Fong
Hi Stan,

Thanks for the suggestion.

I have tried to patch it, which works in 95% of the cases, but runs
into another random issue. I have mailed the dev list for suggestions.


Regards,

Michael

On Thu, Oct 11, 2018 at 7:10 PM Stanislav Lukyanov 
wrote:

> Well, you need to wait for the IGNITE-7153 fix then.
>
> Or contribute it! :)
>
> I checked the code, and it seems to be a relatively easy fix. One needs to
> alter the GridRedisProtocolParser
>
> to use ParserState in the way GridTcpRestParser::parseMemcachePacket does.
>
>
>
> Stan
>
>
>
> *From: *Michael Fong 
> *Sent: *October 11, 2018, 7:15
> *To: *user@ignite.apache.org
> *Subject: *Re: BufferUnderflowException on GridRedisProtocolParser
>
>
>
> Hi,
>
>
>
> The symptom looks very similar to IGNITE-7153, where the default tcp send
> buffer on that environment happens to be 4096
>
> [root]# cat /proc/sys/net/ipv4/tcp_wmem
>
> 4096
>
>
>
> Regards,
>
>
>
> On Thu, Oct 11, 2018 at 10:56 AM Michael Fong 
> wrote:
>
> Hi,
>
>
>
> Thank you for your response. Not on every request; we only see this for
> some specific ones - when elCnt (4270) > buf.limit (4096). We are trying
> to narrow down the data set to find the root cause.
>
>
>
> Thanks.
>
>
>
> Regards,
>
>
>
> On Thu, Oct 11, 2018 at 12:08 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
> Hello!
>
>
>
> Do you see this error on every request, or on some specific ones?
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Tue, Oct 9, 2018 at 15:54, Michael Fong:
>
> Hi, all
>
>
>
> We are evaluating Ignite's compatibility with the Redis protocol, and we
> hit the following issue:
>
> Does the stacktrace look a bit like IGNITE-7153?
>
>
>
> java.nio.BufferUnderflowException
>
> at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:151)
>
> at
> org.apache.ignite.internal.processors.rest.protocols.tcp.redis.GridRedisProtocolParser.readBulkStr(GridRedisProtocolParser.java:111)
>
> at
> org.apache.ignite.internal.processors.rest.protocols.tcp.redis.GridRedisProtocolParser.readArray(GridRedisProtocolParser.java:86)
>
> at
> org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestParser.decode(GridTcpRestParser.java:165)
>
> at
> org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestParser.decode(GridTcpRestParser.java:72)
>
> at
> org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:114)
>
> at
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
>
> at
> org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:3490)
>
> at
> org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:175)
>
> at
> org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1113)
>
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2339)
>
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
>
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
>
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>
> at java.lang.Thread.run(Thread.java:745)
>
> [2018-10-09
> 12:45:49,946][ERROR][grid-nio-worker-tcp-rest-1-#37][GridTcpRestProtocol]
> Closing NIO session because of unhandled exception.
>
> class org.apache.ignite.internal.util.nio.GridNioException: null
>
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2365)
>
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
>
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
>
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>
> at java.lang.Thread.run(Thread.java:745)
>
>
>
>
>


Re: Writing binary [] to ignite via memcache binary protocol

2018-10-20 Thread Michael Fong
Hi,

Thanks for pointing that out!

I checked the code, where BYTE_ARR_FLAG = (8 << 8),
so I set flags = 2048 in my C client program (linking the libmemcached
library), so that Apache Ignite would recognize the SET value as a byte
array successfully.
i.e.

rc = memcached_set(connection, key2, strlen(key2), value2,
strlen(value2), (time_t)0, (uint32_t)2048);


Regards,



On Thu, Oct 18, 2018 at 6:12 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> It will just take two first bytes from extras and turn them into a numeric
> variable.
>
> Later on, if flags & 0xff00 == 0x100b, then it is byte array, else it
> is string.
>
> You should find a way to make your implementation send '-128' as fourth
> byte (If I understand endianness correctly). You could look it up in
> tcpdump.
>
> Regards,
>
> --
> Ilya Kasnacheev
>
>
> Thu, Oct 18, 2018 at 10:33, Michael Fong:
>
>> Hi,
>>
>> Thanks for pointing out the mistake about the string data type. I used
>> python3, which supports the bytes type, and ran the test case. Ignite
>> still seems to treat the received value as a String type when decoding
>> the data. I would like to try with a simple C client program and see
>> how it works.
>>
>> In addition, I checked the source code, and it seems the data type of key
>> or value is
>> 1. computed via U.bytesToShort() method @
>> https://github.com/apache/ignite/blob/ignite-2.6/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/protocols/tcp/GridTcpRestParser.java#L664
>> 2. retrieved the type flag via masking bits off @
>> https://github.com/apache/ignite/blob/ignite-2.6/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/protocols/tcp/GridTcpRestParser.java#L749
>>
>> Could someone share some insight into what U.bytesToShort() does?
>>
>> Thanks.
>>
>>
>> On Wed, Oct 17, 2018 at 8:29 PM Ivan Pavlukhin
>> wrote:
>>
>>> Hi Michael,
>>>
>>> The troubles could be related to the Python library. It seems in Python
>>> 2.7 there is no such thing as a "byte array", and the value passed to the
>>> client is a string in this case.
>>> I checked that Ignite recognizes the byte array type and stores it as a
>>> byte array internally. I did the following experiment with Spymemcached [1].
>>> public class Memcached {
>>> public static void main(String[] args) throws IOException {
>>> MemcachedClient client = new MemcachedClient(
>>> new BinaryConnectionFactory(),
>>> AddrUtil.getAddresses("127.0.0.1:11211"));
>>>
>>> client.add("a", Integer.MAX_VALUE, new byte[]{1, 2, 3});
>>> client.add("b", Integer.MAX_VALUE, "123");
>>>
>>> System.out.println(Arrays.toString((byte[])client.get("a")));
>>> System.out.println(client.get("b"));
>>>
>>> System.exit(0);
>>> }
>>> }
>>>
>>> And I see expected output:
>>> [1, 2, 3]
>>> 123
>>>
>>> [1] https://mvnrepository.com/artifact/net.spy/spymemcached/2.12.3
>>>
>>> Wed, Oct 17, 2018 at 10:25, Ivan Pavlukhin:
>>>
>>>> Hi Michael,
>>>>
>>>> Answering one of your questions.
>>>> > Does ignite internally have a way to store the data type when cache
>>>> entry is stored?
>>>> Yes, internally Ignite maintains data types for stored keys and values.
>>>>
>>>> Could you confirm that for real memcached your example works as
>>>> expected? I will try to reproduce your Python example. It should not be hard
>>>> to check what exactly is stored inside Ignite.
>>>>
>>>> Wed, Oct 17, 2018 at 5:25, Michael Fong:
>>>>
>>>>> bump :)
>>>>>
>>>>> Could anyone please help to answer a newbie question? Thanks in
>>>>> advance!
>>>>>
>>>>> On Mon, Oct 15, 2018 at 4:22 PM Michael Fong 
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I was kind of able to reproduce it with a small python script
>>>>>>
>>>>>> import pylibmc
>>>>>>
>>>>>> client = pylibmc.Client (["127.0.0.1:11211"], binary=True)
>>>>>>
>>>>>>
>>>>>> ##abc
>>>>>> val = "abcd".decode("hex")
>>>>>> client.set("pyBin1", val)
>>>>>>
>>>>>> print "val decode w/ iso-8859-1: %s" % val.encode("hex")
>>>>>>
>>>>>> get_val = client.get("pyBin1")
>>>>>>
>>>>>> print "Value for 'pyBin1': %s" % get_val.encode("hex")
>>>>>>
>>>>>>
>>>>>> where the program intends to insert a byte[] into Ignite using the
>>>>>> memcache binary protocol.
>>>>>> The output is
>>>>>>
>>>>>> val decode w/ iso-8859-1: abcd
>>>>>> Value for 'pyBin1': *efbfbdefbfbd*
>>>>>>
>>>>>> where 'ef bf bd' is the UTF-8 replacement character.
>>>>>> Therefore, the value field seems to be treated as String in Ignite.
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Michael
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Oct 4, 2018 at 9:38 PM Maxim.Pudov 
>>>>>> wrote:
>>>>>>
>>>>>>> Hi, it looks strange to me. Do you have a reproducer?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>>>>>
>>>>>>
>>>>
>>>> --
>>>> Best regards,
>>>> Ivan Pavlukhin
>>>>
>>>
>>>
>>> --
>>> Best regards,
>>> Ivan Pavlukhin
>>>
>>


Re: Writing binary [] to ignite via memcache binary protocol

2018-10-18 Thread Michael Fong
Hi,

Thanks for pointing out the mistake about the string data type. I used
python3, which supports the bytes type, and ran the test case. Ignite
still seems to treat the received value as a String type when decoding
the data. I would like to try with a simple C client program and see
how it works.

In addition, I checked the source code, and it seems the data type of key
or value is
1. computed via U.bytesToShort() method @
https://github.com/apache/ignite/blob/ignite-2.6/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/protocols/tcp/GridTcpRestParser.java#L664
2. retrieved the type flag via masking bits off @
https://github.com/apache/ignite/blob/ignite-2.6/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/protocols/tcp/GridTcpRestParser.java#L749

Could someone share some insight into what U.bytesToShort() does?

Thanks.
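
(For anyone searching later, a sketch of what such a helper typically does;
this is illustrative only, not the actual Ignite source:)

// Illustrative only: combine two consecutive bytes into a short, most
// significant byte first (big-endian) - the byte order in which the
// memcached binary protocol encodes numeric fields such as flags.
static short bytesToShort(byte[] bytes, int off) {
    return (short)(((bytes[off] & 0xFF) << 8) | (bytes[off + 1] & 0xFF));
}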


On Wed, Oct 17, 2018 at 8:29 PM Ivan Pavlukhin wrote:

> Hi Michael,
>
> The troubles could be related to the Python library. It seems in Python 2.7
> there is no such thing as a "byte array", and the value passed to the
> client is a string in this case.
> I checked that Ignite recognizes the byte array type and stores it as a
> byte array internally. I did the following experiment with Spymemcached [1].
> public class Memcached {
> public static void main(String[] args) throws IOException {
> MemcachedClient client = new MemcachedClient(
> new BinaryConnectionFactory(),
> AddrUtil.getAddresses("127.0.0.1:11211"));
>
> client.add("a", Integer.MAX_VALUE, new byte[]{1, 2, 3});
> client.add("b", Integer.MAX_VALUE, "123");
>
> System.out.println(Arrays.toString((byte[])client.get("a")));
> System.out.println(client.get("b"));
>
> System.exit(0);
> }
> }
>
> And I see expected output:
> [1, 2, 3]
> 123
>
> [1] https://mvnrepository.com/artifact/net.spy/spymemcached/2.12.3
>
>> Wed, Oct 17, 2018 at 10:25, Ivan Pavlukhin:
>
>> Hi Michael,
>>
>> Answering one of your questions.
>> > Does ignite internally have a way to store the data type when cache
>> entry is stored?
>> Yes, internally Ignite maintains data types for stored keys and values.
>>
>> Could you confirm that for real memcached your example works as expected?
>> I will try to reproduce your Python example. It should not be hard to check
>> what exactly is stored inside Ignite.
>>
>> Wed, Oct 17, 2018 at 5:25, Michael Fong:
>>
>>> bump :)
>>>
>>> Could anyone please help to answer a newbie question? Thanks in advance!
>>>
>>> On Mon, Oct 15, 2018 at 4:22 PM Michael Fong 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I was kind of able to reproduce it with a small python script
>>>>
>>>> import pylibmc
>>>>
>>>> client = pylibmc.Client (["127.0.0.1:11211"], binary=True)
>>>>
>>>>
>>>> ##abc
>>>> val = "abcd".decode("hex")
>>>> client.set("pyBin1", val)
>>>>
>>>> print "val decode w/ iso-8859-1: %s" % val.encode("hex")
>>>>
>>>> get_val = client.get("pyBin1")
>>>>
>>>> print "Value for 'pyBin1': %s" % get_val.encode("hex")
>>>>
>>>>
>>>> where the program intends to insert a byte[] into Ignite using the
>>>> memcache binary protocol.
>>>> The output is
>>>>
>>>> val decode w/ iso-8859-1: abcd
>>>> Value for 'pyBin1': *efbfbdefbfbd*
>>>>
>>>> where 'ef bf bd' is the UTF-8 replacement character.
>>>> Therefore, the value field seems to be treated as String in Ignite.
>>>>
>>>> Regards,
>>>>
>>>> Michael
>>>>
>>>>
>>>>
>>>> On Thu, Oct 4, 2018 at 9:38 PM Maxim.Pudov  wrote:
>>>>
>>>>> Hi, it looks strange to me. Do you have a reproducer?
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>>>
>>>>
>>
>> --
>> Best regards,
>> Ivan Pavlukhin
>>
>
>
> --
> Best regards,
> Ivan Pavlukhin
>


Re: Writing binary [] to ignite via memcache binary protocol

2018-10-16 Thread Michael Fong
bump :)

Could anyone please help to answer a newbie question? Thanks in advance!

On Mon, Oct 15, 2018 at 4:22 PM Michael Fong  wrote:

> Hi,
>
> I was kind of able to reproduce it with a small python script
>
> import pylibmc
>
> client = pylibmc.Client (["127.0.0.1:11211"], binary=True)
>
>
> ##abc
> val = "abcd".decode("hex")
> client.set("pyBin1", val)
>
> print "val decode w/ iso-8859-1: %s" % val.encode("hex")
>
> get_val = client.get("pyBin1")
>
> print "Value for 'pyBin1': %s" % get_val.encode("hex")
>
>
> where the program intends to insert a byte[] into Ignite using the
> memcached binary protocol.
> The output is
>
> val decode w/ iso-8859-1: abcd
> Value for 'pyBin1': *efbfbdefbfbd*
>
> where 'ef bf bd' is the UTF-8 replacement character. Therefore, the value
> field seems to be treated as a String in Ignite.
>
> Regards,
>
> Michael
>
>
>
> On Thu, Oct 4, 2018 at 9:38 PM Maxim.Pudov  wrote:
>
>> Hi, it looks strange to me. Do you have a reproducer?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Writing binary [] to ignite via memcache binary protocol

2018-10-15 Thread Michael Fong
Hi,

I was more or less able to reproduce it with a small Python script:

import pylibmc

client = pylibmc.Client (["127.0.0.1:11211"], binary=True)


##abc
val = "abcd".decode("hex")
client.set("pyBin1", val)

print "val decode w/ iso-8859-1: %s" % val.encode("hex")

get_val = client.get("pyBin1")

print "Value for 'pyBin1': %s" % get_val.encode("hex")


where the program intends to insert a byte[] into Ignite using the memcached
binary protocol.
The output is

val decode w/ iso-8859-1: abcd
Value for 'pyBin1': *efbfbdefbfbd*

where 'ef bf bd' is the UTF-8 replacement character. Therefore, the value field
seems to be treated as a String in Ignite.

Regards,

Michael



On Thu, Oct 4, 2018 at 9:38 PM Maxim.Pudov  wrote:

> Hi, it looks strange to me. Do you have a reproducer?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Heap size

2018-10-11 Thread Michael Cherkasov
Hi Prasad,

G1 is a good choice for big heaps. If you see long GC pauses you need to
investigate them: GC tuning might be required, or you might just need to
increase the heap size; it depends on your case and is very workload-specific.
Also, please make sure that you use a heap of 31 GB, or more than ~40 GB;
sizes between 31 and 40 GB are wasteful, and the following article explains why:
https://blog.codecentric.de/en/2014/02/35gb-heap-less-32gb-java-jvm-memory-oddities/
Also, I want to clarify that Ignite stores data off-heap; heap is usually
required for SQL (Ignite brings data from off-heap onto the heap), for internal
structures, and of course for your compute tasks.
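For example (illustrative values only, to be tuned for your workload):

    -Xms31g -Xmx31g -XX:+UseG1GC -XX:MaxGCPauseMillis=500 -verbose:gc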

Thanks,
Mike.

чт, 11 окт. 2018 г. в 10:57, Prasad Bhalerao :

> Hi,
>
> Is anyone using on heap cache with more than 30 gb jvm heap size in
> production?
>
> Which gc algorithm are you using?
>
> If yes have you faced any issues relates to long gc pauses?
>
>
> Thanks,
> Prasad
>


Re: BufferUnderflowException on GridRedisProtocolParser

2018-10-10 Thread Michael Fong
Hi,

The symptom looks very much like IGNITE-7153; on that environment the default
TCP send buffer happens to be 4096:
[root]# cat /proc/sys/net/ipv4/tcp_wmem
4096
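
If that is the case, a possible workaround (the values here are an assumption,
adjust for your environment) is to raise the send buffer sizes, e.g.:
[root]# sysctl -w net.ipv4.tcp_wmem="4096 87380 4194304"
(the three values are min / default / max buffer sizes in bytes)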

Regards,

On Thu, Oct 11, 2018 at 10:56 AM Michael Fong  wrote:

> Hi,
>
> Thank you for your response. Not on every request; we only see this for
> some specific ones - when elCnt (4270) > buf.limit (4096). We are trying
> to narrow down the data set to find the root cause.
>
> Thanks.
>
> Regards,
>
> On Thu, Oct 11, 2018 at 12:08 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> Do you see this error on every request, or on some specific ones?
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> вт, 9 окт. 2018 г. в 15:54, Michael Fong :
>>
>>> Hi, all
>>>
>>> We are evaluating Ignite's compatibility with the Redis protocol, and we
>>> hit the following issue:
>>> Does the stacktrace look a bit like IGNITE-7153?
>>>
>>> java.nio.BufferUnderflowException
>>> at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:151)
>>> at
>>> org.apache.ignite.internal.processors.rest.protocols.tcp.redis.GridRedisProtocolParser.readBulkStr(GridRedisProtocolParser.java:111)
>>> at
>>> org.apache.ignite.internal.processors.rest.protocols.tcp.redis.GridRedisProtocolParser.readArray(GridRedisProtocolParser.java:86)
>>> at
>>> org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestParser.decode(GridTcpRestParser.java:165)
>>> at
>>> org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestParser.decode(GridTcpRestParser.java:72)
>>> at
>>> org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:114)
>>> at
>>> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
>>> at
>>> org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:3490)
>>> at
>>> org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:175)
>>> at
>>> org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1113)
>>> at
>>> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2339)
>>> at
>>> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
>>> at
>>> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
>>> at
>>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>>> at java.lang.Thread.run(Thread.java:745)
>>> [2018-10-09
>>> 12:45:49,946][ERROR][grid-nio-worker-tcp-rest-1-#37][GridTcpRestProtocol]
>>> Closing NIO session because of unhandled exception.
>>> class org.apache.ignite.internal.util.nio.GridNioException: null
>>> at
>>> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2365)
>>> at
>>> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
>>> at
>>> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
>>> at
>>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>>> at java.lang.Thread.run(Thread.java:745)
>>>
>>>


Re: Writing binary [] to ignite via memcache binary protocol

2018-10-10 Thread Michael Fong
Hi,

We have a libmemcached (C) client writing pure binary byte[] to Ignite and
another Java client reading from it. We suspect Ignite stores the value as a
String (UTF-8), perhaps related to IGNITE-7028 - even though in the packet
dump the data type flag is raw bytes (0).
As a workaround, we modified the code a bit to return a raw byte[] for the
value field, as shown in
https://github.com/mcfongtw/ignite/blob/a112bc53febd3903817b3837f7fd90b3257fa18b/modules/core/src/main/java/org/apache/ignite/internal/processors/rest/protocols/tcp/GridTcpRestParser.java

Thanks and Regards,


On Thu, Oct 4, 2018 at 9:38 PM Maxim.Pudov  wrote:

> Hi, it looks strange to me. Do you have a reproducer?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Does Ignite support GPB serialized data

2018-10-10 Thread Michael Fong
Very clear answer. Thank you.

On Mon, Oct 8, 2018 at 4:18 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Ignite does not have special support for protocol buffers.
>
> You are welcome to implement Binarylizable or Externalizable interfaces on
> your objects to specify serialization for them.
>
> You can also specify BinarySerializer for types that you do not control by
> putting them into BinaryConfiguration.setTypeConfigurations() and using
> that one with IgniteConfiguration:
>
> https://apacheignite.readme.io/docs/binary-marshaller#section-configuring-binary-objects
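>
> A rough sketch of the Binarylizable route (MyProtoMessage here is an assumed
> Protobuf-generated class, for illustration only):
>
>     public class MyMessageHolder implements Binarylizable {
>         private MyProtoMessage msg; // assumed Protobuf-generated type
>
>         @Override public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
>             // store the already-serialized Protobuf payload as a byte array field
>             writer.writeByteArray("data", msg.toByteArray());
>         }
>
>         @Override public void readBinary(BinaryReader reader) throws BinaryObjectException {
>             try {
>                 msg = MyProtoMessage.parseFrom(reader.readByteArray("data"));
>             }
>             catch (Exception e) {
>                 throw new BinaryObjectException(e);
>             }
>         }
>     }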
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> ср, 3 окт. 2018 г. в 18:24, Michael Fong :
>
>> Hi, all
>>
>> We have protocol buffer serialized binary data that we would like to store
>> in Ignite, and wonder if Ignite supports GPB serialization out of the
>> box?
>>
>> If not, which serialization interface do we need to implement to
>> customize and override in the XML?
>>
>> Thanks in advance
>>
>


Re: BufferUnderflowException on GridRedisProtocolParser

2018-10-10 Thread Michael Fong
Hi,

Thank you for your response. Not on every request; we only see this for
some specific ones - when elCnt (4270) > buf.limit (4096). We are trying
to narrow down the data set to find the root cause.

Thanks.

Regards,

On Thu, Oct 11, 2018 at 12:08 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> Do you see this error on every request, or on some specific ones?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> вт, 9 окт. 2018 г. в 15:54, Michael Fong :
>
>> Hi, all
>>
>> We are evaluating Ignite's compatibility with the Redis protocol, and we hit
>> the following issue:
>> Does the stacktrace look a bit like IGNITE-7153?
>>
>> java.nio.BufferUnderflowException
>> at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:151)
>> at
>> org.apache.ignite.internal.processors.rest.protocols.tcp.redis.GridRedisProtocolParser.readBulkStr(GridRedisProtocolParser.java:111)
>> at
>> org.apache.ignite.internal.processors.rest.protocols.tcp.redis.GridRedisProtocolParser.readArray(GridRedisProtocolParser.java:86)
>> at
>> org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestParser.decode(GridTcpRestParser.java:165)
>> at
>> org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestParser.decode(GridTcpRestParser.java:72)
>> at
>> org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:114)
>> at
>> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
>> at
>> org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:3490)
>> at
>> org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:175)
>> at
>> org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1113)
>> at
>> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2339)
>> at
>> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
>> at
>> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
>> at
>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>> at java.lang.Thread.run(Thread.java:745)
>> [2018-10-09
>> 12:45:49,946][ERROR][grid-nio-worker-tcp-rest-1-#37][GridTcpRestProtocol]
>> Closing NIO session because of unhandled exception.
>> class org.apache.ignite.internal.util.nio.GridNioException: null
>> at
>> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2365)
>> at
>> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
>> at
>> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
>> at
>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>> at java.lang.Thread.run(Thread.java:745)
>>
>>


BufferUnderflowException on GridRedisProtocolParser

2018-10-09 Thread Michael Fong
Hi, all

We are evaluating Ignite's compatibility with the Redis protocol, and we hit
the following issue:
Does the stacktrace look a bit like IGNITE-7153?

java.nio.BufferUnderflowException
at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:151)
at
org.apache.ignite.internal.processors.rest.protocols.tcp.redis.GridRedisProtocolParser.readBulkStr(GridRedisProtocolParser.java:111)
at
org.apache.ignite.internal.processors.rest.protocols.tcp.redis.GridRedisProtocolParser.readArray(GridRedisProtocolParser.java:86)
at
org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestParser.decode(GridTcpRestParser.java:165)
at
org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestParser.decode(GridTcpRestParser.java:72)
at
org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:114)
at
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at
org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:3490)
at
org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:175)
at
org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1113)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2339)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
[2018-10-09
12:45:49,946][ERROR][grid-nio-worker-tcp-rest-1-#37][GridTcpRestProtocol]
Closing NIO session because of unhandled exception.
class org.apache.ignite.internal.util.nio.GridNioException: null
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2365)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)


Does Ignite support GPB serialized data

2018-10-03 Thread Michael Fong
Hi, all

We have protocol buffer serialized binary data that we would like to store in
Ignite, and wonder if Ignite supports GPB serialization out of the box?

If not, which serialization interface do we need to implement to customize
and override in the XML?

Thanks in advance


Writing binary [] to ignite via memcache binary protocol

2018-10-03 Thread Michael Fong
Hi, all,


New user to Ignite here.

We are evaluating the possibility of replacing memcached with Apache Ignite
(v2.6). We noticed recently that when we set a byte[] using the
net.spy.memcached client, the type of the cached object in Ignite is sometimes
set as String. Thus, when we get the value of the entry, it cannot always be
converted back to byte[] properly, unless we explicitly specify a byte[]
transcoder when the client performs the get(key, transcoder) operation.

Does Ignite internally have a way to store the data type when a cache entry
is stored? This would help a lot, since we might be working with drivers for
different languages as well.

Thanks in advance!


Re: Batch insert into ignite

2018-07-25 Thread Michael Cherkasov
Hi again,

One more option, as a pure SQL solution:
https://apacheignite-sql.readme.io/docs/copy
https://apacheignite-sql.readme.io/docs/set

However, IgniteDataStreamer should work faster because, as far as I know, these
SQL commands use the data streamer as their underlying implementation.
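
For illustration, the SQL route looks roughly like this (a sketch: the file
path and columns are assumptions, and these commands run through the thin
JDBC driver):

    -- bulk-load a CSV file
    COPY FROM '/path/to/city.csv' INTO city (id, name) FORMAT CSV;

    -- or stream ordinary INSERTs
    SET STREAMING ON;
    INSERT INTO city (id, name) VALUES (1, 'Forest Hill');
    SET STREAMING OFF;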

Thanks,
Mike.

2018-07-25 7:02 GMT+03:00 Sriveena Mattaparthi <
sriveena.mattapar...@ekaplus.com>:

> Thanks for the clarification Michael..will try the streamer option.
>
>
>
> Regards,
>
> Sriveena
>
>
>
> *From:* Michael Cherkasov [mailto:michael.cherka...@gmail.com]
> *Sent:* Tuesday, July 24, 2018 9:49 PM
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Batch insert into ignite
>
>
>
> I can't say for sure which one is better; it must be benchmarked.
>
> But if you can use SqlFieldsQuery this means that you can use native
> Ignite API, right? So, in this case, IgniteDataStreamer
> <https://apacheignite.readme.io/docs/data-streamers>
> would be the best choice.
>
>
>
>
>
> 2018-07-24 9:36 GMT+03:00 Sriveena Mattaparthi <
> sriveena.mattapar...@ekaplus.com>:
>
> Hi Mikhail,
>
> Do we have this streaming/batch insert option if we do a bulk insert
> using
> cache.query(new SqlFieldsQuery("sql"))?
>
> Also could you please confirm which one would perform better
> 1. Insert using IgniteJdbcDriver
> 2. Insert using SqlFieldsQuery
>
> Thanks & Regards,
> Sriveena
>
> -Original Message-
> From: Mikhail [mailto:michael.cherka...@gmail.com]
> Sent: Monday, July 23, 2018 10:32 PM
> To: user@ignite.apache.org
> Subject: Re: Batch insert into ignite
>
> Hi Sriveena,
>
> for data load you can try to use streaming mode for jdbc:
> https://apacheignite.readme.io/docs/jdbc-driver#section-streaming-mode
> it will provide faster insert operations compared to regular inserts; also,
> you can disable the WAL:
> https://apacheignite.readme.io/docs/write-ahead-log#section-wal-activation-and-deactivation
> to get better throughput.
>
> Thanks,
> Mike.
>
>
>
> --
>
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>


Re: Batch insert into ignite

2018-07-24 Thread Michael Cherkasov
I can't say for sure which one is better; it must be benchmarked.
But if you can use SqlFieldsQuery this means that you can use native Ignite
API, right? So, in this case, IgniteDataStreamer
<https://apacheignite.readme.io/docs/data-streamers> would be the best
choice.
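
A minimal sketch of the streamer approach (cache and class names here are
illustrative, not taken from your code):

    try (IgniteDataStreamer<Long, Person> stmr = ignite.dataStreamer("personCache")) {
        stmr.allowOverwrite(false); // fastest mode; existing keys are skipped
        for (Person p : people)
            stmr.addData(p.getId(), p);
    } // close() flushes any remaining buffered entries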


2018-07-24 9:36 GMT+03:00 Sriveena Mattaparthi <
sriveena.mattapar...@ekaplus.com>:

> Hi Mikhail,
>
> Do we have this streaming/batch insert option if we do a bulk insert
> using
> cache.query(new SqlFieldsQuery("sql"))?
>
> Also could you please confirm which one would perform better
> 1. Insert using IgniteJdbcDriver
> 2. Insert using SqlFieldsQuery
>
> Thanks & Regards,
> Sriveena
>
> -Original Message-
> From: Mikhail [mailto:michael.cherka...@gmail.com]
> Sent: Monday, July 23, 2018 10:32 PM
> To: user@ignite.apache.org
> Subject: Re: Batch insert into ignite
>
> Hi Sriveena,
>
> for data load you can try to use streaming mode for jdbc:
> https://apacheignite.readme.io/docs/jdbc-driver#section-streaming-mode
> it will provide faster insert operations compared to regular inserts; also,
> you can disable the WAL:
> https://apacheignite.readme.io/docs/write-ahead-log#section-wal-activation-and-deactivation
> to get better throughput.
>
> Thanks,
> Mike.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Example of SQL query

2018-05-08 Thread michael
Can someone give an example of an in-memory cache being queried via REST?

Here is a simple person class (cut down from example code):
Person.java

and code to populate it:
PersonTest.java

Now I try to query from a browser:

cache size works (just to prove I'm connected and the cache is there):
http://127.0.0.1:8080/ignite?cmd=size&cacheName=person
gives
{"successStatus":0,"affinityNodeId":null,"error":null,"response":4,"sessionToken":null}

but a simple query by id fails:
http://127.0.0.1:8080/ignite?cmd=qryexe&cacheName=person&pageSize=1&type=String&arg1=1&qry=id%20%3D%20%3F
gives
{"successStatus":1,"error":"class org.apache.ignite.IgniteException:
null","response":null,"sessionToken":null}

incidentally, metadata fails too:
http://127.0.0.1:8080/ignite?cmd=metadata&cacheName=person
{"successStatus":1,"error":"Failed to handle request: [req=CACHE_METADATA,
err=Failed to request meta data. person is not
found]","response":null,"sessionToken":null}

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Inconsistency reading cache from code and via REST?

2018-05-02 Thread michael
Hi 

revisiting this now that I am actually connecting to a data source and filling
the cache with data:

For simplicity the attached code
cacheTableTest.java
<http://apache-ignite-users.70518.x6.nabble.com/file/t1696/cacheTableTest.java>

only gets an update of a single string key/value pair, when in reality there
will be hundreds of updates. I'm aware that creating the table MyData multiple
times will fail; again, this is just so the code is a bit neater.

My problem is that I am still unclear on how to get data via a REST call (in
particular, whether I need the data in a cache or in a table), so the example
shows populating both.

A simple "get" by key works:
http://127.0.0.1:8080/ignite?cmd=get&cacheName=MyCache&key=key123

{"successStatus":0,"affinityNodeId":"05f313de-6f41-47bb-856a-03a32a0fe467","error":null,"response":"value123","sessionToken":null}

but my guess at a query on the cache:
http://127.0.0.1:8080/ignite?cmd=qryexe&cacheName=MyCache&pageSize=1&type=String&arg1=key123&qry=_key%20%3D%20%3F

fails:
{"successStatus":1,"error":"class org.apache.ignite.IgniteException:
null","response":null,"sessionToken":null}

and I don't understand how to query the table.

thanks
Michael






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


What does "Non heap" mean in the log?

2018-04-25 Thread Michael Jay

Hi, I am confused about "Non heap" here: what does it represent, and where will
it be used? What is the difference between "Non heap" and off-heap? Moreover,
how do I change its size?
Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Inconsistency reading cache from code and via REST?

2018-04-24 Thread michael
OK, that's great; it wasn't too clear from the docs that I needed the
addQueryField.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Inconsistency reading cache from code and via REST?

2018-04-23 Thread michael
Following on from this example of a simple cache with key/value pairs (now
strings! IgniteCache<String, String>), how would I create a SQL query on the
data:

http://127.0.0.1:8080/ignite?cmd=qryexe&cacheName=GridCacheTest&pageSize=1&type=String&qry=_key+%3D+1

gives:
{"successStatus":1,"error":"class org.apache.ignite.IgniteException:
null","sessionToken":null,"response":null}

Does this only make sense if the value is a complex type/structure?


Similarly, how would I do a qryfldexe to see all values?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Inconsistency reading cache from code and via REST?

2018-04-20 Thread michael
Hi Andrei

A simpler workaround for me for now is just to have the keys as strings for
REST.

A few toString() tweaks to the sample code to put/get strings instead of
integers show this now works as expected, both checking the cache in code and
from a browser.

thanks
Michael





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Inconsistency reading cache from code and via REST?

2018-04-19 Thread michael
Hi Andrei


1. Ignite version is 2.3.0
2. xml config - attached. Actually example-ignite.xml and the imported
resource example-default.xml.
N.B. this is the default xml I got from the install, but with
peerClassLoadingEnabled set to true since I was also testing the broadcast
functionality.
3. GridCacheTest.java - attached. As I mentioned, it is pretty much the
included example code with some additions:
- added some compute broadcast to test that functionality and also check
that the nodes can see each other
- cache.size() and cache.get(100) to check that the key/value pair (100, 101)
added via REST from the browser gets added. When I do add an item using
REST, cache.size() reports 21 but cache.get(100) gives null.

thank you

Michael

GridCacheTest.java
<http://apache-ignite-users.70518.x6.nabble.com/file/t1696/GridCacheTest.java>  
example-default.xml
<http://apache-ignite-users.70518.x6.nabble.com/file/t1696/example-default.xml> 
 
example-ignite.xml
<http://apache-ignite-users.70518.x6.nabble.com/file/t1696/example-ignite.xml>  



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Inconsistency reading cache from code and via REST?

2018-04-18 Thread michael
My setup on Windows:
 - all running locally, pointing at the same Ignite xml config
 - 2 x Ignite servers running ignite.bat
 - the provided Java sample code GridCacheTest (running in IntelliJ) to put 20
key/value pairs in a cache

The sample code runs correctly and outputs the key/value pairs it reads back
from the cache (using cache.get(1)).

However, making a REST call from a browser (IE) for key = 1 with:
http://127.0.0.1:8080/ignite?cmd=get&cacheName=GridCacheTest&key=1

gives null:
{"successStatus":0,"affinityNodeId":"44392174-5fc5-446a-ba31-05ab0c56552d","error":null,"sessionToken":null,"response":*null*}

So I tried a bit of troubleshooting:

From the browser - checking I really am looking at / connected to the right place:
http://127.0.0.1:8080/ignite?cmd=size&cacheName=GridCacheTest

gives the size of the cache:
{"successStatus":0,"affinityNodeId":null,"error":null,"sessionToken":null,"response":*20*}

Now I tried adding an item using the REST API from the browser using cmd=put
with key = 100 and value = 101;
the cache size now correctly shows 21:
{"successStatus":0,"affinityNodeId":null,"error":null,"sessionToken":null,"response":21}

and
http://127.0.0.1:8080/ignite?cmd=get&cacheName=GridCacheTest&key=100
correctly gives
{"successStatus":0,"affinityNodeId":"9dbee249-d657-4097-b8be-08627382","error":null,"sessionToken":null,"response":"101"}

Even stranger, back in the Java code, some simple modifications to check the
size show the cache is correctly 21, but cache.get(key) only gives values for
1 to 20; for the newly added 100 it gives null.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Problem of datastorage after compute finished

2018-04-13 Thread Michael Jay
I really appreciate your reply, Andrei.
Now I understand why the result wasn't stored on the same node where it was
calculated. I will try your suggestions.
But there still remains a problem: jobs b, d, f were calculated on Node_B, yet
their results were lost. No partitions store these jobs' results.
Thanks again.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Problem of datastorage after compute finished

2018-04-13 Thread Michael Jay
Hi, all! I am trying to use the compute grid of Ignite to do some compute jobs,
but I ran into a problem.
I started two server nodes (on different hosts) to form a cluster group. Node_A
was started from Eclipse and Node_B by ignite.cmd; the two nodes had the same
configuration except heap size (Node_A heap_size=25G, Node_B heap_size=10G).
I have 6 jobs (a,b,c,d,e,f) in total to compute, and I expected each
computation result to be stored where the job was done (for example, if job a
was executed by Node_A, then the computation result of job a would be stored on
Node_A). In my case, reduce is not necessary; moreover, the computation result
doesn't need to be returned. I just want to split the tasks and store the
results after the computation is finished.
Actually, Node_A executed jobs a,c,e and Node_B executed jobs b,d,f. However,
Node_B didn't store the results of jobs b,d,f; Node_A stored the results of
jobs c,e, and Node_B stored the result of job a.
I couldn't figure out why the remote node (Node_B) didn't execute the task of
storing the computation result, and why Node_A didn't store the results of all
the jobs it dealt with.
Is there something wrong with applying ComputeTaskSplitAdapter in this case?
Below are my code, config.xml and Node_A's log (the code runs on Node_A).
Thanks for any reply!

ComputeTaskSplitExample.java
default-config.xml
ignite-bcd6a658.log



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Getting "BinaryObjectException: Failed to deserialize object" while trying to execute the application using multi node

2018-03-28 Thread Michael Jay
Hi Val. I just ran into the same problem. Would you mind giving a more
detailed solution? Thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Slow Group-By

2018-02-26 Thread Williams, Michael
Not sure how to check the GC log, but here's a minimal complete example using
two Java classes.

Try these two queries:
SELECT COUNT(DISTINCT((idnumber,value))) FROM athing 
SELECT COUNT (*) FROM (SELECT COUNT(*) FROM athing GROUP BY idnumber,value)


Both queries do the same thing; you'll find the latter takes about 10 times
longer than the former, and it gets worse as you increase the number of records.
Maybe something O(x) vs O(x^2) is happening? What do you think? The real query
is way gnarlier, but I think I'm capturing the main thrust of it.

Mike


Startup Class:

package IgniteStartup;

import DataObjects.*;
import org.apache.ignite.*;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Random;

public class FireUp {

    public static <T> boolean CreateCache(Ignite ignite, String cacheName,
        CacheMode cm, Class<T> vclass, List<T> vals)
    {
        CacheConfiguration<Long, T> myCache = new CacheConfiguration<>(cacheName);
        myCache.setSqlSchema("PUBLIC");
        myCache.setCacheMode(cm);
        if (cm == CacheMode.PARTITIONED)
        {
            myCache.setQueryParallelism(Runtime.getRuntime().availableProcessors() * 2);
        }
        myCache.setIndexedTypes(Long.class, vclass);

        try (IgniteCache<Long, T> cache = ignite.getOrCreateCache(myCache)) {
            try (IgniteDataStreamer<Long, T> stmr =
                ignite.dataStreamer(cacheName)) {
                long i = 0;
                for (T val : vals)
                {
                    stmr.addData(i++, val);
                }
                stmr.flush();
            }
        }
        return true;
    }

    public static void main(String[] args)
    {
        List<aThing> lThings = new ArrayList<>();
        Random z = new Random(0);
        Iterator<Integer> iStream = z.ints(0, 10).iterator();

        for (int i = 0; i < 1_000_000; ++i)
        {
            lThings.add(new aThing(i, iStream.next()));
        }
        System.out.println("read in");
        IgniteConfiguration cfg = new IgniteConfiguration();
        Ignite ignite = Ignition.start(cfg);
        CreateCache(ignite, "mapCache", CacheMode.PARTITIONED,
            aThing.class, lThings);
        System.out.println("launched");
    }

}

aThing data object class:

package DataObjects;
import org.apache.ignite.cache.affinity.AffinityKeyMapped;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

import java.io.Serializable;

public class aThing implements Serializable {
@AffinityKeyMapped
@QuerySqlField(index=true)
int IdNumber;
@QuerySqlField
double value;
public aThing(int id,double val)
{
IdNumber=id;
value = val;
}

}


-Original Message-
From: slava.koptilin [mailto:slava.kopti...@gmail.com] 
Sent: Monday, February 26, 2018 12:07 PM
To: user@ignite.apache.org
Subject: RE: Slow Group-By

Hi Mike,

Have you checked GC log? Have you seen long pauses?

Is it possible to share SQL query and corresponding execution plan [1]?
Also, please share cache configurations.

[1] https://apacheignite-sql.readme.io/docs/performance-and-debugging#using-explain-statement

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Slow Group-By

2018-02-26 Thread Williams, Michael
Unfortunately, at this stage in dev, I'm only doing runs on one machine, and
though I am using partitioned data to get query parallelism, it seems I lose
that in the GROUP BY. Does GROUP BY distribute at all?

Might a spark layer on top give a better distribution path? 

Mike

-Original Message-
From: slava.koptilin [mailto:slava.kopti...@gmail.com] 
Sent: Monday, February 26, 2018 11:17 AM
To: user@ignite.apache.org
Subject: RE: Slow Group-By

Hi Mike,

It seems that GROUP BY requires fetching the whole data set into the Java heap
(in order to sort the data), and that may lead to long GC pauses.
I think that data collocation [1] should improve performance when using GROUP
BY.

[1] https://apacheignite.readme.io/docs/affinity-collocation
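
Also, if the grouping column is the affinity key, you can tell the SQL engine
the data is already collocated (a sketch; table and column names are
illustrative):

    SqlFieldsQuery qry = new SqlFieldsQuery(
        "SELECT COUNT(*) FROM athing GROUP BY IdNumber");
    // safe only when rows with equal IdNumber reside on the same node
    qry.setCollocated(true);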

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Slow Group-By

2018-02-26 Thread Williams, Michael
Oh, also, during the extra time only 1-2 cores seem to be involved.

Mike

From: Williams, Michael
Sent: Monday, February 26, 2018 9:40 AM
To: user@ignite.apache.org
Subject: Slow Group-By

All,

Any advice on speeding up GROUP BY? I'm getting great performance before the
group-by clause (on a fairly decent size data set), but adding it slows things
down horribly. Without the group-by clause, the query takes about a minute and
all cores are fully in use. With the group-by clause, the query takes far
longer. The only difference in the EXPLAIN plans appears to be the addition of
the group-by clause. There are a lot of rows to be joined in the clause. Is
there an equivalent to setQueryParallelism for this? Any things to try?

Thanks,
Mike


Slow Group-By

2018-02-26 Thread Williams, Michael
All,

Any advice on speeding up GROUP BY? I'm getting great performance before the
group-by clause (on a fairly decent size data set), but adding it slows things
down horribly. Without the group-by clause, the query takes about a minute and
all cores are fully in use. With the group-by clause, the query takes far
longer. The only difference in the EXPLAIN plans appears to be the addition of
the group-by clause. There are a lot of rows to be joined in the clause. Is
there an equivalent to setQueryParallelism for this? Any things to try?

Thanks,
Mike


Re: How to make full use of network bandwidth?

2018-02-24 Thread Michael Jay
Sorry, I almost forgot about it. I tried the suggestions above, but they didn't
work well. Finally, I changed the writeSynchronizationMode parameter to
"FULL_ASYNC", and it worked.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Affinity Key Annotation

2018-02-16 Thread Williams, Michael
Ah, thanks!

From: Alexey Kukushkin [mailto:kukushkinale...@gmail.com]
Sent: Friday, February 16, 2018 5:40 AM
To: user@ignite.apache.org
Subject: Re: Affinity Key Annotation

Michael, data colocation works like this:

  *   Ignite caches have partition assignments such that primary partitions
with the same ID are assigned to the same physical location (node).
  *   When you put/get data to a cache, Ignite maps the value of the field
annotated with @AffinityKeyMapped to a partition.
  *   Thus, entries having the same value of the @AffinityKeyMapped field go to
the same location.
In your example, if recordNo and policyNo are equal then the entries will go to
the same location.
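
A quick way to verify placement (a sketch using the public Affinity API; cache
names and constructors here are illustrative):

    Affinity<Object> aff1 = ignite.affinity("thingOneCache");
    Affinity<Object> aff3 = ignite.affinity("thingThreeCache");
    // same affinity function + equal affinity key values => same primary node
    assert aff1.mapKeyToNode(new ThingOneKey(42))
        .equals(aff3.mapKeyToNode(new ThingThreeKey(42)));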



RE: Affinity Key Annotation

2018-02-15 Thread Williams, Michael
Imagine, for example, I have four keys:

Public class ThingOneKey
{
@QuerySqlField
@AffinityKeyMapped
int recordNo;
...
}


Public class ThingTwoKey
{
@QuerySqlField
@AffinityKeyMapped
int recordNo;
@QuerySqlField
Date birthday;
...
}

Public class ThingThreeKey
{
@QuerySqlField
@AffinityKeyMapped
int policyNo;
...
}


Public class ThingFourKey
{
@QuerySqlField
@AffinityKeyMapped
int policyNo;
@QuerySqlField
int age;
...
}

I then proceed to create four caches, populated with the corresponding key and
value types.

I want ThingOne and ThingTwo entries whose recordNos are equal to be collocated.

I also want ThingThree and ThingFour, where policyNos are equal, to be 
collocated. 

If ThingOne has a recordNo that equals a policyNo in ThingThree, will they be
collocated based on the above? Is there a way to specify that I care about the
match of Things One and Two, but not, e.g., Things Three and Two?

Thanks,
Mike
-Original Message-
From: vkulichenko [mailto:valentin.kuliche...@gmail.com] 
Sent: Thursday, February 15, 2018 4:36 PM
To: user@ignite.apache.org
Subject: Re: Affinity Key Annotation

Mike,

Not sure I understand a question.. Can you show an example of your data model 
and clarify what kind of collocation strategy you want to achieve?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Affinity Key Annotation

2018-02-15 Thread Williams, Michael
Assume I have 4 integers tagged with @AffinityKeyMapped in different data
structures. How does Ignite determine which correspond to affinity pairs? E.g.,
if I wanted two pairs that have a mutual affinity, how would Ignite know which
to pair, or would all be paired together?

Thanks,
Mike



QuerySqlFunction

2018-02-14 Thread Williams, Michael
What changes do I need to make for zero deployment to work with QuerySqlFunction
definitions? I'm following the example and adding the class as follows, but
even with peer class loading enabled, I get a gnarly error. Can clients marshal
to servers? Any advice?


import org.apache.ignite.cache.query.annotations.QuerySqlFunction;

public class MyFunctions {
@QuerySqlFunction
public static int sqr(int x) {
return x * x;
}
}

...
cfg.setPeerClassLoadingEnabled(true);
cfg.setClientMode(true);
cfg.setDeploymentMode(DeploymentMode.CONTINUOUS);
try(Ignite ignite = Ignition.start(cfg))
...
myCache.setSqlFunctionClasses(MyFunctions.class);
...


Error:
class org.apache.ignite.IgniteCheckedException: Failed to find class with given 
class loader for unmarshalling (make sure same versions of all classes are 
available on all nodes or
enable peer-class-loading) [clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6, 
cls=IgniteStartup.MyFunctions]
at 
org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:126)
at 
org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
at 
org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:143)
at 
org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
at 
org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9795)
at 
org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryCustomEventMessage.message(TcpDiscoveryCustomEventMessage.java:81)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.notifyDiscoveryListener(ServerImpl.java:5460)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processCustomMessage(ServerImpl.java:5282)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2656)

Thanks,
Mike Williams



RE: Dates before epoch - java.sql.Date

2018-02-13 Thread Williams, Michael
Yep, all set. Sorry. Getting better at this, honestly.

-Original Message-
From: vkulichenko [mailto:valentin.kuliche...@gmail.com] 
Sent: Tuesday, February 13, 2018 6:07 PM
To: user@ignite.apache.org
Subject: Re: Dates before epoch - java.sql.Date

Mike,

Looks like you already have a solution, right?

http://apache-ignite-users.70518.x6.nabble.com/RE-Dates-before-epoch-java-sql-Date-tp20034.html

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



MultiMap

2018-02-13 Thread Michael André Pearce
Hi

I just filled out the survey from the website, and it reminded me of one
shortcoming of Apache Ignite that we still hit from time to time and that means
we occasionally have to use Hazelcast instead.

And that is the lack of a multimap.

We saw this JIRA last year, but it seems to have died:

https://issues.apache.org/jira/plugins/servlet/mobile#issue/IGNITE-640

Is there any life in this? It would be great to close this API/functionality
gap.

Cheers
Mike




Sent from my iPhone

Dates before epoch - java.sql.Date

2018-02-13 Thread Williams, Michael
If I need to store dates before the Unix epoch, should I use timestamp, as 
java.sql.Date doesn't store anything before 1970? Do date functions support 
timestamp as well as date?

See: https://docs.oracle.com/javase/7/docs/api/java/sql/Date.html
See also: https://apacheignite-sql.readme.io/docs/data-types#section-date


Mike Williams



RE: Dates before epoch - java.sql.Date

2018-02-13 Thread Williams, Michael
Nevermind, I made an error, works fine.

Mike

From: Williams, Michael
Sent: Tuesday, February 13, 2018 3:24 PM
To: user@ignite.apache.org
Subject: Dates before epoch - java.sql.Date

If I need to store dates before the Unix epoch, should I use timestamp, as 
java.sql.Date doesn't store anything before 1970? Do date functions support 
timestamp as well as date?

See: https://docs.oracle.com/javase/7/docs/api/java/sql/Date.html
See also: https://apacheignite-sql.readme.io/docs/data-types#section-date


Mike Williams



RE: Cat Example

2018-02-09 Thread Williams, Michael
So this seems to be the closest I can get to something that looks like DML – it
does work, and the only odd thing is that table DOG shows up twice in DBeaver,
which I can't quite figure out. Is this the right way to go about getting data
in? Sorry, I'm just getting started with Ignite and my background is more C++
than Java.

Cat.java:


import org.apache.ignite.cache.query.annotations.*;

import java.io.*;

public class Cat implements Serializable  {
int legs;
String name;

Cat(int l, String n)
{
legs = l;
name = n;
}
Cat()
{
legs = 0;
name = "";
}

}

Main Class:


import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.QueryCursor;

import java.util.*;

public class Test {
public static void main(String[] args)
{

Ignite ignite = Ignition.start();
        CacheConfiguration<Integer, Cat> cfg = new CacheConfiguration<>("CAT");
cfg.setCacheMode(CacheMode.REPLICATED);
cfg.setSqlEscapeAll(true);
//cfg.setIndexedTypes(Integer.class,Cat.class);
QueryEntity qe = new QueryEntity();
qe.setKeyType(Integer.class.getName());
qe.setValueType(Cat.class.getName());
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("legs",Integer.class.getName());
fields.put("name",String.class.getName());
qe.setFields(fields);
        Collection<QueryIndex> indexes = new ArrayList<>(1);
indexes.add(new QueryIndex("legs"));
qe.setIndexes(indexes);
qe.setTableName("DOG");
cfg.setQueryEntities(Arrays.asList(qe));
cfg.setSqlSchema("PUBLIC");

try(IgniteCache<Integer, Cat> cache = ignite.getOrCreateCache(cfg))
{
try (IgniteDataStreamer<Integer, Cat> stmr = 
ignite.dataStreamer("CAT")) {
for (int i = 0; i < 1_000_000; i++)
stmr.addData(i, new Cat(i+1,"Fluffy"));
stmr.flush();
}

            SqlFieldsQuery sql = new SqlFieldsQuery("select * from DOG LIMIT 10");
            try (QueryCursor<List<?>> cursor = cache.query(sql)) {
                for (List<?> row : cursor)
                    System.out.println("cat=" + row.get(0));
            }
}
System.out.print("UP!");
}}

From: Amir Akhmedov [mailto:amir.akhme...@gmail.com]
Sent: Friday, February 09, 2018 5:08 PM
To: user@ignite.apache.org
Subject: Re: Cat Example

Hi Mike,
As of today, Ignite DML does not support transactions, and every DML statement
is executed atomically (Igniters, please correct me if I'm wrong).
But you can still use a DataStreamer [1] to improve data loading, or you can use
Ignite's cache API with batch operations like putAll.

[1] https://apacheignite.readme.io/docs/data-streamers
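
A sketch of the JDBC route (note: the SET STREAMING command only exists in
newer 2.x releases, so check your version first; on older versions use the
streaming flag of the JDBC driver instead; java.sql.* imports assumed):

    try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/")) {
        conn.createStatement().execute("SET STREAMING ON");
        try (PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO person (id, name, city_id) VALUES (?, ?, ?)")) {
            for (long i = 0; i < 100_000; i++) {
                ps.setLong(1, i);
                ps.setString(2, "John Doe");
                ps.setLong(3, 3L);
                ps.executeUpdate();
            }
        }
        conn.createStatement().execute("SET STREAMING OFF");
    }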

On Fri, Feb 9, 2018 at 2:56 PM, Williams, Michael 
<michael.willi...@transamerica.com<mailto:michael.willi...@transamerica.com>> 
wrote:
Is it possible to stream data into a table created by a query? For example, 
consider the following modified example. If I had a Person object, how would I 
replace the insert loop to improve speed?

Thanks,
Mike

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class Test {
private static final String DUMMY_CACHE_NAME = "dummy_cache";
public static void main(String[] args)
{
Ignite ignite = Ignition.start();
{

// Create dummy cache to act as an entry point for SQL queries (new 
SQL API which do not require this
// will appear in future versions, JDBC and ODBC drivers do not 
require it already).
CacheConfiguration<?, ?> cacheCfg = new
CacheConfiguration<>(DUMMY_CACHE_NAME).setSqlSchema("PUBLIC");
try (IgniteCache<?, ?> cache = ignite.getOrCreateCache(cacheCfg))
{
// Create reference City table based on 

RE: Cat Example

2018-02-09 Thread Williams, Michael
Is it possible to stream data into a table created by a query? For example, 
consider the following modified example. If I had a Person object, how would I 
replace the insert loop to improve speed?

Thanks,
Mike

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class Test {
private static final String DUMMY_CACHE_NAME = "dummy_cache";
public static void main(String[] args)
{
Ignite ignite = Ignition.start();
{

// Create dummy cache to act as an entry point for SQL queries (new 
SQL API which do not require this
// will appear in future versions, JDBC and ODBC drivers do not 
require it already).
        CacheConfiguration<?, ?> cacheCfg = new
CacheConfiguration<>(DUMMY_CACHE_NAME).setSqlSchema("PUBLIC");
        try (IgniteCache<?, ?> cache = ignite.getOrCreateCache(cacheCfg))
{
// Create reference City table based on REPLICATED template.
cache.query(new SqlFieldsQuery("CREATE TABLE city (id LONG 
PRIMARY KEY, name VARCHAR) WITH \"template=replicated\"")).getAll();
// Create table based on PARTITIONED template with one backup.
cache.query(new SqlFieldsQuery("CREATE TABLE person (id LONG, 
name VARCHAR, city_id LONG, PRIMARY KEY (id, city_id)) WITH 
\"template=replicated\"")).getAll();

// Create an index.
cache.query(new SqlFieldsQuery("CREATE INDEX on Person 
(city_id)")).getAll();

SqlFieldsQuery qry = new SqlFieldsQuery("INSERT INTO city (id, 
name) VALUES (?, ?)");

cache.query(qry.setArgs(1L, "Forest Hill")).getAll();
cache.query(qry.setArgs(2L, "Denver")).getAll();
cache.query(qry.setArgs(3L, "St. Petersburg")).getAll();

qry = new SqlFieldsQuery("INSERT INTO person (id, name, 
city_id) values (?, ?, ?)");
for(long i = 0; i < 100_000;++i)
{
cache.query(qry.setArgs(i, "John Doe", 3L)).getAll();
}
System.out.print("HI!");




From: Denis Magda [mailto:dma...@apache.org]
Sent: Thursday, February 08, 2018 7:33 PM
To: user@ignite.apache.org
Subject: Re: Cat Example

Hi Mike,

If the SQL indexes/configuration are set with the annotations and the
setIndexedTypes method, then you have to use the type name (Cat in your case) as
the SQL table name. It's explained here:
https://apacheignite-sql.readme.io/docs/schema-and-indexes#section-annotation-based-configuration

The cache name is used for IgniteCache APIs and other related methods.
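
So with the code below, the data would be queried as (illustrative; the schema
name defaults to the cache name):

    SELECT name, legs FROM "catCache".Cat;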

—
Denis


On Feb 8, 2018, at 3:48 PM, Williams, Michael 
<michael.willi...@transamerica.com<mailto:michael.willi...@transamerica.com>> 
wrote:

Hi,

Quick question, submitted a ticket earlier. How would I modify the below code 
such that, when viewed through Sql (dbeaver, eg) it behaves as if it had been 
created through a CREATE TABLE statement, where the name of the table was 
catCache? I’m trying to directly populate a series of tables that will be used 
downstream primarily through SQL. I’d like to be able to go into dBeaver, 
browse the tables, and see 10 cats named Fluffy, if this is working correctly.
import org.apache.ignite.cache.query.annotations.*;
import java.io.*;

public class Cat implements Serializable  {
@QuerySqlField
int legs;
@QuerySqlField
String name;

Cat(int l, String n)
{
legs = l;
name = n;
}
}


import org.apache.ignite.Ignition;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.QueryCursor;
import java.util.List;
public class Test {
public static void main(String[] args)
{

Ignite ignite = Ignition.start();
CacheConfiguration<Integer,Cat> cfg= new 
CacheConfiguration("catCache");

cfg.setCacheMode(CacheMode.REPLICAT

Cat Example

2018-02-08 Thread Williams, Michael
Hi,

Quick question, submitted a ticket earlier. How would I modify the below code 
such that, when viewed through Sql (dbeaver, eg) it behaves as if it had been 
created through a CREATE TABLE statement, where the name of the table was 
catCache? I'm trying to directly populate a series of tables that will be used 
downstream primarily through SQL. I'd like to be able to go into dBeaver, 
browse the tables, and see 10 cats named Fluffy, if this is working correctly.
import org.apache.ignite.cache.query.annotations.*;
import java.io.*;

public class Cat implements Serializable  {
@QuerySqlField
int legs;
@QuerySqlField
String name;

Cat(int l, String n)
{
legs = l;
name = n;
}
}


import org.apache.ignite.Ignition;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.QueryCursor;
import java.util.List;
public class Test {
public static void main(String[] args)
{

Ignite ignite = Ignition.start();
        CacheConfiguration<Integer, Cat> cfg = new CacheConfiguration<>("catCache");

cfg.setCacheMode(CacheMode.REPLICATED);
cfg.setSqlEscapeAll(true);
cfg.setSqlSchema("PUBLIC");
cfg.setIndexedTypes(Integer.class,Cat.class);
        try (IgniteCache<Integer, Cat> cache = ignite.getOrCreateCache(cfg))
{
for (int i = 0; i < 10; ++i) {
cache.put(i, new Cat(i + 1,"Fluffy"));
            }/*
            SqlFieldsQuery sql = new SqlFieldsQuery("select * from catCache");
            try (QueryCursor<List<?>> cursor = cache.query(sql)) {
                for (List<?> row : cursor)
                    System.out.println("cat=" + row.get(0));
            }*/
}
System.out.print("Got It!");

}}
Thanks,
Mike Williams



Re: @SpringApplicationContextResource / ApplicationContext / getBeansOfType()

2018-02-08 Thread Michael Cherkasov
Hi Navnet,

Could you please share a reproducer for this issue? Some small mvn based
project on github or as zip archive that will show the issue.

Thanks,
Mike.


2018-02-08 15:00 GMT-08:00 NK :

> Hi,
>
> I have a Spring Boot app using Ignite 2.3.0.
>
> I am invoking Ignite in a class called IgniteStarter using
> "IgniteSpring.start(springAppCtx)" where springAppCtx is my app's Spring
> Application Context.
>
> When I look for beans of a specific type in the main IgniteStarter class, I
> get the expected result. My code:
> Collection jdbcRepositories =
> springAppCtx.getBeansOfType(Repository.class).values();
>
> I have an IgniteService (bootstrapped by Ignite) where I need to use app
> context. When I use the same code as above (getBeansOfType(...)) in the
> IgniteService class, I don't get any beans.
>
> In the Ignite service, I am using ApplicationContext using annotation
> @SpringApplicationContextResource.
>
> I am able to get a correct bean count using
> springAppCtx.getBeanDefinitionCount() (so the context is set correctly),
> but
> getBeansOfType(...) doesn't work.
>
> Any pointers to why getBeansOfType(...) does not return anything on the
> spring app context managed / set by Ignite?
>
> Thanks,
> NK
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: slow query performance against berkley db

2018-02-05 Thread Michael Cherkasov
Rajesh,

> Does that mean Ignite cannot scale well against Berkeley DB in the case of a
> single node?
Could you please clarify what your question means?

Ignite can scale from a single node to hundreds (or even thousands; the largest
cluster I have seen had 300 nodes, but that is definitely not a limit).
It was designed to work as a distributed grid. So I think if you try to compare
one node of Ignite with one node of SomeDB, Ignite will lose.

But you can run 10 Ignite nodes and they will be faster than 10 nodes of
SomeDB; furthermore, you can kill nodes and Ignite will continue to work. What
will happen if the host with Berkeley DB crashes?
In case of a crash, can you transparently switch to another Berkeley DB node
and continue to work?

Ignite is not just a SQL DB; Ignite is a distributed data grid, and it is a
strongly consistent and highly available database.
Please take this into account when comparing it with other solutions.

Thanks,
Mike.



2018-02-05 9:23 GMT-08:00 Rajesh Kishore :

> Hi Christos
>
> Does that mean Ignite cannot scale well against Berkeley DB in the case of a
> single node?
>
> Regards
> Rajesh
>
> On 5 Feb 2018 10:08 p.m., "Christos Erotocritou" 
> wrote:
>
>> Hi Rajesh,
>>
>> Ignite is a distributed system; testing with one node is really not the
>> way.
>>
>> You need to consider having multiple nodes, and partition and collocate your
>> data first.
>>
>> Thanks,
>> C
>>
>> On 5 Feb 2018, at 16:36, Rajesh Kishore  wrote:
>>
>> Hi,
>>
>> We are in the process of evaluating Ignite native persistence against
>> Berkeley DB. For some reason the Ignite query does not seem to perform the
>> way the application code does against Berkeley DB.
>>
>> Background:
>> Berkeley DB - as of now, we have Berkeley DB for our application, and the
>> data is stored as name/value pairs in a byte stream in Berkeley DB's native
>> file system.
>>
>> Ignite - we are using Ignite's native persistence. We created appropriate
>> indexes and retrieve data using SQL involving multiple joins.
>>
>> Ignite configuration : with native persistence enabled , only one node
>>
>> Data: As of now in the main table we have only *.1 M records *and in
>> supporting tables we have around 2 million records
>>
>> Ignite sql query used
>>
>> SELECT f.entryID,f.attrName,f.attrValue, f.attrsType FROM
>> ( select st.entryID,st.attrName,st.attrValue, st.attrsType from
>> (SELECT at1.entryID FROM "objectclass".Ignite_ObjectClass
>> at1 WHERE  at1.attrValue= ? ) t
>> INNER JOIN
>> "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE st ON
>> st.entryID = t.entryID  WHERE st.attrKind IN ('u','o')
>> ) f
>>INNER JOIN  (SELECT entryID from "dn".Ignite_DN where parentDN like ?
>> ) dnt ON f.entryID = dnt.entry
>>
>> The corresponding EXPLAIN PLAN
>>
>>
>>
>> [[SELECT
>> F__Z3.ENTRYID AS __C0_0,
>> F__Z3.ATTRNAME AS __C0_1,
>> F__Z3.ATTRVALUE AS __C0_2,
>> F__Z3.ATTRSTYPE AS __C0_3
>> FROM (
>> SELECT
>> ST__Z2.ENTRYID,
>> ST__Z2.ATTRNAME,
>> ST__Z2.ATTRVALUE,
>> ST__Z2.ATTRSTYPE
>> FROM (
>> SELECT
>> AT1__Z0.ENTRYID
>> FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z0
>> WHERE AT1__Z0.ATTRVALUE = ?1
>> ) T__Z1
>> INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST__Z2
>> ON 1=1
>> WHERE (ST__Z2.ATTRKIND IN('u', 'o'))
>> AND (ST__Z2.ENTRYID = T__Z1.ENTRYID)
>> ) F__Z3
>> /* SELECT
>> ST__Z2.ENTRYID,
>> ST__Z2.ATTRNAME,
>> ST__Z2.ATTRVALUE,
>> ST__Z2.ATTRSTYPE
>> FROM (
>> SELECT
>> AT1__Z0.ENTRYID
>> FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z0
>> WHERE AT1__Z0.ATTRVALUE = ?1
>> ) T__Z1
>> /++ SELECT
>> AT1__Z0.ENTRYID
>> FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z0
>> /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE =
>> ?1 ++/
>> WHERE AT1__Z0.ATTRVALUE = ?1
>>  ++/
>> INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST__Z2
>> /++ "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE_ENTRYID_IDX:
>> ENTRYID = T__Z1.ENTRYID ++/
>> ON 1=1
>> WHERE (ST__Z2.ATTRKIND IN('u', 'o'))
>> AND (ST__Z2.ENTRYID = T__Z1.ENTRYID)
>>  */
>> INNER JOIN (
>> SELECT
>> __Z4.ENTRYID
>> FROM "dn".IGNITE_DN __Z4
>> WHERE __Z4.PARENTDN LIKE ?2
>> ) DNT__Z5
>> /* SELECT
>> __Z4.ENTRYID
>> FROM "dn".IGNITE_DN __Z4
>> /++ "dn".EP_DN_IDX: ENTRYID IS ?3 ++/
>> WHERE (__Z4.ENTRYID IS ?3)
>> AND (__Z4.PARENTDN LIKE ?2): ENTRYID = F__Z3.ENTRYID
>>  */
>> ON 1=1
>> WHERE F__Z3.ENTRYID = DNT__Z5.ENTRYID
>> ORDER BY 1], [SELECT
>> __C0_0 AS ENTRYID,
>> __C0_1 AS ATTRNAME,
>> __C0_2 AS ATTRVALUE,
>> __C0_3 AS ATTRSTYPE
>> FROM PUBLIC.__T0
>> /* "Ignite_DSAttributeStore"."merge_sorted" 

Re: Peer to peer class loading + deployment modes

2018-02-05 Thread Michael Cherkasov
Hi Luqman,

you're right, peer class loading is exactly what you need; ISOLATED or
SHARED deployment mode fits your case and will automatically redeploy new
compute classes.
Unfortunately, there's no way to enable peer class loading without a
cluster restart,
so you need to plan maintenance for your cluster: shut it down, enable
peer class loading, and bring the cluster back up.

Thanks,
Mike.
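
For reference, a minimal configuration sketch of what needs to be enabled
before that restart (a plain programmatic config; the mode shown is the
default discussed above, not taken from Luqman's setup):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DeploymentMode;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PeerClassLoadingStartup {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Must have the same value on all nodes; changing it requires a restart.
        cfg.setPeerClassLoadingEnabled(true);

        // SHARED (the default) or ISOLATED both redeploy compute classes
        // when the originating client restarts with a new version.
        cfg.setDeploymentMode(DeploymentMode.SHARED);

        Ignition.start(cfg);
    }
}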

2018-02-05 3:00 GMT-08:00 luqmanahmad :

> Hi there,
>
> We have several hundred ignite server nodes and few clients which are
> loading classes to ignite on demand.
> Now from time to time we need to redeploy the client with some new changes,
> which could include changes in the classes which are already loaded to
> ignite worker nodes through ignite compute calls, but there are no changes
> in the ignite server nodes.
>
> Now according to [1] the classes will be loaded only once on the worker
> nodes. My question is can we update the classes loaded through peer class
> loading without restarting ignite server nodes. I was looking at [2] but
> couldn't find what I was looking for or may be I am not looking at the
> right
> thing.
>
> [1]  Zero Deployment 
> [2]  Deployment Modes  io/docs/deployment-modes>
>
> Any help over here would be appreciated.
>
> Many thanks,
> Luqman
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Failed to activate cluster - table already exists

2018-01-26 Thread Michael Cherkasov
Hi Thomas,

Next time, please share a reproducer along with the work folder with us.
It's something that should be checked.

Did you change your QueryEntities between cluster restarts?

Thanks,
Mike.
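
For reference, a minimal sketch of what a QueryEntity definition looks like
(table and field names are illustrative); with native persistence, the
definition generally has to stay consistent with what was already written
to disk across restarts:

import java.util.Collections;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

public class QueryEntityConfig {
    public static CacheConfiguration<Long, Object> cacheConfig() {
        QueryEntity entity = new QueryEntity(Long.class.getName(), "Person");

        // SQL-visible fields of the value type.
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("name", String.class.getName());
        fields.put("age", Integer.class.getName());
        entity.setFields(fields);

        CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("person");
        ccfg.setQueryEntities(Collections.singletonList(entity));
        return ccfg;
    }
}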

2018-01-26 5:25 GMT-08:00 Thomas Isaksen :

> Hi Mikhail,
>
> I don't know what happened but I deleted some folders under
> $IGNITE_HOME/work/db with the same name and the problem cleared.
> I think maybe I killed Ignite before it could finish writing or something
> to that effect, which could have caused the problem.
>
> ./t
>
> -Original Message-
> From: Mikhail [mailto:michael.cherka...@gmail.com]
> Sent: fredag 26. januar 2018 01.51
> To: user@ignite.apache.org
> Subject: Re: Failed to activate cluster - table already exists
>
> Hi Thomas,
>
> Looks like you can reproduce the issue with a unit test.
>
> Could you please share it with us?
>
> Thanks,
> Mike.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Cannot insert data into table using JDBC

2018-01-18 Thread Michael Jay
Thanks a lot.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to make full use of network bandwidth?

2018-01-08 Thread Michael Jay
Thank you, Alexey. I'll try your advice and let you know the result later.
Thanks again.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


How to make full use of network bandwidth?

2018-01-04 Thread Michael Jay
Hi,
I have two physical hosts, and each host runs a server node: node A and
node B. I created a table and inserted data into it via the JDBC thin
driver, but insert performance is poor. With 100 Mbps network bandwidth,
the throughput from node A to node B is only about 2 Mbps, and 1 million
rows take 800 s to finish. When I changed only the network bandwidth to
1 Gbps, the throughput from node A to node B went up to 10 Mbps, and
1 million rows took 250 s. Moreover, the cacheMode is "partitioned".
How can this case be explained? Is there some limitation or network
configuration I missed? How can we use all the network bandwidth?
Thanks!!
  (Attachment: default-config.xml)



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Memory leak in GridCachePartitionExchangeManager?

2017-12-26 Thread Michael Cherkasov
Hi,

I created a ticket for the issue you found:
https://issues.apache.org/jira/browse/IGNITE-7319

Thanks,
Mike.

2017-12-18 17:05 GMT+03:00 zbyszek :

>
> Dear all,
>
> I was wondering if this is a know issue which has a chance to be fixed in
> future or (I hope) it is me who missed something obvious in working with
> Ignite caches.
> I have a simple single node test app (built to investigate a memory leak
> observed in our PROD deployment), that creates c.a. 20 LOCAL caches per
> sec.
> with the config below:
>
> private IgniteCache createLocalCache(String name) {
> CacheConfiguration cCfg = new
> CacheConfiguration<>();
> cCfg.setName(name);
> cCfg.setGroupName("localCaches"); // without group leak is much
> bigger!
> cCfg.setStoreKeepBinary(true);
> cCfg.setCacheMode(CacheMode.LOCAL);
> cCfg.setOnheapCacheEnabled(false);
> cCfg.setCopyOnRead(false);
> cCfg.setBackups(0);
> cCfg.setWriteBehindEnabled(false);
> cCfg.setReadThrough(false);
> cCfg.setReadFromBackup(false);
> cCfg.setQueryEntities();
> return ignite.createCache(cCfg).withKeepBinary();
> }
>
> The caches are placed in the queue and are picked up by the worker thread
> which just destorys them after removing from the queue.
> This setup seems to generate a memory leak of about 1GB per day.
> When looking at heapdump, I see all space is occupied by instances of
> java.util.concurrent.ConcurrentSkipListMap$Node:
>
> Objects by class:
>
> java.util.concurrent.ConcurrentSkipListMap$Node
>   objects: 4,987,415 (13 %) | shallow: 119,697,960 (10 %) | retained: ~1,204,893,605 (100 %)
> org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor$CancelableTask
>   objects: 4,985,687 (13 %) | shallow: 239,312,976 (20 %) | retained: ~917,361,000 (76 %)
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager$BackupCleaner
>   objects: 4,985,680 (13 %) | shallow: 119,656,320 (10 %) | retained: ~558,390,752 (46 %)
> org.jsr166.ConcurrentHashMap8
>   objects: 4,990,926 (13 %) | shallow: 199,637,040 (17 %) | retained: ~439,459,352 (36 %)
> org.jsr166.LongAdder8
>   objects: 4,992,416 (13 %) | shallow: 159,757,312 (13 %) | retained: ~159,757,312 (13 %)
> org.apache.ignite.lang.IgniteUuid
>   objects: 4,989,306 (13 %) | shallow: 119,743,344 (10 %) | retained: ~119,745,456 (10 %)
> java.util.concurrent.ConcurrentSkipListMap$Index
>   objects: 2,488,987 (7 %) | shallow: 59,735,688 (5 %) | retained: ~119,502,384 (10 %)
> java.util.concurrent.ConcurrentSkipListMap$HeadIndex
>   objects: 49 (0 %) | shallow: 1,568 (0 %) | retained: ~106,991,832 (9 %)
> org.jsr166.ConcurrentHashMap8$ValuesView
>   objects: 4,985,368 (13 %) | shallow: 79,765,888 (7 %) | retained: ~79,765,888 (7 %)
> java.util.HashMap$Node
>   objects: 44,335 (0 %) | shallow: 1,418,720 (0 %) | retained: ~79,618,104 (7 %)
> java.util.HashMap$Node[]
>   objects: 13,093 (0 %) | shallow: 1,098,856 (0 %) | retained: ~68,150,520 (6 %)
> java.util.HashMap
>   objects: 13,550 (0 %) | shallow: 650,400 (0 %) | retained: ~67,636,112 (6 %)
> java.util.concurrent.ConcurrentSkipListMap
>   objects: 10 (0 %) | shallow: 480 (0 %) | retained: ~59,830,768 (5 %)
>
> Merged paths to java.util.concurrent.ConcurrentSkipListMap$Node instances
> (first 5 levels) reports no obvious dominator
> (at least no dominator from my test namespace):
>
>
> Merged paths (table garbled in the archive; the readable figures show all
> 4,987,415 nodes retaining ~1,037,112,344 bytes in total, with the largest
> single path covering 1,245,699 objects (25 %) that retain
> ~1,037,015,992 bytes (99 %))

Re: Cannot insert data into table using JDBC

2017-12-19 Thread Michael Jay
Hi, has it been solved or is it a bug? I just ran into the same problem.
When I set "streaming=false" it worked, and data could be inserted via the
JdbcClientDriver. However, when streaming=true, I got a message saying
"schema not found". Thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


insert so slow via JDBC thin driver

2017-12-14 Thread Michael Jay
Hello, I am a new Ignite learner. I want to insert 50,000,000 rows into a
table, and I ran into a problem.
With one host and one server node, the insert speed is about 2,000,000 rows
per minute and CPU usage is 30-40%; however, with two hosts and two server
nodes it drops to about 100,000 per minute and CPU usage is only 5%. It's
so slow. What can I do to improve the performance? Thanks.
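
(For bulk loads like this, IgniteDataStreamer is usually much faster than
row-by-row JDBC inserts. A minimal sketch, assuming the data goes into a
cache named "myTable" with Long keys and String values; adapt the names and
types to the real table:)

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class BulkLoad {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("default-config.xml")) {
            // The streamer batches entries per node instead of sending row by row.
            try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myTable")) {
                streamer.perNodeBufferSize(1024);
                for (long i = 0; i < 1_000_000; i++)
                    streamer.addData(i, "row-" + i);
            } // close() flushes any remaining buffered entries
        }
    }
}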

my default-config.xml:

(the XML tags were stripped by the archive; the surviving text shows a TCP
discovery IP finder listing 10x.x.x.226:47500..47509 and
10x.x.x.75:47500..47509)
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: using JDBC with Ignite cluster, configured with persistent storage

2017-12-05 Thread Michael Cherkasov
Hi,

I cannot tell from your email what went wrong; could you please
provide more details about your case?
Maybe you can send me a demo app that reproduces the exception you see?

Thanks,
Mike.
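
As a general note: with native persistence enabled, a cluster starts
inactive and must be activated before DDL or cache operations succeed,
which matches what the report below describes. A minimal activation sketch
(the config file name is a placeholder):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ActivateCluster {
    public static void main(String[] args) {
        // Connect with any node (client or server) and activate the cluster once.
        Ignite ignite = Ignition.start("client-config.xml");
        ignite.active(true); // no-op if the cluster is already active
    }
}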


2017-12-05 1:45 GMT+03:00 soroka21 :

> Hi,
> I'm trying to use JDBC connection to run SQL on cluster, configured with
> Persistent storage.
> Persistent storage means what I have to make cluster active before I can do
> any DDL (or even request list of tables?)
> below is the output of sqlline, I'm trying to use, :
>
>
> 0: jdbc:ignite:thin://10.238.42.86/> !tables
> Error: Failed to handle JDBC request because node is stopping.
> (state=5,code=0)
> java.sql.SQLException: Failed to handle JDBC request because node is
> stopping.
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(
> JdbcThinConnection.java:671)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinDatabaseMetadata.getTables(
> JdbcThinDatabaseMetadata.java:740)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at sqlline.Reflector.invoke(Reflector.java:75)
> at sqlline.Commands.metadata(Commands.java:194)
> at sqlline.Commands.tables(Commands.java:332)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
> at sqlline.SqlLine.dispatch(SqlLine.java:791)
> at sqlline.SqlLine.begin(SqlLine.java:668)
> at sqlline.SqlLine.start(SqlLine.java:373)
> at sqlline.SqlLine.main(SqlLine.java:265)
>
>
> 0: jdbc:ignite:thin://10.238.42.86/> CREATE TABLE table4(_id varchar,F00
> varchar,F01 bigint,F02 double,F03 timestamp,F04 varchar,F05 bigint,F06
> double,F07 timestamp,F08 varchar,F09 bigint, PRIMARY KEY(_id)) WITH
> "cache_name=table4, value_type=table4";
> Error: Failed to handle JDBC request because node is stopping.
> (state=5,code=0)
> java.sql.SQLException: Failed to handle JDBC request because node is
> stopping.
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(
> JdbcThinConnection.java:671)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.
> execute0(JdbcThinStatement.java:130)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.
> execute(JdbcThinStatement.java:299)
> at sqlline.Commands.execute(Commands.java:823)
> at sqlline.Commands.sql(Commands.java:733)
> at sqlline.SqlLine.dispatch(SqlLine.java:795)
> at sqlline.SqlLine.begin(SqlLine.java:668)
> at sqlline.SqlLine.start(SqlLine.java:373)
> at sqlline.SqlLine.main(SqlLine.java:265)
>
> Cluster configuration looks like this:
>
> (the Spring XML was stripped by the archive; the surviving fragments show
> an IgniteConfiguration with a TcpDiscoverySpi / TcpDiscoveryVmIpFinder
> listing xxx.xxx.xxx.xxx:47500..47509 and yyy.yyy.yyy.yyy:47500..47509,
> plus a DataStorageConfiguration whose DataRegionConfiguration has
> persistenceEnabled set to true)
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite poor performance

2017-12-05 Thread Michael Cherkasov
Hi Sasha,

Did you have time to try my advice? Did it help you?

Thanks,
Mike.

2017-12-04 17:00 GMT+03:00 Sasha Haykin :

> Hi,
>
> I Working on big POC for telecom industry solution
>
>
> I have loaded to ignite 5,000,000 objects/rows (I loaded it in the client
> side Please find the code below)
>
>
>
> In the metric the used of heap looks too small (for 5M rows) is it normal
> utilization?
>
>
>
> [04-12-2017 13:10:27][INFO ][grid-timeout-worker-#23][IgniteKernal]
>
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>
> ^-- Node [id=c4f42f84, uptime=00:37:00.639]
>
> ^-- H/N/C [hosts=2, nodes=2, CPUs=12]
>
> ^-- CPU [cur=0.13%, avg=4.76%, GC=0%]
>
> ^-- PageMemory [pages=818037]
>
> ^-- Heap [used=1655MB, free=59.53%, comm=4090MB]
>
> ^-- Non heap [used=63MB, free=95.82%, comm=65MB]
>
> ^-- Public thread pool [active=0, idle=1, qSize=0]
>
> ^-- System thread pool [active=0, idle=8, qSize=0]
>
> ^-- Outbound messages queue [size=0]
>
> [04-12-2017 13:10:27][INFO ][grid-timeout-worker-#23][IgniteKernal]
> FreeList [name=null, buckets=256, dataPages=600522, reusePages=0]
>
>
>
> When I’m querying the data VIA JDBC the performance is very bad (I’m
> comparing it to VERTICA DB)
>
>
>
> What I’m doing wrong?
>
>
>
> The success factors for the POC is to  query from  100M rows of data (with
> filters on dates+ filter on one element ) and get results in 3 seconds is
> it possible to achieve it with Ignite?
>
>
>
> Here are the example of the data + query:
> *SELECT* *count*(*) *FROM* HPA *WHERE* DEST_NE_NAME='destDest17' ;
>
> COUNT(*) |
>
> -|
>
> 48382|
>
> *Query execution **time is more than 7 seconds*
>
>
>
> *Data sample:*
>
>
>
>
>
> ID  |APPLICATION_ID_NAME  |AVG_FAILURE_DURATION |AVG_SUCCESS_DURATION
> |DEST_NE_NAME |DESTINATION_HOST  |DESTINATION_REALM_NAME
> |NUM_OF_RETRANSMISSION_FRAMES |NUMBER_OF_REQUESTS |NUMBER_OF_RESPONSES
> |ORIGIN_HOST  |ORIGIN_REALM_NAME  |PROCEDURE_DURATION
> |PROCEDURE_DURATION_COUNTER |PROCEDURE_DURATION_MAX |PROCEDURE_SUBTYPE
> |PROCEDURE_TYPE |RELEASE_CAUSE |RELEASE_TYPE   |SOURCE_NE_NAME
> |START_TIME  |TIME_STAMP  |TRANSPORT_LAYER_PROTOCOL_NAME
> |VLAN_ID |
>
> |-|-|---
> --|-|--|
> |-|---|-
> ---|-|---|--
> -|---|--
> -|---|---|--|---
> |---||--
> --|---||
>
> 0   |APPLICATION_ID_NAME9 |1|6
> |destDest39   |DESTINATION_HOST5 |DESTINATION_REALM_NAME4
> |1|1  |9
> |ORIGIN_HOST3 |ORIGIN_REALM_NAME6 |6  |4
>  |8  |PROCEDURE_SUBTYPE2
> |17 |43|RELEASE_TYPE7  |SourceDest71   |2017-12-04
> 14:37:56 |2017-12-04 14:37:56 |TRANSPORT_LAYER_PROTOCOL_NAME1 |48  |
>
>
>
> 1   |APPLICATION_ID_NAME7 |7|1
> |destDest56   |DESTINATION_HOST5 |DESTINATION_REALM_NAME8
> |8|6  |4
> |ORIGIN_HOST1 |ORIGIN_REALM_NAME4 |2
> |29 |3  |PROCEDURE_SUBTYPE2
> |15 |20|RELEASE_TYPE77 |SourceDest33   |2017-12-04
> 14:37:57 |2017-12-04 14:37:57 |TRANSPORT_LAYER_PROTOCOL_NAME7 |47  |
>
>
>
> There are indexes:
>
> ID
>
> START_TIME
>
> START_TIME + SOURCE_NE_NAME + DEST_NE_NAME  (*CREATE* *INDEX*
> DINAMICHPA_TIME_DEST_SOURCE *ON* HPA (START_TIME,SOURCE_NE_NAME,
> DEST_NE_NAME))
>
>
>
>
> [04-12-2017 13:09:44][INFO 
> ][grid-nio-worker-tcp-comm-2-#27][TcpCommunicationSpi]
> Accepted incoming communication connection [locAddr=/10.0.5.236:47101,
> rmtAddr=/172.16.10.232:53754]
>
> [04-Dec-2017 13:09:52][WARN ][query-#290][IgniteH2Indexing] Query
> execution is too long [time=7152 ms, sql='SELECT
>
> COUNT(*) __C0_0
>
> FROM PUBLIC.HPA __Z0
>
> WHERE __Z0.DEST_NE_NAME = ?1', plan=
>
> SELECT
>
> COUNT(*) AS __C0_0
>
> FROM PUBLIC.HPA __Z0
>
> /* PUBLIC.DINAMICHPA_TIME_DEST_SOURCE: DEST_NE_NAME = ?1 */
>
> WHERE __Z0.DEST_NE_NAME = ?1
>
> , parameters=[destDest17]]
>
> [04-12-2017 13:10:27][INFO ][grid-timeout-worker-#23][IgniteKernal]
>
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>
> ^-- Node [id=c4f42f84, uptime=00:37:00.639]
>
> ^-- H/N/C [hosts=2, nodes=2, CPUs=12]
>
> ^-- CPU [cur=0.13%, avg=4.76%, GC=0%]
>
> ^-- PageMemory [pages=818037]
>
> ^-- Heap [used=1655MB, free=59.53%, comm=4090MB]
>
> ^-- Non heap [used=63MB, free=95.82%, comm=65MB]
>
> ^-- Public thread pool [active=0, idle=1, qSize=0]
>
> ^-- System thread pool [active=0, idle=8, qSize=0]
>
> ^-- Outbound 

Re: Ignite .NET - Delete statement

2017-12-04 Thread Michael Cherkasov
Hi Jérôme,

Could you please send me a full reproducer? And could you please clarify
which version of Ignite is used?

Thanks,
Mike.

2017-12-04 12:38 GMT+03:00 jerpic :

> Hello, I try to delete some records according to the DELETE statement (see
> below). Unfortunately, I got every time an exception (see the attached
> file). Exception.txt
> 
> Do you have an idea about the origin of this issue ? Here is the code :
>
>  public void deleteOneWayProposals(string _route, DateTime _searchDate, 
> string _provider)
> {
> try
> {
> var query = buildSqlDeleteOneWayProposals(_route, 
> _searchDate, _provider);
>
> Console.WriteLine("QUERY : " + query);
>
> var res = cacheOneWayProposals.QueryFields(query);
>
>
> } catch (Exception exp)
> {
> Console.WriteLine(exp.Message);
> }
> }
>
>  private SqlFieldsQuery buildSqlDeleteOneWayProposals(string _route, DateTime 
> _searchDate, string _provider)
> {
>
> string delete = "DELETE " +
> "FROM \"" + ONEWAY_PROPOSALS_CACHE + "\".ONEWAYPROPOSAL as 
> ONEWAY " +
> "WHERE " +
> "ONEWAY.ROUTE = ? " +
> "AND " +
> "ONEWAY.SEARCHDEPARTUREDATE = PARSEDATETIME 
> (?,'-MM-dd') " +
> "AND " +
> "ONEWAY.PROVIDER = ? ";
>
>
> var fieldsQuery = new SqlFieldsQuery(delete);
> fieldsQuery.EnableDistributedJoins = true;
>
> fieldsQuery.Arguments = new Object[] { _route, 
> _searchDate.ToString("-MM-dd"), _provider};
>
> if (log.IsDebugEnabled)
> {
> log.Debug("QUERY = " + fieldsQuery.ToString());
> }
>
> return fieldsQuery;
> }
>
> Regards, Jérôme.
> --
> Sent from the Apache Ignite Users mailing list archive
>  at Nabble.com.
>


Re: Ignite poor performance

2017-12-04 Thread Michael Cherkasov
Also, for indexes on String fields you can increase the inline size:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/QueryIndex.html#setInlineSize(int)
The default is 10; you can increase it to 32 or 64. It depends on your
data, so experiment with different values and choose the best one for
your case.

Thanks,
Mike.
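
A minimal sketch of setting the inline size on an index (names are taken
from the HPA table in this thread; the value 64 is just a starting point to
experiment with, not a recommendation):

import java.util.Collections;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

public class InlineSizeConfig {
    public static CacheConfiguration<Long, Object> cacheConfig() {
        QueryEntity entity = new QueryEntity(Long.class.getName(), "HPA");
        entity.addQueryField("DEST_NE_NAME", String.class.getName(), null);

        // Inline more bytes of the string value into the index pages, so
        // lookups avoid extra data-page reads.
        QueryIndex idx = new QueryIndex("DEST_NE_NAME");
        idx.setInlineSize(64);
        entity.setIndexes(Collections.singletonList(idx));

        CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("HPA");
        ccfg.setQueryEntities(Collections.singletonList(entity));
        return ccfg;
    }
}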

2017-12-04 18:02 GMT+03:00 Michael Cherkasov <michael.cherka...@gmail.com>:

> Hi Sasha,
>
> it can definitely work faster:
> 1. add an index for DEST_NE_NAME:
> https://apacheignite.readme.io/docs/cache-queries#section-
> query-configuration-by-annotations
>
> 2. enable query parallelism:
> https://apacheignite-sql.readme.io/docs/performance-and-debugging#query-
> parallelism
>
> Thanks,
> Mike.
>
>
> 2017-12-04 17:00 GMT+03:00 Sasha Haykin <sas...@radcom.com>:
>
>> Hi,
>>
>> I Working on big POC for telecom industry solution
>>
>>
>> I have loaded to ignite 5,000,000 objects/rows (I loaded it in the client
>> side Please find the code below)
>>
>>
>>
>> In the metric the used of heap looks too small (for 5M rows) is it normal
>> utilization?
>>
>>
>>
>> [04-12-2017 13:10:27][INFO ][grid-timeout-worker-#23][IgniteKernal]
>>
>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>
>> ^-- Node [id=c4f42f84, uptime=00:37:00.639]
>>
>> ^-- H/N/C [hosts=2, nodes=2, CPUs=12]
>>
>> ^-- CPU [cur=0.13%, avg=4.76%, GC=0%]
>>
>> ^-- PageMemory [pages=818037]
>>
>> ^-- Heap [used=1655MB, free=59.53%, comm=4090MB]
>>
>> ^-- Non heap [used=63MB, free=95.82%, comm=65MB]
>>
>> ^-- Public thread pool [active=0, idle=1, qSize=0]
>>
>> ^-- System thread pool [active=0, idle=8, qSize=0]
>>
>> ^-- Outbound messages queue [size=0]
>>
>> [04-12-2017 13:10:27][INFO ][grid-timeout-worker-#23][IgniteKernal]
>> FreeList [name=null, buckets=256, dataPages=600522, reusePages=0]
>>
>>
>>
>> When I’m querying the data VIA JDBC the performance is very bad (I’m
>> comparing it to VERTICA DB)
>>
>>
>>
>> What I’m doing wrong?
>>
>>
>>
>> The success factors for the POC is to  query from  100M rows of data
>> (with filters on dates+ filter on one element ) and get results in 3
>> seconds is it possible to achieve it with Ignite?
>>
>>
>>
>> Here are the example of the data + query:
>> *SELECT* *count*(*) *FROM* HPA *WHERE* DEST_NE_NAME='destDest17' ;
>>
>> COUNT(*) |
>>
>> -|
>>
>> 48382|
>>
>> *Query execution **time is more than 7 seconds*
>>
>>
>>
>> *Data sample:*
>>
>>
>>
>>
>>
>> ID  |APPLICATION_ID_NAME  |AVG_FAILURE_DURATION |AVG_SUCCESS_DURATION
>> |DEST_NE_NAME |DESTINATION_HOST  |DESTINATION_REALM_NAME
>> |NUM_OF_RETRANSMISSION_FRAMES |NUMBER_OF_REQUESTS |NUMBER_OF_RESPONSES
>> |ORIGIN_HOST  |ORIGIN_REALM_NAME  |PROCEDURE_DURATION
>> |PROCEDURE_DURATION_COUNTER |PROCEDURE_DURATION_MAX |PROCEDURE_SUBTYPE
>> |PROCEDURE_TYPE |RELEASE_CAUSE |RELEASE_TYPE   |SOURCE_NE_NAME
>> |START_TIME  |TIME_STAMP  |TRANSPORT_LAYER_PROTOCOL_NAME
>> |VLAN_ID |
>>
>> |-|-|---
>> --|-|--|
>> |-|---|-
>> ---|-|---|--
>> -|---|--
>> -|---|---|--|---
>> |---||--
>> --|---||
>>
>> 0   |APPLICATION_ID_NAME9 |1|6
>> |destDest39   |DESTINATION_HOST5 |DESTINATION_REALM_NAME4
>> |1|1  |9
>> |ORIGIN_HOST3 |ORIGIN_REALM_NAME6 |6  |4
>>  |8  |PROCEDURE_SUBTYPE2
>> |17 |43|RELEASE_TYPE7  |SourceDest71   |2017-12-04
>> 14:37:56 |2017-12-04 14:37:56 |TRANSPORT_LAYER_PROTOCOL_NAME1 |48  |
>>
>>
>>
>> 1   |APPLICATION_ID_NAME7 |7|1
>> |destDest56   |DESTINATION_HOST5 |DESTINATION_REALM_NAME8
>> |8|6  |4
>> |ORIGIN_HOST1 |ORIGIN_REALM_NAME4 |2
>> |29 |3  |PROCEDURE_SUBTYPE2
>> |15 |20

Re: Ignite poor performance

2017-12-04 Thread Michael Cherkasov
Hi Sasha,

it can definitely work faster:
1. add an index for DEST_NE_NAME:
https://apacheignite.readme.io/docs/cache-queries#section-query-configuration-by-annotations

2. enable query parallelism:
https://apacheignite-sql.readme.io/docs/performance-and-debugging#query-parallelism

Thanks,
Mike.
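
A minimal sketch combining both suggestions (the class models one row of
the HPA table; field and cache names are illustrative, and 4 is only an
example parallelism value to tune):

import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class HpaRecord {
    // 1. Index the column used in the WHERE clause.
    @QuerySqlField(index = true)
    private String destNeName;

    // 2. Split local query execution across several threads.
    public static CacheConfiguration<Long, HpaRecord> cacheConfig() {
        CacheConfiguration<Long, HpaRecord> ccfg = new CacheConfiguration<>("HPA");
        ccfg.setIndexedTypes(Long.class, HpaRecord.class);
        ccfg.setQueryParallelism(4);
        return ccfg;
    }
}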


2017-12-04 17:00 GMT+03:00 Sasha Haykin :

> Hi,
>
> I Working on big POC for telecom industry solution
>
>
> I have loaded to ignite 5,000,000 objects/rows (I loaded it in the client
> side Please find the code below)
>
>
>
> In the metric the used of heap looks too small (for 5M rows) is it normal
> utilization?
>
>
>
> [04-12-2017 13:10:27][INFO ][grid-timeout-worker-#23][IgniteKernal]
>
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>
> ^-- Node [id=c4f42f84, uptime=00:37:00.639]
>
> ^-- H/N/C [hosts=2, nodes=2, CPUs=12]
>
> ^-- CPU [cur=0.13%, avg=4.76%, GC=0%]
>
> ^-- PageMemory [pages=818037]
>
> ^-- Heap [used=1655MB, free=59.53%, comm=4090MB]
>
> ^-- Non heap [used=63MB, free=95.82%, comm=65MB]
>
> ^-- Public thread pool [active=0, idle=1, qSize=0]
>
> ^-- System thread pool [active=0, idle=8, qSize=0]
>
> ^-- Outbound messages queue [size=0]
>
> [04-12-2017 13:10:27][INFO ][grid-timeout-worker-#23][IgniteKernal]
> FreeList [name=null, buckets=256, dataPages=600522, reusePages=0]
>
>
>
> When I’m querying the data VIA JDBC the performance is very bad (I’m
> comparing it to VERTICA DB)
>
>
>
> What I’m doing wrong?
>
>
>
> The success factors for the POC is to  query from  100M rows of data (with
> filters on dates+ filter on one element ) and get results in 3 seconds is
> it possible to achieve it with Ignite?
>
>
>
> Here are the example of the data + query:
> *SELECT* *count*(*) *FROM* HPA *WHERE* DEST_NE_NAME='destDest17' ;
>
> COUNT(*) |
>
> -|
>
> 48382|
>
> *Query execution **time is more than 7 seconds*
>
>
>
> *Data sample:*
>
>
>
>
>
> ID  |APPLICATION_ID_NAME  |AVG_FAILURE_DURATION |AVG_SUCCESS_DURATION
> |DEST_NE_NAME |DESTINATION_HOST  |DESTINATION_REALM_NAME
> |NUM_OF_RETRANSMISSION_FRAMES |NUMBER_OF_REQUESTS |NUMBER_OF_RESPONSES
> |ORIGIN_HOST  |ORIGIN_REALM_NAME  |PROCEDURE_DURATION
> |PROCEDURE_DURATION_COUNTER |PROCEDURE_DURATION_MAX |PROCEDURE_SUBTYPE
> |PROCEDURE_TYPE |RELEASE_CAUSE |RELEASE_TYPE   |SOURCE_NE_NAME
> |START_TIME  |TIME_STAMP  |TRANSPORT_LAYER_PROTOCOL_NAME
> |VLAN_ID |
>
> |-|-|---
> --|-|--|
> |-|---|-
> ---|-|---|--
> -|---|--
> -|---|---|--|---
> |---||--
> --|---||
>
> 0   |APPLICATION_ID_NAME9 |1|6
> |destDest39   |DESTINATION_HOST5 |DESTINATION_REALM_NAME4
> |1|1  |9
> |ORIGIN_HOST3 |ORIGIN_REALM_NAME6 |6  |4
>  |8  |PROCEDURE_SUBTYPE2
> |17 |43|RELEASE_TYPE7  |SourceDest71   |2017-12-04
> 14:37:56 |2017-12-04 14:37:56 |TRANSPORT_LAYER_PROTOCOL_NAME1 |48  |
>
>
>
> 1   |APPLICATION_ID_NAME7 |7|1
> |destDest56   |DESTINATION_HOST5 |DESTINATION_REALM_NAME8
> |8|6  |4
> |ORIGIN_HOST1 |ORIGIN_REALM_NAME4 |2
> |29 |3  |PROCEDURE_SUBTYPE2
> |15 |20|RELEASE_TYPE77 |SourceDest33   |2017-12-04
> 14:37:57 |2017-12-04 14:37:57 |TRANSPORT_LAYER_PROTOCOL_NAME7 |47  |
>
>
>
> There are indexes:
>
> ID
>
> START_TIME
>
> START_TIME + SOURCE_NE_NAME + DEST_NE_NAME  (*CREATE* *INDEX*
> DINAMICHPA_TIME_DEST_SOURCE *ON* HPA (START_TIME,SOURCE_NE_NAME,
> DEST_NE_NAME))
>
>
>
>
> [04-12-2017 13:09:44][INFO 
> ][grid-nio-worker-tcp-comm-2-#27][TcpCommunicationSpi]
> Accepted incoming communication connection [locAddr=/10.0.5.236:47101,
> rmtAddr=/172.16.10.232:53754]
>
> [04-Dec-2017 13:09:52][WARN ][query-#290][IgniteH2Indexing] Query
> execution is too long [time=7152 ms, sql='SELECT
>
> COUNT(*) __C0_0
>
> FROM PUBLIC.HPA __Z0
>
> WHERE __Z0.DEST_NE_NAME = ?1', plan=
>
> SELECT
>
> COUNT(*) AS __C0_0
>
> FROM PUBLIC.HPA __Z0
>
> /* PUBLIC.DINAMICHPA_TIME_DEST_SOURCE: DEST_NE_NAME = ?1 */
>
> WHERE __Z0.DEST_NE_NAME = ?1
>
> , parameters=[destDest17]]
>
> [04-12-2017 13:10:27][INFO ][grid-timeout-worker-#23][IgniteKernal]
>
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>
> ^-- Node [id=c4f42f84, uptime=00:37:00.639]
>
> ^-- H/N/C [hosts=2, nodes=2, CPUs=12]
>
> ^-- CPU [cur=0.13%, avg=4.76%, GC=0%]
>
> ^-- PageMemory [pages=818037]
>
> ^-- Heap [used=1655MB, 

Re: Trouble with v 2.3.0 on AIX

2017-11-29 Thread Michael Cherkasov
Hi Vladimir,

Unfortunately, I don't have an AIX box or any other big-endian machine at
hand, so could you please assist me with fixing this bug?

Could you please run the following unit test on your AIX box:
org.apache.ignite.internal.direct.stream.v2.DirectByteBufferStreamImplV2ByteOrderSelfTest
Could you please share the results of this test?

Thanks,
Mike.

2017-11-02 12:03 GMT+03:00 Vladimir :

> Hi,
>
> Just upgraded from Ignite 2.1.0 to v.2.3.0. And now our several nodes
> cannot
> start on AIX 7. The error:
>
> ERROR 2017-11-02 11:59:01.331 [grid-nio-worker-tcp-comm-1-#22]
> org.apache.ignite.internal.util.nio.GridDirectParser: Failed to read
> message
> [msg=GridIoMessage [plc=0, topic=null, topicOrd=-1, ordered=false,
> timeout=0, skipOnTimeout=false, msg=null],
> buf=java.nio.DirectByteBuffer[pos=16841 lim=16844 cap=32768],
> reader=DirectMessageReader [state=DirectMessageState [pos=0,
> stack=[StateItem [stream=DirectByteBufferStreamImplV2
> [baseOff=1100144027456, arrOff=-1, tmpArrOff=0, tmpArrBytes=0,
> msgTypeDone=true, msg=GridDhtPartitionsFullMessage [parts=null,
> partCntrs=null, partCntrs2=null, partHistSuppliers=null,
> partsToReload=null,
> topVer=null, errs=null, compress=false, resTopVer=null, partCnt=0,
> super=GridDhtPartitionsAbstractMessage [exchId=GridDhtPartitionExchangeId
> [topVer=AffinityTopologyVersion [topVer=5, minorTopVer=0], discoEvt=null,
> nodeId=e3ac3f40, evt=NODE_JOINED], lastVer=GridCacheVersion [topVer=0,
> order=1509612963224, nodeOrder=0], super=GridCacheMessage [msgId=369,
> depInfo=null, err=null, skipPrepare=false]]], mapIt=null, it=null,
> arrPos=-1, keyDone=false, readSize=-1, readItems=0, prim=0, primShift=0,
> uuidState=0, uuidMost=0, uuidLeast=0, uuidLocId=0, lastFinished=true],
> state=0], StateItem [stream=DirectByteBufferStreamImplV2
> [baseOff=1100144027456, arrOff=-1, tmpArrOff=0, tmpArrBytes=0,
> msgTypeDone=true, msg=CacheGroupAffinityMessage [], mapIt=null, it=null,
> arrPos=-1, keyDone=true, readSize=7, readItems=2, prim=0, primShift=0,
> uuidState=0, uuidMost=0, uuidLeast=0, uuidLocId=0, lastFinished=true],
> state=0], StateItem [stream=DirectByteBufferStreamImplV2
> [baseOff=1100144027456, arrOff=-1, tmpArrOff=0, tmpArrBytes=0,
> msgTypeDone=true, msg=GridLongList [idx=0, arr=[]], mapIt=null, it=null,
> arrPos=-1, keyDone=false, readSize=512, readItems=47, prim=0, primShift=0,
> uuidState=0, uuidMost=0, uuidLeast=0, uuidLocId=0, lastFinished=true],
> state=0], StateItem [stream=DirectByteBufferStreamImplV2
> [baseOff=1100144027456, arrOff=-1, tmpArrOff=40, tmpArrBytes=40,
> msgTypeDone=false, msg=null, mapIt=null, it=null, arrPos=-1, keyDone=false,
> readSize=-1, readItems=0, prim=0, primShift=0, uuidState=0, uuidMost=0,
> uuidLeast=0, uuidLocId=0, lastFinished=true], state=0], null, null, null,
> null, null, null]], lastRead=true], ses=GridSelectorNioSessionImpl
> [worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=1,
> bytesRcvd=404253, bytesSent=1989, bytesRcvd0=16886, bytesSent0=28,
> select=true, super=GridWorker [name=grid-nio-worker-tcp-comm-1,
> igniteInstanceName=null, finished=false, hashCode=-2134841549,
> interrupted=false, runner=grid-nio-worker-tcp-comm-1-#22]]],
> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> readBuf=java.nio.DirectByteBuffer[pos=16841 lim=16844 cap=32768],
> inRecovery=GridNioRecoveryDescriptor [acked=6, resendCnt=0, rcvCnt=0,
> sentCnt=6, reserved=true, lastAck=0, nodeLeft=false, node=TcpDiscoveryNode
> [id=7683662b-16c9-42b7-aa0d-8328a60fc58e, addrs=[127.0.0.1],
> sockAddrs=[/127.0.0.1:6250], discPort=6250, order=1, intOrder=1,
> lastExchangeTime=1509612963744, loc=false, ver=2.3.0#20171028-sha1:
> 8add7fd5,
> isClient=false], connected=true, connectCnt=11, queueLimit=131072,
> reserveCnt=175, pairedConnections=false],
> outRecovery=GridNioRecoveryDescriptor [acked=6, resendCnt=0, rcvCnt=0,
> sentCnt=6, reserved=true, lastAck=0, nodeLeft=false, node=TcpDiscoveryNode
> [id=7683662b-16c9-42b7-aa0d-8328a60fc58e, addrs=[127.0.0.1],
> sockAddrs=[/127.0.0.1:6250], discPort=6250, order=1, intOrder=1,
> lastExchangeTime=1509612963744, loc=false, ver=2.3.0#20171028-sha1:
> 8add7fd5,
> isClient=false], connected=true, connectCnt=11, queueLimit=131072,
> reserveCnt=175, pairedConnections=false], super=GridNioSessionImpl
> [locAddr=/127.0.0.1:6284, rmtAddr=/127.0.0.1:61790,
> createTime=1509613141318, closeTime=0, bytesSent=28, bytesRcvd=16886,
> bytesSent0=28, bytesRcvd0=16886, sndSchedTime=1509613141318,
> lastSndTime=1509613141318, lastRcvTime=1509613141318, readsPaused=false,
> filterChain=FilterChain[filters=[GridNioCodecFilter
> [parser=o.a.i.i.util.nio.GridDirectParser@8e66d834, directMode=true],
> GridConnectionBytesVerifyFilter], accepted=true]]]
> java.lang.IllegalArgumentException: null
> at java.nio.Buffer.position(Buffer.java:255) ~[?:1.8.0]
> at
> org.apache.ignite.internal.direct.stream.v2.DirectByteBufferStreamImplV2.
> 

Re: Initial query resent the data when client got reconnect

2017-11-22 Thread Michael Cherkasov
There's a ticket for this feature:
https://issues.apache.org/jira/browse/IGNITE-5625
please vote for it and at some point, it will be implemented.

2017-11-21 13:52 GMT+03:00 gunman524 :

> Mike, double checked Mongodb way, their ObjectID only provide a unique ID,
> sorry for my info.   But there is open source project provide the auto
> increment feature for mongodb.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: IgniteInterruptedException: Node is stopping

2017-11-22 Thread Michael Cherkasov
Hi Hyma,

Could you please show a code snippet of where it hangs?

Thanks,
Mike.

2017-11-22 12:48 GMT+03:00 Hyma :

> Thanks Mikhail.
>
> I suspected to increase the spark heartbeat/network timeout. But my
> question
> here is if an executor is lost, corresponding ignite node also gets out of
> cluster. In that case, ignite takes care of re balancing between the other
> active nodes right. My spark job was not killed and it keeps on running
> until I terminate the job, Instead my job is getting hung at the ignite
> cache load step for hours.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Logging query on server node sent from client node

2017-11-22 Thread Michael Cherkasov
Hi,

I don't think you are running Ignite with that configuration, because the
snippet you sent won't work.
EventType.EVTS_CACHE_QUERY is an array, so you should see the following
exception in the logs:
Caused by: org.springframework.beans.TypeMismatchException: Failed to
convert property value of type 'java.util.ArrayList' to required type
'int[]' for property 'includeEventTypes'; nested exception is
java.lang.IllegalArgumentException: Cannot convert value of type 'int[]' to
required type 'int' for property 'includeEventTypes[9]': PropertyEditor
[org.springframework.beans.propertyeditors.CustomNumberEditor] returned
inappropriate value of type 'int[]'

replace it with the individual event-type constants (the XML snippet was
lost in the archive; presumably it listed EVT_CACHE_QUERY_EXECUTED and
EVT_CACHE_QUERY_OBJECT_READ, the two events the attached example
subscribes to).

I attached a working example to the email.

Thanks,
Mike.

2017-11-16 10:50 GMT+03:00 opsharma1986 :

> Yes, tried this below configuration
>  class="org.apache.ignite.configuration.
> IgniteConfiguration">
> 
>  static-field="org.apache.ignite.events.EventType.EVTS_CACHE_QUERY"/>
> 
> ...
> Rest-of-the-configs
>
> It did not work for me.
>
> Thanks
> Om
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.events.CacheQueryExecutedEvent;
import org.apache.ignite.events.CacheQueryReadEvent;
import org.apache.ignite.events.EventAdapter;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class QueryExecutedEvent {

    static class MyValue {
        @QuerySqlField(index = true)
        private int search;
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("example-ignite.xml")) {
            // Fires locally on the node where the query executes.
            IgnitePredicate<EventAdapter> locLsnr = new IgnitePredicate<EventAdapter>() {
                @Override
                public boolean apply(EventAdapter evt) {
                    switch (evt.type()) {
                        case EventType.EVT_CACHE_QUERY_EXECUTED:
                            CacheQueryExecutedEvent<?, ?> cacheExecEvent = (CacheQueryExecutedEvent<?, ?>)evt;
                            System.out.println("query " + cacheExecEvent.clause());
                            break;

                        case EventType.EVT_CACHE_QUERY_OBJECT_READ:
                            // Object-read events are CacheQueryReadEvent, not
                            // CacheQueryExecutedEvent, so cast accordingly.
                            CacheQueryReadEvent<?, ?> cacheObjReadEvent = (CacheQueryReadEvent<?, ?>)evt;
                            System.out.println("query " + cacheObjReadEvent.clause());
                            break;

                        default:
                            System.out.println(evt.getClass());
                            break;
                    }

                    return true; // keep listening
                }
            };

            ignite.events().localListen(locLsnr,
                EventType.EVT_CACHE_QUERY_EXECUTED, EventType.EVT_CACHE_QUERY_OBJECT_READ);

            CacheConfiguration<Integer, MyValue> cfg = new CacheConfiguration<>("DFLT");

            cfg.setIndexedTypes(Integer.class, MyValue.class);

            IgniteCache<Integer, MyValue> cache = ignite.createCache(cfg);

            SqlQuery<Integer, MyValue> query = new SqlQuery<>(MyValue.class, "search = ?");
            query.setArgs(5);
            query.setLocal(true);

            cache.query(query).getAll();
        }
    }
}





(attached example-ignite.xml and default config: the Spring XML tags were
stripped by the archive; the surviving text shows TCP discovery configured
for localhost:47500..47600)



Re: Ignite-cassandra module issue

2017-11-09 Thread Michael Cherkasov
Hi Dmitriy,

I created a ticket for this:
https://issues.apache.org/jira/browse/IGNITE-6853
it will be fixed in 2.4.

Thanks,
Mike.

2017-11-09 2:35 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org>:

> Hi Michael, do you have any update for the issue?
>
> On Thu, Nov 2, 2017 at 5:14 PM, Michael Cherkasov <
> michael.cherka...@gmail.com> wrote:
>
>> Hi Tobias,
>>
>> Thank you for explaining how to reproduce it, I'll try your instruction.
>> I spend several days trying to reproduce the issue,
>> but I thought that the reason of this is too high load and I didn't stop
>> client during testing.
>> I'll check your instruction and try to fix the issue.
>>
>> Thanks,
>> Mike.
>>
>> 2017-10-25 16:23 GMT+03:00 Tobias Eriksson <tobias.eriks...@qvantel.com>:
>>
>>> Hi Andrey et al
>>>
>>> I believe I now know what the problem is, the Cassandra session is
>>> refreshed, but before it is a prepared statement is created/used and there,
>>> and so using a new session with an old prepared statement is not working.
>>>
>>>
>>>
>>> The way to reproduce is
>>>
>>> 1)   Start Ignite Server Node
>>>
>>> 2)   Start client which inserts a batch of 100 elements
>>>
>>> 3)   End client
>>>
>>> 4)   Now Ignite Server Node returns the Cassandra Session to the
>>> pool
>>>
>>> 5)   Wait 5+ minutes
>>>
>>> 6)   Now Ignite Server Node has does a clean-up of the “unused”
>>> Cassandra sessions
>>>
>>> 7)   Start client which inserts a batch of 100 elements
>>>
>>> 8)   Boom ! The exception starts to happen
>>>
>>>
>>>
>>> Reason is
>>>
>>> 1)   Execute is called for a BATCH
>>>
>>> 2)   Prepared-statement is reused since there is a cache of those
>>>
>>> 3)   It is about to do session().execute( batch )
>>>
>>> 4)   BUT the call to session() results in refreshing the session,
>>> and this is where the prepared statements as the old session new them are
>>> cleaned up
>>>
>>> 5)   Now it is looping over 100 times with a NEW session but with
>>> an OLD prepared statement
>>>
>>>
>>>
>>> This is a bug,
>>>
>>>
>>>
>>> -Tobias
>>>
>>>
>>>
>>>
>>>
>>> *From: *Andrey Mashenkov <andrey.mashen...@gmail.com>
>>> *Reply-To: *"user@ignite.apache.org" <user@ignite.apache.org>
>>> *Date: *Wednesday, 25 October 2017 at 14:12
>>> *To: *"user@ignite.apache.org" <user@ignite.apache.org>
>>> *Subject: *Re: Ignite-cassandra module issue
>>>
>>>
>>>
>>> Hi Tobias,
>>>
>>>
>>>
>>> What ignite version do you use? May be this was already fixed in latest
>>> one?
>>>
>>> I see related fix inclueded in upcoming 2.3 version.
>>>
>>>
>>>
>>> See IGNITE-5897 [1] issue. It is unobvious, but this fix session
>>> init\end logic, so session should be closed in proper way.
>>>
>>>
>>>
>>> [1] https://issues.apache.org/jira/browse/IGNITE-5897
>>>
>>>
>>>
>>>
>>>
>>> On Wed, Oct 25, 2017 at 11:13 AM, Tobias Eriksson <
>>> tobias.eriks...@qvantel.com> wrote:
>>>
>>> Hi
>>>  Sorry did not include the context when I replied
>>>  Has anyone been able to resolve this problem, cause I have it too on and
>>> off
>>> In fact it sometimes happens just like that, e.g. I have been running my
>>> Ignite client and then stop it, and then it takes a while and run it
>>> again,
>>> and all by a sudden this error shows up. An that is the first thing that
>>> happens, and there is NOT a massive amount of load on Cassandra at that
>>> time. But I have also seen it when I hammer Ignite/Cassandra with
>>> updates/inserts.
>>>
>>> This is a deal-breaker for me, I need to understand how to fix this,
>>> cause
>>> having this in production is not an option.
>>>
>>> -Tobias
>>>
>>>
>>> Hi!
>>> I'm using the cassandra as persistence store for my caches and have one
>>> issue by handling a huge data (via IgniteDataStreamer from kafka).
>>> Ignite Configuration:
>>

Re: Ignite-cassandra module issue

2017-11-02 Thread Michael Cherkasov
Hi Tobias,

Thank you for explaining how to reproduce it; I'll try your instructions. I
spent several days trying to reproduce the issue,
but I assumed the cause was too-high load, and I didn't stop the client
during testing.
I'll follow your instructions and try to fix the issue.

Thanks,
Mike.

2017-10-25 16:23 GMT+03:00 Tobias Eriksson :

> Hi Andrey et al
>
> I believe I now know what the problem is, the Cassandra session is
> refreshed, but before it is a prepared statement is created/used and there,
> and so using a new session with an old prepared statement is not working.
>
>
>
> The way to reproduce is
>
> 1)   Start Ignite Server Node
>
> 2)   Start client which inserts a batch of 100 elements
>
> 3)   End client
>
> 4)   Now Ignite Server Node returns the Cassandra Session to the pool
>
> 5)   Wait 5+ minutes
>
> 6)   Now Ignite Server Node has does a clean-up of the “unused”
> Cassandra sessions
>
> 7)   Start client which inserts a batch of 100 elements
>
> 8)   Boom ! The exception starts to happen
>
>
>
> Reason is
>
> 1)   Execute is called for a BATCH
>
> 2)   Prepared-statement is reused since there is a cache of those
>
> 3)   It is about to do session().execute( batch )
>
> 4)   BUT the call to session() results in refreshing the session, and
> this is where the prepared statements as the old session new them are
> cleaned up
>
> 5)   Now it is looping over 100 times with a NEW session but with an
> OLD prepared statement
>
>
>
> This is a bug,
>
>
>
> -Tobias
>
>
>
>
>
> *From: *Andrey Mashenkov 
> *Reply-To: *"user@ignite.apache.org" 
> *Date: *Wednesday, 25 October 2017 at 14:12
> *To: *"user@ignite.apache.org" 
> *Subject: *Re: Ignite-cassandra module issue
>
>
>
> Hi Tobias,
>
>
>
> What ignite version do you use? May be this was already fixed in latest
> one?
>
> I see related fix inclueded in upcoming 2.3 version.
>
>
>
> See IGNITE-5897 [1] issue. It is unobvious, but this fix session init\end
> logic, so session should be closed in proper way.
>
>
>
> [1] https://issues.apache.org/jira/browse/IGNITE-5897
>
>
>
>
>
> On Wed, Oct 25, 2017 at 11:13 AM, Tobias Eriksson <
> tobias.eriks...@qvantel.com> wrote:
>
> Hi
>  Sorry did not include the context when I replied
>  Has anyone been able to resolve this problem, cause I have it too on and
> off
> In fact it sometimes happens just like that, e.g. I have been running my
> Ignite client and then stop it, and then it takes a while and run it again,
> and all by a sudden this error shows up. An that is the first thing that
> happens, and there is NOT a massive amount of load on Cassandra at that
> time. But I have also seen it when I hammer Ignite/Cassandra with
> updates/inserts.
>
> This is a deal-breaker for me, I need to understand how to fix this, cause
> having this in production is not an option.
>
> -Tobias
>
>
> Hi!
> I'm using the cassandra as persistence store for my caches and have one
> issue by handling a huge data (via IgniteDataStreamer from kafka).
> Ignite Configuration:
> final IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
> igniteConfiguration.setIgniteInstanceName("test");
> igniteConfiguration.setClientMode(true);
> igniteConfiguration.setGridLogger(new Slf4jLogger());
> igniteConfiguration.setMetricsLogFrequency(0);
> igniteConfiguration.setDiscoverySpi(configureTcpDiscoverySpi());
> final BinaryConfiguration binaryConfiguration = new BinaryConfiguration();
> binaryConfiguration.setCompactFooter(false);
> igniteConfiguration.setBinaryConfiguration(binaryConfiguration);
> igniteConfiguration.setPeerClassLoadingEnabled(true);
> final MemoryPolicyConfiguration memoryPolicyConfiguration = new
> MemoryPolicyConfiguration();
> memoryPolicyConfiguration.setName("3Gb_Region_Eviction");
> memoryPolicyConfiguration.setInitialSize(1024L * 1024L * 1024L);
> memoryPolicyConfiguration.setMaxSize(3072L * 1024L * 1024L);
>
> memoryPolicyConfiguration.setPageEvictionMode(
> DataPageEvictionMode.RANDOM_2_LRU);
> final MemoryConfiguration memoryConfiguration = new MemoryConfiguration();
> memoryConfiguration.setMemoryPolicies(memoryPolicyConfiguration);
> igniteConfiguration.setMemoryConfiguration(memoryConfiguration);
>
> Cache configuration:
> final CacheConfiguration cacheConfiguration = new
> CacheConfiguration<>();
> cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
> cacheConfiguration.setStoreKeepBinary(true);
> cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
> cacheConfiguration.setBackups(0);
> cacheConfiguration.setStatisticsEnabled(false);
> cacheConfiguration.setName("TestCache");
>
> cacheConfiguration.setReadThrough(true);
> 

Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

2017-09-29 Thread Michael Cherkasov
Hi Kenan,

I set up exactly the same config as you have, and I wasn't able to read
data back from Cassandra either.

Then I found an error in the configuration: in persistence-settings.xml you
have 'table="testresponse"' while you put data into 'table="test_response"',
so I added the underscore and now it works fine.

Thanks,
Mike.


2017-09-25 17:16 GMT+03:00 Kenan Dalley :

> Any other thoughts?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Unable to access cache data in Dbeaver

2017-09-27 Thread Michael Cherkasov
Do you mean that you didn't change anything in the configuration?

Can you see your data in the H2 debug console? (
https://apacheignite.readme.io/docs/sql-performance-and-debugging#using-h2-debug-console
)

Thanks,
Mike.
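
A minimal sketch of enabling the H2 debug console described at that link
(the property can equally be passed as -DIGNITE_H2_DEBUG_CONSOLE=true on
the JVM command line; the config file name is a placeholder):

import org.apache.ignite.Ignition;

public class DebugConsoleStart {
    public static void main(String[] args) {
        // Must be set before the node starts; opens the H2 web console
        // showing the node's local SQL data.
        System.setProperty("IGNITE_H2_DEBUG_CONSOLE", "true");
        Ignition.start("default-config.xml");
    }
}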

2017-09-27 12:10 GMT+03:00 devkumar :

> @Mikhail, I updated the post. I am able to get cache data in Sqlworkbench
> with 20 server and 7 client node. but when i did same activity with 20
> server node and 20 client node, i am seeing the cache is empty in
> Sqlworkbench. while ignite api is saying that data is present in the cache.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Unable to access cache data in Dbeaver

2017-09-27 Thread Michael Cherkasov
Hi, I was recently able to run DBeaver against Ignite and had no problem
querying data with it.

Could you please show how you configured indexing in Ignite (QueryEntity?)?
And how did you query the data in DBeaver? Did DBeaver show any error
message?

Thanks,
Mike.

2017-09-27 11:55 GMT+03:00 devkumar :

> My ignite serve is having 20 nodes and 20 clients. i am able to connect to
> the cache using DBeaver. But not able to see any records in DBeaver for any
> of the cache, while ignite get-cache-size API response is showing that
> records are present in the cache.
> Please refer to the screenshot for more information
>  configuration.png>
>  t1260/ignite_rest_api_response.png>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Design question on the Ignite Web Session Clustering.

2017-09-08 Thread Michael Cherkasov
Hi,

I answered on SO; I'll copy it here:
"""
If you have a local cache for sessions and sticky sessions, why do you need
to use Ignite at all?

However, it's better to go with Ignite: your app will have HA, so if some
node fails, the whole app will still work fine. I agree you should split
the app cluster and the Ignite cluster; however, I don't think you should
worry about connection problems between the client and the server. This
kind of problem should lead to a 500 error. Would you emulate your main
storage if your DB went down or you couldn't connect to it?
"""
Thanks,
Mikhail.

2017-09-07 20:12 GMT+03:00 Shrikant Patel :

> Hi All,
>
>
>
> I have design question about Ignite web session clustering.
>
>
>
> I have springboot app with UI. It clustered app ie multiple instance of 
> springboot app behind the load balancer. I am using 
> org.apache.ignite.cache.websession.WebSessionFilter()to intercept request and 
> create\manage session for any incoming request.
>
>
>
> I have 2 option
>
>
>
> 1.  Embed the ignite node inside springboot app. So have these embedded 
> ignite node (on each springboot JVM) be part of cluster. This way request 
> session is replicated across the entire springboot cluster. On load balancer 
> I don’t have to maintain the sticky connection. The request can go to any app 
> in round robin or least load algorithm.
>
>
>
> Few considerations
>
> a.  Architect is simple. I don’t have worry about the cache being down 
> etc.
>
> b.  Now the cache being embedded, its using CPU and memory from app jvm. 
> It has potential of starving my app of resources.
>
>
>
> 2.  Have ignite cluster running outside of app JVM. So now I run client 
> node in springboot app and connect to main ignite cluster.
>
>
>
> Few considerations
>
>
>
> a.  For any reason, if the client node cannot connect to main ignite 
> cluster. Do I have to manage the session manually and then push those session 
> manually at later point to the ignite cluster??
>
> b.  If I manage session locally I will need to have sticky connection on 
> the load balancer. Which I want to avoid if possible.
>
>
>1. I am leaning to approach 2, but want to make it simple. So if
>   client node cannot create session (override 
> org.apache.ignite.cache.websession.WebSessionFilter())
>   it redirects user to page indicating the app is down or to another app 
> node
>   in the cluster.
>
>
>
>
>
> Are there any other design approach I can take?
>
> Am I overlooking anything in either approach?
>
>
>
> If you have dealt with it, please share your thoughts.
>
>
>
> Thanks in advance.
>
> Shri
>
>
> This e-mail and its contents (to include attachments) are the property of
> National Health Systems, Inc., its subsidiaries and affiliates, including
> but not limited to Rx.com Community Healthcare Network, Inc. and its
> subsidiaries, and may contain confidential and proprietary or privileged
> information. If you are not the intended recipient of this e-mail, you are
> hereby notified that any unauthorized disclosure, copying, or distribution
> of this e-mail or of its attachments, or the taking of any unauthorized
> action based on information contained herein is strictly prohibited.
> Unauthorized use of information contained herein may subject you to civil
> and criminal prosecution and penalties. If you are not the intended
> recipient, please immediately notify the sender by telephone at
> 800-433-5719 or return e-mail and permanently delete the original e-mail.
>


Re: 2.1.4 visor console - activate/deactivate command?

2017-09-07 Thread Michael Cherkasov
Hi,
there's no such version as 2.1.4; the latest one is 2.1.

Thanks,
Mike.

2017-09-07 19:00 GMT+03:00 mzingman :

> I've downloaded latest ignite-fabric (2.1.4), and running ignitevisorcmd,
> but
> having trouble finding activate/deactivate grid command (promised in
> Release
> Notes). Please help, thank you.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: IGFS With HDFS Configuration

2017-08-21 Thread Michael Cherkasov
Hi Pradeep,

https://issues.apache.org/jira/browse/ZEPPELIN-2674 is resolved;
Zeppelin 0.8 will ship with Ignite 2.0 support.

Thanks,
Mikhail.


2017-08-04 19:09 GMT+03:00 Mikhail Cherkasov :

> Hi Pradeep,
>
> I can't run it with 1.9 too, however, the same config works fine with 2.0.
> You can try to build zeppelin your self, just apply the following patch:
> https://github.com/apache/zeppelin/pull/2445/files
> to some stable version of zeppelin.
> I'm not sure but it should work, anyway, the only way to check this is to
> build with the patch and run.
>
> Thanks,
> Mikhail.
>
> On Thu, Aug 3, 2017 at 9:57 PM, pradeepchanumolu 
> wrote:
>
>> I couldn't get it running with the v1.9.0.
>>
>> I bumped the version to the latest v2.1.0 and could get the ignite
>> services
>> up. (Zeppelin only supports v1.9.0 btw)
>> I am attaching my config file to this message.
>> default-config.xml
>> > default-config.xml>
>>
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/IGFS-With-HDFS-Configuration-tp15830p15964.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
> Thanks,
> Mikhail.
>


RE: IgniteAtomicSequence durability

2017-08-17 Thread Michael Griggs
Dmitry,

Can Ignite 2.1 persist an Ignite atomicSequence on disk?

Regards
Mike

-Original Message-
From: dkarachentsev [mailto:dkarachent...@gridgain.com] 
Sent: 16 August 2017 15:15
To: user@ignite.apache.org
Subject: Re: IgniteAtomicSequence durability

Hi Mike,

All atomics may be configured with
IgniteConfiguration.setAtomicConfiguration() or Ignite.atomicSequence(),
where you can specify number of backups, or set it as REPLICATED, but cannot
configure persistent store. 

Only Ignite 2.1 can persist datastructures on disk.

Thanks!
-Dmitry.



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/IgniteAtomicSequence-durabili
ty-tp16220p16229.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



IgniteAtomicSequence durability

2017-08-16 Thread Michael Griggs
Igniters,

I have had an interesting question: 

> Since I will be inserting records into the cache I am implementing an
instance of the IgniteAtomicSequence to provide the 
> unique id for each entry in the cache.  In order for the sequence to
survive termination and re-initialization of the service I 
> want to store the IgniteAtomicSequence in a separate persisted cache.  

Whilst in principle it seems possible to me that the output value of an
IgniteAtomicSequence could be stored in a cache with Persistent Store, it
nevertheless seems a very heavyweight approach.  Does IgniteAtomicSequence
have any durability functionality (I can't find any in the documentation)?

Mike
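
Following Dmitry's pointer above, a minimal sketch of making a sequence
survive single-node failures via backups (surviving a full cluster restart
would still need Ignite 2.1+ native persistence; names and the backup count
are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicSequence;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.AtomicConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class SequenceSetup {
    public static void main(String[] args) {
        AtomicConfiguration atomicCfg = new AtomicConfiguration();
        atomicCfg.setBackups(1); // keep a copy so one node can fail

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setAtomicConfiguration(atomicCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Create the sequence (or get it if it already exists), starting at 0.
            IgniteAtomicSequence seq = ignite.atomicSequence("entry-ids", 0, true);
            System.out.println("next id: " + seq.incrementAndGet());
        }
    }
}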



Re: how to append data to IGFS so that data gets saved to Hive partitioned table?

2017-08-07 Thread Michael Cherkasov
OK, I'll try to reproduce this issue locally and will respond tomorrow.

2017-08-05 13:50 GMT+03:00 csumi :

> Yes if I create partition using hive, select works fine. Please see last
> two
> bullets of my previous comment. Copying here for your quick reference:
>
> -  Now insert new row in the table to the partition created through
> code
> earlier
> insert into table stocks3 PARTITION (years=2017,months=7,days=4)
> values('AAPL',1501236980,120.34);
> -   Run select query again. Now it gives 3 rows. Two of which were
> inserted
> using insert command and one through code which was not coming in select
> query earlier.
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/how-to-append-data-to-IGFS-so-that-data-gets-saved-to-Hive-partitioned-table-tp15725p16013.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: how to append data to IGFS so that data gets saved to Hive partitioned table?

2017-08-04 Thread Michael Cherkasov
Hi,

Could you please clarify: if you run all the actions using IGFS, but instead
of fs.append use Hive, like:
insert into table stocks PARTITION (years=2004,months=12,days=3)
values('AAPL',1501236980,120.34);

Does select work this time?

Thanks,
Mikhail.

2017-08-04 12:56 GMT+03:00 csumi :

> Let me try to clarify with the sequence of steps performed.
> - Created a partitioned table through Hive using the query below. It
> creates a directory in HDFS:
> create table stocks3 (stock string, time timestamp, price float)
> PARTITIONED BY (years bigint, months bigint, days bigint) ROW FORMAT
> DELIMITED FIELDS TERMINATED BY ',';
> - Then I get streaming data, and using IgniteFileSystem's append/create
> method, it gets saved to the Ignite-backed Hadoop file system.
> - Ran the select query below. No result returned:
> select * from stocks3;
> - Stopped Ignite and ran the select again on Hive. No result, with the
> logs below:
>
> hive> select * from stocks3;
> 17/08/04 14:59:08 INFO conf.HiveConf: Using the default value passed in for
> log id: b5e3e924-e46a-481c-8aef-30d48605a2da
> 17/08/04 14:59:08 INFO session.SessionState: Updating thread name to
> b5e3e924-e46a-481c-8aef-30d48605a2da main
> 17/08/04 14:59:08 WARN operation.Operation: Unable to create operation log
> file:
> D:\tmp\hive\\operation_logs\b5e3e924-e46a-481c-8aef-
> 30d48605a2da\137adad6-ea23-462c-a414-6ce260e5bd49
> java.io.IOException: The system cannot find the path specified
> at java.io.WinNTFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:1012)
> at
> org.apache.hive.service.cli.operation.Operation.
> createOperationLog(Operation.java:237)
> at
> org.apache.hive.service.cli.operation.Operation.beforeRun(
> Operation.java:279)
> at
> org.apache.hive.service.cli.operation.Operation.run(Operation.java:314)
> at
> org.apache.hive.service.cli.session.HiveSessionImpl.
> executeStatementInternal(HiveSessionImpl.java:499)
> at
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(
> HiveSessionImpl.java:486)
> at
> org.apache.hive.service.cli.CLIService.executeStatementAsync(
> CLIService.java:295)
> at
> org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(
> ThriftCLIService.java:506)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:491)
> at
> org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(
> HiveConnection.java:1412)
> at com.sun.proxy.$Proxy21.ExecuteStatement(Unknown Source)
> at
> org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(
> HiveStatement.java:308)
> at
> org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:250)
> at
> org.apache.hive.beeline.Commands.executeInternal(Commands.java:988)
> at org.apache.hive.beeline.Commands.execute(Commands.java:1160)
> at org.apache.hive.beeline.Commands.sql(Commands.java:1074)
> at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1148)
> at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:976)
> at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:886)
> at org.apache.hive.beeline.cli.HiveCli.runWithArgs(HiveCli.
> java:35)
> at org.apache.hive.beeline.cli.HiveCli.main(HiveCli.java:29)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:491)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> 17/08/04 14:59:08 INFO ql.Driver: Compiling
> command(queryId=_20170804145908_b270c978-ab00-
> 4160-a2a6-c19b42eab676):
> select * from stocks3
> 17/08/04 14:59:08 INFO parse.CalcitePlanner: Starting Semantic Analysis
> 17/08/04 14:59:08 INFO parse.CalcitePlanner: Completed phase 1 of Semantic
> Analysis
> 17/08/04 14:59:08 INFO parse.CalcitePlanner: Get metadata for source tables
> 17/08/04 14:59:08 INFO metastore.HiveMetaStore: 0: get_table : db=yt
> tbl=stocks3
> 17/08/04 14:59:08 INFO HiveMetaStore.audit: ugi=  ip=unknown-ip-addr
> cmd=get_table : db=yt tbl=stocks3
> 17/08/04 14:59:08 INFO parse.CalcitePlanner: Get metadata for subqueries
> 17/08/04 14:59:08 INFO parse.CalcitePlanner: Get metadata for destination
> tables
> 17/08/04 14:59:09 INFO ql.Context: New scratch dir is
> hdfs://localhost:9000/tmp/hive//b5e3e924-e46a-
> 

Re: Best practise for setting Ignite.Active to true when using persistence layer in Ignite 2.1

2017-08-03 Thread Michael Cherkasov
Hi again,

Check out the email on the dev list: "Cluster auto activation design proposal"
https://issues.apache.org/jira/browse/IGNITE-5851
As you can see, this feature is targeted for 2.2.

Thanks,
Mikhail.

2017-08-03 6:58 GMT+03:00 Raymond Wilson <raymond_wil...@trimble.com>:

> Michael,
>
>
>
> Is there a reference implementation in Ignite 2.1 for an agent that
> listens to topology changes to decide when to set active to true?
>
>
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Michael Cherkasov [mailto:michael.cherka...@gmail.com]
> *Sent:* Thursday, August 3, 2017 1:25 AM
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Best practise for setting Ignite.Active to true when using
> persistence layer in Ignite 2.1
>
>
>
> >Does this mean we have to listen to events of server nodes going up and
> down and activate and deactivate the cluster?
>
>
>
> No, you need to deactivate the cluster when you are going to shut down the
> whole cluster. And when you bring the cluster back online, you need to wait
> until all nodes are in place and then activate it.
>
>
>
>
>
> 2017-08-02 16:22 GMT+03:00 Rohan Shetty <rohan.she...@gmail.com>:
>
> Does this mean we have to listen to events of server nodes going up and
> down and activate and deactivate the cluster?
>
>
>
> On Wed, Aug 2, 2017 at 3:18 PM, Michael Cherkasov <
> michael.cherka...@gmail.com> wrote:
>
> When all nodes are up, so that in the latest topology snapshot the server
> count equals the number of servers you started, the cluster can be
> activated.
>
>
>
> 2017-08-02 0:51 GMT+03:00 Raymond Wilson <raymond_wil...@trimble.com>:
>
> Hi Mikhail,
>
>
>
> Thanks for the clarifications.
>
>
>
> Yes, I knew setting active was only required when using the persistence
> layer, which is the topic of the question :)
>
>
>
> I was interested if there were best practices or approaches for
> determining when the grid had fully initialized. I realise this is somewhat
> application specific, but was looking for an established pattern before I
> invented one myself.
>
>
>
> In my case I have an affinity function that responds to topology changes
> which intrinsically would know when it had a ‘quorum’. Is this a typical
> place for setting active to true?
>
>
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Mikhail Cherkasov [mailto:mcherka...@gridgain.com]
> *Sent:* Tuesday, August 1, 2017 11:59 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Best practise for setting Ignite.Active to true when using
> persistence layer in Ignite 2.1
>
>
>
> Hi Raymond,
>
>
>
> The Ignite cluster is inactive on startup only if persistence is enabled.
> This is done to avoid unnecessary partition exchanges between nodes:
>
> for example, if you have 3 nodes and 1 backup enabled and you start only 2
> of the 3 nodes, they will treat the third node as dead and start the
> process of restoring data from backups and rebalancing data to spread it
> among the 2 remaining nodes; when you add the missing third node back, the
> process will be repeated.
>
> So we start the cluster as inactive. When all nodes are started and ready,
> so that no cluster topology changes are expected, you should activate the
> cluster.
>
> Also, when you shut down the cluster, some nodes can still accept data
> update requests that other nodes won't see; so, to determine which node has
> the latest data, we need to start all nodes first and only then activate
> the cluster.
>
>
>
> Thanks,
>
> Mikhail.
>
>
>
> On Tue, Aug 1, 2017 at 5:05 AM, Raymond Wilson <raymond_wil...@trimble.com>
> wrote:
>
> Hi,
>
>
>
> I am experimenting with a POC looking into using the Ignite persistence
> layer.
>
>
>
> One aspect of this is setting the grid to be ‘Active’ after all cache grid
> nodes have instantiated.
>
>
>
> In practical terms, what is the best practice for ensuring the cluster is
> running and in a good state to be set to active? What is the downside of
> setting active to true before all grid nodes are running?
>
>
>
> Thanks,
>
> Raymond.
>
>
>
>
>
>
>
> --
>
> Thanks,
>
> Mikhail.
>
>
>
>
>
>
>


Re: Best practise for setting Ignite.Active to true when using persistence layer in Ignite 2.1

2017-08-03 Thread Michael Cherkasov
Hi Raymond,

Unfortunately, right now there's no auto-activation; restarting the cluster
is treated as a rare event that should be controlled manually. However, you
can listen for the EVT_NODE_JOINED event; when all nodes are in place, you
can activate the cluster.

And you only need this if you have Ignite persistence turned on and you have
some data on disk.
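
A rough sketch of such a listener, assuming the Ignite 2.1-era
Ignite.active(boolean) API (the expected server count and config path are
illustrative, and EVT_NODE_JOINED must be enabled via
IgniteConfiguration.setIncludeEventTypes() for the listener to fire):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class ActivationAgent {
    // Illustrative: how many server nodes you expect before activating.
    private static final int EXPECTED_SERVERS = 3;

    public static void main(String[] args) {
        Ignite ignite = Ignition.start("config/node.xml"); // illustrative path

        IgnitePredicate<Event> lsnr = evt -> {
            // Activate once all expected server nodes have joined.
            if (!ignite.active()
                && ignite.cluster().forServers().nodes().size() >= EXPECTED_SERVERS)
                ignite.active(true);

            return true; // keep listening
        };

        ignite.events().localListen(lsnr, EventType.EVT_NODE_JOINED);
    }
}

Activation is cluster-wide, so calling active(true) from one node is enough.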

Thanks,
Mikhail.

2017-08-03 6:58 GMT+03:00 Raymond Wilson <raymond_wil...@trimble.com>:

> Michael,
>
>
>
> Is there a reference implementation in Ignite 2.1 for an agent that
> listens to topology changes to decide when to set active to true?
>
>
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Michael Cherkasov [mailto:michael.cherka...@gmail.com]
> *Sent:* Thursday, August 3, 2017 1:25 AM
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Best practise for setting Ignite.Active to true when using
> persistence layer in Ignite 2.1
>
>
>
> >Does this mean we have to listen to events of server nodes going up and
> down and activate and deactivate the cluster?
>
>
>
> No, you need to deactivate the cluster when you are going to shut down the
> whole cluster. And when you bring the cluster back online, you need to wait
> until all nodes are in place and then activate it.
>
>
>
>
>
> 2017-08-02 16:22 GMT+03:00 Rohan Shetty <rohan.she...@gmail.com>:
>
> Does this mean we have to listen to events of server nodes going up and
> down and activate and deactivate the cluster?
>
>
>
> On Wed, Aug 2, 2017 at 3:18 PM, Michael Cherkasov <
> michael.cherka...@gmail.com> wrote:
>
> When all nodes are up, so that in the latest topology snapshot the server
> count equals the number of servers you started, the cluster can be
> activated.
>
>
>
> 2017-08-02 0:51 GMT+03:00 Raymond Wilson <raymond_wil...@trimble.com>:
>
> Hi Mikhail,
>
>
>
> Thanks for the clarifications.
>
>
>
> Yes, I knew setting active was only required when using the persistence
> layer, which is the topic of the question :)
>
>
>
> I was interested if there were best practices or approaches for
> determining when the grid had fully initialized. I realise this is somewhat
> application specific, but was looking for an established pattern before I
> invented one myself.
>
>
>
> In my case I have an affinity function that responds to topology changes
> which intrinsically would know when it had a ‘quorum’. Is this a typical
> place for setting active to true?
>
>
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Mikhail Cherkasov [mailto:mcherka...@gridgain.com]
> *Sent:* Tuesday, August 1, 2017 11:59 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Best practise for setting Ignite.Active to true when using
> persistence layer in Ignite 2.1
>
>
>
> Hi Raymond,
>
>
>
> The Ignite cluster is inactive on startup only if persistence is enabled.
> This is done to avoid unnecessary partition exchanges between nodes:
>
> for example, if you have 3 nodes and 1 backup enabled and you start only 2
> of the 3 nodes, they will treat the third node as dead and start the
> process of restoring data from backups and rebalancing data to spread it
> among the 2 remaining nodes; when you add the missing third node back, the
> process will be repeated.
>
> So we start the cluster as inactive. When all nodes are started and ready,
> so that no cluster topology changes are expected, you should activate the
> cluster.
>
> Also, when you shut down the cluster, some nodes can still accept data
> update requests that other nodes won't see; so, to determine which node has
> the latest data, we need to start all nodes first and only then activate
> the cluster.
>
>
>
> Thanks,
>
> Mikhail.
>
>
>
> On Tue, Aug 1, 2017 at 5:05 AM, Raymond Wilson <raymond_wil...@trimble.com>
> wrote:
>
> Hi,
>
>
>
> I am experimenting with a POC looking into using the Ignite persistence
> layer.
>
>
>
> One aspect of this is setting the grid to be ‘Active’ after all cache grid
> nodes have instantiated.
>
>
>
> In practical terms, what is the best practice for ensuring the cluster is
> running and in a good state to be set to active? What is the downside of
> setting active to true before all grid nodes are running?
>
>
>
> Thanks,
>
> Raymond.
>
>
>
>
>
>
>
> --
>
> Thanks,
>
> Mikhail.
>
>
>
>
>
>
>


Re: Best practise for setting Ignite.Active to true when using persistence layer in Ignite 2.1

2017-08-02 Thread Michael Cherkasov
>Does this mean we have to listen to events of server nodes going up and
down and activate and deactivate the cluster?

No, you need to deactivate the cluster when you are going to shut down the
whole cluster. And when you bring the cluster back online, you need to wait
until all nodes are in place and then activate it.
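
In code, the intended lifecycle is roughly this sketch (assuming the Ignite
2.1-era Ignite.active(boolean) API; the method names are illustrative):

import org.apache.ignite.Ignite;

public class ClusterLifecycle {
    // Call from any node before the whole cluster is shut down.
    static void deactivateBeforeShutdown(Ignite ignite) {
        ignite.active(false);
    }

    // Call after restart, once every expected node has rejoined the topology.
    static void activateAfterRestart(Ignite ignite) {
        ignite.active(true);
    }
}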


2017-08-02 16:22 GMT+03:00 Rohan Shetty <rohan.she...@gmail.com>:

> Does this mean we have to listen to events of server nodes going up and
> down and activate and deactivate the cluster?
>
> On Wed, Aug 2, 2017 at 3:18 PM, Michael Cherkasov <
> michael.cherka...@gmail.com> wrote:
>
>> When all nodes are up, so that in the latest topology snapshot the server
>> count equals the number of servers you started, the cluster can be
>> activated.
>>
>> 2017-08-02 0:51 GMT+03:00 Raymond Wilson <raymond_wil...@trimble.com>:
>>
>>> Hi Mikhail,
>>>
>>>
>>>
>>> Thanks for the clarifications.
>>>
>>>
>>>
>>> Yes, I knew setting active was only required when using the persistence
>>> layer, which is the topic of the question :)
>>>
>>>
>>>
>>> I was interested if there were best practices or approaches for
>>> determining when the grid had fully initialized. I realise this is somewhat
>>> application specific, but was looking for an established pattern before I
>>> invented one myself.
>>>
>>>
>>>
>>> In my case I have an affinity function that responds to topology changes
>>> which intrinsically would know when it had a ‘quorum’. Is this a typical
>>> place for setting active to true?
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Raymond.
>>>
>>>
>>>
>>> *From:* Mikhail Cherkasov [mailto:mcherka...@gridgain.com]
>>> *Sent:* Tuesday, August 1, 2017 11:59 PM
>>> *To:* user@ignite.apache.org
>>> *Subject:* Re: Best practise for setting Ignite.Active to true when
>>> using persistence layer in Ignite 2.1
>>>
>>>
>>>
>>> Hi Raymond,
>>>
>>>
>>>
>>> The Ignite cluster is inactive on startup only if persistence is enabled.
>>> This is done to avoid unnecessary partition exchanges between nodes:
>>>
>>> for example, if you have 3 nodes and 1 backup enabled and you start only
>>> 2 of the 3 nodes, they will treat the third node as dead and start the
>>> process of restoring data from backups and rebalancing data to spread it
>>> among the 2 remaining nodes; when you add the missing third node back,
>>> the process will be repeated.
>>>
>>> So we start the cluster as inactive. When all nodes are started and
>>> ready, so that no cluster topology changes are expected, you should
>>> activate the cluster.
>>>
>>> Also, when you shut down the cluster, some nodes can still accept data
>>> update requests that other nodes won't see; so, to determine which node
>>> has the latest data, we need to start all nodes first and only then
>>> activate the cluster.
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Mikhail.
>>>
>>>
>>>
>>> On Tue, Aug 1, 2017 at 5:05 AM, Raymond Wilson <
>>> raymond_wil...@trimble.com> wrote:
>>>
>>> Hi,
>>>
>>>
>>>
>>> I am experimenting with a POC looking into using the Ignite persistence
>>> layer.
>>>
>>>
>>>
>>> One aspect of this is setting the grid to be ‘Active’ after all cache
>>> grid nodes have instantiated.
>>>
>>>
>>>
>>> In practical terms, what is the best practice for ensuring the cluster
>>> is running and in a good state to be set to active? What is the downside of
>>> setting active to true before all grid nodes are running?
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Raymond.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>>
>>> Thanks,
>>>
>>> Mikhail.
>>>
>>
>>
>

