[GitHub] ignite pull request #4368: IGNITE-7165 Rebalance control version holds by ex...
Github user Mmuzaf closed the pull request at: https://github.com/apache/ignite/pull/4368 ---
[GitHub] ignite pull request #4112: IGNITE-7165: check affinity changed
Github user Mmuzaf closed the pull request at: https://github.com/apache/ignite/pull/4112 ---
[GitHub] ignite pull request #4099: IGNITE-7165: check aff assignments
Github user Mmuzaf closed the pull request at: https://github.com/apache/ignite/pull/4099 ---
[GitHub] ignite pull request #4048: IGNITE-7165: do not cancel rebalance at client jo...
Github user Mmuzaf closed the pull request at: https://github.com/apache/ignite/pull/4048 ---
Re: Apache Ignite 2.7: scope, time and release manager
Hello, Denis. Actually, I'm on vacation till July 31. After that I will put all my efforts into managing the Ignite release. Can we wait until Tuesday? On Thu, 26/07/2018 at 17:01 -0700, Denis Magda wrote: > Nikolay, > > Could you please prepare Ignite 2.7 page similar to the following? > https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.6 > > We need to start tracking the progress and most essential capabilities that > will get into the release. > > -- > Denis > On Tue, Jul 24, 2018 at 1:39 AM Vyacheslav Daradur > wrote: > > Hi, Igniters! > > > > The end of September for Ignite 2.7 release sounds good to me. > > > > I'm working on Service Grid and going to deliver the following tasks: > > - Use discovery messages for service deployment [1] > > - Collect service deployment results asynchronously on coordinator [2] > > - Propagate service deployment results from assigned nodes to initiator [3] > > - Handle topology changes during service deployment [4] > > - Propagate deployed services to joining nodes [5] > > - Replace service instance parameter with a class name in > > ServiceConfiguration [6] (planned to be implemented by Amelchev > > Nikita) > > > > [1] https://issues.apache.org/jira/browse/IGNITE-8361 > > [2] https://issues.apache.org/jira/browse/IGNITE-8362 > > [3] https://issues.apache.org/jira/browse/IGNITE-3392 > > [4] https://issues.apache.org/jira/browse/IGNITE-8363 > > [5] https://issues.apache.org/jira/browse/IGNITE-8364 > > [6] https://issues.apache.org/jira/browse/IGNITE-8366 > > On Mon, Jul 23, 2018 at 4:02 PM Dmitry Pavlov wrote: > > > > > > Hi Denis, Nikolay, > > > > > > I've issued a number of tickets to update dependency versions. I would > > > like all these updates to be available within 2.7. > > > > > > Sincerely, > > > Dmitriy Pavlov > > > > > > On Sat, Jul 21, 2018 at 3:28, Pavel Petroshenko : > > > > > > > Hi Denis, Nikolay, > > > > > > > > The proposed 2.7 release timing sounds reasonable to me. 
> > > > Python [1], PHP [2], and Node.js [3] thin clients should take the train. > > > > > > > > p. > > > > > > > > [1] https://jira.apache.org/jira/browse/IGNITE-7782 > > > > [2] https://jira.apache.org/jira/browse/IGNITE-7783 > > > > [3] https://jira.apache.org/jira/browse/IGNITE- > > > > > > > > > > > > On Fri, Jul 20, 2018 at 2:35 PM, Denis Magda > > > > wrote: > > > > > > > > > Igniters, > > > > > > > > > > Let's agree on the time and the scope of 2.7. As for the release > > > > > manager, > > > > > we had a conversation with Nikolay Izhikov and he decided to try the > > > > > role > > > > > out. Thanks, Nikolay! > > > > > > > > > > Nikolay, we need to prepare a page like that [1] once the release > > > > > terms > > > > are > > > > > defined. > > > > > > > > > > I propose us to roll Ignite 2.7 at the end of September. Folks who are > > > > > working on SQL, core, C++/NET, thin clients, ML, service grid > > > > > optimizations, data structures please enlist what you're ready to > > > > deliver. > > > > > > > > > > > > > > > [1] > > > > > https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.6
Re: Apache Flink Sink + Ignite: Ouch! Argument is invalid
Hi Andrew, As we discussed I have updated the PR, please take a look. If it looks good then I can go ahead and merge the changes. PR : https://github.com/apache/ignite/pull/4398 Review : https://reviews.ignite.apache.org/ignite/review/IGNT-CR-695 Regards, Saikat On Thu, Jul 26, 2018 at 11:25 PM, Saikat Maitra wrote: > Hi Ray, > > We will need to use igniteSink.setAllowOverwrite(true) flag so that > latest computed values are stored in cache. Also we need not call > igniteSink.open(new Configuration) > > Please take a look into the below modified wordCount sample. > > https://github.com/samaitra/flink-fn/blob/master/flink-fn/ > src/main/scala/com/samaitra/WordCount.scala > > Please review and share feedback > > Regards > Saikat > > On Thu, Jul 26, 2018 at 1:16 AM, Ray wrote: > >> Hi Saikat, >> >> The results flink calculated before sending to sink is correct, but the >> results in Ignite is not correct. >> You can remove the sink and print the stream content to validate my point. >> >> >> >> -- >> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/ >> > >
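For readers following the thread: the effect of `setAllowOverwrite(true)` can be illustrated with a small self-contained model (a toy dictionary standing in for the cache — this is not the actual IgniteSink/IgniteDataStreamer code). With overwrite disabled, entries streamed earlier are never replaced, so recomputed aggregates such as updated word counts keep their stale values, which matches the symptom Ray reported.

```python
# Toy model of a streamer's allow-overwrite flag. NOT the Ignite API --
# just an illustration of why stale counts survive when the flag is off:
# existing keys are skipped instead of being updated.

def stream_batch(cache: dict, batch: dict, allow_overwrite: bool) -> None:
    """Write a batch of (key, value) pairs into the cache."""
    for key, value in batch.items():
        if allow_overwrite or key not in cache:
            cache[key] = value

cache = {}
stream_batch(cache, {"ignite": 1}, allow_overwrite=False)
stream_batch(cache, {"ignite": 2}, allow_overwrite=False)  # skipped: key exists
assert cache["ignite"] == 1   # stale count kept

stream_batch(cache, {"ignite": 2}, allow_overwrite=True)   # latest value wins
assert cache["ignite"] == 2
```

This is the behavior behind the advice above: for continuously updated aggregates, the sink needs `igniteSink.setAllowOverwrite(true)` so that later values replace earlier ones.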
Re: Apache Flink Sink + Ignite: Ouch! Argument is invalid
Hi Ray, We will need to use igniteSink.setAllowOverwrite(true) flag so that latest computed values are stored in cache. Also we need not call igniteSink.open( new Configuration) Please take a look into the below modified wordCount sample. https://github.com/samaitra/flink-fn/blob/master/flink-fn/src/main/scala/com/samaitra/WordCount.scala Please review and share feedback Regards Saikat On Thu, Jul 26, 2018 at 1:16 AM, Ray wrote: > Hi Saikat, > > The results flink calculated before sending to sink is correct, but the > results in Ignite is not correct. > You can remove the sink and print the stream content to validate my point. > > > > -- > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/ >
Re: Apache Ignite 2.7: scope, time and release manager
Nikolay, Could you please prepare Ignite 2.7 page similar to the following? https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.6 We need to start tracking the progress and most essential capabilities that will get into the release. -- Denis On Tue, Jul 24, 2018 at 1:39 AM Vyacheslav Daradur wrote: > Hi, Igniters! > > The end of September for Ignite 2.7 release sounds good to me. > > I'm working on Service Grid and going to deliver the following tasks: > - Use discovery messages for service deployment [1] > - Collect service deployment results asynchronously on coordinator [2] > - Propagate service deployment results from assigned nodes to initiator [3] > - Handle topology changes during service deployment [4] > - Propagate deployed services to joining nodes [5] > - Replace service instance parameter with a class name in > ServiceConfiguration [6] (planned to be implemented by Amelchev > Nikita) > > [1] https://issues.apache.org/jira/browse/IGNITE-8361 > [2] https://issues.apache.org/jira/browse/IGNITE-8362 > [3] https://issues.apache.org/jira/browse/IGNITE-3392 > [4] https://issues.apache.org/jira/browse/IGNITE-8363 > [5] https://issues.apache.org/jira/browse/IGNITE-8364 > [6] https://issues.apache.org/jira/browse/IGNITE-8366 > On Mon, Jul 23, 2018 at 4:02 PM Dmitry Pavlov > wrote: > > > > Hi Denis, Nikolay, > > > > I've issued a number of tickets to update dependency versions. I would > > like all these updates to be available within 2.7. > > > > Sincerely, > > Dmitriy Pavlov > > > > On Sat, Jul 21, 2018 at 3:28, Pavel Petroshenko : > > > > > Hi Denis, Nikolay, > > > > > > The proposed 2.7 release timing sounds reasonable to me. > > > Python [1], PHP [2], and Node.js [3] thin clients should take the > train. > > > > > > p. 
> > > > > > [1] https://jira.apache.org/jira/browse/IGNITE-7782 > > > [2] https://jira.apache.org/jira/browse/IGNITE-7783 > > > [3] https://jira.apache.org/jira/browse/IGNITE- > > > > > > > > > On Fri, Jul 20, 2018 at 2:35 PM, Denis Magda > wrote: > > > > > > > Igniters, > > > > > > > > Let's agree on the time and the scope of 2.7. As for the release > manager, > > > > we had a conversation with Nikolay Izhikov and he decided to try the > role > > > > out. Thanks, Nikolay! > > > > > > > > Nikolay, we need to prepare a page like that [1] once the release > terms > > > are > > > > defined. > > > > > > > > I propose us to roll Ignite 2.7 at the end of September. Folks who > are > > > > working on SQL, core, C++/NET, thin clients, ML, service grid > > > > optimizations, data structures please enlist what you're ready to > > > deliver. > > > > > > > > > > > > [1] > https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.6 > > > > > > > > > > > -- > Best Regards, Vyacheslav D. >
[GitHub] ignite pull request #4441: IGNITE-9100 Split Basic and Cache TC configuratio...
GitHub user EdShangGG opened a pull request: https://github.com/apache/ignite/pull/4441 IGNITE-9100 Split Basic and Cache TC configurations on pure in-memory… … and with disk usage one You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-9100 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4441.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4441 commit 9bf7833d3c0abd761b21347b0e813ca41ce307ad Author: Eduard Shangareev Date: 2018-07-26T22:57:57Z IGNITE-9100 Split Basic and Cache TC configurations on pure in-memory and with disk usage one ---
[jira] [Created] (IGNITE-9100) Split Basic and Cache TC configurations on pure in-memory and with disk usage one
Eduard Shangareev created IGNITE-9100: - Summary: Split Basic and Cache TC configurations on pure in-memory and with disk usage one Key: IGNITE-9100 URL: https://issues.apache.org/jira/browse/IGNITE-9100 Project: Ignite Issue Type: Improvement Reporter: Eduard Shangareev Assignee: Eduard Shangareev -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Re: Deprecating LOCAL cache
Guys, I just want to make sure we are all on the same page. The main use case for LOCAL caches is to have a local hash map queryable with SQL and automatically persisted to a 3rd party DB. I want to discourage people from saying "nobody needs some feature". None of the people in this discussion are users of any features - we are all developers of the features. Instead of guessing whether to deprecate something or not, I would actually see if it is even worth a discussion. How much effort is required to fix the bug found in the LOCAL cache? D. On Thu, Jul 26, 2018 at 12:19 PM, Dmitry Pavlov wrote: > Hi Alexey, > > There is nothing to be sorry about :) Community appreciates an alternative > vision, this allows us to make decisions as informed as possible. > > Thank you for finding this fact, it is very interesting. > > I'm not sure all these examples were prepared by experienced Ignite users. > So the idea of deprecation may have one more argument. Deprecation will help us > to inform users about LOCAL cache: probably local cache is not what they > need. > > Sincerely, > Dmitriy Pavlov > > On Thu, Jul 26, 2018 at 16:57, Alexey Zinoviev : > > > Sorry, guys, I'll put in my 1 cent > > > > I like this idea "Implement LOCAL caches as PARTITIONED caches over > the > > local node." > > It makes sense for examples/testing in pseudo-distributed mode and so forth. > > > > But I think that the deprecation based on user-list mentions is a wrong > > way. Please look here > > https://github.com/search?q=%22CacheMode.LOCAL%22+%26+ignite&type=Code > > There are a lot of hello world examples with LOCAL mode. > > > > And of course, we can ask about that on user-list, not here, to vote for > > the deprecation like this. > > > > 2018-07-26 11:23 GMT+03:00 Vladimir Ozerov : > > > > > I meant LOCAL + non-LOCAL transactions of course. 
> > > > > > On Wed, Jul 25, 2018 at 10:42 PM Dmitriy Setrakyan < > > dsetrak...@apache.org> > > > wrote: > > > > > > > Vladimir, > > > > > > > > Are you suggesting that a user cannot span more than one local cache > > in a > > > > cross cache LOCAL transactions. This is extremely surprising to me, > as > > it > > > > would require almost no effort to support it. As far as mixing the > > local > > > > caches with distributed caches, then I agree, cross-cache > transactions > > do > > > > not make sense. > > > > > > > > I am not sure why deprecating local caches has become a pressing > > issue. I > > > > can see that there are a few bugs, but why not just fix them and move > > on? > > > > Can someone explain why supporting LOCAL caches is such a burden? > > > > > > > > Having said that, I am not completely opposed to deprecating LOCAL > > > caches. > > > > I just want to know why. > > > > > > > > D. > > > > > > > > On Wed, Jul 25, 2018 at 10:55 AM, Vladimir Ozerov < > > voze...@gridgain.com> > > > > wrote: > > > > > > > > > Dima, > > > > > > > > > > LOCAL cache adds very little value to the product. It doesn't > support > > > > > cross-cache transactions, consumes a lot of memory, much slower > than > > > any > > > > > widely-used concurrent hash map. Let's go the same way as Java - > mark > > > > LOCAL > > > > > cache as "deprecated for removal", and then remove it in 3.0. > > > > > > > > > > On Wed, Jul 25, 2018 at 12:10 PM Dmitrii Ryabov < > > somefire...@gmail.com > > > > > > > > > wrote: > > > > > > > > > > > +1 to make LOCAL as filtered PARTITIONED cache. I think it would > be > > > > much > > > > > > easier and faster than fixing all bugs. > > > > > > > > > > > > 2018-07-25 11:51 GMT+03:00 Dmitriy Setrakyan < > > dsetrak...@apache.org > > > >: > > > > > > > > > > > > > I would stay away from deprecating such huge pieces as a whole > > > LOCAL > > > > > > cache. 
> > > > > > > In retrospect, we should probably not even have LOCAL caches, > but > > > > now I > > > > > > am > > > > > > > certain that it is used by many users. > > > > > > > > > > > > > > I would do one of the following, whichever one is easier: > > > > > > > > > > > > > >- Fix the issues found with LOCAL caches, including > > persistence > > > > > > support > > > > > > >- Implement LOCAL caches as PARTITIONED caches over the > local > > > > node. > > > > > In > > > > > > >this case, we would have to hide any distribution-related > > config > > > > > from > > > > > > >users, like affinity function, for example. > > > > > > > > > > > > > > D. > > > > > > > > > > > > > > On Wed, Jul 25, 2018 at 9:05 AM, Valentin Kulichenko < > > > > > > > valentin.kuliche...@gmail.com> wrote: > > > > > > > > > > > > > > > It sounds like the main drawback of LOCAL cache is that it's > > > > > > implemented > > > > > > > > separately and therefore has to be maintained separately. If > > > that's > > > > > the > > > > > > > > only issue, why not keep LOCAL cache mode on public API, but > > > > > implement > > > > > > it > > > > > > > > as a PARTITIONED cache with a node filter forcefully set? > > That's > > > > > > similar > > > > > > > to > > >
Re: Thin Client lib: Python
Dmitriy, I would stop using the word "hashcode" in this context. Hash code has a special meaning in Ignite and is used to determine key-to-node affinity. I agree that passing "cache_name" is the best option. I have no idea when "cache_name" is not going to be known and do not think we need to support this case at all. My suggestion is to drop the cache_id use case altogether. Also I am really surprised that we do not have a cache abstraction in python and need to pass cache name and connection into every method. To be honest, this smells really bad that such a popular modern language like Python forces us to have such a clumsy API. Can you please take a look at the Redis python clients and see if there is a better way to support this? https://redis.io/clients#python D. On Thu, Jul 26, 2018 at 9:51 AM, Dmitry Melnichuk < dmitry.melnic...@nobitlost.com> wrote: > Hi, Ilya! > > I considered this option. Indeed, the code would look cleaner if only one > kind of identifier (preferably the human-readable name) was used. But there > can be a hypothetical situation, when the user is left with hash code only. > (For example, obtained from some other API.) It would be sad to have an > identifier and not be able to use it. > > Now I really think about using hash codes and names interchangeably, so > both > > ``` > cache_put(conn, 'my-cache', value=1, key='a') > ``` > > and > > > ``` > cache_put(conn, my_hash_code, value=1, key='a') > ``` > > will be allowed. > > This will be a minor complication on my side, and quite reasonable one. > > > On 07/26/2018 10:44 PM, Ilya Kasnacheev wrote: > >> Hello! >> >> Why not use cache name as string here, instead of cache_id()? >> >> cache_put(conn, 'my-cache', value=1, key='a') >> >> Regards, >> >>
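The name-or-id dispatch Dmitry describes can be sketched in a few lines. Assumptions flagged up front: the function names below are illustrative (not the actual pyignite API), and the numeric id is assumed to be derived from the cache name Java-style via `String.hashCode`, which is the convention Ignite uses for cache ids.

```python
# Hypothetical helper showing how a cache name (str) and a cache id (int)
# could be accepted interchangeably, as proposed in the thread.

def java_string_hash(s: str) -> int:
    """Java String.hashCode(): h = 31*h + ord(ch), with 32-bit signed overflow."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x100000000 if h >= 0x80000000 else h

def cache_id(cache) -> int:
    """Resolve either spelling to the numeric id the binary protocol needs."""
    if isinstance(cache, int):
        return cache          # already a hash code / id
    return java_string_hash(cache)

# Both spellings resolve to the same id:
assert cache_id("abc") == 96354                  # Java "abc".hashCode()
assert cache_id(cache_id("my-cache")) == cache_id("my-cache")
```

With such a helper, `cache_put(conn, 'my-cache', key='a', value=1)` and `cache_put(conn, my_hash_code, key='a', value=1)` can share one code path, as Dmitry Melnichuk suggests.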
Re: SSLParameters for SslContextFactory
Hello! I really dislike the fact that SSLParameters has 6 setter methods, and we only support one of them, when two more clash with SSL settings which are set elsewhere. I.e. what happens if I pass SSLParameters with setAlgorithmConstraints() or setProtocols() called on them? Is it possible that we will just have an array of allowed ciphers in configuration? Regards, -- Ilya Kasnacheev 2018-07-26 20:16 GMT+03:00 Michael Cherkasov : > Hi all, > > I want to add SSLParameters for SslContextFactory. > > Right now there's no way to specify a particular set of cipher suites that > you want to use. > there's even old request to add this functionality: > https://issues.apache.org/jira/browse/IGNITE-6167 > even with current API you can achieve this, but this requires a lot of > boilerplate code, to avoid this I added SSLParameters, that would be > applied to all SSL connections, please review my pull request: > https://github.com/apache/ignite/pull/4440 > > I think this patch covers 6167, so I want to push it in context of this > ticket. > > Thanks, > Mike. >
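For Ilya's suggestion of "an array of allowed ciphers in configuration", a Spring XML sketch of what that could look like is below. Note that the `cipherSuites` property is hypothetical at the time of this thread — it shows the shape of the proposal, not an existing SslContextFactory setter.

```xml
<bean class="org.apache.ignite.ssl.SslContextFactory">
  <property name="keyStoreFilePath" value="keystore/node.jks"/>
  <property name="keyStorePassword" value="123456"/>
  <!-- Hypothetical property: an explicit whitelist of cipher suites,
       instead of passing a whole SSLParameters object. -->
  <property name="cipherSuites">
    <list>
      <value>TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256</value>
      <value>TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384</value>
    </list>
  </property>
</bean>
```

Such a property would avoid the clash Ilya raises: a plain cipher-suite list cannot contradict protocol or algorithm-constraint settings configured elsewhere, whereas a full SSLParameters object can.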
Re: Service grid redesign
Anton, I believe there are cases when people want to have node singleton services that are deployed to clients, as well as to all other nodes. And currently clients can execute compute jobs issued by other clients, and services are not very different from them. Clients may store data and run code. We shouldn't consider them as "end-user nodes". Only thin clients should be run by end users. But I agree that we shouldn't encourage people to use services this way. So, if it doesn't complicate the implementation too much, then a warning in the log will be enough, I think. Denis On Thu, Jul 26, 2018 at 19:56, Anton Vinogradov : > Folks, > > I don't think that it's a good idea to host services on client nodes. > Client topology is not stable enough, and I don't see how to guarantee > availability of such services. > We'll have huge problems to guarantee availability in case of blinking > clients. > > Also, taking into account that an Ignite cluster can have more than one user, > it looks odd that one user is able to start a service on another user's hardware > (bitcoin miners may disagree with me). > > In case you want to use nodes only to host services - all you need is to > filter them from cache affinity functions. > > I propose to implement Services pretty close to the Cache implementation. > It's a bad idea to reinvent the wheel there. > Let's just analyse Cache's code and do the same for services with the same > guarantees. > > On Wed, Jul 25, 2018 at 21:58, Vyacheslav Daradur : > > > Denis, long service initialization isn't a big problem for us. > > > > The problem is hung initialization, which means the service deployment > > will never complete. > > On Wed, Jul 25, 2018 at 8:08 PM Denis Mekhanikov > > wrote: > > > > > > Vyacheslav, > > > > > > I think that this timeout shouldn't be mandatory, and it should be > > > disabled by default. > > > We should be ready for long service initialization. So, it shouldn't be > > > done in any crucial threads like discovery or exchange. 
> > > > > > Denis > > > On Wed, Jul 25, 2018 at 15:59, Vyacheslav Daradur : > > > > FYI, I've filed the tickets: > > > > https://issues.apache.org/jira/browse/IGNITE-9075 > > > > https://issues.apache.org/jira/browse/IGNITE-9076 > > > > On Wed, Jul 25, 2018 at 12:54 PM Vyacheslav Daradur < > > daradu...@gmail.com> > > > > wrote: > > > > > > > > > > I think such timeout should be determined on per-service level. > > Can we > > > > make > > > > > > it part of the service configuration, or pass it into deploy > > method? > > > > > > > > > > Agree, per ServiceConfiguration level is a more flexible solution. > > > > > > > > > > > This is a great question. Will the service be able to continue > > > > operating > > > > > > after the cache is destroyed? If not, I would undeploy it > > > > automatically. If > > > > > > yes, I would keep it. Please make sure that you are carefully > > printing > > > > out > > > > > > informative logs in either case, to make sure that there is no > > magic > > > > > > happening that is hidden from users. > > > > > > > > > > A service will be able to work till the topology changes; after that we > > > > > have to recalculate assignments, and at this moment we won't determine > > > > > suitable nodes. > > > > > > > > > > I will file new tickets to work on these questions and to implement > > > > > solutions in the second iteration if nobody minds. > > > > > Anyway, it will be done by the release. > > > > > On Wed, Jul 25, 2018 at 12:08 PM Dmitriy Setrakyan > > > > > wrote: > > > > > > > > > > > > On Tue, Jul 24, 2018 at 9:14 PM, Vyacheslav Daradur < > > > > daradu...@gmail.com> > > > > > > wrote: > > > > > > > > > > > > > Igniters, please help me to clarify the following questions: > > > > > > > > > > > > > > 1). According to the issue [1] we should propagate services > > > > deployment > > > > > > > results to an initiator, which means we should wait for > > > > > > > Service#init method completion. 
> > > > > > > How should we handle Service#init method hangup? > > > > > > > I propose to introduce some kind of > > > > > > > IgniteSystemProperties#IGNITE_SERVICE_INIT_TIMEOUT to interrupt > > long > > > > > > > initialization. > > > > > > > > > > > > > > > > > > > I think such timeout should be determined on per-service level. > > Can we > > > > make > > > > > > it part of the service configuration, or pass it into deploy > > method? > > > > > > > > > > > > > > > > > > > 2) Should we automatically undeploy services, which had been > > deployed > > > > > > > using #deployKeyAffinitySingleton, on destroying of related > > > > IgniteCache? > > > > > > > > > > > > > > > > > > > > This is a great question. Will the service be able to continue > > > > operating > > > > > > after the cache is destroyed? If not, I would undeploy it > > > > automatically. If > > > > > > yes, I would keep it. Please make sure that you are carefully > > printing > > > > out > > > > > > informative logs in either case, to make sure
Re: Thin Client lib: Python
Thanks Dmitry. I'll look at the docs. On Wed, Jul 25, 2018 at 8:11 PM, Dmitry Melnichuk < dmitry.melnic...@nobitlost.com> wrote: > Hi Prachi! > > At the moment I already have my documents (temporarily) published at RTD. > This is how they look as a whole: > > https://apache-ignite-binary-protocol-client.readthedocs.io/ > > I already have a separate section on examples: > > https://apache-ignite-binary-protocol-client.readthedocs.io/en/latest/examples.html > > My build process is also documented here > > https://apache-ignite-binary-protocol-client.readthedocs.io/en/latest/readme.html#documentation > > and there > > https://github.com/nobitlost/ignite/blob/ignite-7782/modules/platforms/python/README.md > > These instructions work for me, and I have at least one report of > successful documentation build from elsewhere. And RTD is using basically > the same way to put my docs online. > > My way of document building is pretty common for Python package > developers, but if it needs some modifications to fit into the Ignite process, > please let me know. > > All the document sources (both autodoc'ed and hand-crafted) are available at > > https://github.com/nobitlost/ignite/tree/ignite-7782/modules/platforms/python/docs > > I will be glad to answer any questions. > > > On 07/26/2018 06:25 AM, Prachi Garg wrote: >> Hi Dmitry M, >> >> I am responsible for managing the Ignite documentation. At some point we >> will merge the python documentation on github into the main Ignite >> documentation. Currently, I am trying to restructure our thin client >> documentation in a way that it (thin client documentation) is consistent >> for all supported languages - Java, Node.js, Python etc. >> >> I looked at the python document on github. Under the :mod:`~pyignite.api` >> section, I see all the components - cache config, key value, sql, binary >> types - but there are no code snippets. Is it possible for you to describe >> these components with code examples? 
>> >> See for example - >> https://apacheignite.readme.io/docs/java-thin-client-api#sec >> tion-sql-queries >> where the SQL Queries section explains, with example, how the thin client >> SQL API can be used. >> >> Similarly, please see - >> https://apacheignite.readme.io/docs/java-thin-client-security >> https://apacheignite.readme.io/docs/java-thin-client-high-availability >> https://apacheignite.readme.io/docs/java-thin-client-api >> >> Thanks, >> -Prachi >> >
[jira] [Created] (IGNITE-9099) IgniteCache java doc does not cover all possible exceptions
Mikhail Cherkasov created IGNITE-9099: - Summary: IgniteCache java doc does not cover all possible exceptions Key: IGNITE-9099 URL: https://issues.apache.org/jira/browse/IGNITE-9099 Project: Ignite Issue Type: Bug Reporter: Mikhail Cherkasov The IgniteCache javadoc does not cover all possible exceptions. For example, if you try to close a cache after node stop, there will be the following exception: org.apache.ignite.IgniteException: Failed to execute dynamic cache change request, node is stopping. at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:986) at org.apache.ignite.internal.util.future.IgniteFutureImpl.convertException(IgniteFutureImpl.java:168) at org.apache.ignite.internal.util.future.IgniteFutureImpl.get(IgniteFutureImpl.java:137) at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.close(GatewayProtectedCacheProxy.java:1346) However, the API for the close method doesn't mention any exception at all.
SSLParameters for SslContextFactory
Hi all, I want to add SSLParameters for SslContextFactory. Right now there's no way to specify a particular set of cipher suites that you want to use. There's even an old request to add this functionality: https://issues.apache.org/jira/browse/IGNITE-6167 Even with the current API you can achieve this, but it requires a lot of boilerplate code. To avoid this I added SSLParameters, which would be applied to all SSL connections; please review my pull request: https://github.com/apache/ignite/pull/4440 I think this patch covers 6167, so I want to push it in the context of this ticket. Thanks, Mike.
[GitHub] ignite pull request #4440: ssl parameters
GitHub user mcherkasov opened a pull request: https://github.com/apache/ignite/pull/4440 ssl parameters You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite master-ssl-params Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4440.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4440 commit 0caa8400f9f939bc020f43d2eda604f5e899cca6 Author: mcherkasov Date: 2018-07-23T14:47:42Z Added ssl parameters for ssl context configuration. commit 809bd7f3343d8f22458b982e9cbaff6f7932ea52 Author: mcherkasov Date: 2018-07-26T15:28:43Z Added java doc. Added test. Fixed code style. commit 5d919872c2e0579dbce045edb1668605ccb35247 Author: mcherkasov Date: 2018-07-26T17:03:49Z Fixed code style. ---
Re: Service grid redesign
Folks, I don't think that it's a good idea to host services on client nodes. Client topology is not stable enough, and I don't see how to guarantee availability of such services. We'll have huge problems to guarantee availability in case of blinking clients. Also, taking into account that an Ignite cluster can have more than one user, it looks odd that one user is able to start a service on another user's hardware (bitcoin miners may disagree with me). In case you want to use nodes only to host services - all you need is to filter them from cache affinity functions. I propose to implement Services pretty close to the Cache implementation. It's a bad idea to reinvent the wheel there. Let's just analyse Cache's code and do the same for services with the same guarantees. On Wed, Jul 25, 2018 at 21:58, Vyacheslav Daradur : > Denis, long service initialization isn't a big problem for us. > > The problem is hung initialization, which means the service deployment > will never complete. > On Wed, Jul 25, 2018 at 8:08 PM Denis Mekhanikov > wrote: > > > > Vyacheslav, > > > > I think that this timeout shouldn't be mandatory, and it should be > > disabled by default. > > We should be ready for long service initialization. So, it shouldn't be > > done in any crucial threads like discovery or exchange. > > > > Denis > > > > On Wed, Jul 25, 2018 at 15:59, Vyacheslav Daradur : > > > > > FYI, I've filed the tickets: > > > https://issues.apache.org/jira/browse/IGNITE-9075 > > > https://issues.apache.org/jira/browse/IGNITE-9076 > > > On Wed, Jul 25, 2018 at 12:54 PM Vyacheslav Daradur < daradu...@gmail.com> > > > wrote: > > > > > > > > > I think such timeout should be determined on per-service level. > Can we > > > make > > > > > it part of the service configuration, or pass it into deploy > method? > > > > > > > > Agree, per ServiceConfiguration level is a more flexible solution. > > > > > > > > > This is a great question. 
Will the service be able to continue > > > operating > > > > > after the cache is destroyed? If not, I would undeploy it > > > automatically. If > > > > > yes, I would keep it. Please make sure that you are carefully > printing > > > out > > > > > informative logs in either case, to make sure that there is no > magic > > > > > happening that is hidden from users. > > > > > > > > A service will be able to work till topology's change after that we > > > > have to recalculate assignments and at this moment we won't determine > > > > suitable nodes. > > > > > > > > I will fill new tickets to work on these questions and to implement > > > > solutions in the second iteration if nobody doesn't mind. > > > > Anyway, it will have been done to a release. > > > > On Wed, Jul 25, 2018 at 12:08 PM Dmitriy Setrakyan > > > > wrote: > > > > > > > > > > On Tue, Jul 24, 2018 at 9:14 PM, Vyacheslav Daradur < > > > daradu...@gmail.com> > > > > > wrote: > > > > > > > > > > > Igniters, please help me to clarify the following questions: > > > > > > > > > > > > 1). According to the issue [1] we should propagate services > > > deployment > > > > > > results to an initiator, that means we should wait for wor > > > > > > Service#init method completion. > > > > > > How should we handle Service#init method hangup? > > > > > > I propose to introduce some kind of > > > > > > IgniteSystemProperties#IGNITE_SERVICE_INIT_TIMEOUT to interrupt > long > > > > > > initialization. > > > > > > > > > > > > > > > > I think such timeout should be determined on per-service level. > Can we > > > make > > > > > it part of the service configuration, or pass it into deploy > method? > > > > > > > > > > > > > > > > 2) Should we automatically undeploy services, which had been > deployed > > > > > > using #deployKeyAffinitySingleton, on destroying of related > > > IgniteCache? > > > > > > > > > > > > > > > > > This is a great question. 
Will the service be able to continue > > > operating > > > > > after the cache is destroyed? If not, I would undeploy it > > > automatically. If > > > > > yes, I would keep it. Please make sure that you are carefully > printing > > > out > > > > > informative logs in either case, to make sure that there is no > magic > > > > > happening that is hidden from users. > > > > > > > > > > > > > > > > Thoughts? > > > > > > > > > > > > [1] https://issues.apache.org/jira/browse/IGNITE-3392 > > > > > > On Tue, Jul 24, 2018 at 3:01 PM Vyacheslav Daradur < > > > daradu...@gmail.com> > > > > > > wrote: > > > > > > > > > > > > > > Got it. > > > > > > > > > > > > > > Yes, we will preserve this behavior. > > > > > > > > > > > > > > Thanks! > > > > > > > On Tue, Jul 24, 2018 at 2:20 PM Dmitriy Setrakyan < > > > dsetrak...@apache.org> > > > > > > wrote: > > > > > > > > > > > > > > > > By default the client nodes should be excluded form service > > > > > > deployment. The > > > > > > > > only way to include clients is to explicitly specify them > > > through node > > > > > > > > filter. This is how services are deployed today and we should > > > preserve
[GitHub] ignite pull request #4422: IGNITE-9046 Actualize dependency versions for Cas...
Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/4422 ---
[jira] [Created] (IGNITE-9098) IgniteCacheClientReconnectTest.testClientReconnectOnExchangeHistoryExhaustion is flaky after IGNITE-8998
Ilya Kasnacheev created IGNITE-9098: --- Summary: IgniteCacheClientReconnectTest.testClientReconnectOnExchangeHistoryExhaustion is flaky after IGNITE-8998 Key: IGNITE-9098 URL: https://issues.apache.org/jira/browse/IGNITE-9098 Project: Ignite Issue Type: Bug Affects Versions: 2.7 Reporter: Ilya Kasnacheev Assignee: Anton Kalashnikov Before 78e0bb7efbc53e969c4c4918b6c6272c7b98dc36 test always passed, but after this commit test fails sporadically, such as in https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_ClientNodes&branch=pull%2F4420%2Fhead&tab=buildTypeStatusDiv Not part of any released versions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Re: Deprecating LOCAL cache
Hi Alexey, There is nothing to be sorry about :) The community appreciates an alternative vision; it allows us to make decisions that are as informed as possible. Thank you for finding this fact, it is very interesting. I'm not sure all these examples were prepared by experienced Ignite users. So the idea of deprecation may have one more argument. Deprecation will help us to inform users about LOCAL cache: probably a local cache is not what they need. Sincerely, Dmitriy Pavlov Thu, Jul 26, 2018 at 16:57, Alexey Zinoviev : > Sorry, guys, I'll put in my 1 cent > > I like this idea: "Implement LOCAL caches as PARTITIONED caches over the > local node." > It makes sense for examples/testing in pseudo-distributed mode and so on. > > But I think that deprecation based on user-list mentions is the wrong > way. Please look here > https://github.com/search?q=%22CacheMode.LOCAL%22+%26+ignite&type=Code > There are a lot of hello-world examples with LOCAL mode. > > And of course, we can ask about that on the user-list, not here, to vote for > the deprecation like this. > > 2018-07-26 11:23 GMT+03:00 Vladimir Ozerov : > > > I meant LOCAL + non-LOCAL transactions of course. > > > > On Wed, Jul 25, 2018 at 10:42 PM Dmitriy Setrakyan < dsetrak...@apache.org> > > wrote: > > > Vladimir, > > > > > > Are you suggesting that a user cannot span more than one local cache > in a > > > cross-cache LOCAL transaction? This is extremely surprising to me, as > it > > > would require almost no effort to support it. As for mixing the > local > > > caches with distributed caches, I agree, cross-cache transactions > do > > > not make sense. > > > > > > I am not sure why deprecating local caches has become a pressing > issue. I > > > can see that there are a few bugs, but why not just fix them and move > on? > > > Can someone explain why supporting LOCAL caches is such a burden? > > > > > > Having said that, I am not completely opposed to deprecating LOCAL > > caches. > > > I just want to know why. 
> > > > > > D. > > > > > > On Wed, Jul 25, 2018 at 10:55 AM, Vladimir Ozerov < > voze...@gridgain.com> > > > wrote: > > > > > > > Dima, > > > > > > > > LOCAL cache adds very little value to the product. It doesn't support > > > > cross-cache transactions, consumes a lot of memory, much slower than > > any > > > > widely-used concurrent hash map. Let's go the same way as Java - mark > > > LOCAL > > > > cache as "deprecated for removal", and then remove it in 3.0. > > > > > > > > On Wed, Jul 25, 2018 at 12:10 PM Dmitrii Ryabov < > somefire...@gmail.com > > > > > > > wrote: > > > > > > > > > +1 to make LOCAL as filtered PARTITIONED cache. I think it would be > > > much > > > > > easier and faster than fixing all bugs. > > > > > > > > > > 2018-07-25 11:51 GMT+03:00 Dmitriy Setrakyan < > dsetrak...@apache.org > > >: > > > > > > > > > > > I would stay away from deprecating such huge pieces as a whole > > LOCAL > > > > > cache. > > > > > > In retrospect, we should probably not even have LOCAL caches, but > > > now I > > > > > am > > > > > > certain that it is used by many users. > > > > > > > > > > > > I would do one of the following, whichever one is easier: > > > > > > > > > > > >- Fix the issues found with LOCAL caches, including > persistence > > > > > support > > > > > >- Implement LOCAL caches as PARTITIONED caches over the local > > > node. > > > > In > > > > > >this case, we would have to hide any distribution-related > config > > > > from > > > > > >users, like affinity function, for example. > > > > > > > > > > > > D. > > > > > > > > > > > > On Wed, Jul 25, 2018 at 9:05 AM, Valentin Kulichenko < > > > > > > valentin.kuliche...@gmail.com> wrote: > > > > > > > > > > > > > It sounds like the main drawback of LOCAL cache is that it's > > > > > implemented > > > > > > > separately and therefore has to be maintained separately. 
If > > that's > > > > the > > > > > > > only issue, why not keep LOCAL cache mode on public API, but > > > > implement > > > > > it > > > > > > > as a PARTITIONED cache with a node filter forcefully set? > That's > > > > > similar > > > > > > to > > > > > > > what we do with REPLICATED caches which are actually > PARTITIONED > > > with > > > > > > > infinite number of backups. > > > > > > > > > > > > > > This way we fix the issues described by Stan and don't have to > > > > > deprecate > > > > > > > anything. > > > > > > > > > > > > > > -Val > > > > > > > > > > > > > > On Wed, Jul 25, 2018 at 12:53 AM Stanislav Lukyanov < > > > > > > > stanlukya...@gmail.com> > > > > > > > wrote: > > > > > > > > > > > > > > > Hi Igniters, > > > > > > > > > > > > > > > > I’d like to start a discussion about the deprecation of the > > LOCAL > > > > > > caches. > > > > > > > > > > > > > > > > LOCAL caches are an edge-case functionality > > > > > > > > I haven’t done any formal analysis, but from my experience > > LOCAL > > > > > caches > > > > > > > > are needed very rarely, if ever. > > > > > > > > I think most usages of LOCAL
[jira] [Created] (IGNITE-9097) CacheContinuousQueryOperationFromCallbackTest fails sporadically in master
Ilya Kasnacheev created IGNITE-9097: --- Summary: CacheContinuousQueryOperationFromCallbackTest fails sporadically in master Key: IGNITE-9097 URL: https://issues.apache.org/jira/browse/IGNITE-9097 Project: Ignite Issue Type: Bug Affects Versions: 2.7 Reporter: Ilya Kasnacheev Such as https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_ContinuousQuery1&branch=pull%2F4420%2Fhead&tab=buildTypeStatusDiv It doesn't fail every time, but 2 out of 10, and the first run usually fails. In 2.6 it doesn't!
Re: iep-6 metrics ticket review
Hi Ilya, We all agreed the change is good, but we'd like to be absolutely sure there is no performance drop. Dmitriy G. was one of the reviewers, so I hope he can provide additional info about the change. Could you please assist here? Sincerely, Dmitriy Pavlov Thu, Jul 26, 2018 at 18:25, Aleksey Kuznetsov : > Hi, Igniters! > > I have the ticket [1] reviewed; it introduces large changes to the cache. > > How can I make sure it causes no performance drop? > > [1] : https://issues.apache.org/jira/browse/IGNITE-6846 > > Wed, Apr 11, 2018 at 3:32, Valentin Kulichenko < > valentin.kuliche...@gmail.com>: > > > This is on my plate, will try to take a look this week. > > > > -Val > > > > On Mon, Apr 9, 2018 at 10:28 AM, Denis Magda wrote: > > > > > Val, > > > > > > As an initial reviewer and reporter, could you have a look and sign the > > > contribution off? > > > > > > -- > > > Denis > > > > > > On Mon, Apr 9, 2018 at 12:56 AM, Aleksey Kuznetsov < > > > alkuznetsov...@gmail.com > > > > wrote: > > > > > > > Hi, Igniters! > > > > > > > > Do we still need this ticket, about invoke metrics : [1] ? > > > > > > > > If yes, then could somebody review it? > > > > > > > > If no, should we close this ticket? > > > > > > > > [1] : https://issues.apache.org/jira/browse/IGNITE-6846 > > > > -- > > > > > > > > *Best Regards,* > > > > > > > > *Kuznetsov Aleksey* > > > > > > > > > >
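Ignite changes are normally validated with dedicated benchmark suites and CI runs, but the before/after comparison being asked for above can be illustrated with a toy throughput harness. All names here are illustrative; this is only a sketch of the idea, not a substitute for a real benchmark with warmup, multiple forks, and statistical analysis.

```python
import time

def ops_per_second(op, duration_sec=0.2):
    """Toy microbenchmark: count how many times `op` runs in duration_sec.

    Running this against the same operation built from the old and the
    new branch gives a rough before/after throughput comparison.
    """
    # Warm the operation up a little so one-time costs don't dominate.
    for _ in range(1000):
        op()
    deadline = time.perf_counter() + duration_sec
    count = 0
    while time.perf_counter() < deadline:
        op()
        count += 1
    return count / duration_sec
```

Comparing two such numbers only makes sense when both runs use the same machine, the same data, and enough repetitions to smooth out noise.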
[jira] [Created] (IGNITE-9096) ContinuousProcessor fails to handle routines with classes loaded by P2P deployment mechanism
Sergey Chugunov created IGNITE-9096: --- Summary: ContinuousProcessor fails to handle routines with classes loaded by P2P deployment mechanism Key: IGNITE-9096 URL: https://issues.apache.org/jira/browse/IGNITE-9096 Project: Ignite Issue Type: Bug Components: zookeeper Affects Versions: 2.6 Reporter: Sergey Chugunov Fix For: 2.7 When server node joins to the cluster where some CQ-routines were deployed with P2P deployment mechanism, it fails with the following exception: {noformat} class org.apache.ignite.IgniteCheckedException: Failed to start manager: GridManagerAdapter [enabled=true, name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager] at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1760) at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1051) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2020) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1725) at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1153) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:651) at org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:882) at org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:845) at org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:833) at org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:799) at org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.testServerJoinWithP2PClassDeployedInCluster(ZookeeperDiscoverySpiTest.java:404) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
junit.framework.TestCase.runTest(TestCase.java:176) at org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2087) at org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:140) at org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:2002) at java.lang.Thread.run(Thread.java:745) Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start SPI: ZookeeperDiscoverySpi [zkRootPath=/apacheIgnite, zkConnectionString=127.0.0.1:46727,127.0.0.1:36728,127.0.0.1:34199, joinTimeout=0, sesTimeout=1, clientReconnectDisabled=false, internalLsnr=null, stats=org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryStatistics@2c5b3f94] at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:300) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:916) at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1755) ... 19 more Caused by: class org.apache.ignite.spi.IgniteSpiException: Failed to join cluster at org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.startJoinAndWait(ZookeeperDiscoveryImpl.java:713) at org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi.spiStart(ZookeeperDiscoverySpi.java:474) at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297) ... 
21 more Caused by: class org.apache.ignite.IgniteCheckedException: null at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7307) at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:259) at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:232) at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:159) at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:151) at org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.startJoinAndWait(ZookeeperDiscoveryImpl.java:700) ... 23 more Caused by: java.lang.NullPointerException at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.node(GridDiscoveryManager.java:1797) at org.apache.ignite.internal.managers.deployment.GridDeploymentClassLoader.sendResourceRequest(GridDeploymentClassLoader.java:731) at org.apache.ignite.internal.managers.deployment.GridDeploymentClassLoader.getResourceAsStream(GridDeploymentClassLoader.java:694) at org.apache.ignite.internal.managers.deployment.GridDeploymentPerVersionStore.checkLoadRemoteClass(GridDeploymentPerVersionStore.java:717) at org.apache.ignite.internal.managers.deployment.GridDeploymentPerVersionStore.getDeployment(GridDeploymentPerVersionStore.java:297) at org.apache.ignite.internal.managers.deployment.GridDeploy
[jira] [Created] (IGNITE-9095) IgnitePdsBinarySortObjectFieldsTest always fails due to assertion in master
Ilya Kasnacheev created IGNITE-9095: --- Summary: IgnitePdsBinarySortObjectFieldsTest always fails due to assertion in master Key: IGNITE-9095 URL: https://issues.apache.org/jira/browse/IGNITE-9095 Project: Ignite Issue Type: Bug Affects Versions: 2.7 Reporter: Ilya Kasnacheev For example https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_PdsIndexing&branch=pull%2F4420%2Fhead&tab=buildTypeStatusDiv Worked in 2.6!
Re: iep-6 metrics ticket review
Hi, Igniters! I have the ticket [1] reviewed; it introduces large changes to the cache. How can I make sure it causes no performance drop? [1] : https://issues.apache.org/jira/browse/IGNITE-6846 Wed, Apr 11, 2018 at 3:32, Valentin Kulichenko < valentin.kuliche...@gmail.com>: > This is on my plate, will try to take a look this week. > > -Val > > On Mon, Apr 9, 2018 at 10:28 AM, Denis Magda wrote: > > > Val, > > > > As an initial reviewer and reporter, could you have a look and sign the > > contribution off? > > > > -- > > Denis > > > > On Mon, Apr 9, 2018 at 12:56 AM, Aleksey Kuznetsov < > > alkuznetsov...@gmail.com > > > wrote: > > > > > Hi, Igniters! > > > > > > Do we still need this ticket, about invoke metrics : [1] ? > > > > > > If yes, then could somebody review it? > > > > > > If no, should we close this ticket? > > > > > > [1] : https://issues.apache.org/jira/browse/IGNITE-6846 > > > -- > > > > > > *Best Regards,* > > > > > > *Kuznetsov Aleksey* > > > > > >
Re: Deprecating LOCAL cache
+1 for deprecation in 2.7 and removal in 3.0. (binding) :) I do not see any reason to use local cache instead of CHM, even for testing. чт, 26 июл. 2018 г. в 17:05, Pavel Kovalenko : > Stan, > > I don't think that it is a good way to spawn such crutches in the codebase > for extra rare cases of some of the users. > I think we should do it in right way or do nothing and just remove LOCAL > caches completely. > > 2018-07-25 16:19 GMT+03:00 Stanislav Lukyanov : > > > In my view the approach in implementing LOCAL caches isn’t supposed to be > > highly efficient – just a > > functional workaround for the existing users of LOCAL cache. > > Moreover, if the workaround is easy but slightly awkward it isn’t a bad > > thing – a user needs to understand that their use case > > isn't directly supported and they shouldn’t expect too much of it. > > That’s a drawback of the existing LOCAL cache – it appears as a > > well-supported use case in the API, but if one actually tries to use it > > they’ll see lower performance and more awkward behavior than what they > > could expect. > > > > Stan > > > > From: Pavel Kovalenko > > Sent: 25 июля 2018 г. 15:27 > > To: dev@ignite.apache.org > > Subject: Re: Deprecating LOCAL cache > > > > It's not easy to just make such caches as PARTITIONED with NodeFilter. > > Even in the case when a node is not affinity node for this cache we > create > > entities like GridClientPartitionTopology for such caches on all nodes. > > These caches participate in the exchange, calculate affinity, etc. on all > > nodes. > > If you create 1 instance of local cache on N nodes you will get N^2 > useless > > entities which will eat resources. > > So, this approach should be carefully analyzed before the proposed > > implementation. > > > > 2018-07-25 11:58 GMT+03:00 Dmitrii Ryabov : > > > > > +1 to make LOCAL as filtered PARTITIONED cache. I think it would be > much > > > easier and faster than fixing all bugs. 
> > > > > > 2018-07-25 11:51 GMT+03:00 Dmitriy Setrakyan : > > > > > > > I would stay away from deprecating such huge pieces as a whole LOCAL > > > cache. > > > > In retrospect, we should probably not even have LOCAL caches, but > now I > > > am > > > > certain that it is used by many users. > > > > > > > > I would do one of the following, whichever one is easier: > > > > > > > >- Fix the issues found with LOCAL caches, including persistence > > > support > > > >- Implement LOCAL caches as PARTITIONED caches over the local > node. > > In > > > >this case, we would have to hide any distribution-related config > > from > > > >users, like affinity function, for example. > > > > > > > > D. > > > > > > > > On Wed, Jul 25, 2018 at 9:05 AM, Valentin Kulichenko < > > > > valentin.kuliche...@gmail.com> wrote: > > > > > > > > > It sounds like the main drawback of LOCAL cache is that it's > > > implemented > > > > > separately and therefore has to be maintained separately. If that's > > the > > > > > only issue, why not keep LOCAL cache mode on public API, but > > implement > > > it > > > > > as a PARTITIONED cache with a node filter forcefully set? That's > > > similar > > > > to > > > > > what we do with REPLICATED caches which are actually PARTITIONED > with > > > > > infinite number of backups. > > > > > > > > > > This way we fix the issues described by Stan and don't have to > > > deprecate > > > > > anything. > > > > > > > > > > -Val > > > > > > > > > > On Wed, Jul 25, 2018 at 12:53 AM Stanislav Lukyanov < > > > > > stanlukya...@gmail.com> > > > > > wrote: > > > > > > > > > > > Hi Igniters, > > > > > > > > > > > > I’d like to start a discussion about the deprecation of the LOCAL > > > > caches. > > > > > > > > > > > > LOCAL caches are an edge-case functionality > > > > > > I haven’t done any formal analysis, but from my experience LOCAL > > > caches > > > > > > are needed very rarely, if ever. 
> > > > > > I think most usages of LOCAL caches I’ve seen were misuses: the > > users > > > > > > actually needed a simple HashMap, or an actual PARTITIONED cache. > > > > > > > > > > > > LOCAL caches are easy to implement on top of PARTITIONED > > > > > > If one requires a LOCAL cache (which is itself questionable, as > > > > discussed > > > > > > above) it is quite easy to implement one on top of PARTITIONED > > cache. > > > > > > A node filter of form `node -> node.id().equals(localNodeId)` is > > > > enough > > > > > > to make the cache to be stored on the node that created it. > > > > > > Locality of access to the cache (i.e. making it unavailable from > > > other > > > > > > nodes) can be achieved on the application level. > > > > > > > > > > > > LOCAL caches are hard to maintain > > > > > > A quick look at the open issues mentioning “local cache” suggests > > > that > > > > > > this is a corner case for implementation of many Ignite features: > > > > > > > > > > > > https://issues.apache.org/jira/issues/?jql=text%20~%20% > > > > > 22local%20cache%22%20and%20%20project%20%3D%20IGNITE% > >
[jira] [Created] (IGNITE-9094) Request for commit check is sent to backup nodes twice on primary node left.
Alexei Scherbakov created IGNITE-9094: - Summary: Request for commit check is sent to backup nodes twice on primary node left. Key: IGNITE-9094 URL: https://issues.apache.org/jira/browse/IGNITE-9094 Project: Ignite Issue Type: Bug Reporter: Alexei Scherbakov Fix For: 2.7 This causes twice as needed messages during recovery. First place: {noformat} at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxFinishRequest.(GridDhtTxFinishRequest.java:161) at org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.checkCommittedRequest(GridNearTxFinishFuture.java:911) at org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.access$400(GridNearTxFinishFuture.java:71) at org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture$FinishMiniFuture.onNodeLeft(GridNearTxFinishFuture.java:1005) at org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.finish(GridNearTxFinishFuture.java:820) at org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.finish(GridNearTxFinishFuture.java:741) at org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.doFinish(GridNearTxFinishFuture.java:479) at org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.finish(GridNearTxFinishFuture.java:417) at org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$19.apply(GridNearTxLocal.java:3354) at org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$19.apply(GridNearTxLocal.java:3335) at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:383) at org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:347) at org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:335) at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:495) at org.apache.ignite.internal.processors.cache.GridCacheCompoundFuture.onDone(GridCacheCompoundFuture.java:56) at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:474) at org.apache.ignite.internal.processors.cache.distributed.near.GridNearPessimisticTxPrepareFuture.onDone(GridNearPessimisticTxPrepareFuture.java:409) at org.apache.ignite.internal.processors.cache.distributed.near.GridNearPessimisticTxPrepareFuture.onDone(GridNearPessimisticTxPrepareFuture.java:58) at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:451) at org.apache.ignite.internal.util.future.GridCompoundFuture.checkComplete(GridCompoundFuture.java:285) at org.apache.ignite.internal.util.future.GridCompoundFuture.apply(GridCompoundFuture.java:144) at org.apache.ignite.internal.util.future.GridCompoundFuture.apply(GridCompoundFuture.java:45) at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:383) at org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:347) at org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:335) at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:495) at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:474) at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:462) at org.apache.ignite.internal.processors.cache.distributed.near.GridNearPessimisticTxPrepareFuture$MiniFuture.onError(GridNearPessimisticTxPrepareFuture.java:515) at org.apache.ignite.internal.processors.cache.distributed.near.GridNearPessimisticTxPrepareFuture$MiniFuture.onNodeLeft(GridNearPessimisticTxPrepareFuture.java:496) at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearPessimisticTxPrepareFuture.onNodeLeft(GridNearPessimisticTxPrepareFuture.java:87) at org.apache.ignite.internal.processors.cache.GridCacheMvccManager$4.onEvent(GridCacheMvccManager.java:266) at org.apache.ignite.internal.managers.eventstorage.GridEventStorageManager$LocalListenerWrapper.onEvent(GridEventStorageManager.java:1384) at org.apache.ignite.internal.managers.eventstorage.GridEventStorageManager.notifyListeners(GridEventStorageManager.java:873) at org.apache.ignite.internal.managers.eventstorage.GridEventStorageManager.notifyListeners(GridEventStorageManager.java:858) at org.apache.ignite.internal.managers.eventstorage.G
Re: Thin Client lib: Python
Hello! I expect the effect to be negligible, and the UX gain is well worth it. If it ever becomes a sensitive issue, the hashcode-based operation might be retained, as mentioned earlier. Regards, -- Ilya Kasnacheev 2018-07-26 17:39 GMT+03:00 Igor Sapego : > Ilya, > > This may affect performance in a negative way, as it requires > additional hashcode calculation on every cache operation. > > Best Regards, > Igor > > > On Thu, Jul 26, 2018 at 5:02 PM Ilya Kasnacheev > > wrote: > > > Hello! > > > > I think that having both options is indeed preferable. > > > > Regards, > > > > -- > > Ilya Kasnacheev > > > > 2018-07-26 16:51 GMT+03:00 Dmitry Melnichuk < > > dmitry.melnic...@nobitlost.com> > > : > > > Hi, Ilya! > > > > > > I considered this option. Indeed, the code would look cleaner if only > one > > > kind of identifier (preferably the human-readable name) was used. But > > there > > > can be a hypothetical situation, when the user is left with hash code > > only. > > > (For example, obtained from some other API.) It would be sad to have an > > > identifier and not be able to use it. > > > > > > Now I really think about using hash codes and names interchangeably, so > > > both > > > > > > ``` > > > cache_put(conn, 'my-cache', value=1, key='a') > > > ``` > > > > > > and > > > > > > > > > ``` > > > cache_put(conn, my_hash_code, value=1, key='a') > > > ``` > > > > > > will be allowed. > > > > > > This will be a minor complication on my side, and quite reasonable one. > > > > > > > > > On 07/26/2018 10:44 PM, Ilya Kasnacheev wrote: > > > > > >> Hello! > > >> > > >> Why not use cache name as string here, instead of cache_id()? > > >> > > >> cache_put(conn, 'my-cache', value=1, key='a') > > >> > > >> Regards, > > >> > > >> > > >
Re: Thin Client lib: Python
Ilya, This may affect performance in a negative way, as it requires additional hashcode calculation on every cache operation. Best Regards, Igor On Thu, Jul 26, 2018 at 5:02 PM Ilya Kasnacheev wrote: > Hello! > > I think that having both options is indeed preferable. > > Regards, > > -- > Ilya Kasnacheev > > 2018-07-26 16:51 GMT+03:00 Dmitry Melnichuk < > dmitry.melnic...@nobitlost.com> > : > > > Hi, Ilya! > > > > I considered this option. Indeed, the code would look cleaner if only one > > kind of identifier (preferably the human-readable name) was used. But > there > > can be a hypothetical situation, when the user is left with hash code > only. > > (For example, obtained from some other API.) It would be sad to have an > > identifier and not be able to use it. > > > > Now I really think about using hash codes and names interchangeably, so > > both > > > > ``` > > cache_put(conn, 'my-cache', value=1, key='a') > > ``` > > > > and > > > > > > ``` > > cache_put(conn, my_hash_code, value=1, key='a') > > ``` > > > > will be allowed. > > > > This will be a minor complication on my side, and quite reasonable one. > > > > > > On 07/26/2018 10:44 PM, Ilya Kasnacheev wrote: > > > >> Hello! > >> > >> Why not use cache name as string here, instead of cache_id()? > >> > >> cache_put(conn, 'my-cache', value=1, key='a') > >> > >> Regards, > >> > >> >
Re: Quick questions on B+ Trees and Partitions
Hi John, 1. The B+ tree in a partition is a primary key index; this means that we use a B+ tree index for searching data in the partition. 2. I did not fully understand the question, please explain in more detail. 3. It depends on how many partitions you have for these caches; by default it is 1024 per cache, so in your default case it will be 2048 B+ trees. 4. If I understood your question correctly, you are asking what differentiates a B+ tree in a partition from a B+ tree in an SQL index. There is one SQL B+ tree per index (as many as the indexes you created) per cache on a node (a value field index), while there are as many partition B+ trees (primary key index) as there are partitions. On Thu, Jul 26, 2018 at 3:07 AM John Wilson wrote: > Hi, > > >1. B+ tree initialization, BPlusTree.initTree, seems to be called for >every partition. Why? > > https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/tree/BPlusTree.java >2. The documentation here, >https://apacheignite.readme.io/docs/memory-architecture, also states >that for each SQL index, Ignite instantiates and manages a dedicated B+ >Tree. So, is the number of B+ trees determined by partition number or # > of >indexes defined? >3. Assume I have a Person cache and an Organization cache. How many B+ >trees are defined for each cache? >4. What differentiates one B+ tree from another B+ tree? Just the cache >it represents? > > > Thanks, > John >
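The arithmetic behind points 3 and 4 above can be made explicit. This is just a sketch of the counting rule stated in the answer (one primary-key B+ tree per partition per cache, plus one B+ tree per SQL index per cache on a node); the default of 1024 partitions per cache is taken from the answer itself.

```python
def primary_index_tree_count(num_caches, partitions_per_cache=1024):
    """Per-partition B+ trees (primary key indexes):
    one tree per partition, per cache."""
    return num_caches * partitions_per_cache

def total_tree_count(num_caches, sql_indexes_per_cache=0,
                     partitions_per_cache=1024):
    """Add one more B+ tree per SQL index, per cache, on a node."""
    return (num_caches * partitions_per_cache
            + num_caches * sql_indexes_per_cache)
```

For the Person and Organization caches with default settings, `primary_index_tree_count(2)` gives 2048, matching the answer above.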
Re: Deprecating LOCAL cache
Stan, I don't think that it is a good idea to spawn such crutches in the codebase for the extremely rare cases of some of the users. I think we should do it in the right way or do nothing and just remove LOCAL caches completely. 2018-07-25 16:19 GMT+03:00 Stanislav Lukyanov : > In my view the approach in implementing LOCAL caches isn’t supposed to be > highly efficient – just a > functional workaround for the existing users of LOCAL cache. > Moreover, if the workaround is easy but slightly awkward it isn’t a bad > thing – a user needs to understand that their use case > isn't directly supported and they shouldn’t expect too much of it. > That’s a drawback of the existing LOCAL cache – it appears as a > well-supported use case in the API, but if one actually tries to use it > they’ll see lower performance and more awkward behavior than what they > could expect. > > Stan > > From: Pavel Kovalenko > Sent: July 25, 2018, 15:27 > To: dev@ignite.apache.org > Subject: Re: Deprecating LOCAL cache > > It's not easy to just make such caches as PARTITIONED with NodeFilter. > Even in the case when a node is not affinity node for this cache we create > entities like GridClientPartitionTopology for such caches on all nodes. > These caches participate in the exchange, calculate affinity, etc. on all > nodes. > If you create 1 instance of local cache on N nodes you will get N^2 useless > entities which will eat resources. > So, this approach should be carefully analyzed before the proposed > implementation. > > 2018-07-25 11:58 GMT+03:00 Dmitrii Ryabov : > > > +1 to make LOCAL as filtered PARTITIONED cache. I think it would be > much > > > easier and faster than fixing all bugs. 
> > > > > > I would do one of the following, whichever one is easier: > > > > > >- Fix the issues found with LOCAL caches, including persistence > > support > > >- Implement LOCAL caches as PARTITIONED caches over the local node. > In > > >this case, we would have to hide any distribution-related config > from > > >users, like affinity function, for example. > > > > > > D. > > > > > > On Wed, Jul 25, 2018 at 9:05 AM, Valentin Kulichenko < > > > valentin.kuliche...@gmail.com> wrote: > > > > > > > It sounds like the main drawback of LOCAL cache is that it's > > implemented > > > > separately and therefore has to be maintained separately. If that's > the > > > > only issue, why not keep LOCAL cache mode on public API, but > implement > > it > > > > as a PARTITIONED cache with a node filter forcefully set? That's > > similar > > > to > > > > what we do with REPLICATED caches which are actually PARTITIONED with > > > > infinite number of backups. > > > > > > > > This way we fix the issues described by Stan and don't have to > > deprecate > > > > anything. > > > > > > > > -Val > > > > > > > > On Wed, Jul 25, 2018 at 12:53 AM Stanislav Lukyanov < > > > > stanlukya...@gmail.com> > > > > wrote: > > > > > > > > > Hi Igniters, > > > > > > > > > > I’d like to start a discussion about the deprecation of the LOCAL > > > caches. > > > > > > > > > > LOCAL caches are an edge-case functionality > > > > > I haven’t done any formal analysis, but from my experience LOCAL > > caches > > > > > are needed very rarely, if ever. > > > > > I think most usages of LOCAL caches I’ve seen were misuses: the > users > > > > > actually needed a simple HashMap, or an actual PARTITIONED cache. > > > > > > > > > > LOCAL caches are easy to implement on top of PARTITIONED > > > > > If one requires a LOCAL cache (which is itself questionable, as > > > discussed > > > > > above) it is quite easy to implement one on top of PARTITIONED > cache. 
> > > > > A node filter of form `node -> node.id().equals(localNodeId)` is > > > enough > > > > > to make the cache to be stored on the node that created it. > > > > > Locality of access to the cache (i.e. making it unavailable from > > other > > > > > nodes) can be achieved on the application level. > > > > > > > > > > LOCAL caches are hard to maintain > > > > > A quick look at the open issues mentioning “local cache” suggests > > that > > > > > this is a corner case for implementation of many Ignite features: > > > > > > > > > > https://issues.apache.org/jira/issues/?jql=text%20~%20% > > > > 22local%20cache%22%20and%20%20project%20%3D%20IGNITE% > > > > 20and%20status%20%3D%20open > > > > > In particular, a recent SO question brought up the fact that LOCAL > > > caches > > > > > don’t support native persistence: > > > > > > > > > > https://stackoverflow.com/questions/51511892/how-to- > > > > configure-persistent-storage-for-apache-ignite-cache > > > > > Having to ask ourselves “how does it play with LOCAL caches” every > > time > > > > we > > > > > write any code in Ignite se
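The node-filter approach discussed above can be sketched outside of Ignite. The following is a minimal, hypothetical illustration (not Ignite's API, and the node IDs are made up): the Java predicate `node -> node.id().equals(localNodeId)` accepts only the creating node, so all partitions of the emulated LOCAL cache land there.

```python
# Hypothetical sketch (not Ignite's API): emulating a LOCAL cache as a
# PARTITIONED cache whose node filter accepts only the node that created it.

def make_local_node_filter(local_node_id):
    """Python analogue of the Java predicate `node -> node.id().equals(localNodeId)`."""
    return lambda node_id: node_id == local_node_id

cluster = ["node-a", "node-b", "node-c"]

# The cache is created on node-b, so the filter keeps only node-b.
node_filter = make_local_node_filter("node-b")
affinity_nodes = [n for n in cluster if node_filter(n)]
print(affinity_nodes)  # ['node-b'] -- all partitions live on the creating node
```

Locality of access (making the cache unavailable from other nodes) would still need to be enforced at the application level, as noted above.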
Re: Thin Client lib: Python
Hello! I think that having both options is indeed preferable. Regards, -- Ilya Kasnacheev 2018-07-26 16:51 GMT+03:00 Dmitry Melnichuk : > Hi, Ilya! > > I considered this option. Indeed, the code would look cleaner if only one > kind of identifier (preferably the human-readable name) was used. But there > can be a hypothetical situation, when the user is left with hash code only. > (For example, obtained from some other API.) It would be sad to have an > identifier and not be able to use it. > > Now I really think about using hash codes and names interchangeably, so > both > > ``` > cache_put(conn, 'my-cache', value=1, key='a') > ``` > > and > > > ``` > cache_put(conn, my_hash_code, value=1, key='a') > ``` > > will be allowed. > > This will be a minor complication on my side, and quite reasonable one. > > > On 07/26/2018 10:44 PM, Ilya Kasnacheev wrote: > >> Hello! >> >> Why not use cache name as string here, instead of cache_id()? >> >> cache_put(conn, 'my-cache', value=1, key='a') >> >> Regards, >> >>
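For illustration, here is one way the hash-code form of the identifier could be derived on the client side. This sketch assumes the thin client protocol computes the cache ID as a Java-style signed 32-bit string hash of the cache name (check the actual binary client protocol specification before relying on this); `java_string_hash` is a name made up for this example.

```python
def java_string_hash(s):
    """Java's String.hashCode(): h = s[0]*31^(n-1) + ... + s[n-1], as a signed 32-bit int."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF  # wrap to 32 bits like a Java int
    return h - 0x100000000 if h >= 0x80000000 else h

# A user left with only the hash code could still address the cache:
my_hash_code = java_string_hash('my-cache')
# cache_put(conn, my_hash_code, value=1, key='a')  # equivalent to using 'my-cache'
```

Accepting either form is then a matter of checking the argument type: an `int` is passed through as the cache ID, a `str` is hashed first.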
[GitHub] ignite pull request #4318: IGNITE-8935 toString() or exclusion for most clas...
Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/4318 ---
Re: Deprecating LOCAL cache
Sorry, guys, I'll add my two cents. I like this idea: "Implement LOCAL caches as PARTITIONED caches over the local node." It makes sense for examples/testing in pseudo-distributed mode and so on. But I think that deprecation based on user-list mentions is the wrong way. Please look here: https://github.com/search?q=%22CacheMode.LOCAL%22+%26+ignite&type=Code There are a lot of hello-world examples with LOCAL mode. And of course, we could ask about that on the user list, not here, to vote for a deprecation like this. 2018-07-26 11:23 GMT+03:00 Vladimir Ozerov : > I meant LOCAL + non-LOCAL transactions of course. > > On Wed, Jul 25, 2018 at 10:42 PM Dmitriy Setrakyan > wrote: > > > Vladimir, > > > > Are you suggesting that a user cannot span more than one local cache in a > > cross cache LOCAL transactions. This is extremely surprising to me, as it > > would require almost no effort to support it. As far as mixing the local > > caches with distributed caches, then I agree, cross-cache transactions do > > not make sense. > > > > I am not sure why deprecating local caches has become a pressing issue. I > > can see that there are a few bugs, but why not just fix them and move on? > > Can someone explain why supporting LOCAL caches is such a burden? > > > > Having said that, I am not completely opposed to deprecating LOCAL > caches. > > I just want to know why. > > > > D. > > > > On Wed, Jul 25, 2018 at 10:55 AM, Vladimir Ozerov > > wrote: > > > > > Dima, > > > > > > LOCAL cache adds very little value to the product. It doesn't support > > > cross-cache transactions, consumes a lot of memory, much slower than > any > > > widely-used concurrent hash map. Let's go the same way as Java - mark > > LOCAL > > > cache as "deprecated for removal", and then remove it in 3.0. > > > > > > On Wed, Jul 25, 2018 at 12:10 PM Dmitrii Ryabov > > > > wrote: > > > > > > > +1 to make LOCAL as filtered PARTITIONED cache. I think it would be > > much > > > > easier and faster than fixing all bugs.
> > > > > > > > 2018-07-25 11:51 GMT+03:00 Dmitriy Setrakyan >: > > > > > > > > > I would stay away from deprecating such huge pieces as a whole > LOCAL > > > > cache. > > > > > In retrospect, we should probably not even have LOCAL caches, but > > now I > > > > am > > > > > certain that it is used by many users. > > > > > > > > > > I would do one of the following, whichever one is easier: > > > > > > > > > >- Fix the issues found with LOCAL caches, including persistence > > > > support > > > > >- Implement LOCAL caches as PARTITIONED caches over the local > > node. > > > In > > > > >this case, we would have to hide any distribution-related config > > > from > > > > >users, like affinity function, for example. > > > > > > > > > > D. > > > > > > > > > > On Wed, Jul 25, 2018 at 9:05 AM, Valentin Kulichenko < > > > > > valentin.kuliche...@gmail.com> wrote: > > > > > > > > > > > It sounds like the main drawback of LOCAL cache is that it's > > > > implemented > > > > > > separately and therefore has to be maintained separately. If > that's > > > the > > > > > > only issue, why not keep LOCAL cache mode on public API, but > > > implement > > > > it > > > > > > as a PARTITIONED cache with a node filter forcefully set? That's > > > > similar > > > > > to > > > > > > what we do with REPLICATED caches which are actually PARTITIONED > > with > > > > > > infinite number of backups. > > > > > > > > > > > > This way we fix the issues described by Stan and don't have to > > > > deprecate > > > > > > anything. > > > > > > > > > > > > -Val > > > > > > > > > > > > On Wed, Jul 25, 2018 at 12:53 AM Stanislav Lukyanov < > > > > > > stanlukya...@gmail.com> > > > > > > wrote: > > > > > > > > > > > > > Hi Igniters, > > > > > > > > > > > > > > I’d like to start a discussion about the deprecation of the > LOCAL > > > > > caches. 
> > > > > > > > > > > > > > LOCAL caches are an edge-case functionality > > > > > > > I haven’t done any formal analysis, but from my experience > LOCAL > > > > caches > > > > > > > are needed very rarely, if ever. > > > > > > > I think most usages of LOCAL caches I’ve seen were misuses: the > > > users > > > > > > > actually needed a simple HashMap, or an actual PARTITIONED > cache. > > > > > > > > > > > > > > LOCAL caches are easy to implement on top of PARTITIONED > > > > > > > If one requires a LOCAL cache (which is itself questionable, as > > > > > discussed > > > > > > > above) it is quite easy to implement one on top of PARTITIONED > > > cache. > > > > > > > A node filter of form `node -> node.id().equals(localNodeId)` > is > > > > > enough > > > > > > > to make the cache to be stored on the node that created it. > > > > > > > Locality of access to the cache (i.e. making it unavailable > from > > > > other > > > > > > > nodes) can be achieved on the application level. > > > > > > > > > > > > > > LOCAL caches are hard to maintain > > > > > > > A quick look at the open issues mentioning “local cache” > suggests > > > > tha
[GitHub] ignite pull request #4439: IGNITE-9030 Apache Ignite 2.7 Linux packages vers...
GitHub user vveider opened a pull request: https://github.com/apache/ignite/pull/4439 IGNITE-9030 Apache Ignite 2.7 Linux packages version update You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-9030 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4439.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4439 commit c0ffa0660695acf95e906aac3925a69a68b2c20a Author: Ivanov Petr Date: 2018-07-26T13:52:25Z IGNITE-9030 Apache Ignite 2.7 Linux packages version update ---
Re: Thin Client lib: Python
Hi, Ilya! I considered this option. Indeed, the code would look cleaner if only one kind of identifier (preferably the human-readable name) was used. But there can be a hypothetical situation, when the user is left with hash code only. (For example, obtained from some other API.) It would be sad to have an identifier and not be able to use it. Now I really think about using hash codes and names interchangeably, so both ``` cache_put(conn, 'my-cache', value=1, key='a') ``` and ``` cache_put(conn, my_hash_code, value=1, key='a') ``` will be allowed. This will be a minor complication on my side, and quite reasonable one. On 07/26/2018 10:44 PM, Ilya Kasnacheev wrote: Hello! Why not use cache name as string here, instead of cache_id()? cache_put(conn, 'my-cache', value=1, key='a') Regards,
[jira] [Created] (IGNITE-9093) IgniteDbPutGetWithCacheStoreTest.testReadThrough fails every time when run on master
Ilya Kasnacheev created IGNITE-9093: --- Summary: IgniteDbPutGetWithCacheStoreTest.testReadThrough fails every time when run on master Key: IGNITE-9093 URL: https://issues.apache.org/jira/browse/IGNITE-9093 Project: Ignite Issue Type: Bug Affects Versions: 2.7 Reporter: Ilya Kasnacheev Such as in https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_Pds1&branch=pull%2F4420%2Fhead&tab=buildTypeStatusDiv Used to work every time in 2.6 release. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-9092) SQL: Support CREATE TABLE .. AS (SELECT...) syntax
Stepan Pilschikov created IGNITE-9092: - Summary: SQL: Support CREATE TABLE .. AS (SELECT...) syntax Key: IGNITE-9092 URL: https://issues.apache.org/jira/browse/IGNITE-9092 Project: Ignite Issue Type: Bug Components: sql Affects Versions: 2.6, 2.5 Reporter: Stepan Pilschikov Currently this syntax is not supported: {code:java} create table test as (select * from created_table); Error: CREATE TABLE ... AS ... syntax is not supported (state=0A000,code=0) java.sql.SQLException: CREATE TABLE ... AS ... syntax is not supported at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:762) at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:212) at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:475) at sqlline.Commands.execute(Commands.java:823) at sqlline.Commands.sql(Commands.java:733) at sqlline.SqlLine.dispatch(SqlLine.java:795) at sqlline.SqlLine.begin(SqlLine.java:668) at sqlline.SqlLine.start(SqlLine.java:373) at sqlline.SqlLine.main(SqlLine.java:265){code} But H2 allows this statement. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Re: Async cache groups rebalance not started with rebalanceOrder ZERO
Maxim, 1) There is a typo in the javadoc; feel free to fix it. 2) It's a bad idea to rebalance more than one cache simultaneously. - It's hard to determine the error cause in that case when (not "if", but "when" :) ) we hit an issue in production (the 100+ caches case). - We should keep the rebalance load limited. Rebalancing should not cause thousands of messages per second; this will lead to cluster death. rebalanceThreadPoolSize(), rebalanceBatchSize() and rebalanceBatchesPrefetchCount() give us a guarantee of limited but proper load. 3) The correct fix for the situation you described is to restart (chained) rebalancing for both caches on timeout. And that's what we'll get once the cluster detects that the node has IO issues and starts a new topology without it. So it seems only javadoc fixes are required. Wed, Jul 18, 2018 at 15:13, Yakov Zhdanov : > Maxim, I checked and it seems that send retry count is used only in cache > IO manager and the usage is semantically very far from what I suggest. > Resend count limits the attempts count, while I meant successful send but > possible problems on supplier side. > > --Yakov > > 2018-07-17 19:01 GMT+03:00 Maxim Muzafarov : > > > Yakov, > > > > But we already have DFLT_SEND_RETRY_CNT and DFLT_SEND_RETRY_DELAY for > > configuring our CommunicationSPI behavior. What if user configure this > > parameters his own way and he will see a lot of WARN messages in log > which > > have no sense? > > > > May be we use GridCachePartitionExchangeManager#forceRebalance (or may > > be forceReassign) if we fail rebalance all that retries. What do you think? > > > > > > > > Mon, Jul 16, 2018 at 21:12, Yakov Zhdanov : > > > Maxim, I looked at the code you provided. I think we need to add some > > > timeout validation and output warning to logs on demander side in case > > > there is no supply message within 30 secs and repeat demanding process. > > > This should apply to any demand message throughout the rebalancing > > process > > > not only the 1st one.
> > > > > > You can use the following message > > > > > > Failed to wait for supply message from node within 30 secs [cache=C, > > > partId=XX] > > > > > > Alex Goncharuk do you have comments here? > > > > > > Yakov Zhdanov > > > www.gridgain.com > > > > > > 2018-07-14 19:45 GMT+03:00 Maxim Muzafarov : > > > > > > > Yakov, > > > > > > > > Yes, you're right. Whole rebalancing progress will be stopped. > > > > > > > > Actually, rebalancing order doesn't matter, you're right about that too. The javadoc > > just > > > > describes the idea of how rebalance should work for caches, but in fact it doesn't > > > > work as described. Personally, I'd prefer to start rebalance of each > > > cache > > > > group in an async way independently. > > > > > > > > Please, look at my reproducer [1]. > > > > > > > > Scenario: > > > > Cluster with two REPLICATED caches. > > > > Start new node. > > > > First rebalance cache group fails to start (e.g. network issues) - > > > it's > > > > OK. > > > > Second rebalance cache group will never be started - the whole further > > > progress > > > > is stuck (I think rebalance here should be started!). > > > > > > > > > > > > [1] > > > > https://github.com/Mmuzaf/ignite/blob/rebalance-cancel/ > > > > modules/core/src/test/java/org/apache/ignite/internal/ > > > > processors/cache/distributed/rebalancing/ > > GridCacheRebalancingCancelSelf > > > > Test.java > > > > Fri, Jul 13, 2018 at 17:46, Yakov Zhdanov : > > > > > Maxim, I do not understand the problem. Imagine I do not have any > > > > ordering > > > > > but rebalancing of some cache fails to start - so in my understanding > > > > > overall rebalancing progress becomes blocked. Is that true? > > > > > > > > > > Can you please provide a reproducer for your problem? > > > > > > > > > > --Yakov > > > > > > > > > > 2018-07-09 16:42 GMT+03:00 Maxim Muzafarov : > > > > > > > > > > > Hello Igniters, > > > > > > > > > > > > Each cache group has “rebalance order” property.
As javadoc for > > > > > > getRebalanceOrder() says: “Note that cache with order {@code 0} > > does > > > > not > > > > > > participate in ordering. This means that cache with rebalance > order > > > > > {@code > > > > > > 0} will never wait for any other caches. All caches with order > > {@code > > > > 0} > > > > > > will be rebalanced right away concurrently with each other and > > > ordered > > > > > > rebalance processes. If not set, cache order is 0, i.e. > rebalancing > > > is > > > > > not > > > > > > ordered.” > > > > > > > > > > > > In fact GridCachePartitionExchangeManager always build the chain > > of > > > > > > rebalancing cache groups to start (even for cache order ZERO): > > > > > > > > > > > > ignite-sys-cache -> cacheR -> cacheR3 -> cacheR2 -> cacheR5 -> > > > cacheR1. > > > > > > > > > > > > If one of these groups will fail to start further groups will > never > > > be > > > > > run. > > > > > > > > > > > > * Question 1*: Should we fix javadoc description or create a bug > > for > > > > > fixing > > > > > > such rebalance
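A toy model (made up for this discussion, not Ignite code) of the behavior described above: with a chained start, one group failing to start blocks every group after it in the chain, while groups with order 0 started independently, as the javadoc promises, would be unaffected.

```python
chain = ["ignite-sys-cache", "cacheR", "cacheR3", "cacheR2", "cacheR5", "cacheR1"]

def start_chained(groups, failing):
    """Chained start: each group waits for the previous one; a failure blocks the rest."""
    started = []
    for g in groups:
        if g in failing:
            break  # the failed group never completes, so the chain stalls here
        started.append(g)
    return started

def start_independent(groups, failing):
    """Order-0 semantics from the javadoc: every group starts concurrently."""
    return [g for g in groups if g not in failing]

# cacheR fails to start (e.g. network issues):
print(start_chained(chain, {"cacheR"}))      # only ignite-sys-cache got rebalanced
print(start_independent(chain, {"cacheR"}))  # all other groups still rebalance
```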
[GitHub] ignite pull request #4379: GG-13998 IGNITE-5975 Fixed hang on concurrent nod...
Github user alamar closed the pull request at: https://github.com/apache/ignite/pull/4379 ---
Re: Quick question on data and index pages
Hi, 1. As the name implies, indirectCount is the count of indirect items, which are references to direct items. According to our DataPage format, we keep all items in the beginning of the page. Take a look at this diagram: https://cwiki-test.apache.org/confluence/download/attachments/73632614/Part%206.%205.%20Page%20structure%20%281%29.png?version=1&modificationDate=1525443891000&api=v2. If we remove It2, we will have to move It3 into its place. But we already have external references to It3 by its index (3). So, to keep those external references correct, we have to mark the item at index 3 as "indirect" and make it point to index 2. In this case, such a page will have directCount == 2 and indirectCount == 1. 2. No, only index pages are organized in a B+ tree. Data pages are organized in another data structure called FreeList - it stores how much free space is available on each data page and provides fast access to pages that have at least the specified amount of free space. 3. Yes. The most significant difference is that internal nodes need to store links to nodes on the next level. Check classes BPlusInnerIO and BPlusLeafIO (and their subclasses) if you are interested in more details. On Thu, Jul 26, 2018 at 6:22 AM, John Wilson wrote: > Hi, > > 1. What are direct and indirect count in data page header used for? What is > the difference? > > [ > https://cwiki-test.apache.org/confluence/display/IGNITE/ > Ignite+Durable+Memory+-+under+the+hood#IgniteDurableMemory- > underthehood-Freelists > ] > > 2. Are data pages organized in a B+ tree structure or index pages only? > > 3. Is there any difference between internal and leaf nodes in the B+ tree > structure? > > > Thanks, > -- Best regards, Ilya
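The removal example above can be sketched as a toy model (made up for illustration; Ignite's real DataPageIO works on raw page memory): removing It2 moves the last direct item It3 into the freed slot, and an indirect item keeps the old index 3 resolvable for external references.

```python
# Toy model of direct/indirect items on a data page. External references
# address items by index; after a removal the last direct item is moved into
# the freed slot and an indirect item redirects its old index.

def remove_item(page, idx):
    direct, indirect = page["direct"], page["indirect"]
    last = max(direct)
    if idx != last:
        direct[idx] = direct.pop(last)  # move the last item into the freed slot
        indirect[last] = idx            # keep old external references valid
    else:
        del direct[idx]

def resolve(page, idx):
    """Follow at most one indirect hop to reach the payload."""
    return page["direct"][page["indirect"].get(idx, idx)]

page = {"direct": {1: "row1", 2: "row2", 3: "row3"}, "indirect": {}}
remove_item(page, 2)
print(resolve(page, 3))  # 'row3' -- still reachable via its original index
print(len(page["direct"]), len(page["indirect"]))  # directCount == 2, indirectCount == 1
```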
Re: Thin Client lib: Python
Hello! Why not use cache name as string here, instead of cache_id()? cache_put(conn, 'my-cache', value=1, key='a') Regards, -- Ilya Kasnacheev 2018-07-26 5:11 GMT+03:00 Dmitry Melnichuk : > Either > > ``` > conn = Connection('example.com', 10800) > cache_put(conn, cache_id('my-cache'), 'a', 1) > ``` > > or > > ``` > conn = Connection('example.com', 10800) > my_cache_id = cache_id('my-cache') > cache_put(conn, my_cache_id, 'a', 1) > ``` > > It is also possible to give parameters names, if you like to. > > ``` > conn = Connection('example.com', 10800) > cache_put(conn, cache_id('my-cache'), key='a', value=1) > ``` > > This should also work, but not recommended: > > ``` > conn = Connection('example.com', 10800) > cache_put(conn, cache_id('my-cache'), value=1, key='a') > ``` > > All variants can coexist in one user program. > > > On 07/26/2018 05:46 AM, Dmitriy Setrakyan wrote: > >> I am still confused. Let's work through an example. Suppose I have a cache >> named "my_cache" and I want to put an entry with key "a" and value "1". >> >> In Java, this code will look like this: >> >> >> *IgniteCache<...> myCache = ignite.cache("my-cache");myCache.put("a", >>> 1);* >>> >> >> >> How will the same code look in Python? >> >> D. >> >
[GitHub] ignite pull request #4433: IGNITE-9083 Compute (Affinity Run) TC configurati...
Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/4433 ---
[GitHub] ignite pull request #4438: IGNITE-9089 Web Agent not starting in docker cont...
GitHub user vveider opened a pull request: https://github.com/apache/ignite/pull/4438 IGNITE-9089 Web Agent not starting in docker container. You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-9089 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4438.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4438 commit c1effcd6276c111e0a828c4d863c1ce3c4fb0cd3 Author: Ivanov Petr Date: 2018-07-26T12:37:39Z IGNITE-9089 Web Agent not starting in docker container. ---
[jira] [Created] (IGNITE-9091) IEP-25: creating documentation
Alex Volkov created IGNITE-9091: --- Summary: IEP-25: creating documentation Key: IGNITE-9091 URL: https://issues.apache.org/jira/browse/IGNITE-9091 Project: Ignite Issue Type: Task Components: documentation Reporter: Alex Volkov It would be great to have proper documentation for IEP-25: [https://cwiki.apache.org/confluence/display/IGNITE/IEP-25:+Partition+Map+Exchange+hangs+resolving] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] ignite pull request #4437: IGNITE-9084 Fix WAL rebalance iterator exception
GitHub user Jokser opened a pull request: https://github.com/apache/ignite/pull/4437 IGNITE-9084 Fix WAL rebalance iterator exception You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-9084 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4437.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4437 commit 6e9012dce35c9a433da6b1be41bb43764d396134 Author: Pavel Kovalenko Date: 2018-07-26T12:06:32Z IGNITE-9084 Tests experiments. ---
RE: Adding experimental support for Intel Optane DC Persistent Memory
Ah, ok, it’s just the ‘.’ at the end of the link. Removed it and it’s fine. From: Stanislav Lukyanov Sent: July 26, 2018, 15:12 To: dev@ignite.apache.org Subject: RE: Adding experimental support for Intel Optane DC Persistent Memory Hi, The link you’ve shared gives me 404. Perhaps you need to add a permission for everyone to access the page? Thanks, Stan From: Mammo, Mulugeta Sent: July 26, 2018, 2:44 To: dev@ignite.apache.org Subject: Adding experimental support for Intel Optane DC Persistent Memory Hi, I have added a new proposal to support Intel Optane DC Persistent Memory for Ignite here: https://cwiki.apache.org/confluence/display/IGNITE/Adding+Experimental+Support+for+Intel+Optane+DC+Persistent+Memory. I'm looking forward to your feedback and collaboration on this. Thanks, Mulugeta
[GitHub] ignite pull request #4436: IGNITE-9064: Decision tree optimization
GitHub user avplatonov opened a pull request: https://github.com/apache/ignite/pull/4436 IGNITE-9064: Decision tree optimization @dmitrievanthony @artemmalykh Please review this code You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite IGNITE-9064 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4436.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4436 commit df45dba7dae1c6b9604474ec9fb912781d965a2f Author: Alexey Platonov Date: 2018-07-23T07:43:11Z O(n^2) in Regression Trees and memory leak elimination commit 7b31a02dd4ed6ff9396f50c0dab3187165303c39 Author: Alexey Platonov Date: 2018-07-24T12:48:21Z features index commit 8afc3af5b33bd560a26c081188ba8c19f4a6df3a Author: Alexey Platonov Date: 2018-07-24T14:07:32Z create tree data index, adopt for classification commit 615906bdeea7c5cb30d0839ab2906868306d5103 Author: Alexey Platonov Date: 2018-07-24T14:31:37Z adopt index for regression commit 4b52959a294603c80342511538eeff79071b64cc Author: Alexey Platonov Date: 2018-07-24T14:32:43Z Merge branch 'master' of https://github.com/apache/ignite into ml/optimizations commit b13e43560d4ec063615951d1c0a3f7810986b257 Author: Alexey Platonov Date: 2018-07-25T12:32:05Z use index projections and caching commit f697380019c95278c0a98bb1a18b684c3bd6 Author: Alexey Platonov Date: 2018-07-26T08:19:06Z some refactoring and comments commit 01096df8ab4ae0733285ff37ce37c63eb2a7a87a Author: Alexey Platonov Date: 2018-07-26T10:04:47Z make indexes is optional commit 9848faf37325391776f4a3014c6ec814797dc851 Author: Alexey Platonov Date: 2018-07-26T10:38:54Z add useIndex flag exhaustive search to tests ---
RE: Adding experimental support for Intel Optane DC Persistent Memory
Hi, The link you’ve shared gives me 404. Perhaps you need to add a permission for everyone to access the page? Thanks, Stan From: Mammo, Mulugeta Sent: July 26, 2018, 2:44 To: dev@ignite.apache.org Subject: Adding experimental support for Intel Optane DC Persistent Memory Hi, I have added a new proposal to support Intel Optane DC Persistent Memory for Ignite here: https://cwiki.apache.org/confluence/display/IGNITE/Adding+Experimental+Support+for+Intel+Optane+DC+Persistent+Memory. I'm looking forward to your feedback and collaboration on this. Thanks, Mulugeta
[GitHub] ignite pull request #4435: IGNITE-9058 Updated Apache Tomcat dependency to v...
GitHub user daradurvs opened a pull request: https://github.com/apache/ignite/pull/4435 IGNITE-9058 Updated Apache Tomcat dependency to version 9.0.10 You can merge this pull request into a Git repository by running: $ git pull https://github.com/daradurvs/ignite ignite-9058-fix Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4435.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4435 commit 34e6553a584d6e1f64198beeb952bf70d58a123d Author: Fedotov Date: 2018-07-26T11:56:36Z updated to 9.0.10 ---
[jira] [Created] (IGNITE-9090) When client node make cache.QueryCursorImpl.getAll they have OOM and continue working
ARomantsov created IGNITE-9090: -- Summary: When client node make cache.QueryCursorImpl.getAll they have OOM and continue working Key: IGNITE-9090 URL: https://issues.apache.org/jira/browse/IGNITE-9090 Project: Ignite Issue Type: Bug Affects Versions: 2.4 Environment: 2 server node, 1 client Reporter: ARomantsov Fix For: 2.7 {code:java} [12:21:22,390][SEVERE][query-#69][GridCacheIoManager] Failed to process message [senderId=30cab4ec-1da7-4e9f-a262-bdfa4d466865, messageType=class o.a.i.i.processors.cache.query.GridCacheQueryResponse] java.lang.OutOfMemoryError: GC overhead limit exceeded at java.lang.Long.valueOf(Long.java:840) at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:250) at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:198) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:421) at org.apache.ignite.internal.processors.cache.query.GridCacheQueryResponseEntry.readExternal(GridCacheQueryResponseEntry.java:90) at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readExternalizable(OptimizedObjectInputStream.java:555) at org.apache.ignite.internal.marshaller.optimized.OptimizedClassDescriptor.read(OptimizedClassDescriptor.java:917) at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:346) at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:198) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:421) at org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller.unmarshal0(OptimizedMarshaller.java:227) at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94) at org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1777) at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1964) at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716) at org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:310) at org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:99) at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82) at org.apache.ignite.internal.processors.cache.query.GridCacheQueryResponse.unmarshalCollection0(GridCacheQueryResponse.java:189) at org.apache.ignite.internal.processors.cache.query.GridCacheQueryResponse.finishUnmarshal(GridCacheQueryResponse.java:162) at org.apache.ignite.internal.processors.cache.GridCacheIoManager.unmarshall(GridCacheIoManager.java:1530) at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:576) at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$700(GridCacheIoManager.java:101) at org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1613) at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556) at org.apache.ignite.internal.managers.communication.GridIoManager.access$4100(GridIoManager.java:125) at org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2752) at org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1516) at org.apache.ignite.internal.managers.communication.GridIoManager.access$4400(GridIoManager.java:125) at org.apache.ignite.internal.managers.communication.GridIoManager$10.run(GridIoManager.java:1485) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [12:21:28,573][INFO][ignite-update-notifier-timer][GridUpdateNotifier] Update status is not available. [12:21:23,759][WARNING][jvm-pause-detector-worker][] Possible too long JVM pause: 22446 milliseconds. [12:21:23,758][INFO][grid-timeout-worker-#39][IgniteKernal] Metrics for local node (to disable set 'metricsLogFrequency' to 0) ^-- Node [id=c1f087b1, uptime=00:01:25.431] ^-- H/N/C [hosts=2, nodes=3, CPUs=32] ^-- CPU [cur=100%, avg=79.09%, GC=8.93%] ^-- PageMemory [pages=0] ^-- Heap [used=216MB, free=8.57%, comm=236MB] ^--
[jira] [Created] (IGNITE-9089) Web Agent not starting in docker container.
Ilya Murchenko created IGNITE-9089: -- Summary: Web Agent not starting in docker container. Key: IGNITE-9089 URL: https://issues.apache.org/jira/browse/IGNITE-9089 Project: Ignite Issue Type: Bug Affects Versions: 2.6 Reporter: Ilya Murchenko Assignee: Peter Ivanov Fix For: 2.7 After a successful build from the Dockerfile in the [Github repository|https://github.com/apache/ignite/blob/master/docker/web-agent/Dockerfile] Web Agent application not starting in docker container with the following error: {code:java} /bin/sh: ignite-web-agent.sh: not found {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] ignite pull request #4418: IGNITE-9058 'Update Apache Tomcat dependency vers...
Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/4418 ---
[jira] [Created] (IGNITE-9088) Add ability to dump persistence after particular test
Pavel Kovalenko created IGNITE-9088: --- Summary: Add ability to dump persistence after particular test Key: IGNITE-9088 URL: https://issues.apache.org/jira/browse/IGNITE-9088 Project: Ignite Issue Type: Improvement Components: persistence Reporter: Pavel Kovalenko Assignee: Pavel Kovalenko Fix For: 2.7 Sometimes it's needed to analyze persistence after a particular test finish on TeamCity. We need to add an ability to dump persistence dirs/files to the specified directory on test running host for further analysis. This should be managed by a property. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Re: Dirty Reads and READ_COMMITTED
Hi all, Let's initially agree we're discussing PESSIMISTIC, since OPTIMISTIC provides no guarantee on reads before the transaction is committed. Also, we should have some reproducer to make sure we're discussing the same issue. I guess it's possible to have dirty reads only in the following case: - the first transaction is committing right now under write locks (I hope we acquire them (all) before the first write, and release them (all) after the last write) - the second transaction does not acquire read locks and just reads whatever it can. Anyway, in case we have some issues here, it should be easy to write a reproducer. Or do we already have tests covering isolation issues? Thu, Jul 26, 2018 at 1:58, Dmitriy Setrakyan : > Let's suppose that some transaction is trying to update values A and B: > > *tx.start() * > > > > > > *cache.put("A", 1);cache.put("B", 2); * > > > > *tx. commit();* > > > If you use READ_COMMITTED isolation level, it is possible that you will > read the new value of A and the old value of B. If you need to make sure > that you read the new values for A and B, then you need to use > REPEATABLE_READ transaction in PESSIMISTIC mode. Note, that you can also > use OPTIMISTIC SERIALIZABLE transactions as well, but in this case if you > run into a conflict, then the transaction will be rolled back. > > D. > > On Wed, Jul 25, 2018 at 11:19 PM, Valentin Kulichenko < > valentin.kuliche...@gmail.com> wrote: > > > I believe Ignite updates values during the commit phase, so it's not > > possible to get a dirty read even if you do not acquire distributed locks > > on reads with READ_COMMITTED isolation. But I would let other community > > members who are more knowledgeable in this topic to add their comments, > as > > it's possible that I'm missing something. > > > > As for documentation, looks like semantic of "lock" there always implies > > that it is held until transaction is committed or rolled back. Probably > it > > makes sense to clarify this as well. > > > > And yes, please disregard the REPEATABLE_READ point.
I misread your initial message a little bit. > > -Val > > On Wed, Jul 25, 2018 at 11:25 AM John Wilson > > wrote: > > > And no. I'm not describing REPEATABLE_READ. I'm describing > > > https://en.wikipedia.org/wiki/Isolation_(database_systems)#Dirty_reads and > > > how READ_COMMITTED isolation can avoid dirty reads. > > > > > > On Wed, Jul 25, 2018 at 11:20 AM, John Wilson > > > wrote: > > > > I agree with your description. But the documentation > > > > https://apacheignite.readme.io/docs/transactions is misleading in stating that no > > > > read locks are acquired (it says "READ_COMMITTED - Data is read without a lock > > > > and is never cached in the transaction itself."), which seems wrong. > > > > Read locks are acquired, but they are released as soon as the read is > > > > complete (and they are not held until the transaction commits or rolls > > > > back). > > > > > > > > Thanks, > > > > > > > > On Wed, Jul 25, 2018 at 10:33 AM, Valentin Kulichenko < > > > > valentin.kuliche...@gmail.com> wrote: > > > >> Hi John, > > > >> > > > >> Read committed isolation typically implies that the lock on read is not held > > > >> throughout the transaction lifecycle, i.e. it is released right after the read is > > > >> completed (in Ignite this just means that no lock is required at all). This > > > >> semantic allows a second read to get an updated value that was already > > > >> committed by another transaction. See the Wikipedia description for example: > > > >> https://en.wikipedia.org/wiki/Isolation_(database_systems)#Read_committed > > > >> > > > >> What you're describing is REPEATABLE_READ isolation, which guarantees that > > > >> subsequent reads return the same value. In Ignite it's achieved by > > > >> acquiring a lock on read and releasing it only on commit/rollback.
> > > >> > > > >> -Val > > > >> > > > >> On Wed, Jul 25, 2018 at 10:12 AM John Wilson < sami.hailu...@gmail.com > > > > >> wrote: > > > >> > Hi, > > > >> > > > > >> > Consider the following transaction where we read key 1 twice. > > > >> > > > > >> > try (Transaction tx = Ignition.ignite().transactions().txStart(PESSIMISTIC, READ_COMMITTED)) { > > > >> > cache.get(1); > > > >> > //... > > > >> > cache.get(1); > > > >> > tx.commit(); > > > >> > } > > > >> > > > > >> > According to the documentation here, > > > >> > https://apacheignite.readme.io/docs/transactions, data is read without a > > > >> > lock and is never cached. If that is the case, then how do we avoid a dirty > > > >> > read on the second cache.get(1)? Another uncommitted transaction may update > > > >> > the key between the first and second reads. > > > >> > > > > >> > In most RDBMSs, the READ_COMMITTED isolation level acquires locks for both > > > >> > reads and writes. The read lock is released after a read, while the write > > > >> > lock is held until
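The distinction Val describes can be sketched as a toy, single-threaded simulation in plain JDK Java (no Ignite APIs; the class and method names here are invented for illustration): a READ_COMMITTED-style transaction re-reads the shared store on every get, so it observes a value committed by another transaction between its two reads, while a REPEATABLE_READ-style transaction pins the value it saw first.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model (names invented for illustration): a shared store and a
// transaction that either re-reads the store on every get (READ_COMMITTED)
// or caches the first value it saw (REPEATABLE_READ).
public class IsolationSketch {
    enum Isolation { READ_COMMITTED, REPEATABLE_READ }

    static class ToyTx {
        private final Map<String, Integer> store;
        private final Isolation isolation;
        private final Map<String, Integer> snapshot = new HashMap<>();

        ToyTx(Map<String, Integer> store, Isolation isolation) {
            this.store = store;
            this.isolation = isolation;
        }

        Integer get(String key) {
            if (isolation == Isolation.REPEATABLE_READ)
                // First read pins the value for the rest of the transaction.
                return snapshot.computeIfAbsent(key, store::get);
            return store.get(key); // READ_COMMITTED: always re-read.
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> store = new HashMap<>();
        store.put("A", 1);

        ToyTx rc = new ToyTx(store, Isolation.READ_COMMITTED);
        ToyTx rr = new ToyTx(store, Isolation.REPEATABLE_READ);

        rc.get("A"); // both transactions first read A = 1
        rr.get("A");

        store.put("A", 2); // another transaction commits in between

        System.out.println("RC second read: " + rc.get("A")); // prints 2
        System.out.println("RR second read: " + rr.get("A")); // prints 1
    }
}
```

In real Ignite the REPEATABLE_READ behavior is enforced by the read lock held until commit/rollback (PESSIMISTIC mode), not by an explicit snapshot map; the sketch only mirrors the observable read semantics.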
Re: [MTCGA]: new failures in builds [1532575] needs to be handled
Hi Sergey, thank you. I hope community members will pick up this issue. Thu, 26 Jul 2018 at 14:21, Sergey Chugunov : > No functionality was broken, the problem is in the test itself. I created a > ticket [1] to fix it and am going to mute it on TC. > > [1] https://issues.apache.org/jira/browse/IGNITE-9087 > > On Wed, Jul 25, 2018 at 8:42 PM Sergey Chugunov > > wrote: > > > I'll take a look at this test as I'm the author of it. > > > > On Wed, Jul 25, 2018 at 6:56 PM wrote: > > > >> Hi Ignite Developer, > >> > >> I am MTCGA.Bot, and I've detected an issue on TeamCity to be > addressed. > >> I hope you can help. > >> > >> *New test failure in master > >> IgniteCacheClientReconnectTest.testClientInForceServerModeStopsOnExchangeHistoryExhaustion > >> > >> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=2977382929811006222&branch=%3Cdefault%3E&tab=testDetails > >> Changes that may have led to the failure were done by > >> - somefireone > >> http://ci.ignite.apache.org/viewModification.html?modId=826273&personal=false > >> - vinokurov.pasha > >> http://ci.ignite.apache.org/viewModification.html?modId=826253&personal=false > >> - dmitriy.govorukhin > >> http://ci.ignite.apache.org/viewModification.html?modId=826250&personal=false > >> - kaa.dev > >> http://ci.ignite.apache.org/viewModification.html?modId=826246&personal=false > >> - vanen31 > >> http://ci.ignite.apache.org/viewModification.html?modId=826242&personal=false > >> - garus.d.g > >> http://ci.ignite.apache.org/viewModification.html?modId=826234&personal=false > >> - ivandasch > >> http://ci.ignite.apache.org/viewModification.html?modId=826229&personal=false > >> - av > >> http://ci.ignite.apache.org/viewModification.html?modId=826218&personal=false > >> - estanilovskiy > >> http://ci.ignite.apache.org/viewModification.html?modId=826197&personal=false > >> - dmitriy.govorukhin > >> http://ci.ignite.apache.org/viewModification.html?modId=826195&personal=false > >>
- If your changes may have led to this failure, please create an issue > >> with the label MakeTeamCityGreenAgain and assign it to yourself. > >> -- If you have a fix, please set the ticket to PA state and write to the > >> dev list that the fix is ready. > >> -- If the fix will require some time, please mute the test and add the > >> Muted_Test label to the issue. > >> - If you know which change caused the failure, please contact the change > >> author directly. > >> - If you don't know which change caused the failure, please send a > >> message to the dev list to find out. > >> Should you have any questions, please contact dpav...@apache.org or write > >> to the dev list. > >> Best Regards, > >> MTCGA.Bot > >> Notification generated at Wed Jul 25 18:02:09 MSK 2018 > >> > > >
[jira] [Created] (IGNITE-9087) testClientInForceServerModeStopsOnExchangeHistoryExhaustion refactoring
Sergey Chugunov created IGNITE-9087: --- Summary: testClientInForceServerModeStopsOnExchangeHistoryExhaustion refactoring Key: IGNITE-9087 URL: https://issues.apache.org/jira/browse/IGNITE-9087 Project: Ignite Issue Type: Test Reporter: Sergey Chugunov The initial implementation of the test relied on a massive parallel client start to get into a situation of exchange history exhaustion. But after the fix for IGNITE-8998, even in the massive start scenario the probability of a client's exchange being cleaned up from exchange history is much smaller. The test should be refactored so it won't rely on parallel operations but instead delays exchange finish (e.g. by delaying particular messages). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-9086) Error during commit transaction on primary node may lead to breaking transaction data integrity
Pavel Kovalenko created IGNITE-9086: --- Summary: Error during commit transaction on primary node may lead to breaking transaction data integrity Key: IGNITE-9086 URL: https://issues.apache.org/jira/browse/IGNITE-9086 Project: Ignite Issue Type: Bug Components: cache Affects Versions: 2.6, 2.5, 2.4 Reporter: Pavel Kovalenko Fix For: 2.7 Transaction properties are PESSIMISTIC, REPEATABLE_READ. If the primary partitions participating in the transaction are spread across several nodes, and the commit fails on some of the primary nodes while other primary nodes have committed the transaction, transaction data integrity may be broken. The data remains inconsistent even after rebalance, when the node with the failed commit returns to the cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Re: Deprecating LOCAL cache
I meant LOCAL + non-LOCAL transactions of course. On Wed, Jul 25, 2018 at 10:42 PM Dmitriy Setrakyan wrote: > Vladimir, > > Are you suggesting that a user cannot span more than one local cache in a > cross-cache LOCAL transaction? This is extremely surprising to me, as it > would require almost no effort to support it. As far as mixing the local > caches with distributed caches, I agree, cross-cache transactions do > not make sense there. > > I am not sure why deprecating local caches has become a pressing issue. I > can see that there are a few bugs, but why not just fix them and move on? > Can someone explain why supporting LOCAL caches is such a burden? > > Having said that, I am not completely opposed to deprecating LOCAL caches. > I just want to know why. > > D. > > On Wed, Jul 25, 2018 at 10:55 AM, Vladimir Ozerov > wrote: > > > Dima, > > > > LOCAL cache adds very little value to the product. It doesn't support > > cross-cache transactions, consumes a lot of memory, and is much slower than any > > widely-used concurrent hash map. Let's go the same way as Java - mark LOCAL > > cache as "deprecated for removal", and then remove it in 3.0. > > > > On Wed, Jul 25, 2018 at 12:10 PM Dmitrii Ryabov > > wrote: > > > > > +1 to making LOCAL a filtered PARTITIONED cache. I think it would be much > > > easier and faster than fixing all the bugs. > > > > > > 2018-07-25 11:51 GMT+03:00 Dmitriy Setrakyan : > > > > > > > I would stay away from deprecating such a huge piece as the whole LOCAL > > > cache. > > > > In retrospect, we should probably not even have LOCAL caches, but now I am > > > > certain that they are used by many users. > > > > > > > > I would do one of the following, whichever one is easier: > > > > > > > >- Fix the issues found with LOCAL caches, including persistence support > > > >- Implement LOCAL caches as PARTITIONED caches over the local node.
> > In > > > >this case, we would have to hide any distribution-related config > > from > > > >users, like affinity function, for example. > > > > > > > > D. > > > > > > > > On Wed, Jul 25, 2018 at 9:05 AM, Valentin Kulichenko < > > > > valentin.kuliche...@gmail.com> wrote: > > > > > > > > > It sounds like the main drawback of LOCAL cache is that it's > > > implemented > > > > > separately and therefore has to be maintained separately. If that's > > the > > > > > only issue, why not keep LOCAL cache mode on public API, but > > implement > > > it > > > > > as a PARTITIONED cache with a node filter forcefully set? That's > > > similar > > > > to > > > > > what we do with REPLICATED caches which are actually PARTITIONED > with > > > > > infinite number of backups. > > > > > > > > > > This way we fix the issues described by Stan and don't have to > > > deprecate > > > > > anything. > > > > > > > > > > -Val > > > > > > > > > > On Wed, Jul 25, 2018 at 12:53 AM Stanislav Lukyanov < > > > > > stanlukya...@gmail.com> > > > > > wrote: > > > > > > > > > > > Hi Igniters, > > > > > > > > > > > > I’d like to start a discussion about the deprecation of the LOCAL > > > > caches. > > > > > > > > > > > > LOCAL caches are an edge-case functionality > > > > > > I haven’t done any formal analysis, but from my experience LOCAL > > > caches > > > > > > are needed very rarely, if ever. > > > > > > I think most usages of LOCAL caches I’ve seen were misuses: the > > users > > > > > > actually needed a simple HashMap, or an actual PARTITIONED cache. > > > > > > > > > > > > LOCAL caches are easy to implement on top of PARTITIONED > > > > > > If one requires a LOCAL cache (which is itself questionable, as > > > > discussed > > > > > > above) it is quite easy to implement one on top of PARTITIONED > > cache. > > > > > > A node filter of form `node -> node.id().equals(localNodeId)` is > > > > enough > > > > > > to make the cache to be stored on the node that created it. 
> > > > > > Locality of access to the cache (i.e. making it unavailable from other > > > > > > nodes) can be achieved on the application level. > > > > > > > > > > > > LOCAL caches are hard to maintain > > > > > > A quick look at the open issues mentioning “local cache” suggests that > > > > > > this is a corner case for the implementation of many Ignite features: > > > > > > https://issues.apache.org/jira/issues/?jql=text%20~%20%22local%20cache%22%20and%20%20project%20%3D%20IGNITE%20and%20status%20%3D%20open > > > > > > In particular, a recent SO question brought up the fact that LOCAL caches > > > > > > don’t support native persistence: > > > > > > https://stackoverflow.com/questions/51511892/how-to-configure-persistent-storage-for-apache-ignite-cache > > > > > > Having to ask ourselves “how does it play with LOCAL caches” every time we > > > > > > write any code in Ignite seems way too much for the benefits we gain from it. > > > > > > > > > > > > Proposal > > > > > > Let’s deprecate
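The node filter Stan and Val describe can be sketched with plain JDK types (ClusterNode below is an invented stand-in for Ignite's interface, and localOnly is a hypothetical helper; in real Ignite the filter would be an IgnitePredicate passed to the cache configuration):

```java
import java.util.List;
import java.util.UUID;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Invented stand-in for Ignite's ClusterNode; only the id() accessor matters here.
public class LocalNodeFilterSketch {
    interface ClusterNode {
        UUID id();
    }

    // The filter from the proposal: keep only the node that created the cache.
    static Predicate<ClusterNode> localOnly(UUID localNodeId) {
        return node -> node.id().equals(localNodeId);
    }

    public static void main(String[] args) {
        UUID local = UUID.randomUUID();
        UUID remote = UUID.randomUUID();

        ClusterNode localNode = () -> local;
        ClusterNode remoteNode = () -> remote;
        List<ClusterNode> topology = List.of(localNode, remoteNode);

        // Only the "local" node passes the filter, so a PARTITIONED cache
        // configured with it would keep all its partitions on that node.
        List<ClusterNode> owners = topology.stream()
            .filter(localOnly(local))
            .collect(Collectors.toList());

        System.out.println("owning nodes: " + owners.size()); // prints 1
    }
}
```

This only models which nodes would hold cache data; as the thread notes, restricting *access* to the creating node would still have to be enforced at the application level.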
[GitHub] ignite pull request #4434: IGNITE-8361 Use discovery messages for service de...
GitHub user daradurvs opened a pull request: https://github.com/apache/ignite/pull/4434 IGNITE-8361 Use discovery messages for service deployment The PR contains changes to cover the following tasks: - [Use discovery messages for service deployment](https://issues.apache.org/jira/browse/IGNITE-8361) - [Collect service deployment results asynchronously on coordinator](https://issues.apache.org/jira/browse/IGNITE-8362) - [Propagate service deployment results from assigned nodes to initiator](https://issues.apache.org/jira/browse/IGNITE-3392) - [Handle topology changes during service deployment](https://issues.apache.org/jira/browse/IGNITE-8363) - [Propagate deployed services to joining nodes](https://issues.apache.org/jira/browse/IGNITE-8364) You can merge this pull request into a Git repository by running: $ git pull https://github.com/daradurvs/ignite ignite-8361-to-master Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4434.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4434 commit 4517b5c43125107cb25f8e5e1da5bc00a8f5c252 Author: Vyacheslav Daradur Date: 2018-07-26T07:47:40Z implemented ---
[jira] [Created] (IGNITE-9085) Web console: Actualize Login page carousel images
Vasiliy Sisko created IGNITE-9085: - Summary: Web console: Actualize Login page carousel images Key: IGNITE-9085 URL: https://issues.apache.org/jira/browse/IGNITE-9085 Project: Ignite Issue Type: Bug Components: wizards Reporter: Vasiliy Sisko Assignee: Vica Abramova -- This message was sent by Atlassian JIRA (v7.6.3#76005)