[jira] [Created] (IGNITE-2561) Optimize ATOMIC cache updates with single key.
Vladimir Ozerov created IGNITE-2561: --- Summary: Optimize ATOMIC cache updates with single key. Key: IGNITE-2561 URL: https://issues.apache.org/jira/browse/IGNITE-2561 Project: Ignite Issue Type: Sub-task Components: cache Affects Versions: 1.5.0.final Reporter: Vladimir Ozerov Assignee: Vladimir Ozerov Priority: Critical Fix For: 1.6 We can significantly reduce the amount of network traffic and GC pressure for the most frequent ATOMIC cache scenario: an update with a single key. To achieve this we must add special single-key versions of all update requests/responses, as well as the corresponding futures. Furthermore, we should not create KV maps for single-key operations. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
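The "no KV maps for single-key operations" idea from the ticket can be sketched as follows. This is a minimal illustration of the allocation-avoidance pattern, not Ignite's actual request classes; all names are invented:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class SingleKeyDemo {
    // Hypothetical update request that keeps a single entry in plain fields
    // and only allocates a map once a second entry is added.
    static class UpdateRequest<K, V> {
        private K singleKey;       // fast path: exactly one entry
        private V singleVal;
        private Map<K, V> entries; // slow path: lazily allocated for batches

        void add(K key, V val) {
            if (entries == null && singleKey == null) {
                singleKey = key;
                singleVal = val;
            }
            else {
                if (entries == null) {
                    // Promote the single entry to a map on the second add.
                    entries = new LinkedHashMap<>();
                    entries.put(singleKey, singleVal);
                    singleKey = null;
                    singleVal = null;
                }
                entries.put(key, val);
            }
        }

        Map<K, V> entries() {
            if (entries != null)
                return entries;
            return singleKey != null
                ? Collections.singletonMap(singleKey, singleVal)
                : Collections.emptyMap();
        }
    }

    public static void main(String[] args) {
        UpdateRequest<String, Integer> req = new UpdateRequest<>();
        req.add("k1", 1);
        System.out.println(req.entries()); // single-key path, no map allocated
    }
}
```

The same shape applies on the wire: a dedicated single-key message carries the key and value directly instead of a one-entry collection.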
Re: API doc for .NET
We have Doxygen build plans that generate the docs. However, it appears that these docs are not published on the website. Who can assist us with this? On Fri, Feb 5, 2016 at 1:57 AM, Valentin Kulichenko < valentin.kuliche...@gmail.com> wrote: > Igniters, > > Do we have API doc for .NET? If not, I think we should create it and put on > the website. > > -Val >
Re: Injections in entry processor
Makes sense! I created the ticket: https://issues.apache.org/jira/browse/IGNITE-2560 -Val On Thu, Feb 4, 2016 at 6:22 PM, Dmitriy Setrakyan wrote: > Agree. I think we can avoid the performance degradation… If we cache the > fact that EP does not have any resource annotations, then we can skip the > injection without any performance impact, no? > > On Thu, Feb 4, 2016 at 5:48 PM, Valentin Kulichenko < > valentin.kuliche...@gmail.com> wrote: > > > Igniters, > > > > I noticed that we don't inject resources to entry processors. This > doesn't > > look consistent, because do this everywhere else (closures, jobs, > > listeners, etc.). But at the same time I believe it will cause > performance > > degradation, because we will have to inject on each operation. > > > > Any thoughts on that? Maybe we should support it, but make it optional > with > > a configuration flag? > > > > -Val > > >
[jira] [Created] (IGNITE-2560) Support injections in entry processors
Valentin Kulichenko created IGNITE-2560: --- Summary: Support injections in entry processors Key: IGNITE-2560 URL: https://issues.apache.org/jira/browse/IGNITE-2560 Project: Ignite Issue Type: Improvement Components: cache Reporter: Valentin Kulichenko Fix For: 1.6 Currently resources are not injected into entry processors, which is inconsistent with other functionality like closures, jobs, listeners, etc. To avoid performance degradation we should introspect the class only once, cache this information, and not attempt injection if there are no annotations.
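The proposed optimization — introspect once, cache the result, skip injection when there is nothing to inject — can be sketched like this. The annotation and processor classes here are stand-ins, not Ignite's real resource annotations or internals:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class InjectionCache {
    // Stand-in for Ignite's resource annotations (e.g. @IgniteInstanceResource).
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface InjectResource {}

    // Per-class result of the one-time introspection.
    private static final Map<Class<?>, Boolean> NEEDS_INJECTION = new ConcurrentHashMap<>();

    // Introspect the class hierarchy once; every later call is a cache hit,
    // so entry processors without annotations skip injection entirely.
    static boolean needsInjection(Class<?> cls) {
        return NEEDS_INJECTION.computeIfAbsent(cls, c -> {
            for (Class<?> k = c; k != null; k = k.getSuperclass())
                for (Field f : k.getDeclaredFields())
                    if (f.isAnnotationPresent(InjectResource.class))
                        return true;
            return false;
        });
    }

    static class PlainProcessor {}
    static class ResourceProcessor { @InjectResource Object ignite; }

    public static void main(String[] args) {
        System.out.println(needsInjection(PlainProcessor.class));    // false
        System.out.println(needsInjection(ResourceProcessor.class)); // true
    }
}
```

With this shape, the per-operation cost for a processor with no annotations is a single concurrent map lookup.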
Re: Injections in entry processor
Agree. I think we can avoid the performance degradation… If we cache the fact that EP does not have any resource annotations, then we can skip the injection without any performance impact, no? On Thu, Feb 4, 2016 at 5:48 PM, Valentin Kulichenko < valentin.kuliche...@gmail.com> wrote: > Igniters, > > I noticed that we don't inject resources to entry processors. This doesn't > look consistent, because do this everywhere else (closures, jobs, > listeners, etc.). But at the same time I believe it will cause performance > degradation, because we will have to inject on each operation. > > Any thoughts on that? Maybe we should support it, but make it optional with > a configuration flag? > > -Val >
Injections in entry processor
Igniters, I noticed that we don't inject resources into entry processors. This doesn't look consistent, because we do this everywhere else (closures, jobs, listeners, etc.). But at the same time I believe it will cause performance degradation, because we will have to inject on each operation. Any thoughts on that? Maybe we should support it, but make it optional with a configuration flag? -Val
[jira] [Created] (IGNITE-2559) Transaction hangs if entry processor is not serializable
Valentin Kulichenko created IGNITE-2559: --- Summary: Transaction hangs if entry processor is not serializable Key: IGNITE-2559 URL: https://issues.apache.org/jira/browse/IGNITE-2559 Project: Ignite Issue Type: Bug Reporter: Valentin Kulichenko Priority: Critical Fix For: 1.6 Test attached. If entry processor doesn't implement {{Serializable}}, the exception is thrown, but transaction hangs forever. Hanged thread dump: {noformat} "main" #1 prio=5 os_prio=31 tid=0x7faf8380 nid=0x1703 waiting on condition [0x70218000] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x00076b1e75f0> (a org.apache.ignite.internal.util.future.GridEmbeddedFuture) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283) at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:157) at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:117) at org.apache.ignite.internal.processors.cache.GridCacheAdapter$24.op(GridCacheAdapter.java:2296) at org.apache.ignite.internal.processors.cache.GridCacheAdapter$24.op(GridCacheAdapter.java:2283) at org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4291) at org.apache.ignite.internal.processors.cache.GridCacheAdapter.invoke0(GridCacheAdapter.java:2283) at org.apache.ignite.internal.processors.cache.GridCacheAdapter.invoke(GridCacheAdapter.java:2261) at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.invoke(IgniteCacheProxy.java:1518) at ScratchClient.main(ScratchClient.java:29) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144) {noformat}
[jira] [Created] (IGNITE-2558) NearCacheConfiguration should not extend MutableConfiguration
Valentin Kulichenko created IGNITE-2558: --- Summary: NearCacheConfiguration should not extend MutableConfiguration Key: IGNITE-2558 URL: https://issues.apache.org/jira/browse/IGNITE-2558 Project: Ignite Issue Type: Improvement Components: cache Reporter: Valentin Kulichenko Fix For: 1.6 Currently {{NearCacheConfiguration}} extends {{MutableConfiguration}}, which means that it inherits many properties that don't make sense for it. It's also possible to use it to create a cache with {{CacheManager}}, which is confusing. {{NearCacheConfiguration}} should not extend any class and should be used only directly in {{CacheConfiguration.setNearConfiguration()}}, {{Ignite.createCache()}}, and similar methods. In addition, it would be useful to add a {{setCopyOnRead}} property for the near cache; currently it's not possible to control this behavior for the near cache only.
Re: Transformers in SCAN queries
Val, From my point of view, a special query class that allows transforming is confusing. Two points about it: 1. ScanTransformQuery will duplicate ScanQuery code, with all the drawbacks that entails. Moreover, any fix for ScanQuery would have to be repeated for ScanTransformQuery, which will lead to bugs. DRY. 2. If some users want to do transformations for SqlQuery, we will introduce SqlTransformQuery, right? At that point, see the previous item. Transformation logic is a kind of strategy. IMHO, the most convenient API should take the transformation logic as a function provided by the user. On Thu, Feb 4, 2016 at 11:05 PM, Valentin Kulichenko < valentin.kuliche...@gmail.com> wrote: > Agree with Sergi. Mixing SQL with Java code transformers is confusing. In > rare case when it's really required, user can implement a custom function. > > I copy-pasted the API to the ticket [1]. Please provide any additional > comments there. > > [1] https://issues.apache.org/jira/browse/IGNITE-2546 > > -Val > > On Thu, Feb 4, 2016 at 10:08 AM, Andrey Gura wrote: > > > Sergi, > > > > > > > What you are going to transform remotely here? > > > > > > I'm not going. I believe :) > > > > Just hypothetical use case: You have one SqlFieldsQuery but different > > requirements for returned values. For one case you have to return some > > string fields as is and for another case you have to trim string to 32 > > characters. Of course you still can trim strings locally but transformers > > allow you do it remotely. > > > > Anyway, this solution can be usefull for rest query types. > > > > On Thu, Feb 4, 2016 at 8:54 PM, Sergi Vladykin > > > wrote: > > > > > The whole point of Transformer is to do remote transform, how will this > > > work with SqlFieldsQuery? What you are going to transform remotely > here? > > I > > > believe all the transformations must happen at SQL level here. 
> > > > > > Sergi > > > > > > > > > > > > 2016-02-04 20:10 GMT+03:00 Andrey Gura : > > > > > > > SqlQuery, TextQuery and SpiQuery are similar to ScanQuery because all > > of > > > > them also defined as Query>. > > > > > > > > It can be usefull to have one query SqlQuery that can return > different > > > > result that will be produced from cache entry by transformer. > > > > > > > > Actually only SqlFieldsQuery has different definition. So > transformers > > > can > > > > be applied to any type of query (including SqlFieldsQuery, I > believe). > > > > > > > > On Thu, Feb 4, 2016 at 7:42 PM, Sergi Vladykin < > > sergi.vlady...@gmail.com > > > > > > > > wrote: > > > > > > > > > I don't like the idea of having additional method *query(Query > > qry, > > > > > Transformer transfomer); *because I don't see how these > > > > transformers > > > > > will work for example with SQL, but this API makes you think that > > > > > transformers are supported for all the query types. > > > > > > > > > > Sergi > > > > > > > > > > 2016-02-04 16:46 GMT+03:00 Andrey Gura : > > > > > > > > > > > Val, > > > > > > > > > > > > can we introduce new method into IgnteCache API? > > > > > > > > > > > > Now we have method: public QueryCursor query(Query > qry); > > > > > > > > > > > > New method will be something like this: QueryCursor > > > > query(Query > > > > > > qry, Transformer transfomer); > > > > > > > > > > > > It allows provide transformers for all query types and chnages > will > > > be > > > > > > related only with query cursor functionality. > > > > > > > > > > > > Will it work? > > > > > > > > > > > > On Thu, Feb 4, 2016 at 11:13 AM, Andrey Kornev < > > > > andrewkor...@hotmail.com > > > > > > > > > > > > wrote: > > > > > > > > > > > > > Another perhaps bigger problem with running queries (including > > scan > > > > > > > queries) using closures was discussed at length on the @dev not > > so > > > > long > > > > > > > ago. 
It has to do with partitions migration due to cluster > > topology > > > > > > changes > > > > > > > which may result in the query returning incomplete result. And > > > while > > > > it > > > > > > is > > > > > > > possible to solve this problem for the scan queries by using > some > > > > > clever > > > > > > > tricks, all bets are off with the SQL queries.Andrey > > > > > > > _ > > > > > > > From: Valentin Kulichenko > > > > > > > Sent: Thursday, February 4, 2016 6:29 AM > > > > > > > Subject: Re: Transformers in SCAN queries > > > > > > > To: > > > > > > > > > > > > > > > > > > > > >Dmitry, > > > > > > > > > > > > > > The main difference in my view is that you lose pagination > when > > > > > sending > > > > > > > results from servers to client. What if one wants to iterate > > > through > > > > > all > > > > > > > entries in cache? > > > > > > > > > > > > > > On Wed, Feb 3, 2016 at 9:47 PM, Dmitriy Setrakyan < > > > > > > dsetrak...@apache.org> > > > > > > > wrote: > > > > > > > > > > > > > > > Valentin, > > > > > > > > > > > > > > > > Wouldn’t the same effect be achieved by broadcasting a > closure > >
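The overload debated in this thread, query(Query qry, Transformer transformer), could be sketched roughly as below. The interfaces are local stand-ins for Ignite's QueryCursor and the proposed Transformer, not the real API; the point is that the transformation composes with lazy cursor iteration, so the pagination Val mentions is preserved:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class TransformQueryDemo {
    // Local stand-ins for the proposed API; not Ignite's actual interfaces.
    interface Transformer<T, R> { R apply(T row); }
    interface QueryCursor<T> extends Iterable<T> {}

    // query(qry, transformer): wrap the original cursor so rows are
    // transformed as they are read, keeping cursor-style lazy pagination.
    static <T, R> QueryCursor<R> query(QueryCursor<T> cur, Transformer<T, R> t) {
        return () -> {
            Iterator<T> it = cur.iterator();
            return new Iterator<R>() {
                @Override public boolean hasNext() { return it.hasNext(); }
                @Override public R next() { return t.apply(it.next()); }
            };
        };
    }

    // Demo: the "return only keys" use case from the thread.
    static List<Integer> demoKeys() {
        QueryCursor<Map.Entry<Integer, String>> scan =
            () -> List.of(Map.entry(1, "a"), Map.entry(2, "b")).iterator();
        List<Integer> keys = new ArrayList<>();
        for (Integer k : query(scan, Map.Entry::getKey))
            keys.add(k);
        return keys;
    }

    public static void main(String[] args) {
        System.out.println(demoKeys()); // [1, 2]
    }
}
```

In the real system the transform would run on the server nodes before pages are sent, which is where the network savings come from; this sketch only shows the API shape.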
[GitHub] ignite pull request: IGNITE-2249 - Do not deserialize services on ...
Github user vkulichenko closed the pull request at: https://github.com/apache/ignite/pull/364
API doc for .NET
Igniters, Do we have API doc for .NET? If not, I think we should create it and put on the website. -Val
Fwd: Apache Drill querying IGFS-accelerated (H)DFS?
Igniters, An interesting question was posted on the user list (see below). Have we ever tested Hadoop Accelerator with Apache Drill? -Val -- Forwarded message -- From: pshomov Date: Thu, Feb 4, 2016 at 1:56 PM Subject: Apache Drill querying IGFS-accelerated (H)DFS? To: u...@ignite.apache.org Hi guys, I am all new to Hadoop and Ignite and I am trying to achieve something which I am not sure is possible, at least not at this moment. We are using Apache Drill to query files in HDFS but we would really appreciate being able to speed up HDFS. According to the Ignite documentation it seems that is not possible. Can you please advise me in which direction I should explore? We think Ignite is very promising so if we can use IGFS on top of something else that gives us reliable distributed storage I am ready to explore that too. Any feedback would be much appreciated! Best regards, Petar
Re: affinityRun() and affinityCall() (JIRA ticket)
Hi! I looked through your latest pull request and left several comments on the ticket. -Val On Fri, Jan 29, 2016 at 8:56 PM, Valentin Kulichenko < valentin.kuliche...@gmail.com> wrote: > Responded in the ticket. > > -Val > > On Fri, Jan 29, 2016 at 7:05 AM, Dood@ODDO wrote: > >> Val, >> >> Before I go on and submit pull requests etc. - would you comment on the >> path I am taking with this? As I said I am not a JAVA developer but I am >> trying to teach myself the language and contribute at the same time ;) >> >> Here are my thoughts on implementing this for the queue >> (GridCacheQueueAdapter.java). I have also declared the following in >> IgniteQueue.java: >> >> @IgniteAsyncSupported >> public void affinityRun(IgniteRunnable job) throws IgniteException; >> >> @IgniteAsyncSupported >> public R affinityCall(IgniteCallable job) throws IgniteException; >> >> Here is what is in GridCacheQueueAdapter.java >> >> /** {@inheritDoc} */ >> public void affinityRun(IgniteRunnable job) { >> if (!collocated) >> throw new IgniteException("Illegal operation requested on non-collocated >> queue:affinityRun()."); >> >> try { >> compute.affinityRun(cache.name(),queueKey,job); >> } >> catch (IgniteException e) { >> throw e; >> } >> } >> >> /** {@inheritDoc} */ >> public R affinityCall(IgniteCallable job) { >> if (!collocated) >> throw new IgniteException("Illegal operation requested on non-collocated >> queue:affinityCall()."); >> >> try { >> return compute.affinityCall(cache.name(),queueKey,job); >> } >> catch (IgniteException e) { >> throw e; >> } >> } >> >> I have included the following at the top of the class >> GridCacheQueueAdapter: >> private final IgniteCompute compute; >> >> this.compute = cctx.kernalContext().grid().compute(); >> >> Let me know what you think! 
>> >> >> On 1/27/2016 3:55 PM, Valentin Kulichenko wrote: >> >>> Hi, >>> >>> Both GridCacheQueueAdapter and GridCacheSetImpl have a reference to >>> GridCacheContext which represents the underlying cache for the data >>> structure. GridCacheContext.name() will give you the correct cache name >>> that you can use when calling affinityRun method. >>> >>> -Val >>> >>> On Wed, Jan 27, 2016 at 9:13 AM, Dood@ODDO wrote: >>> >>> Hello, I am playing with https://issues.apache.org/jira/browse/IGNITE-1144 as introduction to hacking on Ignite. I am not a Java developer by day but have experience writing code in various languages. This is my first in-depth exposure to Ignite internals (have lightly used it as a user in a POC project). Looking at this ticket, I am guessing that what it needs to do is get the cache name from the kernel context. After that it can just pass on the call (such as affinityRun()) to the regular affinityRun() call with the cache name filled in as the first parameter. This is because an internal (un-exposed) cache is used to track the queue/set data structures. Is this all correct? My question is: how do I get the cache name from within the queue implementation. Thanks! >> >
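The delegation pattern in the draft above can be reduced to the following self-contained sketch: reject the call for a non-collocated queue, then route the job through compute by the queue's key in the backing cache. All types here are stand-ins for the Ignite internals quoted in the email:

```java
public class QueueAffinitySketch {
    // Stand-in for IgniteCompute.affinityRun(cacheName, key, job).
    interface Compute {
        void affinityRun(String cacheName, Object key, Runnable job);
    }

    // Shape of the proposed IgniteQueue.affinityRun(): valid only for
    // collocated queues, then routed by the queue's key.
    static void affinityRun(boolean collocated, Compute compute,
                            String cacheName, Object queueKey, Runnable job) {
        if (!collocated)
            throw new IllegalStateException(
                "Illegal operation requested on non-collocated queue: affinityRun().");
        compute.affinityRun(cacheName, queueKey, job);
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Compute compute = (cache, key, job) -> {
            log.append(cache).append(':').append(key).append(':');
            job.run();
        };
        affinityRun(true, compute, "queueCache", "q1", () -> log.append("ran"));
        System.out.println(log); // queueCache:q1:ran
    }
}
```

Note that the catch-and-rethrow of IgniteException in the draft is a no-op and can simply be dropped, as this sketch does.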
Re: Transformers in SCAN queries
Agree with Sergi. Mixing SQL with Java code transformers is confusing. In rare case when it's really required, user can implement a custom function. I copy-pasted the API to the ticket [1]. Please provide any additional comments there. [1] https://issues.apache.org/jira/browse/IGNITE-2546 -Val On Thu, Feb 4, 2016 at 10:08 AM, Andrey Gura wrote: > Sergi, > > > > What you are going to transform remotely here? > > > I'm not going. I believe :) > > Just hypothetical use case: You have one SqlFieldsQuery but different > requirements for returned values. For one case you have to return some > string fields as is and for another case you have to trim string to 32 > characters. Of course you still can trim strings locally but transformers > allow you do it remotely. > > Anyway, this solution can be usefull for rest query types. > > On Thu, Feb 4, 2016 at 8:54 PM, Sergi Vladykin > wrote: > > > The whole point of Transformer is to do remote transform, how will this > > work with SqlFieldsQuery? What you are going to transform remotely here? > I > > believe all the transformations must happen at SQL level here. > > > > Sergi > > > > > > > > 2016-02-04 20:10 GMT+03:00 Andrey Gura : > > > > > SqlQuery, TextQuery and SpiQuery are similar to ScanQuery because all > of > > > them also defined as Query>. > > > > > > It can be usefull to have one query SqlQuery that can return different > > > result that will be produced from cache entry by transformer. > > > > > > Actually only SqlFieldsQuery has different definition. So transformers > > can > > > be applied to any type of query (including SqlFieldsQuery, I believe). 
> > > > > > On Thu, Feb 4, 2016 at 7:42 PM, Sergi Vladykin < > sergi.vlady...@gmail.com > > > > > > wrote: > > > > > > > I don't like the idea of having additional method *query(Query > qry, > > > > Transformer transfomer); *because I don't see how these > > > transformers > > > > will work for example with SQL, but this API makes you think that > > > > transformers are supported for all the query types. > > > > > > > > Sergi > > > > > > > > 2016-02-04 16:46 GMT+03:00 Andrey Gura : > > > > > > > > > Val, > > > > > > > > > > can we introduce new method into IgnteCache API? > > > > > > > > > > Now we have method: public QueryCursor query(Query qry); > > > > > > > > > > New method will be something like this: QueryCursor > > > query(Query > > > > > qry, Transformer transfomer); > > > > > > > > > > It allows provide transformers for all query types and chnages will > > be > > > > > related only with query cursor functionality. > > > > > > > > > > Will it work? > > > > > > > > > > On Thu, Feb 4, 2016 at 11:13 AM, Andrey Kornev < > > > andrewkor...@hotmail.com > > > > > > > > > > wrote: > > > > > > > > > > > Another perhaps bigger problem with running queries (including > scan > > > > > > queries) using closures was discussed at length on the @dev not > so > > > long > > > > > > ago. It has to do with partitions migration due to cluster > topology > > > > > changes > > > > > > which may result in the query returning incomplete result. 
And > > while > > > it > > > > > is > > > > > > possible to solve this problem for the scan queries by using some > > > > clever > > > > > > tricks, all bets are off with the SQL queries.Andrey > > > > > > _ > > > > > > From: Valentin Kulichenko > > > > > > Sent: Thursday, February 4, 2016 6:29 AM > > > > > > Subject: Re: Transformers in SCAN queries > > > > > > To: > > > > > > > > > > > > > > > > > >Dmitry, > > > > > > > > > > > > The main difference in my view is that you lose pagination when > > > > sending > > > > > > results from servers to client. What if one wants to iterate > > through > > > > all > > > > > > entries in cache? > > > > > > > > > > > > On Wed, Feb 3, 2016 at 9:47 PM, Dmitriy Setrakyan < > > > > > dsetrak...@apache.org> > > > > > > wrote: > > > > > > > > > > > > > Valentin, > > > > > > > > > > > > > > Wouldn’t the same effect be achieved by broadcasting a closure > > to > > > > the > > > > > > > cluster and executing scan-query on every node locally? > > > > > > > > > > > > > > D. > > > > > > > > > > > > > > On Wed, Feb 3, 2016 at 9:17 PM, Valentin Kulichenko < > > > > > > > valentin.kuliche...@gmail.com> wrote: > > > > > > > > > > > > > > > Igniters, > > > > > > > > > > > > > > > > I keep getting requests from our users to add optional > > > > transformers > > > > > to > > > > > > > SCAN > > > > > > > > queries. This will allow to iterate through cache, but do > not > > > > > transfer > > > > > > > > whole key-value pairs across networks (e.g., get only keys). > > The > > > > > > feature > > > > > > > > looks useful and I created a ticket [1]. > > > > > > > > > > > > > > > > I am struggling with the design now. The problem is that I > > > wanted > > > > to > > > > > > > extend > > > > > > > > existing ScanQuery object for this, but this seems to be > > > > impossible > > > >
Re: Transformers in SCAN queries
Sergi, > What you are going to transform remotely here? I'm not going. I believe :) Just a hypothetical use case: you have one SqlFieldsQuery but different requirements for the returned values. For one case you have to return some string fields as is, and for another case you have to trim the strings to 32 characters. Of course you can still trim the strings locally, but transformers allow you to do it remotely. Anyway, this solution can be useful for the rest of the query types. On Thu, Feb 4, 2016 at 8:54 PM, Sergi Vladykin wrote: > The whole point of Transformer is to do remote transform, how will this > work with SqlFieldsQuery? What you are going to transform remotely here? I > believe all the transformations must happen at SQL level here. > > Sergi > > > > 2016-02-04 20:10 GMT+03:00 Andrey Gura : > > > SqlQuery, TextQuery and SpiQuery are similar to ScanQuery because all of > > them also defined as Query>. > > > > It can be usefull to have one query SqlQuery that can return different > > result that will be produced from cache entry by transformer. > > > > Actually only SqlFieldsQuery has different definition. So transformers > can > > be applied to any type of query (including SqlFieldsQuery, I believe). > > > > On Thu, Feb 4, 2016 at 7:42 PM, Sergi Vladykin > > > wrote: > > > > > I don't like the idea of having additional method *query(Query qry, > > > Transformer transfomer); *because I don't see how these > > transformers > > > will work for example with SQL, but this API makes you think that > > > transformers are supported for all the query types. > > > > > > Sergi > > > > > > 2016-02-04 16:46 GMT+03:00 Andrey Gura : > > > > > > > Val, > > > > > > > > can we introduce new method into IgnteCache API? 
> > > > > > > > Now we have method: public QueryCursor query(Query qry); > > > > > > > > New method will be something like this: QueryCursor > > query(Query > > > > qry, Transformer transfomer); > > > > > > > > It allows provide transformers for all query types and chnages will > be > > > > related only with query cursor functionality. > > > > > > > > Will it work? > > > > > > > > On Thu, Feb 4, 2016 at 11:13 AM, Andrey Kornev < > > andrewkor...@hotmail.com > > > > > > > > wrote: > > > > > > > > > Another perhaps bigger problem with running queries (including scan > > > > > queries) using closures was discussed at length on the @dev not so > > long > > > > > ago. It has to do with partitions migration due to cluster topology > > > > changes > > > > > which may result in the query returning incomplete result. And > while > > it > > > > is > > > > > possible to solve this problem for the scan queries by using some > > > clever > > > > > tricks, all bets are off with the SQL queries.Andrey > > > > > _ > > > > > From: Valentin Kulichenko > > > > > Sent: Thursday, February 4, 2016 6:29 AM > > > > > Subject: Re: Transformers in SCAN queries > > > > > To: > > > > > > > > > > > > > > >Dmitry, > > > > > > > > > > The main difference in my view is that you lose pagination when > > > sending > > > > > results from servers to client. What if one wants to iterate > through > > > all > > > > > entries in cache? > > > > > > > > > > On Wed, Feb 3, 2016 at 9:47 PM, Dmitriy Setrakyan < > > > > dsetrak...@apache.org> > > > > > wrote: > > > > > > > > > > > Valentin, > > > > > > > > > > > > Wouldn’t the same effect be achieved by broadcasting a closure > to > > > the > > > > > > cluster and executing scan-query on every node locally? > > > > > > > > > > > > D. 
> > > > > > > > > > > > On Wed, Feb 3, 2016 at 9:17 PM, Valentin Kulichenko < > > > > > > valentin.kuliche...@gmail.com> wrote: > > > > > > > > > > > > > Igniters, > > > > > > > > > > > > > > I keep getting requests from our users to add optional > > > transformers > > > > to > > > > > > SCAN > > > > > > > queries. This will allow to iterate through cache, but do not > > > > transfer > > > > > > > whole key-value pairs across networks (e.g., get only keys). > The > > > > > feature > > > > > > > looks useful and I created a ticket [1]. > > > > > > > > > > > > > > I am struggling with the design now. The problem is that I > > wanted > > > to > > > > > > extend > > > > > > > existing ScanQuery object for this, but this seems to be > > > impossible > > > > > > because > > > > > > > it already extends Query> and thus can > iterate > > > > only > > > > > > > through entries. > > > > > > > > > > > > > > The only option I see now is to create a separate query type, > > > > > copy-paste > > > > > > > everything from ScanQuery and add *mandatory* transformer. > > > Something > > > > > like > > > > > > > this: > > > > > > > > > > > > > > ScanTransformQuery extends Query { > > > > > > > IgniteBiPredicate filter; > > > > > > > IgniteClosure, R> transformer; > > > > > > > int part; > > > > > > > ... > > > > > > > } > > > > > > > > > > > > > > Thoughts? Does anyone has other ideas? > >
[jira] [Created] (IGNITE-2557) ODBC: Add integrity tests.
Igor Sapego created IGNITE-2557: --- Summary: ODBC: Add integrity tests. Key: IGNITE-2557 URL: https://issues.apache.org/jira/browse/IGNITE-2557 Project: Ignite Issue Type: Sub-task Components: odbc Affects Versions: 1.5.0.final Reporter: Igor Sapego Assignee: Igor Sapego Fix For: 1.6 Need to add integrity tests that would work through the system API.
Re: Transformers in SCAN queries
The whole point of Transformer is to do remote transform, how will this work with SqlFieldsQuery? What you are going to transform remotely here? I believe all the transformations must happen at SQL level here. Sergi 2016-02-04 20:10 GMT+03:00 Andrey Gura : > SqlQuery, TextQuery and SpiQuery are similar to ScanQuery because all of > them also defined as Query>. > > It can be usefull to have one query SqlQuery that can return different > result that will be produced from cache entry by transformer. > > Actually only SqlFieldsQuery has different definition. So transformers can > be applied to any type of query (including SqlFieldsQuery, I believe). > > On Thu, Feb 4, 2016 at 7:42 PM, Sergi Vladykin > wrote: > > > I don't like the idea of having additional method *query(Query qry, > > Transformer transfomer); *because I don't see how these > transformers > > will work for example with SQL, but this API makes you think that > > transformers are supported for all the query types. > > > > Sergi > > > > 2016-02-04 16:46 GMT+03:00 Andrey Gura : > > > > > Val, > > > > > > can we introduce new method into IgnteCache API? > > > > > > Now we have method: public QueryCursor query(Query qry); > > > > > > New method will be something like this: QueryCursor > query(Query > > > qry, Transformer transfomer); > > > > > > It allows provide transformers for all query types and chnages will be > > > related only with query cursor functionality. > > > > > > Will it work? > > > > > > On Thu, Feb 4, 2016 at 11:13 AM, Andrey Kornev < > andrewkor...@hotmail.com > > > > > > wrote: > > > > > > > Another perhaps bigger problem with running queries (including scan > > > > queries) using closures was discussed at length on the @dev not so > long > > > > ago. It has to do with partitions migration due to cluster topology > > > changes > > > > which may result in the query returning incomplete result. 
And while > it > > > is > > > > possible to solve this problem for the scan queries by using some > > clever > > > > tricks, all bets are off with the SQL queries.Andrey > > > > _ > > > > From: Valentin Kulichenko > > > > Sent: Thursday, February 4, 2016 6:29 AM > > > > Subject: Re: Transformers in SCAN queries > > > > To: > > > > > > > > > > > >Dmitry, > > > > > > > > The main difference in my view is that you lose pagination when > > sending > > > > results from servers to client. What if one wants to iterate through > > all > > > > entries in cache? > > > > > > > > On Wed, Feb 3, 2016 at 9:47 PM, Dmitriy Setrakyan < > > > dsetrak...@apache.org> > > > > wrote: > > > > > > > > > Valentin, > > > > > > > > > > Wouldn’t the same effect be achieved by broadcasting a closure to > > the > > > > > cluster and executing scan-query on every node locally? > > > > > > > > > > D. > > > > > > > > > > On Wed, Feb 3, 2016 at 9:17 PM, Valentin Kulichenko < > > > > > valentin.kuliche...@gmail.com> wrote: > > > > > > > > > > > Igniters, > > > > > > > > > > > > I keep getting requests from our users to add optional > > transformers > > > to > > > > > SCAN > > > > > > queries. This will allow to iterate through cache, but do not > > > transfer > > > > > > whole key-value pairs across networks (e.g., get only keys). The > > > > feature > > > > > > looks useful and I created a ticket [1]. > > > > > > > > > > > > I am struggling with the design now. The problem is that I > wanted > > to > > > > > extend > > > > > > existing ScanQuery object for this, but this seems to be > > impossible > > > > > because > > > > > > it already extends Query> and thus can iterate > > > only > > > > > > through entries. > > > > > > > > > > > > The only option I see now is to create a separate query type, > > > > copy-paste > > > > > > everything from ScanQuery and add *mandatory* transformer. 
> > Something > > > > like > > > > > > this: > > > > > > > > > > > > ScanTransformQuery extends Query { > > > > > > IgniteBiPredicate filter; > > > > > > IgniteClosure, R> transformer; > > > > > > int part; > > > > > > ... > > > > > > } > > > > > > > > > > > > Thoughts? Does anyone has other ideas? > > > > > > > > > > > > [1]https://issues.apache.org/jira/browse/IGNITE-2546 > > > > > > > > > > > > -Val > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > Andrey Gura > > > GridGain Systems, Inc. > > > www.gridgain.com > > > > > > > > > -- > Andrey Gura > GridGain Systems, Inc. > www.gridgain.com >
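The ScanTransformQuery sketch quoted above lost its generic parameters in the archive. A plausible reconstruction, with local stand-ins for IgniteBiPredicate and IgniteClosure, might look like this (the exact signatures are a best guess, not what was actually proposed):

```java
import java.util.Map;

public class ScanTransformDemo {
    // Local stand-ins for Ignite's functional interfaces.
    interface IgniteBiPredicate<K, V> { boolean apply(K key, V val); }
    interface IgniteClosure<T, R> { R apply(T arg); }

    // Reconstructed shape of the proposed query class: scan with an optional
    // filter, a mandatory transformer, and an optional partition.
    static class ScanTransformQuery<K, V, R> {
        IgniteBiPredicate<K, V> filter;                // optional row filter
        IgniteClosure<Map.Entry<K, V>, R> transformer; // mandatory transform
        int part = -1;                                 // optional partition
    }

    // How a server node might evaluate the query against a single entry.
    static <K, V, R> R eval(ScanTransformQuery<K, V, R> qry, K key, V val) {
        if (qry.filter != null && !qry.filter.apply(key, val))
            return null; // entry filtered out, nothing sent over the network
        return qry.transformer.apply(Map.entry(key, val));
    }

    public static void main(String[] args) {
        ScanTransformQuery<Integer, String, Integer> q = new ScanTransformQuery<>();
        q.transformer = Map.Entry::getKey; // the "get only keys" use case
        System.out.println(eval(q, 42, "value")); // 42
    }
}
```

Because the transformer runs where the entry lives, only the transformed value (here just the key) would cross the network, which is the whole motivation stated in the thread.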
Re: [jira] [Created] (IGNITE-2554) Affinity.mapKeyToNode() method throw "ArithmeticException: / by zero" for LOCAL cache
Hi Ken, I've investigated the issue, and the short answer is that it's not a newbie issue. The longer answer is that there are (at least) two possible ways to fix it: a simple fix and a more involved one. I need to do the simple fix myself, because it's needed for a related issue, and the involved fix is too hard for a newbie; maybe it should be handled as a separate issue. In any case, I will add my investigation results to the ticket, and you can try to fix it according to my comments and provide a patch. Thanks, -- Artem -- On Thu, Feb 4, 2016 at 6:14 PM, Ken Cheng wrote: > fall into newbie tag? as I am a newbie to ignite. 😁 > > Thanks, > kcheng > > On Thu, Feb 4, 2016 at 11:10 PM, Ken Cheng wrote: > > > Hi Artem Shutak , > > > > *Does this issue easy to reproduce? As I would like to work on it.* > > > > > > > > > > Thanks, > > kcheng > > > > On Thu, Feb 4, 2016 at 10:56 PM, Artem Shutak (JIRA) > > wrote: > > > >> Artem Shutak created IGNITE-2554: > >> > >> > >> Summary: Affinity.mapKeyToNode() method throw > >> "ArithmeticException: / by zero" for LOCAL cache > >> Key: IGNITE-2554 > >> URL: https://issues.apache.org/jira/browse/IGNITE-2554 > >> Project: Ignite > >> Issue Type: Bug > >> Reporter: Artem Shutak > >> Priority: Minor > >> > >> > >> Affinity.mapKeyToNode() method throw "ArithmeticException: / by zero" > for > >> LOCAL cache. > >> > >> The following code > >> {code} > >> public static void main(String[] args) { > >> try (Ignite ignite = Ignition.start(new IgniteConfiguration())) > { > >> CacheConfiguration cc = new CacheConfiguration(); > >> > >> cc.setCacheMode(LOCAL); > >> cc.setName("myCache"); > >> > >> ignite.getOrCreateCache(cc); > >> > >> ignite.affinity("myCache").mapKeyToNode("myKey"); > >> } > >> } > >> {code} > >> > >> Produce the following exception.
> >> > >> {noformat} > >> Exception in thread "main" java.lang.ArithmeticException: / by zero > >> at > >> > org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl.mapKeysToNodes(GridCacheAffinityImpl.java:210) > >> at > >> > org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl.mapKeyToNode(GridCacheAffinityImpl.java:187) > >> at main > >> {noformat} > >> > >> The issue is {{cctx.discovery().cacheAffinityNodes(cctx.name(), > >> topVer)}} returns empty nodes collection. > >> > >> > >> > >> -- > >> This message was sent by Atlassian JIRA > >> (v6.3.4#6332) > >> > > > > >
[jira] [Created] (IGNITE-2556) The majority of method names in scalar module are (technically) invalid scala
Alec Zorab created IGNITE-2556: -- Summary: The majority of method names in scalar module are (technically) invalid scala Key: IGNITE-2556 URL: https://issues.apache.org/jira/browse/IGNITE-2556 Project: Ignite Issue Type: Bug Reporter: Alec Zorab According to current scala language specification, "the ‘$’ character is reserved for compiler-synthesized identifiers. User programs should not define identifiers which contain ‘$’ characters." [1] [1] http://www.scala-lang.org/files/archive/spec/2.11/01-lexical-syntax.html#identifiers -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Transformers in SCAN queries
SqlQuery, TextQuery and SpiQuery are similar to ScanQuery because all of them are also defined as Query<Cache.Entry<K, V>>. It could be useful to have one query, e.g. SqlQuery, that can return a different result produced from a cache entry by a transformer. Actually, only SqlFieldsQuery has a different definition. So transformers can be applied to any type of query (including SqlFieldsQuery, I believe). On Thu, Feb 4, 2016 at 7:42 PM, Sergi Vladykin wrote: > I don't like the idea of having additional method *query(Query qry, > Transformer transfomer); *because I don't see how these transformers > will work for example with SQL, but this API makes you think that > transformers are supported for all the query types. > > Sergi > > 2016-02-04 16:46 GMT+03:00 Andrey Gura : > > > Val, > > > > can we introduce new method into IgnteCache API? > > > > Now we have method: public QueryCursor query(Query qry); > > > > New method will be something like this: QueryCursor query(Query > > qry, Transformer transfomer); > > > > It allows provide transformers for all query types and chnages will be > > related only with query cursor functionality. > > > > Will it work? > > > > On Thu, Feb 4, 2016 at 11:13 AM, Andrey Kornev > > > wrote: > > > > > Another perhaps bigger problem with running queries (including scan > > > queries) using closures was discussed at length on the @dev not so long > > > ago. It has to do with partitions migration due to cluster topology > > changes > > > which may result in the query returning incomplete result. And while it > > is > > > possible to solve this problem for the scan queries by using some > clever > > > tricks, all bets are off with the SQL queries.Andrey > > > _ > > > From: Valentin Kulichenko > > > Sent: Thursday, February 4, 2016 6:29 AM > > > Subject: Re: Transformers in SCAN queries > > > To: > > > > > > > > >Dmitry, > > > > > > The main difference in my view is that you lose pagination when > sending > > > results from servers to client.
What if one wants to iterate through > all > > > entries in cache? > > > > > > On Wed, Feb 3, 2016 at 9:47 PM, Dmitriy Setrakyan < > > dsetrak...@apache.org> > > > wrote: > > > > > > > Valentin, > > > > > > > > Wouldn’t the same effect be achieved by broadcasting a closure to > the > > > > cluster and executing scan-query on every node locally? > > > > > > > > D. > > > > > > > > On Wed, Feb 3, 2016 at 9:17 PM, Valentin Kulichenko < > > > > valentin.kuliche...@gmail.com> wrote: > > > > > > > > > Igniters, > > > > > > > > > > I keep getting requests from our users to add optional > transformers > > to > > > > SCAN > > > > > queries. This will allow to iterate through cache, but do not > > transfer > > > > > whole key-value pairs across networks (e.g., get only keys). The > > > feature > > > > > looks useful and I created a ticket [1]. > > > > > > > > > > I am struggling with the design now. The problem is that I wanted > to > > > > extend > > > > > existing ScanQuery object for this, but this seems to be > impossible > > > > because > > > > > it already extends Query> and thus can iterate > > only > > > > > through entries. > > > > > > > > > > The only option I see now is to create a separate query type, > > > copy-paste > > > > > everything from ScanQuery and add *mandatory* transformer. > Something > > > like > > > > > this: > > > > > > > > > > ScanTransformQuery extends Query { > > > > > IgniteBiPredicate filter; > > > > > IgniteClosure, R> transformer; > > > > > int part; > > > > > ... > > > > > } > > > > > > > > > > Thoughts? Does anyone has other ideas? > > > > > > > > > > [1]https://issues.apache.org/jira/browse/IGNITE-2546 > > > > > > > > > > -Val > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > Andrey Gura > > GridGain Systems, Inc. > > www.gridgain.com > > > -- Andrey Gura GridGain Systems, Inc. www.gridgain.com
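The generic type parameters in the ScanTransformQuery sketch quoted in this thread were stripped by the mail archiver (e.g. "extends Query>" was presumably "extends Query<Cache.Entry<K, V>>"). A plain-JDK reconstruction of what the proposal likely looks like, with java.util.function.BiPredicate and Function standing in for Ignite's IgniteBiPredicate and IgniteClosure, and Map.Entry for Cache.Entry (these stand-ins are assumptions, not Ignite classes):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;
import java.util.function.BiPredicate;
import java.util.function.Function;

public class ScanTransformSketch {
    // Reconstruction of the quoted ScanTransformQuery sketch. BiPredicate
    // and Function stand in for Ignite's IgniteBiPredicate and IgniteClosure;
    // Map.Entry stands in for Cache.Entry.
    static class ScanTransformQuery<K, V, R> {
        BiPredicate<K, V> filter;                  // IgniteBiPredicate<K, V>
        Function<Map.Entry<K, V>, R> transformer;  // IgniteClosure<Cache.Entry<K, V>, R>
        int part;                                  // partition to scan
    }

    public static void main(String[] args) {
        // A query that keeps non-null values and returns only the keys, so
        // whole key-value pairs never have to cross the network.
        ScanTransformQuery<Integer, String, Integer> qry = new ScanTransformQuery<>();
        qry.filter = (k, v) -> v != null;
        qry.transformer = Map.Entry::getKey;

        Map.Entry<Integer, String> entry = new SimpleEntry<>(42, "value");
        if (qry.filter.test(entry.getKey(), entry.getValue()))
            System.out.println(qry.transformer.apply(entry)); // prints 42
    }
}
```

The mandatory transformer is what distinguishes this from ScanQuery: the result type R is decoupled from the entry type, which is exactly why it cannot simply extend the existing class.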
Re: snapshot transaction isolation
On Thu, Feb 4, 2016 at 1:56 AM, Alexey Goncharuk wrote: > So, basically, we want to add lockAll() method that locks entries without > returning their values to a client - this is a good idea. I do not want, > however, to call it SNAPSHOT isolation, because this is not what it is. > I think I see your point. All we need to do is allow for lock() and lockAll() invocations within a transaction, no? Do you know why we currently prohibit it?
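The semantics under discussion - acquiring locks on a set of entries without fetching their values - can be illustrated with plain JDK concurrency primitives (this is an illustration only, not Ignite's lock()/lockAll() implementation; class and method names are hypothetical). Locking keys in a deterministic sorted order is the usual way to keep two concurrent lockAll() calls from deadlocking on each other:

```java
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Pure-JDK illustration (not Ignite's API) of lockAll() semantics: acquire
// locks for a set of keys without reading their values.
public class LockAllSketch {
    private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    public void lockAll(Iterable<String> keys) {
        // Sort first: a deterministic acquisition order avoids deadlock
        // between concurrent lockAll() calls over overlapping key sets.
        SortedMap<String, ReentrantLock> ordered = new TreeMap<>();
        for (String k : keys)
            ordered.put(k, locks.computeIfAbsent(k, x -> new ReentrantLock()));
        for (ReentrantLock l : ordered.values())
            l.lock(); // values are never fetched, only the locks
    }

    public void unlockAll(Iterable<String> keys) {
        for (String k : keys) {
            ReentrantLock l = locks.get(k);
            if (l != null && l.isHeldByCurrentThread())
                l.unlock();
        }
    }

    public boolean heldByCurrentThread(String key) {
        ReentrantLock l = locks.get(key);
        return l != null && l.isHeldByCurrentThread();
    }

    public static void main(String[] args) {
        LockAllSketch cache = new LockAllSketch();
        cache.lockAll(java.util.Arrays.asList("b", "a"));
        System.out.println(cache.heldByCurrentThread("a")); // true
        cache.unlockAll(java.util.Arrays.asList("b", "a"));
    }
}
```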
Re: Transformers in SCAN queries
I don't like the idea of having additional method *query(Query qry, Transformer transfomer); *because I don't see how these transformers will work for example with SQL, but this API makes you think that transformers are supported for all the query types. Sergi 2016-02-04 16:46 GMT+03:00 Andrey Gura : > Val, > > can we introduce new method into IgnteCache API? > > Now we have method: public QueryCursor query(Query qry); > > New method will be something like this: QueryCursor query(Query > qry, Transformer transfomer); > > It allows provide transformers for all query types and chnages will be > related only with query cursor functionality. > > Will it work? > > On Thu, Feb 4, 2016 at 11:13 AM, Andrey Kornev > wrote: > > > Another perhaps bigger problem with running queries (including scan > > queries) using closures was discussed at length on the @dev not so long > > ago. It has to do with partitions migration due to cluster topology > changes > > which may result in the query returning incomplete result. And while it > is > > possible to solve this problem for the scan queries by using some clever > > tricks, all bets are off with the SQL queries.Andrey > > _ > > From: Valentin Kulichenko > > Sent: Thursday, February 4, 2016 6:29 AM > > Subject: Re: Transformers in SCAN queries > > To: > > > > > >Dmitry, > > > > The main difference in my view is that you lose pagination when sending > > results from servers to client. What if one wants to iterate through all > > entries in cache? > > > > On Wed, Feb 3, 2016 at 9:47 PM, Dmitriy Setrakyan < > dsetrak...@apache.org> > > wrote: > > > > > Valentin, > > > > > > Wouldn’t the same effect be achieved by broadcasting a closure to the > > > cluster and executing scan-query on every node locally? > > > > > > D. 
> > > > > > On Wed, Feb 3, 2016 at 9:17 PM, Valentin Kulichenko < > > > valentin.kuliche...@gmail.com> wrote: > > > > > > > Igniters, > > > > > > > > I keep getting requests from our users to add optional transformers > to > > > SCAN > > > > queries. This will allow to iterate through cache, but do not > transfer > > > > whole key-value pairs across networks (e.g., get only keys). The > > feature > > > > looks useful and I created a ticket [1]. > > > > > > > > I am struggling with the design now. The problem is that I wanted to > > > extend > > > > existing ScanQuery object for this, but this seems to be impossible > > > because > > > > it already extends Query> and thus can iterate > only > > > > through entries. > > > > > > > > The only option I see now is to create a separate query type, > > copy-paste > > > > everything from ScanQuery and add *mandatory* transformer. Something > > like > > > > this: > > > > > > > > ScanTransformQuery extends Query { > > > > IgniteBiPredicate filter; > > > > IgniteClosure, R> transformer; > > > > int part; > > > > ... > > > > } > > > > > > > > Thoughts? Does anyone has other ideas? > > > > > > > > [1]https://issues.apache.org/jira/browse/IGNITE-2546 > > > > > > > > -Val > > > > > > > > > > > > > > > > > > > > > -- > Andrey Gura > GridGain Systems, Inc. > www.gridgain.com >
Assertion in TCP Discovery
Igniters (esp Sam), I see this assertion when running tests. Can you please take a look at it? Here is the link to test history. Seems it is reproducible quite often. http://204.14.53.151/project.html?projectId=IgniteTests&testNameId=206571224502097749&tab=testDetails sockAddrs=[/127.0.0.1:47501], discPort=47501, order=254, intOrder=129, lastExchangeTime=1454472940154, loc=false, ver=1.6.0#19700101-sha1:, isClient=false] [04:15:45,187][ERROR][tcp-disco-msg-worker-#197759%cache.IgniteCachePutAllRestartTest1%][TestTcpDiscoverySpi] TcpDiscoverSpi's message worker thread failed abnormally. Stopping the node in order to prevent cluster wide instability. java.lang.AssertionError: [TcpDiscoveryNode [id=c196ddca-5558-430b-9ef4-b18b90a1, addrs=[127.0.0.1], sockAddrs=[/ 127.0.0.1:47501], discPort=47501, order=0, intOrder=129, lastExchangeTime=1454472943694, loc=true, ver=1.6.0#19700101-sha1:, isClient=false]] at org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNodesRing.nextNode(TcpDiscoveryNodesRing.java:481) at org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.sendMessageAcrossRing(ServerImpl.java:2378) at org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeFailedMessage(ServerImpl.java:4161) at org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2260) at org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:5786) at org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2160) at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62) --Yakov
[GitHub] ignite pull request: IGNITE-1144 Add affinity functions to colloca...
GitHub user oddodaoddo opened a pull request: https://github.com/apache/ignite/pull/459 IGNITE-1144 Add affinity functions to collocated Ignite Queue and Set Properly branched pull request as per instructions here: https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute You can merge this pull request into a Git repository by running: $ git pull https://github.com/oddodaoddo/ignite IGNITE-1144 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/459.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #459 commit 81ec8cb5cc5acf9f87893abe1d81a4d0260b8de6 Author: Oddo Da Date: 2016-02-04T15:20:39Z IGNITE-1144 Add affinity functions to collocated Ignite Queue and Set --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
Re: [jira] [Created] (IGNITE-2554) Affinity.mapKeyToNode() method throw "ArithmeticException: / by zero" for LOCAL cache
fall into newbie tag? as I am a newbie to ignite. 😁 Thanks, kcheng On Thu, Feb 4, 2016 at 11:10 PM, Ken Cheng wrote: > Hi Artem Shutak , > > *Does this issue easy to reproduce? As I would like to work on it.* > > > > > Thanks, > kcheng > > On Thu, Feb 4, 2016 at 10:56 PM, Artem Shutak (JIRA) > wrote: > >> Artem Shutak created IGNITE-2554: >> >> >> Summary: Affinity.mapKeyToNode() method throw >> "ArithmeticException: / by zero" for LOCAL cache >> Key: IGNITE-2554 >> URL: https://issues.apache.org/jira/browse/IGNITE-2554 >> Project: Ignite >> Issue Type: Bug >> Reporter: Artem Shutak >> Priority: Minor >> >> >> Affinity.mapKeyToNode() method throw "ArithmeticException: / by zero" for >> LOCAL cache. >> >> The following code >> {code} >> public static void main(String[] args) { >> try (Ignite ignite = Ignition.start(new IgniteConfiguration())) { >> CacheConfiguration cc = new CacheConfiguration(); >> >> cc.setCacheMode(LOCAL); >> cc.setName("myCache"); >> >> ignite.getOrCreateCache(cc); >> >> ignite.affinity("myCache").mapKeyToNode("myKey"); >> } >> } >> {code} >> >> Produce the following exception. >> >> {noformat} >> Exception in thread "main" java.lang.ArithmeticException: / by zero >> at >> org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl.mapKeysToNodes(GridCacheAffinityImpl.java:210) >> at >> org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl.mapKeyToNode(GridCacheAffinityImpl.java:187) >> at main >> {noformat} >> >> The issue is {{cctx.discovery().cacheAffinityNodes(cctx.name(), >> topVer)}} returns empty nodes collection. >> >> >> >> -- >> This message was sent by Atlassian JIRA >> (v6.3.4#6332) >> > >
Re: [jira] [Created] (IGNITE-2554) Affinity.mapKeyToNode() method throw "ArithmeticException: / by zero" for LOCAL cache
Hi Artem Shutak , *Does this issue easy to reproduce? As I would like to work on it.* Thanks, kcheng On Thu, Feb 4, 2016 at 10:56 PM, Artem Shutak (JIRA) wrote: > Artem Shutak created IGNITE-2554: > > > Summary: Affinity.mapKeyToNode() method throw > "ArithmeticException: / by zero" for LOCAL cache > Key: IGNITE-2554 > URL: https://issues.apache.org/jira/browse/IGNITE-2554 > Project: Ignite > Issue Type: Bug > Reporter: Artem Shutak > Priority: Minor > > > Affinity.mapKeyToNode() method throw "ArithmeticException: / by zero" for > LOCAL cache. > > The following code > {code} > public static void main(String[] args) { > try (Ignite ignite = Ignition.start(new IgniteConfiguration())) { > CacheConfiguration cc = new CacheConfiguration(); > > cc.setCacheMode(LOCAL); > cc.setName("myCache"); > > ignite.getOrCreateCache(cc); > > ignite.affinity("myCache").mapKeyToNode("myKey"); > } > } > {code} > > Produce the following exception. > > {noformat} > Exception in thread "main" java.lang.ArithmeticException: / by zero > at > org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl.mapKeysToNodes(GridCacheAffinityImpl.java:210) > at > org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl.mapKeyToNode(GridCacheAffinityImpl.java:187) > at main > {noformat} > > The issue is {{cctx.discovery().cacheAffinityNodes(cctx.name(), topVer)}} > returns empty nodes collection. > > > > -- > This message was sent by Atlassian JIRA > (v6.3.4#6332) >
[jira] [Created] (IGNITE-2555) Include offheap usage in metrics report
Sergey Kozlov created IGNITE-2555: - Summary: Include offheap usage in metrics report Key: IGNITE-2555 URL: https://issues.apache.org/jira/browse/IGNITE-2555 Project: Ignite Issue Type: Task Affects Versions: 1.5.0.final Reporter: Sergey Kozlov Priority: Minor The local node prints out the set of key parameters in its log (or console). It makes sense to add offheap usage (used/free/committed) to that report. {noformat} Metrics for local node (to disable set 'metricsLogFrequency' to 0) ^-- Node [id=41b12e0f, name=null] ^-- H/N/C [hosts=1, nodes=14, CPUs=8] ^-- CPU [cur=16,73%, avg=17,39%, GC=0,6%] ^-- Heap [used=1364MB, free=27,44%, comm=1881MB] ^-- Public thread pool [active=0, idle=16, qSize=0] ^-- System thread pool [active=1, idle=15, qSize=0] ^-- Outbound messages queue [size=0] {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
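A hypothetical extra line for that report, mirroring the format of the existing Heap line (the label and exact fields are assumptions here, not the final implementation):

{noformat}
^-- Offheap [used=512MB, free=75,00%, comm=2048MB]
{noformat}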
[jira] [Created] (IGNITE-2554) Affinity.mapKeyToNode() method throw "ArithmeticException: / by zero" for LOCAL cache
Artem Shutak created IGNITE-2554: Summary: Affinity.mapKeyToNode() method throw "ArithmeticException: / by zero" for LOCAL cache Key: IGNITE-2554 URL: https://issues.apache.org/jira/browse/IGNITE-2554 Project: Ignite Issue Type: Bug Reporter: Artem Shutak Priority: Minor Affinity.mapKeyToNode() method throw "ArithmeticException: / by zero" for LOCAL cache. The following code {code} public static void main(String[] args) { try (Ignite ignite = Ignition.start(new IgniteConfiguration())) { CacheConfiguration cc = new CacheConfiguration(); cc.setCacheMode(LOCAL); cc.setName("myCache"); ignite.getOrCreateCache(cc); ignite.affinity("myCache").mapKeyToNode("myKey"); } } {code} Produce the following exception. {noformat} Exception in thread "main" java.lang.ArithmeticException: / by zero at org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl.mapKeysToNodes(GridCacheAffinityImpl.java:210) at org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl.mapKeyToNode(GridCacheAffinityImpl.java:187) at main {noformat} The issue is {{cctx.discovery().cacheAffinityNodes(cctx.name(), topVer)}} returns empty nodes collection. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] ignite pull request: IGNITE-1144 Queue and Set affinity run and ca...
Github user oddodaoddo closed the pull request at: https://github.com/apache/ignite/pull/453 ---
Re: About the Jira https://issues.apache.org/jira/browse/IGNITE-1481
Thank you! Thanks, kcheng On Thu, Feb 4, 2016 at 9:38 PM, Denis Magda wrote: > Hi Ken, > > Thanks for the contribution. Someone of the committers will review your > changes soon. > > -- > Denis > > > On 2/3/2016 5:34 PM, Ken Cheng wrote: > >> Sorry, my fault I forget to add the new junit to test suit. I committed >> again. >> >> Thanks, >> kcheng >> >> On Wed, Feb 3, 2016 at 10:17 PM, Ken Cheng wrote: >> >> Hi All, >>> >>> For this PR, I added a new Junit test file file, but I found it's not >>> executed from TeamCity build log. >>> >>> How to add this new file to test suit? >>> >>> Thanks, >>> kcheng >>> >>> On Wed, Feb 3, 2016 at 8:41 PM, Ken Cheng wrote: >>> >>> Hi Andrey Gura, Please help do a code review. All related test cases passed without break. http://204.14.53.151/viewLog.html?buildId=107343&tab=buildResultsDiv&buildTypeId=IgniteTests_IgniteDataGrid Thanks, kcheng On Wed, Feb 3, 2016 at 7:47 PM, Ken Cheng wrote: Here is the PR https://github.com/apache/ignite/pull/449 > > Please help to review it. > > Thanks, > kcheng > > On Wed, Feb 3, 2016 at 7:45 PM, Ken Cheng > wrote: > > Yes, that's Andrey's proposal. >> >> I created the PR, right now it's run Tests. >> >> Thanks, >> kcheng >> >> On Wed, Feb 3, 2016 at 6:34 PM, Alexey Goncharuk < >> alexey.goncha...@gmail.com> wrote: >> >> +1 for printing out a warning and ignoring the affinity function from >>> the >>> configuration. There is no other way to 'fix' the configuration other >>> than >>> remove the wrong affinity function, so it can be done at startup time >>> right >>> away. >>> >>> 2016-02-03 9:43 GMT+03:00 Ken Cheng : >>> >>> I prefer to throw a IgniteCheckerException. Thanks, kcheng On Wed, Feb 3, 2016 at 2:41 PM, Ken Cheng >>> wrote: >>> Hi Andrey Gura, > > > What's the expected behavior when the cache mode is "Local" but > affinity >>> function is not "LocalAffinityFunction"? > > 1: Throw an exception? 
> 2: or change the affinity function rudely as > "LocalAffinityFunction" and >>> log the warning message at same time? > > > Thanks, > kcheng > > On Wed, Feb 3, 2016 at 10:35 AM, Ken Cheng > wrote: >>> Hi Andrey Gura, >> >> Thank you very much! I would study this part of code first. >> >> Thanks, >> kcheng >> >> On Mon, Feb 1, 2016 at 6:37 PM, Andrey Gura >> > wrote: >>> Ken, >>> >>> cache configuration validation and initialization occurs in >>> GridCacheProcessor class (methods validate() and initialize()). >>> >>> From my point of view two changes should be made: >>> >>> - during cache intialization LocalAffinityFunction should be set >>> >> to >>> cache > configuration if cache mode is LOCAL; >>> - warning about ignoring affinity function parameter should be >>> >> moved >>> from > validate() method to intialize() method. >>> >>> I hope this will help you. >>> >>> On Mon, Feb 1, 2016 at 12:12 PM, Ken Cheng >> >> wrote: > Hi Andrey Gura, I am very new to Ignite, I am going to pick up https://issues.apache.org/jira/browse/IGNITE-1481. Can you please give more hint? Thanks, kcheng >>> >>> -- >>> Andrey Gura >>> GridGain Systems, Inc. >>> www.gridgain.com >>> >>> >> >> >
[GitHub] ignite pull request: IGNITE-2549: Example that can be used for dat...
GitHub user isapego opened a pull request: https://github.com/apache/ignite/pull/458 IGNITE-2549: Example that can be used for data visualization. Merged with IGNITE-2429. You can merge this pull request into a Git repository by running: $ git pull https://github.com/isapego/ignite ignite-2549 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/458.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #458 commit 4b171c9ecf1a1b00a5f2a6f565a4839306fe1e55 Author: isapego Date: 2016-01-22T16:57:07Z IGNITE-2429: Implemented examples for ODBC. Tested for Windows. commit a937ffeaf5e35cf85aef090b26823beb426af821 Author: Igor Sapego Date: 2016-01-22T17:27:43Z IGNITE-2429: Fixes for Autotools build system. commit c4017d249571dadddf66e544a292841e87d85718 Author: Igor Sapego Date: 2016-01-22T18:22:45Z IGNITE-2429: Fix for ODBC-driver autotools build system. commit bc5a70c0194c57a9ed7bcb893851c0845f49b6f3 Author: Igor Sapego Date: 2016-01-22T18:28:13Z IGNITE-2429: Fix for ODBC test build system. commit a156539ae597ae942c6ef22ea93208905e5c6941 Author: isapego Date: 2016-01-26T14:52:31Z Merge branch 'ignite-1786-review' into ignite-2429 commit e0c3c9d58bd4f37a60e30a67b69e817731e5347c Author: isapego Date: 2016-01-26T15:23:34Z IGNITE-2429: Put-Get example moved to separate folder. commit 8ee39220a30a8a43ad5688d9aab622f230fdaaac Author: isapego Date: 2016-01-26T15:29:31Z IGNITE-2429: Examples projects structure changed commit 452f637c151a070570dddea71c846c9ac5c9133f Author: isapego Date: 2016-01-27T15:10:15Z IGNITE-2429: Implemented simple ODBC example with single type. commit 51c7d8b5321b45b0e68f473dad47f300b44b909b Author: isapego Date: 2016-01-27T15:19:13Z IGNITE-2429: Moved common types to examples-wide include directory. 
commit 705094ce582370a9faf67636fe9fd7af8b156f2b Author: isapego Date: 2016-01-27T15:54:50Z IGNITE-2429: ODBC example contains 2 types now. commit 2d59cc23039815c795d3a517e07e81d17bc7f33a Author: isapego Date: 2016-01-27T16:04:15Z IGNITE-2429: Added more people to Person cache. commit e3bf0c6e80d8457e1c36d42d8ff8a83c4a326826 Author: isapego Date: 2016-01-27T16:28:25Z IGNITE-2429: Added instruction to the top of ODBC example. commit 381fc4b41153ad6b818a468ab61170cd7ee948bd Author: isapego Date: 2016-01-27T16:34:42Z Merge branch 'ignite-1786' into ignite-2429 commit 9f65a3f11b4dd41706d220a9b14d3c1f54c9f421 Author: isapego Date: 2016-01-27T16:43:24Z IGNITE-2429: Fix for the Autotools build system. commit 938543866c2b678b19a3aabdf7df4f9dd50cc26c Author: isapego Date: 2016-02-04T10:51:16Z IGNITE-2549: Working BI example. commit 8c86907eada16497de366bdcea6a33184188861e Author: isapego Date: 2016-02-04T10:56:10Z IGNITE_2549: Fixed empty Product cache bug. commit a7079ed21aacd7d811da6478792e2527e62384cb Author: isapego Date: 2016-02-04T10:59:10Z IGNITE-2549: Better user interface. commit a298429e53eb45085eb886593927cde034d8b036 Author: isapego Date: 2016-02-04T11:00:36Z IGNITE-2549: Made common generator to avoid random data corellations. commit 6b0ce8b3ef8d6a7519e19bd679a50f0c53ae8d7a Author: isapego Date: 2016-02-04T12:32:03Z IGNITE-2549: Removed C rand() usage. commit ab78fb4773e621f108984f31d0fd261cff6ec91e Author: isapego Date: 2016-02-04T13:01:21Z IGNITE-2549: Few issues fixed. commit e7bc967ffd58b410cf2d920b9d5a0801cf8f94ab Author: isapego Date: 2016-02-04T13:27:10Z IGNITE-2549: Project renamed. commit 8f94d234572edcabd7099cc0433a467da170c4f8 Author: Igor Sapego Date: 2016-02-04T13:45:21Z IGNITE-2549: Fix for autotools build system. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. 
---
[jira] [Created] (IGNITE-2553) Conditional functions
Pavel Tupitsyn created IGNITE-2553: -- Summary: Conditional functions Key: IGNITE-2553 URL: https://issues.apache.org/jira/browse/IGNITE-2553 Project: Ignite Issue Type: Sub-task Reporter: Pavel Tupitsyn IFNULL, NULLIF, COALESCE, CASE http://www.h2database.com/html/grammar.html -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Transformers in SCAN queries
Val, can we introduce a new method into the IgniteCache API? Now we have the method: public <R> QueryCursor<R> query(Query<R> qry); The new method would be something like this: <T, R> QueryCursor<R> query(Query<T> qry, Transformer<T, R> transformer); It allows providing transformers for all query types, and the changes will be related only to query cursor functionality. Will it work? On Thu, Feb 4, 2016 at 11:13 AM, Andrey Kornev wrote: > Another perhaps bigger problem with running queries (including scan > queries) using closures was discussed at length on the @dev not so long > ago. It has to do with partitions migration due to cluster topology changes > which may result in the query returning incomplete result. And while it is > possible to solve this problem for the scan queries by using some clever > tricks, all bets are off with the SQL queries.Andrey > _ > From: Valentin Kulichenko > Sent: Thursday, February 4, 2016 6:29 AM > Subject: Re: Transformers in SCAN queries > To: > > >Dmitry, > > The main difference in my view is that you lose pagination when sending > results from servers to client. What if one wants to iterate through all > entries in cache? > > On Wed, Feb 3, 2016 at 9:47 PM, Dmitriy Setrakyan > wrote: > > > Valentin, > > > > Wouldn’t the same effect be achieved by broadcasting a closure to the > > cluster and executing scan-query on every node locally? > > > > D. > > > > On Wed, Feb 3, 2016 at 9:17 PM, Valentin Kulichenko < > > valentin.kuliche...@gmail.com> wrote: > > > > > Igniters, > > > > > > I keep getting requests from our users to add optional transformers to > > SCAN > > > queries. This will allow to iterate through cache, but do not transfer > > > whole key-value pairs across networks (e.g., get only keys). The > feature > > > looks useful and I created a ticket [1]. > > > > > > I am struggling with the design now.
The problem is that I wanted to > > extend > > > existing ScanQuery object for this, but this seems to be impossible > > because > > > it already extends Query> and thus can iterate only > > > through entries. > > > > > > The only option I see now is to create a separate query type, > copy-paste > > > everything from ScanQuery and add *mandatory* transformer. Something > like > > > this: > > > > > > ScanTransformQuery extends Query { > > > IgniteBiPredicate filter; > > > IgniteClosure, R> transformer; > > > int part; > > > ... > > > } > > > > > > Thoughts? Does anyone has other ideas? > > > > > > [1]https://issues.apache.org/jira/browse/IGNITE-2546 > > > > > > -Val > > > > > > > > > > -- Andrey Gura GridGain Systems, Inc. www.gridgain.com
Re: About the Jira https://issues.apache.org/jira/browse/IGNITE-1481
Hi Ken, Thanks for the contribution. Someone of the committers will review your changes soon. -- Denis On 2/3/2016 5:34 PM, Ken Cheng wrote: Sorry, my fault I forget to add the new junit to test suit. I committed again. Thanks, kcheng On Wed, Feb 3, 2016 at 10:17 PM, Ken Cheng wrote: Hi All, For this PR, I added a new Junit test file file, but I found it's not executed from TeamCity build log. How to add this new file to test suit? Thanks, kcheng On Wed, Feb 3, 2016 at 8:41 PM, Ken Cheng wrote: Hi Andrey Gura, Please help do a code review. All related test cases passed without break. http://204.14.53.151/viewLog.html?buildId=107343&tab=buildResultsDiv&buildTypeId=IgniteTests_IgniteDataGrid Thanks, kcheng On Wed, Feb 3, 2016 at 7:47 PM, Ken Cheng wrote: Here is the PR https://github.com/apache/ignite/pull/449 Please help to review it. Thanks, kcheng On Wed, Feb 3, 2016 at 7:45 PM, Ken Cheng wrote: Yes, that's Andrey's proposal. I created the PR, right now it's run Tests. Thanks, kcheng On Wed, Feb 3, 2016 at 6:34 PM, Alexey Goncharuk < alexey.goncha...@gmail.com> wrote: +1 for printing out a warning and ignoring the affinity function from the configuration. There is no other way to 'fix' the configuration other than remove the wrong affinity function, so it can be done at startup time right away. 2016-02-03 9:43 GMT+03:00 Ken Cheng : I prefer to throw a IgniteCheckerException. Thanks, kcheng On Wed, Feb 3, 2016 at 2:41 PM, Ken Cheng wrote: Hi Andrey Gura, What's the expected behavior when the cache mode is "Local" but affinity function is not "LocalAffinityFunction"? 1: Throw an exception? 2: or change the affinity function rudely as "LocalAffinityFunction" and log the warning message at same time? Thanks, kcheng On Wed, Feb 3, 2016 at 10:35 AM, Ken Cheng wrote: Hi Andrey Gura, Thank you very much! I would study this part of code first. 
Thanks, kcheng On Mon, Feb 1, 2016 at 6:37 PM, Andrey Gura wrote: Ken, cache configuration validation and initialization occurs in GridCacheProcessor class (methods validate() and initialize()). From my point of view two changes should be made: - during cache intialization LocalAffinityFunction should be set to cache configuration if cache mode is LOCAL; - warning about ignoring affinity function parameter should be moved from validate() method to intialize() method. I hope this will help you. On Mon, Feb 1, 2016 at 12:12 PM, Ken Cheng wrote: Hi Andrey Gura, I am very new to Ignite, I am going to pick up https://issues.apache.org/jira/browse/IGNITE-1481. Can you please give more hint? Thanks, kcheng -- Andrey Gura GridGain Systems, Inc. www.gridgain.com
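The fix agreed on in this thread (warn and substitute LocalAffinityFunction during initialization of a LOCAL cache) can be sketched stand-alone. The class and field names below are hypothetical simplifications, not Ignite's internal GridCacheProcessor code:

```java
// Stand-alone sketch of the agreed IGNITE-1481 behavior: at cache
// initialization, a LOCAL cache ignores any user-supplied affinity
// function, logs a warning, and substitutes the local one. All names
// here are hypothetical, not Ignite internals.
public class LocalAffinityInitSketch {
    enum CacheMode { LOCAL, PARTITIONED, REPLICATED }

    interface AffinityFunction { }
    static class LocalAffinityFunction implements AffinityFunction { }
    static class RendezvousAffinityFunction implements AffinityFunction { }

    static class CacheConfig {
        CacheMode mode = CacheMode.PARTITIONED;
        AffinityFunction affinity;
    }

    /** Mirrors the proposed change to the initialize() step (not validate()). */
    static void initialize(CacheConfig cfg) {
        if (cfg.mode == CacheMode.LOCAL && !(cfg.affinity instanceof LocalAffinityFunction)) {
            if (cfg.affinity != null)
                System.err.println("Affinity function is ignored for LOCAL cache: "
                    + cfg.affinity.getClass().getSimpleName());

            cfg.affinity = new LocalAffinityFunction();
        }
    }

    public static void main(String[] args) {
        CacheConfig cfg = new CacheConfig();
        cfg.mode = CacheMode.LOCAL;
        cfg.affinity = new RendezvousAffinityFunction(); // wrong for LOCAL

        initialize(cfg);

        System.out.println(cfg.affinity.getClass().getSimpleName()); // LocalAffinityFunction
    }
}
```

Overriding at startup (rather than throwing) follows Alexey's argument above: there is only one valid affinity function for a LOCAL cache, so the configuration can be repaired right away.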
[jira] [Created] (IGNITE-2552) Eviction policy must consider either max size or max entries count
Denis Magda created IGNITE-2552: --- Summary: Eviction policy must consider either max size or max entries count Key: IGNITE-2552 URL: https://issues.apache.org/jira/browse/IGNITE-2552 Project: Ignite Issue Type: Bug Components: cache Affects Versions: 1.5.0.final Reporter: Denis Magda Assignee: Denis Magda Fix For: 1.6 Presently, both the max size and the max entries number are considered by the eviction policy logic even if only one of them is set by the user explicitly. This behavior must be reworked so that if one of the parameters is set explicitly, only it is used by the eviction policy while the other one is ignored. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
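The either-or semantics IGNITE-2552 asks for could be sketched as below. This is an illustrative standalone model, not Ignite's actual eviction code; it assumes the common convention that 0 means "not set explicitly".

```java
// Sketch of the either-or eviction check proposed in IGNITE-2552:
// a parameter left at 0 ("not set") never triggers eviction, so an
// explicitly set parameter is effectively the only one consulted.
// Illustrative model only, not the actual Ignite eviction policy code.
public class EvictionDemo {
    /** Returns true if eviction should start given the limits and current counters. */
    static boolean shouldEvict(long maxMemSize, long maxEntries,
                               long curMemSize, long curEntries) {
        if (maxMemSize > 0 && curMemSize > maxMemSize)
            return true; // memory limit set explicitly and exceeded
        if (maxEntries > 0 && curEntries > maxEntries)
            return true; // entry-count limit set explicitly and exceeded
        return false;    // unset limits (0) are ignored entirely
    }

    public static void main(String[] args) {
        // Only maxEntries is set: a huge memory footprint alone never evicts.
        System.out.println(shouldEvict(0, 1000, 999_999_999L, 500));  // false
        System.out.println(shouldEvict(0, 1000, 999_999_999L, 1001)); // true
    }
}
```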
[GitHub] ignite pull request: IGNITE-2451 remove xml and java data render f...
GitHub user Dmitriyff opened a pull request: https://github.com/apache/ignite/pull/457 IGNITE-2451 remove xml and java data render from controller You can merge this pull request into a Git repository by running: $ git pull https://github.com/Dmitriyff/ignite ignite-2451 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/457.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #457 commit 9037007067d3fbb9f4625f9ddc64ebf4d9c8ef46 Author: Dmitriyff Date: 2016-01-27T09:40:02Z IGNTIE-2451 refactoring output data section commit e750159932a0f6b84bf11f96ceeb2ef31e03d6be Author: Dmitriyff Date: 2016-02-04T02:44:17Z Merge branch 'ignite-843-rc2' into ignite-2451 commit 8771fbdfabc91680300670567fa358558682bdad Author: Dmitriyff Date: 2016-02-04T10:39:16Z IGNITE-2451 remove xml and java data render from controller commit 2720bc82752ef42b30524910c86936f70defae31 Author: Dmitriyff Date: 2016-02-04T10:40:19Z Merge branch 'ignite-843-rc2' of https://git-wip-us.apache.org/repos/asf/ignite into ignite-2451 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Created] (IGNITE-2551) Next query page result link should be locked on loading of next page.
Vasiliy Sisko created IGNITE-2551: - Summary: Next query page result link should be locked on loading of next page. Key: IGNITE-2551 URL: https://issues.apache.org/jira/browse/IGNITE-2551 Project: Ignite Issue Type: Sub-task Components: wizards Affects Versions: 1.6 Reporter: Vasiliy Sisko When the next-page link is clicked rapidly, a request is sent on every click, and at some point a request error is returned.
[jira] [Created] (IGNITE-2550) .NET: Simplify examples configuration
Pavel Tupitsyn created IGNITE-2550: -- Summary: .NET: Simplify examples configuration Key: IGNITE-2550 URL: https://issues.apache.org/jira/browse/IGNITE-2550 Project: Ignite Issue Type: Improvement Components: platforms Affects Versions: 1.1.4 Reporter: Pavel Tupitsyn Fix For: 1.6 We now have in-code configuration (IGNITE-1906); we need to demonstrate it in the examples and simplify them where possible. 1) First, start all caches programmatically. This may reduce the number of Spring configs. 2) Second, see if we can benefit from full in-code config for some of the examples (compute).
[jira] [Created] (IGNITE-2549) CPP: Implement example that can be used for data visualization.
Igor Sapego created IGNITE-2549: --- Summary: CPP: Implement example that can be used for data visualization. Key: IGNITE-2549 URL: https://issues.apache.org/jira/browse/IGNITE-2549 Project: Ignite Issue Type: Sub-task Components: odbc Reporter: Igor Sapego Assignee: Igor Sapego As one of the purposes of developing the ODBC driver was to use data visualization tools with Apache Ignite, we need an example that can be used to test interoperability with such tools. Such an example should contain several caches with relational data that can produce meaningful graphs when visualized.
[GitHub] ignite pull request: IGNITE-2544: Empty schema names treated as a ...
GitHub user isapego opened a pull request: https://github.com/apache/ignite/pull/456 IGNITE-2544: Empty schema names treated as a 'null' now. You can merge this pull request into a Git repository by running: $ git pull https://github.com/isapego/ignite ignite-2544 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/456.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #456 commit d22c530ca691ca54ad2f54a77d4e0df967fd380a Author: isapego Date: 2016-02-04T10:06:04Z IGNITE-2544: Empty schema names treated as a 'null' now.
Re: snapshot transaction isolation
+1 This should help us with IGFS performance as we currently use getAll() in PESSIMISTIC mode mainly to lock keys, not to get their values. On Thu, Feb 4, 2016 at 12:56 PM, Alexey Goncharuk < alexey.goncha...@gmail.com> wrote: > So, basically, we want to add lockAll() method that locks entries without > returning their values to a client - this is a good idea. I do not want, > however, to call it SNAPSHOT isolation, because this is not what it is. >
Re: snapshot transaction isolation
So, basically, we want to add lockAll() method that locks entries without returning their values to a client - this is a good idea. I do not want, however, to call it SNAPSHOT isolation, because this is not what it is.
Re: snapshot transaction isolation
> If you do a getAll, > isn’t it possible that some value will be updated before you get it? If > yes, then user’s logic will potentially be based on a wrong value, no? 1. What if any value gets updated before you lock it? It seems this is the strongest guarantee we can provide with this approach. > However, some use cases require that transactional values are consistent > with each other not at 1st access, but at transaction start time. After > giving it some thought, I think we can support it with minimal effort, if > we add a few restrictions. For example, we can easily support it if users > specify all the keys at the beginning of the transaction, for example > 1. User tells Ignite which keys he/she plans to transact on > 2. Ignite preemptively acquires locks on all these keys > 3. After locks are acquired, user has assurance that values will not > change outside of this transaction and are consistent with each other. > 4. Locks are released upon commit I think that it will also be very good to add tx-awareness to the cache locks we currently have. GETALL may be very heavy, which may not be needed, plus we support all TX types, not only pessimistic. So, the logic will be: START_TX() LOCK_ALL(KEYS); INVOKE/PUT/GET/ETC COMMIT()/ROLLBACK() --Yakov 2016-02-04 12:31 GMT+03:00 Dmitriy Setrakyan : > I think the whole point is to lock 1st and get 2nd. If you do a getAll, > isn’t it possible that some value will be updated before you get it? If > yes, then user’s logic will potentially be based on a wrong value, no? > > D. > > On Thu, Feb 4, 2016 at 1:29 AM, Alexey Goncharuk < > alexey.goncha...@gmail.com > > wrote: > > > If all keys are known in advance, how is it different from starting a > > pessimistic transaction and invoking getAll() on those keys? Introducing > a > > new concept with such restrictions does not make sense to me. > > > > 2016-02-04 1:27 GMT+03:00 Dmitriy Setrakyan : > > > Igniters, > > > > > > I keep hearing questions from users about the snapshot isolation. 
> > Currently > > > ignite provides Optimistic and Pessimistic > > > < > > > > > > https://apacheignite.readme.io/docs/transactions#optimistic-and-pessimistic > > > > > > > transactions [1]. These modes ensure that transactional values are > > > consistent with each other on 1st access of each value. > > > > > > However, some use cases require that transactional values are > consistent > > > with each other not at 1st access, but at transaction start time. After > > > giving it some thought, I think we can support it with minimal effort, > if > > > we add a few restrictions. For example, we can easily support it if > users > > > specify all the keys at the beginning of the transaction, for example > > > > > >1. User tells Ignite which keys he/she plans to transact on > > >2. Ignite preemptively acquires locks on all these keys > > >3. After locks are acquired, user has assurance that values will not > > >change outside of this transaction and are consistent with each other. > > >4. Locks are released upon commit > > > > > > The above algorithm will also perform better, as the initial locks will > > be > > > acquired in bulk, and not individually. > > > > > > Thoughts? > > > > > > [1] > > > > > > https://apacheignite.readme.io/docs/transactions#optimistic-and-pessimistic > > > > > >
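Yakov's proposed flow (START_TX; LOCK_ALL(keys); reads/writes; COMMIT/ROLLBACK) can be illustrated with plain JDK locks standing in for Ignite's transaction machinery. This is a sketch under that assumption, not Ignite's actual implementation; the key detail it demonstrates is acquiring every lock up front, in a deterministic (sorted) order to avoid deadlocks, so that all values read afterwards are mutually consistent as of lock time.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Standalone illustration of the lock-all-up-front flow discussed in the
// thread, modeled with java.util.concurrent locks instead of Ignite's API.
public class LockAllDemo {
    static final Map<String, ReentrantLock> locks = new HashMap<>();
    static final Map<String, Integer> data = new HashMap<>();

    static ReentrantLock lockFor(String key) {
        synchronized (locks) {
            return locks.computeIfAbsent(key, k -> new ReentrantLock());
        }
    }

    /** LOCK_ALL: acquire every key's lock, in sorted order, before any access. */
    static List<ReentrantLock> lockAll(Collection<String> keys) {
        List<String> sorted = new ArrayList<>(keys);
        Collections.sort(sorted); // consistent global order prevents deadlock
        List<ReentrantLock> acquired = new ArrayList<>();
        for (String k : sorted) {
            ReentrantLock l = lockFor(k);
            l.lock();
            acquired.add(l);
        }
        return acquired;
    }

    public static void main(String[] args) {
        data.put("a", 1);
        data.put("b", 2);
        List<ReentrantLock> held = lockAll(Arrays.asList("b", "a"));
        try {
            // Both values are consistent with each other as of lock time.
            System.out.println(data.get("a") + data.get("b"));
        } finally {
            held.forEach(ReentrantLock::unlock); // COMMIT/ROLLBACK releases the locks
        }
    }
}
```

Note how this matches Dmitriy's point: no value is read before its lock is held, so no read can be based on a value that is later changed by another transaction.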
Re: snapshot transaction isolation
In PESSIMISTIC transaction a value is always read after a lock is acquired, so a locked value cannot be updated. Am I missing something? Do you have a specific scenario in mind?
[GitHub] ignite pull request: IGNITE-1563 .Net: Implemented "atomic" data s...
GitHub user ptupitsyn opened a pull request: https://github.com/apache/ignite/pull/455 IGNITE-1563 .Net: Implemented "atomic" data structures: sequence, reference You can merge this pull request into a Git repository by running: $ git pull https://github.com/ptupitsyn/ignite ignite-1563 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/455.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #455 commit 06df7e257ebfd4b78fb722187944e7a38440de7d Author: Pavel Tupitsyn Date: 2015-11-03T08:34:16Z IGNITE-1563 .Net: Implement "atomic" data structures. commit 4f66e93256d4d682f61405aeff356bea92e3fd2a Author: Pavel Tupitsyn Date: 2015-11-03T08:57:41Z wip ifaces commit f3b1d7fe5a632a4a58791f4b5bb4fde826ef15f1 Author: Pavel Tupitsyn Date: 2015-11-03T09:05:22Z Cpp ProcessorAtomicSequence commit a5984022c83cf918efd405bbae57c0b088f576cb Author: Pavel Tupitsyn Date: 2015-11-03T09:12:21Z Java PlatformAtomicSequence commit 541a525d9a5890fa8c6911ef5bb195164d73913c Author: Pavel Tupitsyn Date: 2015-11-03T09:49:19Z wip impl commit 2a9a9447815e62f50b86c0bd69681bed7c087080 Author: Pavel Tupitsyn Date: 2015-11-03T09:56:12Z Wip tests commit ed34b56cd8541ded90639a9a5609c71ea426fabe Author: Pavel Tupitsyn Date: 2015-11-03T09:56:29Z wip commit 9e56bde39f8cd1b45e48be1c19889b6c82e0cefb Author: Pavel Tupitsyn Date: 2015-11-03T10:15:56Z UnmanagedUtils commit 9d25be56943ca7ebcda63a7e15a55754a61344e0 Author: Pavel Tupitsyn Date: 2015-11-03T10:31:48Z Cpp interop done commit 79646020cbe7f69de21ad5d2e8e3edd4a8e8e7ee Author: Pavel Tupitsyn Date: 2015-11-03T13:51:51Z Merge remote-tracking branch 'remotes/upstream/ignite-1282' into ignite-1563 commit 19072403effa0153370fde950f476aa79b8f6050 Author: Pavel Tupitsyn Date: 2015-11-03T13:59:30Z wip commit 9efab62e9f43982d2260b76a6249f5bc9fb43c8f Author: Pavel Tupitsyn Date: 2015-11-03T14:05:45Z removed->isClosed 
commit dc1f595d4efac370c7c5d4c882f074ce0a6abe14 Author: Pavel Tupitsyn Date: 2015-11-03T14:22:05Z Java wrapper done commit 21fcc7f2fc198c578c4aff5e4597e12883a69a2b Author: Pavel Tupitsyn Date: 2015-11-03T14:25:41Z C# iface done commit af209b0f021196b479fdbbd3ae9f837b9547ed52 Author: Pavel Tupitsyn Date: 2015-11-03T14:28:05Z C# impl done commit 239a0cc9429733f1fec6a5288e5d91cb9c080ac5 Author: Pavel Tupitsyn Date: 2015-11-03T14:30:49Z Tests done commit 6e9f214f46abaca0388b0cec6f5c424fda0585fc Author: Pavel Tupitsyn Date: 2015-11-03T15:31:46Z IAtomicReference wip commit 2dd10ec7be84bb7948fa01123eece345d2627516 Author: Pavel Tupitsyn Date: 2015-11-04T09:45:32Z wip iface commit 2f128eba08bef75e7dcfe0410dca6f99e722a7a6 Author: Pavel Tupitsyn Date: 2015-11-04T10:02:25Z wip commit 2582bb5b3b66e1335776b6b4a3eaf5ba2f1138f0 Author: Pavel Tupitsyn Date: 2015-11-04T10:03:04Z wip commit e34f7a5e8f9dff44a6a1da7b0e39400af961ab64 Author: Pavel Tupitsyn Date: 2015-11-04T10:08:12Z wip commit b515ad274f605a399c6bf142d391625ddc372c64 Author: Pavel Tupitsyn Date: 2015-11-04T10:18:01Z wip commit 3fbc161ad67e99b49c0de374f587c647210c91e2 Author: Pavel Tupitsyn Date: 2015-11-04T10:37:41Z wip commit 21fbc1779944c29e84b7bb8cb91b5343a2913791 Author: Pavel Tupitsyn Date: 2015-11-04T11:12:30Z wip cpp M_PLATFORM_PROCESSOR_ATOMIC_REFERENCE commit e322b5cc6e8943ccb614ecda7b52277c1dd0643d Author: Pavel Tupitsyn Date: 2015-11-04T11:26:12Z tests done commit ec456780ecf3b9489890923e7cb249777f5b6096 Author: Pavel Tupitsyn Date: 2015-11-04T13:04:45Z wip UnmanagedUtils commit e5d6237e614e68188efe71926b9155a76ae2991b Author: Pavel Tupitsyn Date: 2015-11-04T14:13:42Z wip interop commit 47221b4365dbd7b37111aefe5c10c7ed10e88071 Author: Pavel Tupitsyn Date: 2015-11-04T14:14:12Z wip interop commit 39d374839ce857f7bce18ebab84f3803f36a7cc8 Author: Pavel Tupitsyn Date: 2015-11-04T14:14:50Z wip commit 13733c72ab9cc4b41a00c0e3d7151c2f27e045be Author: Pavel Tupitsyn Date: 2015-11-04T14:18:20Z wip
[GitHub] ignite pull request: IGNITE-1563 .Net: Implement "atomic" data str...
Github user ptupitsyn closed the pull request at: https://github.com/apache/ignite/pull/204
Re: snapshot transaction isolation
I think the whole point is to lock 1st and get 2nd. If you do a getAll, isn’t it possible that some value will be updated before you get it? If yes, then user’s logic will potentially be based on a wrong value, no? D. On Thu, Feb 4, 2016 at 1:29 AM, Alexey Goncharuk wrote: > If all keys are known in advance, how is it different from starting a > pessimistic transaction and invoking getAll() on those keys? Introducing a > new concept with such restrictions does not make sense to me. > > 2016-02-04 1:27 GMT+03:00 Dmitriy Setrakyan : > > > Igniters, > > > > I keep hearing questions from users about the snapshot isolation. > Currently > > ignite provides Optimistic and Pessimistic > > < > > > https://apacheignite.readme.io/docs/transactions#optimistic-and-pessimistic > > > > > transactions [1]. These modes ensure that transactional values are > > consistent with each other on 1st access of each value. > > > > However, some use cases require that transactional values are consistent > > with each other not at 1st access, but at transaction start time. After > > giving it some thought, I think we can support it with minimal effort, if > > we add a few restrictions. For example, we can easily support it if users > > specify all the keys at the beginning of the transaction, for example > > > >1. User tells Ignite which keys he/she plans to transact on > >2. Ignite preemptively acquires locks on all these keys > >3. After locks are acquired, user has assurance that values will not > >change outside of this transaction and are consistent with each other. > >4. Locks are released upon commit > > > > The above algorithm will also perform better, as the initial locks will > be > > acquired in bulk, and not individually. > > > > Thoughts? > > > > [1] > > > https://apacheignite.readme.io/docs/transactions#optimistic-and-pessimistic > > >
Re: snapshot transaction isolation
If all keys are known in advance, how is it different from starting a pessimistic transaction and invoking getAll() on those keys? Introducing a new concept with such restrictions does not make sense to me. 2016-02-04 1:27 GMT+03:00 Dmitriy Setrakyan : > Igniters, > > I keep hearing questions from users about the snapshot isolation. Currently > ignite provides Optimistic and Pessimistic > < > https://apacheignite.readme.io/docs/transactions#optimistic-and-pessimistic > > > transactions [1]. These modes ensure that transactional values are > consistent with each other on 1st access of each value. > > However, some use cases require that transactional values are consistent > with each other not at 1st access, but at transaction start time. After > giving it some thought, I think we can support it with minimal effort, if > we add a few restrictions. For example, we can easily support it if users > specify all the keys at the beginning of the transaction, for example > >1. User tells Ignite which keys he/she plans to transact on >2. Ignite preemptively acquires locks on all these keys >3. After locks are acquired, user has assurance that values will not >change outside of this transaction and are consistent with each other. >4. Locks are released upon commit > > The above algorithm will also perform better, as the initial locks will be > acquired in bulk, and not individually. > > Thoughts? > > [1] > https://apacheignite.readme.io/docs/transactions#optimistic-and-pessimistic >
[jira] [Created] (IGNITE-2548) LINQ Examples
Pavel Tupitsyn created IGNITE-2548: -- Summary: LINQ Examples Key: IGNITE-2548 URL: https://issues.apache.org/jira/browse/IGNITE-2548 Project: Ignite Issue Type: Sub-task Reporter: Pavel Tupitsyn Add separate query examples with LINQ
Re: Transformers in SCAN queries
Another, perhaps bigger, problem with running queries (including scan queries) using closures was discussed at length on the @dev list not so long ago. It has to do with partition migration due to cluster topology changes, which may result in the query returning an incomplete result. And while it is possible to solve this problem for scan queries by using some clever tricks, all bets are off with SQL queries. Andrey _ From: Valentin Kulichenko Sent: Thursday, February 4, 2016 6:29 AM Subject: Re: Transformers in SCAN queries To: Dmitry, The main difference in my view is that you lose pagination when sending results from servers to the client. What if one wants to iterate through all entries in the cache? On Wed, Feb 3, 2016 at 9:47 PM, Dmitriy Setrakyan wrote: > Valentin, > > Wouldn’t the same effect be achieved by broadcasting a closure to the > cluster and executing a scan query on every node locally? > > D. > > On Wed, Feb 3, 2016 at 9:17 PM, Valentin Kulichenko < > valentin.kuliche...@gmail.com> wrote: > > > Igniters, > > > > I keep getting requests from our users to add optional transformers to > SCAN > > queries. This will allow iterating through the cache without transferring > > whole key-value pairs across the network (e.g., getting only keys). The feature > > looks useful, and I created a ticket [1]. > > > > I am struggling with the design now. The problem is that I wanted to > extend > > the existing ScanQuery object for this, but this seems to be impossible > because > > it already extends Query> and thus can iterate only > > through entries. > > > > The only option I see now is to create a separate query type, copy-paste > > everything from ScanQuery and add a *mandatory* transformer. Something like > > > > this: > > > > ScanTransformQuery extends Query { > > IgniteBiPredicate filter; > > IgniteClosure, R> transformer; > > int part; > > ... > > } > > > > Thoughts? Does anyone have other ideas? > > > > [1] https://issues.apache.org/jira/browse/IGNITE-2546 > > > > -Val > > >
Re: Transformers in SCAN queries
I think the scan query implementation can be more complex than just sending closures to all nodes; e.g., it should handle topology changes. IMO it is not a good idea to use compute instead of queries. On Thu, Feb 4, 2016 at 10:55 AM, Dmitriy Setrakyan wrote: > On Wed, Feb 3, 2016 at 10:28 PM, Valentin Kulichenko < > valentin.kuliche...@gmail.com> wrote: > > > Dmitry, > > > > The main difference in my view is that you lose pagination when sending > > results from servers to the client. What if one wants to iterate through all > > entries in the cache? > > > > I see. Perhaps we should fix the pagination for compute instead of adding > transformers for queries? > > > > > > On Wed, Feb 3, 2016 at 9:47 PM, Dmitriy Setrakyan > > > wrote: > > > > > Valentin, > > > > > > Wouldn’t the same effect be achieved by broadcasting a closure to the > > > cluster and executing a scan query on every node locally? > > > > > > D. > > > > > > On Wed, Feb 3, 2016 at 9:17 PM, Valentin Kulichenko < > > > valentin.kuliche...@gmail.com> wrote: > > > > > > > Igniters, > > > > > > > > I keep getting requests from our users to add optional transformers > to > > > SCAN > > > > queries. This will allow iterating through the cache without > transferring > > > > whole key-value pairs across the network (e.g., getting only keys). The > > feature > > > > looks useful, and I created a ticket [1]. > > > > > > > > I am struggling with the design now. The problem is that I wanted to > > > extend > > > > the existing ScanQuery object for this, but this seems to be impossible > > > because > > > > it already extends Query> and thus can iterate only > > > > through entries. > > > > > > > > The only option I see now is to create a separate query type, > > copy-paste > > > > everything from ScanQuery and add a *mandatory* transformer. Something > > like > > > > this: > > > > > > > > ScanTransformQuery extends Query { > > > > IgniteBiPredicate filter; > > > > IgniteClosure, R> transformer; > > > > int part; > > > > ... 
> > > > } > > > > > > > > Thoughts? Does anyone have other ideas? > > > > > > > > [1] https://issues.apache.org/jira/browse/IGNITE-2546 > > > > > > > > -Val > > > > > > >
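The ScanTransformQuery shape Val proposes can be sketched in a self-contained way as below. JDK functional interfaces stand in for Ignite's IgniteBiPredicate and IgniteClosure, and a plain Map stands in for the cache, so this is a model of the semantics (filter server-side, ship only the transformed result over the network), not the eventual Ignite API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.function.BiFunction;
import java.util.function.BiPredicate;

// Standalone model of the proposed scan-with-transformer semantics
// (IGNITE-2546): entries are filtered where they live, and only the
// transformed values R (e.g. just the keys) would cross the network.
public class ScanTransformDemo {
    static <K, V, R> List<R> scanTransform(Map<K, V> cache,
                                           BiPredicate<K, V> filter,
                                           BiFunction<K, V, R> transformer) {
        List<R> res = new ArrayList<>();
        for (Map.Entry<K, V> e : cache.entrySet())
            if (filter.test(e.getKey(), e.getValue()))
                res.add(transformer.apply(e.getKey(), e.getValue())); // only R is returned
        return res;
    }

    public static void main(String[] args) {
        Map<Integer, String> cache = new TreeMap<>();
        cache.put(1, "one");
        cache.put(2, "two");
        cache.put(3, "three");
        // Return only the keys of entries whose value is longer than 3 chars.
        List<Integer> keys = scanTransform(cache, (k, v) -> v.length() > 3, (k, v) -> k);
        System.out.println(keys); // [3]
    }
}
```

The transformer being mandatory is what distinguishes this from ScanQuery: with an identity transformer it degenerates to a plain scan, so a separate type (rather than an optional field) keeps the two contracts distinct.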