[jira] [Created] (IGNITE-2633) Allow choosing several caches in one 'Sender cache' item
Pavel Konstantinov created IGNITE-2633: -- Summary: Allow choosing several caches in one 'Sender cache' item Key: IGNITE-2633 URL: https://issues.apache.org/jira/browse/IGNITE-2633 Project: Ignite Issue Type: Sub-task Reporter: Pavel Konstantinov Currently the user must create a separate 'Sender cache' item for every cache to be replicated, which is inconvenient. This could be improved by allowing several caches to be selected in the corresponding combobox; the 'Sender cache' options would then be applied to all selected caches in the generated configuration. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-2634) Do not export system columns when they are not shown in the result
Vasiliy Sisko created IGNITE-2634: - Summary: Do not export system columns when they are not shown in the result Key: IGNITE-2634 URL: https://issues.apache.org/jira/browse/IGNITE-2634 Project: Ignite Issue Type: Sub-task Components: wizards Affects Versions: 1.6 Reporter: Vasiliy Sisko Displaying of system columns in the query result can be turned off, but they are exported anyway. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] ignite pull request: Ignite-2509 fixed offHeap_values case
Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/470 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
Re: 'Date' and 'Timestamp' types in SQL queries
Ok, then I propose the following solution: when a user of the C++ client tries to read a 'Date' value while there is a 'Timestamp' value in the stream, an implicit cast from 'Timestamp' to 'Date' happens and the user gets his value. What do you think? Best Regards, Igor On Thu, Feb 11, 2016 at 11:25 PM, Vladimir Ozerov wrote: > I do not think we are going to change BinaryMarshaller that way. > java.util.Date is widely used and accepted data type. To the contrast, > java.sql.Date is very specific data type usually used somewhere near JDBC > layer. > > On Thu, Feb 11, 2016 at 11:06 PM, Igor Sapego > wrote: > > > I guess we should switch to java.sql.Date in BinaryMarshaller then. > > > > Best Regards, > > Igor > > > > On Thu, Feb 11, 2016 at 7:20 PM, Sergi Vladykin < > sergi.vlady...@gmail.com> > > wrote: > > > > > This is because there is no java.util.Date in SQL, we have to either > > treat > > > it as BLOB or as native SQL type Timestamp. We've chosen the latter > > > approach. > > > > > > Sergi > > > > > > 2016-02-11 18:24 GMT+03:00 Igor Sapego : > > > > > > > Sorry, I meant In our Binary marshaler we use *java.util.Date.* > > > > > > > > Best Regards, > > > > Igor > > > > > > > > On Thu, Feb 11, 2016 at 6:12 PM, Igor Sapego > > > wrote: > > > > > > > > > Ok, It seems like I have found what was causing the issue. > > > > > > > > > > In our > > > > > > > > > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.DBTypeEnum: > > > > > > > > > > /** > > > > > * Initialize map of DB types. 
> > > > > */ > > > > > static { > > > > > map.put(int.class, INT); > > > > > map.put(Integer.class, INT); > > > > > map.put(boolean.class, BOOL); > > > > > map.put(Boolean.class, BOOL); > > > > > map.put(byte.class, TINYINT); > > > > > map.put(Byte.class, TINYINT); > > > > > map.put(short.class, SMALLINT); > > > > > map.put(Short.class, SMALLINT); > > > > > map.put(long.class, BIGINT); > > > > > map.put(Long.class, BIGINT); > > > > > map.put(BigDecimal.class, DECIMAL); > > > > > map.put(double.class, DOUBLE); > > > > > map.put(Double.class, DOUBLE); > > > > > map.put(float.class, REAL); > > > > > map.put(Float.class, REAL); > > > > > map.put(Time.class, TIME); > > > > > map.put(Timestamp.class, TIMESTAMP); > > > > > map.put(java.util.Date.class, TIMESTAMP); > > > > > map.put(java.sql.Date.class, DATE); > > > > > map.put(String.class, VARCHAR); > > > > > map.put(UUID.class, UUID); > > > > > map.put(byte[].class, BINARY); > > > > > } > > > > > > > > > > As I was using java.util.Date and not the java.sql.Date it was > > > translated > > > > > as TIMESTAMP > > > > > and not as DATE. Are there any particular reason for java.util.Date > > > being > > > > > treated as a > > > > > TIMESTAMP? > > > > > > > > > > In our Binary marshaler we use java.sql.Date and when I try to > change > > > > > configuration and > > > > > make the Date field to be of the type java.sql.Date I've got an > > error, > > > > > because this field value > > > > > deserialized as java.sql.Date: > > > > > > > > > > class org.apache.ignite.IgniteCheckedException: Failed to execute > SQL > > > > query. > > > > > at > > > > > > > > > > > > > > > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery(IgniteH2Indexing.java:831) > > > > > [...] 
> > > > > at > > > > > > > > > > > > > > > org.apache.ignite.internal.processors.platform.cache.query.PlatformAbstractQueryCursor.iterator(PlatformAbstractQueryCursor.java:134) > > > > > Caused by: org.h2.jdbc.JdbcSQLException: > > "java.lang.ClassCastException: > > > > > java.util.Date cannot be cast to java.sql.Date" > > > > > > > > > > > > > > > Best Regards, > > > > > Igor > > > > > > > > > > On Thu, Feb 11, 2016 at 12:39 PM, Vladimir Ozerov < > > > voze...@gridgain.com> > > > > > wrote: > > > > > > > > > >> There was some changes in how .NET interoperate w/ Java on binary > > > level. > > > > >> No > > > > >> changes were made to cache or query logic. > > > > >> I performed a smoke test in Java and observed that Date field was > > > > >> correctly > > > > >> mapped to H2 date and then vice versa. > > > > >> > > > > >> Probably this is a kind of configuration problem. > > > > >> > > > > >> Vladimir. > > > > >> > > > > >> On Thu, Feb 11, 2016 at 12:41 AM, Dmitriy Setrakyan < > > > > >> dsetrak...@apache.org> > > > > >> wrote: > > > > >> > > > > >> > I remember seeing some work done for the .NET support to provide > > > > better > > > > >> > precision for time data values. Could it be that SQL now > converts > > > > >> > everything to Timestamp because of that? > > > > >> > > > > > >> > D. > > > > >> > > > > > >> > On Wed, Feb 10, 2016 at 10:09 AM, Igor Sapego < > > isap...@gridgain.com > > > > > > > > >> > wrote: > > > > >> > > > > > >> > > Hello, > > > > >> > > > > > > >> > > Recently I've been working on implementation of the Date and > > > > Timestamp > > > > >> > > types support for C++
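The DBTypeEnum mapping quoted in the thread can be reduced to a small standalone sketch; the class below mirrors only the entries relevant to the Date/Timestamp discussion and is an illustration, not Ignite's actual code:

```java
import java.math.BigDecimal;
import java.sql.Time;
import java.sql.Timestamp;
import java.util.HashMap;
import java.util.Map;

/** Standalone mirror of the relevant part of the DBTypeEnum map quoted above,
 *  showing why a java.util.Date field is indexed as TIMESTAMP while only
 *  java.sql.Date maps to DATE. Illustrative; not Ignite's actual class. */
class DbTypeMapping {
    private static final Map<Class<?>, String> MAP = new HashMap<>();

    static {
        MAP.put(Time.class, "TIME");
        MAP.put(Timestamp.class, "TIMESTAMP");
        MAP.put(java.util.Date.class, "TIMESTAMP"); // the surprising entry
        MAP.put(java.sql.Date.class, "DATE");
        MAP.put(BigDecimal.class, "DECIMAL");
        MAP.put(String.class, "VARCHAR");
    }

    /** H2 column type a field of the given class is mapped to. */
    static String h2Type(Class<?> cls) {
        return MAP.get(cls);
    }
}
```

A field declared as java.util.Date therefore lands in an H2 TIMESTAMP column, which is exactly why the ClassCastException above appears once the configuration declares the field as java.sql.Date.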
[jira] [Created] (IGNITE-2636) Server cache metrics for put-get-remove avg time are incorrect when the request is sent from a client
Vladimir Ershov created IGNITE-2636: --- Summary: Server cache metrics for put-get-remove avg time are incorrect when the request is sent from a client Key: IGNITE-2636 URL: https://issues.apache.org/jira/browse/IGNITE-2636 Project: Ignite Issue Type: Bug Components: cache Affects Versions: 1.5.0.final Reporter: Vladimir Ershov Server cache metrics for put-get-remove avg time are incorrect when the request is sent from a client. We should add methods like CacheMetrics#addPutAndGetTimeNanos to all flows where cache modification requests are processed, for all cache types. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
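The core of the proposed fix is simply timing every put/get flow and feeding the duration into the metrics, regardless of which node initiated the request. A minimal standalone sketch in the spirit of the suggested CacheMetrics#addPutAndGetTimeNanos (class and method names are illustrative, not Ignite's API):

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

/** Sketch of the idea behind the proposed fix: wrap each cache operation so
 *  its duration always reaches the metrics, whichever flow it came from.
 *  Names are illustrative, not Ignite's API. */
class PutTimeMetrics {
    private final AtomicLong totalNanos = new AtomicLong();
    private final AtomicLong ops = new AtomicLong();

    /** Run a cache operation and record its duration. */
    <T> T timed(Supplier<T> op) {
        long start = System.nanoTime();
        try {
            return op.get();
        }
        finally {
            totalNanos.addAndGet(System.nanoTime() - start);
            ops.incrementAndGet();
        }
    }

    /** Average operation time in nanoseconds; 0 when nothing was recorded. */
    long avgTimeNanos() {
        long n = ops.get();
        return n == 0 ? 0 : totalNanos.get() / n;
    }
}
```

The bug described above corresponds to some flows bypassing such a wrapper, so client-initiated updates never contribute to the server-side average.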
[jira] [Created] (IGNITE-2641) Improve usability of "SELECT *" SqlQuery.
Vladimir Ozerov created IGNITE-2641: --- Summary: Improve usability of "SELECT *" SqlQuery. Key: IGNITE-2641 URL: https://issues.apache.org/jira/browse/IGNITE-2641 Project: Ignite Issue Type: Task Components: cache Affects Versions: 1.5.0.final Reporter: Vladimir Ozerov Fix For: 1.6 *Case 1*: {code}SELECT * FROM Employee e{code} Result: exception. Reason: the query is expanded to {code}SELECT Employee._key, Employee._val FROM EMPLOYEE e{code} instead of {code}SELECT e._key, e._val FROM EMPLOYEE e{code} *Case 2* {code}SELECT e.* FROM Employee e{code} Result: exception. Reason: hard-coded check in IgniteH2Indexing.generateQuery(): {code} if (!qry.startsWith("*")) throw new IgniteCheckedException(...); {code} *Proposed solution* In addition to the bare asterisk, we should also accept the "[table/alias].*" pattern. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
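The relaxed check proposed above could look roughly like this; a sketch only, the real IgniteH2Indexing code differs:

```java
import java.util.regex.Pattern;

/** Sketch of the relaxed check proposed in IGNITE-2641: accept "SELECT *"
 *  as well as "SELECT [table/alias].*". Illustrative, not Ignite's code. */
class SelectStarCheck {
    // Matches "*" or "identifier.*", followed by the rest of the query.
    private static final Pattern STAR =
        Pattern.compile("(?s)(\\*|\\w+\\.\\*)(\\s.*)?");

    /** qry is the SQL text following the "SELECT " keyword. */
    static boolean isStarSelect(String qry) {
        return STAR.matcher(qry.trim()).matches();
    }
}
```

With such a check, `SELECT e.* FROM Employee e` would pass instead of hitting the hard-coded exception, while ordinary column lists would still be rejected by this code path.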
[jira] [Created] (IGNITE-2642) The second run of MessagingExample prints out the received messages twice
Sergey Kozlov created IGNITE-2642: - Summary: The second run of MessagingExample prints out the received messages twice Key: IGNITE-2642 URL: https://issues.apache.org/jira/browse/IGNITE-2642 Project: Ignite Issue Type: Bug Components: general Affects Versions: 1.5.0.final Reporter: Sergey Kozlov 1. Start ExampleNodeStartup 2. Run MessagingExample twice 3. Take a look at the ExampleNodeStartup output: {noformat} [16:21:40] Ignite node started OK (id=47439543) [16:21:40] Topology snapshot [ver=1, servers=1, clients=0, CPUs=8, heap=3.5GB] [16:22:06] Topology snapshot [ver=2, servers=2, clients=0, CPUs=8, heap=7.1GB] Received unordered message [msg=0, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received unordered message [msg=2, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received unordered message [msg=3, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received unordered message [msg=1, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received unordered message [msg=7, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received unordered message [msg=6, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received unordered message [msg=8, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received unordered message [msg=5, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received unordered message [msg=4, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received unordered message [msg=9, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received ordered message [msg=0, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received ordered message [msg=1, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received ordered message [msg=2, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received ordered message [msg=3, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received ordered message [msg=4, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received ordered message [msg=5, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received ordered message [msg=6, 
fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received ordered message [msg=7, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received ordered message [msg=8, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] Received ordered message [msg=9, fromNodeId=67e76b7b-32a5-4d3e-a7e5-da5ea65bec6d] [16:22:07] Topology snapshot [ver=3, servers=1, clients=0, CPUs=8, heap=3.5GB] [16:22:20] Topology snapshot [ver=4, servers=2, clients=0, CPUs=8, heap=7.1GB] Received unordered message [msg=0, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=1, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=2, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=3, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=0, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=1, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=3, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=2, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=4, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=5, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=6, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=5, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=4, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=7, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=6, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=7, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=8, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=9, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=8, 
fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received unordered message [msg=9, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received ordered message [msg=0, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received ordered message [msg=0, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received ordered message [msg=1, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received ordered message [msg=1, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received ordered message [msg=2, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received ordered message [msg=2, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received ordered message [msg=3, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received ordered message [msg=3, fromNodeId=f8ec637f-cb61-4540-ab84-9e3fd1c2fd25] Received ordered message [msg=4,
[jira] [Created] (IGNITE-2638) "Connection to Ignite Web Agent is not established" form not in focus
Ilya Suntsov created IGNITE-2638: Summary: "Connection to Ignite Web Agent is not established" form not in focus Key: IGNITE-2638 URL: https://issues.apache.org/jira/browse/IGNITE-2638 Project: Ignite Issue Type: Sub-task Components: general Affects Versions: 1.6 Environment: OS X 10.10.5 Safari 9.0.2 (10601.3.9) Reporter: Ilya Suntsov Assignee: Alexey Kuznetsov Priority: Minor Fix For: 1.6 Steps to reproduce: 1. Stop ignite-web-agent 2. Go to https://console.gridgain.com/sql/demo 3. Click on 'Show security token' Result: the text in the form became blurry. Please see the attachment. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Unfriendly "SELECT *"
Folks, I noticed that the following simple *SqlQuery* does not work: SELECT * FROM Employee e The reason is that it is incorrectly expanded to SELECT *Employee*._KEY, *Employee*._VAL FROM Employee *e* ... while the correct form should be: SELECT *e*._KEY, *e*._VAL FROM Employee *e* I understand that this is not very easy to fix because additional query parsing would be required to find out whether the table has an alias or not. Then I tried another approach, which doesn't work either: SELECT e.* FROM Employee e And here the failure is forced by our code intentionally: only "SELECT *" is allowed. This looks trivial to fix to me: just allow "SELECT [table/alias].*" as well. Does anyone see any other problems here? I created the ticket: https://issues.apache.org/jira/browse/IGNITE-2641 Vladimir.
[GitHub] ignite pull request: IGNITE-2483 Cache metrics functionality for c...
GitHub user VladimirErshov opened a pull request: https://github.com/apache/ignite/pull/479 IGNITE-2483 Cache metrics functionality for client nodes should be developed. Added new version of CacheMetricsSnapshot. Fixed merging logic. Added proper put/get/remove time counting on the client side. You can merge this pull request into a Git repository by running: $ git pull https://github.com/VladimirErshov/ignite ignite-2483 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/479.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #479 commit 0bb74d8afcd3523aa51659a791b4078114f73bd3 Author: vershov Date: 2016-02-11T17:43:33Z IGNITE-2483 added metrics on client. Fixed upTime. Redesigned base method and gathering logic.
[GitHub] ignite pull request: IGNITE-2635: Timestamp values can be read as ...
GitHub user isapego opened a pull request: https://github.com/apache/ignite/pull/477 IGNITE-2635: Timestamp values can be read as Date now. You can merge this pull request into a Git repository by running: $ git pull https://github.com/isapego/ignite ignite-2635 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/477.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #477 commit a55f1b5c737e4380ee9e50c0f2bd473904ec08f8 Author: isapegoDate: 2016-02-08T15:56:47Z IGNITE-: Test for the Date type added. commit 39186f4042b7453026bc7eaf088ed5c7b3462f70 Author: isapego Date: 2016-02-08T16:39:59Z IGNITE-: Added ignite::Date class. commit d7bf440fa0eff01babaf0979a4ea0827561f7500 Author: isapego Date: 2016-02-08T16:51:46Z IGNITE-: Added Date reading and writing to binary utils. commit 35a6e1edd7c9a1207b035935f86671c787618b45 Author: isapego Date: 2016-02-08T17:03:12Z IGNITE-: Added Date to BinaryWriterImpl. commit af8b0db6247352ae16c93c65a898255774773173 Author: isapego Date: 2016-02-08T17:19:36Z IGNITE-: Added specialisation WriteTopObject. commit 6f86354fd4ad3f2c0ac62ba3e5333551c6233805 Author: isapego Date: 2016-02-08T17:24:20Z IGNITE-: BinaryRawWriter::WriteDate[Array] implemented. commit 34bb1d68e096e083a85868dcf473f920e54c0af2 Author: isapego Date: 2016-02-08T18:05:37Z IGNITE-: Implemented BinaryReaderImpl::ReadDate[Array]. commit b03b9bfc969d76f76e4a6e5fa401061b919cad47 Author: isapego Date: 2016-02-08T18:08:32Z IGNITE-: Implemented BinaryRaqReader::ReadDate[Array]. commit e36f03bc5b16a074ab9308791d38a35b9bab1d72 Author: isapego Date: 2016-02-08T18:21:15Z IGNITE-: Fix for the test. commit 0a4b2e326b99a38336832a1289ae1461697efb3b Author: isapego Date: 2016-02-08T18:28:58Z IGNITE-: Added test for DateArray. 
commit f08029aeb96a887c679f821f488b5d22b7a612fb Author: isapego Date: 2016-02-08T18:44:16Z IGNITE-: Added BinaryWriter::WriteDate[Array]. commit e80350a4a0ae7bb5ab714f0bcb21b3b14236a8d8 Author: isapego Date: 2016-02-08T18:47:53Z IGNITE-: Added BinaryReader::ReadDate[Array]. commit 7428a3b1b3937d36f048916aeb773a2f058373f3 Author: isapego Date: 2016-02-08T19:06:36Z IGNITE-: Tests for Date type reworked. commit 14d921b89c6d89e0368a044bd3eeff4e4f6a503f Author: isapego Date: 2016-02-09T13:12:26Z IGNITE-: Added timestamp class. commit 0263b602c85f0f5f5c94fb80d4571612fb923813 Author: isapego Date: 2016-02-09T13:28:56Z IGNITE-: Added binary utils for Timestamp. commit 937248c3ace1b070c1c558c7fe17ce2864e735a4 Author: isapego Date: 2016-02-09T13:36:48Z IGNITE-: Timestamp binary type added. commit 37db4a2a748582b7a83b51fdf46f8648e1fa4f87 Author: isapego Date: 2016-02-09T13:41:56Z IGNITE-: Added BinaryWriterImpl::WriteTimestamp[Array](). commit a5583a79bbfec3157ae595b7c22cefc3be614706 Author: isapego Date: 2016-02-09T13:51:03Z IGNITE-: Added BinaryReaderImpl::ReadTimestamp[Array](). commit 8e1a023ff3fbc480a9c6f17a239f7e11ce04f63b Author: isapego Date: 2016-02-09T13:57:05Z IGNITE-: Added BinaryWriter::WriteTimestamp[Array](). commit 8a851fd65a69231ff7477882554143a9f085bfcb Author: isapego Date: 2016-02-09T13:59:17Z IGNITE-: Added BinaryReader::ReadTimestamp[Array](). commit e4f2cdf5e0c7cbffd287c024191caff5e5bed9e1 Author: isapego Date: 2016-02-09T14:01:11Z IGNITE-: Added BinaryRawReader::ReadTimestamp[Array](). commit 26cd9ea51d1f627be21ad2aee3632602b3cf5126 Author: isapego Date: 2016-02-09T14:04:06Z GNITE-: Added BinaryRawWriter::WriteTimestamp[Array](). commit 67cacde49c67a5c86f88c3adf8873b27e1a4 Author: isapego Date: 2016-02-09T14:14:40Z IGNITE-: Added tests for Timestamp binary type. commit 78f7acaaf7812ae9ac700b7da4051baa8bd10807 Author: isapego Date: 2016-02-09T14:33:49Z IGNITE-: Fix for autotools build system. 
commit 5863322968f7f8e2171972a1b0b3ad0c9b2748c7 Author: isapego Date: 2016-02-10T12:22:15Z IGNITE-: Renamed Timestamp::GetNanoseconds -> Timestamp::GetSecondFraction. Removed constructor Timestamp(const Date&). commit
[jira] [Created] (IGNITE-2639) Need to handle security token changing
Pavel Konstantinov created IGNITE-2639: -- Summary: Need to handle security token changing Key: IGNITE-2639 URL: https://issues.apache.org/jira/browse/IGNITE-2639 Project: Ignite Issue Type: Sub-task Reporter: Pavel Konstantinov Assignee: Andrey Novikov -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-2640) [Failed test] ZookeeperIpFinderTest.testFourNodesKillRestartZookeeper
Andrey Gura created IGNITE-2640: --- Summary: [Failed test] ZookeeperIpFinderTest.testFourNodesKillRestartZookeeper Key: IGNITE-2640 URL: https://issues.apache.org/jira/browse/IGNITE-2640 Project: Ignite Issue Type: Test Affects Versions: 1.5.0.final Reporter: Andrey Gura Assignee: Andrey Gura Fix For: 1.6 Test {{ZookeeperIpFinderTest.testFourNodesKillRestartZookeeper}} sometimes fails due to a timeout on {{waitForCondition}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] ignite pull request: ignite-2640 Zookeeper test timeout increased
GitHub user agura opened a pull request: https://github.com/apache/ignite/pull/478 ignite-2640 Zookeeper test timeout increased You can merge this pull request into a Git repository by running: $ git pull https://github.com/agura/incubator-ignite ignite-2640 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/478.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #478 commit 564e72fc2972df943c3ad2ff4b819751f003f270 Author: agura Date: 2016-02-12T12:07:40Z ignite-2640 Zookeeper test timeout increased
[jira] [Created] (IGNITE-2643) ODBC: Potential memory leak during client disconnect.
Vladimir Ozerov created IGNITE-2643: --- Summary: ODBC: Potential memory leak during client disconnect. Key: IGNITE-2643 URL: https://issues.apache.org/jira/browse/IGNITE-2643 Project: Ignite Issue Type: Sub-task Components: odbc Affects Versions: 1.5.0.final Reporter: Vladimir Ozerov Assignee: Igor Sapego Priority: Critical Fix For: 1.6 *Problem* When a client executes a query, we preserve the cursor in a concurrent collection. This could lead to two potential problems: 1) If the client disconnected abruptly, the cursor gets stuck forever => memory leak. 2) A malicious client could flood us with requests that are never closed, until the node runs out of memory. *Proposed solution* 1) When the onDisconnect() callback is triggered, all pending client queries must be released. To achieve this it is better to move "OdbcNioListener.qryCurs" to session meta. 2) When a new request is to be created, we must ensure that a concurrent disconnect is not in progress. Otherwise, we might end up with a leak again. To achieve this, let's guard the "add" logic with a read-write lock. 3) Let's think about a "max-concurrent-cursors-per-connection" property. It could be available from OdbcConfiguration. Once the number of open cursors is exceeded, we must throw an error to the client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
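Points 2 and 3 of the proposed solution can be sketched with a cursor registry whose registration path is guarded against a concurrent disconnect by a read-write lock and capped by a maximum cursor count. This is a minimal illustration of the idea, not Ignite's ODBC code; all names are hypothetical:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Sketch of the proposed per-connection cursor registry: registration is
 *  guarded against a concurrent disconnect, and capped. Illustrative only. */
class CursorRegistry {
    private final ConcurrentMap<Long, Object> cursors = new ConcurrentHashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final int maxCursors;
    private volatile boolean disconnected;

    CursorRegistry(int maxCursors) { this.maxCursors = maxCursors; }

    /** Register a cursor; fails if disconnecting or over the cap
     *  (the cap check is best-effort under heavy concurrency). */
    boolean add(long qryId, Object cursor) {
        lock.readLock().lock(); // many adds may run concurrently
        try {
            if (disconnected || cursors.size() >= maxCursors)
                return false;
            return cursors.putIfAbsent(qryId, cursor) == null;
        }
        finally {
            lock.readLock().unlock();
        }
    }

    /** Release everything on disconnect; no add() can slip in concurrently. */
    void onDisconnect() {
        lock.writeLock().lock(); // excludes all concurrent add() calls
        try {
            disconnected = true;
            cursors.clear(); // real code would close each cursor here
        }
        finally {
            lock.writeLock().unlock();
        }
    }

    int size() { return cursors.size(); }
}
```

Because disconnect takes the write lock while registrations take the read lock, a request can never register a cursor after the cleanup has started, which closes the leak described in the ticket.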
Grid behavior at key deserialization failure during rebalancing
Igniters, At the moment a key deserialization failure during rebalancing causes a strange situation: rebalancing from the node that sent a supply message with the broken key will be cancelled on the current topology. All upcoming supply messages from this node will be ignored, and no new demand messages will be sent to it. But when the topology changes again, the node with the broken key will take part in rebalancing again, until the key deserialization failure happens... again. Do we need to improve this, and if so, how should the key deserialization failure be handled? I see several options: 1) We can inform the user about data loss because of the deserialization problems, but keep the current rebalancing strategy. 2) We can continue rebalancing from this node, but ignore messages with broken keys, and inform the user about the data loss. 3) We can pause rebalancing until deserialization is fixed somehow, for example by shutting down the demanding or supplying node. Thoughts?
[jira] [Created] (IGNITE-2645) Assertion error in ATOMIC cache for invokeAll and cache store
Alexey Goncharuk created IGNITE-2645: Summary: Assertion error in ATOMIC cache for invokeAll and cache store Key: IGNITE-2645 URL: https://issues.apache.org/jira/browse/IGNITE-2645 Project: Ignite Issue Type: Bug Components: cache Affects Versions: ignite-1.4 Reporter: Alexey Goncharuk The assertion happens under the following conditions: * Cache is empty * Cache store contains non-null values for some keys * invokeAll is invoked for those keys The update version is generated when the update request reaches the primary node. Then, we need to read-through the stored values (the cache is empty) and pass them to the transformers. Since the read-through changes the entry version, the subsequent update fails with an assertion because the read-through version is generated later than the update version. The scenario when a read-through is implemented via a separate loop with innerGet() is possible only with invokeAll() because this is the only multi-key cache operation that requires the previous entry value. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
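The version-ordering failure described above can be replayed with a toy model: the update version is generated first, the read-through then stamps the entry with a later version, so the "update must be newer" check fails. This is a deliberately simplified illustration of the ordering problem, not Ignite's versioning code:

```java
/** Toy model of the ordering bug: the update's version is generated when the
 *  request reaches the primary, the read-through then stamps the entry with
 *  a LATER version, and the subsequent update fails the "update version must
 *  be newer" assertion. Illustrative only; names are hypothetical. */
class VersionOrdering {
    private static long clock;

    static synchronized long nextVersion() { return ++clock; }

    /** The check the real code effectively makes before applying an update. */
    static boolean updateApplies(long updateVer, long entryVer) {
        return updateVer > entryVer;
    }

    /** Replays the scenario; returns false, i.e. the update would fail. */
    static boolean invokeAllWithReadThroughPasses() {
        long updateVer = nextVersion();      // generated on the primary first
        long readThroughVer = nextVersion(); // entry version set by read-through
        return updateApplies(updateVer, readThroughVer); // assertion would fire
    }
}
```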
[jira] [Created] (IGNITE-2644) ODBC: Add metrics.
Vladimir Ozerov created IGNITE-2644: --- Summary: ODBC: Add metrics. Key: IGNITE-2644 URL: https://issues.apache.org/jira/browse/IGNITE-2644 Project: Ignite Issue Type: Sub-task Components: odbc Affects Versions: 1.5.0.final Reporter: Vladimir Ozerov Assignee: Igor Sapego Fix For: 1.7 Let's plan this feature for further releases (e.g. 1.7). We should add ODBC metrics. Several ideas on what to count: 1) Currently connected clients 2) Max connected clients 3) Total connected clients 4) SQL requests executed 5) Records fetched 6) Average processing time Anything else? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
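The six counters suggested in the ticket could be collected with something as simple as the holder below; the class and method names are illustrative, not an existing Ignite API:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.LongAdder;

/** Sketch of the suggested ODBC metrics (items 1-6 above). Illustrative only. */
class OdbcMetrics {
    private final AtomicInteger curClients = new AtomicInteger();   // 1
    private final AtomicInteger maxClients = new AtomicInteger();   // 2
    private final LongAdder totalClients = new LongAdder();         // 3
    private final LongAdder sqlRequests = new LongAdder();          // 4
    private final LongAdder recordsFetched = new LongAdder();       // 5
    private final LongAdder totalProcNanos = new LongAdder();       // feeds 6

    void onConnect() {
        totalClients.increment();
        maxClients.accumulateAndGet(curClients.incrementAndGet(), Math::max);
    }

    void onDisconnect() { curClients.decrementAndGet(); }

    void onRequest(long procNanos, long rows) {
        sqlRequests.increment();
        recordsFetched.add(rows);
        totalProcNanos.add(procNanos);
    }

    /** Item 6: average request processing time in nanoseconds. */
    long avgProcNanos() {
        long reqs = sqlRequests.sum();
        return reqs == 0 ? 0 : totalProcNanos.sum() / reqs;
    }

    int currentClients() { return curClients.get(); }
    int maxConnectedClients() { return maxClients.get(); }
}
```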
[jira] [Created] (IGNITE-2637) CPP: Some API functions throw exceptions even if the err parameter is specified.
Igor Sapego created IGNITE-2637: --- Summary: CPP: Some API functions throw exceptions even if the err parameter is specified. Key: IGNITE-2637 URL: https://issues.apache.org/jira/browse/IGNITE-2637 Project: Ignite Issue Type: Bug Components: platforms Affects Versions: 1.5.0.final Reporter: Igor Sapego Happens because at least the internal method {{ReadTopObject}} doesn't have a no-throw version. Check other internal functions as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: 'Date' and 'Timestamp' types in SQL queries
Igor, I will wait for Vova to comment on your last suggestion. Just wanted to add that we should be careful not to lose any precision during the conversion, as we got hit by it in the past. D. On Fri, Feb 12, 2016 at 1:36 AM, Igor Sapego wrote: > Ok, then I propose following solution: when user of the C++ client tries > to read 'Date' value when there is an 'Timestamp' value in a stream > implicit cast from 'Timestamp' to 'Date' happens and user gets his > value. > > What do you think? > > Best Regards, > Igor > > On Thu, Feb 11, 2016 at 11:25 PM, Vladimir Ozerov > wrote: > > > I do not think we are going to change BinaryMarshaller that way. > > java.util.Date is widely used and accepted data type. To the contrast, > > java.sql.Date is very specific data type usually used somewhere near JDBC > > layer. > > > > On Thu, Feb 11, 2016 at 11:06 PM, Igor Sapego > > wrote: > > > > > I guess we should switch to java.sql.Date in BinaryMarshaller then. > > > > > > Best Regards, > > > Igor > > > > > > On Thu, Feb 11, 2016 at 7:20 PM, Sergi Vladykin < > > sergi.vlady...@gmail.com> > > > wrote: > > > > > > > This is because there is no java.util.Date in SQL, we have to either > > > treat > > > > it as BLOB or as native SQL type Timestamp. We've chosen the latter > > > > approach. > > > > > > > > Sergi > > > > > > > > 2016-02-11 18:24 GMT+03:00 Igor Sapego : > > > > > > > > > Sorry, I meant In our Binary marshaler we use *java.util.Date.* > > > > > > > > > > Best Regards, > > > > > Igor > > > > > > > > > > On Thu, Feb 11, 2016 at 6:12 PM, Igor Sapego > > > > > wrote: > > > > > > > > > > > Ok, It seems like I have found what was causing the issue. > > > > > > > > > > > > In our > > > > > > > > > > > > apache.ignite.internal.processors.queryh.h2.IgniteH2Indexing.DBTypeEnum: > > > > > > > > > > > > /** > > > > > > * Initialize map of DB types. 
> > > > > > */ > > > > > > static { > > > > > > map.put(int.class, INT); > > > > > > map.put(Integer.class, INT); > > > > > > map.put(boolean.class, BOOL); > > > > > > map.put(Boolean.class, BOOL); > > > > > > map.put(byte.class, TINYINT); > > > > > > map.put(Byte.class, TINYINT); > > > > > > map.put(short.class, SMALLINT); > > > > > > map.put(Short.class, SMALLINT); > > > > > > map.put(long.class, BIGINT); > > > > > > map.put(Long.class, BIGINT); > > > > > > map.put(BigDecimal.class, DECIMAL); > > > > > > map.put(double.class, DOUBLE); > > > > > > map.put(Double.class, DOUBLE); > > > > > > map.put(float.class, REAL); > > > > > > map.put(Float.class, REAL); > > > > > > map.put(Time.class, TIME); > > > > > > map.put(Timestamp.class, TIMESTAMP); > > > > > > map.put(java.util.Date.class, TIMESTAMP); > > > > > > map.put(java.sql.Date.class, DATE); > > > > > > map.put(String.class, VARCHAR); > > > > > > map.put(UUID.class, UUID); > > > > > > map.put(byte[].class, BINARY); > > > > > > } > > > > > > > > > > > > As I was using java.util.Date and not the java.sql.Date it was > > > > translated > > > > > > as TIMESTAMP > > > > > > and not as DATE. Are there any particular reason for > java.util.Date > > > > being > > > > > > treated as a > > > > > > TIMESTAMP? > > > > > > > > > > > > In our Binary marshaler we use java.sql.Date and when I try to > > change > > > > > > configuration and > > > > > > make the Date field to be of the type java.sql.Date I've got an > > > error, > > > > > > because this field value > > > > > > deserialized as java.sql.Date: > > > > > > > > > > > > lass org.apache.ignite.IgniteCheckedException: Failed to execute > > SQL > > > > > query. > > > > > > at > > > > > > > > > > > > > > > > > > > > > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery(IgniteH2Indexing.java:831) > > > > > > [...] 
> > > > > > at > > > > > > > > > > > > > > > > > > > > > org.apache.ignite.internal.processors.platform.cache.query.PlatformAbstractQueryCursor.iterator(PlatformAbstractQueryCursor.java:134) > > > > > > Caused by: org.h2.jdbc.JdbcSQLException: > > > "java.lang.ClassCastException: > > > > > > java.util.Date cannot be cast to java.sql.Date" > > > > > > > > > > > > > > > > > > Best Regards, > > > > > > Igor > > > > > > > > > > > > On Thu, Feb 11, 2016 at 12:39 PM, Vladimir Ozerov < > > > > voze...@gridgain.com> > > > > > > wrote: > > > > > > > > > > > >> There was some changes in how .NET interoperate w/ Java on > binary > > > > level. > > > > > >> No > > > > > >> changes were made to cache or query logic. > > > > > >> I performed a smoke test in Java and observed that Date field > was > > > > > >> correctly > > > > > >> mapped to H2 date and then vice versa. > > > > > >> > > > > > >> Probably this is a kind of configuration problem. > > > > > >> > > > > > >> Vladimir. > > > > > >> > > > > > >> On Thu, Feb 11, 2016 at 12:41 AM, Dmitriy Setrakyan < > > > > > >> dsetrak...@apache.org> > > > > > >> wrote: > > > > > >>
[jira] [Created] (IGNITE-2646) IgniteCompute.withAsync can execute tasks synchronously
Andrey Gura created IGNITE-2646: --- Summary: IgniteCompute.withAsync can execute tasks synchronously Key: IGNITE-2646 URL: https://issues.apache.org/jira/browse/IGNITE-2646 Project: Ignite Issue Type: Bug Components: compute Affects Versions: 1.5.0.final Reporter: Andrey Gura Assignee: Andrey Gura {{GridTaskWorker}} can invoke the {{reduce}} method in the caller thread. If a task isn't annotated with {{@ComputeTaskMapAsync}}, job mapping will run in the caller thread. Once job mapping is finished, the {{processDelayedResponses}} method will be invoked, and if the delayed responses queue isn't empty the caller thread can eventually invoke the {{reduce}} method and perform reducing synchronously. This can be useful in the case of synchronous execution, but it is strange behavior for the asynchronous case, because the user expects the method to return right after the task is created. Similar behavior is possible in all places where the code invokes the {{GridTaskProcessor.execute()}} method ({{IgniteCompute.broadcast()}}, {{IgniteCache.size()}}, REST handlers, etc.) Related discussion on the dev list: [IgniteCompute.withAsync can execute tasks synchronously|http://apache-ignite-developers.2346864.n4.nabble.com/IgniteCompute-withAsync-can-execute-tasks-synchronously-td7262.html] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: IgniteCompute.withAsync can execute tasks synchronously
I've created ticket https://issues.apache.org/jira/browse/IGNITE-2646 On Thu, Feb 11, 2016 at 4:12 PM, Andrey Gura wrote: > Dmitry, > > GridTaskProcessor doesn't know what kind of IgniteCompute implementation > was used by client code. So we need some kind of flag that will tell > GridTaskProcessor: "execute the task in the pool, not in the caller thread". > > > On Wed, Feb 10, 2016 at 11:56 PM, Dmitriy Setrakyan > wrote: >> Andrey, >> >> I think we should keep it simple. From the API standpoint, I am not sure >> why not just always execute the task asynchronously every time the >> withAsync() API is invoked? Why add additional parameters to the API? >> >> D. >> >> On Wed, Feb 10, 2016 at 6:53 AM, Andrey Gura wrote: >> >> > Guys, >> > >> > during debugging of a failed test >> > (GridSessionCheckpointSelfTest.testSharedFsCheckpoint) I've noticed that >> > GridTaskWorker can invoke the reduce() method in the caller thread. >> > >> > If a task isn't annotated with @ComputeTaskMapAsync, job mapping will be >> > run in the caller thread. Once job mapping is finished, the >> > processDelayedResponses() method will be invoked, and if the delayed responses >> > queue isn't empty the caller thread can eventually invoke the reduce() method >> > and perform reducing synchronously. >> > >> > This can be useful in the case of synchronous execution but, IMHO, it is very >> > strange behavior for the asynchronous case, because the user expects the method >> > to return right after the task is created. >> > >> > Similar behavior is possible in all places where the code invokes the >> > GridTaskProcessor.execute() method (IgniteCompute.broadcast(), >> > IgniteCache.size(), REST handlers, etc.) >> > >> > I see three options to fix the problem: >> > >> > 1. Remove the GridTaskWorker.processDelayedResponses() method and all its calls. >> > Perhaps performance can suffer a little bit (but I'm not sure). >> > >> > 2. Add a special flag to the execute method (e.g. usePool) that will >> > guarantee that the task will not be executed in the caller thread. Of course this >> > flag should be added to all methods in the call chain. >> > >> > 3. Use the task processor thread context (GridTaskProcessor.thCtx) and a special >> > key that represents the requirement to execute the task in the pool, similar to >> > the usePool flag. >> > >> > In the case of the 2nd and 3rd options we should analyze every usage of the >> > GridTaskProcessor.execute() method and decide whether the caller thread should >> > execute the task or not. >> > >> > Maybe I missed something and there is a better way to solve this problem. >> > >> > I will be grateful for any advice or idea. >> > >> > -- >> > Andrey Gura >> > GridGain Systems, Inc. >> > www.gridgain.com >> > > > > -- > Andrey Gura > GridGain Systems, Inc. > www.gridgain.com > -- Andrey Gura GridGain Systems, Inc. www.gridgain.com
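The usePool guarantee discussed in option 2 can be sketched with plain java.util.concurrent primitives. This is a simplified illustration of the intended contract (task never reduced in the caller thread), not the actual GridTaskProcessor code:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncGuaranteeSketch {
    // Daemon threads so the JVM can exit without an explicit shutdown.
    static final ExecutorService pool = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // Sketch of the "usePool" idea: always hand the task to the pool, so it
    // is never executed in the caller thread, even when a result is already
    // available and could be reduced synchronously.
    static <T> Future<T> executeAsync(Callable<T> task) {
        return pool.submit(task);
    }

    public static void main(String[] args) throws Exception {
        Thread caller = Thread.currentThread();
        Future<String> fut = executeAsync(() ->
            // With a real pool this is never the caller thread.
            Thread.currentThread() == caller ? "caller" : "pool");
        System.out.println(fut.get()); // prints "pool"
    }
}
```

The hard part the thread discusses is not the mechanism itself but deciding, for every existing GridTaskProcessor.execute() call site, whether the caller-thread shortcut is acceptable.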
[GitHub] ignite pull request: ignite-169 GridSessionCheckpointSelfTest.test...
Github user agura closed the pull request at: https://github.com/apache/ignite/pull/472 ---
[GitHub] ignite pull request: ignite-2588 Test fixed
Github user agura closed the pull request at: https://github.com/apache/ignite/pull/463 ---
[jira] [Created] (IGNITE-2647) Cache is undeployed even when BinaryMarshaller is used
Denis Magda created IGNITE-2647: --- Summary: Cache is undeployed even when BinaryMarshaller is used Key: IGNITE-2647 URL: https://issues.apache.org/jira/browse/IGNITE-2647 Project: Ignite Issue Type: Bug Components: cache Affects Versions: 1.5.0.final Reporter: Denis Magda Assignee: Denis Magda Priority: Blocker Fix For: 1.6 Even when we use BinaryMarshaller, a cache can be undeployed in SHARED (ISOLATED, PRIVATE) modes in the following case: - start a remote server node; - start a client node; - create a new cache from the client; - send a compute job to the server that triggers loading of the job class from the client to the server; - stop the server; - both the job class and the cache will be undeployed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Unfriendly "SELECT *"
Sergi, This problem is more about aliases than about "SELECT *". The query FROM Employee e doesn't work either. And this is the problem, because as soon as JOINs appear, aliases greatly help to reduce SQL boilerplate. As I understand, it is hard to make this query work due to complex parsing. But we can make the query SELECT e.* FROM Employee e ... work with minimal effort. On Fri, Feb 12, 2016 at 8:42 PM, Sergi Vladykin wrote: > Use SqlFieldsQuery, Luke! I tried, it works! :) > > I was always against SELECT in SqlQuery, it was a terrible design decision, > but for "historical reasons" it is supported in SqlQuery. > > As for adding new parsing, the fancier the parsing we introduce, the > worse performance we will have. > > Sergi > > 2016-02-12 15:11 GMT+03:00 Vladimir Ozerov : > > > Folks, > > > > I noticed that the following simple *SqlQuery* doesn't work: > > SELECT * FROM Employee e > > > > The reason is that it is incorrectly expanded to > > SELECT *Employee*._KEY, *Employee*._VAL FROM Employee *e* > > > > ... while the correct form should be: > > SELECT *e*._KEY, *e*._VAL FROM Employee *e* > > > > I understand that this is not very easy to fix, because excessive query > > parsing will be required to find out whether the table has an alias or not. > > > > Then I tried another approach, which doesn't work either: > > SELECT e.* FROM Employee e > > > > And here the failure is forced by our code intentionally: only "SELECT *" > > is allowed. > > > > This looks very trivial to fix for me: just allow "SELECT [table/alias].*" > > as well. Does anyone see any other problems here? > > > > I created the ticket: https://issues.apache.org/jira/browse/IGNITE-2641 > > > > Vladimir. > > >
Re: Unfriendly "SELECT *"
Use SqlFieldsQuery, Luke! I tried, it works! :) I was always against SELECT in SqlQuery, it was a terrible design decision, but for "historical reasons" it is supported in SqlQuery. As for adding new parsing, the fancier the parsing we introduce, the worse performance we will have. Sergi 2016-02-12 15:11 GMT+03:00 Vladimir Ozerov: > Folks, > > I noticed that the following simple *SqlQuery* doesn't work: > SELECT * FROM Employee e > > The reason is that it is incorrectly expanded to > SELECT *Employee*._KEY, *Employee*._VAL FROM Employee *e* > > ... while the correct form should be: > SELECT *e*._KEY, *e*._VAL FROM Employee *e* > > I understand that this is not very easy to fix, because excessive query > parsing will be required to find out whether the table has an alias or not. > > Then I tried another approach, which doesn't work either: > SELECT e.* FROM Employee e > > And here the failure is forced by our code intentionally: only "SELECT *" > is allowed. > > This looks very trivial to fix for me: just allow "SELECT [table/alias].*" > as well. Does anyone see any other problems here? > > I created the ticket: https://issues.apache.org/jira/browse/IGNITE-2641 > > Vladimir. >
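The alias-aware expansion Vladimir describes can be illustrated with a toy rewrite. Real Ignite/H2 parsing is far more involved; this sketch only handles the simplest "SELECT * FROM <table> [alias]" shape, and the class and method names are illustrative:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StarExpansionSketch {
    // Toy illustration of the fix discussed above: when the table in
    // "SELECT * FROM <table> [alias]" has an alias, expand the star using
    // the alias rather than the table name, so the generated _KEY/_VAL
    // columns resolve correctly.
    static String expandStar(String sql) {
        Matcher m = Pattern.compile(
            "(?i)SELECT \\* FROM (\\w+)(?: (\\w+))?").matcher(sql);
        if (!m.matches())
            return sql; // anything more complex is left untouched here

        String ref = m.group(2) != null ? m.group(2) : m.group(1);
        return "SELECT " + ref + "._KEY, " + ref + "._VAL FROM "
            + m.group(1) + (m.group(2) != null ? " " + m.group(2) : "");
    }

    public static void main(String[] args) {
        System.out.println(expandStar("SELECT * FROM Employee e"));
        // -> SELECT e._KEY, e._VAL FROM Employee e
        System.out.println(expandStar("SELECT * FROM Employee"));
        // -> SELECT Employee._KEY, Employee._VAL FROM Employee
    }
}
```

This also makes Sergi's performance concern concrete: even this trivial rewrite needs a parse of the FROM clause, and every additional supported shape adds parsing cost on the query hot path.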
Async and sync ops in IgniteQueue and IgniteSet implementations
Hello all, This may be a very dumb question (and feel free to "reprimand" me ;-) but I will ask it anyways. I am working on https://issues.apache.org/jira/browse/IGNITE-1144 and one of the comments on the submitted code was that I marked both methods I am implementing as @IgniteAsyncSupported but I never actually provided an async implementation. All my methods do is call methods already implemented in IgniteCompute.java (after verifying that the methods are called on data structures that are collocated), and my assumption was that the methods being called are already implemented in async/sync versions. Specifically, these methods are affinityRun() and affinityCall(). Furthermore, I noticed the following comment in IgniteQueue.java, for example: "* All queue operations have synchronous and asynchronous counterparts.". However, after looking at all the implementations of the IgniteQueue interface in the code base, I could not find any actual implementations of asynchronous calls on any methods of queue or set. Am I just missing something really basic? Thanks!
Re: Async and sync ops in IgniteQueue and IgniteSet implementations
Hi, First of all, the JavaDoc is incorrect; there are no async counterparts for queue and set operations in the current API. The question is - do we need them? I think we should have them for the new affinityRun and affinityCall methods that you're adding, but I'm not sure about the others. Does anyone have thoughts on this? -Val On Fri, Feb 12, 2016 at 11:07 AM, Dood@ODDO wrote: > Hello all, > > This may be a very dumb question (and feel free to "reprimand" me ;-) but > I will ask it anyways. > > I am working on https://issues.apache.org/jira/browse/IGNITE-1144 and one > of the comments on the submitted code was that I marked both methods I am > implementing as @IgniteAsyncSupported but I never actually provided an async > implementation. All my methods do is call methods already implemented in > IgniteCompute.java (after verifying that the methods are called on data > structures that are collocated), and my assumption was that the methods > being called are already implemented in async/sync versions. Specifically, > these methods are affinityRun() and affinityCall(). > > Furthermore, I noticed the following comment in IgniteQueue.java, for > example: > "* All queue operations have synchronous and asynchronous counterparts.". > > However, after looking at all the implementations of the IgniteQueue interface > in the code base, I could not find any actual implementations of > asynchronous calls on any methods of queue or set. Am I just missing > something really basic? > > Thanks! >
Re: Async and sync ops in IgniteQueue and IgniteSet implementations
Val, My question was also - if all I am doing is calling affinityRun() and affinityCall() that are already implemented elsewhere (just making sure it is done on a collocated queue/set) - do I need to do anything special (looking at IgniteComputeImpl.java, it looks like affinityRun()/Call() are already implemented in an async fashion)? Thanks! On 2/12/2016 1:49 PM, Valentin Kulichenko wrote: Hi, First of all, the JavaDoc is incorrect; there are no async counterparts for queue and set operations in the current API. The question is - do we need them? I think we should have them for the new affinityRun and affinityCall methods that you're adding, but I'm not sure about the others. Does anyone have thoughts on this? -Val On Fri, Feb 12, 2016 at 11:07 AM, Dood@ODDO wrote: Hello all, This may be a very dumb question (and feel free to "reprimand" me ;-) but I will ask it anyways. I am working on https://issues.apache.org/jira/browse/IGNITE-1144 and one of the comments on the submitted code was that I marked both methods I am implementing as @IgniteAsyncSupported but I never actually provided an async implementation. All my methods do is call methods already implemented in IgniteCompute.java (after verifying that the methods are called on data structures that are collocated), and my assumption was that the methods being called are already implemented in async/sync versions. Specifically, these methods are affinityRun() and affinityCall(). Furthermore, I noticed the following comment in IgniteQueue.java, for example: "* All queue operations have synchronous and asynchronous counterparts.". However, after looking at all the implementations of the IgniteQueue interface in the code base, I could not find any actual implementations of asynchronous calls on any methods of queue or set. Am I just missing something really basic? Thanks!
Re: Async and sync ops in IgniteQueue and IgniteSet implementations
For affinityRun and affinityCall you don't need to implement anything for async support, because this is already supported by IgniteCompute. But you have to add async support to the API as I described earlier in the ticket, because currently there is no way to get the Future from queue or set. -Val On Fri, Feb 12, 2016 at 11:53 AM, Dood@ODDO wrote: > Val, > > My question was also - if all I am doing is calling affinityRun() and > affinityCall() that are already implemented elsewhere (just making sure it > is done on a collocated queue/set) - do I need to do anything special > (looking at IgniteComputeImpl.java, it looks like affinityRun()/Call() are > already implemented in an async fashion)? > > Thanks! > > > On 2/12/2016 1:49 PM, Valentin Kulichenko wrote: > >> Hi, >> >> First of all, the JavaDoc is incorrect; there are no async counterparts for >> queue and set operations in the current API. >> >> The question is - do we need them? I think we should have them for the new >> affinityRun and affinityCall methods that you're adding, but I'm not sure >> about the others. >> >> Does anyone have thoughts on this? >> >> -Val >> >> On Fri, Feb 12, 2016 at 11:07 AM, Dood@ODDO wrote: >> >> Hello all, >>> >>> This may be a very dumb question (and feel free to "reprimand" me ;-) but >>> I will ask it anyways. >>> >>> I am working on https://issues.apache.org/jira/browse/IGNITE-1144 and >>> one >>> of the comments on the submitted code was that I marked both methods I am >>> implementing as @IgniteAsyncSupported but I never actually provided an >>> async >>> implementation. All my methods do is call methods already implemented >>> in >>> IgniteCompute.java (after verifying that the methods are called on data >>> structures that are collocated), and my assumption was that the methods >>> being called are already implemented in async/sync versions. >>> Specifically, >>> these methods are affinityRun() and affinityCall().
>>> >>> Furthermore, I noticed the following comment in IgniteQueue.java, for >>> example: >>> ""* All queue operations have synchronous and asynchronous >>> counterparts."". >>> >>> However, after looking at all the implementations of IgniteQueue >>> interface >>> in the code base, I could not find any actual implementations of >>> asynchronous calls on any methods of queue or set. Am I just missing >>> something really basic? >>> >>> Thanks! >>> >>> >
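The future-returning async counterpart Val describes can be sketched with standard CompletableFuture. The method names below are illustrative stand-ins, not the actual IgniteQueue/IgniteSet API; the point is only the shape: a sync variant that blocks and an async variant that hands back a future:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class QueueAffinityAsyncSketch {
    // Daemon threads so the JVM can exit without an explicit shutdown.
    static final ExecutorService exec = Executors.newFixedThreadPool(2, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // Sync variant: submit the job and block until it completes.
    static <T> T affinityCall(Callable<T> job) throws Exception {
        return exec.submit(job).get();
    }

    // Async variant: return a future so the caller can obtain the result
    // later instead of blocking. This is what the current queue/set API
    // has no way to expose.
    static <T> CompletableFuture<T> affinityCallAsync(Callable<T> job) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                return job.call();
            } catch (Exception e) {
                throw new CompletionException(e);
            }
        }, exec);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(affinityCall(() -> 1 + 1));              // prints 2
        System.out.println(affinityCallAsync(() -> 21 * 2).join()); // prints 42
    }
}
```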
Re: labeling tickets that come from user list
OK, “community” it is then. Let us all follow this guideline to ensure that we are able to properly prioritize Jira tickets. On Fri, Feb 12, 2016 at 12:40 PM, Denis Magda wrote: > I label such tickets as "community". > > It aggregates tickets coming from both the user & dev lists. > > Let's use this label? > > On Friday, February 12, 2016, Dmitriy Setrakyan > wrote: > > > Igniters, > > > > I think that we should start labeling tickets that come from user lists. > > Such issues usually get higher priority, as they are encountered by the > > users, and we should have a convenient way of finding them. > > > > How about adding the label “user” to the ticket? > > > > D. > > >
Ignite components deserialization
Folks, I reopened the ticket where we improved the serialization of Ignite components [1]. From what I can see, the fix was made for IgniteKernal, but not for other classes like ClusterGroupAdapter, GridKernalContextImpl and others. What is the reason for this? Vladimir Ershov, it looks like you were working on this, can you please respond? [1] https://issues.apache.org/jira/browse/IGNITE-10 -Val
Re: labeling tickets that come from user list
I label such tickets as "community". It aggregates tickets coming from both the user & dev lists. Let's use this label? On Friday, February 12, 2016, Dmitriy Setrakyan wrote: > Igniters, > > I think that we should start labeling tickets that come from user lists. > Such issues usually get higher priority, as they are encountered by the > users, and we should have a convenient way of finding them. > > How about adding the label “user” to the ticket? > > D. >
Re: Ignite components deserialization
I have also looked at the ticket, and I do not see if the review was done there. Can I please ask all the committers to make sure to add review comments to the tickets? Otherwise, it is impossible to understand the history of the fix. Thanks, D. On Fri, Feb 12, 2016 at 3:30 PM, Valentin Kulichenko < valentin.kuliche...@gmail.com> wrote: > Folks, > > I reopened the ticket where we improved the serialization of Ignite > components [1]. > > From what I can see, the fix was made for IgniteKernal, but not for other > classes like ClusterGroupAdapter, GridKernalContextImpl and others. What is > the reason for this? > > Vladimir Ershov, it looks like you were working on this, can you please > respond? > > [1] https://issues.apache.org/jira/browse/IGNITE-10 > > -Val >
Re: Grid behavior at key deserialization failure during rebalancing
Anton, I am not sure I fully grok the use case. Can you please explain why a key can be broken? D. On Fri, Feb 12, 2016 at 7:11 AM, Anton Vinogradov wrote: > Igniters, > > At this moment a key deserialization failure during rebalancing causes a strange > situation: > > Rebalancing from the node that sent a supply message with a broken key will be cancelled > at the current topology. > All upcoming supply messages from this node will be ignored, and no new > demand messages to this node will be sent. > > But when the topology changes again, the node with the broken key will take > part in rebalancing again, until the key deserialization failure happens ... > again. > > Do we need to improve this situation, and if we do, how should the > case of a key deserialization failure be handled? > > I see some ways: > 1) We can inform the user about data loss because of deserialization problems, > but keep the current rebalancing strategy. > 2) We can continue rebalancing from this node, but ignore messages with > broken keys, and inform the user about data loss. > 3) We can pause rebalancing until the deserialization is fixed somehow, > for example by shutting down the demanding or supplying node. > > Thoughts? >
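Option 2 above (skip broken keys, keep rebalancing, report the loss) can be sketched in isolation. Everything here is illustrative: the decoder is a stand-in for real key deserialization, and the method names are not Ignite APIs:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RebalanceSkipSketch {
    // Sketch of option 2: instead of cancelling rebalancing on the first key
    // that fails to deserialize, skip that entry, keep processing the rest of
    // the supply message, and return how many entries were lost so the user
    // can be informed.
    static int applySupply(List<byte[]> keys, Map<String, byte[]> cache) {
        int lost = 0;
        for (byte[] raw : keys) {
            try {
                cache.put(decode(raw), raw);
            } catch (IllegalArgumentException e) {
                lost++; // data loss: record it instead of aborting rebalancing
            }
        }
        return lost;
    }

    // Stand-in for key deserialization: an empty payload is "broken".
    static String decode(byte[] raw) {
        if (raw.length == 0)
            throw new IllegalArgumentException("broken key");
        return new String(raw);
    }

    public static void main(String[] args) {
        Map<String, byte[]> cache = new HashMap<>();
        List<byte[]> supply = Arrays.asList("a".getBytes(), new byte[0], "b".getBytes());
        int lost = applySupply(supply, cache);
        System.out.println("entries: " + cache.size() + ", lost: " + lost);
        // -> entries: 2, lost: 1
    }
}
```

The trade-off against option 1 is visible here: the node stays a usable supplier, but the loss count must actually reach the user, otherwise silent data loss is worse than the cancelled rebalance.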