Re: [VOTE] Release Apache Ignite 3.0.0-beta1 RC2

2022-11-15 Thread Alexander Lapin
+1

Tue, 15 Nov 2022 at 08:48, Pavel Tupitsyn :

> +1 (binding)
>
> On Mon, Nov 14, 2022 at 9:05 PM Вячеслав Коптилин <
> slava.kopti...@gmail.com>
> wrote:
>
> > Dear Community,
> >
> > Ignite 3 is moving forward and I think we're in a good spot to release
> the
> > first beta version. In the last few months the following major features
> > have been added:
> > - RPM and DEB packages: simplified installation and node management with
> > system services.
> > - Client's Partition Awareness: Clients are now aware of data distribution
> > over the cluster nodes, which helps avoid additional network transmissions
> > and lowers operation latency.
> > - C++ client: Basic C++ client, able to perform operations on data.
> > - Autogenerated values: now a function can be specified as a default value
> > generator during table creation. Currently only gen_random_uuid is
> > supported.
> > - SQL Transactions.
> > - Transactional Protocol: improved locking model, multi-version based
> > lock-free read-only transactions.
> > - Storage: A number of improvements to memory-only and on-disk engines
> > based on Page Memory.
> > - Indexes: Basic functionality, hash and sorted indexes.
> > - Client logging: A LoggerFactory may be provided during client creation
> to
> > specify a custom logger for logs generated by the client.
> > - Metrics framework: Collection and export of cluster metrics.
> >
> > I propose to release 3.0.0-beta1 with the features listed above.
> >
> > Release Candidate:
> > https://dist.apache.org/repos/dist/dev/ignite/3.0.0-beta1-rc2/
> > Maven Staging:
> > https://repository.apache.org/content/repositories/orgapacheignite-1556/
> > Tag: https://github.com/apache/ignite-3/tree/3.0.0-beta1-rc2
> >
> > +1 - accept Apache Ignite 3.0.0-beta1 RC2
> >  0 - don't care either way
> > -1 - DO NOT accept Apache Ignite 3.0.0-beta1 RC2 (explain why)
> >
> > Voting guidelines: https://www.apache.org/foundation/voting.html
> > How to verify the release: https://www.apache.org/info/verification.html
> >
> > The vote will be closed on Wednesday, 16 November 2022, 18:00:00 (UTC
> time)
> >
> >
> https://www.timeanddate.com/countdown/generic?iso=20221116T18=1440=Apache+Ignite+3.0.0-beta1+RC2=cursive=1
> >
> > Thanks,
> > S.
> >
>


Re: [ANNOUNCE] SCOPE FREEZE for Apache Ignite 3.0.0 beta 1 RELEASE

2022-11-03 Thread Alexander Lapin
Hi Igniters,

I would like to ask you to add one more bugfix to the beta1:
https://issues.apache.org/jira/browse/IGNITE-18003

Best regards,
Aleksandr

Tue, 1 Nov 2022 at 17:17, Aleksandr Pakhomov :

> Hi Igniters,
>
> I would like to ask you to add two more tickets to the beta1:
>
> - https://issues.apache.org/jira/browse/IGNITE-18036 <
> https://issues.apache.org/jira/browse/IGNITE-18036>
> - https://issues.apache.org/jira/browse/IGNITE-18025 <
> https://issues.apache.org/jira/browse/IGNITE-18025>
>
> Both of them now have PRs targeting the main branch.
>
> Best regards,
> Aleksandr
>
> > On 15 Oct 2022, at 00:45, Andrey Gura  wrote:
> >
> > Igniters,
> >
> > The 'ignite-3.0.0-beta1' branch was created (the latest commit is
> > 8160ef31ecf8d49f227562b6f0ab090c6b4438c1).
> >
> > The scope for the release is frozen.
> >
> > It means the following:
> >
> > - Any issue can be added to the release (fixVersion == 3.0.0-beta1)
> > only after discussion with the community and the release manager in this
> > thread.
> > - Any commit to the release branch must also be applied to the 'main'
> > branch.
>
>


Re: [ANNOUNCE] SCOPE FREEZE for Apache Ignite 3.0.0 beta 1 RELEASE

2022-10-19 Thread Alexander Lapin
Igniters,

I would like to add the following tickets to ignite-3.0.0-beta1:
https://issues.apache.org/jira/browse/IGNITE-17806
https://issues.apache.org/jira/browse/IGNITE-17759
https://issues.apache.org/jira/browse/IGNITE-17637
https://issues.apache.org/jira/browse/IGNITE-17263
https://issues.apache.org/jira/browse/IGNITE-17260

All of them are about read-only transactions.

Best regards,
Alexander

Fri, 14 Oct 2022 at 19:45, Andrey Gura :

> Igniters,
>
> The 'ignite-3.0.0-beta1' branch was created (the latest commit is
> 8160ef31ecf8d49f227562b6f0ab090c6b4438c1).
>
> The scope for the release is frozen.
>
> It means the following:
>
> - Any issue can be added to the release (fixVersion == 3.0.0-beta1)
> only after discussion with the community and the release manager in this
> thread.
> - Any commit to the release branch must also be applied to the 'main'
> branch.
>


Re: [VOTE] Add micronaut dependency to Ignite 3 (reopened)

2022-06-01 Thread Alexander Lapin
+1

Wed, 1 Jun 2022 at 12:14, ткаленко кирилл :

> + 1
>


Re: Node and cluster life-cycle in ignite-3

2021-06-03 Thread Alexander Lapin
Alexei, in general I agree with the proposal.
However, I'd also like to clarify a few things:
1. start() is called in "depends on" relation order.
2. It might make sense to add a runLevel parameter to start(): start(int
runLevel).
3. It seems we don't need an afterStart() method, because we already have
component construction (either via constructor or DI) and component
initialization via start(). Before component.start(), all dependencies of a
given component are already initialized; after component.start(), the
component itself is fully initialized.
4. Each component has its own busyLock.
5. beforeStop() is called without busyLock.block(). Currently it makes sense
mostly for the Network and MetaStorage components, where we prevent any new
distributed communication by throwing a NodeStoppingException, which
guarantees graceful termination on the sender side, including closing
sender-scoped resources such as cursors and futures.
6. After components.forEach(beforeStop()), the shutdown process calls
busyLock.block() on every component and stops them in "depends on" relation
order. (A minimal sketch of this lifecycle follows below.)
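
To make the contract above concrete, here is a minimal, self-contained sketch
in Java. All names in it (ManagedComponent, NodeLifecycle, etc.) are
illustrative assumptions for this discussion, not the agreed Ignite 3 API.

{code:java}
// Minimal sketch of the lifecycle discussed above; names are illustrative only.
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

interface ManagedComponent {
    void start();       // Called in "depends on" relation order (point 1).
    void beforeStop();  // Called before stop, without blocking the busy lock (point 5).
    void stop();        // Called after busyLock.block(), in "depends on" relation order (point 6).
}

abstract class AbstractComponent implements ManagedComponent {
    /** Each component has its own busy lock (point 4); a plain flag stands in for it here. */
    protected final AtomicBoolean stopping = new AtomicBoolean();

    /** Guards public operations; stands in for enterBusy()/NodeStoppingException. */
    protected void enterBusy() {
        if (stopping.get())
            throw new IllegalStateException("Node is stopping");
    }
}

final class NodeLifecycle {
    /** Components ordered by the "depends on" relation. */
    private final List<ManagedComponent> components;

    NodeLifecycle(List<ManagedComponent> components) {
        this.components = components;
    }

    void startAll() {
        components.forEach(ManagedComponent::start);
    }

    void stopAll() {
        components.forEach(ManagedComponent::beforeStop); // Reject new operations first.
        components.forEach(ManagedComponent::stop);       // Then block busy locks and stop.
    }
}
{code}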

Best regards,
Alexander

чт, 3 июн. 2021 г. в 17:37, Alexei Scherbakov :

> Sergey, I'm ok with the runlevel approach.
>
> I've thought about the node/components lifecycle, here are my ideas:
>
> 1. The Component interface.
> Each manageable component must implement it.
>
> 2. Define components hierarchy.
> Each component can depend on others - this produces component hierarchy
> defined by "depends on" relation.
> For example, RaftGroup depends on a ClusterService to send messages to
> other nodes.
>
> 3. Cyclic dependencies in the component hierarchy are forbidden.
>
> 4. Some form of dependency injection for easier component construction.
>
> 5. Transparent component lifecycle, defined by following methods:
>
> void start(); // Start a component
> void afterStart(); // Called when all component dependencies are
> initialized. Invoked in reverse to "depends on" relation order.
> void beforeStop();  // Called before the component is going to stop (for
> example, to cancel a pending operation) Invoked in reverse to "depends on"
> relation order.
> void stop(); // Stop a component. Invoked in "depends on" relation order.
> In the example above, Ignite is stopped before the Network.
> boolean isStopping(); // Flag to check if a node is stopping right now.
> boolean runnable(int runLevel); // Defines if a component has to be started
> on a specific run level.
>
> 6. Dynamic components (can be started/stopped at any time)
>
> 7. enterBusy/leaveBusy/block logic (similar to Ignite2) to avoid races on
> node stopping.
>
> чт, 3 июн. 2021 г. в 13:08, Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
> > Hi Sergey,
> >
> > Sounds interesting, I do agree that it might be beneficial to improve the
> > lifecycle management in 3.0 - 2.x version is far from perfect.
> >
> > Regarding your questions:
> >
> > 1. Can this be done via the metastore?
> > 2. I think we should list the run levels that we think should be there,
> and
> > then it will be easier to identify dependencies between them. Can you
> give
> > an example of independent run levels?
> >
> > -Val
> >
> > On Tue, Jun 1, 2021 at 7:57 AM Sergey Chugunov <
> sergey.chugu...@gmail.com>
> > wrote:
> >
> > >  Hello Igniters,
> > >
> > > I would like to start a discussion on evolving IEP-73 [1]. Now it
> covers
> > a
> > > narrow topic about components dependencies but it makes sense to cover
> in
> > > the IEP a broader question: how different components should be
> > initialized
> > > to support different modes of an individual node or a whole cluster.
> > >
> > > There is an idea to borrow the notion of run-levels from Unix-like
> > systems,
> > > and I suggest the following design to implement it.
> > >
> > >    1. To start and function at a specific run-level, a node needs to start
> > >    and initialize components in a proper order. During initialization,
> > >    components may need to notify each other about reaching a particular
> > >    run-level so other components are able to execute their actions.
> > >    Orchestrating this process should be the responsibility of a new
> > >    component.
> > >
> > >    2. The orchestration component doesn't manage the initialization process
> > >    directly but uses another abstraction called a scenario. Examples of
> > >    run-levels in the context of Ignite 2.x may be Maintenance Mode and the
> > >    INACTIVE-READONLY-ACTIVE states of a cluster, and each level is reached
> > >    when a corresponding scenario has executed.
> > >
> > >    So the responsibility of the orchestrator will be managing scenarios and
> > >    providing them with the infrastructure for spreading notification events
> > >    between components. All low-level details and knowledge of existing
> > >    components and their dependencies are encapsulated inside scenarios.
> > >
> > >3. Scenarios allow nesting, e.g. a scenario for INACTIVE cluster
> state
> 

[jira] [Created] (IGNITE-14599) Add generic way to bootstrap configuration

2021-04-20 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-14599:


 Summary: Add generic way to bootstrap configuration 
 Key: IGNITE-14599
 URL: https://issues.apache.org/jira/browse/IGNITE-14599
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexander Lapin
Assignee: Alexander Lapin


Currently it's possible to bootstrap the configuration only with a JSON-formatted
string.
{code:java}
Ignite start(@Nullable String jsonStrBootstrapCfg);
{code}
Instead of this temporary solution, some sort of generic implementation should
be used, for example a configuration file supporting a set of formats: JSON, HOCON.
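
A purely illustrative sketch of what a more generic entry point might look like;
neither the Path-based overload nor the ConfigFormat enum below exists in Ignite,
they only show the idea of a format-agnostic bootstrap.
{code:java}
// Hypothetical sketch only, not the actual Ignite API.
import java.nio.file.Path;

import org.apache.ignite.Ignite;
import org.jetbrains.annotations.Nullable;

interface BootstrapApi {
    /** Supported bootstrap configuration formats (illustrative). */
    enum ConfigFormat { JSON, HOCON }

    /** Current temporary variant: JSON-formatted string only. */
    Ignite start(@Nullable String jsonStrBootstrapCfg);

    /** Possible generic variant: bootstrap from a configuration file in one of the supported formats. */
    Ignite start(Path bootstrapCfgFile, ConfigFormat format);
}
{code}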




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-14586) Remove @SuppressWarnings when implementation provided.

2021-04-16 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-14586:


 Summary: Remove @SuppressWarnings when implementation provided.
 Key: IGNITE-14586
 URL: https://issues.apache.org/jira/browse/IGNITE-14586
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexander Lapin






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-14581) Implement IgniteImpl close method

2021-04-16 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-14581:


 Summary: Implement IgniteImpl close method
 Key: IGNITE-14581
 URL: https://issues.apache.org/jira/browse/IGNITE-14581
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexander Lapin
Assignee: Alexander Lapin






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-14580) Add exception handling logic to IgnitionProcessor

2021-04-16 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-14580:


 Summary: Add exception handling logic to IgnitionProcessor
 Key: IGNITE-14580
 URL: https://issues.apache.org/jira/browse/IGNITE-14580
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexander Lapin
Assignee: Alexander Lapin


Handle the absence of the Ignition service and throw a proper exception from the start method.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-14579) Start rest manager

2021-04-16 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-14579:


 Summary: Start rest manager
 Key: IGNITE-14579
 URL: https://issues.apache.org/jira/browse/IGNITE-14579
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexander Lapin
Assignee: Alexander Lapin


Update IgnitionImpl with code that will start the REST manager.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-14578) Bootstrap configuration manager with distributed configuration

2021-04-16 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-14578:


 Summary: Bootstrap configuration manager with distributed 
configuration
 Key: IGNITE-14578
 URL: https://issues.apache.org/jira/browse/IGNITE-14578
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexander Lapin
Assignee: Alexander Lapin


Update IgnitionImpl with code that will:
* Add distributed root keys.
* Bootstrap the distributed configuration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-14522) CommunicationManager refactoring

2021-04-12 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-14522:


 Summary:  CommunicationManager refactoring
 Key: IGNITE-14522
 URL: https://issues.apache.org/jira/browse/IGNITE-14522
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin
Assignee: Alexander Lapin






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Terms clarification and modules splitting logic

2021-03-26 Thread Alexander Lapin
Igniters,

It seems that within Ignite 3 we have some mess in terms like manager, cpu,
service, module, etc. Let's clarify this point. It would also be great to
discuss the rules for dividing code into modules.
I'll use the context of the Ignite cluster & node lifecycle thread
for terms definition and as an example source.

*Terms clarification.*

   - Component - a semantically consistent part of Ignite that in most cases
   will have a component-public but ignite-internal API and a lifecycle, somehow
   related to the lifecycle of a node or cluster. So, *structurally*
   TableManager, SchemaManager, AffinityManager, etc. are all components. For
   example, TableManager will have methods like createTable(), alterTable(),
   dropTable(), etc. and a lifecycle that will create listeners (aka
   DistributedMetastorage watches) on schema and affinity updates in order to
   create/drop raft servers for particular partitions that should be hosted on
   the local node. Components are lined up in a graph without cycles; for more
   details please see the Ignite cluster & node lifecycle thread mentioned
   above. A minimal sketch follows the list below.


   - Manager is the driving point of a component, with high-level lifecycle
   logic and API methods. My intention here is to agree on naming: should
   we use the term Manager, Processor or anything else?
   - Service is an entry point to some component/server or a group of
   components/servers. See RaftGroupService.java as an example.
   - Server, for example RaftServer, seems to be self-explanatory.
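
As a minimal illustration of the Component/Manager split above (all names and
signatures below are assumptions for discussion, not existing code):

{code:java}
// Illustrative sketch only: a component with a component-public API and a lifecycle.
import java.util.concurrent.CompletableFuture;

interface IgniteComponent {
    void start(); // E.g. register metastorage watches on schema and affinity updates.
    void stop();  // E.g. unregister watches and stop raft servers for local partitions.
}

/** The "Manager" is the driving point of the table component. */
class TableManager implements IgniteComponent {
    @Override public void start() { /* create watches */ }
    @Override public void stop()  { /* close watches, stop partition raft servers */ }

    // Component-public (but ignite-internal) API methods; signatures are illustrative.
    CompletableFuture<Void> createTable(String name) { return CompletableFuture.completedFuture(null); }
    CompletableFuture<Void> alterTable(String name)  { return CompletableFuture.completedFuture(null); }
    CompletableFuture<Void> dropTable(String name)   { return CompletableFuture.completedFuture(null); }
}
{code}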


*Dividing code into modules.*
It seems useful to introduce a restriction that a module should contain at
most one component. So, combining component-specific modules with the api,
lang, etc. modules, we will end up with something like the following:

   - affinity // To be created.
   - api [public]
   - baseline // To be created.
   - bytecode
   - cli
   - cli-common
   - configuration
   - configuration-annotation-processor
   - core // Module with classes like IgniteUuid. Should we rename it to
   lang/utils/commons?
   - metastorage-client // To be created.
   - metastorage-common // To be created.
   - metastorage-server // To be created.
   - network
   - raft // raft-server?
   - raft-client
   - rest
   - runner
   - schema
   - table // Seems that there might be a conflict between the meaning of
   table module that we already have and table module with create/dropTable()
   - vault

Also, it's not quite clear to me how we should split lang and util classes,
some of which belong to the public API and some to the private one.

Please share your thoughts about topics mentioned above.

Best regards,
Alexander


[jira] [Created] (IGNITE-14100) GridCachePartitionedNodeRestartTest fails due to wrong tx mapping

2021-01-28 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-14100:


 Summary: GridCachePartitionedNodeRestartTest fails due to wrong tx 
mapping
 Key: IGNITE-14100
 URL: https://issues.apache.org/jira/browse/IGNITE-14100
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin
Assignee: Alexander Lapin


Sometimes 
GridCachePartitionedNodeRestartTest#testRestartWithTxEightNodesTwoBackups hangs 
with the following AssertionError:


{code:java}
[18:06:54]W: [org.gridgain:ignite-core] java.lang.AssertionError: 
cacheId=-838655627, localNode = c2156a8f-1ac7-4098-8808-b1f9fad5, dhtNodes 
= [TcpDiscoveryNode [id=51df7fb1-2cad-4bfc-a02e-8225fbc0, 
consistentId=127.0.0.1:47500, addrs=ArrayList [127.0.0.1], sockAddrs=HashSet 
[/127.0.0.1:47500], discPort=47500, order=1, intOrder=1, 
lastExchangeTime=1603552013349, loc=false, ver=8.8.127#20201023-sha1:8f62caf2, 
isClient=false], TcpDiscoveryNode [id=c2156a8f-1ac7-4098-8808-b1f9fad5, 
consistentId=127.0.0.1:47504, addrs=ArrayList [127.0.0.1], sockAddrs=HashSet 
[/127.0.0.1:47504], discPort=47504, order=21, intOrder=13, 
lastExchangeTime=1603552013259, loc=true, ver=8.8.127#20201023-sha1:8f62caf2, 
isClient=false], TcpDiscoveryNode [id=f7355089-fdea-4a96-af57-00ebeb94, 
consistentId=127.0.0.1:47505, addrs=ArrayList [127.0.0.1], sockAddrs=HashSet 
[/127.0.0.1:47505], discPort=47505, order=22, intOrder=14, 
lastExchangeTime=1603552013390, loc=false, ver=8.8.127#20201023-sha1:8f62caf2, 
isClient=false], TcpDiscoveryNode [id=4e8a8c17-e1c7-43e2-8461-9b5348d3, 
consistentId=127.0.0.1:47503, addrs=ArrayList [127.0.0.1], sockAddrs=HashSet 
[/127.0.0.1:47503], discPort=47503, order=4, intOrder=4, 
lastExchangeTime=1603552013349, loc=false, ver=8.8.127#20201023-sha1:8f62caf2, 
isClient=false], TcpDiscoveryNode [id=d4f66e72-4ecc-4aca-a6d1-91242331, 
consistentId=127.0.0.1:47501, addrs=ArrayList [127.0.0.1], sockAddrs=HashSet 
[/127.0.0.1:47501], discPort=47501, order=2, intOrder=2, 
lastExchangeTime=1603552013349, loc=false, ver=8.8.127#20201023-sha1:8f62caf2, 
isClient=false]]

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.map(GridDhtTxPrepareFuture.java:1669)

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1355)

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:732)

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1125)

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.prepareAsync(GridDhtTxLocal.java:410)

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:591)

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:388)

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest0(IgniteTxHandler.java:197)

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest(IgniteTxHandler.java:174)

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.access$000(IgniteTxHandler.java:134)

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$1.apply(IgniteTxHandler.java:219)

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$1.apply(IgniteTxHandler.java:217)

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1142)

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:591)

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:392)

[18:06:54]W: [org.gridgain:ignite-core] at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:318

[jira] [Created] (IGNITE-14036) Tracing: add ability to trace compute operations.

2021-01-22 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-14036:


 Summary: Tracing: add ability to trace compute operations.
 Key: IGNITE-14036
 URL: https://issues.apache.org/jira/browse/IGNITE-14036
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin
Assignee: Alexander Lapin


After implementing the update, check whether removeAll() and clear() are traced
properly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13870) Remove GridCacheAdapter#validateCacheKey cause it seems to be obsolete

2020-12-16 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13870:


 Summary: Remove GridCacheAdapter#validateCacheKey cause it seems 
to be obsolete
 Key: IGNITE-13870
 URL: https://issues.apache.org/jira/browse/IGNITE-13870
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin
Assignee: Alexander Lapin






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13869) Add extra map query logging within the debug level.

2020-12-16 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13869:


 Summary: Add extra map query logging within the debug level.
 Key: IGNITE-13869
 URL: https://issues.apache.org/jira/browse/IGNITE-13869
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Alexander Lapin
Assignee: Alexander Lapin


For some cases, particularly partition pruning and affinity awareness checks,
it would be useful to log map queries on the recipient nodes when the debug log
level is enabled.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13868) Create caches simultaneously in same non-existing cache group led to NPE

2020-12-16 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13868:


 Summary: Create caches simultaneously in same non-existing cache 
group led to NPE
 Key: IGNITE-13868
 URL: https://issues.apache.org/jira/browse/IGNITE-13868
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin
Assignee: Alexander Lapin


1 server, 1 client.
Case:
The server creates a cache and a new cache group.
The client creates a cache in the previously created group (a minimal sketch of
the scenario follows).
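
The sketch below assumes the standard Ignite 2.x cache API; cache and group
names are illustrative, and the actual failure requires the two cache creations
to race rather than run sequentially.
{code:java}
// Illustrative sketch of the scenario; names are arbitrary, and the real issue
// needs the two getOrCreateCache() calls to happen concurrently.
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class CacheGroupNpeScenario {
    public static void main(String[] args) {
        Ignite server = Ignition.start(new IgniteConfiguration()
            .setIgniteInstanceName("server"));
        Ignite client = Ignition.start(new IgniteConfiguration()
            .setIgniteInstanceName("client")
            .setClientMode(true));

        // Server creates a cache in a new cache group...
        server.getOrCreateCache(new CacheConfiguration<Integer, Integer>("cache1")
            .setGroupName("grp"));

        // ...while the client creates another cache in the same group.
        client.getOrCreateCache(new CacheConfiguration<Integer, Integer>("cache2")
            .setGroupName("grp"));
    }
}
{code}
The failure manifests as the NullPointerException below.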
{code:java}
Caused by: java.lang.NullPointerException
at 
java.util.Collections$UnmodifiableCollection.(Collections.java:1026)
at java.util.Collections$UnmodifiableList.(Collections.java:1302)
at java.util.Collections.unmodifiableList(Collections.java:1287)
at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentV2.(GridAffinityAssignmentV2.java:108)
at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.initialize(GridAffinityAssignmentCache.java:205)
at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.init(GridAffinityAssignmentCache.java:750)
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$CacheGroupHolder.(CacheAffinitySharedManager.java:2635)
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$CacheGroupAffNodeHolder.(CacheAffinitySharedManager.java:2692)
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.initStartedGroup(CacheAffinitySharedManager.java:1339)
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.lambda$initAffinityOnCacheGroupsStart$641a38d$1(CacheAffinitySharedManager.java:1001)
at 
org.apache.ignite.internal.util.IgniteUtils.doInParallel(IgniteUtils.java:10575)
at 
org.apache.ignite.internal.util.IgniteUtils.doInParallel(IgniteUtils.java:10498)
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.initAffinityOnCacheGroupsStart(CacheAffinitySharedManager.java:997)
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processCacheStartRequests(CacheAffinitySharedManager.java:975)
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:833)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:1238)
... 5 more
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13867) IgniteCacheTxExpiryPolicyTest.testNearAccess almost stably fails in CE master

2020-12-16 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13867:


 Summary: IgniteCacheTxExpiryPolicyTest.testNearAccess almost 
stably fails in CE master
 Key: IGNITE-13867
 URL: https://issues.apache.org/jira/browse/IGNITE-13867
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin
Assignee: Alexander Lapin


[TC|https://ci.ignite.apache.org/project.html?testNameId=-5197950048275421802=IgniteTests24Java8=IgniteTests24Java8_CacheExpiryPolicy=testDetails_IgniteTests24Java8=ignite-2.9.1=true=true]
{code:java}
java.lang.AssertionError: Unexpected non-null value for grid 1 expected null, 
but was:<1>
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Removing MVCC public API

2020-12-15 Thread Alexander Lapin
+1

Best regards,
Alexander


Tue, 15 Dec 2020 at 10:59, Konstantin Orlov :

> +1
>
> --
> Regards,
> Konstantin Orlov
> Software Engineer, St. Petersburg
> +7 (921) 445-65-75
> https://www.gridgain.com
> Powered by Apache® Ignite™
>
>
>
>


Re: Tool for performance statistics reports

2020-12-14 Thread Alexander Lapin
OK from my side.
A few more details about the tracing SPI updates, based on the discussion with
Nikolay and Nikita mentioned above.

Tracing provides enough data for a performance profiling tool; actually
only root spans are required. However, according to Nikita,
root-span tracing has a 7-8% performance drop, compared to the 1-2%
performance drop of the performance profiling tool. That is the main reason to
have the given tool as is right now. In order to reuse the TracingSpi for the
profiling tool internals, a few modifications should be made to increase
tracing performance:

   - Add support for non-string tags and log points: primitives, etc.
   - Add the ability to postpone adding span tags and log points to the very
   end of span tree creation.
   - Probably some sort of tag caching could also help (a purely hypothetical
   sketch of the first two points follows).
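
None of the types or methods below exist in the current TracingSpi; this is only
a hypothetical sketch of what typed (non-string) tags and tag recording deferred
to the end of span tree creation might look like.

{code:java}
// Purely hypothetical sketch, not the existing TracingSpi API.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

final class SpanSketch {
    private final List<Runnable> deferredTags = new ArrayList<>();

    /** Typed tag: no Object-to-String conversion on the hot path. */
    void addTag(String key, long value) {
        deferredTags.add(() -> record(key, Long.toString(value)));
    }

    /** Lazily computed tag: evaluated only when the span tree is finished. */
    void addTag(String key, Supplier<String> lazyValue) {
        deferredTags.add(() -> record(key, lazyValue.get()));
    }

    /** Called once at the very end of span tree creation. */
    void end() {
        deferredTags.forEach(Runnable::run);
    }

    private void record(String key, String value) {
        // Hand off to the underlying tracing backend here.
    }
}
{code}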

Best regards,
Alexander

Mon, 14 Dec 2020 at 12:48, Nikolay Izhikov :

> Hello, Igniters.
>
> We discussed this feature privately with Alexander and Nikita.
> Here are the results we want to share with the community:
>
> 0. In the end, both the performance statistics tool and tracing should use the
> same API.
> 1. We should improve the Tracing API so it is able to be used for gathering
> information about all operations without a significant performance drop.
>
> I propose to go as follows:
>
> 1. Merge the current PR as is after the final review. My intention is to provide a
> tool for users that can be used in a real-world production environment.
> 2. Improve the Tracing API.
> 3. Combine both tools under the same API.
>
> > On 14 Dec 2020, at 10:42, Alexander Lapin wrote:
> >
> > Hello Igniters,
> >
> > Because the tracing causes performance drop 52% [4] and can not be
> >> used for collecting statistics about all queries in production
> >> deployments. The performance drop of the profiling tool is less than
> >> 2% and it can be used in production. I have benchmarked the tracing
> >> and got the results:
> >>
> >> -2% when configured OpenCensusTracingSpi and all scopes disabled
> >> -52% for TX scope (IgnitePutTxBenchmark)
> >> -58% for SQL scope  (IgniteSqlQueryBenchmark)
> >>
> >> Such a performance drop is significant to not use the tracing in
> >> production.
> >>
> > We've rerun tracing benchmarks based on more realistic scenarios and got a
> > 10-15% performance drop in the case of sampling-rate 1 (all transactions were
> > traced). More realistic scenarios means that we don't test tracing
> > performance with the system in an overdraft state, but add some sort of micro
> > throttling (1 millisecond) between operations (transactions in our case).
> > *IgnitePutTxBenchmark*
> >
> > Green: Case 1: NoopTracingSpi
> >
> > Blue: Case 2: OpenCensusTracingSpi (disabled)
> >
> > Red: Case 3: OpenCensusTracingSpi, --scope TX --sampling-rate 0.1
> >
> > Black: Case 5: *ControlCenter* + OpenCensusTracingSpi, --scope TX
> > --sampling-rate 0.1
> >
> > Violet: Case 4: OpenCensusTracingSpi, --scope TX --sampling-rate 1
> > Yellow: Case 6: ControlCenter + OpenCensusTracingSpi, --scope TX
> > --sampling-rate
> >
> > I have considered the possibility to reuse the tracing API. If
> >> statistics collecting will be implemented with the TracingSpi then we
> >> get a performance drop due to:
> >> - Transferring tracing context over the network.
> >> - Using ThreadLocal for spans
> >> - Converting primitives and objects to string and vice versa. (API
> >> supports only String-based tags and values)
> >> - Generating span objects
> >>
> > @Nikita Amelchev Could you please share numbers?
> >
> > Best regards,
> > Alexander
> >
> >> Mon, 7 Dec 2020 at 17:24, Nikolay Izhikov :
> >
> >> Hello, Nikita.
> >>
> >> Makes sense.
> >>
> >> I will take a look.
> >>
> >>> On 7 Dec 2020, at 15:29, Nikita Amelchev wrote:
> >>>
> >>> Hello, Igniters.
> >>>
> >>> I have implemented the profiling tool [1, 2]. It writes duration and
> >>> other parameters of user operations (scan, SQL query, transactions,
> >>> tasks, jobs, CQ, etc) to a local file. This info can be used in
> >>> various cases. The main goal is to build the performance report to
> >>> analyze the count and duration of user queries [3].
> >>>
> >>> We already have the tracing with similar functionality but I think
> >>> Ignite should have both tools - tracing and profiling.
> >>>
> >>> Because the tracing causes performa

Re: Tool for performance statistics reports

2020-12-13 Thread Alexander Lapin
Hello Igniters,

Because the tracing causes performance drop 52% [4] and can not be
> used for collecting statistics about all queries in production
> deployments. The performance drop of the profiling tool is less than
> 2% and it can be used in production. I have benchmarked the tracing
> and got the results:
>
>  -2% when configured OpenCensusTracingSpi and all scopes disabled
>  -52% for TX scope (IgnitePutTxBenchmark)
>  -58% for SQL scope  (IgniteSqlQueryBenchmark)
>
> Such a performance drop is significant to not use the tracing in
> production.
>
We've rerun tracing benchmarks based on more realistic scenarios and got a
10-15% performance drop in the case of sampling-rate 1 (all transactions were
traced). More realistic scenarios means that we don't test tracing
performance with the system in an overdraft state, but add some sort of micro
throttling (1 millisecond) between operations (transactions in our case).
*IgnitePutTxBenchmark*

Green: Case 1: NoopTracingSpi

Blue: Case 2: OpenCensusTracingSpi (disabled)

Red: Case 3: OpenCensusTracingSpi, --scope TX --sampling-rate 0.1

Black: Case 5: *ControlCenter* + OpenCensusTracingSpi, --scope TX
--sampling-rate 0.1

Violet: Case 4: OpenCensusTracingSpi, --scope TX --sampling-rate 1
Yellow: Case 6: ControlCenter + OpenCensusTracingSpi, --scope TX
--sampling-rate

I have considered the possibility to reuse the tracing API. If
> statistics collecting will be implemented with the TracingSpi then we
> get a performance drop due to:
>  - Transferring tracing context over the network.
>  - Using ThreadLocal for spans
>  - Converting primitives and objects to string and vice versa. (API
> supports only String-based tags and values)
>  - Generating span objects
>
@Nikita Amelchev Could you please share numbers?

Best regards,
Alexander

Mon, 7 Dec 2020 at 17:24, Nikolay Izhikov :

> Hello, Nikita.
>
> Makes sense.
>
> I will take a look.
>
> > On 7 Dec 2020, at 15:29, Nikita Amelchev wrote:
> >
> > Hello, Igniters.
> >
> > I have implemented the profiling tool [1, 2]. It writes duration and
> > other parameters of user operations (scan, SQL query, transactions,
> > tasks, jobs, CQ, etc) to a local file. This info can be used in
> > various cases. The main goal is to build the performance report to
> > analyze the count and duration of user queries [3].
> >
> > We already have the tracing with similar functionality but I think
> > Ignite should have both tools - tracing and profiling.
> >
> > Because the tracing causes performance drop 52% [4] and can not be
> > used for collecting statistics about all queries in production
> > deployments. The performance drop of the profiling tool is less than
> > 2% and it can be used in production. I have benchmarked the tracing
> > and got the results:
> >
> > -2% when configured OpenCensusTracingSpi and all scopes disabled
> > -52% for TX scope (IgnitePutTxBenchmark)
> > -58% for SQL scope  (IgniteSqlQueryBenchmark)
> >
> > Such a performance drop is significant to not use the tracing in
> production.
> >
> > I have considered the possibility to reuse the tracing API. If
> > statistics collecting will be implemented with the TracingSpi then we
> > get a performance drop due to:
> > - Transferring tracing context over the network.
> > - Using ThreadLocal for spans
> > - Converting primitives and objects to string and vice versa. (API
> > supports only String-based tags and values)
> > - Generating span objects
> >
> > I have benchmarked implementations on the yardstick’s
> > IgniteGetBenchmark. The tracing context transferring over the network
> > was disabled. The results:
> > - Tracing API implementation - 8% performance drop.
> > - Proposed implementation - 2% performance drop.
> >
> > I think this is a significant drop and the implementation that reuses the
> > tracing API should not be used. The cluster profiling should have as
> > little performance drop as possible to be used in production. The
> > tracing will be used for detailed investigation.
> >
> > WDYT?
> >
> > The tool is ready to be reviewed [3, 5].
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-12666
> > [2]
> https://cwiki.apache.org/confluence/display/IGNITE/Cluster+performance+profiling+tool
> > [3] https://github.com/apache/ignite-extensions/pull/16
> > [4]
> https://issues.apache.org/jira/secure/attachment/13016636/Tracing%20benchmarks.docx
> > [5] https://github.com/apache/ignite/pull/7693
> >
> > Wed, 24 Jun 2020 at 23:31, Saikat Maitra :
> >>
> >> Hi Nikita,
> >>
> >> The changes in this PR looks good.
> >>
> >> https://github.com/apache/ignite-extensions/pull/16
> >>
> >> Regards,
> >> Saikat
> >>
> >> On Mon, Jun 22, 2020 at 12:03 PM Nikolay Izhikov 
> >> wrote:
> >>
> >>> Hello, Igniters.
> >>>
> >>> I think that inside Ignite core we should name this feature as
> >>> «performance statistics»
> >>> We already have «cache statistics».
> >>> Data that is collected by performance statistics can be used not only
> for
> >>> profiling but to 

[jira] [Created] (IGNITE-13640) Opencensus module has no runtime dependencies.

2020-10-29 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13640:


 Summary: Opencensus module has no runtime dependencies.
 Key: IGNITE-13640
 URL: https://issues.apache.org/jira/browse/IGNITE-13640
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin
Assignee: Alexander Lapin
 Fix For: 2.9.1


Need to add the following dependencies to pom.xml:
 * io.opencensus.opencensus-impl-core - version 0.22.0
 * io.grpc.grpc-context - version 1.19.0
 * com.lmax.disruptor - version 3.4.2
 * com.google.guava.guava - version 26.0-android



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13393) Tracing: Atomic cache read/write flow.

2020-08-30 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13393:


 Summary: Tracing: Atomic cache read/write flow.
 Key: IGNITE-13393
 URL: https://issues.apache.org/jira/browse/IGNITE-13393
 Project: Ignite
  Issue Type: New Feature
Reporter: Alexander Lapin
Assignee: Alexander Lapin


Implement tracing for atomic cache operations:
 * put
 * putAll
 * putAsync
 * putAllAsync
 * remove
 * removeAll
 * removeAsync
 * removeAllAsync
 * get
 * getAll
 * getAsync
 * getAllAsync

Also add the ability to include root cache read/write operations in the tx tracing flow.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13375) Operations started on client nodes are not traced.

2020-08-19 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13375:


 Summary: Operations started on client nodes are not traced.
 Key: IGNITE-13375
 URL: https://issues.apache.org/jira/browse/IGNITE-13375
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin
Assignee: Alexander Lapin
 Fix For: 2.10


Root spans should be created on a client node the same way it's done on a server
node. I suppose that, in addition to the already existing node join span creation,
we should add:
 * addMessage
 * node stop
 * custom event



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13374) Initial PME hangs because of multiple blinking nodes

2020-08-19 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13374:


 Summary: Initial PME hangs because of multiple blinking nodes
 Key: IGNITE-13374
 URL: https://issues.apache.org/jira/browse/IGNITE-13374
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin
Assignee: Alexander Lapin
 Fix For: 2.10


The *root cause* of the issue is a race inside GridDhtPartitionsExchangeFuture on
the client side between two processes:
 # When the old coordinator fails and the new one takes over, it sends
GridDhtPartitionsSingleRequest messages to all nodes, including clients, to
restore the exchange results. Processing this message on a client includes
updating the current coordinator reference (the crd field).

 # When the future receives a discovery notification about the old coordinator
failure, it should detect the change of coordinator and send a
GridDhtPartitionsSingleMessage to the new coordinator to obtain affinity. But the
already updated crd field prevents the client from detecting the coordinator
failure and sending the SingleMessage to the new coordinator, which in turn leads
to a hanging client.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13373) WAL segments are not released on releaseHistoryForPreloading()

2020-08-19 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13373:


 Summary: WAL segments are not released on 
releaseHistoryForPreloading()
 Key: IGNITE-13373
 URL: https://issues.apache.org/jira/browse/IGNITE-13373
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin
Assignee: Alexander Lapin


* Reserve/releaseHistoryForPreloading() was reworked: now we store the oldest 
WALPointer that was reserved during reserveHistoryForPreloading() in the 
reservedForPreloading field. As a result, it's possible to release the WAL 
reservation on releaseHistoryForPreloading().

 * searchAndReserveCheckpoints() was slightly refactored: now it returns not 
only the earliestValidCheckpoints but also the oldest reservedCheckpoint, so that 
there's no need to recalculate it within reserveHistoryForExchange().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13251) Deadlock between grid-timeout-worker and a thread opening a communication connection.

2020-07-13 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13251:


 Summary: Deadlock between grid-timeout-worker and a thread opening 
a communication connection.
 Key: IGNITE-13251
 URL: https://issues.apache.org/jira/browse/IGNITE-13251
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin


grid-timeout-worker is known to go into a deadlock state with other threads in a 
few scenarios.

The general scheme is:
1. A thread `T` is holding lock `L` and is trying to establish a communication 
connection, hanging in `safeTcpHandshake` method. Due to the logic of 
`safeTcpHandshake`, `grid-timeout-worker` needs to send a signal to `T` in 
order for it to proceed.

2. `grid-timeout-worker` is trying to acquire `L`. Hence, the deadlock.

It may include more threads. The lock `L` can be different: checkpoint lock, 
GridCacheMapEntry lock, etc.

Example:

The below example shows a lock between:
 * `grid-timeout-worker` trying to acquire a cp read lock in `dumpLongRunningTransactions`
 * `tcp-comm-worker` trying to establish a connection but hanging on a socket read due to an unstable network
 * the checkpointer trying to start a checkpoint and acquire the cp write lock
 * a `utility` worker waiting for the connection to be established by `tcp-comm-worker` while holding a cp read lock

{code:java}
Thread [name="grid-timeout-worker-#23", id=42, state=WAITING, blockCnt=6991, 
waitCnt=1746467]
Lock 
[object=java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@4545a8b9, 
ownerName=null, ownerId=-1]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
at 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
at 
o.a.i.i.processors.cache.persistence.GridCacheDatabaseSharedManager.checkpointReadLock(GridCacheDatabaseSharedManager.java:1707)
at 
o.a.i.i.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.setResult(GridPartitionedSingleGetFuture.java:715)
at 
o.a.i.i.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.localGet(GridPartitionedSingleGetFuture.java:511)
at 
o.a.i.i.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.tryLocalGet(GridPartitionedSingleGetFuture.java:399)
at 
o.a.i.i.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.mapKeyToNode(GridPartitionedSingleGetFuture.java:366)
at 
o.a.i.i.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.map(GridPartitionedSingleGetFuture.java:243)
at 
o.a.i.i.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.init(GridPartitionedSingleGetFuture.java:232)
at 
o.a.i.i.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.getAsync(GridDhtColocatedCache.java:246)
at 
o.a.i.i.processors.cache.GridCacheAdapter.get0(GridCacheAdapter.java:4190)
at 
o.a.i.i.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4171)
at 
o.a.i.i.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1362)
at 
o.a.i.i.processors.task.GridTaskProcessor.saveTaskMetadata(GridTaskProcessor.java:908)
at 
o.a.i.i.processors.task.GridTaskProcessor.startTask(GridTaskProcessor.java:746)
at 
o.a.i.i.processors.task.GridTaskProcessor.execute(GridTaskProcessor.java:477)
at 
o.a.i.i.processors.closure.GridClosureProcessor.callAsync(GridClosureProcessor.java:674)
at 
o.a.i.i.processors.closure.GridClosureProcessor.callAsync(GridClosureProcessor.java:479)
at o.a.i.i.IgniteComputeImpl.callAsync0(IgniteComputeImpl.java:809)
at o.a.i.i.IgniteComputeImpl.callAsync(IgniteComputeImpl.java:794)
at 
o.a.i.i.processors.cache.GridCachePartitionExchangeManager.dumpLongRunningTransaction(GridCachePartitionExchangeManager.java:2115)
at 
o.a.i.i.processors.cache.GridCachePartitionExchangeManager.dumpLongRunningOperations0(GridCachePartitionExchangeManager.java:2012)
- locked 
o.a.i.i.processors.cache.GridCachePartitionExchangeManager$ActionLimiter@47b86e8f
at 
o.a.i.i.processors.cache.GridCachePartitionExchangeManager.dumpLongRunningOperations(GridCachePartitionExchangeManager.java:2180)
at o.a.i.i.IgniteKernal$4.run(IgniteKernal.java:1478)
at 
o.a.i.i.processors.timeout.GridTimeoutProcessor$CancelableTask.onTimeout(GridTimeoutProcessor.java:410)
- locked 
o.a.i.i.processors.timeout.GridTimeoutProcessor$CancelableTas

[jira] [Created] (IGNITE-13238) Instant not supported within the scope of jdbc thin.

2020-07-10 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13238:


 Summary: Instant not supported within the scope of jdbc thin.
 Key: IGNITE-13238
 URL: https://issues.apache.org/jira/browse/IGNITE-13238
 Project: Ignite
  Issue Type: Bug
  Components: jdbc
Affects Versions: 2.9
Reporter: Alexander Lapin


Reproducer: 
org.apache.ignite.jdbc.thin.JdbcThinCacheToJdbcDataTypesCoverageTest#testInstantDataType

 
{code:java}
[2020-07-10 
09:12:36,183][ERROR][client-connector-#196%thin.JdbcThinCacheToJdbcDataTypesCoverageTest0%][JdbcRequestHandler]
 Failed to execute SQL query [reqId=11, req=JdbcQueryExecuteRequest 
[schemaName=cachebb4b3282_3ad0_4ee3_9959_806c33521406, pageSize=1024, 
maxRows=0, sqlQry=SELECT * FROM tablebb4b3282_3ad0_4ee3_9959_806c33521406 WHERE 
_key = ?, args=Object[] [-292275055-05-16T16:47:04.192Z], 
stmtType=SELECT_STATEMENT_TYPE, autoCommit=true, partResReq=false, 
super=JdbcRequest [type=2, reqId=11]]]
javax.cache.CacheException: Failed to calculate derived partitions for query.
at 
org.apache.ignite.internal.sql.optimizer.affinity.PartitionResult.calculatePartitions(PartitionResult.java:132)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSelectDistributed(IgniteH2Indexing.java:1682)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSelect0(IgniteH2Indexing.java:1412)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSelect(IgniteH2Indexing.java:1291)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1128)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2779)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2775)
at 
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:3338)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.lambda$querySqlFields$2(GridQueryProcessor.java:2795)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(GridQueryProcessor.java:2833)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2769)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2727)
at 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeQuery(JdbcRequestHandler.java:647)
at 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.doHandle(JdbcRequestHandler.java:320)
at 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:257)
at 
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:200)
at 
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:54)
at 
org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at 
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at 
org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at 
org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to wrap 
value[type=24, value=-292275055-05-16T16:47:04.192Z]
at 
org.apache.ignite.internal.processors.query.h2.H2Utils.wrap(H2Utils.java:668)
at 
org.apache.ignite.internal.processors.query.h2.H2Utils.convert(H2Utils.java:520)
at 
org.apache.ignite.internal.processors.query.h2.affinity.H2PartitionResolver.partition(H2PartitionResolver.java:43)
at 
org.apache.ignite.internal.sql.optimizer.affinity.PartitionParameterNode.applySingle(PartitionParameterNode.java:72)
at 
org.apache.ignite.internal.sql.optimizer.affinity.PartitionSingleNode.apply(PartitionSingleNode.java:47)
at 
org.apache.ignite.internal.sql.optimizer.affinity.PartitionResult.calculatePartitions(PartitionResult.java:114)
... 25 more
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13236) Rebalance doesn't happen after restart if node failed at the moment of rebalance

2020-07-09 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13236:


 Summary: Rebalance doesn't happen after restart if node failed at 
the moment of rebalance
 Key: IGNITE-13236
 URL: https://issues.apache.org/jira/browse/IGNITE-13236
 Project: Ignite
  Issue Type: Bug
 Environment: !Screen Shot 2019-08-05 at 18.20.46.png!
Reporter: Alexander Lapin
 Attachments: Screen Shot 2019-08-05 at 18.20.46.png, 
image-2020-07-09-14-55-51-574.png

Steps to reproduce:
1. Start 2 nodes in baseline
2. Stream a lot of data.
3. Stop the second node, clean the persistence and start it again (with the same 
consistent id).
4. Kill the first node at the moment of rebalancing (make sure that not all 
data was rebalanced)
5. Partition loss will be detected.
6. Return first node to the cluster.
7. Result:
!image-2020-07-09-14-55-51-574.png!


Data is not rebalanced. All data (primary and backup) is stored on the first 
node only, while the second node has the same small subset of the data. It's 
reproducible with any partition loss policy. Resetting lost partitions and 
activation/deactivation don't help.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13233) Invalid milliseconds value of java.util.Date converted to java.sql.Timestamp via JDBC Thin.

2020-07-09 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13233:


 Summary: Invalid milliseconds value of java.util.Date converted to 
java.sql.Timestamp via JDBC Thin.
 Key: IGNITE-13233
 URL: https://issues.apache.org/jira/browse/IGNITE-13233
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin


In some cases, for example new java.util.Date(Long.MAX_VALUE), the milliseconds 
part of the java.sql.Timestamp retrieved via JDBC has a different value than the 
java.util.Date previously put via the cache API:
Date.getTime(): 9223372036854775*807*
Timestamp.getTime(): 9223372036854775*191*

Reproducer 
org.apache.ignite.jdbc.thin.JdbcThinCacheToJdbcDataTypesCoverageTest#testDateDataType
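
For illustration, a minimal standalone sketch of the value mismatch (this is not
the actual reproducer, which is the JdbcThin test referenced above):
{code:java}
// Shows that a Timestamp built locally from the same instant keeps the millis,
// while the value read back via JDBC thin for the cached Date ends in ...775191.
import java.sql.Timestamp;
import java.util.Date;

public class DateVsTimestampMismatch {
    public static void main(String[] args) {
        Date d = new Date(Long.MAX_VALUE);

        System.out.println("Date.getTime()      = " + d.getTime()); // ...775807
        System.out.println("Timestamp.getTime() = " + new Timestamp(d.getTime()).getTime()); // ...775807

        // When the same Date is put via the cache API and read back through the
        // JDBC thin driver as a java.sql.Timestamp, getTime() returns ...775191,
        // i.e. the milliseconds part is altered during the conversion.
    }
}
{code}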



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13232) JDBC thin: it's not possible to use array of objects as value in where _key=

2020-07-09 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13232:


 Summary: JDBC thin: it's not possible to use array of objects as 
value in where _key=
 Key: IGNITE-13232
 URL: https://issues.apache.org/jira/browse/IGNITE-13232
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin


Reproducers:

org.apache.ignite.jdbc.thin.JdbcThinCacheToJdbcDataTypesCoverageTest#testObjectArrayDataType



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13231) Unable to use character in where _key= clause via JDBC Thin, if previously putted within cache API.

2020-07-09 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13231:


 Summary: Unable to use character in where _key= clause 
via JDBC Thin, if previously putted within cache API.
 Key: IGNITE-13231
 URL: https://issues.apache.org/jira/browse/IGNITE-13231
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin


Unable to use a character value in a where _key= clause via JDBC Thin, if it was 
previously put via the cache API.

The following exception is thrown:

java.sql.SQLException: Failed to parse query. Hexadecimal string with odd number 
of characters: "a"; SQL statement: SELECT * FROM 
tableefa88adf_7a52_4cac_9a2b_253c4880a369 WHERE _key = 'a' [90003-199]

Reproducer: 
org.apache.ignite.jdbc.thin.JdbcThinCacheToJdbcDataTypesCoverageTest#testCharacterDataType



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSSION] Tracing: IGNITE-13060

2020-06-30 Thread Alexander Lapin
Hello Igniters,

I'd like to discuss with you and then donate changes related to IGNITE-13060.

In brief, it's an initial tracing implementation that allows tracing
Communication, Exchange, Discovery and Transactions. The SPI concept is used,
with OpenCensus as one of the implementations. For more details about the tracing
engine, tracing configuration, etc. please see IEP-48.

Best regards,
Alexander


[jira] [Created] (IGNITE-13060) Tracing: initial implementation

2020-05-22 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-13060:


 Summary: Tracing: initial implementation
 Key: IGNITE-13060
 URL: https://issues.apache.org/jira/browse/IGNITE-13060
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexander Lapin
Assignee: Alexander Lapin


Initial tracing implementation. See 
[IEP-48|https://cwiki.apache.org/confluence/display/IGNITE/IEP-48%3A+Tracing] 
for details.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12326) JDBC thin returns java.util.Date instead of java.sql.Date if input values, putted to cache via cache API are LocalDate or java.sql.Date.

2019-10-23 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-12326:


 Summary: JDBC thin returns java.util.Date instead of java.sql.Date 
if input values, putted to cache via cache API are  LocalDate or java.sql.Date.
 Key: IGNITE-12326
 URL: https://issues.apache.org/jira/browse/IGNITE-12326
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin
 Fix For: 2.8


JDBC thin returns java.util.Date instead of java.sql.Date if the input values 
put into the cache via the cache API are LocalDate or java.sql.Date.

Reproducers:
org.apache.ignite.jdbc.thin.JdbcThinCacheToJdbcDataTypesCoverageTest#testSqlDateDataType
org.apache.ignite.jdbc.thin.JdbcThinCacheToJdbcDataTypesCoverageTest#testLocalDateDataType



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12322) Changing baseline via set command may cause NPEs if configured NodeFilter takes node attributes into account

2019-10-22 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-12322:


 Summary: Changing baseline via set command may cause NPEs if 
configured NodeFilter takes node attributes into account
 Key: IGNITE-12322
 URL: https://issues.apache.org/jira/browse/IGNITE-12322
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin
 Fix For: 2.8


VisorBaselineTask doesn't allow add() of an offline baseline node, but allows 
set() of a collection of nodes where at least one is offline and doesn't belong to 
the current BLT.
We should prohibit passing offline nodes to setBaselineTopology(…) (in case 
they are not part of the current BLT): otherwise we won't be able to calculate 
affinity when a NodeFilter is configured.
{code:java}
2019-07-16 13:38:01.658 ERROR 16507 --- [exchange-worker-#165] 
.c.d.d.p.GridDhtPartitionsExchangeFuture : Failed to reinitialize local 
partitions (rebalancing will be stopped): GridDhtPartitionExchangeId 
[topVer=AffinityTopologyVersion [topVer=481, minorTopVer=1], 
discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode 
[id=e53b5aca-9432-4cbe-9626-da480b86d417, addrs=ArrayList [10.12.85.13, 
127.0.0.1], sockAddrs=HashSet [lpposput50143.phx.aexp.com/10.12.85.13:47550, 
/127.0.0.1:47550], discPort=47550, order=288, intOrder=148, 
lastExchangeTime=1563309481590, loc=true, ver=2.5.7#20190326-sha1:b45b8438, 
isClient=false], topVer=481, nodeId8=e53b5aca, msg=null, 
type=DISCOVERY_CUSTOM_EVT, tstamp=1563309481580]DiscoveryCustomEvent 
[customMsg=ChangeGlobalStateMessage 
[id=bb76b94db61-f4fff892-04f6-4153-9210-1a19749fec35, 
reqId=1b5cae87-ad46-4d62-8525-bc1a5015b0d8, 
initiatingNodeId=e53b5aca-9432-4cbe-9626-da480b86d417, activate=true, 
baselineTopology=BaselineTopology [id=3, branchingHash=-1403071463, 
branchingType='New BaselineTopology', 
baselineNodes=[/dev/shm/ignite/storagepath/lpposput50142.phx.aexp.com, 
/dev/shm/ignite/storagepath/lpposput50143.phx.aexp.com, 
/dev/shm/ignite/storagepath/lpposput50133.phx.aexp.com, 
/dev/shm/ignite/storagepath/lpposput50134.phx.aexp.com, 
/dev/shm/ignite/storagepath/lpposput50141.phx.aexp.com, 
/dev/shm/ignite/storagepath/lpposput50140.phx.aexp.com]], 
forceChangeBaselineTopology=true, timestamp=1563309481551], 
affTopVer=AffinityTopologyVersion [topVer=481, minorTopVer=1], super=], 
nodeId=e53b5aca, evt=DISCOVERY_CUSTOM_EVT]
java.lang.NullPointerException: null
at 
org.apache.ignite.internal.cluster.DetachedClusterNode.attribute(DetachedClusterNode.java:69)
 ~[ignite-core-2.5.7.jar!/:2.5.7]
at com.aexp.rc.ignite.CacheNodeFilter.apply(CacheNodeFilter.java:14) 
~[classes!/:0.1.0-SNAPSHOT]
at com.aexp.rc.ignite.CacheNodeFilter.apply(CacheNodeFilter.java:6) 
~[classes!/:0.1.0-SNAPSHOT]
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.affinityNode(GridCacheUtils.java:1362)
 ~[ignite-core-2.5.7.jar!/:2.5.7]
at 
org.apache.ignite.internal.processors.cluster.BaselineTopology.createBaselineView(BaselineTopology.java:331)
 ~[ignite-core-2.5.7.jar!/:2.5.7]
at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.calculate(GridAffinityAssignmentCache.java:347)
 ~[ignite-core-2.5.7.jar!/:2.5.7]
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$15.applyx(CacheAffinitySharedManager.java:1899)
 ~[ignite-core-2.5.7.jar!/:2.5.7]
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$15.applyx(CacheAffinitySharedManager.java:1895)
 ~[ignite-core-2.5.7.jar!/:2.5.7]
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.lambda$forAllRegisteredCacheGroups$e0a6939d$1(CacheAffinitySharedManager.java:1268)
 ~[ignite-core-2.5.7.jar!/:2.5.7]
at 
org.apache.ignite.internal.util.IgniteUtils.lambda$null$1(IgniteUtils.java:10529)
 ~[ignite-core-2.5.7.jar!/:2.5.7]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_161]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[na:1.8.0_161]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[na:1.8.0_161]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_161]
Suppressed: java.lang.NullPointerException: null
... 14 common frames omitted
Suppressed: java.lang.NullPointerException: null
... 14 common frames omitted
Suppressed: java.lang.NullPointerException: null
... 14 common frames omitted
Suppressed: java.lang.NullPointerException: null
... 14 common frames omitted
Suppressed: java.lang.NullPointerException: null
... 14 common frames omitted
Suppressed: java.lang.NullPointerException: null
at 
org.apache.ignite.internal.cluster.DetachedClusterNode.attribute(DetachedClusterNode.java:69)
 ~[ignite-core-2.5.7.jar

[jira] [Created] (IGNITE-12321) Rebalance doesn't happen after restart if node failed at the moment of rebalance.

2019-10-22 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-12321:


 Summary: Rebalance doesn't happen after restart if node failed at 
the moment of rebalance.
 Key: IGNITE-12321
 URL: https://issues.apache.org/jira/browse/IGNITE-12321
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin
 Fix For: 2.8


Steps to reproduce:
1. Start 2 nodes in baseline
2. Stream a lot of data.
3. Stop the second node, clean the persistence and start it again (with the same
consistent id).
4. Kill the first node at the moment of rebalancing (make sure that not all 
data was rebalanced)
5. Partition loss will be detected.
6. Return first node to the cluster.
7. Result:

Data is not rebalanced. All data (primary and backup) is stored on the first
node only, while the second node still has the same small subset of the data. It's
reproducible with any partition loss policy. Resetting lost partitions and
activation/deactivation don't help.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12320) Partial index rebuild fails in case indexed cache contains different datatypes

2019-10-22 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-12320:


 Summary: Partial index rebuild fails in case indexed cache 
contains different datatypes
 Key: IGNITE-12320
 URL: https://issues.apache.org/jira/browse/IGNITE-12320
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin
 Fix For: 2.8


The problem is that in case the cache contains different datatypes, all of them
will be passed to IndexRebuildPartialClosure during iteration over a partition.
Perhaps TableCacheFilter is supposed to filter out entries of unwanted types,
but it doesn't work properly.
Steps to reproduce:
1. Add entries of different types (both indexed and not) to cache
2. Trigger partial index rebuild
Index rebuild will fail with the following error:
{code:java}
[2019-08-20 
00:33:55,640][ERROR][pub-#302%h2.GridIndexFullRebuildTest3%][IgniteTestResources]
 Critical system error detected. Will be handled accordingly to configured 
handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
[ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, 
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
[type=CRITICAL_ERROR, err=class 
o.a.i.i.processors.cache.persistence.tree.CorruptedTreeException: B+Tree is 
corrupted [pages(groupId, pageId)=[IgniteBiTuple [val1=98629247, 
val2=844420635165670]], msg=Runtime failure on row: %s ]]]
class 
org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
 B+Tree is corrupted [pages(groupId, pageId)=[IgniteBiTuple [val1=98629247, 
val2=844420635165670]], msg=Runtime failure on row: %s ]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.corruptedTreeException(BPlusTree.java:5126)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2236)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putx(BPlusTree.java:2183)
at 
org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.putx(H2TreeIndex.java:285)
at 
org.apache.ignite.internal.processors.query.h2.IndexRebuildPartialClosure.apply(IndexRebuildPartialClosure.java:49)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.updateIndex(GridCacheMapEntry.java:3867)
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorImpl.processKey(SchemaIndexCacheVisitorImpl.java:254)
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorImpl.processPartition(SchemaIndexCacheVisitorImpl.java:217)
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorImpl.processPartitions(SchemaIndexCacheVisitorImpl.java:176)
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorImpl.visit(SchemaIndexCacheVisitorImpl.java:135)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.rebuildIndexesFromHash0(IgniteH2Indexing.java:2191)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$7.body(IgniteH2Indexing.java:2154)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.binary.BinaryObjectException: Failed to get 
field because type ID of passed object differs from type ID this BinaryField 
belongs to [expected=-635374417, actual=1778229603]
at 
org.apache.ignite.internal.binary.BinaryFieldImpl.fieldOrder(BinaryFieldImpl.java:287)
at 
org.apache.ignite.internal.binary.BinaryFieldImpl.value(BinaryFieldImpl.java:109)
at 
org.apache.ignite.internal.processors.query.property.QueryBinaryProperty.fieldValue(QueryBinaryProperty.java:220)
at 
org.apache.ignite.internal.processors.query.property.QueryBinaryProperty.value(QueryBinaryProperty.java:116)
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2RowDescriptor.columnValue(GridH2RowDescriptor.java:331)
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2KeyValueRowOnheap.getValue0(GridH2KeyValueRowOnheap.java:122)
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2KeyValueRowOnheap.getValue(GridH2KeyValueRowOnheap.java:106)
at 
org.apache.ignite.internal.processors.query.h2.database.H2Tree.compare(H2Tree.java:350)
at 
org.apache.ignite.internal.processors.query.h2.database.H2Tree.compare(H2Tree.java:56)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.compare(BPlusTree.java:4614)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findInsertionPoint(BPlusTree.java:4534

[jira] [Created] (IGNITE-12315) In case of using ArrayBlockingQueue as key, cache.get() returns null.

2019-10-21 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-12315:


 Summary: In case of using ArrayBlockingQueue as key, cache.get() 
returns null.
 Key: IGNITE-12315
 URL: https://issues.apache.org/jira/browse/IGNITE-12315
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin


In case of using ArrayBlockingQueue as a key, cache.get() returns null. In case
of using ArrayList or LinkedList, everything works as expected.
{code:java}
ArrayBlockingQueue<Integer> queueToCheck = new ArrayBlockingQueue<>(5);
queueToCheck.addAll(Arrays.asList(1, 2, 3));

cache.put(queueToCheck, "aaa");
cache.get(queueToCheck); // returns null
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12314) Unexpected return type in case of retrieving Byte[]{1,2,3} from cache value.

2019-10-21 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-12314:


 Summary: Unexpected return type in case of retrieving 
Byte[]{1,2,3} from cache value.
 Key: IGNITE-12314
 URL: https://issues.apache.org/jira/browse/IGNITE-12314
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin


Unexpected return type in case of retrieving Byte[]{1,2,3} from a cache value:

cache.put("aaa", new Byte[] {1, 2, 3});
cache.get("aaa");

Byte[3]@... is expected with corresponding content, however Object[3]@... is returned.

Seems that it's related to primitive wrappers, because String[] as a value works as
expected:

cache.put("aaa", new String[] {"1", "2", "3"});
cache.get("aaa");

Arrays of primitives also work as expected:

cache.put("aaa", new byte[] {1, 2, 3});
cache.get("aaa");



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12313) Unable to update value via sql update query if a key is a byte array within non-mvcc mode.

2019-10-21 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-12313:


 Summary: Unable to update value via sql update query if a key is a 
byte array within non-mvcc mode.
 Key: IGNITE-12313
 URL: https://issues.apache.org/jira/browse/IGNITE-12313
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin


{code:java}
javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: [B 
cannot be cast to java.lang.Comparable
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(GridQueryProcessor.java:2350)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2283)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2210)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2183)
at 
org.apache.ignite.sqltests.SqlDataTypesCoverageTests.checkCRUD(SqlDataTypesCoverageTests.java:381)
at 
org.apache.ignite.sqltests.SqlDataTypesCoverageTests.checkBasicSqlOperations(SqlDataTypesCoverageTests.java:335)
at 
org.apache.ignite.sqltests.SqlDataTypesCoverageTests.testBinaryDataType(SqlDataTypesCoverageTests.java:269)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2075)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteCheckedException: [B cannot be cast to 
java.lang.Comparable
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2828)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.lambda$querySqlFields$1(GridQueryProcessor.java:2309)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(GridQueryProcessor.java:2347)
... 17 more
Caused by: java.lang.ClassCastException: [B cannot be cast to 
java.lang.Comparable
at 
org.apache.ignite.internal.binary.BinaryObjectImpl.compareForDml(BinaryObjectImpl.java:863)
at 
org.apache.ignite.internal.processors.query.h2.dml.DmlBatchSender$BatchEntryComparator.compare(DmlBatchSender.java:423)
at java.util.TreeMap.compare(TreeMap.java:1295)
at java.util.TreeMap.put(TreeMap.java:538)
at 
org.apache.ignite.internal.processors.query.h2.dml.DmlBatchSender$Batch.put(DmlBatchSender.java:368)
at 
org.apache.ignite.internal.processors.query.h2.dml.DmlBatchSender.add(DmlBatchSender.java:118)
at 
org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.doUpdate(DmlUtils.java:252)
at 
org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.processSelectResult(DmlUtils.java:168)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateNonTransactional(IgniteH2Indexing.java:2765)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdate(IgniteH2Indexing.java:2625)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateDistributed(IgniteH2Indexing.java:2555)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeDml(IgniteH2Indexing.java:1167)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1093)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$3.applyx(GridQueryProcessor.java:2293)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$3.applyx(GridQueryProcessor.java:2289)
at 
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:35)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2805)
... 19 more
{code}
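
A hedged repro sketch of the scenario (the table, column and cache names below are
illustrative, not taken from the referenced test):

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class ByteArrayKeyUpdateSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Any cache works as an entry point for running SQL statements.
            CacheConfiguration<Integer, Integer> ccfg = new CacheConfiguration<>("sqlEntryPoint");
            IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(ccfg);

            cache.query(new SqlFieldsQuery(
                "CREATE TABLE bin_tab (id BINARY(8) PRIMARY KEY, val INT)")).getAll();

            cache.query(new SqlFieldsQuery("INSERT INTO bin_tab (id, val) VALUES (?, ?)")
                .setArgs(new byte[] {1, 2, 3}, 1)).getAll();

            // Expected to fail in non-MVCC mode with
            // "[B cannot be cast to java.lang.Comparable":
            cache.query(new SqlFieldsQuery("UPDATE bin_tab SET val = ? WHERE id = ?")
                .setArgs(2, new byte[] {1, 2, 3})).getAll();
        }
    }
}
{code}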



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12312) JDBC thin: it's not possible to use LocalDate/LocalTime/LocalDateTime as value in where _key=.

2019-10-21 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-12312:


 Summary: JDBC thin: it's not possible to use 
LocalDate/LocalTime/LocalDateTime as value in where _key=.
 Key: IGNITE-12312
 URL: https://issues.apache.org/jira/browse/IGNITE-12312
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin


JDBC thin: it's not possible to use LocalDateTime (converted internally to
java.sql.Timestamp) as a value in "where _key = ...".
In case of the following query

stmt.executeQuery("SELECT * FROM " + tableName +
    " where _key >= PARSEDATETIME('2019-09-05 17:43:01.144', 'yyyy-MM-dd HH:mm:ss.SSS')" +
    " and _key <= PARSEDATETIME('2019-09-05 17:43:01.144', 'yyyy-MM-dd HH:mm:ss.SSS')");

the expected row would be returned, however in case of

stmt.executeQuery("SELECT * FROM " + tableName +
    " where _key = PARSEDATETIME('2019-09-05 17:43:01.144', 'yyyy-MM-dd HH:mm:ss.SSS')");

no rows would be returned.

Reproducers:

org.apache.ignite.jdbc.thin.JdbcThinCacheToJdbcDataTypesCoverageTest#testLocalDateTimeDataType

org.apache.ignite.jdbc.thin.JdbcThinCacheToJdbcDataTypesCoverageTest#testLocalTimeDataType

org.apache.ignite.jdbc.thin.JdbcThinCacheToJdbcDataTypesCoverageTest#testLocalDateDataType



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12311) DELETE by PK doesn't work if PK is TINYINT or SMALLINT.

2019-10-21 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-12311:


 Summary: DELETE by PK doesn't work if PK is TINYINT or SMALLINT.
 Key: IGNITE-12311
 URL: https://issues.apache.org/jira/browse/IGNITE-12311
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin


1. Start bin\ignite.bat
2. Start bin\sqlline.bat -d org.apache.ignite.IgniteJdbcThinDriver -u 
jdbc:ignite:thin://127.0.0.1/distributedJoins=true

3. Execute operations:
0: jdbc:ignite:thin://127.0.0.1/> create table t1 (a tinyint null primary key, b VARCHAR);
No rows affected (0,198 seconds)
0: jdbc:ignite:thin://127.0.0.1/> insert into t1 values (1, 'a');
1 row affected (0,036 seconds)
0: jdbc:ignite:thin://127.0.0.1/> select * from t1;
+----+----+
| A  | B  |
+----+----+
| 1  | a  |
+----+----+
1 row selected (0,048 seconds)
0: jdbc:ignite:thin://127.0.0.1/> update t1 set b = 'b' where a=1;
1 row affected (0,018 seconds)
0: jdbc:ignite:thin://127.0.0.1/> select * from t1;
+----+----+
| A  | B  |
+----+----+
| 1  | b  |
+----+----+
1 row selected (0,006 seconds)
0: jdbc:ignite:thin://127.0.0.1/> delete from t1 where a=1;
No rows affected (0,003 seconds)
0: jdbc:ignite:thin://127.0.0.1/> select * from t1;
+----+----+
| A  | B  |
+----+----+
| 1  | b  |
+----+----+
1 row selected (0,006 seconds)

The same behavior occurs for the SMALLINT type.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12310) SortedEvictionPolicy fails with java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang. in case of Byte, Short, Long, etc.

2019-10-21 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-12310:


 Summary: SortedEvictionPolicy fails with 
java.lang.ClassCastException: java.lang.Integer cannot be cast to 
java.lang. in case of Byte, Short, Long, etc.
 Key: IGNITE-12310
 URL: https://issues.apache.org/jira/browse/IGNITE-12310
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin


{code:java}
java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer
at java.lang.Integer.compareTo(Integer.java:52)
at 
org.apache.ignite.cache.eviction.sorted.SortedEvictionPolicy$DefaultHolderComparator.compare(SortedEvictionPolicy.java:359)
at 
org.apache.ignite.cache.eviction.sorted.SortedEvictionPolicy$DefaultHolderComparator.compare(SortedEvictionPolicy.java:347)
at 
java.util.concurrent.ConcurrentSkipListMap.cpr(ConcurrentSkipListMap.java:655)
at 
java.util.concurrent.ConcurrentSkipListMap.doPut(ConcurrentSkipListMap.java:835)
at 
java.util.concurrent.ConcurrentSkipListMap.putIfAbsent(ConcurrentSkipListMap.java:1979)
at 
org.apache.ignite.internal.util.GridConcurrentSkipListSet.add(GridConcurrentSkipListSet.java:142)
at 
org.apache.ignite.cache.eviction.sorted.SortedEvictionPolicy$GridConcurrentSkipListSetEx.add(SortedEvictionPolicy.java:397)
at 
org.apache.ignite.cache.eviction.sorted.SortedEvictionPolicy.touch(SortedEvictionPolicy.java:175)
at 
org.apache.ignite.cache.eviction.AbstractEvictionPolicy.onEntryAccessed(AbstractEvictionPolicy.java:87)
at 
org.apache.ignite.internal.processors.cache.GridCacheEvictionManager.notifyPolicy(GridCacheEvictionManager.java:319)
at 
org.apache.ignite.internal.processors.cache.GridCacheEvictionManager.touch(GridCacheEvictionManager.java:228)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.touch(GridCacheMapEntry.java:5052)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.unlockEntries(GridDhtAtomicCache.java:3104)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1905)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1668)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.map(GridNearAtomicUpdateFuture.java:814)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapOnTopology(GridNearAtomicUpdateFuture.java:666)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAll0(GridDhtAtomicCache.java:1104)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.putAll0(GridDhtAtomicCache.java:654)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.putAll(GridCacheAdapter.java:2513)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.putAll(IgniteCacheProxyImpl.java:1264)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.putAll(GatewayProtectedCacheProxy.java:863)
at 
org.apache.ignite.internal.processors.cache.GridCacheDataTypesCoverageTest.checkBasicCacheOperations(GridCacheDataTypesCoverageTest.java:624)
at 
org.apache.ignite.internal.processors.cache.GridCacheDataTypesCoverageTest.testLongDataType(GridCacheDataTypesCoverageTest.java:347)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2075)
at java.lang.Thread.run(Thread.java:748)
[2019-08-23 12:26:39,784][INFO ][main][root

[jira] [Created] (IGNITE-12308) Data types coverage for basic sql operations.

2019-10-21 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-12308:


 Summary: Data types coverage for basic sql operations.
 Key: IGNITE-12308
 URL: https://issues.apache.org/jira/browse/IGNITE-12308
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Alexander Lapin
 Fix For: 2.8


For more details see: https://issues.apache.org/jira/browse/IGNITE-12307



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12307) Data types coverage for basic cache operations.

2019-10-21 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-12307:


 Summary: Data types coverage for basic cache operations.
 Key: IGNITE-12307
 URL: https://issues.apache.org/jira/browse/IGNITE-12307
 Project: Ignite
  Issue Type: Task
  Components: cache
Reporter: Alexander Lapin
 Fix For: 2.8


The data types used for testing are not collected in a single test/suite, and it's
not clear which types are covered and which are not. We should redesign the
coverage and cover the following

Operations:
 * put

 * putAll

 * remove

 * removeAll

 * get

 * getAll

Data Types both for value and key (if applicable):
 * byte/Byte

 * short/Short

 * int/Integer

 * long/Long

 * float/Float

 * double/Double

 * boolean/Boolean

 * char/String

 * Arrays of primitives (single type)

 * Arrays of Objects (different types)

 * Collections

 ** List

 ** Queue

 ** Set

 * Objects based on:

 ** primitives only

 ** primitives + collections

 ** primitives + collections + nested objects

Persistence mode:
 * in-memory

 * PDS

Cache configurations:
 * atomic/tx/mvcc

 * replicated/partitioned

 * TTL/no TTL

 * QueryEntity

 * Backups=1,2

 * EvictionPolicy

 * writeSynchronizationMode(FULL_SYNC, PRIMARY_SYNC, FULL_ASYNC)

 * onheapCacheEnabled



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12036) Changing baseline via set command may cause NPEs if configured NodeFilter takes node attributes into account

2019-08-02 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-12036:


 Summary: Changing baseline via set command may cause NPEs if 
configured NodeFilter takes node attributes into account
 Key: IGNITE-12036
 URL: https://issues.apache.org/jira/browse/IGNITE-12036
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin
Assignee: Alexander Lapin


VisorBaselineTask doesn't allow add()-ing an offline baseline node, but allows
set()-ting a collection of nodes where at least one is offline and doesn't belong to
the current BLT.
We should prohibit passing offline nodes to setBaselineTopology(…) (in case
they are not part of the current BLT): otherwise we won't be able to calculate
affinity in case a NodeFilter is configured.
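
For illustration, a minimal attribute-based node filter of the kind described above
(the attribute name is made up for the example; the user attribute would be set via
IgniteConfiguration#setUserAttributes):

{code:java}
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.lang.IgnitePredicate;

public class CellNodeFilter implements IgnitePredicate<ClusterNode> {
    /** Keeps cache data only on nodes that declare the user attribute CELL=data. */
    @Override public boolean apply(ClusterNode node) {
        return "data".equals(node.attribute("CELL"));
    }

    /** Usage sketch. */
    public static CacheConfiguration<Integer, Integer> cacheConfig() {
        return new CacheConfiguration<Integer, Integer>("myCache")
            .setNodeFilter(new CellNodeFilter());
    }
}
{code}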

 
 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (IGNITE-11705) Jdbc Thin: add ability to control affinity cache size.

2019-04-09 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-11705:


 Summary: Jdbc Thin: add ability to control affinity cache size.
 Key: IGNITE-11705
 URL: https://issues.apache.org/jira/browse/IGNITE-11705
 Project: Ignite
  Issue Type: Task
  Components: jdbc
Reporter: Alexander Lapin


Within AffinityCache there are two properties, DISTRIBUTIONS_CACHE_LIMIT and
SQL_CACHE_LIMIT, that are hard-coded. We should add an ability to control the given
parameters via some sort of configuration. IgniteSystemProperties is not an
option, however.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11661) Jdbc Thin: Implement tests for best effort affinity

2019-04-01 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-11661:


 Summary: Jdbc Thin: Implement tests for best effort affinity
 Key: IGNITE-11661
 URL: https://issues.apache.org/jira/browse/IGNITE-11661
 Project: Ignite
  Issue Type: Task
  Components: jdbc
Reporter: Alexander Lapin


Test plan draft.
 # Check that requests go to the expected number of nodes for different 
combinations of conditions
 ** Transactional
 *** Without params
  Select
 * Different partition tree options (All/NONE/Group/CONST) produced by 
different query types.
  Dml
 * - // -
 *** With params
  - // -
 ** Non-Transactional
 *** - // -
 # Check that request/response functionality works fine if server response 
lacks partition result.
 # Check that partition result is supplied only in case of rendezvous affinity 
function without custom filters.
 # Check that best effort functionality works fine for different partitions 
count.
 # Check that a change in topology leads to jdbc thin affinity cache 
invalidation.
 ## Topology changed during partition result retrieval.
 ## Topology changed during cache distribution retrieval.
 ## Topology changed during best-effort-affinity-unrelated query.
 # Check that jdbc thin best effort affinity works fine if the cache is full and 
new data is still coming. For the given case we probably should decrease the cache 
boundaries.
 # Check that the proper connection is used if the set of nodes we are connected to 
and the set of nodes derived from partitions:
 ## Fully intersect;
 ## Partially intersect;
 ## Don't intersect, i.e.
||User Specified||Derived from partitions||
|host:port1 -> UUID1
 host:port2 -> UUID2|partition1 -> UUID3|

No intersection, so random connection should be used.
 # Check client reconnection after failure.
 # Check that jdbc thin best effort affinity is skipped if it is switched off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11566) JDBC Thin: add support for custom partitions count within the client side best effort affinity

2019-03-19 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-11566:


 Summary: JDBC Thin: add support for custom partitions count within 
the client side best effort affinity
 Key: IGNITE-11566
 URL: https://issues.apache.org/jira/browse/IGNITE-11566
 Project: Ignite
  Issue Type: Task
  Components: jdbc
Reporter: Alexander Lapin






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11565) JDBC Thin: add connection resolution in case of multiple partitions

2019-03-19 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-11565:


 Summary: JDBC Thin: add connection resolution in case of multiple 
partitions
 Key: IGNITE-11565
 URL: https://issues.apache.org/jira/browse/IGNITE-11565
 Project: Ignite
  Issue Type: Task
  Components: jdbc
Reporter: Alexander Lapin


At this moment, we only calculate the target node id if there are no partitions or a
single partition. Implement node resolution logic for the multiple partitions case.
The basic idea is that we should use a random node from those we already have
connections to if it is derived from the calculated partitions, or a totally random
one if there are no matches.
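
A hedged sketch of that resolution rule (all the names below are illustrative; the
connection value type is left as Object because the driver's actual type isn't
referenced here):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;
import java.util.UUID;
import java.util.stream.Collectors;

public class NodeResolutionSketch {
    /** Picks a node for a multi-partition query, per the rule described above. */
    public static UUID resolveNodeId(Set<UUID> nodesFromPartitions, Map<UUID, Object> connections, Random rnd) {
        // Nodes derived from the calculated partitions that we already have connections to.
        List<UUID> candidates = nodesFromPartitions.stream()
            .filter(connections::containsKey)
            .collect(Collectors.toList());

        if (candidates.isEmpty()) {
            // No matches: fall back to a totally random established connection.
            List<UUID> all = new ArrayList<>(connections.keySet());
            return all.get(rnd.nextInt(all.size()));
        }

        return candidates.get(rnd.nextInt(candidates.size()));
    }
}
{code}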



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11507) SQL: Ensure that affinity topology version doesn't change during PartitionResult construction/application.

2019-03-07 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-11507:


 Summary: SQL: Ensure that affinity topology version doesn't change 
during PartitionResult construction/application.
 Key: IGNITE-11507
 URL: https://issues.apache.org/jira/browse/IGNITE-11507
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Alexander Lapin
 Fix For: 2.8


Currently, some actions might be performed (for example, cache removal) during
PartitionResult construction, so it might become invalid. Besides that, it's not
possible to associate a PartitionResult with an affinity topology version, so it is
impossible to guarantee that the partition result is used on the same version
on which it was built.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11321) JDBC Thin: implement nodes multi version support in case of best effort affinity mode.

2019-02-14 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-11321:


 Summary: JDBC Thin: implement nodes multi version support in case 
of best effort affinity mode.
 Key: IGNITE-11321
 URL: https://issues.apache.org/jira/browse/IGNITE-11321
 Project: Ignite
  Issue Type: Task
  Components: jdbc
Reporter: Alexander Lapin
 Fix For: 2.8


Currently, in case of best effort affinity mode, we throw an SQLException if the
version of any of the nodes to which we connect is different from the version
of all other nodes. The given logic needs to be improved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11320) JDBC Thin: add support for individual reconnect in case of best effort affinity mode.

2019-02-14 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-11320:


 Summary: JDBC Thin: add support for individual reconnect in case 
of best effort affinity mode.
 Key: IGNITE-11320
 URL: https://issues.apache.org/jira/browse/IGNITE-11320
 Project: Ignite
  Issue Type: Task
  Components: jdbc
Reporter: Alexander Lapin
 Fix For: 2.8


Currently, in case of best effort affinity mode, we either connect to all nodes
specified by the user or throw an SQLException. The given logic needs to be improved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11314) JDBC Thin: add transaction-scoped flag to JdbcHandler's responses.

2019-02-13 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-11314:


 Summary: JDBC Thin: add transaction-scoped flag to JdbcHandler's 
responses.
 Key: IGNITE-11314
 URL: https://issues.apache.org/jira/browse/IGNITE-11314
 Project: Ignite
  Issue Type: Task
  Components: jdbc
Reporter: Alexander Lapin


Within the context of best effort affinity, and in particular multiple connections,
it's necessary to use "sticky" connections in case of "next page" requests,
transactions, streaming and copy.
In order to implement the transaction-based sticky use case we need to know whether
we are in a transactional scope or not. So JdbcRequestHandler ought to retrieve the
query execution plan, analyse whether a transaction exists and propagate the
corresponding flag to the client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11309) JDBC Thin: add flag or property to disable best effort affinity

2019-02-13 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-11309:


 Summary: JDBC Thin: add flag or property to disable best effort 
affinity
 Key: IGNITE-11309
 URL: https://issues.apache.org/jira/browse/IGNITE-11309
 Project: Ignite
  Issue Type: Task
  Components: sql
Affects Versions: 2.8
Reporter: Alexander Lapin


It's necessary to have an ability to disable best effort affinity among thin
clients, including the thin JDBC client.
It's not obvious whether it should be a flag in the connection string, app properties
or some other place, so research is required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11287) JDBC Thin: best effort affinity

2019-02-11 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-11287:


 Summary: JDBC Thin: best effort affinity
 Key: IGNITE-11287
 URL: https://issues.apache.org/jira/browse/IGNITE-11287
 Project: Ignite
  Issue Type: Task
Reporter: Alexander Lapin


It's an umbrella ticket for implementing 
[IEP-23|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients]
 within the scope of JDBC Thin driver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11258) JDBC: update connection setup logic.

2019-02-08 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-11258:


 Summary: JDBC: update connection setup logic.
 Key: IGNITE-11258
 URL: https://issues.apache.org/jira/browse/IGNITE-11258
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Alexander Lapin


# On thin client startup, it connects to *all* *nodes* provided by the user via the 
client configuration.
 # Upon handshake, the server returns its UUID to the client.
 # By the end of the startup procedure, the client has open connections to all 
available server nodes and the following mapping (*nodeMap*): [UUID => 
Connection].

Connecting to all nodes helps to identify the available nodes, but can lead to a 
significant delay when the thin client is used on a large cluster with a long IP 
list provided by the user. To lower this delay, asynchronous establishment of 
connections can be used, as sketched below.
For more information see [IEP-23: Best Effort 
Affinity|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients]
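
A minimal sketch of the asynchronous variant mentioned above (NodeConnection,
handshake() and the address list are hypothetical names, not the driver's actual
API):

{code:java}
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class AsyncConnectSketch {
    /** Hypothetical connection holder; the real driver's types differ. */
    static class NodeConnection {
        final UUID nodeId;

        NodeConnection(UUID nodeId) { this.nodeId = nodeId; }
    }

    private final Map<UUID, NodeConnection> nodeMap = new ConcurrentHashMap<>();

    void connectAll(List<InetSocketAddress> addrs) {
        List<CompletableFuture<Void>> futs = new ArrayList<>();

        for (InetSocketAddress addr : addrs)
            futs.add(CompletableFuture.runAsync(() -> {
                NodeConnection conn = handshake(addr); // Handshake returns the node UUID (see IGNITE-11257).
                nodeMap.put(conn.nodeId, conn);
            }));

        // Don't block startup on the whole list (assumes addrs is non-empty): one live
        // connection is enough to start working, the map fills in as the rest complete.
        CompletableFuture.anyOf(futs.toArray(new CompletableFuture[0])).join();
    }

    private NodeConnection handshake(InetSocketAddress addr) {
        // Placeholder: open a socket to addr, perform the handshake, read the node UUID.
        return new NodeConnection(UUID.randomUUID());
    }
}
{code}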



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11257) JDBC: update handshake protocol so that the node returns its UUID.

2019-02-08 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-11257:


 Summary: JDBC: update handshake protocol so that the node returns 
its UUID.
 Key: IGNITE-11257
 URL: https://issues.apache.org/jira/browse/IGNITE-11257
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Alexander Lapin
 Fix For: 2.8


Add node UUID to successful handshake response.

For more information see [IEP-23: Best Effort 
Affinity|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11197) SQL: Ability to resolve partition for temporal data types.

2019-02-04 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-11197:


 Summary: SQL: Ability to resolve partition for temporal data types.
 Key: IGNITE-11197
 URL: https://issues.apache.org/jira/browse/IGNITE-11197
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Alexander Lapin


Within org.apache.ignite.internal.sql.optimizer.affinity.PartitionUtils#convert 
it's now possible to convert the following data types:
* BOOLEAN,
* BYTE,
* SHORT,
* INT,
* LONG,
* FLOAT,
* DOUBLE,
* STRING,
* DECIMAL,
* UUID;

The following temporal data types should also be supported:
* DATE,
* TIME,
* TIMESTAMP

The most critical part here is that the conversion result should be exactly the
same as the H2 conversion result (see
org.apache.ignite.internal.processors.query.h2.H2Utils#convert), otherwise
partitions might be resolved incorrectly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11019) SQL: explain plan of a simple query contains merge table

2019-01-22 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-11019:


 Summary: SQL: explain plan of a simple query contains merge table
 Key: IGNITE-11019
 URL: https://issues.apache.org/jira/browse/IGNITE-11019
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Alexander Lapin
 Fix For: 2.8


In case of a simple* query like the following, "select * from Organization org where
org._KEY = 1 or org._KEY = 2", the explain plan will contain a merge table despite the
fact that it's skipped within the regular query flow.
{code:java|title=GridReduceQueryExecutor#query}
final boolean skipMergeTbl = !qry.explain() && qry.skipMergeTable()
{code}

Explain plan output:
 : [SELECT
ORG__Z0.NAME AS __C0_0,
ORG__Z0.DEBTCAPITAL AS __C0_1,
ORG__Z0.ID AS __C0_2
FROM "orgBetweenTest".ORGANIZATION ORG__Z0
/* "orgBetweenTest"."_key_PK": _KEY IN(1, 2) */
WHERE ORG__Z0._KEY IN(1, 2)]
 : [SELECT
__C0_0 AS NAME,
__C0_1 AS DEBTCAPITAL,
__C0_2 AS ID
FROM PUBLIC.__T0
/* "orgBetweenTest"."merge_scan" */]

*simple query by means of GridSqlQuery#simpleQuery



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10893) ClientListenerResponse.status uses codes both from IgniteQueryErrorCode and ClientListenerResponse itself.

2019-01-11 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-10893:


 Summary: ClientListenerResponse.status uses codes both from 
IgniteQueryErrorCode and ClientListenerResponse itself. 
 Key: IGNITE-10893
 URL: https://issues.apache.org/jira/browse/IGNITE-10893
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Alexander Lapin
 Fix For: 2.8


ClientListenerResponse.status might be set to one of the IgniteQueryErrorCode
constants. Besides that, ClientListenerResponse has a few codes as constants
itself. These codes may intersect; in particular,
ClientListenerResponse.STATUS_FAILED == IgniteQueryErrorCode.UNKNOWN. It seems
to work fine at the moment, however it looks unsafe.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10836) Add documentation for jdbc thin query cancel

2018-12-27 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-10836:


 Summary: Add documentation for jdbc thin query cancel
 Key: IGNITE-10836
 URL: https://issues.apache.org/jira/browse/IGNITE-10836
 Project: Ignite
  Issue Type: Task
  Components: documentation, sql
Reporter: Alexander Lapin
 Fix For: 2.8


Query cancel for JDBC thin was implemented.

At least the error codes page
([https://apacheignite-sql.readme.io/docs/error-codes-27]) should be updated,
because a new error code (57014 Query cancelled) was added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: JDBC thin driver: support connection timeout

2018-12-03 Thread Alexander Lapin
Hi Ivan, Vladimir,

@Ivan

> 1. According to the jdbc spec [1] setNetworkTimeout method is
> optional. What user problem we are going to solve by implementing that
> method?
>
We are going to give the user an ability to set a custom connection timeout.

> Also I checked another quite popular jdbc driver provided by
> MariaDB [2]. They ignore an executor argument as well and set a socket
> timeout instead. So, I think that we are on a safe side if we ignore
> an executor.
>
Got it. Thank you!

So, I'll implement the connection timeout with the help of a socket timeout,
ignoring the executor.
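
For reference, here's a minimal sketch of that approach (the field names are
illustrative, not the driver's actual internals):

import java.net.Socket;
import java.net.SocketException;
import java.sql.SQLException;
import java.util.concurrent.Executor;

class ConnectionTimeoutSketch {
    private final Socket clientSocket = new Socket();
    private int netTimeoutMillis;

    /** Maps the JDBC network timeout onto the socket read timeout; the executor is intentionally ignored. */
    public void setNetworkTimeout(Executor executor, int milliseconds) throws SQLException {
        if (milliseconds < 0)
            throw new SQLException("Network timeout cannot be negative.");

        try {
            clientSocket.setSoTimeout(milliseconds);
        }
        catch (SocketException e) {
            throw new SQLException(e);
        }

        netTimeoutMillis = milliseconds;
    }

    public int getNetworkTimeout() {
        return netTimeoutMillis;
    }
}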

Thanks,
Alexander

пн, 3 дек. 2018 г. в 00:41, Vladimir Ozerov :

> +1
>
> вс, 2 дек. 2018 г. в 18:39, Павлухин Иван :
>
> > Missing ref:
> > [2]
> >
> https://mvnrepository.com/artifact/org.mariadb.jdbc/mariadb-java-client/2.3.0
> >
> > 2018-12-02 18:31 GMT+03:00, Павлухин Иван :
> > > Hi Alexander,
> > >
> > > I have 2 points.
> > >
> > > 1. According to the jdbc spec [1] setNetworkTimeout method is
> > > optional. What user problem we are going to solve by implementing that
> > > method?
> > > 2. Also I checked another quite popular jdbc driver provided by
> > > MariaDB [2]. They ignore an executor argument as well and set a socket
> > > timeout instead. So, I think that we are on a safe side if we ignore
> > > an executor.
> > >
> > > [1]
> > https://download.oracle.com/otndocs/jcp/jdbc-4_2-mrel2-spec/index.html
> > > пт, 30 нояб. 2018 г. в 16:28, Alexander Lapin :
> > >>
> > >> Hi Igniters,
> > >>
> > >> Within context of connection timeout [
> > >> https://issues.apache.org/jira/browse/IGNITE-5234] it's not obvious
> > >> whether
> > >> it's required to use setNetworkTimeout's executor or not.
> > >>
> > >> According to the javadoc of
> > >> java.sql.Connection#setNetworkTimeout(Executor
> > >> executor, int milliseconds), executor is "The Executor
> > >> implementation which will be used by setNetworkTimeout."
> > >> Seems that executor supposed to take care of connection
> closing/aborting
> > >> in
> > >> case of timeout, based on submitted Runnable implementation. On the
> > other
> > >> hand it's possible to ignore executor and implement
> > >> timeout-detection/cancellation logic with Timer. Something like
> > following
> > >> (pseudo-code):
> > >>
> > >> ConnectionTimeoutTimerTask connectionTimeoutTimerTask = new
> > >> ConnectionTimeoutTimerTask(timeout);
> > >> timer.schedule(connectionTimeoutTimerTask, 0, REQUEST_TIMEOUT_PERIOD);
> > >> ...
> > >> JdbcResponse res = cliIo.sendRequest(req);
> > >> ...
> > >>
> > >> private class ConnectionTimeoutTimerTask extends TimerTask {
> > >> ...
> > >> @Override public void run() {
> > >> if (remainingConnectionTimeout <= 0)
> > >> close(); //connection.close();
> > >>
> > >> remainingConnectionTimeout -= REQUEST_TIMEOUT_PERIOD;
> > >> }
> > >> ...
> > >> }
> > >>
> > >> It worth to mention that MSSQL Jdbc driver doesn't use executor and
> > >> PostgreSQL doesn't implement setNetworkTimeout() at all.
> > >>
> > >> From my point of view it might be better to ignore executor, is it
> > >> suitable?
> > >>
> > >> Any ideas?
> > >
> > >
> > >
> > > --
> > > Best regards,
> > > Ivan Pavlukhin
> > >
> >
> >
> > --
> > Best regards,
> > Ivan Pavlukhin
> >
>


JDBC thin driver: support connection timeout

2018-11-30 Thread Alexander Lapin
Hi Igniters,

Within context of connection timeout [
https://issues.apache.org/jira/browse/IGNITE-5234] it's not obvious whether
it's required to use setNetworkTimeout's executor or not.

According to the javadoc of java.sql.Connection#setNetworkTimeout(Executor
executor, int milliseconds), executor is "The Executor
implementation which will be used by setNetworkTimeout."
It seems that the executor is supposed to take care of connection closing/aborting in
case of timeout, based on a submitted Runnable implementation. On the other
hand it's possible to ignore the executor and implement the
timeout-detection/cancellation logic with a Timer. Something like the following
(pseudo-code):

ConnectionTimeoutTimerTask connectionTimeoutTimerTask = new
ConnectionTimeoutTimerTask(timeout);
timer.schedule(connectionTimeoutTimerTask, 0, REQUEST_TIMEOUT_PERIOD);
...
JdbcResponse res = cliIo.sendRequest(req);
...

private class ConnectionTimeoutTimerTask extends TimerTask {
...
@Override public void run() {
if (remainingConnectionTimeout <= 0)
close(); //connection.close();

remainingConnectionTimeout -= REQUEST_TIMEOUT_PERIOD;
}
...
}

It's worth mentioning that the MSSQL JDBC driver doesn't use the executor and
PostgreSQL doesn't implement setNetworkTimeout() at all.

From my point of view it might be better to ignore the executor, is it suitable?

Any ideas?


[jira] [Created] (IGNITE-10340) JDBC thin: in some cases it's not possible to convert IllegalStateException to SQLException with appropriate message.

2018-11-20 Thread Alexander Lapin (JIRA)
Alexander Lapin created IGNITE-10340:


 Summary: JDBC thin: in some cases it's not possible to convert 
IllegalStateException to SQLException with appropriate message.
 Key: IGNITE-10340
 URL: https://issues.apache.org/jira/browse/IGNITE-10340
 Project: Ignite
  Issue Type: Bug
  Components: jdbc
Affects Versions: 2.6
Reporter: Alexander Lapin


In case of using an already closed JdbcBulkLoadProcessor, the following exception
might be thrown:
{code}
[12:04:15] (err) Error processing file batchjava.lang.IllegalStateException: 
Data streamer has been closed.
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.closedException(DataStreamerImpl.java:1081)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.lock(DataStreamerImpl.java:443)
...
{code}

However, because cancellationReason is null, it's not possible to detect whether the
query was cancelled or not:
{code:title=DataStreamerImpl.java|borderStyle=solid}
private void closedException() {
throw new IllegalStateException("Data streamer has been closed.", 
cancellationReason);
}
{code}
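
For illustration, a hedged sketch of the kind of conversion the fix could aim for
(the names and the explicit cancellation flag are hypothetical; 57014 is the
"query cancelled" error code mentioned in IGNITE-10836):

{code:java}
import java.sql.SQLException;

public class CancelAwareConversionSketch {
    /** Converts the IllegalStateException from a closed streamer into a meaningful SQLException. */
    public static SQLException convert(IllegalStateException e, boolean qryCancelled) {
        if (qryCancelled) // An explicit flag, since e.getCause() (cancellationReason) may be null.
            return new SQLException("The query was cancelled while executing.", "57014", e);

        return new SQLException(e.getMessage(), e);
    }
}
{code}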

Reproducer:
{code}
org.apache.ignite.jdbc.thin.JdbcThinStatementCancelSelfTest
#testExpectSQLExceptionAndAFAPControlRetrievalAfterCancelingLongRunningFileUpload
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Welcome Message

2018-10-29 Thread Alexander Lapin
Hello, Ignite Community!

My name is Alexander Lapin. I want to contribute to Apache Ignite.
My JIRA user name is alapin. Any help on this will be appreciated.

Thanks!