[jira] [Updated] (IGNITE-4757) SQL Joins Metrics

2017-03-07 Thread Denis Magda (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Magda updated IGNITE-4757:

Description: 
The SQL query monitoring part has to be improved. We suggest adding the metrics 
below.

Per-query metrics. Total history size is defined by 
{{CacheConfiguration.getQueryDetailMetricsSize()}}:

* Total query execution count.
* Query execution time - min, max, avg. Provide a breakdown for map, per-join 
and reduce phases.
* The number of bytes exchanged between nodes during query execution. Provide a 
per-join, per-union, etc. breakdown.
* The number of rows returned - min, max, avg.
* Collocated or non-collocated query.

In addition, we need to introduce a way to filter out metrics for queries that 
complete faster than some predefined NNN time.

Finally, research the possibility of putting the metrics into the H2 execution 
plan. Create a separate ticket if this task is not completed under this one.

Discussion on the dev list:
http://apache-ignite-developers.2346864.n4.nabble.com/Additional-SQL-metrics-td14945.html
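The aggregation and filtering described above can be sketched in a few lines of plain Java (class and method names here are illustrative, not the actual Ignite API):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative per-query detail metrics holder (names are hypothetical,
// not the Ignite API): tracks execution count and min/max/avg time.
class QueryStats {
    private long executions;
    private long minTimeMs = Long.MAX_VALUE;
    private long maxTimeMs;
    private long totalTimeMs;

    void onQueryCompleted(long durationMs) {
        executions++;
        minTimeMs = Math.min(minTimeMs, durationMs);
        maxTimeMs = Math.max(maxTimeMs, durationMs);
        totalTimeMs += durationMs;
    }

    long executions() { return executions; }
    long minTimeMs()  { return minTimeMs; }
    long maxTimeMs()  { return maxTimeMs; }

    double avgTimeMs() {
        return executions == 0 ? 0 : (double) totalTimeMs / executions;
    }

    // The proposed threshold filter: keep metrics only for queries that ran
    // at least thresholdMs, dropping faster ones from the history.
    static List<Long> filterSlow(List<Long> durationsMs, long thresholdMs) {
        List<Long> slow = new ArrayList<>();
        for (long d : durationsMs)
            if (d >= thresholdMs)
                slow.add(d);
        return slow;
    }
}
```

In the real implementation the history size would be bounded by {{CacheConfiguration.getQueryDetailMetricsSize()}}, as noted above.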

  was:
The SQL query monitoring part has to be improved. We suggest adding the metrics 
below.

Per-query metrics. Total history size is defined by 
{{CacheConfiguration.getQueryDetailMetricsSize()}}:

* Total query execution count.
* Query execution time - min, max, avg. Provide a breakdown for map, per-join 
and reduce phases.
* The number of bytes exchanged between nodes during query execution. Provide a 
per-join breakdown.
* The number of rows returned - min, max, avg.
* Collocated or non-collocated query.

In addition, we need to introduce a way to filter out metrics for queries that 
complete faster than some predefined NNN time.

Finally, research the possibility of putting the metrics into the H2 execution 
plan. Create a separate ticket if this task is not completed under this one.

Discussion on the dev list:
http://apache-ignite-developers.2346864.n4.nabble.com/Additional-SQL-metrics-td14945.html


> SQL Joins Metrics
> -
>
> Key: IGNITE-4757
> URL: https://issues.apache.org/jira/browse/IGNITE-4757
> Project: Ignite
>  Issue Type: Task
>Reporter: Denis Magda
>
> The SQL query monitoring part has to be improved. We suggest adding the metrics 
> below.
> Per-query metrics. Total history size is defined by 
> {{CacheConfiguration.getQueryDetailMetricsSize()}}:
> * Total query execution count.
> * Query execution time - min, max, avg. Provide a breakdown for map, per-join 
> and reduce phases.
> * The number of bytes exchanged between nodes during query execution. Provide a 
> per-join, per-union, etc. breakdown.
> * The number of rows returned - min, max, avg.
> * Collocated or non-collocated query.
> In addition, we need to introduce a way to filter out metrics for queries 
> that complete faster than some predefined NNN time.
> Finally, research the possibility of putting the metrics into the H2 execution 
> plan. Create a separate ticket if this task is not completed under this 
> one.
> Discussion on the dev list:
> http://apache-ignite-developers.2346864.n4.nabble.com/Additional-SQL-metrics-td14945.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-4797) Need to expose offheap memory allocated size metric for internal data structures

2017-03-07 Thread Kartik Somani (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900020#comment-15900020
 ] 

Kartik Somani commented on IGNITE-4797:
---

I would like to work on this as my first ticket here. How can I assign it to 
myself?

> Need to expose offheap memory allocated size metric for internal data 
> structures
> 
>
> Key: IGNITE-4797
> URL: https://issues.apache.org/jira/browse/IGNITE-4797
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Andrey Gura
>  Labels: newbie
> Fix For: 2.0
>
>
> Offheap caches expose the offheap memory allocated size via 
> {{CacheMetricsMXBean.getOffHeapAllocatedSize()}}. But this metric doesn't 
> take into account offheap memory that is allocated for internal data 
> structures (GridUnsafeMap and the LRU eviction policy). 
> However, Ignite already collects this metric (see the 
> GridUnsafeMemory.systemAllocatedSize() method).
> We need to expose this metric via {{CacheMetricsMXBean}}.





[jira] [Created] (IGNITE-4799) Remove TcpDiscoverySpi.heartbeatsFrequency parameter

2017-03-07 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-4799:
---

 Summary: Remove TcpDiscoverySpi.heartbeatsFrequency parameter
 Key: IGNITE-4799
 URL: https://issues.apache.org/jira/browse/IGNITE-4799
 Project: Ignite
  Issue Type: Bug
Reporter: Denis Magda
 Fix For: 2.0


{{TcpDiscoverySpi.heartbeatsFrequency}} is no longer used to adjust the 
heartbeats frequency. It affects the frequency of metrics messages sent over 
the cluster ring.

The following has to be done as a part of the 2.0 release:
* Remove the {{TcpDiscoverySpi.heartbeatsFrequency}} parameter.
* Use {{IgniteConfiguration.getMetricsUpdateFrequency}} to adjust the rate of 
metrics messages.
* Make sure {{IgniteConfiguration.getMetricsUpdateFrequency}} and metrics 
messages do not participate in the failure detection process. We have to 
clean up the legacy code in {{ServerImpl}}.

Refer to this discussion for more details:
http://apache-ignite-developers.2346864.n4.nabble.com/Renaming-TcpDiscoverySpi-heartbeatsFrequency-to-TcpDiscoverySpi-metricsUpdateFrequency-td14941.html
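A sketch of the intended configuration after the change, per the ticket (a fragment only, not a complete example; the frequency values below are arbitrary):

```java
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();

// The rate of metrics messages is adjusted on the top-level configuration;
// the TcpDiscoverySpi.heartbeatsFrequency parameter goes away.
cfg.setMetricsUpdateFrequency(2000); // milliseconds

// Failure detection is configured independently and must not be
// affected by metrics messages.
cfg.setFailureDetectionTimeout(10_000);
```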
 





[jira] [Updated] (IGNITE-4799) Remove TcpDiscoverySpi.heartbeatsFrequency parameter

2017-03-07 Thread Denis Magda (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Magda updated IGNITE-4799:

Priority: Blocker  (was: Major)

> Remove TcpDiscoverySpi.heartbeatsFrequency parameter
> 
>
> Key: IGNITE-4799
> URL: https://issues.apache.org/jira/browse/IGNITE-4799
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
>Priority: Blocker
> Fix For: 2.0
>
>
> {{TcpDiscoverySpi.heartbeatsFrequency}} is no longer used to adjust the 
> heartbeats frequency. It affects the frequency of metrics messages sent over 
> the cluster ring.
> The following has to be done as a part of the 2.0 release:
> * Remove the {{TcpDiscoverySpi.heartbeatsFrequency}} parameter.
> * Use {{IgniteConfiguration.getMetricsUpdateFrequency}} to adjust the rate of 
> metrics messages.
> * Make sure {{IgniteConfiguration.getMetricsUpdateFrequency}} and metrics 
> messages do not participate in the failure detection process. We have to 
> clean up the legacy code in {{ServerImpl}}.
> Refer to this discussion for more details:
> http://apache-ignite-developers.2346864.n4.nabble.com/Renaming-TcpDiscoverySpi-heartbeatsFrequency-to-TcpDiscoverySpi-metricsUpdateFrequency-td14941.html
>  





[jira] [Closed] (IGNITE-4772) CPP: Add documentation for LoadCache feature

2017-03-07 Thread Denis Magda (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Magda closed IGNITE-4772.
---

> CPP: Add documentation for LoadCache feature
> 
>
> Key: IGNITE-4772
> URL: https://issues.apache.org/jira/browse/IGNITE-4772
> Project: Ignite
>  Issue Type: Task
>  Components: documentation, platforms
>Affects Versions: 1.8
>Reporter: Igor Sapego
>Assignee: Prachi Garg
>  Labels: C++, DML, cpp, documentation
> Fix For: 1.9
>
>
> {{LoadCache}} feature has been implemented recently for C++ client 
> (IGNITE-4670) and needs proper documentation on 
> [readme.io|https://apacheignite-cpp.readme.io/docs]





[jira] [Closed] (IGNITE-4798) Cluster does not finish rebalancing after nodes leaving

2017-03-07 Thread Andrew Mashenkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov closed IGNITE-4798.


> Cluster does not finish rebalancing after nodes leaving
> ---
>
> Key: IGNITE-4798
> URL: https://issues.apache.org/jira/browse/IGNITE-4798
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Kholodov
>
>  I managed to reproduce the stability issue we've been having in production 
> in a relatively sterile environment.
> The situation is:
> 1. Startup a cluster of 223 nodes.
> 2. Wait for everything to stabilize (took about 2 minutes).
> 3. Shut down 112 nodes.
> 4. Wait for everything to stabilize..
> Since that point, I can't connect client nodes to the cluster:
> 2017-02-15 23:13:16.396 WARN  o.a.i.i.p.c.GridCachePartitionExchangeManager 
> main ctx: actor: - Failed to wait for 
> initial partition map exchange. Possible reasons are:
>   ^-- Transactions in deadlock.
>   ^-- Long running transactions (ignore if this is the case).
>   ^-- Unreleased explicit locks.
> Other cache operations are also stuck.
>  





[jira] [Commented] (IGNITE-2552) Eviction policy must consider either max size or max entries count

2017-03-07 Thread Andrey Gura (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899625#comment-15899625
 ] 

Andrey Gura commented on IGNITE-2552:
-

[~Alexey Kuznetsov], 

Looks good to me. Did you run the tests on TeamCity?

> Eviction policy must consider either max size or max entries count
> --
>
> Key: IGNITE-2552
> URL: https://issues.apache.org/jira/browse/IGNITE-2552
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 1.5.0.final
>Reporter: Denis Magda
>Assignee: Alexey Kuznetsov
>Priority: Minor
> Fix For: 2.0
>
>
> Presently, both the max size and the max entries count are considered by the 
> eviction policy logic even if only one of them is set by the user explicitly.
> This behavior must be reworked so that if only one of the parameters is set 
> explicitly, it alone is used by the eviction policy while the other one 
> is ignored.
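The intended precedence could look like this self-contained sketch (illustrative only, not the actual eviction policy code; `null` stands for "not set explicitly"):

```java
// Illustrative precedence check: only limits the user set explicitly
// participate in the eviction decision; the other one is ignored.
class EvictionLimits {
    private final Long maxMemorySize;   // bytes, null if not set explicitly
    private final Long maxEntriesCount; // null if not set explicitly

    EvictionLimits(Long maxMemorySize, Long maxEntriesCount) {
        this.maxMemorySize = maxMemorySize;
        this.maxEntriesCount = maxEntriesCount;
    }

    boolean shouldEvict(long curSizeBytes, long curEntries) {
        boolean sizeExceeded = maxMemorySize != null && curSizeBytes > maxMemorySize;
        boolean countExceeded = maxEntriesCount != null && curEntries > maxEntriesCount;
        // An unset (null) limit never triggers eviction.
        return sizeExceeded || countExceeded;
    }
}
```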





[jira] [Closed] (IGNITE-4162) Step-by-step guidance on how to start and use Ignite cluster with Kubernetes

2017-03-07 Thread Denis Magda (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Magda closed IGNITE-4162.
---

> Step-by-step guidance on how to start and use Ignite cluster with Kubernetes
> 
>
> Key: IGNITE-4162
> URL: https://issues.apache.org/jira/browse/IGNITE-4162
> Project: Ignite
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Denis Magda
>Assignee: Prachi Garg
> Fix For: 2.0
>
>
> At the end, we have to prepare a step-by-step guide on how to:
> - configure, launch and scale an Ignite cluster;
> - connect to and work with this cluster from applications running inside 
> and outside of Kubernetes.
> Refer to existing guides prepared for other products:
> https://github.com/kubernetes/kubernetes/tree/master/examples/storage/cassandra
> https://github.com/pires/hazelcast-kubernetes
> https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes





[jira] [Resolved] (IGNITE-4798) Cluster does not finish rebalancing after nodes leaving

2017-03-07 Thread Andrew Mashenkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov resolved IGNITE-4798.
--
Resolution: Duplicate

> Cluster does not finish rebalancing after nodes leaving
> ---
>
> Key: IGNITE-4798
> URL: https://issues.apache.org/jira/browse/IGNITE-4798
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Kholodov
>
>  I managed to reproduce the stability issue we've been having in production 
> in a relatively sterile environment.
> The situation is:
> 1. Startup a cluster of 223 nodes.
> 2. Wait for everything to stabilize (took about 2 minutes).
> 3. Shut down 112 nodes.
> 4. Wait for everything to stabilize..
> Since that point, I can't connect client nodes to the cluster:
> 2017-02-15 23:13:16.396 WARN  o.a.i.i.p.c.GridCachePartitionExchangeManager 
> main ctx: actor: - Failed to wait for 
> initial partition map exchange. Possible reasons are:
>   ^-- Transactions in deadlock.
>   ^-- Long running transactions (ignore if this is the case).
>   ^-- Unreleased explicit locks.
> Other cache operations are also stuck.
>  





[jira] [Updated] (IGNITE-4798) Cluster does not finish rebalancing after nodes leaving

2017-03-07 Thread Nikolay Tikhonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Tikhonov updated IGNITE-4798:
-
Description: 
I managed to reproduce the stability issue we've been having in production in 
a relatively sterile environment.

The situation is:
1. Start up a cluster of 223 nodes.
2. Wait for everything to stabilize (took about 2 minutes).
3. Shut down 112 nodes.
4. Wait for everything to stabilize.

Since that point, I can't connect client nodes to the cluster:
2017-02-15 23:13:16.396 WARN  o.a.i.i.p.c.GridCachePartitionExchangeManager 
main ctx: actor: - Failed to wait for 
initial partition map exchange. Possible reasons are:
  ^-- Transactions in deadlock.
  ^-- Long running transactions (ignore if this is the case).
  ^-- Unreleased explicit locks.

Other cache operations are also stuck.

  was:
   
Hi Valentin,

I managed to reproduce the stability issue we've been having in production in a 
relatively sterile environment.
The logs and stack traces are accessible here: 
https://drive.google.com/open?id=0B1YMrCiHZq1PMWJsblBYSXhaX1k

The situation is:
1. Startup a cluster of 223 nodes.
2. Wait for everything to stabilize (took about 2 minutes).
3. Shut down 112 nodes.
4. Wait for everything to stabilize..

Since that point, I can't connect client nodes to the cluster:
2017-02-15 23:13:16.396 WARN  o.a.i.i.p.c.GridCachePartitionExchangeManager 
main ctx: actor: - Failed to wait for 
initial partition map exchange. Possible reasons are:
  ^-- Transactions in deadlock.
  ^-- Long running transactions (ignore if this is the case).
  ^-- Unreleased explicit locks.

Other cache operations are also stuck.

Let me know what other information I can provide.

 


> Cluster does not finish rebalancing after nodes leaving
> ---
>
> Key: IGNITE-4798
> URL: https://issues.apache.org/jira/browse/IGNITE-4798
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Kholodov
>
>  I managed to reproduce the stability issue we've been having in production 
> in a relatively sterile environment.
> The situation is:
> 1. Startup a cluster of 223 nodes.
> 2. Wait for everything to stabilize (took about 2 minutes).
> 3. Shut down 112 nodes.
> 4. Wait for everything to stabilize..
> Since that point, I can't connect client nodes to the cluster:
> 2017-02-15 23:13:16.396 WARN  o.a.i.i.p.c.GridCachePartitionExchangeManager 
> main ctx: actor: - Failed to wait for 
> initial partition map exchange. Possible reasons are:
>   ^-- Transactions in deadlock.
>   ^-- Long running transactions (ignore if this is the case).
>   ^-- Unreleased explicit locks.
> Other cache operations are also stuck.
>  





[jira] [Created] (IGNITE-4798) Cluster does not finish rebalancing after nodes leaving

2017-03-07 Thread Semen Boikov (JIRA)
Semen Boikov created IGNITE-4798:


 Summary: Cluster does not finish rebalancing after nodes leaving
 Key: IGNITE-4798
 URL: https://issues.apache.org/jira/browse/IGNITE-4798
 Project: Ignite
  Issue Type: Bug
Reporter: Denis Kholodov


   
Hi Valentin,

I managed to reproduce the stability issue we've been having in production in a 
relatively sterile environment.
The logs and stack traces are accessible here: 
https://drive.google.com/open?id=0B1YMrCiHZq1PMWJsblBYSXhaX1k

The situation is:
1. Startup a cluster of 223 nodes.
2. Wait for everything to stabilize (took about 2 minutes).
3. Shut down 112 nodes.
4. Wait for everything to stabilize..

Since that point, I can't connect client nodes to the cluster:
2017-02-15 23:13:16.396 WARN  o.a.i.i.p.c.GridCachePartitionExchangeManager 
main ctx: actor: - Failed to wait for 
initial partition map exchange. Possible reasons are:
  ^-- Transactions in deadlock.
  ^-- Long running transactions (ignore if this is the case).
  ^-- Unreleased explicit locks.

Other cache operations are also stuck.

Let me know what other information I can provide.

 





[jira] [Commented] (IGNITE-3018) Cache affinity calculation is slow with large nodes number

2017-03-07 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899422#comment-15899422
 ] 

Taras Ledkov commented on IGNITE-3018:
--

Tests 
[results|http://195.239.208.174/project.html?projectId=IgniteTests=projectOverview_IgniteTests=pull%2F1600%2Fhead]
 with the partition balancer. The partition balancer tries to make the 
partition distribution closer to an even distribution.

> Cache affinity calculation is slow with large nodes number
> --
>
> Key: IGNITE-3018
> URL: https://issues.apache.org/jira/browse/IGNITE-3018
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Yakov Zhdanov
> Fix For: 2.0
>
> Attachments: 003.png, 064.png, 100.png, 128.png, 200.png, 300.png, 
> 400.png, 500.png, 600.png
>
>
> With a large number of cache server nodes (> 200), RendezvousAffinityFunction 
> and FairAffinityFunction work pretty slowly.
> RendezvousAffinityFunction.assignPartitions can take hundreds of 
> milliseconds; for FairAffinityFunction it can take seconds.
> For RendezvousAffinityFunction most time is spent in MD5 hash calculation and 
> node list sorting. As an optimization, we can try to cache the {partition, 
> node} MD5 hash or try another hash function. Several other minor 
> optimizations are also possible (avoid unnecessary allocations, only one 
> thread-local 'get', etc).
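The proposed {partition, node} hash caching can be sketched with plain JDK classes (an illustration of the idea only; the real RendezvousAffinityFunction internals differ):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

// Caches the MD5 digest for each (partition, nodeId) pair so that repeated
// affinity recalculations skip the expensive hash computation.
class CachedPartitionHash {
    private final Map<String, byte[]> cache = new HashMap<>();

    byte[] hash(int partition, String nodeId) {
        String key = partition + ":" + nodeId;
        byte[] cached = cache.get(key);
        if (cached != null)
            return cached; // hit: no MD5 work on recalculation

        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                .digest(key.getBytes(StandardCharsets.UTF_8));
            cache.put(key, digest);
            return digest;
        }
        catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e);
        }
    }
}
```

Since cluster topology changes far less often than affinity is recalculated, the cache amortizes the hashing cost across recalculations.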





[jira] [Updated] (IGNITE-4797) Need to expose offheap memory allocated size metric for internal data structures

2017-03-07 Thread Andrey Gura (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Gura updated IGNITE-4797:

Labels: newbie  (was: )

> Need to expose offheap memory allocated size metric for internal data 
> structures
> 
>
> Key: IGNITE-4797
> URL: https://issues.apache.org/jira/browse/IGNITE-4797
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Andrey Gura
>  Labels: newbie
> Fix For: 2.0
>
>
> Offheap caches expose the offheap memory allocated size via 
> {{CacheMetricsMXBean.getOffHeapAllocatedSize()}}. But this metric doesn't 
> take into account offheap memory that is allocated for internal data 
> structures (GridUnsafeMap and the LRU eviction policy). 
> However, Ignite already collects this metric (see the 
> GridUnsafeMemory.systemAllocatedSize() method).
> We need to expose this metric via {{CacheMetricsMXBean}}.





[jira] [Created] (IGNITE-4797) Need to expose offheap memory allocated size metric for internal data structures

2017-03-07 Thread Andrey Gura (JIRA)
Andrey Gura created IGNITE-4797:
---

 Summary: Need to expose offheap memory allocated size metric for 
internal data structures
 Key: IGNITE-4797
 URL: https://issues.apache.org/jira/browse/IGNITE-4797
 Project: Ignite
  Issue Type: Improvement
Reporter: Andrey Gura
 Fix For: 2.0


Offheap caches expose the offheap memory allocated size via 
{{CacheMetricsMXBean.getOffHeapAllocatedSize()}}. But this metric doesn't take 
into account offheap memory that is allocated for internal data structures 
(GridUnsafeMap and the LRU eviction policy). 

However, Ignite already collects this metric (see the 
GridUnsafeMemory.systemAllocatedSize() method).

We need to expose this metric via {{CacheMetricsMXBean}}.
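How the pieces add up can be shown with a self-contained sketch (field names are hypothetical; only getOffHeapAllocatedSize() and GridUnsafeMemory.systemAllocatedSize() come from the ticket):

```java
// Illustrates combining entry storage with the internal data structures
// (map and eviction policy) into a single allocated-size metric.
class OffheapAllocated {
    private final long entriesBytes;  // what getOffHeapAllocatedSize() covers today
    private final long mapBytes;      // e.g. GridUnsafeMap internals
    private final long evictionBytes; // e.g. LRU eviction policy structures

    OffheapAllocated(long entriesBytes, long mapBytes, long evictionBytes) {
        this.entriesBytes = entriesBytes;
        this.mapBytes = mapBytes;
        this.evictionBytes = evictionBytes;
    }

    // The total the ticket asks to expose via CacheMetricsMXBean.
    long systemAllocatedSize() {
        return entriesBytes + mapBytes + evictionBytes;
    }
}
```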





[jira] [Assigned] (IGNITE-4758) Introduce MemoryPolicy configuration and adapt PageMemory concept to multiple memory policies

2017-03-07 Thread Sergey Chugunov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov reassigned IGNITE-4758:
---

Assignee: Alexey Kuznetsov  (was: Sergey Chugunov)

Alexey, please fix the compilation errors in the *VisorMemoryConfiguration* class.

> Introduce MemoryPolicy configuration and adapt PageMemory concept to multiple 
> memory policies
> -
>
> Key: IGNITE-4758
> URL: https://issues.apache.org/jira/browse/IGNITE-4758
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Sergey Chugunov
>Assignee: Alexey Kuznetsov
> Fix For: 2.0
>
>   Original Estimate: 336h
>  Time Spent: 32h
>  Remaining Estimate: 304h
>
> h2. Acceptance Criteria
> # For each named *MemoryPolicy* described in configuration new named 
> *PageMemory* is created.
> # Each cache on startup is assigned to corresponding *PageMemory* instance 
> which is stored in context of the cache.
> # A default *PageMemory* is used to store content of caches with no 
> *MemoryPolicy* configured.
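The acceptance criteria amount to a name-to-instance registry with a default fallback; a minimal sketch (all names hypothetical, with strings standing in for PageMemory instances):

```java
import java.util.HashMap;
import java.util.Map;

// Each named MemoryPolicy gets its own PageMemory; caches with no policy
// configured fall back to the default instance.
class PageMemoryRegistry {
    static final String DFLT_POLICY = "default";

    private final Map<String, String> pageMemories = new HashMap<>();

    PageMemoryRegistry() {
        pageMemories.put(DFLT_POLICY, "PageMemory[" + DFLT_POLICY + "]");
    }

    void addPolicy(String policyName) {
        pageMemories.put(policyName, "PageMemory[" + policyName + "]");
    }

    // Resolution performed at cache startup; the result would be stored
    // in the cache's context.
    String pageMemoryFor(String policyNameOrNull) {
        String name = policyNameOrNull == null ? DFLT_POLICY : policyNameOrNull;
        String pm = pageMemories.get(name);
        if (pm == null)
            throw new IllegalArgumentException("Unknown memory policy: " + name);
        return pm;
    }
}
```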





[jira] [Commented] (IGNITE-4795) Inherit TransactionException and update Javadoc

2017-03-07 Thread Ryabov Dmitrii (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15899223#comment-15899223
 ] 

Ryabov Dmitrii commented on IGNITE-4795:


[~ein], what about IgniteTxTimeoutCheckedException and the other checked 
transaction exceptions? Do we need a TransactionCheckedException marker?

> Inherit TransactionException and update Javadoc
> ---
>
> Key: IGNITE-4795
> URL: https://issues.apache.org/jira/browse/IGNITE-4795
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, SQL, website
>Affects Versions: 1.8
>Reporter: Alexandr Kuramshin
>Assignee: Ryabov Dmitrii
>  Labels: documentation
> Fix For: 2.0
>
>
> The transactional behaviour is not clearly explained in the Javadoc at this 
> point. Even after reading the website, some doubts remain.
> Proposal.
> 1. Create {{TransactionException}} as the marker of transactional methods and 
> inherit from it all the existing transaction exceptions such as 
> {{TransactionTimeoutException}}, {{TransactionRollbackException}}, 
> {{TransactionHeuristicException}}, {{TransactionOptimisticException}}, etc.
> 2. Update all the transactional methods ({{get}}, {{put}}, {{invoke}}, etc.) 
> to declare the base {{TransactionException}}. Document for all the 
> {{IgniteCache}} methods whether they are transactional or not, and add a 
> {{@see TransactionException}} tag.
> 3. Write extensive documentation in the header of {{TransactionException}} 
> explaining the behaviour of transactional and non-transactional methods.
> 4. Update the website and Javadoc to clarify that a {{put}} value is 
> cached within the transaction and affects subsequent {{get}} calls.
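The proposed hierarchy and the resulting catch pattern can be sketched like this (simplified, self-contained stand-ins, not the real Ignite classes):

```java
// Marker base for all transaction-related runtime exceptions (step 1):
// callers can catch it instead of every subtype.
class TransactionException extends RuntimeException {
    TransactionException(String msg) { super(msg); }
}

class TransactionTimeoutException extends TransactionException {
    TransactionTimeoutException(String msg) { super(msg); }
}

class TransactionRollbackException extends TransactionException {
    TransactionRollbackException(String msg) { super(msg); }
}

class TxDemo {
    // A transactional operation handled against the marker type (step 2):
    // one catch clause covers timeout, rollback, heuristic, etc.
    static String classify(Runnable op) {
        try {
            op.run();
            return "ok";
        }
        catch (TransactionException e) {
            return e.getClass().getSimpleName();
        }
    }
}
```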





[jira] [Assigned] (IGNITE-4795) Inherit TransactionException and update Javadoc

2017-03-07 Thread Ryabov Dmitrii (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryabov Dmitrii reassigned IGNITE-4795:
--

Assignee: Ryabov Dmitrii

> Inherit TransactionException and update Javadoc
> ---
>
> Key: IGNITE-4795
> URL: https://issues.apache.org/jira/browse/IGNITE-4795
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, SQL, website
>Affects Versions: 1.8
>Reporter: Alexandr Kuramshin
>Assignee: Ryabov Dmitrii
>  Labels: documentation
> Fix For: 2.0
>
>
> The transactional behaviour is not clearly explained in the Javadoc at this 
> point. Even after reading the website, some doubts remain.
> Proposal.
> 1. Create {{TransactionException}} as the marker of transactional methods and 
> inherit from it all the existing transaction exceptions such as 
> {{TransactionTimeoutException}}, {{TransactionRollbackException}}, 
> {{TransactionHeuristicException}}, {{TransactionOptimisticException}}, etc.
> 2. Update all the transactional methods ({{get}}, {{put}}, {{invoke}}, etc.) 
> to declare the base {{TransactionException}}. Document for all the 
> {{IgniteCache}} methods whether they are transactional or not, and add a 
> {{@see TransactionException}} tag.
> 3. Write extensive documentation in the header of {{TransactionException}} 
> explaining the behaviour of transactional and non-transactional methods.
> 4. Update the website and Javadoc to clarify that a {{put}} value is 
> cached within the transaction and affects subsequent {{get}} calls.





[jira] [Assigned] (IGNITE-4686) Web Console: Add grouping by company on Admin panel screen

2017-03-07 Thread Dmitriy Shabalin (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Shabalin reassigned IGNITE-4686:


Assignee: Alexey Kuznetsov  (was: Vica Abramova)

Added the ability to group columns in the list of registered users.

> Web Console: Add grouping by company on Admin panel screen
> --
>
> Key: IGNITE-4686
> URL: https://issues.apache.org/jira/browse/IGNITE-4686
> Project: Ignite
>  Issue Type: Task
>  Components: UI, wizards
>Affects Versions: 1.8
>Reporter: Alexey Kuznetsov
>Assignee: Alexey Kuznetsov
> Fix For: 2.0
>
>
> Add a tab with a table grouped by company.
> This table should have the following features:
> # Combo box with the list of companies, to show only the selected companies.
> # Filter by company name (for quick filtering).
> # Metric columns should show totals for the company.
> # Instead of the "Action" column, show a "Collapse/Expand" control.


