[jira] [Assigned] (IGNITE-1251) Develop a library of distributed data types for ML applications.
[ https://issues.apache.org/jira/browse/IGNITE-1251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladisav Jelisavcic reassigned IGNITE-1251: --- Assignee: Vladisav Jelisavcic (was: Nikita Ivanov) > Develop a library of distributed data types for ML applications. > > > Key: IGNITE-1251 > URL: https://issues.apache.org/jira/browse/IGNITE-1251 > Project: Ignite > Issue Type: New Feature > Components: data structures >Reporter: Nikita Ivanov >Assignee: Vladisav Jelisavcic > > Essentially, we want to make Ignite as friendly to ML applications as > possible. The first step here is to develop a set of basic (distributed) data > structures that can be used in implementing ML algorithms. > We should borrow most of the ideas from the great Apache Spark project: > https://spark.apache.org/docs/latest/mllib-data-types.html Our implementation > should be based on Ignite data grid / compute grid capabilities (instead of > the Spark RDD concept). > The implementation language should be Java (as well as making sure that these > Java APIs can be used relatively pain-free from other JVM-based languages > such as Scala and Groovy). > This has also been submitted to: > http://eecs.oregonstate.edu/capstone/submission/?page=allproposals -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-2788) Redis API for Ignite to work with data via the Redis protocol
[ https://issues.apache.org/jira/browse/IGNITE-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15198547#comment-15198547 ] Roman Shtykh commented on IGNITE-2788: -- Yes, I am thinking about taking it after I complete IGNITE-2730, which was requested by Kafka users. This task is pretty big and I also think we have to implement it iteratively. I will assign it to myself and if anyone else is interested, let's collaborate! > Redis API for Ignite to work with data via the Redis protocol > - > > Key: IGNITE-2788 > URL: https://issues.apache.org/jira/browse/IGNITE-2788 > Project: Ignite > Issue Type: New Feature >Reporter: Roman Shtykh > > Introduce a Redis API that works with the Redis protocol but uses the Ignite grid. > Needless to say, Redis is an extremely popular caching solution. Such an API > will enable smooth migration to Ignite. > As a first phase we can start with the most frequently used commands and enhance > the API gradually. > Redis commands: http://redis.io/commands -- This message was sent by Atlassian JIRA (v6.3.4#6332)
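For context on what "works with the Redis protocol" implies: a phase-one implementation would have to frame each client command per RESP (the Redis serialization protocol), where a command is an array of bulk strings. Below is a minimal, self-contained Java sketch of that framing; the class and method names are illustrative and are not part of Ignite or this ticket.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

/** Minimal RESP (REdis Serialization Protocol) command parser sketch. */
public class RespParser {
    /** Parses one RESP array of bulk strings, e.g. "*2\r\n$3\r\nGET\r\n$3\r\nfoo\r\n". */
    public static List<String> parseCommand(byte[] data) {
        String s = new String(data, StandardCharsets.UTF_8);
        String[] lines = s.split("\r\n");

        if (lines.length == 0 || lines[0].charAt(0) != '*')
            throw new IllegalArgumentException("Expected RESP array");

        int n = Integer.parseInt(lines[0].substring(1)); // number of arguments
        List<String> args = new ArrayList<>(n);

        int i = 1;
        for (int k = 0; k < n; k++) {
            if (lines[i].charAt(0) != '$')
                throw new IllegalArgumentException("Expected bulk string");
            args.add(lines[i + 1]); // payload follows the $<len> header line
            i += 2;
        }
        return args;
    }

    /** Encodes a RESP simple-string reply, e.g. "+OK\r\n". */
    public static byte[] simpleReply(String msg) {
        return ("+" + msg + "\r\n").getBytes(StandardCharsets.UTF_8);
    }
}
```

A server-side Ignite handler would then map a parsed `["GET", "foo"]` to a cache `get("foo")` and `["SET", "foo", "bar"]` to a cache `put`, which is why starting from the most frequently used commands is a natural first phase.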
[jira] [Commented] (IGNITE-2791) Continuous query listener is not notified during concurrent key put.
[ https://issues.apache.org/jira/browse/IGNITE-2791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201041#comment-15201041 ] Semen Boikov commented on IGNITE-2791: -- Nikolay, Your fix doesn't look correct to me: each node calculates primary partitions using the locally ready affinity version, and this does not guarantee consistency. All nodes should use the same topology version (the topology version of the discovery event). I guess we can't wait for affinity in the discovery event listener, so I think the correct fix is to collect per-node counter maps, and then on the client node wait for affinity and do the filtering. Also, please add a test case to GridCacheContinuousQueryConcurrentTest which will execute the same scenario on a changing topology (both server and client nodes should join/leave). Thanks > Continuous query listener is not notified during concurrent key put. > > > Key: IGNITE-2791 > URL: https://issues.apache.org/jira/browse/IGNITE-2791 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 1.5.0.final >Reporter: Vladimir Ozerov >Assignee: Nikolay Tikhonov >Priority: Critical > Fix For: 1.6 > > Attachments: CacheListenersKillingMe3Main.java > > > Attached the code reproducing the problem. What is evident from the log is > that the filter was invoked, but the listener was not. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
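For readers unfamiliar with the API under discussion, the filter/listener pair mentioned in the bug belongs to Ignite's public continuous-query API. A minimal usage sketch follows; the cache name and value types are illustrative, and a running Ignite node plus the Ignite libraries on the classpath are assumed, so this is a sketch rather than a standalone program.

```java
import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheEntryEventSerializableFilter;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class CqSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("test");

            ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

            // Remote filter: evaluated on the node that owns the updated key.
            qry.setRemoteFilter((CacheEntryEventSerializableFilter<Integer, String>)evt -> true);

            // Local listener: expected to fire for every event that passed the filter.
            qry.setLocalListener(evts -> {
                for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
                    System.out.println("Updated: " + e.getKey() + " -> " + e.getValue());
            });

            try (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {
                // The reported bug: under concurrent puts the filter may be
                // invoked while the listener is never notified.
                cache.put(1, "v1");
            }
        }
    }
}
```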
[jira] [Commented] (IGNITE-813) Apache Flink Integration -- data streaming connector
[ https://issues.apache.org/jira/browse/IGNITE-813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15197891#comment-15197891 ] Saikat Maitra commented on IGNITE-813: -- [~roman_s][~avinogradov] Hi Roman, Anton Thank you for reviewing the code and sharing feedback. I will make the changes and update on this ticket. Regards Saikat > Apache Flink Integration -- data streaming connector > > > Key: IGNITE-813 > URL: https://issues.apache.org/jira/browse/IGNITE-813 > Project: Ignite > Issue Type: New Feature > Components: streaming >Reporter: Suminda Dharmasena >Assignee: Saikat Maitra > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-2835) BinaryObjectOffHeapImpl leaked to public code
[ https://issues.apache.org/jira/browse/IGNITE-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201916#comment-15201916 ] Artem Shutak commented on IGNITE-2835: -- Made a potential fix. Waiting for TC results. > BinaryObjectOffHeapImpl leaked to public code > - > > Key: IGNITE-2835 > URL: https://issues.apache.org/jira/browse/IGNITE-2835 > Project: Ignite > Issue Type: Bug >Affects Versions: 1.5.0.final >Reporter: Denis Magda >Assignee: Artem Shutak >Priority: Critical > Labels: community, important > Fix For: 1.6 > > Attachments: BinaryObjectOffHeapIssue.java > > > To my knowledge {{BinaryObjectOffHeapImpl}} is considered to be used under > some internal lock only to prevent possible offheap pointer movement. > However, it seems that we made it available to public code. If you start a > partitioned cache in {{OFFHEAP_TIERED}} mode, get a {{BinaryObject}} from the > cache inside of a TX and put the same object back, we will get an exception like > the one below > {noformat} > [15:00:00,892][WARN ][main][GridNearTxLocal] Set transaction invalidation > flag to true due to error [tx=GridNearTxLocal [mappings=IgniteTxMappingsImpl > [], nearLocallyMapped=false, colocatedLocallyMapped=true, > needCheckBackup=null, hasRemoteLocks=false, mappings=IgniteTxMappingsImpl [], > super=GridDhtTxLocalAdapter [nearOnOriginatingNode=false, nearNodes=[], > dhtNodes=[], explicitLock=false, super=IgniteTxLocalAdapter > [completedBase=null, sndTransformedVals=false, depEnabled=false, > txState=IgniteTxStateImpl [activeCacheIds=GridLongList [idx=1, > arr=[-1206548976]], txMap={IgniteTxKey [key=KeyCacheObjectImpl [val=0, > hasValBytes=true], cacheId=-1206548976]=IgniteTxEntry [key=KeyCacheObjectImpl > [val=0, hasValBytes=true], cacheId=-1206548976, txKey=IgniteTxKey > [key=KeyCacheObjectImpl [val=0, hasValBytes=true], cacheId=-1206548976], > val=[op=UPDATE, val=SomeType [idHash=1337835760, hash=0, field2=name_0, > field1=0]], prevVal=[op=UPDATE, val=SomeType 
[idHash=1337835760, hash=0, > field2=name_0, field1=0]], entryProcessorsCol=null, ttl=-1, > conflictExpireTime=-1, conflictVer=null, explicitVer=null, > dhtVer=GridCacheVersion [topVer=69523200, nodeOrderDrId=1, > globalTime=1458043200871, order=1458043167489], filters=[], > filtersPassed=false, filtersSet=true, entry=GridDhtColocatedCacheEntry > [super=GridDhtCacheEntry [rdrs=[], locPart=GridDhtLocalPartition [id=0, > mapPubSize=0, rmvQueue=GridCircularBuffer [sizeMask=255, idxGen=1], cntr=1, > state=OWNING, reservations=0, empty=true, createTime=03/15/2016 15:00:00, > mapPubSize=0], super=GridDistributedCacheEntry [super=GridCacheMapEntry > [key=KeyCacheObjectImpl [val=0, hasValBytes=true], val=null, > startVer=1458043167488, ver=GridCacheVersion [topVer=69523200, > nodeOrderDrId=1, globalTime=1458043200890, order=1458043167490], > hash=-1484017934, extras=GridCacheObsoleteEntryExtras > [obsoleteVer=GridCacheVersion [topVer=69523200, nodeOrderDrId=1, > globalTime=1458043200890, order=1458043167491]], flags=7, prepared=false, > locked=true, nodeId=993f5733-b014-4a5b-a6d1-934aeec9e9f5, locMapped=false, > expiryPlc=null, transferExpiryPlc=false, flags=2, partUpdateCntr=0, > serReadVer=null, xidVer=GridCacheVersion [topVer=69523200, nodeOrderDrId=1, > globalTime=1458043200852, order=1458043167487]]}], super=IgniteTxAdapter > [xidVer=GridCacheVersion [topVer=69523200, nodeOrderDrId=1, > globalTime=1458043200852, order=1458043167487], writeVer=GridCacheVersion > [topVer=69523200, nodeOrderDrId=1, globalTime=1458043200871, > order=1458043167489], implicit=false, loc=true, threadId=1, > startTime=1458043200850, nodeId=993f5733-b014-4a5b-a6d1-934aeec9e9f5, > startVer=GridCacheVersion [topVer=69523200, nodeOrderDrId=1, > globalTime=1458043200852, order=1458043167487], endVer=null, > isolation=REPEATABLE_READ, concurrency=PESSIMISTIC, timeout=0, > sysInvalidate=true, sys=false, plc=2, commitVer=GridCacheVersion > [topVer=69523200, nodeOrderDrId=1, globalTime=1458043200852, 
> order=1458043167487], finalizing=NONE, preparing=false, invalidParts=null, > state=UNKNOWN, timedOut=false, topVer=AffinityTopologyVersion [topVer=1, > minorTopVer=1], duration=40ms, onePhaseCommit=true], size=1]]], err=class > o.a.i.i.transactions.IgniteTxHeuristicCheckedException: Failed to locally > write to cache (all transaction entries will be invalidated, however there > was a window when entries for this transaction were visible to others): > GridNearTxLocal [mappings=IgniteTxMappingsImpl [], nearLocallyMapped=false, > colocatedLocallyMapped=true, needCheckBackup=null, hasRemoteLocks=false, > mappings=IgniteTxMappingsImpl [], super=GridDhtTxLocalAdapter > [nearOnOriginatingNode=false, nearNodes=[], dhtNodes=[]
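The scenario from the description above can be sketched as follows. This is a hypothetical, minimal reproducer in the spirit of the attached BinaryObjectOffHeapIssue.java (the actual attachment is not shown here): the cache name, value type, and field names are illustrative, and a running Ignite node with the Ignite libraries on the classpath is assumed.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMemoryMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

public class OffHeapBinarySketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, Object> ccfg = new CacheConfiguration<>("test");
            ccfg.setCacheMode(CacheMode.PARTITIONED);
            ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
            ccfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED); // mode from the description

            IgniteCache<Integer, BinaryObject> cache =
                ignite.getOrCreateCache(ccfg).withKeepBinary();

            cache.put(0, ignite.binary().builder("SomeType").setField("field1", 0).build());

            try (Transaction tx = ignite.transactions().txStart()) {
                BinaryObject val = cache.get(0); // may leak a BinaryObjectOffHeapImpl
                cache.put(0, val);               // putting it back triggers the exception above
                tx.commit();
            }
        }
    }
}
```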
[jira] [Created] (IGNITE-2858) Refactoring of creation of elements on domains page to using of mixins
Vasiliy Sisko created IGNITE-2858: - Summary: Refactoring of creation of elements on domains page to using of mixins Key: IGNITE-2858 URL: https://issues.apache.org/jira/browse/IGNITE-2858 Project: Ignite Issue Type: Sub-task Components: wizards Affects Versions: 1.6 Reporter: Vasiliy Sisko Assignee: Vasiliy Sisko -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-2801) Coordinator floods network with partitions' full map messages
[ https://issues.apache.org/jira/browse/IGNITE-2801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denis Magda updated IGNITE-2801: Assignee: (was: Vladimir Ozerov) > Coordinator floods network with partitions' full map messages > - > > Key: IGNITE-2801 > URL: https://issues.apache.org/jira/browse/IGNITE-2801 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 1.5.0.final >Reporter: Denis Magda >Priority: Critical > Labels: community, important > Fix For: 1.6 > > Attachments: basic_node.nps, basic_node.png, coordinator.nps, > coordinator.png > > > It is detected that the more machines the cluster has and the more > caches are started, the more outgoing traffic is produced by the > coordinator node. > As an example, in the current deployment > - 30 nodes; > - 67 caches; > - caches are empty and the cluster is not used at all (idle). > the coordinator constantly uses 300 Mbit/s of outgoing traffic. In contrast, > each other node shows a constant 10 Mbit/s of incoming traffic. > Most likely the reason is that the coordinator continuously sends the full partition > map for all the caches to all the nodes. This shouldn't happen. > Need to debug the cause of the issue and fix it. > Attached are snapshots taken from the coordinator and one of the cluster's nodes. > Hopefully they will help. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (IGNITE-2835) BinaryObjectOffHeapImpl leaked to public code
[ https://issues.apache.org/jira/browse/IGNITE-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Artem Shutak reassigned IGNITE-2835: Assignee: Artem Shutak > BinaryObjectOffHeapImpl leaked to public code > - > > Key: IGNITE-2835 > URL: https://issues.apache.org/jira/browse/IGNITE-2835 > Project: Ignite > Issue Type: Bug >Affects Versions: 1.5.0.final >Reporter: Denis Magda >Assignee: Artem Shutak >Priority: Critical > Labels: community, important > Fix For: 1.6 > > Attachments: BinaryObjectOffHeapIssue.java > > > To my knowledge {{BinaryObjectOffHeapImpl}} is considered to be used under > some internal lock only to prevent possible offheap pointer movement. > However, it seems that we made it available to public code. If you start a > partitioned cache in {{OFFHEAP_TIERED}} mode, get a {{BinaryObject}} from the > cache inside of a TX and put the same object back, we will get an exception like > the one below > {noformat} > [15:00:00,892][WARN ][main][GridNearTxLocal] Set transaction invalidation > flag to true due to error [tx=GridNearTxLocal [mappings=IgniteTxMappingsImpl > [], nearLocallyMapped=false, colocatedLocallyMapped=true, > needCheckBackup=null, hasRemoteLocks=false, mappings=IgniteTxMappingsImpl [], > super=GridDhtTxLocalAdapter [nearOnOriginatingNode=false, nearNodes=[], > dhtNodes=[], explicitLock=false, super=IgniteTxLocalAdapter > [completedBase=null, sndTransformedVals=false, depEnabled=false, > txState=IgniteTxStateImpl [activeCacheIds=GridLongList [idx=1, > arr=[-1206548976]], txMap={IgniteTxKey [key=KeyCacheObjectImpl [val=0, > hasValBytes=true], cacheId=-1206548976]=IgniteTxEntry [key=KeyCacheObjectImpl > [val=0, hasValBytes=true], cacheId=-1206548976, txKey=IgniteTxKey > [key=KeyCacheObjectImpl [val=0, hasValBytes=true], cacheId=-1206548976], > val=[op=UPDATE, val=SomeType [idHash=1337835760, hash=0, field2=name_0, > field1=0]], prevVal=[op=UPDATE, val=SomeType [idHash=1337835760, hash=0, > field2=name_0, field1=0]], 
entryProcessorsCol=null, ttl=-1, > conflictExpireTime=-1, conflictVer=null, explicitVer=null, > dhtVer=GridCacheVersion [topVer=69523200, nodeOrderDrId=1, > globalTime=1458043200871, order=1458043167489], filters=[], > filtersPassed=false, filtersSet=true, entry=GridDhtColocatedCacheEntry > [super=GridDhtCacheEntry [rdrs=[], locPart=GridDhtLocalPartition [id=0, > mapPubSize=0, rmvQueue=GridCircularBuffer [sizeMask=255, idxGen=1], cntr=1, > state=OWNING, reservations=0, empty=true, createTime=03/15/2016 15:00:00, > mapPubSize=0], super=GridDistributedCacheEntry [super=GridCacheMapEntry > [key=KeyCacheObjectImpl [val=0, hasValBytes=true], val=null, > startVer=1458043167488, ver=GridCacheVersion [topVer=69523200, > nodeOrderDrId=1, globalTime=1458043200890, order=1458043167490], > hash=-1484017934, extras=GridCacheObsoleteEntryExtras > [obsoleteVer=GridCacheVersion [topVer=69523200, nodeOrderDrId=1, > globalTime=1458043200890, order=1458043167491]], flags=7, prepared=false, > locked=true, nodeId=993f5733-b014-4a5b-a6d1-934aeec9e9f5, locMapped=false, > expiryPlc=null, transferExpiryPlc=false, flags=2, partUpdateCntr=0, > serReadVer=null, xidVer=GridCacheVersion [topVer=69523200, nodeOrderDrId=1, > globalTime=1458043200852, order=1458043167487]]}], super=IgniteTxAdapter > [xidVer=GridCacheVersion [topVer=69523200, nodeOrderDrId=1, > globalTime=1458043200852, order=1458043167487], writeVer=GridCacheVersion > [topVer=69523200, nodeOrderDrId=1, globalTime=1458043200871, > order=1458043167489], implicit=false, loc=true, threadId=1, > startTime=1458043200850, nodeId=993f5733-b014-4a5b-a6d1-934aeec9e9f5, > startVer=GridCacheVersion [topVer=69523200, nodeOrderDrId=1, > globalTime=1458043200852, order=1458043167487], endVer=null, > isolation=REPEATABLE_READ, concurrency=PESSIMISTIC, timeout=0, > sysInvalidate=true, sys=false, plc=2, commitVer=GridCacheVersion > [topVer=69523200, nodeOrderDrId=1, globalTime=1458043200852, > order=1458043167487], finalizing=NONE, preparing=false, 
invalidParts=null, > state=UNKNOWN, timedOut=false, topVer=AffinityTopologyVersion [topVer=1, > minorTopVer=1], duration=40ms, onePhaseCommit=true], size=1]]], err=class > o.a.i.i.transactions.IgniteTxHeuristicCheckedException: Failed to locally > write to cache (all transaction entries will be invalidated, however there > was a window when entries for this transaction were visible to others): > GridNearTxLocal [mappings=IgniteTxMappingsImpl [], nearLocallyMapped=false, > colocatedLocallyMapped=true, needCheckBackup=null, hasRemoteLocks=false, > mappings=IgniteTxMappingsImpl [], super=GridDhtTxLocalAdapter > [nearOnOriginatingNode=false, nearNodes=[], dhtNodes=[], explicitLock=false, > super=IgniteTxLocalAdapter [completedBase=nul
[jira] [Updated] (IGNITE-2747) ODBC: Time cast returns wrong results for Linux.
[ https://issues.apache.org/jira/browse/IGNITE-2747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Sapego updated IGNITE-2747: Assignee: Vladimir Ozerov (was: Igor Sapego) > ODBC: Time cast returns wrong results for Linux. > > > Key: IGNITE-2747 > URL: https://issues.apache.org/jira/browse/IGNITE-2747 > Project: Ignite > Issue Type: Sub-task > Components: odbc >Affects Versions: 1.5.0.final >Reporter: Igor Sapego >Assignee: Vladimir Ozerov >Priority: Critical > Fix For: 1.6 > > > The current time cast does not work on Linux because the {{timezone}} variable does > not return the right time offset there. > Also, {{gmtime}} is not thread-safe, so it seems that we should use some > platform-specific functions for time-conversion operations. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-2849) BinaryObjectBuilder doesn't properly check metadata
[ https://issues.apache.org/jira/browse/IGNITE-2849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201490#comment-15201490 ] Denis Magda commented on IGNITE-2849: - Fixed according to the description. Checked with TC. Ready for review. > BinaryObjectBuilder doesn't properly check metadata > --- > > Key: IGNITE-2849 > URL: https://issues.apache.org/jira/browse/IGNITE-2849 > Project: Ignite > Issue Type: Bug >Reporter: Denis Magda >Assignee: Denis Magda >Priority: Critical > Labels: community, important > > There are several cases when {{BinaryObjectBuilder}} doesn't properly check > field metadata when the {{build}} method is called. > 1) Both {{builder.setField("name", null).build()}} and > {{builder.setField("name", new Date()).build()}} won't check metadata, > allowing the object to be serialized; > 2) Metadata is not checked at all if a new BinaryObject is assembled from a > previous one: {{binaries.builder(someBinaryObject).setField().build()}} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
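The two cases from the description can be sketched as below. The type and field names are illustrative, and a running Ignite node with the Ignite libraries on the classpath is assumed, so this is a sketch of the reported behavior rather than a standalone test.

```java
import java.util.Date;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

public class BuilderMetadataCases {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Case 1: "name" is first built with a null value and then with a
            // Date; neither build() triggers a field-type metadata check.
            BinaryObject o1 = ignite.binary().builder("SomeType")
                .setField("name", (Object)null).build();
            BinaryObject o2 = ignite.binary().builder("SomeType")
                .setField("name", new Date()).build();

            // Case 2: metadata is not checked at all when a new object is
            // assembled from an existing BinaryObject.
            BinaryObject o3 = ignite.binary().builder(o1)
                .setField("name", new Date()).build();
        }
    }
}
```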
[jira] [Updated] (IGNITE-2809) Optimize IGFS performance.
[ https://issues.apache.org/jira/browse/IGNITE-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-2809: Assignee: (was: Ivan Veselovsky) > Optimize IGFS performance. > -- > > Key: IGNITE-2809 > URL: https://issues.apache.org/jira/browse/IGNITE-2809 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.5.0.final >Reporter: Vladimir Ozerov >Priority: Critical > Fix For: 1.7 > > > This is the umbrella ticket to host proposed IGFS performance improvements. > Currently IGFS suffers from several inefficiencies. It moves lots of > unnecessary data over the network. It has points of high contention: the root and > trash directories. It has less than efficient default cache properties, etc. > We need to take care of these. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-1837) Rebalancing on a big cluster (30 nodes and more)
[ https://issues.apache.org/jira/browse/IGNITE-1837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denis Magda updated IGNITE-1837: Summary: Rebalancing on a big cluster (30 nodes and more) (was: Rebalancing on a big cluster (64 nodes and more)) > Rebalancing on a big cluster (30 nodes and more) > > > Key: IGNITE-1837 > URL: https://issues.apache.org/jira/browse/IGNITE-1837 > Project: Ignite > Issue Type: Bug > Components: general >Affects Versions: ignite-1.4 >Reporter: Denis Magda >Assignee: Alexey Goncharuk > Fix For: 1.6 > > > It seems that Ignite has different rebalancing-related issues that appear > when a big cluster is started. > Under a big cluster I mean: > - cluster of 64 server nodes; > - cluster of 64 server and 64 client nodes. > The issues can be divided into three main use cases. > 1) Slow rebalancing on start. > - If the number of partitions for some cache is set to a value bigger than the default one > (3200, 6400, etc.), then rebalancing of such caches may take several > minutes. The caches are empty at that time. In addition, as a part of this > issue let's document that the number of partitions can't exceed some value. > - An exchange message on a NODE_JOINED event times out for a long time. > Discussed here: > http://apache-ignite-users.70518.x6.nabble.com/Help-with-tuning-for-larger-clusters-td1692.html#a1813 > 2) Slow rebalancing on client node shutdown. > If a significant number of client nodes are stopped at the same time, then again > for some reason the rebalancing will take several minutes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
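For reference, the partition count mentioned in the ticket is controlled through the cache's affinity function. A minimal configuration sketch follows; the cache name is illustrative, and the partition values are the ones from the description. This is a config fragment, not a full program.

```java
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class PartitionCountConfig {
    /** Returns a cache configuration with an explicit partition count (e.g. 3200 or 6400). */
    public static CacheConfiguration<Integer, String> cacheWithPartitions(int parts) {
        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

        // false = do not exclude same-host neighbors from backups;
        // the second argument is the total number of cache partitions.
        ccfg.setAffinity(new RendezvousAffinityFunction(false, parts));

        return ccfg;
    }
}
```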
[jira] [Updated] (IGNITE-1743) IGFS: Use async cache put instead of block/ack messages on data write
[ https://issues.apache.org/jira/browse/IGNITE-1743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-1743: Issue Type: Task (was: Sub-task) Parent: (was: IGNITE-1697) > IGFS: Use async cache put instead of block/ack messages on data write > - > > Key: IGNITE-1743 > URL: https://issues.apache.org/jira/browse/IGNITE-1743 > Project: Ignite > Issue Type: Task >Reporter: Ivan Veselovsky >Assignee: Ivan Veselovsky > Fix For: 1.7 > > > Item "1)" from IGNITE-1697 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (IGNITE-2820) IGFS: Ensure that all participating IDs are locked right after TX start.
[ https://issues.apache.org/jira/browse/IGNITE-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov reassigned IGNITE-2820: --- Assignee: Vladimir Ozerov > IGFS: Ensure that all participating IDs are locked right after TX start. > > > Key: IGNITE-2820 > URL: https://issues.apache.org/jira/browse/IGNITE-2820 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.5.0.final >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Critical > Fix For: 1.7 > > > *Problem* > Sometimes we have to create new entries during some operation on metadata. If > we do not lock this ID along with the other participating IDs right after TX > start, a subsequent cache operation on this ID will result in network calls to > lock this key on other nodes. > *Solution* > Create the potentially new key before entering the TX and lock it with the other keys. > This should decrease the number of network calls. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-2835) BinaryObjectOffHeapImpl leaked to public code
[ https://issues.apache.org/jira/browse/IGNITE-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201914#comment-15201914 ] ASF GitHub Bot commented on IGNITE-2835: GitHub user ashutakGG opened a pull request: https://github.com/apache/ignite/pull/567 IGNITE-2835 Binary object off heap impl leak on public API Fix. You can merge this pull request into a Git repository by running: $ git pull https://github.com/ashutakGG/incubator-ignite ignite-2835-BinaryObjectOffHeapImpl Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/567.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #567 commit 2e64d0d7cc51552fffc231cbc850cd615076fb85 Author: vozerov-gridgain Date: 2015-12-29T06:31:58Z IGNITE-2258: IGFS: now default path modes could be optionally disabled using FileSystemConfiguration.isInitializeDefaultPathModes() property. commit 4cd3b3dc2f1fa0f1a9cceb6bf544dd8fb505d7f5 Author: vozerov-gridgain Date: 2015-12-29T09:52:00Z IGNITE-2258: Fixed type on getter/setter. commit 5d58fcbf40fdb9114e4cbb32b72dd9bce7fa38ca Author: iveselovskiy Date: 2016-01-04T06:47:28Z IGNITE-2308: Fixed HadoopClassLoader dependency resolution. This closes #391. commit 83a19179cee2bb15adc36c2265dd0a3c794b60bb Author: vozerov-gridgain Date: 2016-01-04T08:14:58Z IGNITE-2218: Fixed a problem with native Hadoop libraries load. This closes #378. commit 1d7fb5702fa33cf395e797161f3a86a9600921a7 Author: vozerov-gridgain Date: 2016-01-05T06:59:31Z IGNITE-2206: Hadoop file system creation is now abstracted out using factory interface. commit a12ec7d08573d5396654a5ba05bb7d873e4c2677 Author: Ignite Teamcity Date: 2016-01-06T10:50:48Z 1.5.2 commit 090a5de6a930c10a3a57a6e28c486fe5c87e028d Author: vozerov-gridgain Date: 2015-12-29T12:50:39Z Minor fix. 
commit c786820dda7f7cd1849c5593ac24ca9072945887 Author: vozerov-gridgain Date: 2016-01-07T13:48:14Z IgniteHadoopIgfsSecondaryFileSystem.usrName field is renamed to "userName" to preserve backward compatibility. commit 6ab4ce246316442fa4295f9941c372fea70c24ef Author: vozerov-gridgain Date: 2016-01-08T06:23:55Z IGNITE-2342: Set correct classlader before calling FileSystem.get(). commit 077ab1b3a77fdb1c2c2fd6360fc5b60fda6c50c3 Author: vozerov-gridgain Date: 2016-01-08T07:17:45Z IGNITE-2341: Improved warning message when BinaryMarshaller cannot be used. Also it is not shown when "org.apache.ignite" classes is in described situation. commit 86c4816edfd0e983014f136ffc92cde28ec56cca Author: vozerov-gridgain Date: 2016-01-08T07:26:03Z IGNITE-2340: Improved error thrown when PROXY mode exists, but secondary file system is not IgniteHadoopIgfsSecondaryFileSystem. commit fc48a8429a84ef6c87ed3225a218d7d3ae617e14 Author: vozerov-gridgain Date: 2016-01-08T07:48:42Z Merge branch 'ignite-1.5.2' into ignite-1.5.3 commit 86740cefe212ed0f506d81056dd8e76de9a31e4f Author: Ignite Teamcity Date: 2016-01-08T09:32:11Z 1.5.3-SNAPSHOT commit 92229d2a6c6ef86772a62cb52b3aa07a55c99d89 Author: sboikov Date: 2016-01-13T05:56:34Z ignite-2359 Added locking for files used by MarshallerContextImpl. (cherry picked from commit 1d8c4e2) commit 2e4ce585d5f54502c6511d3425b1cd5fbf0a7f4f Author: Ignite Teamcity Date: 2016-01-13T10:37:33Z 1.5.4-SNAPSHOT commit 6e5f9f0c7d4c86773b1f0cd5c5a673acb58c86c2 Author: Denis Magda Date: 2016-01-13T11:42:27Z Changed year to 2016 in Copyrights commit 02dbcfd8ed2701a4f415c8871d0b8fd08bfa0583 Author: Alexey Goncharuk Date: 2016-01-13T13:47:32Z IGNITE-2365 - Notify policy if swap or offheap is enabled and rebalanced entry was not preloaded. IGNITE-2099 - Fixing custom collections. 
This closes #396 commit 86c2ba2a601e82b824cf17422683e5398a4d8c7d Author: sboikov Date: 2016-01-13T15:40:08Z ignite-2350 Pass update notifier flag in discovery data (all cluster nodes will have the same notifier status as first cluster node) (cherry picked from commit 7175a42) commit e1a494df400fc37ca04e8d88d1cf20bca02607b4 Author: sboikov Date: 2016-01-14T11:16:33Z Renamed fields to change fields write order (to preserve backward compatibility). (cherry picked from commit 2a4adf5) commit 09f978234b6062afa1e1658d5a6439365a856aca Author: sboikov Date: 2016-01-14T11:42:44Z Merge remote-tracking branch 'origin/ignite-1.5.4' into ignite-1.5.4 commit 30043e7892d0b52dc851ce5ec79c7eb3b7cc44fb Author: Denis Magda Date: 2016-01-14T13:02:50Z Added release notes commit cc3db35925698f1670a8bf1c6a1830c0c9b51290 Author: vershov Date: 2016-01-14T14:21:56Z IGNITE-2032 Unwind undeploys in preloader - Fixes #369. Signed-off-by: Alexey Goncharuk commit
[jira] [Commented] (IGNITE-2797) Prepare and finish future never time out
[ https://issues.apache.org/jira/browse/IGNITE-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201441#comment-15201441 ] Andrey Gura commented on IGNITE-2797: - Found one more place where a lock can be created without a timeout despite the transaction having a timeout. It is the {{IgniteTxLocalAdapter.putAsync0()}} method. Maybe we should throw {{IgniteTxTimeoutCheckedException}} from all places where the {{remainingTime()}} or {{lockTimeout()}} methods are invoked. > Prepare and finish future never time out > > > Key: IGNITE-2797 > URL: https://issues.apache.org/jira/browse/IGNITE-2797 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 1.5.0.final >Reporter: Valentin Kulichenko >Priority: Blocker > Labels: community, customer, important > Fix For: 1.6 > > Attachments: TxTest2.java > > > Even if a transaction timeout is configured, the transaction will not time out if > it's already in the prepare state. It will be shown in the log as a pending transaction > and can cause the whole cluster to hang. > We need to add a mechanism that will properly time out prepare and (if > possible) finish futures. > Also we can create an event that will be fired if there is a transaction > pending for a long time, showing which nodes we are waiting for responses from. > This will allow the user to recover by stopping only these nodes instead of > restarting the whole cluster. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
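For context, the timeout the ticket refers to is the one supplied when the transaction is started. A minimal usage sketch follows; the cache name and the 5-second value are illustrative, and a running Ignite node with the Ignite libraries on the classpath is assumed.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class TxTimeoutSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Start a transaction with a 5 s timeout (last argument 0 means
            // the expected transaction size is unknown).
            try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC,
                TransactionIsolation.REPEATABLE_READ,
                5_000, 0)) {

                ignite.getOrCreateCache("test").put(1, "v1");

                // The reported problem: if the tx has already reached the
                // prepare phase, the configured timeout is never enforced.
                tx.commit();
            }
        }
    }
}
```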
[jira] [Resolved] (IGNITE-2816) IGFS: IgfsMetaManager should not use "put" to update parent listing.
[ https://issues.apache.org/jira/browse/IGNITE-2816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov resolved IGNITE-2816. - Resolution: Duplicate > IGFS: IgfsMetaManager should not use "put" to update parent listing. > > > Key: IGNITE-2816 > URL: https://issues.apache.org/jira/browse/IGNITE-2816 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.5.0.final >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Critical > Fix For: 1.7 > > > A lightweight entry processor must be used instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (IGNITE-2797) Prepare and finish future never time out
[ https://issues.apache.org/jira/browse/IGNITE-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201441#comment-15201441 ] Andrey Gura edited comment on IGNITE-2797 at 3/18/16 1:06 PM: -- Found one more place where a lock can be created without a timeout despite the transaction having a timeout. It is the {{IgniteTxLocalAdapter.putAsync0()}} method. Fixed. Waiting for TC. Maybe we should throw {{IgniteTxTimeoutCheckedException}} from all places where the {{remainingTime()}} or {{lockTimeout()}} methods are invoked. was (Author: agura): Found one more place where a lock can be created without a timeout despite the transaction having a timeout. It is the {{IgniteTxLocalAdapter.putAsync0()}} method. Maybe we should throw {{IgniteTxTimeoutCheckedException}} from all places where the {{remainingTime()}} or {{lockTimeout()}} methods are invoked. > Prepare and finish future never time out > > > Key: IGNITE-2797 > URL: https://issues.apache.org/jira/browse/IGNITE-2797 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 1.5.0.final >Reporter: Valentin Kulichenko >Priority: Blocker > Labels: community, customer, important > Fix For: 1.6 > > Attachments: TxTest2.java > > > Even if a transaction timeout is configured, the transaction will not time out if > it's already in the prepare state. It will be shown in the log as a pending transaction > and can cause the whole cluster to hang. > We need to add a mechanism that will properly time out prepare and (if > possible) finish futures. > Also we can create an event that will be fired if there is a transaction > pending for a long time, showing which nodes we are waiting for responses from. > This will allow the user to recover by stopping only these nodes instead of > restarting the whole cluster. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (IGNITE-2788) Redis API for Ignite to work with data via the Redis protocol
[ https://issues.apache.org/jira/browse/IGNITE-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Shtykh reassigned IGNITE-2788: Assignee: Roman Shtykh > Redis API for Ignite to work with data via the Redis protocol > - > > Key: IGNITE-2788 > URL: https://issues.apache.org/jira/browse/IGNITE-2788 > Project: Ignite > Issue Type: New Feature >Reporter: Roman Shtykh >Assignee: Roman Shtykh > > Introduce a Redis API that works with the Redis protocol but uses the Ignite grid. > Needless to say, Redis is an extremely popular caching solution. Such an API > will enable smooth migration to Ignite. > As a first phase we can start with the most frequently used commands and enhance > the API gradually. > Redis commands: http://redis.io/commands -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (IGNITE-2557) ODBC: Add integrity tests.
[ https://issues.apache.org/jira/browse/IGNITE-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov closed IGNITE-2557. --- > ODBC: Add integrity tests. > -- > > Key: IGNITE-2557 > URL: https://issues.apache.org/jira/browse/IGNITE-2557 > Project: Ignite > Issue Type: Sub-task > Components: odbc >Affects Versions: 1.5.0.final >Reporter: Igor Sapego >Assignee: Vladimir Ozerov > Fix For: 1.6 > > > We need to add integrity tests that work through the system API. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-2847) Failed atomic-offheap-invoke-retry load consistency test
[ https://issues.apache.org/jira/browse/IGNITE-2847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15199287#comment-15199287 ] Ilya Suntsov commented on IGNITE-2847: -- Today I got the same exceptions during the test with 1 backup. I attached logs to the ticket. > Failed atomic-offheap-invoke-retry load consistency test > > > Key: IGNITE-2847 > URL: https://issues.apache.org/jira/browse/IGNITE-2847 > Project: Ignite > Issue Type: Bug > Components: general >Affects Versions: 1.6 > Environment: Yardstick driver's host: > Ubuntu 14.04.3 LTS > Yardstick server's hosts: Ubuntu 14.04.3 LTS and CentOS release 6.7 (Final) >Reporter: Ilya Suntsov >Assignee: Artem Shutak >Priority: Critical > Fix For: 1.6 > > Attachments: logs_configs.zip > > > I ran a load test with 2 backups in client mode (1 client, 4 servers) on 3 > hosts (host1 - client, hosts 2, 3 - 2 data nodes on each) and got the > following exception after 5 hours of running the test: > {noformat} > <13:49:20> Got > exception: > org.apache.ignite.yardstick.cache.failover.IgniteConsistencyException: Cache > and local map are in inconsistent state [badKeys=[key-62687]] > org.apache.ignite.yardstick.cache.failover.IgniteConsistencyException: Cache > and local map are in inconsistent state > [badKeys=[key-62687]]<13:49:20> Full thread > dump of the current node below. > {noformat} > Logs are in the attachment. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-1433) .Net: Add IgniteException.JavaStackTrace
[ https://issues.apache.org/jira/browse/IGNITE-1433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-1433: Fix Version/s: (was: 1.7) 1.6 > .Net: Add IgniteException.JavaStackTrace > > > Key: IGNITE-1433 > URL: https://issues.apache.org/jira/browse/IGNITE-1433 > Project: Ignite > Issue Type: Task > Components: platforms >Affects Versions: 1.1.4 >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov > Fix For: 1.6 > > > Propagate the Java stack trace as a string in ExceptionUtils.GetException and > write it to a new field in the IgniteException class. > This will simplify debugging for us, both locally and when getting error > reports from clients. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-2797) Prepare and finish future never time out
[ https://issues.apache.org/jira/browse/IGNITE-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15198851#comment-15198851 ] Valentin Kulichenko commented on IGNITE-2797: - I also noticed a bug in the timeout logic for optimistic transactions. In the {{IgniteTxManager.lockMultiple(...)}} method we calculate the remaining time incorrectly (see line 1399): {code} long remainingTime = U.currentTimeMillis() - (tx.startTime() + tx.timeout()); {code} In most cases this value will be below zero, so transactions will be rolled back right away even with a long timeout. I'm attaching a test that reproduces the issue ({{TxTest2.java}}). > Prepare and finish future never time out > > > Key: IGNITE-2797 > URL: https://issues.apache.org/jira/browse/IGNITE-2797 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 1.5.0.final >Reporter: Valentin Kulichenko >Priority: Blocker > Labels: community, customer, important > Fix For: 1.6 > > > Even if a transaction timeout is configured, the transaction will not time out if > it's already in the prepare state. It will be shown in the log as a pending transaction > and can cause the whole cluster to hang. > We need to add a mechanism that will properly time out prepare and (if > possible) finish futures. > Also we can create an event that will be fired if there is a transaction > pending for a long time, showing which nodes we are waiting for responses from. > This will allow the user to recover by stopping only these nodes instead of > restarting the whole cluster. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
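For clarity, the sign error can be shown in isolation. A minimal, hypothetical reproduction (not Ignite's actual classes): the quoted expression subtracts the deadline from the current time, so while the transaction is still alive the result is negative and any `<= 0` check treats it as already expired; swapping the operands yields the intended remaining time.

```java
// Hypothetical helper isolating the sign error quoted above.
public class RemainingTimeBug {
    // As in the quoted line 1399: negative until the deadline passes.
    static long buggy(long now, long startTime, long timeout) {
        return now - (startTime + timeout);
    }

    // Corrected: deadline minus current time; positive while time remains.
    static long fixed(long now, long startTime, long timeout) {
        return (startTime + timeout) - now;
    }

    public static void main(String[] args) {
        long start = 1_000L, timeout = 500L, now = 1_200L; // 300 ms should remain
        System.out.println(buggy(now, start, timeout)); // -300: rolled back immediately
        System.out.println(fixed(now, start, timeout)); //  300: correct
    }
}
```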
[jira] [Commented] (IGNITE-2730) Ignite Events Source Streaming to Kafka
[ https://issues.apache.org/jira/browse/IGNITE-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201089#comment-15201089 ] Roman Shtykh commented on IGNITE-2730: -- Some implementation details for reviewers: - I use a distributed queue as a buffer that is polled by Kafka. This is done to keep cache event data safe between polls. - Schema facilities of Kafka Connect are not used -- cache events are marshalled with _JDKMarshaller_ and sent to Kafka. Later they can be deserialized with the provided _CacheEventDeserializer_, as done in the test code. - Kafka partition keys are specified in the current implementation. If there are requests from users, we can enhance this later. > Ignite Events Source Streaming to Kafka > --- > > Key: IGNITE-2730 > URL: https://issues.apache.org/jira/browse/IGNITE-2730 > Project: Ignite > Issue Type: New Feature > Components: streaming >Reporter: Roman Shtykh >Assignee: Roman Shtykh > Labels: community > > Stream specified Ignite events > (https://apacheignite.readme.io/docs/events) to Kafka via Kafka Connect. > It has to be added to the org.apache.ignite.stream.kafka.connect package. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
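The buffering scheme in the first bullet can be modeled without Ignite or Kafka on the classpath. In this simplified, hypothetical model a local BlockingQueue stands in for the distributed IgniteQueue, and the class and method names are illustrative only: the event listener enqueues events as they arrive, and the connector's poll drains whatever has accumulated since the previous poll, so events are kept safe between polls instead of being lost.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Simplified model of the buffer between the Ignite event listener and the
// Kafka Connect poll loop. In the real connector the buffer is a distributed
// queue, so buffered events are not tied to a single polling thread.
public class EventBuffer {
    private final BlockingQueue<String> buf = new LinkedBlockingQueue<>();

    // Called by the event listener whenever a cache event fires.
    public void onEvent(String evt) {
        buf.offer(evt);
    }

    // Called by the source task; drains everything buffered since the last poll.
    public List<String> poll() {
        List<String> batch = new ArrayList<>();
        buf.drainTo(batch);
        return batch;
    }
}
```

Events arriving between two polls simply wait in the queue; nothing is dropped.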
[jira] [Resolved] (IGNITE-2820) IGFS: Ensure that all participating IDs are locked right after TX start.
[ https://issues.apache.org/jira/browse/IGNITE-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov resolved IGNITE-2820. - Resolution: Fixed Fixed as a part of IGNITE-2860. > IGFS: Ensure that all participating IDs are locked right after TX start. > > > Key: IGNITE-2820 > URL: https://issues.apache.org/jira/browse/IGNITE-2820 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.5.0.final >Reporter: Vladimir Ozerov >Priority: Critical > Fix For: 1.7 > > > *Problem* > Sometimes we have to create new entries during an operation on metadata. If > we do not lock this ID along with the other participating IDs right after TX > start, a subsequent cache operation on this ID will result in network calls to > lock this key on other nodes. > *Solution* > Create the potentially new key before entering the TX and lock it together with the other keys. > This should decrease the number of network calls. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
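The proposed solution — create any potentially-new ID before the transaction, then lock all participating IDs together — can be illustrated with a plain-Java model. This is a hedged sketch, not IGFS code: ReentrantLocks in a local map stand in for distributed cache locks, and sorting provides the canonical lock order that avoids deadlocks between concurrent metadata operations.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Local model of "lock every participating ID right after TX start".
public class LockAllUpfront {
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    // Pre-create the ID for an entry that may be created inside the TX,
    // so it can be locked together with the existing IDs.
    public String reserveNewId() {
        return UUID.randomUUID().toString();
    }

    // Acquire all locks in one pass, in canonical (sorted) order.
    public List<ReentrantLock> lockAll(Collection<String> ids) {
        List<String> sorted = new ArrayList<>(ids);
        Collections.sort(sorted);

        List<ReentrantLock> held = new ArrayList<>();
        for (String id : sorted) {
            ReentrantLock l = locks.computeIfAbsent(id, k -> new ReentrantLock());
            l.lock();
            held.add(l);
        }
        return held;
    }

    // Release in reverse acquisition order.
    public void unlockAll(List<ReentrantLock> held) {
        for (int i = held.size() - 1; i >= 0; i--)
            held.get(i).unlock();
    }
}
```

Because the fresh ID is reserved before lockAll runs, no extra lock acquisition (and in the distributed case, no extra network round trip) is needed when the entry is actually created mid-transaction.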
[jira] [Commented] (IGNITE-2862) Deployment Ignite in Mesos cluster is failed
[ https://issues.apache.org/jira/browse/IGNITE-2862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201765#comment-15201765 ] Vasilisa Sidorova commented on IGNITE-2862: NB: the screenshot in step 5 of the "RUN THE FRAMEWORK VIA MARATHON" paragraph is obsolete. Please update it, for example with the attached screenshot "mesos_logs.png". > Deployment Ignite in Mesos cluster is failed > > > Key: IGNITE-2862 > URL: https://issues.apache.org/jira/browse/IGNITE-2862 > Project: Ignite > Issue Type: Bug > Components: general >Affects Versions: 1.5.0.final >Reporter: Vasilisa Sidorova > Attachments: marathon.json, mesos_activetasks.png, mesos_logs.png > > > - > DESCRIPTION > - > Deployment of Ignite in a Mesos cluster by this instruction > https://apacheignite.readme.io/docs/mesos-deployment fails > - > STEPS TO REPRODUCE > - > # Do items 1-5 from the "RUN THE FRAMEWORK VIA MARATHON" paragraph > - > ACTUAL RESULT > - > # Ignite nodes didn't start. Only the ignition task is running.
See the > attached picture "mesos_activetasks.png" > # The stderr log for the ignition task contains the following exception: > {noformat} > I0318 18:42:08.252142 17138 fetcher.cpp:409] Fetcher Info: > {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/20160318-180241-16777343-5050-16554-S0\/root","items":[{"action":"BYPASS_CACHE","uri":{"extract":false,"value":"https:\/\/s3.amazonaws.com\/vasilisk\/m\/ignite-mesos-1.5.0.final.jar"}}],"sandbox_directory":"\/tmp\/mesos\/slaves\/20160318-180241-16777343-5050-16554-S0\/frameworks\/20150603-121744-16842879-5050-6241-\/executors\/ignition.f8fa2435-ed1f-11e5-bea1-0242e00dbbdd\/runs\/776db62f-965f-4e9e-950a-4edb67c44667","user":"root"} > I0318 18:42:08.253284 17138 fetcher.cpp:364] Fetching URI > 'https://s3.amazonaws.com/vasilisk/m/ignite-mesos-1.5.0.final.jar' > I0318 18:42:08.253298 17138 fetcher.cpp:238] Fetching directly into the > sandbox directory > I0318 18:42:08.253312 17138 fetcher.cpp:176] Fetching URI > 'https://s3.amazonaws.com/vasilisk/m/ignite-mesos-1.5.0.final.jar' > I0318 18:42:08.253325 17138 fetcher.cpp:126] Downloading resource from > 'https://s3.amazonaws.com/vasilisk/m/ignite-mesos-1.5.0.final.jar' to > '/tmp/mesos/slaves/20160318-180241-16777343-5050-16554-S0/frameworks/20150603-121744-16842879-5050-6241-/executors/ignition.f8fa2435-ed1f-11e5-bea1-0242e00dbbdd/runs/776db62f-965f-4e9e-950a-4edb67c44667/ignite-mesos-1.5.0.final.jar' > I0318 18:42:12.091784 17138 fetcher.cpp:441] Fetched > 'https://s3.amazonaws.com/vasilisk/m/ignite-mesos-1.5.0.final.jar' to > '/tmp/mesos/slaves/20160318-180241-16777343-5050-16554-S0/frameworks/20150603-121744-16842879-5050-6241-/executors/ignition.f8fa2435-ed1f-11e5-bea1-0242e00dbbdd/runs/776db62f-965f-4e9e-950a-4edb67c44667/ignite-mesos-1.5.0.final.jar' > I0318 18:42:12.293658 17142 exec.cpp:132] Version: 0.23.0 > I0318 18:42:12.296212 17145 exec.cpp:206] Executor registered on slave > 20160318-180241-16777343-5050-16554-S0 > Mar 18, 2016 6:42:12 PM 
org.apache.ignite.mesos.IgniteFramework main > INFO: Enabling checkpoint for the framework > 2016-03-18 18:42:12.495:INFO::main: Logging initialized @151ms > 2016-03-18 18:42:22.542:INFO:oejs.Server:main: jetty-9.2.z-SNAPSHOT > 2016-03-18 18:42:22.579:INFO:oejs.ServerConnector:main: Started > ServerConnector@1268c278{HTTP/1.1}{172.17.0.1:48610} > 2016-03-18 18:42:22.580:INFO:oejs.Server:main: Started @10235ms > Exception in thread "main" java.lang.RuntimeException: Got unexpected > response code. Response code: 404 > at > org.apache.ignite.mesos.resource.IgniteProvider.downloadIgnite(IgniteProvider.java:202) > at > org.apache.ignite.mesos.resource.IgniteProvider.getIgnite(IgniteProvider.java:132) > at > org.apache.ignite.mesos.resource.ResourceProvider.init(ResourceProvider.java:57) > at org.apache.ignite.mesos.IgniteFramework.main(IgniteFramework.java:77) > {noformat} > - > EXPECTED RESULT > - > Tasks for all 3 nodes are running along with the ignition task. > - > ADDITIONAL INFO > - > # The file "marathon.json" is attached -- This message was sent by Atlassian JIRA (v6.3.4#6332)