[jira] [Created] (IGNITE-12262) Implement an Apache Camel Data Streamer
Emmanouil Gkatziouras created IGNITE-12262: -- Summary: Implement an Apache Camel Data Streamer Key: IGNITE-12262 URL: https://issues.apache.org/jira/browse/IGNITE-12262 Project: Ignite Issue Type: New Feature Components: streaming Affects Versions: 2.7.6 Reporter: Emmanouil Gkatziouras Assignee: Emmanouil Gkatziouras A Pub/Sub data streamer would help GCP users consume data and feed it into an Ignite cache. This data streamer will instantiate a Pub/Sub consumer endpoint. The user will specify and apply a StreamTransformer to the incoming Exchange, which shall add the result to the data streamer. The streamer will register as a subscriber and will listen to incoming messages. The same subscriber id shall be used for all nodes, so only one subscriber/node will process each incoming message, instead of every subscriber receiving the same message. -- This message was sent by Atlassian Jira (v8.3.4#803005)
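The delivery semantics described above (all nodes share one subscriber id, so each message is handled by exactly one node rather than by every node) can be modeled with a small self-contained sketch. All names below are illustrative; this is not the Ignite or Pub/Sub API:

```java
import java.util.*;
import java.util.concurrent.*;

// Toy model of shared-subscription delivery: several "nodes" drain one shared
// subscription (here, a single queue standing in for one subscriber id), so
// each message is processed by exactly one node.
public class SharedSubscriptionSketch {
    public static Map<String, Integer> consume(List<String> messages, int nodes)
            throws InterruptedException {
        BlockingQueue<String> subscription = new LinkedBlockingQueue<>(messages);
        Map<String, Integer> processedBy = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(nodes);

        for (int n = 0; n < nodes; n++) {
            final int nodeId = n;
            pool.submit(() -> {
                String msg;
                // A message taken by one node is never seen by the others.
                while ((msg = subscription.poll()) != null)
                    processedBy.put(msg, nodeId);
            });
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return processedBy;
    }
}
```

Every message ends up in the result map exactly once, mirroring the "only one subscriber/node will process an incoming message" requirement.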
[jira] [Comment Edited] (IGNITE-10683) Prepare process of packaging and delivering thin clients
[ https://issues.apache.org/jira/browse/IGNITE-10683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944780#comment-16944780 ] Peter Ivanov edited comment on IGNITE-10683 at 10/4/19 7:29 PM: [~dmagda], [~dma...@apache.org] - what do you think, should it be included in the 2.8 scope? was (Author: vveider): [~dmagda], what do you think, should it be included in the 2.8 scope? > Prepare process of packaging and delivering thin clients > > > Key: IGNITE-10683 > URL: https://issues.apache.org/jira/browse/IGNITE-10683 > Project: Ignite > Issue Type: Task >Reporter: Peter Ivanov >Assignee: Peter Ivanov >Priority: Major > Fix For: 2.8 > > > # **NodeJs client** > #* +Instruction+: > https://github.com/nobitlost/ignite/blob/ignite--docs/modules/platforms/nodejs/README.md#publish-ignite-nodejs-client-on-npmjscom-instruction > #* +Uploaded+: https://www.npmjs.com/package/apache-ignite-client > # **PHP client** > #* +Instruction+: > https://github.com/nobitlost/ignite/blob/ignite-7783-docs/modules/platforms/php/README.md#release-the-client-in-the-php-package-repository-instruction > {panel} > Cannot be uploaded on Packagist as the client should be in a dedicated > repository for that - > https://issues.apache.org/jira/browse/IGNITE-7783?focusedCommentId=16595476&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16595476 > Installation from the sources works. > {panel} > # **Python client** > I have already registered the package `pyignite` on PyPI[1]. The person who > is going to take the responsibility of maintaining it should create an > account on PyPI and mail me in private, so that I can grant them the > necessary rights. They also must install twine[3]. > The process of packaging is well described in the packaging tutorial[2]. 
In > a nutshell, the maintainer must do the following: > ## Clone/pull the sources from the git repository, > ## Enter the directory in which `setup.py` resides (“the setup > directory”), in our case it is `modules/platforms/python`. > ## Create the packages with the command `python3 setup.py sdist bdist_wheel`. > The packages will be created in the `modules/platforms/python/dist` folder. > ## Upload packages with twine: `twine upload dist/*`. > It is very useful to have a dedicated Python virtual environment prepared to > perform steps 3-4. Just do an editable install of `pyignite` into that > environment from the setup directory: `pip3 install -e .` You can also > install twine (`pip install twine`) in it. > Consider also making a `.pypirc` file to save time on logging in to PyPI. > The newest version of `twine` is said to support keyrings on Linux and Mac, but I > have not tried this yet. > [1] https://pypi.org/project/pyignite/ > [2] https://packaging.python.org/tutorials/packaging-projects/ > [3] https://twine.readthedocs.io/en/latest/ > Some other notes on PyPI and versioning. > - The package version is located in `setup.py`; it is the `version` > argument of the `setuptools.setup()` function. Editing `setup.py` is the > only way to set the package version. > - You absolutely cannot replace a package in PyPI (hijacking prevention). If > you have published a package by mistake, all you can do is delete the > unwanted package, increment the version counter in `setup.py`, and try again. > - If you upload the package through the web interface of PyPI (without > twine), the package description will be garbled. The web interface does not > support Markdown. > Anyway, I would like to join in the congratulations on a successful release. > Kudos to the team. -- This message was sent by Atlassian Jira (v8.3.4#803005)
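The release sequence in steps 1-4 above can be sketched as a short script. This is a dry run (each command is echoed, not executed) so it is safe to run anywhere; the commands themselves are taken from the comment:

```shell
#!/bin/sh
# Dry-run sketch of the PyPI release sequence described above.
# Replace the echo in run() with "$@" to actually execute the commands.
set -e
run() { echo "+ $*"; }

run git pull                              # 1. sync the sources from the git repository
run cd modules/platforms/python           # 2. enter the setup directory
run python3 setup.py sdist bdist_wheel    # 3. build sdist + wheel into dist/
run 'twine upload dist/*'                 # 4. upload the packages with twine
```

Running the real commands in a dedicated virtualenv, as the comment suggests, keeps the build isolated from the system Python.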
[jira] [Commented] (IGNITE-10683) Prepare process of packaging and delivering thin clients
[ https://issues.apache.org/jira/browse/IGNITE-10683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944780#comment-16944780 ] Peter Ivanov commented on IGNITE-10683: --- [~dmagda], what do you think, should it be included in the 2.8 scope? -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (IGNITE-12256) Fix double invocation of javaMajorVersion in scripts
[ https://issues.apache.org/jira/browse/IGNITE-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944753#comment-16944753 ] Peter Ivanov commented on IGNITE-12256: --- Looks good to me. > Fix double invocation of javaMajorVersion in scripts > > > Key: IGNITE-12256 > URL: https://issues.apache.org/jira/browse/IGNITE-12256 > Project: Ignite > Issue Type: Improvement > Components: general >Affects Versions: 2.7.6 >Reporter: Ilya Kasnacheev >Assignee: Ilya Kasnacheev >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Most of our shell scripts look as follows: > {code} > # > # Discover path to Java executable and check its version. > # > checkJava > # > # Discover IGNITE_HOME environment variable. > # > setIgniteHome > # > # Final JVM_OPTS for Java 9+ compatibility > # > javaMajorVersion "${JAVA_HOME}/bin/java" > {code} > It makes no sense to me since we already call javaMajorVersion in checkJava. > Let's try to get rid of it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
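A sketch of the proposed cleanup, assuming checkJava records the detected major version in a global so the trailing javaMajorVersion call can simply be dropped. The version parsing and the fake `java_cmd` below are illustrative, not the actual Ignite script code (the real scripts also handle the legacy "1.x" version format):

```shell
#!/bin/sh
# checkJava already calls javaMajorVersion, which stores the result in the
# global $version; later code reuses $version instead of invoking
# javaMajorVersion "${JAVA_HOME}/bin/java" a second time.
# java_cmd fakes a JVM here so the sketch is self-contained.
java_cmd() { echo 'openjdk version "11.0.2"' >&2; }

javaMajorVersion() {
    version=$("$1" -version 2>&1 | sed -E 's/.*version "([0-9]+).*/\1/;q')
}

checkJava() {
    javaMajorVersion java_cmd   # detect once; $version is set from here on
}

checkJava
echo "major version: $version"  # no second javaMajorVersion invocation needed
```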
[jira] [Commented] (IGNITE-12260) Fallback to {user.home}/ignite/work if {user.dir} is not writable
[ https://issues.apache.org/jira/browse/IGNITE-12260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944746#comment-16944746 ] Denis A. Magda commented on IGNITE-12260: - [~ilyak], I would advise us not to fall back but to use "user.home" as the only default. Let's make things simpler, with as few fallback options as possible. > Fallback to {user.home}/ignite/work if {user.dir} is not writable > - > > Key: IGNITE-12260 > URL: https://issues.apache.org/jira/browse/IGNITE-12260 > Project: Ignite > Issue Type: Improvement > Components: general >Affects Versions: 2.7.6 >Reporter: Ilya Kasnacheev >Priority: Major > > After IGNITE-12103 we have a new problem: some software under Windows, > e.g. installed in Program Files, tries to create the ignite\work\ dir > under the current dir, which is not writable. > It was suggested to fall back to the {user.home}\ignite\work dir in such cases. On > each start we will try to create the workdir in the current dir, fail, print a warning, > and fall back to the home dir. -- This message was sent by Atlassian Jira (v8.3.4#803005)
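The fallback rule described in the issue can be sketched in a few lines. The class, method, and warning text below are hypothetical, not the actual Ignite implementation:

```java
import java.io.File;

// Sketch of the proposed rule: prefer a work directory under the current dir
// (user.dir), but fall back to {user.home}/ignite/work when the current dir
// is not writable (e.g. software installed under Program Files).
public class WorkDirSketch {
    static File resolveWorkDir() {
        File cur = new File(System.getProperty("user.dir"),
            "ignite" + File.separator + "work");

        if (cur.exists() || cur.mkdirs())
            return cur; // current dir is writable: use it

        // Could not create the work dir: warn and fall back to the home dir.
        System.err.println("Warning: " + cur + " is not writable, falling back to user.home");

        return new File(System.getProperty("user.home"),
            "ignite" + File.separator + "work");
    }
}
```

Magda's comment argues for skipping the fallback entirely and always using user.home, which would reduce this method to its last line.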
[jira] [Created] (IGNITE-12261) Issue with adding nested index dynamically
Hemambara created IGNITE-12261: -- Summary: Issue with adding nested index dynamically Key: IGNITE-12261 URL: https://issues.apache.org/jira/browse/IGNITE-12261 Project: Ignite Issue Type: Bug Affects Versions: 2.7.6 Reporter: Hemambara [http://apache-ignite-users.70518.x6.nabble.com/Issue-with-adding-nested-index-dynamically-tt29571.html] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (IGNITE-10025) CPP: Runtime code deployment
[ https://issues.apache.org/jira/browse/IGNITE-10025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Sapego updated IGNITE-10025: - Fix Version/s: (was: 2.9) > CPP: Runtime code deployment > > > Key: IGNITE-10025 > URL: https://issues.apache.org/jira/browse/IGNITE-10025 > Project: Ignite > Issue Type: New Feature > Components: platforms >Reporter: Igor Sapego >Assignee: Igor Sapego >Priority: Major > Labels: cpp > > It would be useful for a user to have the ability to deploy code (dll, so, > etc.) in a cluster on a selected subset of nodes. > This task can be split into 3 steps: > 1. Uploading the module to a selected subset of nodes. > 2. Loading the module dynamically on the whole subset. > 3. Initializing the module (i.e. registering callables declared in the > module within Ignite). > This also may require partial implementation of the Cluster API - IGNITE-5708 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (IGNITE-12220) Allow to use cache-related permissions both at system and per-cache levels
[ https://issues.apache.org/jira/browse/IGNITE-12220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Rakov updated IGNITE-12220: Reviewer: Stepachev Maksim > Allow to use cache-related permissions both at system and per-cache levels > -- > > Key: IGNITE-12220 > URL: https://issues.apache.org/jira/browse/IGNITE-12220 > Project: Ignite > Issue Type: Task > Components: security >Affects Versions: 2.7.6 >Reporter: Andrey Kuznetsov >Assignee: Sergei Ryzhov >Priority: Major > Fix For: 2.8 > > Time Spent: 3h > Remaining Estimate: 0h > > Currently, {{CACHE_CREATE}} and {{CACHE_DESTROY}} permissions are enforced to > be system-level permissions, see for instance > {{SecurityPermissionSetBuilder#appendCachePermissions}}. This looks > inflexible: Ignite Security implementations are not able to manage cache > creation and deletion permissions on a per-cache basis (unlike get/put/remove > permissions). All such limitations should be found and removed in order to > allow all {{CACHE_*}} permissions to be set both at system and per-cache > levels. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (IGNITE-12260) Fallback to {user.home}/ignite/work if {user.dir} is not writable
Ilya Kasnacheev created IGNITE-12260: Summary: Fallback to {user.home}/ignite/work if {user.dir} is not writable Key: IGNITE-12260 URL: https://issues.apache.org/jira/browse/IGNITE-12260 Project: Ignite Issue Type: Improvement Components: general Affects Versions: 2.7.6 Reporter: Ilya Kasnacheev After IGNITE-12103 we have a new problem: some software under Windows, e.g. installed in Program Files, tries to create the ignite\work\ dir under the current dir, which is not writable. It was suggested to fall back to the {user.home}\ignite\work dir in such cases. On each start we will try to create the workdir in the current dir, fail, print a warning, and fall back to the home dir. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (IGNITE-12255) Cache affinity fetching and calculation on client nodes may be broken in some cases
[ https://issues.apache.org/jira/browse/IGNITE-12255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944589#comment-16944589 ] Ignite TC Bot commented on IGNITE-12255: {panel:title=Branch: [pull/6933/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *--> Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=4658217&buildTypeId=IgniteTests24Java8_RunAll] > Cache affinity fetching and calculation on client nodes may be broken in some > cases > --- > > Key: IGNITE-12255 > URL: https://issues.apache.org/jira/browse/IGNITE-12255 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 2.5, 2.7 >Reporter: Pavel Kovalenko >Assignee: Pavel Kovalenko >Priority: Critical > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > We have a cluster with server and client nodes. > We dynamically start several caches on the cluster. > Periodically we create and destroy some temporary cache in the cluster to move > up the cluster topology version. > At the same time, a random client node chooses a random existing cache and > performs operations on that cache. > This leads to an exception on the client node that affinity is not initialized for > a cache during a cache operation, like: > Affinity for topology version is not initialized [topVer = 8:10, head = 8:2] > This exception means that the last affinity for the cache was calculated on > version [8,2]. This is the cache start version. It happens because, during > creation/destruction of a temporary cache, we don’t re-calculate affinity for > all existing but not yet accessed caches on client nodes. Re-calculation in > this case is cheap - we just copy the affinity assignment and increment the topology > version. > As a solution, we need to fetch affinity on client node join for all caches. 
> Also, we need to re-calculate affinity for all affinity holders (not only for > started caches or only configured caches) for all topology events that > happened in a cluster on a client node. > This solution revealed an existing race between client node join and > concurrent cache destroy. > The race is the following: > A client node (with some configured caches) joins a cluster, sending a > SingleMessage to the coordinator during client PME. This SingleMessage contains > affinity fetch requests for all cluster caches. While the SingleMessage is > in flight, server nodes finish the client PME and also process and finish a cache > destroy PME. When a cache is destroyed, affinity for that cache is cleared. > When the SingleMessage is delivered to the coordinator, it doesn’t have affinity for a > requested cache because the cache is already destroyed. This leads to an assertion > error on the coordinator and unpredictable behavior on the client node. > The race may be fixed with the following change: > If the coordinator doesn’t have affinity for a requested cache from the > client node, it doesn’t break PME with an assertion error; it just doesn’t send > affinity for that cache to the client node. When the client node receives the > FullMessage and sees that affinity for some requested cache doesn’t exist, it > just closes the cache proxy for user interactions, which throws a CacheStopped > exception for every attempt to use that cache. This is safe behavior because > the cache destroy event should happen on the client node soon and destroy > that cache completely. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (IGNITE-12189) Implement correct limit for TextQuery
[ https://issues.apache.org/jira/browse/IGNITE-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944584#comment-16944584 ] Andrey Mashenkov commented on IGNITE-12189: --- [~Yuriy_Shuliha], looks better. I've added a few more comments and started a TC run. > Implement correct limit for TextQuery > - > > Key: IGNITE-12189 > URL: https://issues.apache.org/jira/browse/IGNITE-12189 > Project: Ignite > Issue Type: Improvement > Components: general >Reporter: Yuriy Shuliha >Assignee: Yuriy Shuliha >Priority: Major > Fix For: 2.8 > > Time Spent: 2h 10m > Remaining Estimate: 0h > > PROBLEM > For now, each server node returns all response records to the client node, and > they may contain ~thousands, ~hundred thousands of records, > even if we need only the first 10-100. Again, all the results are added to a > queue in _*GridCacheQueryFutureAdapter*_ in arbitrary order by pages. > There are no means to deliver a deterministic result. > SOLUTION > Implement _*limit*_ as a parameter for _*TextQuery*_ and > _*GridCacheQueryRequest*_. > It should be passed as the limit parameter in Lucene's > _*IndexSearcher.search()*_ in _*GridLuceneIndex*_. > For distributed queries, _*limit*_ will also trim the response queue when merging > results. > Type: long > Special value: 0 -> No limit (Integer.MAX_VALUE); -- This message was sent by Atlassian Jira (v8.3.4#803005)
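The "trim the response queue when merging results" step can be sketched generically. This models the intended behavior only; it is not the actual GridCacheQueryFutureAdapter code, and the class and method names are made up:

```java
import java.util.ArrayList;
import java.util.List;

// Generic sketch of merge-with-limit: each node returns its own page(s) of
// results, and the reducer keeps only the first `limit` rows, with 0 meaning
// "no limit" as described in the ticket.
public class LimitMergeSketch {
    static <T> List<T> mergeWithLimit(List<List<T>> pagesPerNode, int limit) {
        List<T> merged = new ArrayList<>();

        for (List<T> page : pagesPerNode) {
            for (T row : page) {
                if (limit > 0 && merged.size() >= limit)
                    return merged; // stop as soon as the limit is reached

                merged.add(row);
            }
        }

        return merged; // limit == 0: no limit
    }
}
```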
[jira] [Assigned] (IGNITE-12259) Create new module for support spring-5.2.X and spring-data-2.2.X
[ https://issues.apache.org/jira/browse/IGNITE-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Surkov Aleksandr reassigned IGNITE-12259: - Assignee: Surkov Aleksandr > Create new module for support spring-5.2.X and spring-data-2.2.X > > > Key: IGNITE-12259 > URL: https://issues.apache.org/jira/browse/IGNITE-12259 > Project: Ignite > Issue Type: Wish >Reporter: Surkov Aleksandr >Assignee: Surkov Aleksandr >Priority: Minor > > The latest Spring version is > [5.2.0.RELEASE|https://mvnrepository.com/artifact/org.springframework/spring-context/5.2.0.RELEASE], > and the latest Spring Data version is > [2.2.0.RELEASE|https://mvnrepository.com/artifact/org.springframework.data/spring-data-commons/2.2.0.RELEASE]. > It would be nice to add a module to support these versions. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (IGNITE-12259) Create new module for support spring-5.2.X and spring-data-2.2.X
Surkov Aleksandr created IGNITE-12259: - Summary: Create new module for support spring-5.2.X and spring-data-2.2.X Key: IGNITE-12259 URL: https://issues.apache.org/jira/browse/IGNITE-12259 Project: Ignite Issue Type: Wish Reporter: Surkov Aleksandr The latest Spring version is [5.2.0.RELEASE|https://mvnrepository.com/artifact/org.springframework/spring-context/5.2.0.RELEASE], and the latest Spring Data version is [2.2.0.RELEASE|https://mvnrepository.com/artifact/org.springframework.data/spring-data-commons/2.2.0.RELEASE]. It would be nice to add a module to support these versions. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (IGNITE-12254) IO errors during write header of WAL files in FSYNC mode should be handled by failure handler
[ https://issues.apache.org/jira/browse/IGNITE-12254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944542#comment-16944542 ] Aleksey Plekhanov commented on IGNITE-12254: [~nizhikov], [~andrey-kuznetsov] thanks for the reviews! I will merge the ticket after a week. > IO errors during write header of WAL files in FSYNC mode should be handled by > failure handler > - > > Key: IGNITE-12254 > URL: https://issues.apache.org/jira/browse/IGNITE-12254 > Project: Ignite > Issue Type: Bug >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Blocker > Time Spent: 20m > Remaining Estimate: 0h > > Currently, such errors can hang the cluster. > Reproducer: > {code:java}
> @Test
> public void testWalFsyncIOError() throws Exception {
>     cleanPersistenceDir();
>
>     IgniteConfiguration cfg = new IgniteConfiguration();
>
>     cfg.setCacheConfiguration(new CacheConfiguration(DEFAULT_CACHE_NAME).setAtomicityMode(ATOMIC));
>
>     cfg.setDataStorageConfiguration(
>         new DataStorageConfiguration()
>             .setDefaultDataRegionConfiguration(
>                 new DataRegionConfiguration()
>                     .setMaxSize(100L * 1024 * 1024)
>                     .setPersistenceEnabled(true))
>             .setWalMode(WALMode.FSYNC)
>             .setWalSegmentSize(512 * 1024)
>             .setWalBufferSize(512 * 1024));
>
>     IgniteEx ignite0 = startGrid(new IgniteConfiguration(cfg).setIgniteInstanceName("ignite0"));
>     IgniteEx ignite1 = startGrid(new IgniteConfiguration(cfg).setIgniteInstanceName("ignite1"));
>
>     ignite0.cluster().active(true);
>
>     IgniteCache cache = ignite0.cache(DEFAULT_CACHE_NAME);
>
>     for (int i = 0; i < 1_000; i++)
>         cache.put(i, "Test value " + i);
>
>     ((FileWriteAheadLogManager)ignite1.context().cache().context().wal()).setFileIOFactory(new FileIOFactory() {
>         FileIOFactory delegateFactory = new RandomAccessFileIOFactory();
>
>         @Override public FileIO create(File file, OpenOption... modes) throws IOException {
>             final FileIO delegate = delegateFactory.create(file, modes);
>
>             return new FileIODecorator(delegate) {
>                 @Override public int write(ByteBuffer srcBuf) throws IOException {
>                     throw new IOException("No space left on device");
>                 }
>             };
>         }
>     });
>
>     for (int i = 0; i < 2_000; i++) {
>         try {
>             cache.put(i, "Test value " + i);
>         }
>         catch (Exception ignore) {
>             // No-op.
>         }
>     }
> }
> {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (IGNITE-12129) Upgrade spring-5.0.X and spring-data-2.0.X dependencies to latest versions
[ https://issues.apache.org/jira/browse/IGNITE-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944337#comment-16944337 ] Surkov Aleksandr edited comment on IGNITE-12129 at 10/4/19 2:17 PM: [~p.klevakin] we could not update spring-data 2.0.X to 2.X and spring-5.0 to 5.X because the module *ignite-spring-data_2.0* is used only for spring 5.0 and spring-data 2.0. I'll update spring to 5.0.15 and spring-data to 2.0.14 and create a ticket to update spring to 5.2.X and spring-data to 2.2.X. was (Author: surkov): [~p.klevakin] we could not update spring-data 2.0.X to 2.X and spring-5.0 to 5.X because the module *ignite-spring-data_2.0* is used only for spring 5.0 and spring-data 2.0. I'll update spring to 5.0.14 and spring-data to 2.0.14 and create a ticket to update spring to 5.2.X and spring-data to 2.2.X. > Upgrade spring-5.0.X and spring-data-2.0.X dependencies to latest versions > -- > > Key: IGNITE-12129 > URL: https://issues.apache.org/jira/browse/IGNITE-12129 > Project: Ignite > Issue Type: Wish > Components: spring >Reporter: Pavel >Assignee: Surkov Aleksandr >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > The latest Spring version is 5.1.9.RELEASE and the Spring Boot version is > 2.1.7.RELEASE. > In the Ignite master branch, the Spring versions are 5.0.8.RELEASE and 2.0.9.RELEASE. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (IGNITE-12254) IO errors during write header of WAL files in FSYNC mode should be handled by failure handler
[ https://issues.apache.org/jira/browse/IGNITE-12254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944523#comment-16944523 ] Andrey Kuznetsov commented on IGNITE-12254: --- LGTM as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (IGNITE-12129) Upgrade spring-5.0.X and spring-data-2.0.X dependencies to latest versions
[ https://issues.apache.org/jira/browse/IGNITE-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944522#comment-16944522 ] Ignite TC Bot commented on IGNITE-12129: {panel:title=Branch: [pull/6935/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *--> Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=4660628&buildTypeId=IgniteTests24Java8_RunAll] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (IGNITE-12254) IO errors during write header of WAL files in FSYNC mode should be handled by failure handler
[ https://issues.apache.org/jira/browse/IGNITE-12254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944515#comment-16944515 ] Ignite TC Bot commented on IGNITE-12254: {panel:title=Branch: [pull/6934/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *--> Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=4660105&buildTypeId=IgniteTests24Java8_RunAll] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (IGNITE-8473) Add option to enable/disable WAL for several caches with single command
[ https://issues.apache.org/jira/browse/IGNITE-8473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergei Ryzhov reassigned IGNITE-8473: - Assignee: Sergei Ryzhov > Add option to enable/disable WAL for several caches with single command > --- > > Key: IGNITE-8473 > URL: https://issues.apache.org/jira/browse/IGNITE-8473 > Project: Ignite > Issue Type: Improvement >Reporter: Ivan Rakov >Assignee: Sergei Ryzhov >Priority: Major > Labels: newbie > Fix For: 2.8 > > > The API method for disabling WAL in IgniteCluster accepts only one cache name. > Every call triggers an exchange and checkpoints cluster-wide, so it takes plenty > of time to disable/enable WAL for multiple caches. > We should add an option to disable/enable WAL for several caches with a single > command. > New proposed API methods: > {noformat} > IgniteCluster.disableWal(Collection<String> cacheNames) > IgniteCluster.enableWal(Collection<String> cacheNames) > IgniteCluster.disableWal() // Disables WAL for all caches. > IgniteCluster.enableWal() // Enables WAL for all caches. > {noformat} > Methods should return true if the WAL state of at least one cache was actually > changed by the call. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (IGNITE-12258) .NET: Add ContinuousQueryWithTransformer
Pavel Tupitsyn created IGNITE-12258: --- Summary: .NET: Add ContinuousQueryWithTransformer Key: IGNITE-12258 URL: https://issues.apache.org/jira/browse/IGNITE-12258 Project: Ignite Issue Type: Improvement Components: platforms Reporter: Pavel Tupitsyn Assignee: Pavel Tupitsyn ContinuousQueryWithTransformer is a powerful mechanism to improve continuous query performance by sending only relevant data back to listener nodes. https://apacheignite.readme.io/docs/continuous-queries#section-remote-transformer
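The performance idea being ported to .NET — transform on the node that owns the data so only a projection crosses the wire — can be sketched generically. This is a stdlib-only Java illustration of the concept, not Ignite's (or the .NET client's) actual API:

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

// Generic sketch of the transform-before-send idea behind
// ContinuousQueryWithTransformer: the transformer runs where the data lives,
// and only the extracted value reaches the local listener.
public class TransformerSketch {
    static class Person {
        final String name;
        final int salary;
        Person(String name, int salary) { this.name = name; this.salary = salary; }
    }

    static <V, T> void notifyListener(List<V> updatedEntries,
                                      Function<V, T> remoteTransformer,
                                      Consumer<T> localListener) {
        for (V entry : updatedEntries)
            localListener.accept(remoteTransformer.apply(entry)); // only T is shipped
    }

    public static void main(String[] args) {
        List<Person> updates = List.of(new Person("a", 10), new Person("b", 20));
        // The listener receives just the names, not whole Person objects.
        notifyListener(updates, p -> p.name, n -> System.out.println("got " + n));
    }
}
```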
[jira] [Commented] (IGNITE-12213) Sql objects system views
[ https://issues.apache.org/jira/browse/IGNITE-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944371#comment-16944371 ] Ignite TC Bot commented on IGNITE-12213: {panel:title=Branch: [pull/6916/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *--> Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=4660221&buildTypeId=IgniteTests24Java8_RunAll] > Sql objects system views > > > Key: IGNITE-12213 > URL: https://issues.apache.org/jira/browse/IGNITE-12213 > Project: Ignite > Issue Type: Sub-task >Reporter: Nikolay Izhikov >Assignee: Nikolay Izhikov >Priority: Major > Labels: IEP-35, await > Fix For: 2.8 > > Time Spent: 3.5h > Remaining Estimate: 0h > > IGNITE-12145 finished > We should add SQL objects system views. > * Schemas > * Tables > * Indexes > * SQL queries -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (IGNITE-11312) JDBC: Thin driver doesn't reports incorrect property names
[ https://issues.apache.org/jira/browse/IGNITE-11312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944352#comment-16944352 ] Igor Sapego edited comment on IGNITE-11312 at 10/4/19 9:32 AM: --- [~tledkov-gridgain] or [~alapin], could you help? I'm not very familiar with JDBC. was (Author: isapego): [~tledkov-gridgain] or [~alapin]], could you help? I'm not very familiar with JDBC. > JDBC: Thin driver doesn't reports incorrect property names > -- > > Key: IGNITE-11312 > URL: https://issues.apache.org/jira/browse/IGNITE-11312 > Project: Ignite > Issue Type: Improvement > Components: jdbc >Reporter: Stanislav Lukyanov >Assignee: Lev Agafonov >Priority: Major > Labels: newbie > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > JDBC driver reports the properties it supports via getPropertyInfo method. It > currently reports the property names as simple strings, like > "enforceJoinOrder". However, when the properties are processed on connect > they are looked up with prefix "ignite.jdbc", e.g. > "ignite.jdbc.enforceJoinOrder". > Because of this UI tools like DBeaver can't properly pass the properties to > Ignite. For example, when "enforceJoinOrder" is set to true in "Connection > settings" -> "Driver properties" menu of DBeaver it has no effect. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (IGNITE-11312) JDBC: Thin driver doesn't reports incorrect property names
[ https://issues.apache.org/jira/browse/IGNITE-11312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944352#comment-16944352 ] Igor Sapego commented on IGNITE-11312: -- [~tledkov-gridgain] or [~alapin]], could you help? I'm not very familiar with JDBC. > JDBC: Thin driver doesn't reports incorrect property names > -- > > Key: IGNITE-11312 > URL: https://issues.apache.org/jira/browse/IGNITE-11312 > Project: Ignite > Issue Type: Improvement > Components: jdbc >Reporter: Stanislav Lukyanov >Assignee: Lev Agafonov >Priority: Major > Labels: newbie > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > JDBC driver reports the properties it supports via getPropertyInfo method. It > currently reports the property names as simple strings, like > "enforceJoinOrder". However, when the properties are processed on connect > they are looked up with prefix "ignite.jdbc", e.g. > "ignite.jdbc.enforceJoinOrder". > Because of this UI tools like DBeaver can't properly pass the properties to > Ignite. For example, when "enforceJoinOrder" is set to true in "Connection > settings" -> "Driver properties" menu of DBeaver it has no effect. -- This message was sent by Atlassian Jira (v8.3.4#803005)
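The mismatch the ticket describes is easy to demonstrate with plain `java.util.Properties`. The property and prefix names below come from the ticket; the `lookup` helper is an illustrative stand-in, not the driver's real code:

```java
import java.util.Properties;

// Sketch of the bug: the driver advertises plain names ("enforceJoinOrder")
// via getPropertyInfo, but the connect path looks values up under an
// "ignite.jdbc." prefix — so a value stored under the advertised name is
// silently ignored.
public class PrefixMismatchDemo {
    private static final String PREFIX = "ignite.jdbc.";

    /** Models the connect-time lookup, which prepends the prefix. */
    static String lookup(Properties props, String reportedName) {
        return props.getProperty(PREFIX + reportedName);
    }

    public static void main(String[] args) {
        Properties fromUiTool = new Properties();
        // A tool like DBeaver sets the property under the name the driver reported.
        fromUiTool.setProperty("enforceJoinOrder", "true");

        System.out.println(lookup(fromUiTool, "enforceJoinOrder")); // null: value lost
    }
}
```

Either the driver should report the prefixed names, or the lookup should also accept the unprefixed names it advertises.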
[jira] [Commented] (IGNITE-11704) Write tombstones during rebalance to get rid of deferred delete buffer
[ https://issues.apache.org/jira/browse/IGNITE-11704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944348#comment-16944348 ] Ignite TC Bot commented on IGNITE-11704: {panel:title=Branch: [pull/6931/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *--> Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=4657057&buildTypeId=IgniteTests24Java8_RunAll] > Write tombstones during rebalance to get rid of deferred delete buffer > -- > > Key: IGNITE-11704 > URL: https://issues.apache.org/jira/browse/IGNITE-11704 > Project: Ignite > Issue Type: Improvement >Reporter: Alexey Goncharuk >Assignee: Pavel Kovalenko >Priority: Major > Labels: rebalance > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > Currently Ignite relies on deferred delete buffer in order to handle > write-remove conflicts during rebalance. Given the limit size of the buffer, > this approach is fundamentally flawed, especially in case when persistence is > enabled. > I suggest to extend the logic of data storage to be able to store key > tombstones - to keep version for deleted entries. The tombstones will be > stored when rebalance is in progress and should be cleaned up when rebalance > is completed. > Later this approach may be used to implement fast partition rebalance based > on merkle trees (in this case, tombstones should be written on an incomplete > baseline). -- This message was sent by Atlassian Jira (v8.3.4#803005)
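The tombstone idea — keep a version for deleted entries so a stale write delivered by rebalance can be rejected — can be sketched with a plain map. This is a hedged model of the concept, not Ignite's page-memory implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of versioned tombstones: removal stores a tombstone (null value with
// a version) instead of deleting the mapping, so an older write arriving
// later via rebalance loses the version comparison.
public class TombstoneSketch {
    static final class Versioned {
        final String val; // null means tombstone
        final long ver;
        Versioned(String val, long ver) { this.val = val; this.ver = ver; }
    }

    private final Map<String, Versioned> store = new HashMap<>();

    /** Applies an update only if its version is newer than what we hold. */
    public boolean apply(String key, String val, long ver) {
        Versioned cur = store.get(key);
        if (cur != null && cur.ver >= ver)
            return false; // stale write is rejected
        store.put(key, new Versioned(val, ver));
        return true;
    }

    /** Removal writes a tombstone instead of dropping the mapping. */
    public void remove(String key, long ver) {
        apply(key, null, ver);
    }

    public String get(String key) {
        Versioned v = store.get(key);
        return v == null ? null : v.val;
    }

    public static void main(String[] args) {
        TombstoneSketch s = new TombstoneSketch();
        s.apply("k", "v1", 1);
        s.remove("k", 3);               // tombstone at version 3
        s.apply("k", "stale", 2);       // rebalance delivers an older write
        System.out.println(s.get("k")); // null: the tombstone wins
    }
}
```

A bounded deferred-delete buffer can forget the deletion under load; a persisted tombstone cannot, which is why the ticket calls the buffer approach fundamentally flawed with persistence enabled.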
[jira] [Comment Edited] (IGNITE-12129) Upgrade spring-5.0.X and spring-data-2.0.X dependencies to latest versions
[ https://issues.apache.org/jira/browse/IGNITE-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944337#comment-16944337 ] Surkov Aleksandr edited comment on IGNITE-12129 at 10/4/19 9:07 AM: [~p.klevakin] we could not update spring-data 2.0.X to 2.X and spring-5.0 to 5.X because module *ignite-spring-data_2.0* are used only for spring 5.0 and spring-data 2.0. I'll update spring to 5.0.14 and spring-data to 2.0.14 and create ticket for update spring to 5.2.X and spring-data-2.2.X. was (Author: surkov): [~p.klevakin] we could not update spring-data 2.0.X to 2.X and spring-5.0 to 5.X because module *ignite-spring-data_2.0* are used ** only for spring 5.0 and spring-data 2.0. I'll update spring to 5.0.14 and spring-data to 2.0.14 and create ticket for update spring to 5.2.X and spring-data-2.2.X. > Upgrade spring-5.0.X and spring-data-2.0.X dependencies to latest versions > -- > > Key: IGNITE-12129 > URL: https://issues.apache.org/jira/browse/IGNITE-12129 > Project: Ignite > Issue Type: Wish > Components: spring >Reporter: Pavel >Assignee: Surkov Aleksandr >Priority: Major > > The actual spring version is 5.1.9.RELEASE, spring boot version is > 2.1.7.RELEASE > In Ignite master branch spring versions are 5.0.8.RELEASE and 2.0.9.RELEASE -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (IGNITE-12129) Upgrade spring-5.0.X and spring-data-2.0.X dependencies to latest versions
[ https://issues.apache.org/jira/browse/IGNITE-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944337#comment-16944337 ] Surkov Aleksandr commented on IGNITE-12129: --- [~p.klevakin] we could not update spring-data 2.0.X to 2.X and spring-5.0 to 5.X because module *ignite-spring-data_2.0* are used ** only for spring 5.0 and spring-data 2.0. I'll update spring to 5.0.14 and spring-data to 2.0.14 and create ticket for update spring to 5.2.X and spring-data-2.2.X. > Upgrade spring-5.0.X and spring-data-2.0.X dependencies to latest versions > -- > > Key: IGNITE-12129 > URL: https://issues.apache.org/jira/browse/IGNITE-12129 > Project: Ignite > Issue Type: Wish > Components: spring >Reporter: Pavel >Assignee: Surkov Aleksandr >Priority: Major > > The actual spring version is 5.1.9.RELEASE, spring boot version is > 2.1.7.RELEASE > In Ignite master branch spring versions are 5.0.8.RELEASE and 2.0.9.RELEASE -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (IGNITE-12129) Upgrade spring-5.0.X and spring-data-2.0.X dependencies to latest versions
[ https://issues.apache.org/jira/browse/IGNITE-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Surkov Aleksandr updated IGNITE-12129: -- Summary: Upgrade spring-5.0.X and spring-data-2.0.X dependencies to latest versions (was: Upgrade spring dependencies to latest versions) > Upgrade spring-5.0.X and spring-data-2.0.X dependencies to latest versions > -- > > Key: IGNITE-12129 > URL: https://issues.apache.org/jira/browse/IGNITE-12129 > Project: Ignite > Issue Type: Wish > Components: spring >Reporter: Pavel >Assignee: Surkov Aleksandr >Priority: Major > > The actual spring version is 5.1.9.RELEASE, spring boot version is > 2.1.7.RELEASE > In Ignite master branch spring versions are 5.0.8.RELEASE and 2.0.9.RELEASE
[jira] [Updated] (IGNITE-6444) Validate that copyOnRead flag is configured with on-heap cache enabled
[ https://issues.apache.org/jira/browse/IGNITE-6444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maxim Muzafarov updated IGNITE-6444: Fix Version/s: (was: 2.8) 2.9 > Validate that copyOnRead flag is configured with on-heap cache enabled > -- > > Key: IGNITE-6444 > URL: https://issues.apache.org/jira/browse/IGNITE-6444 > Project: Ignite > Issue Type: Improvement > Components: cache >Affects Versions: 2.0 >Reporter: Alexey Goncharuk >Priority: Major > Labels: usability > Fix For: 2.9 > > > Link to the user-list discussion: > http://apache-ignite-users.70518.x6.nabble.com/Ignite-2-1-0-CopyOnRead-Problem-td17009.html > It makes sense to validate the flag and print out a warning if on-heap cache > is disabled. I do not think that we should prevent node from startup because > this may break existing deployments.
[jira] [Commented] (IGNITE-12254) IO errors during write header of WAL files in FSYNC mode should be handled by failure handler
[ https://issues.apache.org/jira/browse/IGNITE-12254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944301#comment-16944301 ] Nikolay Izhikov commented on IGNITE-12254: -- LGTM > IO errors during write header of WAL files in FSYNC mode should be handled by > failure handler > - > > Key: IGNITE-12254 > URL: https://issues.apache.org/jira/browse/IGNITE-12254 > Project: Ignite > Issue Type: Bug >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Blocker > Time Spent: 20m > Remaining Estimate: 0h > > Currently, such errors can hang the cluster. > Reproducer: > {code:java} > @Test > public void testWalFsyncIOError() throws Exception { > cleanPersistenceDir(); > IgniteConfiguration cfg = new IgniteConfiguration(); > cfg.setCacheConfiguration(new > CacheConfiguration(DEFAULT_CACHE_NAME).setAtomicityMode(ATOMIC)); > cfg.setDataStorageConfiguration( > new DataStorageConfiguration() > .setDefaultDataRegionConfiguration( > new DataRegionConfiguration() > .setMaxSize(100L * 1024 * 1024) > .setPersistenceEnabled(true)) > .setWalMode(WALMode.FSYNC) > .setWalSegmentSize(512 * 1024) > .setWalBufferSize(512 * 1024)); > IgniteEx ignite0 = startGrid(new > IgniteConfiguration(cfg).setIgniteInstanceName("ignite0")); > IgniteEx ignite1 = startGrid(new > IgniteConfiguration(cfg).setIgniteInstanceName("ignite1")); > ignite0.cluster().active(true); > IgniteCache cache = ignite0.cache(DEFAULT_CACHE_NAME); > for (int i = 0; i < 1_000; i++) > cache.put(i, "Test value " + i); > > ((FileWriteAheadLogManager)ignite1.context().cache().context().wal()).setFileIOFactory(new > FileIOFactory() { > FileIOFactory delegateFactory = new RandomAccessFileIOFactory(); > @Override public FileIO create(File file, OpenOption... 
modes) > throws IOException { > final FileIO delegate = delegateFactory.create(file, modes); > return new FileIODecorator(delegate) { > @Override public int write(ByteBuffer srcBuf) throws > IOException { > throw new IOException("No space left on device"); > } > }; > } > }); > for (int i = 0; i < 2_000; i++) > try { > cache.put(i, "Test value " + i); > } > catch (Exception ignore) { > } > } > {code}
[jira] [Comment Edited] (IGNITE-6930) Optionally to do not write free list updates to WAL
[ https://issues.apache.org/jira/browse/IGNITE-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944281#comment-16944281 ] Aleksey Plekhanov edited comment on IGNITE-6930 at 10/4/19 7:09 AM: [~ivan.glukos], # The test assumes that PDS size didn't change between the first checkpoint and after several checkpoints. It's not true anymore with caching since the only final free-list state is persisted on checkpoint, some changed, but currently empty buckets are not persisted. So with caching PDS size in this test after the first checkpoint about 0.5 size of the original test, and after several checkpoints about 0.75 size of the original test. # This test checks that free-list works and pages cache flush correctly under the concurrent load. It helps me to catch a couple of concurrent bugs (these bugs have also reproduced by yardstick benchmark, but haven't reproduced by other tests on TC). I will add a comment about this. # I think they are too low level for some configuration files, but can be configured by system properties. I will change it. # I think 64 and 4 it's reasonable values. I've benchmarked with higher values, but it almost gives no performance boost. 8 (2 per bucket)- it's too small. There will be big overhead for service objects (at least 16 bytes per object, at least 3 objects: lock, GridLongList and arr inside GridLongList), so we will have 48 bytes for service objects per bucket and only 16 bytes (2 longs) of useful data. 64/4 is a more reliable configuration since we allocate more heap space (16*8=128 bytes) for useful data than for service objects. Also, I think choosing MAX_SIZE dynamically, it's not such a good idea, since, there can be more than one node inside one JVM and we don't know when and how many nodes will be started when we start first one. # Ok, I will implement counter of empty flushed buckets. 
was (Author: alex_pl): # The test assumes that PDS size didn't change between the first checkpoint and after several checkpoints. It's not true anymore with caching since the only final free-list state is persisted on checkpoint, some changed, but currently empty buckets are not persisted. So with caching PDS size in this test after the first checkpoint about 0.5 size of the original test, and after several checkpoints about 0.75 size of the original test. # This test checks that free-list works and pages cache flush correctly under the concurrent load. It helps me to catch a couple of concurrent bugs (these bugs have also reproduced by yardstick benchmark, but haven't reproduced by other tests on TC). I will add a comment about this. # I think they are too low level for some configuration files, but can be configured by system properties. I will change it. # I think 64 and 4 it's reasonable values. I've benchmarked with higher values, but it almost gives no performance boost. 8 (2 per bucket)- it's too small. There will be big overhead for service objects (at least 16 bytes per object, at least 3 objects: lock, GridLongList and arr inside GridLongList), so we will have 48 bytes for service objects per bucket and only 16 bytes (2 longs) of useful data. 64/4 is a more reliable configuration since we allocate more heap space (16*8=128 bytes) for useful data than for service objects. Also, I think choosing MAX_SIZE dynamically, it's not such a good idea, since, there can be more than one node inside one JVM and we don't know when and how many nodes will be started when we start first one. # Ok, I will implement counter of empty flushed buckets. 
> Optionally to do not write free list updates to WAL > --- > > Key: IGNITE-6930 > URL: https://issues.apache.org/jira/browse/IGNITE-6930 > Project: Ignite > Issue Type: Task > Components: cache >Reporter: Vladimir Ozerov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: IEP-8, performance > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > When cache entry is created, we need to write update the free list. When > entry is updated, we need to update free list(s) several times. Currently > free list is persistent structure, so every update to it must be logged to be > able to recover after crash. This may incur significant overhead, especially > for small entries. > E.g. this is how WAL for a single update looks like. "D" - updates with real > data, "F" - free-list management: > {code} > 1. [D] DataRecord [writeEntries=[UnwrapDataEntry[k = key, v = [ BinaryObject > [idHash=2053299190, hash=1986931360, typeId=-1580729813]], super = [DataEntry > [cacheId=94416770, op=UPDATE, writeVer=GridCacheVersion [topVer=122147562, > order=1510667560607, nodeOrder=1], partId=0, partCnt=4, s
[jira] [Commented] (IGNITE-6930) Optionally to do not write free list updates to WAL
[ https://issues.apache.org/jira/browse/IGNITE-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944281#comment-16944281 ] Aleksey Plekhanov commented on IGNITE-6930: --- # The test assumes that PDS size didn't change between the first checkpoint and after several checkpoints. It's not true anymore with caching since the only final free-list state is persisted on checkpoint, some changed, but currently empty buckets are not persisted. So with caching PDS size in this test after the first checkpoint about 0.5 size of the original test, and after several checkpoints about 0.75 size of the original test. # This test checks that free-list works and pages cache flush correctly under the concurrent load. It helps me to catch a couple of concurrent bugs (these bugs have also reproduced by yardstick benchmark, but haven't reproduced by other tests on TC). I will add a comment about this. # I think they are too low level for some configuration files, but can be configured by system properties. I will change it. # I think 64 and 4 it's reasonable values. I've benchmarked with higher values, but it almost gives no performance boost. 8 (2 per bucket)- it's too small. There will be big overhead for service objects (at least 16 bytes per object, at least 3 objects: lock, GridLongList and arr inside GridLongList), so we will have 48 bytes for service objects per bucket and only 16 bytes (2 longs) of useful data. 64/4 is a more reliable configuration since we allocate more heap space (16*8=128 bytes) for useful data than for service objects. Also, I think choosing MAX_SIZE dynamically, it's not such a good idea, since, there can be more than one node inside one JVM and we don't know when and how many nodes will be started when we start first one. # Ok, I will implement counter of empty flushed buckets. 
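The sizing argument in item 4 of the comment above can be checked with quick arithmetic. The 16-byte per-object header and the three service objects per bucket (lock, `GridLongList`, its backing array) are the commenter's estimates, not measured values:

```java
// Worked version of the bucket-sizing argument: compares useful cached data
// (8 bytes per page id) against fixed per-bucket service-object overhead for
// the two MAX_SIZE values discussed in the comment.
public class BucketOverheadCalc {
    static final int OBJ_HEADER = 16;      // assumed bytes per service object
    static final int SERVICE_OBJECTS = 3;  // lock, GridLongList, backing array
    static final int BUCKETS = 4;

    static void report(int maxSize) {
        int perBucket = maxSize / BUCKETS;          // cached page ids per bucket
        int useful = perBucket * Long.BYTES;        // 8 bytes per page id
        int service = SERVICE_OBJECTS * OBJ_HEADER; // fixed per-bucket overhead
        System.out.println("MAX_SIZE=" + maxSize
            + ": useful=" + useful + "B vs service=" + service + "B per bucket");
    }

    public static void main(String[] args) {
        report(8);   // 16B useful vs 48B service: overhead dominates
        report(64);  // 128B useful vs 48B service: data dominates
    }
}
```

This reproduces the comment's numbers: at MAX_SIZE=8 each bucket carries only 16 bytes of page ids against 48 bytes of service objects, while at the chosen 64/4 configuration the 128 bytes of useful data outweigh the overhead.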
> Optionally to do not write free list updates to WAL > --- > > Key: IGNITE-6930 > URL: https://issues.apache.org/jira/browse/IGNITE-6930 > Project: Ignite > Issue Type: Task > Components: cache >Reporter: Vladimir Ozerov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: IEP-8, performance > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > When cache entry is created, we need to write update the free list. When > entry is updated, we need to update free list(s) several times. Currently > free list is persistent structure, so every update to it must be logged to be > able to recover after crash. This may incur significant overhead, especially > for small entries. > E.g. this is how WAL for a single update looks like. "D" - updates with real > data, "F" - free-list management: > {code} > 1. [D] DataRecord [writeEntries=[UnwrapDataEntry[k = key, v = [ BinaryObject > [idHash=2053299190, hash=1986931360, typeId=-1580729813]], super = [DataEntry > [cacheId=94416770, op=UPDATE, writeVer=GridCacheVersion [topVer=122147562, > order=1510667560607, nodeOrder=1], partId=0, partCnt=4, super=WALRecord > [size=0, chainSize=0, pos=null, type=DATA_RECORD]] > 2. [F] PagesListRemovePageRecord [rmvdPageId=00010005, > pageId=00010006, grpId=94416770, super=PageDeltaRecord > [grpId=94416770, pageId=00010006, super=WALRecord [size=37, > chainSize=0, pos=null, type=PAGES_LIST_REMOVE_PAGE]]] > 3. [D] DataPageInsertRecord [super=PageDeltaRecord [grpId=94416770, > pageId=00010005, super=WALRecord [size=129, chainSize=0, pos=null, > type=DATA_PAGE_INSERT_RECORD]]] > 4. [F] PagesListAddPageRecord [dataPageId=00010005, > super=PageDeltaRecord [grpId=94416770, pageId=00010008, > super=WALRecord [size=37, chainSize=0, pos=null, type=PAGES_LIST_ADD_PAGE]]] > 5. [F] DataPageSetFreeListPageRecord [freeListPage=281474976710664, > super=PageDeltaRecord [grpId=94416770, pageId=00010005, > super=WALRecord [size=37, chainSize=0, pos=null, > type=DATA_PAGE_SET_FREE_LIST_PAGE]]] > 6. 
[D] ReplaceRecord [io=DataLeafIO[ver=1], idx=0, super=PageDeltaRecord > [grpId=94416770, pageId=00010004, super=WALRecord [size=47, > chainSize=0, pos=null, type=BTREE_PAGE_REPLACE]]] > 7. [F] DataPageRemoveRecord [itemId=0, super=PageDeltaRecord > [grpId=94416770, pageId=00010005, super=WALRecord [size=30, > chainSize=0, pos=null, type=DATA_PAGE_REMOVE_RECORD]]] > 8. [F] PagesListRemovePageRecord [rmvdPageId=00010005, > pageId=00010008, grpId=94416770, super=PageDeltaRecord > [grpId=94416770, pageId=00010008, super=WALRecord [size=37, > chainSize=0, pos=null, type=PAGES_LIST_REMOVE_PAGE]]] > 9. [F] DataPageSetFreeListPageRecord [freeListPage=0, super=PageDeltaRecord > [grpId=94416770, pageId=00010005, super=WALRecord [size=37, > c
[jira] [Assigned] (IGNITE-6943) Web console: LoadCaches method should activate cluster if persistent is configured, otherwise method doesn't work due to cluster is inactive
[ https://issues.apache.org/jira/browse/IGNITE-6943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kuznetsov reassigned IGNITE-6943: Assignee: (was: Vasiliy Sisko) > Web console: LoadCaches method should activate cluster if persistent is > configured, otherwise method doesn't work due to cluster is inactive > > > Key: IGNITE-6943 > URL: https://issues.apache.org/jira/browse/IGNITE-6943 > Project: Ignite > Issue Type: Bug > Components: wizards >Affects Versions: 2.3 >Reporter: Pavel Konstantinov >Priority: Major > > Web console can generate sample project. > Now we have a persistence feature. > Cluster with persistence started inactive and generated LoadCaches will not > work.
[jira] [Commented] (IGNITE-6930) Optionally to do not write free list updates to WAL
[ https://issues.apache.org/jira/browse/IGNITE-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944278#comment-16944278 ] Ignite TC Bot commented on IGNITE-6930: --- {panel:title=Branch: [pull/6893/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *--> Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=4654547&buildTypeId=IgniteTests24Java8_RunAll] > Optionally to do not write free list updates to WAL > --- > > Key: IGNITE-6930 > URL: https://issues.apache.org/jira/browse/IGNITE-6930 > Project: Ignite > Issue Type: Task > Components: cache >Reporter: Vladimir Ozerov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: IEP-8, performance > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > When cache entry is created, we need to write update the free list. When > entry is updated, we need to update free list(s) several times. Currently > free list is persistent structure, so every update to it must be logged to be > able to recover after crash. This may incur significant overhead, especially > for small entries. > E.g. this is how WAL for a single update looks like. "D" - updates with real > data, "F" - free-list management: > {code} > 1. [D] DataRecord [writeEntries=[UnwrapDataEntry[k = key, v = [ BinaryObject > [idHash=2053299190, hash=1986931360, typeId=-1580729813]], super = [DataEntry > [cacheId=94416770, op=UPDATE, writeVer=GridCacheVersion [topVer=122147562, > order=1510667560607, nodeOrder=1], partId=0, partCnt=4, super=WALRecord > [size=0, chainSize=0, pos=null, type=DATA_RECORD]] > 2. [F] PagesListRemovePageRecord [rmvdPageId=00010005, > pageId=00010006, grpId=94416770, super=PageDeltaRecord > [grpId=94416770, pageId=00010006, super=WALRecord [size=37, > chainSize=0, pos=null, type=PAGES_LIST_REMOVE_PAGE]]] > 3. 
[D] DataPageInsertRecord [super=PageDeltaRecord [grpId=94416770, > pageId=00010005, super=WALRecord [size=129, chainSize=0, pos=null, > type=DATA_PAGE_INSERT_RECORD]]] > 4. [F] PagesListAddPageRecord [dataPageId=00010005, > super=PageDeltaRecord [grpId=94416770, pageId=00010008, > super=WALRecord [size=37, chainSize=0, pos=null, type=PAGES_LIST_ADD_PAGE]]] > 5. [F] DataPageSetFreeListPageRecord [freeListPage=281474976710664, > super=PageDeltaRecord [grpId=94416770, pageId=00010005, > super=WALRecord [size=37, chainSize=0, pos=null, > type=DATA_PAGE_SET_FREE_LIST_PAGE]]] > 6. [D] ReplaceRecord [io=DataLeafIO[ver=1], idx=0, super=PageDeltaRecord > [grpId=94416770, pageId=00010004, super=WALRecord [size=47, > chainSize=0, pos=null, type=BTREE_PAGE_REPLACE]]] > 7. [F] DataPageRemoveRecord [itemId=0, super=PageDeltaRecord > [grpId=94416770, pageId=00010005, super=WALRecord [size=30, > chainSize=0, pos=null, type=DATA_PAGE_REMOVE_RECORD]]] > 8. [F] PagesListRemovePageRecord [rmvdPageId=00010005, > pageId=00010008, grpId=94416770, super=PageDeltaRecord > [grpId=94416770, pageId=00010008, super=WALRecord [size=37, > chainSize=0, pos=null, type=PAGES_LIST_REMOVE_PAGE]]] > 9. [F] DataPageSetFreeListPageRecord [freeListPage=0, super=PageDeltaRecord > [grpId=94416770, pageId=00010005, super=WALRecord [size=37, > chainSize=0, pos=null, type=DATA_PAGE_SET_FREE_LIST_PAGE]]] > 10. [F] PagesListAddPageRecord [dataPageId=00010005, > super=PageDeltaRecord [grpId=94416770, pageId=00010006, > super=WALRecord [size=37, chainSize=0, pos=null, type=PAGES_LIST_ADD_PAGE]]] > 11. [F] DataPageSetFreeListPageRecord [freeListPage=281474976710662, > super=PageDeltaRecord [grpId=94416770, pageId=00010005, > super=WALRecord [size=37, chainSize=0, pos=null, > type=DATA_PAGE_SET_FREE_LIST_PAGE]]] > {code} > If you sum all space required for operation (size in p.3 is shown incorrectly > here), you will see that data update required ~300 bytes, so do free list > update! 
> *Proposed solution* > 1) Optionally do not write free list updates to WAL > 2) In case of node restart we start with empty free lists, so data inserts > will have to allocate new pages > 3) When old data page is read, add it to the free list > 4) Start a background thread which will iterate over all old data pages and > re-create the free list, so that eventually all data pages are tracked.
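Steps 2-4 of the proposed solution can be sketched with stdlib primitives only. The `Page` model and class names below are hypothetical illustrations, not Ignite's page memory:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the proposal: after restart the free list starts empty, and a
// background task scans existing data pages, re-adding any page with free
// space, so the free list is eventually rebuilt without WAL records.
public class FreeListRebuildSketch {
    static final class Page {
        final long id;
        final int freeBytes;
        Page(long id, int freeBytes) { this.id = id; this.freeBytes = freeBytes; }
    }

    private final Queue<Long> freeList = new ConcurrentLinkedQueue<>();

    /** Core of steps 3-4: track every old page that still has free space. */
    public void rebuild(List<Page> oldPages) {
        for (Page p : oldPages)
            if (p.freeBytes > 0)
                freeList.add(p.id);
    }

    /** Step 4: run the scan on a background thread. */
    public Future<?> rebuildAsync(List<Page> oldPages, ExecutorService exec) {
        return exec.submit(() -> rebuild(oldPages));
    }

    public int trackedPages() { return freeList.size(); }

    public static void main(String[] args) throws Exception {
        List<Page> pages = Arrays.asList(new Page(1, 100), new Page(2, 0), new Page(3, 40));
        FreeListRebuildSketch fl = new FreeListRebuildSketch(); // empty after "restart"
        ExecutorService exec = Executors.newSingleThreadExecutor();
        fl.rebuildAsync(pages, exec).get(); // wait for the background rebuild
        exec.shutdown();
        System.out.println(fl.trackedPages()); // pages with free space are tracked again
    }
}
```

The trade-off is step 2: until the rebuild catches up, inserts may allocate fresh pages even though older pages have room, trading temporary space amplification for the eliminated WAL traffic.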