[jira] [Created] (IGNITE-12068) puzzling select result
JerryKwan created IGNITE-12068:
--
Summary: puzzling select result
Key: IGNITE-12068
URL: https://issues.apache.org/jira/browse/IGNITE-12068
Project: Ignite
Issue Type: Bug
Components: sql
Affects Versions: 2.7.5
Environment: System version: CentOS Linux release 7.6.1810 (Core)
Apache Ignite version: apache-ignite-2.7.5-1.noarch
Reporter: JerryKwan

Selecting by the first column of the primary key returns only one record, but it should return more. The following is how to reproduce this problem:

1. Create a table:

CREATE TABLE IF NOT EXISTS Person (
  id int,
  city_id int,
  name varchar,
  age int,
  company varchar,
  PRIMARY KEY (id, city_id)
);

2. Insert some records:

INSERT INTO Person (id, name, city_id) VALUES (1, 'John Doe', 3);
INSERT INTO Person (id, name, city_id) VALUES (1, 'John Dean', 4);
INSERT INTO Person (id, name, city_id) VALUES (2, 'Alex', 4);

3. Querying with 'select * from Person' shows all of the records, as expected [http://www.passimage.in/i/03da31c8f23cf64580d5.png]
4. Querying with 'select * from Person where id=1' returns only one record, NOT expected [http://www.passimage.in/i/f5491491a70c5d796823.png]
5. Querying with 'select * from Person where city_id=4' returns two records, as expected [http://www.passimage.in/i/ff0ee4f5e882983d779d.png]

Why does 'select * from Person where id=1' return only one record, and how can this be fixed? Are any special operations/configurations required?

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
Fwd: The Apache(R) Software Foundation Announces Annual Report for 2019 Fiscal Year
> > 18. Top 5 most active mailing lists (user@ + dev@): Flink, Beam, Lucene, *Ignite*, and Kafka;

Community fellows, congrats! Ignite continues to be one of the top ASF projects among the 300+ in the category above. Thanks to the dev community for its contributions, and to the user community for choosing Ignite and helping us improve it over time.

- Denis
Ignite PMC Chair

-- Forwarded message --
From: Sally Khudairi
Date: Tue, Aug 13, 2019 at 8:03 PM
Subject: Fwd: The Apache® Software Foundation Announces Annual Report for 2019 Fiscal Year
To: , ASF Operations
Cc: ASF Marketing & Publicity

We are live. Thank you, everyone, for your help in getting this completed.

Warm regards,
Sally

- - -
Vice President Marketing & Publicity
Vice President Sponsor Relations
The Apache Software Foundation

Tel +1 617 921 8656 | s...@apache.org

- Original message -
From: Sally Khudairi
To: Apache Announce List
Subject: The Apache® Software Foundation Announces Annual Report for 2019 Fiscal Year
Date: Tuesday, August 13, 2019 13:01

[this announcement is available online at https://s.apache.org/w7bw1 ]

World's largest Open Source foundation's 300+ freely-available, enterprise-grade Apache projects power some of the most visible and widely used applications in computing today.

Wakefield, MA —13 August 2019— The Apache® Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today the availability of the annual report for its 2019 fiscal year, which ended 30 April 2019.
Celebrating its 20th Anniversary, the world's largest Open Source foundation's "Apache Way" of community-driven development is the process behind hundreds of freely-available (100% no cost), enterprise-grade Apache projects that serve as the backbone for some of the most visible and widely used applications in Artificial Intelligence and Deep Learning, Big Data, build management, Cloud Computing, content management, DevOps, IoT and Edge computing, mobile, servers, and Web frameworks, among many other categories. The ubiquity of Apache software is undeniable, with Apache projects managing exabytes of data, executing teraflops of operations, and storing billions of objects in virtually every industry. Apache software is an integral part of nearly every end user computing device, from laptops to tablets to phones. Apache software is used in every Internet-connected country on the planet.

Highlights include:

1. ASF codebase is conservatively valued at $20B+, using the COCOMO 2 model;
2. Continued guardianship of 190M+ lines of code in the Apache repositories;
3. Profit for FY2018-2019: $585,486;
4. Total of 10 Platinum Sponsors, 9 Gold Sponsors, 11 Silver Sponsors, 25 Bronze Sponsors, and 6 Platinum Targeted Sponsors, 5 Gold Targeted Sponsors, 3 Silver Targeted Sponsors, and 10 Bronze Targeted Sponsors;
5. 35 new individual ASF Members elected, totalling 766;
6. Exceeded 7,000 code Committers;
7. 202 Top-Level communities overseeing 332 Apache projects and sub-projects;
8. 17 newly-graduated Top-Level Projects from the Apache Incubator;
9. 47 projects currently undergoing development in the Apache Incubator;
10. Top 5 most active/visited Apache projects: Hadoop, Kafka, Lucene, POI, ZooKeeper;
11. Top 5 Apache repositories by number of commits: Camel, Hadoop, HBase, Beam, and Flink;
12. Top 5 Apache repositories by lines of code: NetBeans, OpenOffice, Flex (combined), Mynewt (combined), and Trafodion;
13. 35M page views per week across apache.org;
14. 9M+ source code downloads from Apache mirrors (excluding convenience binaries);
15. Web requests received from every Internet-connected country on the planet;
16. 3,280 Committers changed 71,186,324 lines of code over 222,684 commits;
17. 18,750 authors sent 1,402,267 emails on 570,469 topics across 1,131 mailing lists;
18. Top 5 most active mailing lists (user@ + dev@): Flink, Beam, Lucene, Ignite, and Kafka;
19. Automated Gitbox across ~1,800 git repositories containing ~75GB of code and repository history;
20. Each GitHub account monitored for security compliance;
21. GitHub traffic: Top 5 most active Apache sources (clones): Thrift, Cordova, Arrow, Airflow, and Beam;
22. GitHub traffic: Top 5 most active Apache sources (visits): Spark, Camel, Flink, Kafka, and Airflow;
23. 24th anniversary of the Apache HTTP Server (20 years under the ASF umbrella);
24. 770 Individual Contributor License Agreements (CLAs) signed;
25. 28 Corporate Contributor License Agreements signed;
26. 26 Software Grant Agreements signed; and
27. ASF is a mentoring organization in Google Summer of Code for the 14th consecutive year.

The full report is available online at https://s.apache.org/FY2019AnnualReport

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server —the world's most popular Web server so
[jira] [Created] (IGNITE-12067) SQL: metrics of executions of user queries
Pavel Kuznetsov created IGNITE-12067:
--
Summary: SQL: metrics of executions of user queries
Key: IGNITE-12067
URL: https://issues.apache.org/jira/browse/IGNITE-12067
Project: Ignite
Issue Type: Bug
Components: sql
Reporter: Pavel Kuznetsov
Assignee: Pavel Kuznetsov

Let's add:
- a counter of successfully executed user queries;
- a counter of failed user queries;
- a counter of user queries failed due to OOM;
- a counter of cancelled user queries.

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
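The proposed counters could be sketched in plain Java as follows. This is a hedged illustration only: the class and method names are invented and are not the actual Ignite API.

```java
import java.util.concurrent.atomic.LongAdder;

/**
 * A minimal sketch of the counters proposed in IGNITE-12067.
 * LongAdder is used because query execution is a hot, highly
 * concurrent path. All names here are illustrative assumptions.
 */
class UserQueryMetrics {
    private final LongAdder success = new LongAdder();
    private final LongAdder failed = new LongAdder();
    private final LongAdder failedByOom = new LongAdder();
    private final LongAdder cancelled = new LongAdder();

    /** Invoked when a user query completes successfully. */
    void onSuccess() { success.increment(); }

    /** Invoked when a user query fails; OOM failures are additionally counted separately. */
    void onFailure(boolean oom) {
        failed.increment();
        if (oom)
            failedByOom.increment();
    }

    /** Invoked when a user query is cancelled. */
    void onCancel() { cancelled.increment(); }

    long successCount() { return success.sum(); }
    long failedCount() { return failed.sum(); }
    long failedByOomCount() { return failedByOom.sum(); }
    long cancelledCount() { return cancelled.sum(); }
}
```

Note that an OOM failure increments both the failed counter and the OOM-specific one, so the OOM counter is a subset of failures rather than a separate category.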
Do I have to use --illegal-access=permit for Java thin client and JDBC with JDK 9/10/11?
Hi Igniters,

I understand that --illegal-access=permit is required for JDK 9/10/11 on an Ignite server. But do I have to include this JVM parameter for the Ignite Java thin client and the JDBC client? I tried some simple tests without it and everything seems to work fine...

Thanks,
Shane
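For reference, a server-side launch line with the flag in question might look roughly like the following. This is a sketch only: the jar names and main class are placeholders, and the exact set of additional module flags depends on the Ignite version and the features used.

```shell
# Hypothetical server-node launch on JDK 9/10/11; app.jar and
# com.example.ServerNodeStartup are placeholders.
java --illegal-access=permit \
     --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED \
     --add-exports=java.base/sun.nio.ch=ALL-UNNAMED \
     -cp app.jar:ignite-core.jar com.example.ServerNodeStartup
```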
Re: SQL query timeout: in progress or abandoned
Hi Saikat,

Thanks for a quick turnaround! Ivan, could you please step in and do a review?

-
Denis

On Sun, Aug 11, 2019 at 6:26 AM Saikat Maitra wrote:
> Hi Denis, Ivan
>
> As discussed I have updated the PR and incorporated review comments.
>
> https://github.com/apache/ignite/pull/6490/files
>
> Please take a look and share your feedback.
>
> Regards,
> Saikat
>
> On Sat, Aug 10, 2019 at 5:51 PM Saikat Maitra wrote:
>>
>> Hello Denis, Ivan
>>
>> Yes, I can take up the changes for IGNITE-7285.
>>
>> I had a doubt on the usage of the Default Query Timeout.
>>
>> I had raised the PR on the assumption that the Default Query Timeout will
>> only be used if the user has not provided a Cache Query Timeout:
>>
>> https://github.com/apache/ignite/pull/6490/files
>>
>> I wanted to discuss whether this is the intended usage of the Default
>> Query Timeout, or should we reconsider?
>>
>> Regards,
>> Saikat
>>
>> On Fri, Aug 9, 2019 at 12:11 PM Denis Magda wrote:
>>>
>>> Ivan, thanks for sharing this discussion. Let's use it for our
>>> conversation.
>>>
>>> -
>>> Denis
>>>
>>> On Thu, Aug 8, 2019 at 11:15 PM Ivan Pavlukhin wrote:
>>>>
>>>> Just for the record, there was an original dev-list discussion [1].
>>>> Added a link to the ticket as well.
>>>>
>>>> [1] http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-7285-Add-default-query-timeout-td41828.html
>>>>
>>>> On Fri, 9 Aug 2019 at 01:22, Denis Magda wrote:
>>>>>
>>>>> Hey Saikat,
>>>>>
>>>>> Are you still working on this ticket?
>>>>> https://issues.apache.org/jira/browse/IGNITE-7285
>>>>>
>>>>> Seems that's the last API that doesn't support timeouts - the JDBC and
>>>>> ODBC drivers already go with it.
>>>>>
>>>>> If you don't have time to complete the changes, then someone else from
>>>>> the community can take over. We see a lot of demand for this API and
>>>>> here is one example:
>>>>> https://stackoverflow.com/questions/57275301/how-to-set-a-query-timeout-for-apache-ignite-cache
>>>>>
>>>>> -
>>>>> Denis
>>>>
>>>> --
>>>> Best regards,
>>>> Ivan Pavlukhin
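The fallback semantics debated in this thread reduce to a simple rule; the sketch below is an assumption about the intended IGNITE-7285 behavior, not the final implementation: a cluster-wide default query timeout applies only when the query itself does not carry an explicit timeout.

```java
/**
 * Illustrative sketch of the "default query timeout" fallback discussed
 * in this thread. Names and semantics here are assumptions, not the
 * actual Ignite API.
 */
class QueryTimeouts {
    /**
     * @param queryTimeoutMs   per-query timeout; 0 means "not set".
     * @param defaultTimeoutMs cluster-wide default; 0 means "no timeout".
     * @return effective timeout in milliseconds; 0 means "no timeout".
     */
    static long effectiveTimeoutMs(long queryTimeoutMs, long defaultTimeoutMs) {
        // An explicit per-query timeout always wins; otherwise fall back to the default.
        return queryTimeoutMs > 0 ? queryTimeoutMs : defaultTimeoutMs;
    }
}
```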
[jira] [Created] (IGNITE-12066) Add transmission chunk size to IgniteConfiguration
Maxim Muzafarov created IGNITE-12066:
--
Summary: Add transmission chunk size to IgniteConfiguration
Key: IGNITE-12066
URL: https://issues.apache.org/jira/browse/IGNITE-12066
Project: Ignite
Issue Type: Improvement
Reporter: Maxim Muzafarov

Currently, the {{DFLT_CHUNK_SIZE_BYTES}} constant is used to set the size of the chunks of data sent to the remote node during file transmission. This value should be configurable via IgniteConfiguration.

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
[jira] [Created] (IGNITE-12065) File transmission speed limit
Maxim Muzafarov created IGNITE-12065:
--
Summary: File transmission speed limit
Key: IGNITE-12065
URL: https://issues.apache.org/jira/browse/IGNITE-12065
Project: Ignite
Issue Type: Improvement
Reporter: Maxim Muzafarov

We need to limit the transmission speed, since otherwise system resources can be exhausted.

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
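One common way to implement such a limit is a token-bucket-style pacer. The sketch below is purely illustrative (not the Ignite implementation): the sender reports how many bytes it just sent along with the current time, and gets back how long it should pause to stay at or below the target rate.

```java
/**
 * Minimal token-bucket-style sketch of a transmission speed limit.
 * All names are illustrative; time is passed in explicitly so the
 * logic is deterministic and testable.
 */
class RateLimiter {
    private final long bytesPerSec;   // target throughput cap
    private long windowStartNanos;    // start of the current 1-second window
    private long sentInWindow;        // bytes sent in the current window

    RateLimiter(long bytesPerSec, long nowNanos) {
        this.bytesPerSec = bytesPerSec;
        this.windowStartNanos = nowNanos;
    }

    /** @return nanoseconds the sender should sleep after sending {@code bytes}. */
    long acquire(long bytes, long nowNanos) {
        // Start a fresh window once a second has elapsed.
        if (nowNanos - windowStartNanos >= 1_000_000_000L) {
            windowStartNanos = nowNanos;
            sentInWindow = 0;
        }
        sentInWindow += bytes;
        if (sentInWindow <= bytesPerSec)
            return 0;
        // Pause long enough to amortize the excess bytes at the target rate.
        long excess = sentInWindow - bytesPerSec;
        return excess * 1_000_000_000L / bytesPerSec;
    }
}
```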
Re: Asynchronous registration of binary metadata
I would also like to mention that marshaller mappings are written to disk even if persistence is disabled. So, this issue affects purely in-memory clusters as well.

Denis

> On 13 Aug 2019, at 17:06, Denis Mekhanikov wrote:
>
> Hi!
>
> When persistence is enabled, binary metadata is written to disk upon
> registration. Currently this happens in the discovery thread, which makes
> processing of related messages very slow.
> There are cases where many nodes and slow disks can make every binary
> type take several minutes to register. Plus it blocks processing of other
> messages.
>
> I propose starting a separate thread that will be responsible for writing
> binary metadata to disk. So, binary type registration will be considered
> finished before information about it is written to disk on all nodes.
>
> The main concern here is data consistency in cases when a node acknowledges
> type registration and then fails before writing the metadata to disk.
> I see two parts of this issue:
> 1. Nodes will have different metadata after restarting.
> 2. If we write some data into a persisted cache and shut down nodes faster
> than a new binary type is written to disk, then after a restart we won't
> have the binary type to work with.
>
> The first case is similar to a situation where one node fails, and after
> that a new type is registered in the cluster. This issue is resolved by the
> discovery data exchange. All nodes receive information about all binary
> types in the initial discovery messages sent by other nodes. So, once you
> restart a node, it will receive the information it failed to finish writing
> to disk from the other nodes.
> If all nodes shut down before finishing writing the metadata to disk, then
> after a restart the type will be considered unregistered, so another
> registration will be required.
>
> The second case is a bit more complicated. But it can be resolved by making
> the discovery thread on every node create a future that will be completed
> when writing to disk is finished. So, every node will have such a future
> reflecting the current state of persisting the metadata to disk.
> After that, if some operation needs this binary type, it will have to wait
> on that future until flushing to disk is finished.
> This way discovery threads won't be blocked, but other threads, that
> actually need this type, will be.
>
> Please let me know what you think about that.
>
> Denis
Asynchronous registration of binary metadata
Hi!

When persistence is enabled, binary metadata is written to disk upon registration. Currently this happens in the discovery thread, which makes processing of related messages very slow. There are cases where many nodes and slow disks can make every binary type take several minutes to register. Plus it blocks processing of other messages.

I propose starting a separate thread that will be responsible for writing binary metadata to disk. So, binary type registration will be considered finished before information about it is written to disk on all nodes.

The main concern here is data consistency in cases when a node acknowledges type registration and then fails before writing the metadata to disk. I see two parts of this issue:

1. Nodes will have different metadata after restarting.
2. If we write some data into a persisted cache and shut down nodes faster than a new binary type is written to disk, then after a restart we won't have the binary type to work with.

The first case is similar to a situation where one node fails, and after that a new type is registered in the cluster. This issue is resolved by the discovery data exchange. All nodes receive information about all binary types in the initial discovery messages sent by other nodes. So, once you restart a node, it will receive the information it failed to finish writing to disk from the other nodes. If all nodes shut down before finishing writing the metadata to disk, then after a restart the type will be considered unregistered, so another registration will be required.

The second case is a bit more complicated. But it can be resolved by making the discovery thread on every node create a future that will be completed when writing to disk is finished. So, every node will have such a future reflecting the current state of persisting the metadata to disk. After that, if some operation needs this binary type, it will have to wait on that future until flushing to disk is finished. This way discovery threads won't be blocked, but other threads that actually need this type will be.

Please let me know what you think about that.

Denis
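The future-based scheme described above can be sketched in plain Java. This is a hedged illustration under stated assumptions: the class and method names are invented and are not Ignite internals. The discovery thread schedules the disk write and returns immediately; a dedicated writer thread performs it; any operation that needs the type to be durable waits on the corresponding future.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Illustrative sketch of asynchronous metadata persistence with per-type futures. */
class MetadataWriteAhead {
    /** Single dedicated thread that performs all metadata disk writes. */
    private final ExecutorService writer = Executors.newSingleThreadExecutor();

    /** One future per binary type id, completed once the metadata is on disk. */
    private final ConcurrentMap<Integer, CompletableFuture<Void>> pending = new ConcurrentHashMap<>();

    /** Called from the discovery thread: schedule the write, don't block. */
    CompletableFuture<Void> register(int typeId, Runnable writeToDisk) {
        return pending.computeIfAbsent(typeId,
            id -> CompletableFuture.runAsync(writeToDisk, writer));
    }

    /** Called by operations that need the type: block until it is on disk. */
    void awaitWritten(int typeId) {
        CompletableFuture<Void> fut = pending.get(typeId);
        if (fut != null)
            fut.join();
    }

    void shutdown() {
        writer.shutdown();
    }
}
```

This mirrors the trade-off stated in the proposal: the discovery thread never blocks on I/O, while only the threads that actually touch the new type pay the wait.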
[jira] [Created] (IGNITE-12064) Check licence headers by checkstyle plugin
Maxim Muzafarov created IGNITE-12064:
--
Summary: Check licence headers by checkstyle plugin
Key: IGNITE-12064
URL: https://issues.apache.org/jira/browse/IGNITE-12064
Project: Ignite
Issue Type: Improvement
Reporter: Maxim Muzafarov

Currently, the {{apache-rat-plugin}} is used to check that source files contain the specific license header. The {{[Licenses Headers]}} suite is configured on TC to do so. It is possible to achieve the same thing with the {{checkstyle-plugin}} (which is already run on each build). This would save the TC resources consumed by running both suites and simplify the Ignite {{pom.xml}}.

[1] https://checkstyle.sourceforge.io/config_header.html

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
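For illustration, a roughly equivalent checkstyle configuration fragment based on the Header module documented in [1] might look like this; the header file path is a placeholder, and the real Ignite setup would likely need a regexp-based header check to allow varying years:

```xml
<!-- Hypothetical checkstyle fragment; the headerFile path is a placeholder. -->
<module name="Checker">
  <module name="Header">
    <property name="headerFile" value="checkstyle/apache-license-header.txt"/>
    <property name="fileExtensions" value="java"/>
  </module>
</module>
```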
[jira] [Created] (IGNITE-12063) Add ability to track system/user time held in transaction
Denis Chudov created IGNITE-12063:
--
Summary: Add ability to track system/user time held in transaction
Key: IGNITE-12063
URL: https://issues.apache.org/jira/browse/IGNITE-12063
Project: Ignite
Issue Type: Improvement
Reporter: Denis Chudov
Assignee: Denis Chudov
Fix For: 2.8

We should dump the user/system times for a transaction to the log on commit/rollback if the transaction duration exceeds a threshold. I want to see in the log on the tx coordinator node:

1. Transaction duration.
2. System time:
   - How long were we acquiring locks on keys?
   - How long were we preparing the transaction?
   - How long were we committing the transaction?
3. User time (transaction time minus total system time).
4. Transaction status (commit/rollback).

The threshold could be set by a system property and overridden via JMX. We shouldn't dump the times if the property is not set.

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
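The accounting described in the ticket amounts to simple arithmetic over the measured phases. A minimal sketch follows; all names are illustrative, not the eventual Ignite implementation.

```java
/**
 * Illustrative sketch of the proposed transaction time breakdown:
 * "system" time is the sum of the measured phases (locking, prepare,
 * commit), and user time is whatever remains of the total duration.
 */
class TxTimeBreakdown {
    long totalMs;   // overall transaction duration
    long lockMs;    // time spent acquiring locks on keys
    long prepareMs; // time spent preparing the transaction
    long commitMs;  // time spent committing the transaction

    long systemMs() {
        return lockMs + prepareMs + commitMs;
    }

    long userMs() {
        return totalMs - systemMs();
    }

    /** Dump to the log only when a threshold is set and the transaction exceeded it. */
    boolean shouldDump(long thresholdMs) {
        return thresholdMs > 0 && totalMs > thresholdMs;
    }
}
```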
Re: Deprecate\remove REBALANCE_OBJECT_LOADED cache event
Anton,

I thought that we would mark these events with the @deprecated annotation and remove all the internal usages. We can safely remove them in 3.0 (see the issue [1] description).

[1] https://issues.apache.org/jira/browse/IGNITE-12035

On Tue, 13 Aug 2019 at 08:30, Anton Vinogradov wrote:
>
> +1 for removing, but...
> Is it suitable to remove some API outside of a major release?
>
> On Fri, Aug 2, 2019 at 11:32 AM Maxim Muzafarov wrote:
>>
>> Igniters,
>>
>> I've created a ticket [1].
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-12035
>>
>> On Thu, 1 Aug 2019 at 10:55, Pavel Kovalenko wrote:
>>>
>>> Hello Maxim,
>>>
>>> Thank you for researching this.
>>> It seems those events can be used as an interceptor for the rebalance
>>> process to take some extra actions after an entry is rebalanced.
>>> However, I don't see any real usages besides tests. Most likely the
>>> functionality that used such rebalance events no longer exists.
>>> I see no reason to keep them anymore.
>>> +1 for removing in 2.8
>>>
>>> On Wed, 31 Jul 2019 at 20:54, Maxim Muzafarov wrote:
>>>>
>>>> Igniters,
>>>>
>>>> I've come across the EVT_CACHE_REBALANCE_OBJECT_LOADED [1] and
>>>> EVT_CACHE_REBALANCE_OBJECT_UNLOADED [2] cache events and don't fully
>>>> understand their general purpose. I hope someone from the community can
>>>> clarify the initial idea behind adding these events.
>>>>
>>>> First, it seems to me that these events are a completely Ignite-internal
>>>> thing. Why should the user be able to subscribe to such events (beyond
>>>> tracking cache key metrics)? Once the data is loaded into a cache, I see
>>>> no reason to notify the user about cache keys moving from one node to
>>>> another when the cluster topology changes. It is Ignite's job to keep
>>>> the data consistent in all cases.
>>>>
>>>> Second, I haven't found any real usages of these events on
>>>> GitHub/Google. Most of the examples are related to our community
>>>> members and the Ignite documentation.
>>>>
>>>> Third, correct me if I am wrong, but subscribing to Ignite events can
>>>> have a strong influence on cluster performance. So the fewer events
>>>> available to users, the better the performance.
>>>>
>>>> I think these events can be easily removed in the next 2.8 release.
>>>> WDYT? Am I missing something?
>>>>
>>>> [1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/events/EventType.html#EVT_CACHE_REBALANCE_OBJECT_LOADED
>>>> [2] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/events/EventType.html#EVT_CACHE_REBALANCE_OBJECT_UNLOADED
[jira] [Created] (IGNITE-12062) IntMap throws NullPointerException when the map is being created
Stepachev Maksim created IGNITE-12062:
--
Summary: IntMap throws NullPointerException when the map is being created
Key: IGNITE-12062
URL: https://issues.apache.org/jira/browse/IGNITE-12062
Project: Ignite
Issue Type: Bug
Reporter: Stepachev Maksim
Assignee: Stepachev Maksim

The problem is located here:

    compactThreshold = (int)(COMPACT_LOAD_FACTOR * (entries.length >> 1));
    scaleThreshold = (int)(entries.length * SCALE_LOAD_FACTOR);

The fix looks like this:

    compactThreshold = (int)(COMPACT_LOAD_FACTOR * (entriesSize >> 1));
    scaleThreshold = (int)(entriesSize * SCALE_LOAD_FACTOR);

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)