RE: Is Apache Ignite appropriate for my use case?
Hi,

Ignite can only go so far in meeting your requirements for durable storage and retrieval. A primary node failure is probably going to be terminal: unless you can boot a machine in less than 5 seconds you can forget about reloading Ignite anyway, since you simply won't be able to restore your TCP connection in time. (Unless of course you have some radical kind of hardware with the OS in bubble memory or similar, but even then?)

So you need something like a p2p protocol such as BitTorrent, where the download is broken into chunks and delegated to multiple nodes, each holding some or all of the blob's data. The client has to be able to recover as well, either by round robin or by implementing a p2p protocol - you still can't depend on a single fallible point of entry to your service, because of the problem above. If you have such a p2p protocol, I'm not sure Ignite adds much of value?

John

-----Original Message-----
From: steven
Sent: Thursday, May 23, 2019 2:16 AM
To: user@ignite.apache.org
Subject: Is Apache Ignite appropriate for my use case?

Email received from outside the company. If in doubt don't click links nor open attachments!

Hi,

I need to manage a large fleet of servers (around 100k machines). Each node contains a subset of the data (basically binary blobs) in memory. The data store is a separate database. I am trying to provide a binary blob lookup service (in greatly simplified terms). For each incoming request (which contains a binary string), every node should perform matching of the string against its subset of the data. When the first matching binary blob is found, that result is returned, and all other nodes should stop searching. If no matches are found, NOT_FOUND should be returned. Requests must be handled within 5 seconds. If any node fails, it must be revived with the same subset of data that it had before.

One challenge is how to handle node failure in the middle of a request. It is unlikely that a node will be revived quickly enough to respond within 5 seconds. Most likely there should be standby nodes that will retrieve the failing node's subset of data from the data store and perform matching upon being notified of node failure.

Is Apache Ignite appropriate for this use case?

Thanks,
Steven

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses.
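Steven's matching protocol - fan the lookup out to every node, return the first hit, cancel the rest - can be sketched independently of Ignite. A minimal illustration in Python (not Ignite's API; in Ignite this would map to something like a broadcast compute task), with `shards` and `first_match` as invented names standing in for the per-node blob subsets and the coordinator:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def first_match(shards, needle, timeout=5.0):
    """Fan a lookup out to every shard; return the first hit and
    cancel the remaining searches, or NOT_FOUND if every shard misses."""
    def search(shard):
        for blob in shard:
            if needle in blob:
                return blob
        return None

    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        futures = [pool.submit(search, s) for s in shards]
        for fut in as_completed(futures, timeout=timeout):
            hit = fut.result()
            if hit is not None:
                for other in futures:
                    other.cancel()  # best effort: stop searches not yet started
                return hit
    return "NOT_FOUND"
```

The 5-second budget maps onto the `timeout`; the standby-node requirement sits outside this sketch - a coordinator would have to resubmit a failed shard's search to the standby holding the same subset.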
Integrity of write behind
Hi,

What happens if a node goes down while write-behind is in progress on a cache that's persisting to the database? Can the task of persistence be carried on exactly where it failed by a backup node? Are cache entries flagged when they have been successfully persisted, so that another node can pick up the task later? Should the persistence layer keep a version number or similar, so that updates are orderly and not duplicated?

John
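The version-number idea raised at the end is the usual way to make a replayed write-behind batch harmless. A toy sketch (plain Python, nothing Ignite-specific - `VersionedStore` and `flush` are invented names) of a persistence layer that skips entries it has already applied, so a batch re-flushed by a backup node is deduplicated rather than written twice:

```python
class VersionedStore:
    """Toy persistence layer that remembers the version of each row
    it has applied, making flushes idempotent under replay."""
    def __init__(self):
        self.rows = {}      # key -> value
        self.versions = {}  # key -> last applied version

    def apply(self, key, value, version):
        if version <= self.versions.get(key, -1):
            return False    # stale or duplicate flush: skip
        self.rows[key] = value
        self.versions[key] = version
        return True

def flush(store, batch):
    """Replay-safe flush: returns how many entries were actually written."""
    return sum(store.apply(k, v, ver) for k, v, ver in batch)
```

The design choice is that ordering and deduplication live in the store, so any node can pick up a half-finished batch without coordination beyond the version counter.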
RE: cache update slow
Hi,

Thanks for that observation. I increased the cache test to 100,000 entries and the average write time is far better, at around 23K wps. It seems like there is a lot of latency on the first few hundred writes. Do you have any benchmarks published?

John

From: Ilya Kasnacheev
Sent: Friday, April 26, 2019 7:29 PM
To: user@ignite.apache.org
Subject: Re: cache update slow

Hello!

I think that comparing steady-state benchmarks of multi-million operations versus 500 operations is misleading. 500 operations is probably not enough to gain full benefits from e.g. JIT.

Regards,
--
Ilya Kasnacheev

Fri, 26 Apr 2019 at 12:20, Coleman, JohnSteven (Agoda) <johnsteven.cole...@agoda.com>:

Hi,

Yes, comparing to DMA is an apples and oranges comparison, but it gives an idea of the relative gap in performance. A better comparison would be to a similar product such as NCache. They claim 20K wps*, thus 20 times faster than my Ignite results, but obviously I'd have to compare using my scenario for a valid comparison. Still, this is more like the kind of gap in performance I'd expect vs DMA. Then again, the NCache product is also quite different from Ignite, so what to say?

Regards,
John

http://www.alachisoft.com/ncache/ncache-performance-benchmarks.html

-----Original Message-----
From: Maxim.Pudov <pudov@gmail.com>
Sent: Friday, April 26, 2019 3:17 PM
To: user@ignite.apache.org
Subject: RE: cache update slow

Glad you met your requirements. I think it is not fair to compare Ignite with direct memory access, so I can't really say whether this is a good result or not. In your case the .NET process starts a Java process and communicates with it via JNI [1]. Also, Ignite stores cache data off-heap, which requires serialisation [2].

[1] https://apacheignite-net.readme.io/docs#section-ignite-and-ignitenet
[2] https://apacheignite.readme.io/docs/durable-memory

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
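Ilya's point about JIT warmup suggests excluding the first iterations from any measurement. A language-agnostic sketch of that technique (Python standing in for the .NET client; `bench` and `op` are invented names for illustration):

```python
import time

def bench(op, n, warmup=1000):
    """Run `op` n times after a warmup phase, returning mean latency in
    seconds for the measured phase only. The warmup iterations give the
    JIT (or any lazy initialisation) a chance to settle before timing."""
    for _ in range(warmup):
        op()
    start = time.perf_counter()
    for _ in range(n):
        op()
    return (time.perf_counter() - start) / n
```

On this scheme, a 500-operation run would be entirely warmup; the "latency on the first few hundred writes" John observed would simply be discarded from the average.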
RE: cache update slow
Hi,

Yes, comparing to DMA is an apples and oranges comparison, but it gives an idea of the relative gap in performance. A better comparison would be to a similar product such as NCache. They claim 20K wps*, thus 20 times faster than my Ignite results, but obviously I'd have to compare using my scenario for a valid comparison. Still, this is more like the kind of gap in performance I'd expect vs DMA. Then again, the NCache product is also quite different from Ignite, so what to say?

Regards,
John

http://www.alachisoft.com/ncache/ncache-performance-benchmarks.html

-----Original Message-----
From: Maxim.Pudov
Sent: Friday, April 26, 2019 3:17 PM
To: user@ignite.apache.org
Subject: RE: cache update slow

Glad you met your requirements. I think it is not fair to compare Ignite with direct memory access, so I can't really say whether this is a good result or not. In your case the .NET process starts a Java process and communicates with it via JNI [1]. Also, Ignite stores cache data off-heap, which requires serialisation [2].

[1] https://apacheignite-net.readme.io/docs#section-ignite-and-ignitenet
[2] https://apacheignite.readme.io/docs/durable-memory

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
RE: cache update slow
Hi,

Thanks for the tip. I implemented with the data streamer and observe a significant improvement. However, it still takes >1ms per cache entry addition, which is fast enough for my requirements but still >500 times slower than DMA. Is this largely a factor of network overhead (even though I use a localhost cache), or of the underlying caching mechanics?

Regards,
John

-----Original Message-----
From: Maxim.Pudov
Sent: Tuesday, April 23, 2019 8:12 PM
To: user@ignite.apache.org
Subject: RE: cache update slow

Thanks for sharing your code. I didn't realise you use .NET. Check out how you can benefit from the data streamer in .NET [1]. It was designed to populate your cache faster, so it could help you to improve performance.

[1] https://apacheignite-net.readme.io/docs/data-streamers

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
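The improvement John saw is the batching effect: a data streamer amortises per-operation overhead (round trips, marshalling) across many entries instead of paying it on every put. A toy model of that idea - `BatchingStreamer` is an invented name, not the IgniteDataStreamer API:

```python
class BatchingStreamer:
    """Toy data-streamer: buffers puts and hands them to `flush_fn`
    in batches, so fixed per-operation costs are paid once per batch
    rather than once per entry."""
    def __init__(self, flush_fn, batch_size=512):
        self.flush_fn = flush_fn
        self.batch_size = batch_size
        self.buffer = []
        self.flushes = 0    # each flush models one round trip

    def add(self, key, value):
        self.buffer.append((key, value))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flush_fn(self.buffer)
            self.flushes += 1
            self.buffer = []
```

The remaining >1ms per entry John measures is then the per-entry cost that batching cannot remove - which in the Ignite case the thread attributes to JNI crossing and off-heap serialisation rather than the network.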
RE: cache update slow
Hi,

I'm just using the cache created by running a local process, see below. This is just for a POC and works, but I do need to have high performance.

John

using (var ignite = Ignition.Start())
{
    QueryEntity[] entities = { new QueryEntity(typeof(int), typeof(SimpleTransaction)) };
    var cfg = new CacheConfiguration
    {
        OnheapCacheEnabled = true,
        ReadThrough = false,
        WriteThrough = false,
        KeepBinaryInStore = false,
        QueryEntities = entities,
        Name = "SimpleTransactions"
    };
    ignite.AddCacheConfiguration(cfg);
    ICache<int, SimpleTransaction> cache = ignite.GetOrCreateCache<int, SimpleTransaction>(cfg);

-----Original Message-----
From: Maxim.Pudov
Sent: Friday, April 19, 2019 3:37 PM
To: user@ignite.apache.org
Subject: Re: cache update slow

Hi, the execution time depends on the configuration of your cache and your cluster. How many nodes do you have? What is your cache configuration? Have you tried the Ignite data streamer [1] instead of cache.put(K,V)?

[1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteDataStreamer.html

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
cache update slow
Hi,

I'm inserting and then updating 250 cache entries using a couple of threads, so a total of 500 cache puts. The initial 250 writes take a little under 500ms, so each initial put is taking approx. 2ms. While that's a little faster than writing to a database, I would expect a RAM write to be much faster, i.e. sub-millisecond. Any ideas why the cache write is not so fast, and how to improve it?

John
RE: efficient write through
Hi,

I was checking for the existence of the row, but that is an extra read that could be avoided. Now I have changed my stored procedure to update first, and then insert if the update failed to find a match with the cache key value.

John

From: Ilya Kasnacheev
Sent: Monday, April 15, 2019 4:14 PM
To: user@ignite.apache.org
Subject: Re: efficient write through

Hello!

I'm not aware of the possibility of modifying the object in Cache Store. I would not recommend trying to do that. What's the problem of checking row existence by Id (i.e. the Cache Key)?

Regards,
--
Ilya Kasnacheev

Sat, 13 Apr 2019 at 06:53, Coleman, JohnSteven (Agoda) <johnsteven.cole...@agoda.com>:

Currently my write-through executes a stored procedure, but it has to identify whether to insert or update, which is inefficient as well as using locks and a transaction. If I disable write-behind caching my cache put processing slows by a factor of 10. I'm thinking of adding a rowID field to the cache values that the database will return and add to the value after writing, so that I will know if the value is already stored or is a new item that needs inserting. What approaches are recommended for ID fields? It would be nice if I didn't have to have a different key value and row ID field, but the catch-22 is that I can't wait for the DB to assign an ID when I put. At present I let my code assign the cache key value; maybe I should use a Guid rather than a sequence? Suggestions please.

John

CREATE PROCEDURE [dbo].[updateorinsert_simple_transaction]
    (@id int,
     @charge_amount decimal(18,6),
     @fee_amount decimal(18,6),
     @event_status tinyint)
AS
BEGIN TRANSACTION
    IF EXISTS (SELECT 1 FROM [dbo].[simpletransaction] WITH (UPDLOCK, SERIALIZABLE) WHERE id = @id)
    BEGIN
        UPDATE [dbo].[simpletransaction]
        SET charge_amount = @charge_amount,
            fee_amount = @fee_amount,
            event_status = @event_status
        WHERE id = @id
    END
    ELSE
    BEGIN
        INSERT INTO [dbo].[simpletransaction] (id, charge_amount, fee_amount, event_status)
        VALUES (@id, @charge_amount, @fee_amount, @event_status)
    END
COMMIT TRANSACTION
GO
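The update-then-insert rework John describes can be expressed abstractly: attempt the UPDATE, and only when it matches no row fall back to INSERT, saving the existence SELECT of the IF EXISTS version. A toy sketch with a dict standing in for the table (`upsert` is an invented helper illustrating the control flow, not the stored procedure itself):

```python
def upsert(rows, key, value):
    """Update-first upsert: try the UPDATE, and only on rowcount 0
    (key absent) fall back to INSERT. `rows` models the table."""
    if key in rows:          # UPDATE ... WHERE id = @id matched a row
        rows[key] = value
        return "updated"
    rows[key] = value        # rowcount was 0 -> INSERT
    return "inserted"
```

Update-first pays off when updates dominate - which fits John's workload, where every entry is inserted once and then updated.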
efficient write through
Currently my write-through executes a stored procedure, but it has to identify whether to insert or update, which is inefficient as well as using locks and a transaction. If I disable write-behind caching my cache put processing slows by a factor of 10. I'm thinking of adding a rowID field to the cache values that the database will return and add to the value after writing, so that I will know if the value is already stored or is a new item that needs inserting. What approaches are recommended for ID fields? It would be nice if I didn't have to have a different key value and row ID field, but the catch-22 is that I can't wait for the DB to assign an ID when I put. At present I let my code assign the cache key value; maybe I should use a Guid rather than a sequence? Suggestions please.

John

CREATE PROCEDURE [dbo].[updateorinsert_simple_transaction]
    (@id int,
     @charge_amount decimal(18,6),
     @fee_amount decimal(18,6),
     @event_status tinyint)
AS
BEGIN TRANSACTION
    IF EXISTS (SELECT 1 FROM [dbo].[simpletransaction] WITH (UPDLOCK, SERIALIZABLE) WHERE id = @id)
    BEGIN
        UPDATE [dbo].[simpletransaction]
        SET charge_amount = @charge_amount,
            fee_amount = @fee_amount,
            event_status = @event_status
        WHERE id = @id
    END
    ELSE
    BEGIN
        INSERT INTO [dbo].[simpletransaction] (id, charge_amount, fee_amount, event_status)
        VALUES (@id, @charge_amount, @fee_amount, @event_status)
    END
COMMIT TRANSACTION
GO
RE: config guide
For CacheStore.write, is there any need to lock in the database to prevent simultaneous row updates? Presumably it's quicker to lock the key at the cache level and allow the DB lock-free, non-transactional updates?

John

From: Ilya Kasnacheev
Sent: Thursday, April 11, 2019 4:31 PM
To: user@ignite.apache.org
Subject: Re: config guide

Hello!

Most of those topics are solidly covered in our docs: https://apacheignite.readme.io/docs

More precisely:
https://apacheignite.readme.io/docs/cache-configuration
https://apacheignite.readme.io/docs/3rd-party-store#section-custom-cachestore

Please feel free to ask if any questions remain.

Regards,
--
Ilya Kasnacheev

Thu, 11 Apr 2019 at 04:42, Coleman, JohnSteven (Agoda) <johnsteven.cole...@agoda.com>:

What is the best guide to Ignite configuration? For example, how to have a distributed cache that is both partitioned, to spread the values, and resilient, so nothing is lost on node failure? Also, examples of how to efficiently reload a whole table into a cache after a DC/full-cluster failure, and how to load highly used data in local memory vs a distributed cache vs persistent storage.

John
config guide
What is the best guide to Ignite configuration? For example, how to have a distributed cache that is both partitioned, to spread the values, and resilient, so nothing is lost on node failure? Also, examples of how to efficiently reload a whole table into a cache after a DC/full-cluster failure, and how to load highly used data in local memory vs a distributed cache vs persistent storage.

John
Ignite.Net ContinuousQuery
I'm trying to get a continuous query running to update an entry when it's put in the cache with a 0 status. A producer thread populates the cache, and this thread (code below*) should do the job, but nothing happens - no listener or filter activity at all. I'm wondering if this task is even connected to the same cache instance? Where should the computing logic for the continuous query go - is it the same as for the initial query?

Thanks,
John

From https://github.com/JohnSColeman/IgniteDemo/blob/master/ConsoleApp1/FeeComputer.cs

var eventListener = new EventListener();
var qry = new ContinuousQuery(eventListener);
var initialQry = new ScanQuery(new InitialFilter());
using (var queryHandle = _cache.QueryContinuous(qry, initialQry))
{
    foreach (var entry in queryHandle.GetInitialQueryCursor())
    {
        var simpleTransaction = entry.Value;
        ComputeFees(simpleTransaction);
        simpleTransaction.EventStatus = 1;
        _cache.Put(simpleTransaction.Id, simpleTransaction);
        Console.WriteLine("computed: " + simpleTransaction);
    }
}
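The shape being attempted - an initial scan over existing matching entries, then a listener firing on every later put - can be modelled without Ignite. A toy sketch (`MiniCache` and `query_continuous` are invented names; Ignite's real ContinuousQuery involves a remote filter and local listener with their own semantics):

```python
class MiniCache:
    """Toy cache with a continuous-query-style subscription: the call
    first returns an initial scan of matching entries, then the
    callback fires on every subsequent put that passes the filter."""
    def __init__(self):
        self.data = {}
        self.listeners = []

    def put(self, key, value):
        self.data[key] = value
        for fltr, cb in self.listeners:
            if fltr(value):
                cb(key, value)

    def query_continuous(self, fltr, cb):
        # initial query: existing entries that pass the filter
        initial = [(k, v) for k, v in self.data.items() if fltr(v)]
        # then register for future puts
        self.listeners.append((fltr, cb))
        return initial
```

Two things visible even in the toy: the subscription only sees puts on the *same* cache instance, which is the first thing to check in the code above; and if the handler re-puts the entry with status 1, the filter must exclude status 1 or the listener retriggers on its own writes.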
starting ignite in docker
I'd like to pass a local configuration file to Docker as below, but Ignite doesn't find it - any ideas? The file is in the PWD directory, using Windows PowerShell.

docker run -it --rm --net=host -v ${PWD}:/apache-ignite/config -e "CONFIG_URI=file:///apache-ignite/config/ignite-config.xml" --name ignite apacheignite/ignite

/opt/ignite/apache-ignite/bin/ignite.sh, WARN: Failed to resolve JMX host (JMX will be disabled): linuxkit-00155d0f670e
class org.apache.ignite.IgniteException: Failed to instantiate Spring XML application context [springUrl=file:/apache-ignite/config/ignite-config.xml, err=IOException parsing XML document from URL [file:/apache-ignite/config/ignite-config.xml]; nested exception is java.io.FileNotFoundException: /apache-ignite/config/ignite-config.xml (No such file or directory)]
    at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:1025)
    at org.apache.ignite.Ignition.start(Ignition.java:351)
    at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:301)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to instantiate Spring XML application context [springUrl=file:/apache-ignite/config/ignite-config.xml, err=IOException parsing XML document from URL [file:/apache-ignite/config/ignite-config.xml]; nested exception is java.io.FileNotFoundException: /apache-ignite/config/ignite-config.xml (No such file or directory)]