Re: Create a Blog with Example code

2023-03-03 Thread vtchernyi
Hi Humphrey,

Here is my own contribution in sharing experience with Apache Ignite. Back then I was starting a new project and spent a few weeks solving data-load performance problems. I was inspired by the post [2]; it was very helpful that I could load, compile, and debug the complete project code from GitHub. So I spent another few weeks and wrote my own post [1], also with a compilable project inside.

As for your question, I think a post about monitoring the caches with Grafana would be useful.

PS: Many thanks to Kseniya Romanova, Denis Magda, and Susan Ledford for their help in my work.

Vladimir

[1] https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api
[2] https://www.gridgain.com/resources/blog/implementing-microservices-apache-ignite-service-apis-part-i

00:41, 4 March 2023, Humphrey Lopez:
I've no idea yet. Lately I have been working with Spring Boot and Ignite again and think more people could use some help getting started. I would also like to set up a local K8s cluster with microk8s, deploy the service and some nodes (client and server), use a REST endpoint to populate the caches, and monitor them with Grafana.
Humphrey

On 3 Mar 2023, at 10:57, Kseniya Romanova wrote:
Hi Humphrey! Good idea! If you need a review before publishing, please let me know and I'll find a committer who could help with this. Where do you plan to publish your blog? I know that many Igniters prefer DZone or dev.to, but personal GitHub or Medium blogs work as well.
Cheers, Kseniya

On Fri, Mar 3, 2023 at 10:21 AM Humphrey wrote:
Hi,

I would like to create a blog with code and share my experience with Apache Ignite. What's the best way to do that?

Greetings, Humphrey
-- Sent from the Yandex Mail mobile app

Re: BinaryObject Data Can Not Mapping To SQL-Data

2022-04-19 Thread vtchernyi
Hi Tianyue,

IMHO a fully compilable project is useful for a newbie, while short code snippets are not. You can start a single-server cluster and debug code in your IDE to check some suggestions about how it works. A couple of years ago I found such a compilable project describing microservices, written by @DenisMagda; it helped a lot. So I hope my post will be useful.

Vladimir

PS: I know that out there in China the last name is written first, while here in Russia it is written last, so the names are Vladimir or Zhenya. Hope I am correct and your name is Tianyue.

5:12, 20 April 2022, y:
Hi Stanilovsky,
I don't know how to describe my problem to you, but I'm sure there is no error: the data was successfully inserted but not mapped to SQL data. Vladimir gave me a link: https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api. I decided to take a look at this link first. Anyway, thanks for your advice, and I hope I can help you in the future.
Tianyue Hu, 2022/4/20

On 2022-04-19 14:50:51, "Zhenya Stanilovsky" wrote:
Hi!
BinaryObjectBuilder oldBuilder = igniteClient.binary().builder("com.inspur...PubPartitionKeys_1_7");
Do you call oldBuilder.build() after? If so, what does this mean? "Data is not mapped to SQL" -- is it an error in the log, on the client side, or something else? Thanks!

Hi, I have had the same experience without SQL, using the key-value API only. My cluster consists of several data nodes and a self-written jar application that starts the client node. When started, the client node executes map-reduce tasks for data load and processing. The workaround is as follows:
1. create a POJO on the client node;
2. convert it to a binary object;
3. on the data node, get the binary object over the network and get its builder (obj.toBuilder());
4. set some fields, build, and put in the cache.
The builder on step 3 seems to be the same as the one on the client node.
Hope that helps,
Vladimir

13:06, 18 April 2022, y:
Hi,
When using binary to insert data, I need to get an existing BinaryObject/BinaryObjectBuilder from the database, similar to the code below. [inline image] If I create a BinaryObjectBuilder directly, inserting binary data does not map to table data. The following code will not throw an error, but the data is not mapped to SQL. If there is no data in my table at first, how can I insert data? [inline image]

Re: BinaryObject Data Can Not Mapping To SQL-Data

2022-04-18 Thread vtchernyi
Hi,

I have had the same experience without SQL, using the key-value API only. My cluster consists of several data nodes and a self-written jar application that starts the client node. When started, the client node executes map-reduce tasks for data load and processing.

The workaround is as follows:
1. create a POJO on the client node;
2. convert it to a binary object;
3. on the data node, get the binary object over the network and get its builder (obj.toBuilder());
4. set some fields, build, and put in the cache.

The builder on step 3 seems to be the same as the one on the client node.

Hope that helps,
Vladimir

13:06, 18 April 2022, y:
Hi,
When using binary to insert data, I need to get an existing BinaryObject/BinaryObjectBuilder from the database, similar to the code below. [inline image] If I create a BinaryObjectBuilder directly, inserting binary data does not map to table data. The following code will not throw an error, but the data is not mapped to SQL. If there is no data in my table at first, how can I insert data? [inline image]
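The four-step workaround can be sketched with the key-value binary API. This is a minimal single-node sketch, not the production code: the cache name and the Account fields are assumptions, and in the real setup steps 3-4 would run inside a compute job on the data node.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;

public class BinaryBuilderSketch {
    // Hypothetical POJO that exists only in the client-side jar.
    static class Account {
        int id;
        double balance;
        Account(int id, double balance) { this.id = id; this.balance = balance; }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, BinaryObject> cache =
                ignite.getOrCreateCache("accounts").<Integer, BinaryObject>withKeepBinary();

            // Steps 1-2: create the POJO on the client and convert it to a binary object.
            BinaryObject bo = ignite.binary().toBinary(new Account(1, 100.0));
            cache.put(1, bo);

            // Steps 3-4 (on a data node in the real setup): get the builder from
            // the received binary object, change fields, build, and put back.
            BinaryObjectBuilder builder = cache.get(1).toBuilder();
            builder.setField("balance", 150.0);
            cache.put(1, builder.build());

            // The updated field is readable without the Account class.
            double balance = cache.get(1).field("balance");
            System.out.println("balance = " + balance);
        }
    }
}
```

Because only BinaryObject instances travel between nodes, the server nodes never need the Account class on their classpath.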

[no subject]

2021-11-23 Thread vtchernyi
Hi community,

I am trying to remember how to set a non-default partition number for a cache. Do we have a java/xml example on the ignite.apache.org website?

Vladimir
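For reference, a minimal sketch of what such an example might look like (the cache name is an assumption; 1024 is RendezvousAffinityFunction's default partition count, overridden here with 512):

```java
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class PartitionConfigSketch {
    public static void main(String[] args) {
        // Hypothetical cache name.
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("myCache");

        // Override the default 1024 partitions with a non-default value.
        RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
        aff.setPartitions(512);
        cfg.setAffinity(aff);

        System.out.println("partitions = " + cfg.getAffinity().partitions());
    }
}
```

In Spring XML the same thing is the `partitions` property of a `RendezvousAffinityFunction` bean set on the cache configuration's `affinity` property.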


Re: Add field on cached class without restarting the whole cluster

2021-10-26 Thread vtchernyi
Hi,

You can solve the problem as follows:
1) create your POJO at the client, where all the classes are present;
2) convert the POJO to a BinaryObject on the client;
3) put the binary object in the cache.

In this way, you can add new fields whenever you want. However, if you want to change the type of an existing field, that will still be painful.

Vladimir

6:53, 27 October 2021, Surinder Mehra:
Peer class loading doesn't work on key and value objects of a class, as per the Ignite documentation.

On Wed, Oct 27, 2021, 08:44 Ilya Kazakov wrote:
Hi Rick. Actually, if you do not need to use your POJO classes on the server side (e.g. for some compute tasks or in services), you can try just enabling peerClassLoading and putting a new object from your client without any interruption. In this case, Ignite will serialize the new object and create a new schema for your POJO type in the cluster metadata. The best way is a monotonic expansion of the object's field set.
--Ilya

Wed, 27 Oct 2021 at 09:34, Rick Lee:
Dear all,
I'm currently on version 2.8 and use ignite.getOrCreateCache(cacheCfg) to create a cache, e.g. to cache an Account object, etc., on startup. Now, whenever I want to add a field to the Account class, I need to restart all nodes in the cluster to make the change effective, but obviously that introduces a service interruption. Is there any other way I can modify the class structure with only a rolling restart of the nodes instead of restarting the whole cluster?
Thanks & Regards,
Rick


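The binary-object route for adding a field without a restart can be sketched as follows (a minimal sketch; the type name, field names, and cache name are assumptions -- note the type name is just a string, so the server nodes never need the class):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;

public class AddFieldSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, BinaryObject> cache =
                ignite.getOrCreateCache("accounts").<Integer, BinaryObject>withKeepBinary();

            // Old-style record: only the fields the original POJO had.
            BinaryObjectBuilder b1 = ignite.binary().builder("com.example.Account"); // hypothetical type name
            b1.setField("id", 1);
            cache.put(1, b1.build());

            // Later: set a brand-new field. The binary schema evolves in place;
            // no class change and no node restart is needed.
            BinaryObjectBuilder b2 = cache.get(1).toBuilder();
            b2.setField("balance", 100.0); // field that never existed before
            cache.put(1, b2.build());

            System.out.println("has balance: " + cache.get(1).hasField("balance"));
        }
    }
}
```

Changing the *type* of an existing field, as noted above, is the painful case: the binary metadata for a type is monotonic, so a type conflict on the same field name is rejected.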

Re: Max cache entry size

2021-09-11 Thread vtchernyi
Hello Mark,

My experiments with Ignite clearly say that the value size should be about a few memory pages (one page is 4K), or maybe tens of pages, but not thousands. Enlarging the value size is no good; having millions of key-value pairs in a single cache is OK. Large values slow down data processing.

While building a proof of concept, I changed the cache key from Integer to an integer tuple (a BinaryObject) and got a real performance increase.

--Vladimir

23:55, 9 September 2021, Mark Peters:
Hello,
https://www.mail-archive.com/user@ignite.apache.org/msg30932.html
The above link states this:
>> 800MB entry is far above of the entry size that we ever expected to see.
>> Even brief holding of these entries on heap will cause problems for you, as
>> well as sending them over communication.
What is the max cache entry size then? I have objects that are at least around 100M, and potentially up to around 400M, currently being sent, and I am running out of heap space.
Thanks!
Mark
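The integer-tuple key mentioned above can be built without any key class at all, via the binary builder (a minimal sketch; the type and field names are assumptions, not the ones from the proof of concept):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

public class TupleKeySketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<BinaryObject, String> cache =
                ignite.getOrCreateCache("quotes").<BinaryObject, String>withKeepBinary();

            // Two-integer tuple key instead of a single Integer key.
            // Builder-built objects get field-based equality/hashing by default.
            BinaryObject key = ignite.binary().builder("QuoteKey") // hypothetical type name
                .setField("instrumentId", 42)
                .setField("dayId", 20210911)
                .build();

            cache.put(key, "some value");
            System.out.println(cache.get(key));
        }
    }
}
```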

Re: Peer ClassLoading Issue | Apache Ignite 2.10 with Spring Boot 2.3

2021-05-09 Thread vtchernyi
Hi Siva,

Thank you for reading my blog post. I have no idea what the problem is in your case; I just want to share some experience.

I do not use any user POJOs on the remote nodes. Instead, I create the POJO on the thick client node, convert it to a BinaryObject, and change that object on the remote node via object.toBuilder().setField().build(). I use the key-value API only, so no class-not-found issues arise.

Hope that helps,
Vladimir Chernyi

8:28, 9 May 2021, "siva.velich...@barclays.com":

Hi,
 
We are trying to use ignite for the first time in our project. We are trying to use ignite with persistence enabled.

 
Architecture is as follows.
 
A Spring Boot 2.3 application (thick client) tries to connect to an Apache Ignite cluster (3 nodes) with persistence enabled and peer class loading enabled.

 
There seems to be a weird issue with peer class loading.
 
We are trying to load huge data following the same approach as here -

https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api
 
Cache Configuration
 
cacheConfiguration.setName(CacheIdentifiers.USER_IGNITE_CACHE.toString());
cacheConfiguration.setIndexedTypes(String.class, IgniteUser1.class);
cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
cacheConfiguration.setStoreKeepBinary(true);

RendezvousAffinityFunction rendezvousAffinityFunction = new RendezvousAffinityFunction();
rendezvousAffinityFunction.setPartitions(512);
cacheConfiguration.setBackups(1);
cacheConfiguration.setAffinity(rendezvousAffinityFunction);
 
 
Scenario 1.  
 
Start the cluster → activate the cluster → start the thick client → loading clients / ignite.cluster fails
 
Exception occured in adding the data javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: Failed to resolve class name [platformId=0, platform=Java, typeId=620850656]
 
Scenario 2. 
 
Stop the thick client, rename the file from IgniteUser1 to IgniteUser, and restart the thick client; the classes are now copied to the cluster and it works fine.
 
I am not sure if there is an issue with grid deployment. Any help would be appreciated.
 
Thanks,
Siva.




Re: very fast loading of very big table

2021-02-18 Thread vtchernyi
Hi Denis,

The data space is 3.7 GB according to the MSSQL table properties.

Vladimir

9:47, 19 February 2021, Denis Magda:
Hello Vladimir,
Good to hear from you! How much is that in gigabytes?
-Denis

On Thu, Feb 18, 2021 at 10:06 PM wrote:
In Sep 2020 I published a paper about Loading Large Datasets into Apache Ignite by Using a Key-Value API (English [1] and Russian [2] versions). The approach described works in production but shows unacceptable performance for very large tables.
The story continues, and yesterday I finished a proof of concept for very fast loading of a very big table. A partitioned MSSQL table of about 295 million rows was loaded by a 4-node Ignite cluster in 3 min 35 sec. Each node executed its own SQL queries in parallel and then distributed the loaded values across the other cluster nodes.
Probably that result will be of interest to the community.
Regards,
Vladimir Chernyi
[1] https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api
[2] https://m.habr.com/ru/post/526708/

very fast loading of very big table

2021-02-18 Thread vtchernyi
In Sep 2020 I published a paper about Loading Large Datasets into Apache Ignite by Using a Key-Value API (English [1] and Russian [2] versions). The approach described works in production but shows unacceptable performance for very large tables.

The story continues, and yesterday I finished a proof of concept for very fast loading of a very big table. A partitioned MSSQL table of about 295 million rows was loaded by a 4-node Ignite cluster in 3 min 35 sec. Each node executed its own SQL queries in parallel and then distributed the loaded values across the other cluster nodes.

Probably that result will be of interest to the community.

Regards,
Vladimir Chernyi

[1] https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api
[2] https://m.habr.com/ru/post/526708/
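The scheme described -- each node reads its own slice of the source table in parallel, then distributes entries to their owners -- might be sketched like this. The JDBC URL, source table, and slicing column are assumptions for illustration, not the code from the proof of concept:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.IgniteException;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteRunnable;

public class ParallelLoadSketch {
    public static void main(String[] args) {
        try (Ignite client = Ignition.start()) {
            client.getOrCreateCache("bigTable");

            // Run one loader on every server node; each selects only "its" rows.
            client.compute().broadcast((IgniteRunnable) () -> {
                Ignite local = Ignition.localIgnite();
                String mySlice = local.cluster().localNode().consistentId().toString();

                try (Connection con = DriverManager.getConnection(
                         "jdbc:sqlserver://dbhost;databaseName=src");         // hypothetical URL
                     PreparedStatement st = con.prepareStatement(
                         "SELECT id, payload FROM src_table WHERE slice = ?"); // hypothetical slicing
                     IgniteDataStreamer<Integer, String> streamer =
                         local.dataStreamer("bigTable")) {
                    st.setString(1, mySlice);
                    try (ResultSet rs = st.executeQuery()) {
                        // The streamer batches entries and routes each one
                        // to the node that owns its key.
                        while (rs.next())
                            streamer.addData(rs.getInt("id"), rs.getString("payload"));
                    }
                }
                catch (SQLException e) {
                    throw new IgniteException(e);
                }
            });
        }
    }
}
```

The data streamer does the cross-node distribution mentioned in the post: each loader reads locally but addData() ships every entry to its primary node in batches.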

Re: Slow cache loading with an object key using affinity key

2021-01-24 Thread vtchernyi
Hi,

Please take a look at my post [1]. It is not exactly about your question, but it works in production for the row count you mention.

Regards,
Vladimir

[1] https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api

22:39, 22 January 2021, gvaidya:
Hi Alex,
The data distribution of itemid is very even. For the ~537K-record data set there are exactly 12 records per itemid, for all values of itemid.
I have reviewed all the documentation in the links you provided. I also switched to JDK 8 (I was using JDK 15 earlier) and set my JVM options per the documentation recommendations. No change in cache load times: it consistently takes 3 min to load ~537K records. Interestingly, the load process is not linear. I can see ~200K loaded in 30 sec; the next 30 sec get me to ~300K, the next 1 min to ~425K.
Here are my JVM options. Attached are the node server log and the GC log per your request.
Thanks,
Gautam
ignite-2a3b18c1.log GClog.current
-- Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: C# CacheStoreAdapter - Customizing Load, LoadCache methods

2020-12-02 Thread vtchernyi
Hi,

I do not use SQL because my interest is to get maximum performance; I use the key-value API instead. The same reason is why I use Java: it is native for Ignite. I think there should be wrappers for BinaryObject and Affinity in C#.

Vladimir

4:39, 3 December 2020, adumalagan:
I see, thanks for the clarification! I also have another question: are there C# equivalents to the Java interfaces BinaryObject and Affinity?

Re: C# CacheStoreAdapter - Customizing Load, LoadCache methods

2020-11-24 Thread vtchernyi
Hi,

Maybe my recent tutorial [1] will shed some light on the question. The story is about my experience in loading big tables into Ignite. Hope it helps despite the Java language inside.

Regards,
Vladimir

[1] https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api

21:06, 24 November 2020, Pavel Tupitsyn:
Run your application in the same data center as the database, so that network costs are minimized.

On Tue, Nov 24, 2020 at 8:54 PM ABDumalagan wrote:
I see -- do you have any suggestions for how I can work around the bottleneck and speed up data loading into the cache?


Re: Working with Spring Cloud and Docker

2020-10-02 Thread vtchernyi
Hello Sam,

The tutorial you wrote does not have any GitHub repo, right?

Regards,
Vladimir Tchernyi

1:05, 2 October 2020, Semyon Danilov:
Hello, Igniters!
We recently found an issue with Ignite + Spring Data, which is now fixed with https://issues.apache.org/jira/browse/IGNITE-13005.
I've also posted an article [1] on Apache Ignite, Spring Cloud, Spring Data, and Docker. In this article I build a simple REST application to showcase key features of Apache Ignite that can be useful to developers who use Spring and Docker.
If you have any suggestions or any experience to share, please leave me some feedback.
[1] https://www.gridgain.com/resources/blog/using-apache-ignite-spring-cloud-and-docker-create-rest-application
Kind regards,
Sam.

Fast Load Large Datasets

2020-09-24 Thread vtchernyi
Igniters,

My tutorial post about loading big tables into Apache Ignite has finally arrived [1]. Many thanks to @Denis Magda and @Ksenia Romanova for their valuable help.

[1] https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api

Vladimir


Re: read-though tutorial for a big table

2020-09-23 Thread vtchernyi
Hi Alex,

I have some good news.

>> experience and materials you mentioned in this thread

My tutorial is finally published: https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api

Hope that helps,
Vladimir

12:52, 22 June 2020, Alex Panchenko:
Hello Vladimir,
I'm building a high-load service to handle intensive read-write operations using Apache Ignite. I need exactly the same: "loading big tables from an RDBMS (Postgres) and creating cache entries based on table info". Could you please share the experience and materials you mentioned in this thread? I'd be much appreciative; I think it would help me and other Ignite users.
BTW: "This approach was tested in production and showed good timing being paired with MSSQL, tables from tens to hundreds of million rows." Is it possible to see some results of testing and/or performance metrics before and after using Ignite?
Thanks!

Re: read-though tutorial for a big table

2020-08-07 Thread vtchernyi
Hi Alex,

I do not feel like a guru about Ignite; there are much more experienced people in this chat. Summer happened, vacation and so forth. Please wait some time until my blog is ready.

I hope you can read Russian; my article [1] may be helpful meanwhile. It is the result of half a year of learning Ignite using the official doc sites, the user mailing list, and StackOverflow. Very helpful for me were Denis Magda's post about building microservices and a 2-line comment on StackOverflow by Val Kulichenko.

Hope that helps,
Vladimir

[1] https://m.habr.com/ru/post/472568/

16:27, 7 August 2020, Denis Magda:
Alex,
Please share a bit more detail on what you're struggling with. It sounds like you are looking for a specific piece of advice rather than generic performance suggestions.
-Denis

On Thursday, August 6, 2020, Alex Panchenko wrote:
Hello Vladimir,

Are there some key things you can share with us? Some checklist with the most important configuration params, or things we need to review/check? Anything would be helpful.

I've been playing with Ignite for the last few months and performance is still low. I have to decide whether to switch from Ignite to another solution or improve the performance ASAP.

Thanks

Re: How to optimize the connection terminal in debugging mode?

2020-08-06 Thread vtchernyi
Hi,

I do my debugging in a single-node cluster made of my local machine. I create a special testConfig.xml to be used for debugging only and specify only the local computer name in that file. In that case there is no 30-sec timeout and debugging is OK.

Vladimir

9:45, 6 August 2020, 38797715 <38797...@qq.com>:
Hi,
In the development scenario, when using the native client to connect to the server, if the single-step debugging interrupt time is too long, the connection between the client and the server may be broken. What parameters can be used to optimize the timeout in this scenario to facilitate debugging?
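The testConfig.xml itself is not shown in the thread; a programmatic Java equivalent of a localhost-only debug node with a relaxed failure-detection timeout might look like this (the 10-minute timeout is an assumption, chosen so breakpoints do not drop the node):

```java
import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class DebugConfigSketch {
    public static void main(String[] args) {
        // Discovery limited to the local machine only, so no other nodes can join.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500"));

        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discovery);
        // Large failure-detection timeout so a long pause at a breakpoint
        // does not break the client-server connection (assumption: 10 min).
        cfg.setFailureDetectionTimeout(600_000);

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("single-node debug cluster up: "
                + ignite.cluster().localNode().id());
        }
    }
}
```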

Re: read-though tutorial for a big table

2020-06-22 Thread vtchernyi
Hi Alex,

There is an NDA covering my work, so direct sharing is not an option. I see that a tutorial post of that kind would be topical, so I should start working; please wait some time. Right now I have nothing to share.

About production: the first thing I faced was turtle-slow inserting of values into the cache. After some effort, the SQL queries now take longer than the cache inserts, but my work got into production only after it became fast. That was a must. So I have no "before and after" state, only the "after" one.

Vladimir

12:52, 22 June 2020, Alex Panchenko:
Hello Vladimir,
I'm building a high-load service to handle intensive read-write operations using Apache Ignite. I need exactly the same: "loading big tables from an RDBMS (Postgres) and creating cache entries based on table info". Could you please share the experience and materials you mentioned in this thread? I'd be much appreciative; I think it would help me and other Ignite users.
BTW: "This approach was tested in production and showed good timing being paired with MSSQL, tables from tens to hundreds of million rows." Is it possible to see some results of testing and/or performance metrics before and after using Ignite?
Thanks!

Re: Using putAll(TreeMap) with BinaryObjects

2020-05-20 Thread vtchernyi
Hi,

I implemented a user POJO with a comparator on the client node; that POJO exists in the client process jar file. That approach works well on the client.

But it seems the POJO will not be zero-deployed, since it is just a user class without any system inheritance. So I didn't even try to use it in ComputeJobAdapter objects that are executed on the cluster nodes.

It would be great to have a comparable binary object implemented at the system level.

16:08, 20 May 2020, "Grigory.D":
For solution 2, should the Comparator implementation class be present on the server node (in case peer class loading is not enabled)? Or is it used only on the client side?

Re: read-though tutorial for a big table

2020-03-11 Thread vtchernyi
Hello Denis,

That is possible; my writing activities should be continued. The only question is getting my local project to production; there is no sense in writing another model example. So I hope there will be progress in the near future.

Vladimir

2:25, 12 March 2020, Denis Magda:
Hello Vladimir,
Just to clarify: are you suggesting creating a tutorial for data-loading scenarios where the data resides in an external database?
-Denis

On Tue, Mar 10, 2020 at 11:41 PM wrote:
Andrei, Evgenii, thanks for the answers.
As far as I can see, there is no ready-to-use tutorial. I managed to write a multi-threaded cache load procedure; the out-of-the-box loadCache method is extremely slow.
I spent about a month studying write-through topics, and finally got the same numbers as "capacity planning" says: a 0.8 GB MSSQL table on disk expands to 2.3 GB, so the size in RAM is 2.875 times bigger.
Is it beneficial to use BinaryObject instead of a user POJO? If yes, how do I create a BinaryObject without a POJO definition and deserialize it back to a POJO?
It would be great to have a kind of advanced GitHub example like https://github.com/dmagda/MicroServicesExample -- it helped a lot in understanding. The current documentation links do not help to build a real solution; they are mostly like a reference, with no option to compile and debug.
Vladimir

2:51, 11 March 2020, Evgenii Zhuravlev:
When you say that the result was poor, do you mean that data preloading took too much time, or is it just about get operations?
Evgenii

Tue, 10 Mar 2020 at 03:29, aealexsandrov:
Hi,

You can read the documentation articles:

https://apacheignite.readme.io/docs/3rd-party-store

If you are going to load the cache from a 3rd-party store (RDBMS),
the default implementation of CacheJdbcPojoStore can take a lot of time
to load the data, because it uses a single JDBC connection inside
(not a pool of connections).

You should probably implement your own version of CacheStore that reads
data from the RDBMS in several threads, e.g. using a JDBC connection pool.
The sources are open, so you can copy the existing implementation
and modify it:

https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStore.java

Otherwise, you can do the initial data loading using some streaming tools:

1)Spark integration with Ignite -
https://apacheignite-fs.readme.io/docs/ignite-data-frame
2)Kafka integration with Ignite -
https://apacheignite-mix.readme.io/docs/kafka-streamer

BR,
Andrei




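On the question quoted in this thread -- how to create a BinaryObject without a POJO definition -- a minimal sketch with the binary builder might look like this (the type and field names are assumptions; the type name is only a string, so no class is needed anywhere in the cluster):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

public class NoPojoSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, BinaryObject> cache =
                ignite.getOrCreateCache("people").<Integer, BinaryObject>withKeepBinary();

            // Build a binary object from scratch: no Person class exists.
            BinaryObject person = ignite.binary().builder("Person") // hypothetical type name
                .setField("name", "Ann")
                .setField("age", 30)
                .build();

            cache.put(1, person);

            // Read fields back, still without any class on the classpath.
            BinaryObject back = cache.get(1);
            int age = back.field("age");
            System.out.println(back.field("name") + ", age " + age);
            // back.deserialize() is the reverse direction, but it does require
            // a matching Person class on the caller's classpath.
        }
    }
}
```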

Re: read-though tutorial for a big table

2020-03-11 Thread vtchernyi
Andrei, Evgenii, thanks for the answers.

As far as I can see, there is no ready-to-use tutorial. I managed to write a multi-threaded cache load procedure; the out-of-the-box loadCache method is extremely slow.

I spent about a month studying write-through topics, and finally got the same numbers as "capacity planning" says: a 0.8 GB MSSQL table on disk expands to 2.3 GB, so the size in RAM is 2.875 times bigger.

Is it beneficial to use BinaryObject instead of a user POJO? If yes, how do I create a BinaryObject without a POJO definition and deserialize it back to a POJO?

It would be great to have a kind of advanced GitHub example like https://github.com/dmagda/MicroServicesExample -- it helped a lot in understanding. The current documentation links do not help to build a real solution; they are mostly like a reference, with no option to compile and debug.

Vladimir

2:51, 11 March 2020, Evgenii Zhuravlev:
When you say that the result was poor, do you mean that data preloading took too much time, or is it just about get operations?
Evgenii

Tue, 10 Mar 2020 at 03:29, aealexsandrov:
Hi,

You can read the documentation articles:

https://apacheignite.readme.io/docs/3rd-party-store

If you are going to load the cache from a 3rd-party store (RDBMS),
the default implementation of CacheJdbcPojoStore can take a lot of time
to load the data, because it uses a single JDBC connection inside
(not a pool of connections).

You should probably implement your own version of CacheStore that reads
data from the RDBMS in several threads, e.g. using a JDBC connection pool.
The sources are open, so you can copy the existing implementation
and modify it:

https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStore.java

Otherwise, you can do the initial data loading using some streaming tools:

1)Spark integration with Ignite -
https://apacheignite-fs.readme.io/docs/ignite-data-frame
2)Kafka integration with Ignite -
https://apacheignite-mix.readme.io/docs/kafka-streamer

BR,
Andrei





read-though tutorial for a big table

2020-03-09 Thread vtchernyi
Hi Igniters,

My question is about a well-done tutorial. Recently on the dev list there was a topic, "Read load balancing, read-though, ttl and optimistic serializable transactions". It says an Ignite cache sitting on top of an RDBMS is the most frequent use case. I tried to implement read-through for a big table of over 1,100 million rows just from scratch, and the result was poor. A little model example works fine, but moving to production is not simple. It seems I should avoid some pitfalls.

Do we have a tutorial to guide a newbie like me?

Vladimir Tchernyi
Magnit retail network