Hi Mike,

Thanks for asking that question. I am working on a project where I need to persist a series of work items sent to a given Docker-based worker node. The work is a sequential list with a unique id for each item in the list, and the work requests arrive in separate REST calls carrying a Session Id that maps them to the worker. If the node that is processing a given series of work dies midway, I need to retrieve the series and send it to a new worker node.

So the design I had in mind is to save the work keyed by the Session Id, which represents a unique host (say an IP address), as a Map keyed by Request Id. Once a given series of work is completed, I remove the entry for that particular Session. If it broke midway, I retrieve the Map from Geode, append the new work to the Map, forward it to a brand new worker, and the process starts over as a brand new job.
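In code, the flow I have in mind looks roughly like the sketch below. The region name, the key/value types and the payload are placeholders I made up purely for illustration, and the locator address is just whatever the cluster actually uses:

import java.util.HashMap;

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class SessionWorkSketch {

    public static void main(String[] args) {
        // connect to the cluster through a locator
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("localhost", 10334)
                .create();

        // one region for everything: key = Session_Id, value = the whole Map of Request_Id -> payload
        Region<String, HashMap<Integer, String>> workBySession = cache
                .<String, HashMap<Integer, String>>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("WorkBySession");

        String sessionId = "23123132";

        // a new piece of work arrives for the session: one read plus one write
        HashMap<Integer, String> series = workBySession.get(sessionId);
        if (series == null) {
            series = new HashMap<>();
        }
        series.put(1, "{some payload here}");
        workBySession.put(sessionId, series);

        // if the worker dies midway: fetch the whole series, append the new work, hand it to a new worker
        HashMap<Integer, String> recovered = workBySession.get(sessionId);
        System.out.println("recovered " + recovered.size() + " work item(s) for session " + sessionId);

        // once the series completes normally: drop the entry for that session
        workBySession.remove(sessionId);

        cache.close();
    }
}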
A given record will look like below.

{
  Session_Id : 23123132,               <------ Key
  [
    { Request_Id : 1,                  <------ Another embedded key
      Work : {some payload here} },    <------ Value
    { Request_Id : 2,
      Work : {some payload here} },
    { Request_Id : 3,
      Work : {some payload here} }
  ]
}

Originally I thought I would use just one region, use Session_Id as the key, and save a serialized object representing the Map as the value. Every time new work comes in for a session, I retrieve the object from the region, append the new work to the map and resave it to Geode. This approach requires a read and a write every time new work comes in for one session. In order to reduce the reads, I am exploring the alternatives below.

Alternative 1: Use Session_Id + Request_Id as the key and Work as the value. That will look like this.

[
  { ID : 23123132_1,                   <------ Key
    Work : {some payload here} },      <------ Value
  { ID : 23123132_2,                   <------ Key
    Work : {some payload here} },      <------ Value
  { ID : 23123132_3,                   <------ Key
    Work : {some payload here} }       <------ Value
]

The disadvantage of this approach is that when I read to retrieve the list of work for a given session, I have to do a "LIKE 'Session_Id%'" style operation, which forces Geode to search a large number of records to build my data set.

Alternative 2: Create a Region for every Session_Id and save Work in the respective Region, using the Request_Id as the key and Work as the value. This way, when I read, I just get all the data from the Region; when I write, I just blindly insert into the respective Region; once the work is done, I just drop the Region. The data will look like below.

Region 23123132:
[
  { Request_Id : 1,                    <------ Key
    Work : {some payload here} },      <------ Value
  { Request_Id : 2,                    <------ Key
    Work : {some payload here} },      <------ Value
  { Request_Id : 3,                    <------ Key
    Work : {some payload here} }       <------ Value
]

Region 12312312:
[
  { Request_Id : 1,                    <------ Key
    Work : {some payload here} },      <------ Value
  { Request_Id : 2,                    <------ Key
    Work : {some payload here} },      <------ Value
  { Request_Id : 3,                    <------ Key
    Work : {some payload here} }       <------ Value
]

Alternative 2 seems to be the most efficient approach in terms of reads and writes. The only overhead is dynamically creating a Region for every session. This is where I am right now: I am trying to see how easy and efficient it is to create Regions dynamically.
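Client-side, Alternative 2 would look roughly like the fragment below (reusing the cache from the sketch above, plus java.util.Map; whether the per-session region can actually be created on demand is exactly the open question):

// Alternative 2: one region per session, named after the Session_Id and keyed by Request_Id.
// This assumes a matching region already exists on the servers.
Region<Integer, String> sessionRegion = cache
        .<Integer, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
        .create("23123132");

// writes are blind puts, no read-modify-write needed
sessionRegion.put(1, "{some payload here}");
sessionRegion.put(2, "{some payload here}");

// recovery: pull every entry for the session in one call
Map<Integer, String> allWork = sessionRegion.getAll(sessionRegion.keySetOnServer());

// series finished: destroy the region (for a client PROXY region this should propagate to the servers)
sessionRegion.destroyRegion();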
Hope this helps,

Marcus

On Tue, Oct 30, 2018, 6:56 PM Michael Stolz <mst...@pivotal.io> wrote:

> While everyone is helping you get to your goal of creating regions
> dynamically, I'd like to learn why dynamic region creation is important.
> Could you please explain that?
>
> --
> Mike Stolz
> Principal Engineer - GemFire Product Manager
> Mobile: 631-835-4771
>
> On Oct 30, 2018 6:12 PM, "Udo Kohlmeyer" <u...@apache.org> wrote:
>
>> Hi there Marcus,
>>
>> It seems the default pool is only created once a region operation is done.
>>
>> In order to get around this, you can just do the following:
>>
>> ClientCache cache = new ClientCacheFactory().set("log-level", "WARN").create();
>> Pool pool = PoolManager.createFactory().addLocator("localhost", 10334).create("MyCustomPool");
>>
>> And then you can replace the pool used by the FunctionService call with "MyCustomPool".
>>
>> The Spring Data Geode example could have been just as simple. If you
>> want, I can send you an example for that as well.
>>
>> --Udo
>>
>> On 10/30/18 14:58, Marcus Dushshantha Chandradasa wrote:
>>
>> Thanks for the responses.
>>
>> I checked the versions. I was playing around with spring-data-geode as
>> well, and maybe that had a different version. Now I created a separate
>> project with just the geode-core 1.7 dependency, and the version problem
>> seems to have gone away.
>>
>> Now, when I use a Cache as before, I connect to the server but get the
>> following error, which is in line with Udo's comment.
>>
>> Exception in thread "main" org.apache.geode.cache.execute.FunctionException: The cache was not a client cache
>>
>> So I changed the Cache to a ClientCache, and this time I get the following error.
>>
>> Code:
>>
>> ClientCache cache = new ClientCacheFactory()
>>     .addPoolLocator("localhost", 10334)
>>     .set("log-level", "WARN")
>>     .create();
>>
>> Error:
>>
>> Exception in thread "main" java.lang.UnsupportedOperationException: operation is not supported on a client cache
>>     at org.apache.geode.internal.cache.GemFireCacheImpl.throwIfClient(GemFireCacheImpl.java:5354)
>>     at org.apache.geode.internal.cache.GemFireCacheImpl.createRegionFactory(GemFireCacheImpl.java:4629)
>>     at com.mmodal.Geode.CreateRegionFunction.createRegionAttributesMetadataRegion(CreateRegionFunction.java:74)
>>     at com.mmodal.Geode.CreateRegionFunction.<init>(CreateRegionFunction.java:24)
>>
>> Also, when I tried using PoolManager.find("DEFAULT") for the onServers(), I get the below error.
>>
>> Exception in thread "main" org.apache.geode.cache.execute.FunctionException: Pool instance passed is null
>>     at org.apache.geode.cache.execute.internal.FunctionServiceManager.onServers(FunctionServiceManager.java:167)
>>     at org.apache.geode.cache.execute.FunctionService.onServers(FunctionService.java:95)
>>     at Main.main(Main.java:31)
>>
>> Marcus
>>
>> On Tue, Oct 30, 2018 at 4:45 PM Udo Kohlmeyer <u...@apache.org> wrote:
>>
>>> From the exception, it seems that you are not using the same version of
>>> Geode. Could you look into that...
>>>
>>> Also, when creating a client, you use "ClientCacheFactory" and define a
>>> pool pointing at the locators.
>>> https://geode.apache.org/docs/guide/16/topologies_and_comm/cs_configuration/client_server_example_configurations.html
>>>
>>> ClientCache c = new ClientCacheFactory().addPoolLocator(host, port).create();
>>>
>>> FunctionService.onServers(PoolManager.find("DEFAULT")).execute("FunctionName").getResult();
>>> // I think the default pool created is called "DEFAULT".
>>>
>>> That should hopefully connect to your servers, where the functions have
>>> been deployed, invoke the function and return the result back to the client.
>>>
>>> --Udo
>>>
>>> On 10/30/18 13:13, Marcus Dushshantha Chandradasa wrote:
>>>
>>> Hi All,
>>>
>>> I am trying to figure out how to programmatically create Regions on a
>>> Geode cluster. I followed the links below, but without any success.
>>>
>>> https://geode.apache.org/docs/guide/16/developing/region_options/dynamic_region_creation.html
>>> https://stackoverflow.com/questions/50833166/cannot-create-region-dynamically-from-client-in-geode/50850584
>>>
>>> So far, I have copied the CreateRegionFunction and CreateRegionCacheListener
>>> and JARed them together. I am referencing the JAR in my client and have also
>>> added it to the classpath of the Geode cluster. Below is my client code. I am
>>> receiving the below error when I try to execute. Any help would be really
>>> appreciated.
>>> Error :
>>>
>>> SEVERE: Servlet.service() for servlet [dispatcherServlet] in context
>>> with path [] threw exception [Request processing failed; nested exception
>>> is org.apache.geode.SystemConnectException: Rejecting the attempt of a
>>> member using an older version of the product to join the distributed
>>> system] with root cause
>>> org.apache.geode.SystemConnectException: Rejecting the attempt of a
>>> member using an older version of the product to join the distributed system
>>>     at org.apache.geode.distributed.internal.membership.gms.membership.GMSJoinLeave.attemptToJoin(GMSJoinLeave.java:433)
>>>     at org.apache.geode.distributed.internal.membership.gms.membership.GMSJoinLeave.join(GMSJoinLeave.java:329)
>>>     at org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManager.join(GMSMembershipManager.java:664)
>>>
>>> Cache cache = new CacheFactory()
>>>     .set(ConfigurationProperties.LOCATORS, "localhost[10334],localhost[10335]")
>>>     .create();
>>> Execution execution = FunctionService.onServers(cache);
>>> ArrayList argList = new ArrayList();
>>> argList.add("region_new");
>>> RegionAttributes attr = new AttributesFactory().create();
>>> argList.add(attr);
>>> Function function = new CreateRegionFunction();
>>> //FunctionService.registerFunction(function);
>>> Object result = execution.setArguments(argList).execute(function).getResult();
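For reference, stitching Udo's two suggestions from the thread onto this snippet gives a client-side call shaped roughly as below. The locator address, pool name and log level come straight from the messages above; the function id and the argument list are assumptions and have to match whatever the deployed CreateRegionFunction actually registers and expects:

// a client cache (not a peer Cache), so the client does not try to join the cluster as a member
ClientCache cache = new ClientCacheFactory()
        .set("log-level", "WARN")
        .create();

// explicit pool pointing at the locator, per Udo's suggestion, for the FunctionService call to use
Pool pool = PoolManager.createFactory()
        .addLocator("localhost", 10334)
        .create("MyCustomPool");

ArrayList<Object> argList = new ArrayList<>();
argList.add("region_new");

// run the function that is already deployed on the servers by its id,
// instead of constructing CreateRegionFunction locally on the client
Object result = FunctionService.onServers(pool)
        .setArguments(argList)
        .execute("CreateRegionFunction")   // assumed id; must match CreateRegionFunction.getId()
        .getResult();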