Ignite Service Grid - how to have control over service execution in cluster?
Hi, Suppose I have three server nodes running Ignite in standalone mode. I have configured a service for loading my data into a cache on all nodes, but I expect only one of them to actually deploy the service as a cluster singleton. 1. How do I tell my service to run its execute method only after all nodes are up and the cluster is activated? Can I do this without another client node? 2. The service starts up (invokes its init method) when the first node starts. Can I also defer this until the cluster is active, without going through a client node? -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
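One common sketch of this pattern (assumptions: Ignite 2.x service API; the service and class names here are invented): deploy the loader as a cluster singleton and keep init() cheap, letting execute() poll the cluster state so the actual load only starts once the cluster is activated.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class LoaderService implements Service {
    @IgniteInstanceResource
    private Ignite ignite;

    @Override public void init(ServiceContext ctx) {
        // Keep init() cheap; defer the real work to execute().
    }

    @Override public void execute(ServiceContext ctx) throws Exception {
        // Wait until the cluster has been activated before loading data.
        while (!ctx.isCancelled() && !ignite.cluster().active())
            Thread.sleep(1_000);

        if (!ctx.isCancelled()) {
            // ... load data into the cache here ...
        }
    }

    @Override public void cancel(ServiceContext ctx) {
        // Nothing to clean up in this sketch.
    }
}
```

Deploying with `ignite.services().deployClusterSingleton("loaderService", new LoaderService());` then guarantees at most one running instance cluster-wide, without involving a client node.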
IO implications of ScanQuery
I have a potential workflow where I may need to find a set of elements in a cache where the values are not small (1-100 KB, say), and where the number of elements in the cache may be large (many millions). Each key contains fields the scan query could use to select the entries I want. Will the scan query only read keys from the store while it is determining which items in the cache match the query, and then pull just the matching values to return to the scan query client? Or does the scan query read all key/value data and then apply its conditions before returning the matching items? Thanks, Raymond.
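For reference, a ScanQuery filter is serialized to the server nodes and evaluated there, so only matching entries travel back to the client. A sketch (the key class and field names are hypothetical):

```java
import java.io.Serializable;
import javax.cache.Cache;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.lang.IgniteBiPredicate;

class ElementKey implements Serializable {
    long regionId;   // field the filter selects on
    long elementId;
}

class ScanExample {
    static void scan(IgniteCache<ElementKey, byte[]> cache) {
        // The predicate runs server-side. Note it receives both key and
        // value, so whether values must be materialized during the scan
        // is exactly the question raised above; filtering on key fields
        // only at least avoids sending non-matching values to the client.
        IgniteBiPredicate<ElementKey, byte[]> filter =
            (key, val) -> key.regionId == 42;

        try (QueryCursor<Cache.Entry<ElementKey, byte[]>> cur =
                 cache.query(new ScanQuery<>(filter))) {
            for (Cache.Entry<ElementKey, byte[]> e : cur)
                System.out.println(e.getKey().elementId);
        }
    }
}
```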
RE: Ignite Query Slow
Thank you. Tried @QuerySqlField, but the column names did not appear the same as in the database table (i.e. the underscore ("_") is missing in Ignite). Hence I added an index to the qryEntity and noticed that the query (i.e. select * from customercache.customer) takes about 7 seconds against Ignite cache data of 200,000 records to fetch 200 records in DBeaver. The same query against the database returns in a fraction of a second. I have an index created on two fields. I am running the cache with 8 GB memory, one server node. The code was mostly generated by Ignite Web Console. Is there anything I am missing in the code below, or anything that can be improved? Any suggestions to improve this would help.

Cache code:

public class CustomerCache {
    public static CacheConfiguration cacheCustomerCache() throws Exception {
        CacheConfiguration ccfg = new CacheConfiguration();
        ccfg.setName("CustomerCache");
        ccfg.setCacheMode(CacheMode.PARTITIONED);
        ccfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);

        CacheJdbcPojoStoreFactory cacheStoreFactory = new CacheJdbcPojoStoreFactory();
        cacheStoreFactory.setDataSourceFactory(new Factory() {
            @Override public DataSource create() {
                return DataSources.INSTANCE_DB;
            }
        });
        cacheStoreFactory.setDialect(new SQLServerDialect());
        cacheStoreFactory.setTypes(jdbcTypeCustomer(ccfg.getName()));
        ccfg.setCacheStoreFactory(cacheStoreFactory);
        ccfg.setReadThrough(true);
        ccfg.setWriteThrough(true);

        ArrayList qryEntities = new ArrayList<>();
        QueryEntity qryEntity = new QueryEntity();
        qryEntity.setKeyType("java.lang.Long");
        qryEntity.setValueType("com.model.Customer");

        LinkedHashMap fields = new LinkedHashMap<>();
        fields.put("dwId", "java.lang.Long");
        fields.put("customerId", "java.lang.Long");
        fields.put("customerName", "java.lang.String");
        qryEntity.setFields(fields);

        HashMap aliases = new HashMap<>();
        aliases.put("dwId", "DW_Id");
        aliases.put("customerId", "Customer_ID");
        aliases.put("customerName", "Customer_Name");
        qryEntity.setAliases(aliases);

        // Adding index
        ArrayList indexes = new ArrayList<>();
        QueryIndex index = new QueryIndex();
        index.setName("NonClustered_Index_ID");
        index.setIndexType(QueryIndexType.SORTED);
        LinkedHashMap indFlds = new LinkedHashMap<>();
        indFlds.put("dwId", true);
        indFlds.put("customerId", true);
        index.setFields(indFlds);
        indexes.add(index);
        qryEntity.setIndexes(indexes);

        qryEntities.add(qryEntity);
        ccfg.setQueryEntities(qryEntities);
        return ccfg;
    }

    private static JdbcType jdbcTypeCustomer(String cacheName) {
        JdbcType type = new JdbcType();
        type.setCacheName(cacheName);
        type.setKeyType("java.lang.Long");
        type.setValueType("com.model.Customer");
        type.setDatabaseSchema("dbo");
        type.setDatabaseTable("Customer");
        type.setValueFields(
            new JdbcTypeField(Types.BIGINT, "DW_Id", long.class, "dwId"),
            new JdbcTypeField(Types.BIGINT, "Customer_ID", Long.class, "customerId"),
            new JdbcTypeField(Types.VARCHAR, "First_Name", String.class, "customerName")
        );
        return type;
    }
}

POJO class:

public class Customer implements Serializable {
    private static final long serialVersionUID = 0L;
    private long dwId;
    private Long customerId;
}

-- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Avoiding Docker Bridge network when using S3 discovery
When we use S3 discovery with Ignite containers running under ECS using host networking, the S3 bucket ends up with 172.17.0.1#47500 along with the other server addresses. Then on cluster startup we must wait for the network timeout. Is there a way to avoid having this address pushed to the S3 bucket? Visor shows: | Address (0) | 10.32.97.32 | | Address (1) | 172.17.0.1 | | Address (2) | 127.0.0.1
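One knob worth trying (a sketch, not verified against ECS specifically): IgniteConfiguration.setLocalHost restricts the address Ignite binds to and publishes, which should keep the docker0 bridge address out of the IP finder. The IP below is taken from the Visor output; in practice you would resolve the host address at startup.

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StartNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Bind only to the host's real address so 172.17.0.1 (the Docker
        // bridge) and 127.0.0.1 are never registered in the S3 bucket.
        cfg.setLocalHost("10.32.97.32"); // assumption: this node's host IP

        Ignition.start(cfg);
    }
}
```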
Re: SQL Engine
Thx… moved to dev list. Regards, Igor > On Oct 17, 2018, at 6:10 PM, aealexsandrov wrote: > > Hi Igor, > > I think that optimizations for Ignite are better discussed on the Apache > Ignite Developer list: > > http://apache-ignite-developers.2346864.n4.nabble.com/ > > BR, > Andrei > > > > -- > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: SQL Engine
Hi Igor, I think that optimizations for Ignite are better discussed on the Apache Ignite Developer list: http://apache-ignite-developers.2346864.n4.nabble.com/ BR, Andrei -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
SQL Engine
Hello, It seems that the SQL engine always deserializes whole objects instead of using just the SQL-enabled fields (annotated with @QuerySqlField). This may have a huge impact on Ignite heap usage and GC overhead as well. For example, we have a cache holding big objects but with only two SQL query fields, and each query execution (SELECT COUNT(*) FROM 'cache') consumes a large amount of heap memory (~300MB). As a proof of concept, we decided to split the same cache into an *index* cache with only the SQL query fields and a *data* cache holding the whole object for materialization. The same query (SELECT COUNT(*) FROM 'index-cache') consumes ~25 times less memory! The same is true for all other queries. The obvious workaround would be to always have separate regions for indexes (an SQL-query-enabled region) and a data/value region for materialization, but it might be a good idea to fix this in a systematic way during off-heap deserialization. Regards, Igor -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
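The index/data split described above boils down to keeping the SQL-visible fields in a slim value class and fetching the full object by key only when materialization is needed. A sketch (all class and field names here are invented):

```java
import java.io.Serializable;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

// Slim "index" value: only the fields SQL actually needs.
class ItemIndex implements Serializable {
    @QuerySqlField(index = true)
    long id;

    @QuerySqlField
    String category;
}

// Full "data" value, looked up by key after the SQL query resolves ids.
class ItemData implements Serializable {
    long id;
    String category;
    byte[] payload; // the large part SQL never touches
}
```

Queries such as SELECT COUNT(*) then run entirely against the slim ItemIndex cache, avoiding deserialization of the large payload.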
Re: Query execution too long even after providing index
How much data do you have? What is the amount of heap and offheap memory? Can you share the reproducer with the community? Evgenii -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: ./control.sh --baseline doesn't list nodes on a linux machines
>By the way - I wonder what is the right way to reset an Ignite instance to the initial state (the one I have when Ignite is just installed)? You should delete the work and db directories. >I could not activate the new cluster. When control.sh --baseline tried to connect to the cluster it hung Can you share the logs from the nodes? I'd also recommend checking all processes on the machine - the topology version you've mentioned could be caused by a Visor instance running somewhere. Evgenii -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: On-heap cache
Hello! Cache API calls work with smaller overhead when on-heap caching is enabled, but for SQL there would likely not be much difference. Regards, -- Ilya Kasnacheev Wed, Oct 17, 2018 at 16:54, Prasad Bhalerao : > What is the advantage of using on-heap cache? > > I compared the sql execution time of on-heap cache and off-heap cache and > found that there is not much difference in execution time. > > Thanks, > Prasad >
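For reference, on-heap caching is a per-cache flag: it keeps deserialized entries on the Java heap in addition to the off-heap copy, which mostly benefits key-value (Cache API) reads rather than SQL. A minimal sketch (the cache name is arbitrary):

```java
import org.apache.ignite.configuration.CacheConfiguration;

public class OnHeapConfig {
    static CacheConfiguration<Long, String> build() {
        CacheConfiguration<Long, String> ccfg =
            new CacheConfiguration<>("myCache"); // hypothetical cache name

        // Entries are additionally cached deserialized on the Java heap,
        // sparing Cache API reads the off-heap lookup and deserialization.
        ccfg.setOnheapCacheEnabled(true);

        return ccfg;
    }
}
```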
On-heap cache
What is the advantage of using on-heap cache? I compared the sql execution time of on-heap cache and off-heap cache and found that there is not much difference in execution time. Thanks, Prasad
Re: odbc caches - cannot browse
The issue is fixed and is in master already. You can wait for Ignite 2.7 release or check that the issue is resolved with nightly release [1] [1] - https://ci.ignite.apache.org/project.html?projectId=Releases_NightlyRelease Best Regards, Igor On Tue, Oct 16, 2018 at 10:42 AM wt wrote: > Thank you Igor > > > > -- > Sent from: http://apache-ignite-users.70518.x6.nabble.com/ >
RE: IGNITE-8386 question (composite pKeys)
Yep, just create a separate index. (I saw in your other messages that you're already trying that) Stan From: eugene miretsky Sent: September 18, 2018, 17:56 To: user@ignite.apache.org Subject: Re: IGNITE-8386 question (composite pKeys) So how should we work around it now? Just create a new index for (customer_id, date)? Cheers, Eugene On Mon, Sep 17, 2018 at 10:52 AM Stanislav Lukyanov wrote: Hi, The thing is that the PK index is currently created roughly as CREATE INDEX T(_key) and not CREATE INDEX T(customer_id, date). You can't use the _key column in the WHERE clause directly, so the query optimizer can't use the index. After IGNITE-8386 is fixed the index will be created as a multi-column index and will behave the way you expect (e.g. it will be used instead of the affinity key index). Stan From: eugene miretsky Sent: September 12, 2018, 23:45 To: user@ignite.apache.org Subject: IGNITE-8386 question (composite pKeys) Hi, A question regarding https://issues.apache.org/jira/browse/IGNITE-8386?focusedCommentId=16511394=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16511394 It states that a PK index with a composite pKey is "effectively useless". Could you please explain why that is? We have a pKey that we are using as an index. Also, our pKey is (customer_id, date) and the affinity column is customer_id. I have noticed that most queries use the AFFINITY_KEY index. Looking at the source code, the AFFINITY_KEY index should not even be created, since the first field of the pKey is the affinity key. Any idea what may be happening? Cheers, Eugene
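The workaround discussed in the thread, sketched as a QueryEntity index (column names are from the thread; everything else is an assumption):

```java
import java.util.Arrays;
import java.util.Collections;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.cache.QueryIndexType;

public class CustomerDateIndex {
    static void addIndex(QueryEntity qryEntity) {
        // Explicit multi-column index on (customer_id, date), standing in
        // for the unusable PK (_key) index until IGNITE-8386 is fixed.
        QueryIndex idx = new QueryIndex(
            Arrays.asList("customer_id", "date"), QueryIndexType.SORTED);
        idx.setName("CUSTOMER_ID_DATE_IDX");

        qryEntity.setIndexes(Collections.singleton(idx));
    }
}
```

The SQL equivalent would simply be a CREATE INDEX statement over the same two columns on the table in question.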
Re: Writing binary [] to ignite via memcache binary protocol
Hi Michael, The trouble could be related to the Python library. It seems that in Python 2.7 there is no such thing as a "byte array", and the value passed to the client is a string in this case. I checked that Ignite recognizes the byte array type and stores it as a byte array internally. I did the following experiment with Spymemcached [1].

public class Memcached {
    public static void main(String[] args) throws IOException {
        MemcachedClient client = new MemcachedClient(
            new BinaryConnectionFactory(),
            AddrUtil.getAddresses("127.0.0.1:11211"));
        client.add("a", Integer.MAX_VALUE, new byte[]{1, 2, 3});
        client.add("b", Integer.MAX_VALUE, "123");
        System.out.println(Arrays.toString((byte[])client.get("a")));
        System.out.println(client.get("b"));
        System.exit(0);
    }
}

And I see the expected output: [1, 2, 3] 123 [1] https://mvnrepository.com/artifact/net.spy/spymemcached/2.12.3 Wed, Oct 17, 2018 at 10:25, Павлухин Иван : > Hi Michael, > > Answering one of your questions. > > Does ignite internally have a way to store the data type when cache > entry is stored? > Yes, internally Ignite maintains data types for stored keys and values. > > Could you confirm that for real memcached your example works as expected? > I will try to reproduce your Python example. It should not be hard to check > what exactly is stored inside Ignite. > > Wed, Oct 17, 2018 at 5:25, Michael Fong : > >> bump :) >> >> Could anyone please help to answer a newbie question? Thanks in advance! 
>> >> On Mon, Oct 15, 2018 at 4:22 PM Michael Fong >> wrote: >> >>> Hi, >>> >>> I kind of able to reproduce it with a small python script >>> >>> import pylibmc >>> >>> client = pylibmc.Client (["127.0.0.1:11211"], binary=True) >>> >>> >>> ##abc >>> val = "abcd".decode("hex") >>> client.set("pyBin1", val) >>> >>> print "val decode w/ iso-8859-1: %s" % val.encode("hex") >>> >>> get_val = client.get("pyBin1") >>> >>> print "Value for 'pyBin1': %s" % get_val.encode("hex") >>> >>> >>> where the the program intends to insert a byte[] into ignite using >>> memcache binary protocol. >>> The output is >>> >>> val decode w/ iso-8859-1: abcd >>> Value for 'pyBin1': *efbfbdefbfbd* >>> >>> where, 'ef bf bd' are the replacement character for UTF-8 String. >>> Therefore, the value field seems to be treated as String in Ignite. >>> >>> Regards, >>> >>> Michael >>> >>> >>> >>> On Thu, Oct 4, 2018 at 9:38 PM Maxim.Pudov wrote: >>> Hi, it looks strange to me. Do you have a reproducer? -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/ >>> > > -- > Best regards, > Ivan Pavlukhin > -- Best regards, Ivan Pavlukhin
Re: Failed to process selector key error
Hello! I have seen this problem under heavy workloads and my recommendation will be to increase socketWriteTimeout: Regards, -- Ilya Kasnacheev ср, 17 окт. 2018 г. в 11:52, the_palakkaran : > > Hi, > > While loading data using streamer, I have the below exception. Why do i get > this error? > > From another thread, there was a hint that it causes due to network lags. > Even if this error occurs, data loading gets complete without any problem. > > [grid-nio-worker-tcp-comm-0-#25][TcpCommunicationSpi] Failed to process > selector key [ses=GridSelectorNioSessionImpl [worker=DirectNioClientWorker > [super=AbstractNioClientWorker [idx=0, bytesRcvd=276409905, > bytesSent=480474228, bytesRcvd0=13319971, bytesSent0=12535268, select=true, > super=GridWorker [name=grid-nio-worker-tcp-comm-0, igniteInstanceName=null, > finished=false, hashCode=510309409, interrupted=false, > runner=grid-nio-worker-tcp-comm-0-#25]]], > writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], > readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], > inRecovery=GridNioRecoveryDescriptor [acked=442016, resendCnt=0, > rcvCnt=502951, sentCnt=442032, reserved=true, lastAck=502944, > nodeLeft=false, node=TcpDiscoveryNode > [id=e7651c3d-52cb-42d5-a112-7c7e346a25d0, addrs=[0:0:0:0:0:0:0:1%lo, > 127.0.0.1, 192.168.11.134], sockAddrs=[/0:0:0:0:0:0:0:1%lo:47500, > /127.0.0.1:47500, sbstjvmlx222.suntecsbs.com/192.168.11.134:47500], > discPort=47500, order=2, intOrder=2, lastExchangeTime=1539762572477, > loc=false, ver=2.6.0#20180710-sha1:669feacc, isClient=false], > connected=false, connectCnt=1, queueLimit=4096, reserveCnt=2, > pairedConnections=false], outRecovery=GridNioRecoveryDescriptor > [acked=442016, resendCnt=0, rcvCnt=502951, sentCnt=442032, reserved=true, > lastAck=502944, nodeLeft=false, node=TcpDiscoveryNode > [id=e7651c3d-52cb-42d5-a112-7c7e346a25d0, addrs=[0:0:0:0:0:0:0:1%lo, > 127.0.0.1, 192.168.11.134], sockAddrs=[/0:0:0:0:0:0:0:1%lo:47500, > /127.0.0.1:47500, 
sbstjvmlx222.suntecsbs.com/192.168.11.134:47500], > discPort=47500, order=2, intOrder=2, lastExchangeTime=1539762572477, > loc=false, ver=2.6.0#20180710-sha1:669feacc, isClient=false], > connected=false, connectCnt=1, queueLimit=4096, reserveCnt=2, > pairedConnections=false], super=GridNioSessionImpl > [locAddr=/192.168.11.130:36356, > rmtAddr=sbstjvmlx222.suntecsbs.com/192.168.11.134:47100, > createTime=1539762655890, closeTime=0, bytesSent=478235463, > bytesRcvd=264250879, bytesSent0=12535268, bytesRcvd0=13319971, > sndSchedTime=1539762655890, lastSndTime=1539762724840, > lastRcvTime=1539762724840, readsPaused=false, > filterChain=FilterChain[filters=[GridNioCodecFilter > [parser=o.a.i.i.util.nio.GridDirectParser@b36d8a, directMode=true], > GridConnectionBytesVerifyFilter], accepted=false]]] > java.io.IOException: Connection reset by peer > at sun.nio.ch.FileDispatcherImpl.write0(Native Method) > at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) > at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) > at sun.nio.ch.IOUtil.write(IOUtil.java:51) > at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) > at > > org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processWrite0(GridNioServer.java:1649) > at > > org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processWrite(GridNioServer.java:1306) > at > > org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2342) > at > > org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110) > at > > org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764) > at > org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) > at java.lang.Thread.run(Thread.java:748) > > > > > -- > Sent from: http://apache-ignite-users.70518.x6.nabble.com/ >
Re: Ignite Select Query Performance
Hi, We can try to check whether your configuration can be improved, but first of all could you please provide some details: 1) How many data nodes do you use? 2) Data node configuration 3) Cache configuration or the DDL CREATE TABLE command for your cache (table) BR, Andrei -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Failed to process selector key error
Hi, Your client was disconnected from the server. It could be for different reasons: 1. Network problems on your client. 2. A very long GC pause on the server side. I guess you are facing the first one, because you noted that you possibly had network problems. In this case the server will not be able to get metrics from the client node for some time and will close the connection. A data streamer operation will fail in this case because the connection was closed. However, you can try to tune failure detection as described in the related section here: https://www.gridgain.com/sdk/pe/latest/javadoc/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpi.html BR, Andrei -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Ignite complains for low heap memory
Thanks! It works. On Wed, Oct 17, 2018 at 4:46 PM Stanislav Lukyanov wrote: > Put your -X* options before -jar. That’s how java command line works. > > > > Stan > > > > *From: *Lokesh Sharma > *Sent: *17 октября 2018 г. 14:08 > *To: *user@ignite.apache.org > *Subject: *Re: Ignite complains for low heap memory > > > > I get the same output in the logs when I run the application with Xms set > to 2 GB. I ran this command: > > > > java -jar target/cm.jar -Xms2024m -Xmx4024m > > > > On Wed, Oct 17, 2018 at 4:26 PM aealexsandrov > wrote: > > Hi, > > The heap metrics that you see in topology message shows the max heap value > that your cluster can use: > > Math.max(m.getHeapMemoryInitialized(), m.getHeapMemoryMaximum() > > Initial heap size (-Xms ) is different from the maximum (-Xmx). Your JVM > will be started with Xms amount of memory and will be able to use a maximum > of Xmx amount of memory. > > Looks like Ignite has the recommendation to set Xms to at least 512mb. > About JVM tunning you can read here: > > https://apacheignite.readme.io/docs/jvm-and-system-tuning > > BR, > Andrei > > > > -- > Sent from: http://apache-ignite-users.70518.x6.nabble.com/ > > >
RE: Ignite complains for low heap memory
Put your -X* options before -jar. That's how the java command line works. Stan From: Lokesh Sharma Sent: October 17, 2018, 14:08 To: user@ignite.apache.org Subject: Re: Ignite complains for low heap memory I get the same output in the logs when I run the application with Xms set to 2 GB. I ran this command: java -jar target/cm.jar -Xms2024m -Xmx4024m On Wed, Oct 17, 2018 at 4:26 PM aealexsandrov wrote: Hi, The heap metric that you see in the topology message shows the maximum heap value that your cluster can use: Math.max(m.getHeapMemoryInitialized(), m.getHeapMemoryMaximum()) The initial heap size (-Xms) is different from the maximum (-Xmx). Your JVM will be started with the Xms amount of memory and will be able to use a maximum of the Xmx amount of memory. Ignite recommends setting Xms to at least 512mb. About JVM tuning you can read here: https://apacheignite.readme.io/docs/jvm-and-system-tuning BR, Andrei -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
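To make the fix concrete (the jar path is taken from the thread):

```shell
# Wrong: everything after -jar is passed to the application as program
# arguments, so the JVM never sees -Xms/-Xmx.
java -jar target/cm.jar -Xms2024m -Xmx4024m

# Right: JVM options must come before -jar.
java -Xms2024m -Xmx4024m -jar target/cm.jar
```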
Re: Ignite complains for low heap memory
I get the same output in the logs when I run the application with Xms set to 2 GB. I ran this command: java -jar target/cm.jar -Xms2024m -Xmx4024m On Wed, Oct 17, 2018 at 4:26 PM aealexsandrov wrote: > Hi, > > The heap metrics that you see in topology message shows the max heap value > that your cluster can use: > > Math.max(m.getHeapMemoryInitialized(), m.getHeapMemoryMaximum() > > Initial heap size (-Xms ) is different from the maximum (-Xmx). Your JVM > will be started with Xms amount of memory and will be able to use a maximum > of Xmx amount of memory. > > Looks like Ignite has the recommendation to set Xms to at least 512mb. > About JVM tunning you can read here: > > https://apacheignite.readme.io/docs/jvm-and-system-tuning > > BR, > Andrei > > > > -- > Sent from: http://apache-ignite-users.70518.x6.nabble.com/ >
Re: Ignite complains for low heap memory
Hi, The heap metric that you see in the topology message shows the maximum heap value that your cluster can use: Math.max(m.getHeapMemoryInitialized(), m.getHeapMemoryMaximum()) The initial heap size (-Xms) is different from the maximum (-Xmx). Your JVM will be started with the Xms amount of memory and will be able to use a maximum of the Xmx amount of memory. Ignite recommends setting Xms to at least 512mb. About JVM tuning you can read here: https://apacheignite.readme.io/docs/jvm-and-system-tuning BR, Andrei -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: What happens if Server node in a client-server gets down?
Hello! 1. In this case the client will not be able to reconnect, I am afraid. You should specify several addresses there. 2. No, but the client will not be able to reconnect to a new server node until you restart it. It will still be able to communicate with any new servers or clients, provided that it can access one of the servers specified in the configuration. 3. Yes, the simplest way is to use the Multicast IP finder: https://apacheignite.readme.io/docs/tcpip-discovery#section-multicast-ip-finder There is also a wide range of other IP finders. 4. I don't think this can even remotely affect service deployment, so the reason was probably some other kind of misconfiguration as a coincidence. Regards, -- Ilya Kasnacheev Tue, Oct 16, 2018 at 22:33, the_palakkaran : > 1. How does the client re-connect to another server if I have mentioned only > the address of the leader aka oldest node in its configuration? > > 2. Suppose I have mentioned all server addresses in my client's > configuration. What if I need to add a new server node? Then I will need > to restart the client again, right? > > 3. Is it possible to connect a client to a cluster without mentioning the ip > address of any of the nodes in the cluster in its configuration? > > 4. I faced a major challenge when I mentioned all server addresses in my > client configuration. When I deployed a service from the client as a cluster > singleton, it got deployed on all nodes. > > > > -- > Sent from: http://apache-ignite-users.70518.x6.nabble.com/ >
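A sketch of the multicast IP finder mentioned in item 3 (the multicast group shown is an arbitrary example; a default group is used if you set none):

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder;

public class MulticastDiscovery {
    static IgniteConfiguration build() {
        TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
        ipFinder.setMulticastGroup("228.10.10.157"); // example group

        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        spi.setIpFinder(ipFinder);

        // Nodes discover each other via multicast; no addresses hardcoded,
        // so clients keep working as servers are added or removed.
        return new IgniteConfiguration().setDiscoverySpi(spi);
    }
}
```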
Re: Ignite complains for low heap memory
*Typo correction: the latter heap size is 2.6 GB, not 2 GB. On Wed, Oct 17, 2018 at 3:53 PM Lokesh Sharma wrote: > When Ignite boots up, it initially complains that only "188 MB is > available": > > 2018-10-17 15:44:12.295 WARN 15129 --- [pub-#22%cm%] >> o.apache.ignite.internal.GridDiagnostic : Initial heap size is 188MB >> (should be no less than 512MB, use -Xms512m -Xmx512m). >> [15:44:12] Initial heap size is 188MB (should be no less than 512MB, use >> -Xms512m -Xmx512m). >> [15:44:12] Configured plugins: >> [15:44:12] ^-- None >> [15:44:12] >> [15:44:12] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler >> [tryStop=false, timeout=0]] > > > But milliseconds later it says the heap size is 2 GB. > > [15:44:18] Ignite node started OK (id=bfcfa993, instance name=cm) >> [15:44:18] Topology snapshot [ver=1, servers=1, clients=0, CPUs=8, >> offheap=2.0GB, heap=2.6GB] >> [15:44:18] ^-- Node [id=BFCFA993-069C-44B5-9B7B-778A7C1EAC00, >> clusterState=ACTIVE] >> [15:44:18] Data Regions Configured: >> [15:44:18] ^-- default [initSize=512.0 MiB, maxSize=1000.0 MiB, >> persistenceEnabled=false] >> [15:44:18] ^-- Buffer_Region [initSize=512.0 MiB, maxSize=1000.0 MiB, >> persistenceEnabled=false] > > > So, is the heap size "188 MB" or "2 GB"? >
Ignite complains for low heap memory
When Ignite boots up, it initially complains that only "188 MB is available": 2018-10-17 15:44:12.295 WARN 15129 --- [pub-#22%cm%] > o.apache.ignite.internal.GridDiagnostic : Initial heap size is 188MB > (should be no less than 512MB, use -Xms512m -Xmx512m). > [15:44:12] Initial heap size is 188MB (should be no less than 512MB, use > -Xms512m -Xmx512m). > [15:44:12] Configured plugins: > [15:44:12] ^-- None > [15:44:12] > [15:44:12] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler > [tryStop=false, timeout=0]] But milliseconds later it says the heap size is 2 GB. [15:44:18] Ignite node started OK (id=bfcfa993, instance name=cm) > [15:44:18] Topology snapshot [ver=1, servers=1, clients=0, CPUs=8, > offheap=2.0GB, heap=2.6GB] > [15:44:18] ^-- Node [id=BFCFA993-069C-44B5-9B7B-778A7C1EAC00, > clusterState=ACTIVE] > [15:44:18] Data Regions Configured: > [15:44:18] ^-- default [initSize=512.0 MiB, maxSize=1000.0 MiB, > persistenceEnabled=false] > [15:44:18] ^-- Buffer_Region [initSize=512.0 MiB, maxSize=1000.0 MiB, > persistenceEnabled=false] So, is the heap size "188 MB" or "2 GB"?
Re: Blog post "Introduction to the Apache(R) Ignite™ community structure"
[For discussion] It would be interesting to know more about possible conflicts: say a few people are involved in a contribution as committers and they have different opinions about the next steps in roadmap implementation, or about a certain feature. How do we correctly measure their weight? One of them does small bug fixes, the other delivers large features. They are not ranked, except by the number of commits or lines of code on GitHub. Who gets the last word? Imagine only those two understand anything about this feature and its priority; votes couldn't help, right? How should they resolve their conflict? Do you have any such cases in your experience? -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Ignite Service Grid - Anomalous behavior
Without it, you simply can't know which data you should load on each node. Wed, Oct 17, 2018 at 12:03, Evgenii Zhuravlev : > Yes, in case you use a CacheStore that is not partition-aware > > Evgenii > > Tue, Oct 16, 2018 at 22:36, the_palakkaran : > >> Does it mean all data will be loaded on all servers and then it will >> partition amongst them ? >> >> >> >> -- >> Sent from: http://apache-ignite-users.70518.x6.nabble.com/ >> >
Re: Ignite Service Grid - Anomalous behavior
Yes, in case you use a CacheStore that is not partition-aware Evgenii Tue, Oct 16, 2018 at 22:36, the_palakkaran : > Does it mean all data will be loaded on all servers and then it will > partition amongst them ? > > > > -- > Sent from: http://apache-ignite-users.70518.x6.nabble.com/ >
Re: Heap size
Sometimes, for blobs and images with sizes around 100-1000 MB, I'm using G1 with an increased heap region size via -XX:G1HeapRegionSize. Also, if you have nodes with 8-24 cores, try CMS; it gives a small second GC pause during the "remark" phase when you don't change the object graph very often or very intensively. -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Failed to process selector key error
Hi, While loading data using the streamer, I get the exception below. Why do I get this error? From another thread, there was a hint that it is caused by network lag. Even when this error occurs, data loading completes without any problem. [grid-nio-worker-tcp-comm-0-#25][TcpCommunicationSpi] Failed to process selector key [ses=GridSelectorNioSessionImpl [worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=0, bytesRcvd=276409905, bytesSent=480474228, bytesRcvd0=13319971, bytesSent0=12535268, select=true, super=GridWorker [name=grid-nio-worker-tcp-comm-0, igniteInstanceName=null, finished=false, hashCode=510309409, interrupted=false, runner=grid-nio-worker-tcp-comm-0-#25]]], writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], inRecovery=GridNioRecoveryDescriptor [acked=442016, resendCnt=0, rcvCnt=502951, sentCnt=442032, reserved=true, lastAck=502944, nodeLeft=false, node=TcpDiscoveryNode [id=e7651c3d-52cb-42d5-a112-7c7e346a25d0, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 192.168.11.134], sockAddrs=[/0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500, sbstjvmlx222.suntecsbs.com/192.168.11.134:47500], discPort=47500, order=2, intOrder=2, lastExchangeTime=1539762572477, loc=false, ver=2.6.0#20180710-sha1:669feacc, isClient=false], connected=false, connectCnt=1, queueLimit=4096, reserveCnt=2, pairedConnections=false], outRecovery=GridNioRecoveryDescriptor [acked=442016, resendCnt=0, rcvCnt=502951, sentCnt=442032, reserved=true, lastAck=502944, nodeLeft=false, node=TcpDiscoveryNode [id=e7651c3d-52cb-42d5-a112-7c7e346a25d0, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 192.168.11.134], sockAddrs=[/0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500, sbstjvmlx222.suntecsbs.com/192.168.11.134:47500], discPort=47500, order=2, intOrder=2, lastExchangeTime=1539762572477, loc=false, ver=2.6.0#20180710-sha1:669feacc, isClient=false], connected=false, connectCnt=1, queueLimit=4096, reserveCnt=2, pairedConnections=false], 
super=GridNioSessionImpl [locAddr=/192.168.11.130:36356, rmtAddr=sbstjvmlx222.suntecsbs.com/192.168.11.134:47100, createTime=1539762655890, closeTime=0, bytesSent=478235463, bytesRcvd=264250879, bytesSent0=12535268, bytesRcvd0=13319971, sndSchedTime=1539762655890, lastSndTime=1539762724840, lastRcvTime=1539762724840, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=o.a.i.i.util.nio.GridDirectParser@b36d8a, directMode=true], GridConnectionBytesVerifyFilter], accepted=false]]] java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:51) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) at org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processWrite0(GridNioServer.java:1649) at org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processWrite(GridNioServer.java:1306) at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2342) at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110) at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764) at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) at java.lang.Thread.run(Thread.java:748) -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Writing binary [] to ignite via memcache binary protocol
Hi Michael, Answering one of your questions. > Does ignite internally have a way to store the data type when cache entry is stored? Yes, internally Ignite maintains data types for stored keys and values. Could you confirm that for real memcached your example works as expected? I will try reproduce your Python example. It should not be hard to check what exactly is stored inside Ignite. ср, 17 окт. 2018 г. в 5:25, Michael Fong : > bump :) > > Could anyone please help to answer a newbie question? Thanks in advance! > > On Mon, Oct 15, 2018 at 4:22 PM Michael Fong > wrote: > >> Hi, >> >> I kind of able to reproduce it with a small python script >> >> import pylibmc >> >> client = pylibmc.Client (["127.0.0.1:11211"], binary=True) >> >> >> ##abc >> val = "abcd".decode("hex") >> client.set("pyBin1", val) >> >> print "val decode w/ iso-8859-1: %s" % val.encode("hex") >> >> get_val = client.get("pyBin1") >> >> print "Value for 'pyBin1': %s" % get_val.encode("hex") >> >> >> where the the program intends to insert a byte[] into ignite using >> memcache binary protocol. >> The output is >> >> val decode w/ iso-8859-1: abcd >> Value for 'pyBin1': *efbfbdefbfbd* >> >> where, 'ef bf bd' are the replacement character for UTF-8 String. >> Therefore, the value field seems to be treated as String in Ignite. >> >> Regards, >> >> Michael >> >> >> >> On Thu, Oct 4, 2018 at 9:38 PM Maxim.Pudov wrote: >> >>> Hi, it looks strange to me. Do you have a reproducer? >>> >>> >>> >>> -- >>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/ >>> >> -- Best regards, Ivan Pavlukhin