Re: Logging query on server node sent from client node
Yes, I tried the configuration below ... Rest-of-the-configs ... but it did not work for me. Thanks, Om -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Memory utilization of indexes in Ignite
Hi All, I have some open questions about memory utilization of indexes in Ignite. Here is the scenario: I have created one cache (CACHE_1) with the following props: ... and I am loading 5 GB into CACHE_1. So as per our configs, 1 GB of data resides off-heap and the remaining 4 GB goes to swap. Now my question is: where exactly do the indexes go, off-heap or swap? Thanks & Regards, Tejas
Reason for using JVM Instance for Ignite C++ Driver
Hi, From the Ignite docs: "Ignite C++ starts the JVM in the same process and communicates with it via JNI." I would like to know why a JVM instance is required. Why can't the Ignite C++ driver communicate with the Ignite server directly via TCP, with the server-side process handling the request and performing the necessary actions? In the end, the client being able to communicate with the server is all that matters, right?
Ignite ODBC driver vs. Ignite C++ driver
Hi, I have a C++ application that needs to perform query operations (DML and SELECT) and transactions on an Ignite server. What's the best way to achieve this? There are two choices: 1. the Ignite ODBC driver, and 2. the Ignite C++ driver. I would like to know what the right choice would be from the point of view of scalability and performance. From the docs, the Ignite ODBC driver uses TCP to communicate with the Ignite server. On the other hand, the Ignite C++ driver starts a JVM in the same process and communicates with it via JNI, and this JVM process communicates with the Ignite server. So the Ignite C++ driver has an additional JNI/JVM hop before reaching the Ignite server. Does this affect performance? What are the pros and cons of each of these choices? Apart from these two, are there any better options?
Ignite 2.3.0 "Console logging handler is not configured" error on startup
Hi, I am using Ignite with Spring Boot 1.5.8, which has SLF4J configured (with the Logback implementation as default). Starting from 2.3 (most probably any version after 2.0, as with 2.0 I am not getting this error) I am getting the following in the log:

2017-11-16 09:45:11.591 ERROR 8636 --- [ main] : Failed to resolve default logging config file: config/java.util.logging.properties
Console logging handler is not configured.

However, I have the node set up like below:

IgniteConfiguration igniteConfig = new IgniteConfiguration();
igniteConfig.setGridLogger(new Slf4jLogger(log));

and the pom.xml entry for the SLF4J logger is:

org.apache.ignite ignite-slf4j ${ignite.version}

As can be seen above, the Slf4jLogger is passed an SLF4J logger configured by Spring Boot. However, while the Spring Boot and application logs come up fine in the console, the Ignite logs stopped appearing, with the error above. Can you please help me resolve this?
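[A possible workaround, not a confirmed fix: since the error complains about config/java.util.logging.properties, one option is to give the JVM an explicit java.util.logging config so Ignite's bootstrap logging has a console handler to attach to before the Slf4jLogger takes over. The file path below is up to you.]

```properties
# Minimal java.util.logging setup (hypothetical file, e.g. jul.properties)
handlers = java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level = INFO
.level = INFO
```

Then start the JVM with -Djava.util.logging.config.file=/path/to/jul.properties, or alternatively point IGNITE_HOME at a directory that contains config/java.util.logging.properties.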
Re: Best way to configure a single node embedded ignite 2.3
Thanks. Will look for the page you mentioned.
Re: Ignite 2.3.0 hangs in startup
Hi, Got the reason now. Ignite 2.3.0 hangs because of a regression bug wherein the cache stores use the @SpringResource annotation and Ignite starts up with the SpringCacheManager class. Thanks for the help extended.
Initial query resends the data when the client reconnects
I tried to use a continuous query to monitor a cache, and set the initial query, LocalListener and RemoteFilter as the example did. The issue I met is that when the client reconnects to the Ignite cluster, the initial query will return data from the cache that the client may already have received before. I tried to use an unchanged consistent ID and instance name: cfg.setConsistentId("de01"); cfg.setIgniteInstanceName("test1"); but that does not work. Is there any way to solve this issue? Many thanks,
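[A sketch of one possible approach, not from the thread: keep a client-side watermark of what has already been consumed and filter the initial query with it, so a reconnect does not replay old entries. It assumes the cache value carries a monotonically increasing timestamp; all names here are hypothetical.]

```java
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

class ResumableMonitor {
    /** Cache value with an update timestamp (hypothetical). */
    static class Evt implements java.io.Serializable {
        final long ts;
        Evt(long ts) { this.ts = ts; }
    }

    private volatile long lastSeen; // persist this value across client restarts

    void start(Ignite ignite) {
        IgniteCache<Long, Evt> cache = ignite.cache("monitored");
        ContinuousQuery<Long, Evt> qry = new ContinuousQuery<>();

        final long watermark = lastSeen;
        // Initial query returns only entries newer than the watermark,
        // so a reconnect does not re-deliver already-processed data.
        qry.setInitialQuery(new ScanQuery<>((Long k, Evt v) -> v.ts > watermark));

        qry.setLocalListener(evts -> evts.forEach(
            e -> lastSeen = Math.max(lastSeen, e.getValue().ts)));

        // Keep the cursor open; closing it cancels the continuous query.
        QueryCursor<Cache.Entry<Long, Evt>> cur = cache.query(qry);
        for (Cache.Entry<Long, Evt> e : cur)
            lastSeen = Math.max(lastSeen, e.getValue().ts);
    }
}
```

The key point is that Ignite's initial query has no built-in memory of what a previous connection delivered; deduplication has to come from application state such as the watermark above.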
Apache Ignite binary type invalidation
I've created a binary type where the field 'someField' has type String. Now I want to change that field from String to int without changing the name of the type, but I get an exception while populating the cache:

javax.cache.CacheException: class org.apache.ignite.binary.BinaryObjectException: Binary type has different field types [typeName=com.xxx.service.ignite.vo.AnnounceProjectVO, fieldName=announceId, fieldTypeName1=long, fieldTypeName2=String]

I tried to destroy the original cache, but that does not work. It looks like the metadata has been cached; can it be refreshed? Thanks
Re: ComputeTask is including Nodes for ComputeJobs prematurely
Chris, Sorry, I still don't understand. What exactly needs to happen? Is there anything at the Ignite level that is not initialized when the job arrives for execution? -Val
"Duplicate field ID" error when registering a continuous query
Hi guys, I met an issue where a "Duplicate field ID" error occurs when registering a continuous query. Here is my case: 1. Start a JDBC client and use SQL to create a table and insert records. 2. Start another client and use a continuous query to monitor the table, inserting the incoming data into another MySQL DB using MyBatis in setLocalListener. That error sometimes occurs while data is being injected and the listener is registering:

javax.cache.CacheException: class org.apache.ignite.binary.BinaryObjectException: Duplicate field ID: profileSQL
at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:597)
at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:368)

After deeper analysis, we found the MySQL JDBC driver has two properties, profileSQL and profileSql, and they have the same hash code after serialization:

private com.mysql.jdbc.ConnectionPropertiesImpl$BooleanConnectionProperty com.mysql.jdbc.ConnectionPropertiesImpl.profileSQL

My question is: why does Ignite need to serialize the MySQL driver? Does it mean the code in setLocalListener needs to be sent to the Ignite cluster? How can we avoid this issue? Many thanks
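[A pure-Java illustration, not Ignite-specific, of the likely mechanism: an anonymous class or lambda that touches an instance field captures its enclosing object, so serializing the listener drags the enclosing object, and anything it holds such as a JDBC driver's connection objects, into the serialized form. A static/standalone listener avoids the capture. The class names here are hypothetical stand-ins.]

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

class CaptureDemo {
    // Stand-in for a non-serializable resource such as a JDBC connection.
    static class MySqlishConnection { }

    final MySqlishConnection conn = new MySqlishConnection();

    // References an instance field, so the lambda captures `this`
    // (the whole CaptureDemo, including the connection).
    Runnable capturingListener() {
        return (Runnable & Serializable) () -> conn.toString();
    }

    // Captures nothing, so it serializes cleanly.
    static Runnable standaloneListener() {
        return (Runnable & Serializable) () -> System.out.println("row");
    }

    static boolean canSerialize(Object o) {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}
```

The usual remedy is to make the listener a standalone (static) class that opens its own MySQL connection on the receiving side, instead of capturing a live driver object from the registering client.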
Re: Can the Java API SqlQuery query a cache created by SQL DDL?
vkulichenko, thanks for your advice. It works, by setting value_type=person during DDL, then running SqlQuery sql = new SqlQuery("person", "id > ? and id < ?");
Re: ComputeTask is including Nodes for ComputeJobs prematurely
Hi Val, "Prematurely" in this case means a Node that is not fully initialized. In a nutshell, Ignite is started and enters the "discovery set", but further things must happen for it to be ready to do computation (take real load). When the Grid is initially started, this is pretty easy to control -- we can gate that at the load balancer, only allowing it to receive load when the Grid is "complete" and the caches are primed, by having the health check return a 200 only when the Grid can truly receive load. But later, when the Grid is complete, a Node entering the Grid must still be fully initialized before it can take load. And there is no easy way to say "I am ready for load" for the ComputeTask. It seems there really needs to be. That said, I do not think the AdaptiveLoadBalancingSpi works in my scenario, because we are using Affinity, and the docs seem to indicate that load balancing and Affinity are mutually exclusive: > Data Affinity > Note that load balancing is triggered whenever your jobs are not > collocated with data or have no real preference on which node to execute. > If Collocation Of Compute and Data is used, then data affinity takes > priority over load balancing. A further question: could you please confirm that Affinity::mapKeysToNodes() returns ONLY PRIMARY Nodes? The javadoc is unclear there. And that I would need to use Affinity::mapKeyToPrimaryAndBackups() if I want to provide alternate Nodes. Thanks, -- Chris
Re: Can the Java API SqlQuery query a cache created by SQL DDL?
Hi, You should specify the type name instead of the cache name when creating the query: SqlQuery sql = new SqlQuery("person", "id > ? and id < ?");
Re: ComputeTask is including Nodes for ComputeJobs prematurely
Hi Chris, Can you define "prematurely"? What exactly is required for a node to be ready to execute jobs? From the Ignite perspective, it is ready as long as it's in the topology on the discovery level, and that's when it starts receiving requests. BTW, with AdaptiveLoadBalancingSpi you can implement a custom AdaptiveLoadProbe which can use any information; there is no requirement to rely on out-of-the-box metrics. -Val
Re: Deserializing to an object from a request based on SqlQueryField.
Hi again Mike, I'm confused; in fact I made a basic mistake with the cast of the object. So, there is no problem ;-) Thanks for your help. Regards, Jérôme.
Re: ComputeTask is including Nodes for ComputeJobs prematurely
Hmmm, the raw code does not appear to be visible. Reposting:

public Map<ComputeJob, ClusterNode> map(List<ClusterNode> subgrid, @Nullable TRequest request) throws IgniteException {
    Map<ComputeJob, ClusterNode> jobMap = new HashMap<>();
    List<UUID> cacheKeys = getIgniteAffinityCacheKeys(request);
    Map<ClusterNode, Collection<UUID>> nodeToKeysMap =
        ignite.affinity(getIgniteAffinityCacheName()).mapKeysToNodes(cacheKeys);

    for (Map.Entry<ClusterNode, Collection<UUID>> mapping : nodeToKeysMap.entrySet()) {
        ClusterNode node = mapping.getKey();
        Collection<UUID> mappedKeys = mapping.getValue();
        final List<UUID> uuidList = (mappedKeys instanceof List
            ? (List<UUID>) mappedKeys
            : new ArrayList<>(mappedKeys));

        if (node != null) {
            TRequest reducedRequest = requestWithPerNodeUUIDs(request, uuidList);
            AbstractComputeJob job = createJob(reducedRequest, context);
            jobMap.put(job, node);
        }
    }

    return jobMap;
}

Thanks, -- Chris
Re: ComputeTask is including Nodes for ComputeJobs prematurely
I figured I should probably add my map() method. I think this is all further complicated because we rely heavily on Affinity. I was thinking that I could maybe use the AdaptiveLoadBalancingSpi, but I see no way to add my own metrics to ClusterMetrics. Plus, I do not see how this would work with affinity(getIgniteAffinityCacheName()).mapKeysToNodes(cacheKeys); Thanks, -- Chris
ComputeTask is including Nodes for ComputeJobs prematurely
Hi, We are using Ignite's Compute Grid, and mostly it is working beautifully. But there is one glitch I need to work around. On AWS, Nodes come and go. And when a new Node joins the existing Grid, it is given ComputeJobs before it is actually ready to handle them. In our infrastructure, a Node announces its availability by returning a 200 from an "Alive page". This is used by the Load Balancers to determine whether a particular Instance can take load, and it works very nicely at that level. Is there a way to provide similar functionality in the ComputeTask map() method? Obviously, we cannot make an HTTP call for every map(). Perhaps there is a way to handle this "upstream" from the ComputeTask? Perhaps using a similar "Alive page" mechanism that Ignite could employ? Or perhaps there is a way to enable the Ignite "communications port" only when the node is ready? (But would this break other things in Ignite at startup?) Make sense? Any help/suggestions would be greatly appreciated. Thanks, -- Chris
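[One possible pattern for the "ready for load" gate described above, sketched under assumptions: the "node-ready" cache name and the markReady() call site are mine, not an Ignite built-in. Each node marks itself ready after warm-up by putting its id into a shared cache, and the ComputeTask's map() skips nodes that have not checked in.]

```java
import java.util.List;
import java.util.UUID;
import java.util.stream.Collectors;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cluster.ClusterNode;

class ReadyNodes {
    private static final String READY_CACHE = "node-ready"; // hypothetical

    /** Call once on each node after its warm-up/priming completes. */
    static void markReady(Ignite ignite) {
        ignite.getOrCreateCache(READY_CACHE)
              .put(ignite.cluster().localNode().id(), Boolean.TRUE);
    }

    /** Use inside ComputeTask.map() to filter out not-yet-ready nodes. */
    static List<ClusterNode> onlyReady(Ignite ignite, List<ClusterNode> subgrid) {
        IgniteCache<UUID, Boolean> ready = ignite.cache(READY_CACHE);
        return subgrid.stream()
                      .filter(n -> Boolean.TRUE.equals(ready.get(n.id())))
                      .collect(Collectors.toList());
    }
}
```

For affinity-driven tasks, the same check can be applied to the nodes returned by mapKeysToNodes() before building the job map, falling back to backup nodes for keys whose primary has not checked in yet.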
Re: Deserializing to an object from a request based on SqlQueryField.
Hi, did you mark the B field with @QuerySqlField? I just checked your case and it works fine for me, but I used Java while I guess you use C#, don't you? Thanks, Mike.
Re: Logging query on server node sent from client node
Hi, did you enable events in the configuration? https://apacheignite.readme.io/docs/events#section-configuration Thanks, Mike.
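[For reference, a sketch of what the linked docs page describes, in Spring XML (assuming the Spring "util" namespace is declared): all events are disabled by default, so the cache query events must be listed in includeEventTypes before a local listener will see them.]

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Events are disabled by default; list the ones to record. -->
    <property name="includeEventTypes">
        <list>
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_QUERY_EXECUTED"/>
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_QUERY_OBJECT_READ"/>
        </list>
    </property>
</bean>
```

In code, the equivalent is cfg.setIncludeEventTypes(EventType.EVT_CACHE_QUERY_EXECUTED, EventType.EVT_CACHE_QUERY_OBJECT_READ).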
Re: Grid freezing
I figured this out at long last... The root cause of the problem was the Scan object's toString() method:

@Override public String toString() {
    return ReflectionToStringBuilder.reflectionToString(this);
}

It used Apache commons-lang's ReflectionToStringBuilder, and this jar was not in the Ignite libs folder. Once it was there, transactions worked with multiple server nodes under heavy load. The problem was easy to reproduce: just remove this jar from the libs folder. The error message that results is the same as shown above and does not point to the real underlying cause, which is what caused the confusion:

org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.implicitSingleResult(IgniteTxLocalAdapter.java:352)

I am guessing that because ReflectionToStringBuilder is only used in the Scan object's toString() method, it is never needed by Ignite until, at some point, an exception is thrown, a logger calls toString() on the Scan object, and the resulting java.lang.NoClassDefFoundError is not anticipated... I tried invoking ReflectionToStringBuilder.toString() more directly from the dequeuer code itself, and I did see the expected error in the logs:

java.lang.NoClassDefFoundError: org/apache/commons/lang/builder/ReflectionToStringBuilder at...
Re: QueryEntity Based Configuration
Thank you for the response. I can't do the first option (flatten the map). And about the second one, I didn't understand; can you please explain it in detail? P.S. My cache value is a List of Models, and I want to get Models sorted (by some fields) quickly. That is why I decided to use indexes and cache queries. If Ignite does not support my case, how can I solve my problem another way?
Re: How can I get Ignite security plugin to work with JDBC thin client?
Hi Vladimir, There are at least two problems that I've found here. The first is the query execution engine, as you have just pointed out. The second one is the JDBC thin driver itself. In JdbcThinTcpIo.handshake, it doesn't pass in user credentials (i.e., user & password), as shown below, so how can the server identify the user during the handshake?

/**
 * Used for versions: 2.1.5 and 2.3.0. The protocol version is changed but handshake format isn't changed.
 *
 * @param ver JDBC client version.
 * @throws IOException On IO error.
 * @throws SQLException On connection reject.
 */
public void handshake(ClientListenerProtocolVersion ver) throws IOException, SQLException {
    BinaryWriterExImpl writer = new BinaryWriterExImpl(null, new BinaryHeapOutputStream(HANDSHAKE_MSG_SIZE), null, null);

    writer.writeByte((byte) ClientListenerRequest.HANDSHAKE);

    writer.writeShort(ver.major());
    writer.writeShort(ver.minor());
    writer.writeShort(ver.maintenance());

    writer.writeByte(ClientListenerNioListener.JDBC_CLIENT);

    writer.writeBoolean(distributedJoins);
    writer.writeBoolean(enforceJoinOrder);
    writer.writeBoolean(collocated);
    writer.writeBoolean(replicatedOnly);
    writer.writeBoolean(autoCloseServerCursor);
    writer.writeBoolean(lazy);
    writer.writeBoolean(skipReducerOnUpdate);

    send(writer.array());

    BinaryReaderExImpl reader = new BinaryReaderExImpl(null, new BinaryHeapInputStream(read()), null, null, false);

    boolean accepted = reader.readBoolean();

    if (accepted) {
        if (reader.available() > 0) {
            byte maj = reader.readByte();
            byte min = reader.readByte();
            byte maintenance = reader.readByte();

            String stage = reader.readString();

            long ts = reader.readLong();
            byte[] hash = reader.readByteArray();

            igniteVer = new IgniteProductVersion(maj, min, maintenance, stage, ts, hash);
        }
        else
            igniteVer = new IgniteProductVersion((byte)2, (byte)0, (byte)0, "Unknown", 0L, null);
    }
    else {
        short maj = reader.readShort();
        short min = reader.readShort();
        short maintenance = reader.readShort();

        String err = reader.readString();

        ClientListenerProtocolVersion srvProtocolVer = ClientListenerProtocolVersion.create(maj, min, maintenance);

        if (VER_2_1_5.equals(srvProtocolVer))
            handshake(VER_2_1_5);
        else if (VER_2_1_0.equals(srvProtocolVer))
            handshake_2_1_0();
        else {
            throw new SQLException("Handshake failed [driverProtocolVer=" + CURRENT_VER +
                ", remoteNodeProtocolVer=" + srvProtocolVer + ", err=" + err + ']',
                SqlStateCode.CONNECTION_REJECTED);
        }
    }
}
Re: Failed to find query with ID: 0
You are totally right; that's why there is a ticket for the issue, which is now fixed. Best Regards, Igor On Wed, Nov 15, 2017 at 6:40 PM, kenn_thomp...@qat.com < kenn_thomp...@qat.com> wrote: > Igor Sapego-2 wrote > > You get "Table exists" error, because table is successfully > > created by the previous query. Error "Failed to find query > > with ID:[xx]" generated when ODBC statement is closed > > after execution of query that produced empty result set. > > That makes sense, but I wouldn't expect a DDL statement like CREATE TABLE > to > return a resultset. In essence, it should just pass or fail. The current > behavior is always fail (throw an exception) with a cryptic message.
Re: Failed to find query with ID: 0
Igor Sapego-2 wrote > You get "Table exists" error, because table is successfully > created by the previous query. Error "Failed to find query > with ID:[xx]" generated when ODBC statement is closed > after execution of query that produced empty result set. That makes sense, but I wouldn't expect a DDL statement like CREATE TABLE to return a resultset. In essence, it should just pass or fail. The current behavior is always fail (throw an exception) with a cryptic message.
Re: Failed to find query with ID: 0
You should use a driver from 2.2 too if you use nodes of version 2.2. You get "Table exists" error, because table is successfully created by the previous query. Error "Failed to find query with ID:[xx]" generated when ODBC statement is closed after execution of query that produced empty result set. Best Regards, Igor On Wed, Nov 15, 2017 at 5:42 PM, kenn_thomp...@qat.com < kenn_thomp...@qat.com> wrote: > For what it's worth, a little more poking - the create table sql included > the > option from the example in the docs... > > CREATE TABLE IF NOT EXISTS TestTable [blah blah blah] > > This was used over and over again to test with, always with the same > response (Failed to find query). To see if maybe the options were not > correct, I issued this SQL instead: > > CREATE TABLE TestTable [blah blah blah] > > This was a different error: Table already exists. I then ran it with > TestTable2 and no IF NOT EXISTS clause, and got an error, however the > second > time I ran it I got the Table Exists error. It appears than the DDL sql > returns an error but is actually successful. > > Just posting this for future searchers...
Re: Failed to find query with ID: 0
For what it's worth, a little more poking - the create table SQL included the option from the example in the docs... CREATE TABLE IF NOT EXISTS TestTable [blah blah blah] This was used over and over again to test with, always with the same response (Failed to find query). To see if maybe the options were not correct, I issued this SQL instead: CREATE TABLE TestTable [blah blah blah] This was a different error: Table already exists. I then ran it with TestTable2 and no IF NOT EXISTS clause, and got an error; however, the second time I ran it I got the Table Exists error. It appears that the DDL SQL returns an error but is actually successful. Just posting this for future searchers...
Re: Failed to find query with ID: 0
Igor Sapego-2 wrote > You used ODBC through C# to create table with CREATE TABLE > command, right? If this is the case, then this is a known issue [1] for > Ignite 2.3, which, I assume you are using. The issue already fixed > and the patch was merged to master. As a workaround, you may try > using other Ignite release, like 2.2 So I pulled down 2.2, again with a fully default configuration. This time (after backing down the ODBC driver to 2.1), I now get a different error: [HY000] Not enough data in the stream. For what it's worth, I get the same error when sending the SQL through the sqlline command-line tool. Next up: 2.3+
Logging query on server node sent from client node
Is there a way to log queries sent by a client node on the server node? I tried this way:

IgnitePredicate<EventAdapter> locLsnr = new IgnitePredicate<EventAdapter>() {
    @Override public boolean apply(EventAdapter evt) {
        switch (evt.type()) {
            case EventType.EVT_CACHE_QUERY_EXECUTED:
                CacheQueryExecutedEvent cacheExecEvent = (CacheQueryExecutedEvent)evt;
                String clause = cacheExecEvent.clause();
                System.out.println("query " + clause);
                break;
            case EventType.EVT_CACHE_QUERY_OBJECT_READ:
                CacheQueryExecutedEvent cacheObjReadEvent = (CacheQueryExecutedEvent)evt;
                String clause2 = cacheObjReadEvent.clause();
                System.out.println("query " + clause2);
                break;
            default:
                System.out.println(evt.getClass());
                break;
        }
        return true;
    }
};

ignite.events().localListen(locLsnr, EventType.EVT_CACHE_QUERY_EXECUTED, EventType.EVT_CACHE_QUERY_OBJECT_READ);

But this did not work.
Re: QueryEntity Based Configuration
Hi, Indexes for nested collections, and in particular for maps, are not supported. If you need indexes for this data, please consider analyzing and refactoring the problem domain. You could either flatten the map so that its keys become fields of the Model, or move the data into a separate cache (or caches) related to Model.
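[A minimal sketch of the "flatten the map" suggestion; the class and field names are hypothetical. The idea is to promote known map keys to plain fields so that each can carry its own index (with Ignite, each flattened field would be annotated @QuerySqlField(index = true)).]

```java
import java.util.Map;

class FlatModel {
    // Before: Map<String, Double> scores - not indexable as a nested map.
    // After: one plain field per known key, each independently indexable.
    final double mathScore;
    final double englishScore;

    FlatModel(Map<String, Double> scores) {
        this.mathScore = scores.getOrDefault("math", 0.0);
        this.englishScore = scores.getOrDefault("english", 0.0);
    }
}
```

This only works when the set of map keys is known up front; for open-ended keys, the second suggestion (a separate key/value cache related to Model) is the usual alternative.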
Deserializing to an object from a request based on SqlQueryField.
Hi All, I'm facing a problem getting an object back from a query based on SqlFieldsQuery. Let me give you an example. The type of field A is string; the type of field B is CustomizedObject, with its own structure. In the table STRUCTURE we can see that B has been correctly serialized. The problem is that when I try to retrieve B with the following request, I don't obtain the deserialized value of my object; I get a new structure composed of 2 fields (an array (with the pseudo object) and a raw field). So my question is: how can I deserialize the value of the field in order to get the real original object? Any help would be very much appreciated. Regards, Here is the request:

string select = "SELECT A, B " + "FROM \"" + CACHE + "\".STRUCTURE ";
var fieldsQuery = new SqlFieldsQuery(select);
fieldsQuery.EnableDistributedJoins = true;
var res = cacheOneWayProposals.QueryFields(fieldsQuery).GetAll();
foreach (var row in res)
{
    string A = row[0].ToString();
    var B = row[1];
}
Re: Best way to configure a single node embedded ignite 2.3
Hello, I think there is no special setup for embedded mode. If I am not mistaken, there is only one important difference between embedded and standalone modes: in embedded mode, the Ignite node is started in the same process. Perhaps the following page will be helpful as well: https://apacheignite.readme.io/docs/clients-vs-servers Thanks!
Re: No Error/Exception on Data Loss ?
Hi, If a client is up before a node leaves the cluster, then it can listen to the org.apache.ignite.events.EventType#EVT_NODE_LEFT and org.apache.ignite.events.EventType#EVT_NODE_FAILED events. If a client may start after a possible server-node failure, then there is no out-of-the-box solution, but you could track the above-mentioned events on the server nodes and store a flag somewhere indicating that a server node has left the cluster. Enabling backups would probably be a better choice, though.
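[A sketch of subscribing to the topology events mentioned above; it assumes EVT_NODE_LEFT and EVT_NODE_FAILED have been enabled via IgniteConfiguration.setIncludeEventTypes(...), since events are disabled by default.]

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.events.DiscoveryEvent;
import org.apache.ignite.events.EventType;

class TopologyWatcher {
    static void watch(Ignite ignite) {
        ignite.events().localListen(evt -> {
            DiscoveryEvent de = (DiscoveryEvent) evt;
            // React here: log, alert, or set the "a node left" flag.
            System.out.println("Node gone: " + de.eventNode().id()
                + " (event=" + evt.name() + ")");
            return true; // keep listening
        }, EventType.EVT_NODE_LEFT, EventType.EVT_NODE_FAILED);
    }
}
```

On a client node, this only covers the window while the client is connected; the server-side flag described above is still needed for failures that happen before the client starts.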
Re: Failed to find query with ID: 0
Hi, You used ODBC through C# to create a table with the CREATE TABLE command, right? If so, then this is a known issue [1] in Ignite 2.3, which I assume you are using. The issue is already fixed and the patch was merged to master. As a workaround, you may try using another Ignite release, like 2.2, use a nightly build [2], or build Ignite from the master branch. [1] - https://issues.apache.org/jira/browse/IGNITE-6765 [2] - https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/ Best Regards, Igor On Wed, Nov 15, 2017 at 12:28 AM, kenn_thomp...@qat.com < kenn_thomp...@qat.com> wrote: > This should be super simple... > > Started 2 nodes from the command line, they saw each other and now show > servers=2. Completely default configuration. > > Installed the ODBC Driver and set up a DSN. > > Started a C# app to open a connection and issue a create table sql > statement. > > It fails with "Failed to find query with ID:[xx]" where xx is the number of > attempts since the node started. > > Poured thru the docs and google to no avail. > > Help?
Re: AW: NullPointer in GridAffinityAssignment.initPrimaryBackupMaps
Lukas, You are right; IGNITE_HOME could not be the problem in embedded mode. Do you have any specific affinity function parameters in the cache configuration? Thanks, Alexey