Hello! What is the type that you are storing in this cache? Can you please show the full cache configuration and the key and value classes?
Regards,
--
Ilya Kasnacheev

On Fri, 25 Jan 2019 at 00:49, Premachandran, Mahesh (Nokia - IN/Bangalore) <mahesh.premachand...@nokia.com> wrote:

> Hi,
>
> Sorry for the earlier confusion, the type of apn_id/apnId is indeed String. I had written a simple producer to publish messages to Kafka topics with random values, the types of which are:
>
> id                                          java.lang.String
> reportStartTime                             java.lang.Long
> reportEndTime                               java.lang.Long
> apnId                                       java.lang.String
> ggsnDiameterTotalEvents                     java.lang.Long
> apnIdVectorItemCount                        java.lang.Long
> requestType                                 java.lang.Long
> requestTypeNumberEvents                     java.lang.Long
> requestTypeImsi                             java.lang.String
> requestTypeImsiVectorItemCount              java.lang.Long
> requestTypeSuccessEvents                    java.lang.Long
> imsiDiameterSuccess                         java.lang.String
> imsiDiameterSuccessVectorItemCount          java.lang.Long
> diameterRequestsUnsuccessful                java.lang.Long
> imsiDiameterUnsuccessful                    java.lang.String
> imsiDiameterUnsuccessfulVectorItemCount     java.lang.Long
> requestDelaySum                             java.lang.Double
> requestDelayEvents                          java.lang.Long
> resultCode                                  java.lang.Long
> resultCodeEvents                            java.lang.Long
> resultCodeImsi                              java.lang.String
> resultCodeImsiVectorItemCount               java.lang.Long
> terminationCause                            java.lang.Long
> terminationCauseEvent                       java.lang.Long
>
> This is the statement that was used to create the table on Hive:
>
> CREATE TABLE apn_diameter_5_min (
>     id VARCHAR(36),
>     report_start_time BIGINT,
>     report_end_time BIGINT,
>     apn_id VARCHAR(200),
>     ggsn_diameter_total_events BIGINT,
>     apn_id_vector_item_count BIGINT,
>     request_type BIGINT,
>     request_type_number_events BIGINT,
>     request_type_imsi VARCHAR(16),
>     request_type_imsi_vector_item_count BIGINT,
>     request_type_success_events BIGINT,
>     imsi_diameter_success VARCHAR(16),
>     imsi_diameter_success_vector_item_count BIGINT,
>     diameter_requests_unsuccessful BIGINT,
>     imsi_diameter_unsuccessful VARCHAR(16),
>     imsi_diameter_unsuccessful_vector_item_count BIGINT,
>     request_delay_sum DOUBLE,
>     request_delay_events BIGINT,
>     result_code BIGINT,
>     result_code_events BIGINT,
>     result_code_imsi VARCHAR(16),
>     result_code_imsi_vector_item_count BIGINT,
>     termination_cause BIGINT,
>     termination_cause_event BIGINT
> ) CLUSTERED BY (id) INTO 2 BUCKETS
> STORED AS ORC
> TBLPROPERTIES ('transactional'='true');
>
> I am populating a BinaryObject using the BinaryObjectBuilder in my implementation of StreamSingleTupleExtractor.
>
> Mahesh
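For context, a minimal sketch of what such an extractor could look like, assuming the Avro payload has already been decoded into an org.apache.avro.generic.GenericRecord; the class name, the binary type name "ApnDiameter5Min" and the choice of key are illustrative assumptions, not the actual code from this thread:

    import java.util.AbstractMap;
    import java.util.Map;

    import org.apache.avro.generic.GenericRecord;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.binary.BinaryObject;
    import org.apache.ignite.binary.BinaryObjectBuilder;
    import org.apache.ignite.stream.StreamSingleTupleExtractor;

    // Illustrative sketch only: names and the key choice are assumptions.
    public class ApnDiameterExtractor implements StreamSingleTupleExtractor<GenericRecord, String, BinaryObject> {
        private final Ignite ignite;

        public ApnDiameterExtractor(Ignite ignite) {
            this.ignite = ignite;
        }

        @Override public Map.Entry<String, BinaryObject> extract(GenericRecord rec) {
            BinaryObjectBuilder b = ignite.binary().builder("ApnDiameter5Min");

            // Every field is set as a plain String/Long/Double that matches the Hive column type.
            // If one of these values were itself another binary object, the JDBC store would later
            // fail with "Can't infer the SQL type ... BinaryObjectImpl" when writing it through.
            String id = String.valueOf(rec.get("id"));
            b.setField("id", id);
            b.setField("reportStartTime", (Long) rec.get("reportStartTime"));
            b.setField("apnId", String.valueOf(rec.get("apnId")));
            // ... remaining fields set in the same way ...

            return new AbstractMap.SimpleEntry<>(id, b.build());
        }
    }

In a real Kafka deployment the extractor's input type would be the Kafka record type, with the Avro decoding done inside extract().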
> From: Ilya Kasnacheev <ilya.kasnach...@gmail.com>
> Sent: Thursday, January 24, 2019 7:39 PM
> To: user@ignite.apache.org
> Subject: Re: Error while persisting from Ignite to Hive for a BinaryObject
>
> Hello!
>
> In your XML, apn_id looks like a String. Is it possible that the actual type of apnId in ApnDiameter5Min is neither Long nor String but some other complex type? Can you attach those types?
>
> Regards,
> --
> Ilya Kasnacheev
>
> On Wed, 23 Jan 2019 at 18:37, Premachandran, Mahesh (Nokia - IN/Bangalore) <mahesh.premachand...@nokia.com> wrote:
>
> Hi Ilya,
>
> The field apn_id is of type Long. I have been using the CacheJdbcPojoStore; does that map BinaryObjects to the database schema, or is it only for Java POJOs? I have attached the XML I am using with the client.
>
> Mahesh
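The XML attached in that message is not included in the archive. As a rough illustration of the per-column mapping CacheJdbcPojoStore relies on (it can work with binary objects as long as each mapped column resolves to a plain field), a programmatic Java equivalent might look like the sketch below; the cache name, data source bean name and the handful of fields shown are assumptions based on the Hive table above, not the attached configuration:

    import java.sql.Types;

    import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
    import org.apache.ignite.cache.store.jdbc.JdbcType;
    import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
    import org.apache.ignite.cache.store.jdbc.dialect.BasicJdbcDialect;
    import org.apache.ignite.configuration.CacheConfiguration;

    // Illustrative sketch only -- not the XML attached in this thread.
    public class ApnDiameterCacheConfig {
        public static CacheConfiguration<String, Object> cacheConfiguration() {
            JdbcType type = new JdbcType();
            type.setCacheName("ApnDiameter5MinCache");   // assumed cache name
            type.setDatabaseTable("apn_diameter_5_min");
            type.setKeyType(String.class);
            type.setValueType("ApnDiameter5Min");        // binary type name used by the builder
            type.setKeyFields(new JdbcTypeField(Types.VARCHAR, "id", String.class, "id"));
            type.setValueFields(
                new JdbcTypeField(Types.BIGINT,  "report_start_time", Long.class,   "reportStartTime"),
                new JdbcTypeField(Types.VARCHAR, "apn_id",            String.class, "apnId"),
                // ... one JdbcTypeField per remaining column ...
                new JdbcTypeField(Types.DOUBLE,  "request_delay_sum", Double.class, "requestDelaySum"));

            CacheJdbcPojoStoreFactory<String, Object> storeFactory = new CacheJdbcPojoStoreFactory<>();
            storeFactory.setDataSourceBean("hiveDataSource"); // assumed bean holding the Hive JDBC DataSource
            storeFactory.setDialect(new BasicJdbcDialect());
            storeFactory.setTypes(type);

            CacheConfiguration<String, Object> ccfg = new CacheConfiguration<>("ApnDiameter5MinCache");
            ccfg.setCacheStoreFactory(storeFactory);
            ccfg.setWriteThrough(true);
            ccfg.setReadThrough(true);
            return ccfg;
        }
    }

With a mapping like this, the store pulls each column value from the corresponding BinaryObject field; the "Can't infer the SQL type ... BinaryObjectImpl" error further down shows that what reached the Hive driver for the apn_id column was a whole BinaryObjectImpl rather than a plain String or Long.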
> From: Ilya Kasnacheev <ilya.kasnach...@gmail.com>
> Sent: Wednesday, January 23, 2019 6:43 PM
> To: user@ignite.apache.org
> Subject: Re: Error while persisting from Ignite to Hive for a BinaryObject
>
> Hello!
>
> I think that your CacheStore implementation is confused by nested fields or binary object values (what is the type of apn_id?). Consider using CacheJdbcBlobStoreFactory instead, which will serialize the value to one big field in BinaryObject format.
>
> Regards,
> --
> Ilya Kasnacheev
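A minimal sketch of wiring that suggestion up programmatically; the connection URL, credentials and cache name are placeholders, and whether the blob store's default table layout and queries are accepted by the Hive JDBC driver would still need to be verified. Note that this approach stores each entry as serialized key/value blobs, so the individual columns of apn_diameter_5_min would no longer be populated.

    import org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStoreFactory;
    import org.apache.ignite.configuration.CacheConfiguration;

    // Illustrative sketch only; URL, credentials and cache name are placeholders.
    public class BlobStoreCacheConfig {
        public static CacheConfiguration<String, Object> cacheConfiguration() {
            CacheJdbcBlobStoreFactory<String, Object> storeFactory = new CacheJdbcBlobStoreFactory<>();
            storeFactory.setConnectionUrl("jdbc:hive2://hive-host:10000/default"); // placeholder
            storeFactory.setUser("user");                                          // placeholder
            storeFactory.setPassword("password");                                  // placeholder

            CacheConfiguration<String, Object> ccfg = new CacheConfiguration<>("ApnDiameter5MinCache");
            ccfg.setCacheStoreFactory(storeFactory);
            ccfg.setWriteThrough(true);
            ccfg.setReadThrough(true);
            return ccfg;
        }
    }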
> On Wed, 23 Jan 2019 at 15:47, Premachandran, Mahesh (Nokia - IN/Bangalore) <mahesh.premachand...@nokia.com> wrote:
>
> Hi all,
>
> I am trying to stream some data from Kafka to Ignite using IgniteDataStreamer and use 3rd-party persistence to move it to Hive. The data on Kafka is in Avro format, which I am deserialising, populating an Ignite BinaryObject using the binary builder and pushing it to Ignite. It works well when I do not enable 3rd-party persistence, but once that is enabled, it throws the following exception:
>
> [12:32:07] (err) Failed to execute compound future reducer: GridCompoundFuture [rdc=null, initFlag=1, lsnrCalls=2, done=true, cancelled=false, err=class o.a.i.IgniteCheckedException: DataStreamer request failed [node=292ab229-61fb-4d61-8f08-33c8abd310a2], futs=[true, true, true]]
> class org.apache.ignite.IgniteCheckedException: DataStreamer request failed [node=292ab229-61fb-4d61-8f08-33c8abd310a2]
>     at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.onResponse(DataStreamerImpl.java:1912)
>     at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$3.onMessage(DataStreamerImpl.java:346)
>     at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
>     at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
>     at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
>     at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
>     at org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: javax.cache.integration.CacheWriterException: class org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: Failed to update keys (retry update if possible).: [2]
>     at org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1280)
>     at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1734)
>     at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1087)
>     at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:788)
>     at org.apache.ignite.internal.processors.datastreamer.DataStreamerCacheUpdaters$Individual.receive(DataStreamerCacheUpdaters.java:121)
>     at org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
>     at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:400)
>     at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:305)
>     at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:60)
>     at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:90)
>     ... 6 more
> Caused by: class org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: Failed to update keys (retry update if possible).: [2]
>     at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.onPrimaryError(GridNearAtomicAbstractUpdateFuture.java:397)
>     at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onPrimaryResponse(GridNearAtomicSingleUpdateFuture.java:253)
>     at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:303)
>     at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:300)
>     at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.map(GridDhtAtomicAbstractUpdateFuture.java:390)
>     at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1805)
>     at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
>     at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
>     at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:483)
>     at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:443)
>     at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
>     at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1117)
>     at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:606)
>     at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2372)
>     at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2349)
>     at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1084)
>     ... 13 more
>     Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to update keys.
>         at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.UpdateErrors.addFailedKey(UpdateErrors.java:108)
>         at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateResponse.addFailedKey(GridNearAtomicUpdateResponse.java:329)
>         at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2560)
>         at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:1883)
>         at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1736)
>         ... 23 more
>         Suppressed: class org.apache.ignite.IgniteCheckedException: Runtime failure on search row: org.apache.ignite.internal.processors.cache.tree.SearchRow@78ca4051
>             at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1637)
>             at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1249)
>             at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.invoke(GridCacheOffheapManager.java:1529)
>             at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:352)
>             at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1767)
>             at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2421)
>             ... 25 more
>         Caused by: class org.apache.ignite.IgniteCheckedException: javax.cache.CacheException: Failed to set statement parameter name: apn_id
>             at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:597)
>             at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.update(GridCacheMapEntry.java:4927)
>             at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4746)
>             at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4460)
>             at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:3083)
>             at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.access$6200(BPlusTree.java:2977)
>             at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1726)
>             at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1610)
>             ... 30 more
>         Caused by: javax.cache.integration.CacheWriterException: javax.cache.CacheException: Failed to set statement parameter name: apn_id
>             ... 38 more
>         Caused by: javax.cache.CacheException: Failed to set statement parameter name: apn_id
>             at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.fillParameter(CacheAbstractJdbcStore.java:1391)
>             at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.fillValueParameters(CacheAbstractJdbcStore.java:1443)
>             at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.writeUpsert(CacheAbstractJdbcStore.java:919)
>             at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.write(CacheAbstractJdbcStore.java:1027)
>             at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:586)
>             ... 37 more
>         Caused by: java.sql.SQLException: Can't infer the SQL type to use for an instance of org.apache.ignite.internal.binary.BinaryObjectImpl. Use setObject() with an explicit Types value to specify the type to use.
>             at org.apache.hive.jdbc.HivePreparedStatement.setObject(HivePreparedStatement.java:624)
>             at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.fillParameter(CacheAbstractJdbcStore.java:1385)
>             ... 41 more
>
> Is this a configuration mistake on my end?
> I used Ignite Web Console to get the config XML to create the table on Ignite and connect to Hive.
>
> Regards,
> Mahesh
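One more detail worth noting from the trace above: the DataStreamerCacheUpdaters$Individual frame suggests the streamer was running with allowOverwrite(true), which is what routes each entry through cache.put() and therefore through the write-through store (with the default allowOverwrite(false), the cache store is skipped entirely). A minimal sketch of that streaming path, with the config path, cache name, type name and field values as placeholder assumptions:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteDataStreamer;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.binary.BinaryObject;

    // Illustrative sketch only; config path, cache name and field values are placeholders.
    public class StreamBinaryObjects {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start("client-config.xml"); // placeholder client config

            try (IgniteDataStreamer<String, BinaryObject> streamer =
                     ignite.dataStreamer("ApnDiameter5MinCache")) {
                // Required for write-through: with the default (false), updates bypass
                // cache.put() and the configured CacheStore is never invoked.
                streamer.allowOverwrite(true);
                // Tell the streamer the values are already in binary form.
                streamer.keepBinary(true);

                BinaryObject value = ignite.binary().builder("ApnDiameter5Min")
                    .setField("id", "some-id")
                    .setField("apnId", "some-apn")
                    .setField("reportStartTime", 0L)
                    .build();

                streamer.addData("some-id", value);
            }
        }
    }

In the Kafka case, the KafkaStreamer would then be given this data streamer and the tuple extractor via its setStreamer(...) and setSingleTupleExtractor(...) setters.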