Re: Connection problem between client and server

2018-01-09 Thread Jeff Jiao
Hi Denis,

Does the Ignite dev team have any feedback on this?

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


How can we get the reason why Ignite shut down by itself?

2018-01-09 Thread aa...@tophold.com
Hi All, 

We have a cache node (only one node, not a cluster) with native persistence enabled, and this cache is updated frequently.

Not that frequently, though: we use this cache to aggregate the open/close and high/low prices, and there are currently fewer than 1000 updates per second.

We use cache#invoke to update the price for the corresponding key every time.

But roughly every hour the cache just shuts down by itself, and we get:
CacheStoppedException: Failed to perform cache operation (cache is stopped)

The underlying server in fact has 64 GB of memory with 30 GB+ still free, and GC is normal with no full GC triggered.

The exception stack includes no specific reason why it shut down by itself. Is there any place that prints the reason why the node stopped the cache?
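For context, the update we run inside cache#invoke is essentially the following OHLC aggregation (sketched here as a self-contained plain Java class so it compiles on its own; the field names are illustrative, and in the real code this logic sits inside an EntryProcessor):

```java
import java.io.Serializable;

// Open/high/low/close bar kept per instrument key. cache.invoke() applies
// one price tick atomically on the node that owns the key.
public class OhlcBar implements Serializable {
    double open, high, low, close;
    long ticks;

    // Apply a single trade price to the bar.
    public void apply(double price) {
        if (ticks == 0) {
            open = high = low = price;
        } else {
            high = Math.max(high, price);
            low = Math.min(low, price);
        }
        close = price;
        ticks++;
    }
}
```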



Regards
Aaron

aa...@tophold.com


Migrating from Oracle to Apache Ignite.

2018-01-09 Thread rizal123
Hi,

I have a project/PoC about migrating an Oracle database into in-memory Apache
Ignite.

First of all, this is my topology.

In case the image is not showing: https://ibb.co/cbi5cR

I have done the following:
1. Created a server node cluster and imported the Oracle schema into it.
2. Loaded data from Oracle into the server cluster using loadCache.
3. In my application, changed the datasource to the Ignite cluster (just one
IP address). Currently I am using JDBC Thin.
4. Started my application, and it is up and running well.

I have the following problems:
1. JDBC Thin does not support transactional SQL. I really need this ticket
to be fixed.
2. Multiple connection IP addresses for JDBC Thin, or a load balancer for
JDBC Thin.
3. Automatic failover. I have tested 1 machine with a 3-node server cluster.
If the first node (the one that was started first) goes down, the connection
goes down too, even though 2 nodes are still alive.

Please let me know if there is any solution...





Re: JDBC thin client load balancing and failover support

2018-01-09 Thread rizal123
Yes, it is.
I can put some logic in place to re-route to another address if the primary
node is down, but I don't think that is a real solution...

By the way, here is another question from me.
What about the load balancer?

Currently I have 3 VMs (3 IP addresses), which will be used as a 3-node
Ignite cluster. Behind that, I have an Oracle database.
I'm using JDBC Thin, and in my datasource I put only one IP address.
How will the other nodes get the data if I only connect to one IP address?
Please let me know if there is something I missed...









IgniteOutOfMemoryException when using putAll instead of put

2018-01-09 Thread lmark58
For testing, I created a data region of 21 MB:

val regionCfg = new DataRegionConfiguration()
  .setName("testRegion")
  .setInitialSize(21 * 1024 * 1024)
  .setMaxSize(21 * 1024 * 1024)
  .setPersistenceEnabled(false)
  .setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU)
  .setMetricsEnabled(true)
  .setEvictionThreshold(.9)

I then created a cache that uses that data region.

val cfg = new CacheConfiguration[Int, String]
cfg.setName("testCache")
  .setCacheMode(CacheMode.PARTITIONED) // the most efficient mode that allows a client to read
  .setAtomicityMode(CacheAtomicityMode.ATOMIC)
  .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ETERNAL)) // never throw away data
  .setDataRegionName("testRegion")
  .setStatisticsEnabled(true)

val myCache = ignite.getOrCreateCache(cfg)

This region is large enough to hold about 9900 entries where the String
value has a length of 1200.

If I put the values into the cache one at a time using put, it works as I
expect: there is no error and a subset of the values is retained in the
cache.

But if I do a putAll of the same values, I get an IgniteOutOfMemoryException
(stack trace below). Is this expected behavior? The error suggests
enabling evictions, but they are already enabled.

This test runs just a single instance of Ignite embedded in the test
program. In production I will have much more memory, but I want to
understand whether this is a bug, since there can always be a case where a
putAll requires more memory than is currently available, and if it does not
evict pages I could hit this in prod.
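As a stopgap I am considering splitting the putAll into bounded chunks, so that a single batch never needs more free pages than eviction can reclaim at once. A sketch in plain Java (the chunk size is arbitrary, and the helper is my own, not an Ignite API):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ChunkedPut {
    // Split a map into insertion-ordered chunks of at most `size` entries.
    public static <K, V> List<Map<K, V>> chunks(Map<K, V> src, int size) {
        List<Map<K, V>> out = new ArrayList<>();
        Map<K, V> cur = new LinkedHashMap<>();
        for (Map.Entry<K, V> e : src.entrySet()) {
            cur.put(e.getKey(), e.getValue());
            if (cur.size() == size) {
                out.add(cur);
                cur = new LinkedHashMap<>();
            }
        }
        if (!cur.isEmpty())
            out.add(cur);
        return out;
    }
    // Then: for (Map<K, V> part : chunks(all, CHUNK)) cache.putAll(part);
}
```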

[19:41:08,411][ERROR][main][GridDhtAtomicCache]  Unexpected exception
during cache update
class org.apache.ignite.IgniteException: Runtime failure on search row:
org.apache.ignite.internal.processors.cache.tree.SearchRow@262b2c86
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1632)
at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1201)
at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:343)
at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1693)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2419)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:1882)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1735)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1627)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.map(GridNearAtomicUpdateFuture.java:812)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapOnTopology(GridNearAtomicUpdateFuture.java:664)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAll0(GridDhtAtomicCache.java:1068)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.putAll0(GridDhtAtomicCache.java:647)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.putAll(GridCacheAdapter.java:2760)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.putAll(IgniteCacheProxyImpl.java:1068)
at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.putAll(GatewayProtectedCacheProxy.java:928)
at IgniteMain$.$anonfun$main$8(IgniteMain.scala:64)
at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at scala.util.Try$.apply(Try.scala:209)
at IgniteMain$.main(IgniteMain.scala:64)
at IgniteMain.main(IgniteMain.scala)
Caused by: class org.apache.ignite.internal.mem.IgniteOutOfMemoryException:
Not enough memory allocated (consider increasing data region size or
enabling evictions) [policyName=RefData, size=22.0 MB]
at
org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.allocatePage(PageMemoryNoStoreImpl.java:292)
at
org.apache.ignite.internal.processors.cache.persistence.freelist.FreeListImpl.allocateDataPage(FreeListImpl.java:456)
...

Creating multiple Ignite grids on same machine

2018-01-09 Thread Raymond Wilson
I’m trying out a proposal for an Ignite based topology that will have two
grids. I am using Ignite.Net 2.3.



One grid is responsible for processing inbound events, the second is
responsible for servicing read requests against an immutable
query-efficient projection of the data held in the first grid. Both grids
will handle compute intensive tasks.



So:  Ingest Data is processed into Grid #1 (read-write, mutable with
‘Default-Mutable’ data region configuration) which is then projected into
Grid #2 (read-only, immutable with ‘Default-Immutable’ data region
configuration)



Both grids will use persistency, and I’m keen on having isolation between
the two so I can scale read/write sides of the operation independently, as
well as start/stop the ingest grid independently of the read grid.



I currently use different starting discovery ports (48500 for Grid #1 and
47500 for Grid #2) on localhost for allocating port numbers to the nodes of
each grid. These are assigned consistently for the server and client nodes
of each grid; server nodes in Grid #1 also act as clients to Grid #2, i.e.
such a process creates a server node on 48500 and a client node on 47500.



When I start the two grids, each with a single node, I see errors like the
one below (from the log of the mutable grid server node) that suggests
Ignite is treating both nodes as if they belonged to the same grid and is
exchanging partition maps between them. The exception text in the error
cites the Default-Immutable data region which is configured only on the
immutable grid server node.



ERROR 2018-01-10 13:23:17,001 24025ms
GridCachePartitionExchangeManagerb__22   - Failed to
wait for completion of partition map exchange (preloading will not start):
GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryEvent
[evtNode=TcpDiscoveryNode [id=026fc6c4-ae5e-4e4b-ba7f-b74c7b685875,
addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:48500], discPort=48500, order=11,
intOrder=8, lastExchangeTime=1515543793995, loc=true,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], topVer=11,
nodeId8=026fc6c4, msg=null, type=NODE_JOINED, tstamp=1515543796137],
crd=TcpDiscoveryNode [id=d0b35790-4bad-4060-b86e-5382ac65e57a,
addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:47500], discPort=47500, order=1,
intOrder=1, lastExchangeTime=1515543795296, loc=false,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=false],
exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
[topVer=11, minorTopVer=0], discoEvt=DiscoveryEvent
[evtNode=TcpDiscoveryNode [id=026fc6c4-ae5e-4e4b-ba7f-b74c7b685875,
addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:48500], discPort=48500, order=11,
intOrder=8, lastExchangeTime=1515543793995, loc=true,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], topVer=11,
nodeId8=026fc6c4, msg=null, type=NODE_JOINED, tstamp=1515543796137],
nodeId=026fc6c4, evt=NODE_JOINED], added=true, initFut=GridFutureAdapter
[ignoreInterrupts=false, state=DONE, res=false, hash=7528364], init=false,
lastVer=null, partReleaseFut=null, exchActions=null, affChangeMsg=null,
initTs=1515543796171, centralizedAff=false, changeGlobalStateE=null,
done=true, state=SRV, evtLatch=0,
remaining=[d0b35790-4bad-4060-b86e-5382ac65e57a], super=GridFutureAdapter
[ignoreInterrupts=false, state=DONE, res=class
o.a.i.IgniteCheckedException: Requested DataRegion is not configured:
Default-Immutable, hash=23921041]]

The error above cites both 48500 & 47500 discovery ports even though this
process was only ever configured with the 48500 discovery port for creation
of the Ignite server node. Given the discovery port ranges (48500-48600 and
47500-47600) don’t intersect, I am confused as to how this is happening.



Is there anything additional I need to add to the TCPDiscoverySPI other
than these settings to achieve two functioning grids running locally:



cfg.DiscoverySpi = new TcpDiscoverySpi()

{

LocalAddress = "127.0.0.1",

LocalPort = 48500 // 48500 for Grid #1, 47500 for Grid #2

};



Thanks,

Raymond.


Re: JDBC thin client load balancing and failover support

2018-01-09 Thread Denis Magda
It should be supported in 2.4.

Anyway, this should not be a showstopper because this logic can be embedded 
into an application. If the application loses a connection to one IP address it 
can re-connect to the cluster via a different one.
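A minimal sketch of that application-side failover with the thin JDBC driver might look like this (the node addresses and the retry order are illustrative assumptions, not a prescribed API):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ThinClientFailover {
    // Build the thin-driver URL for one node address ("host:port").
    public static String url(String addr) {
        return "jdbc:ignite:thin://" + addr;
    }

    // Try each known node address in turn until one accepts a connection.
    public static Connection connect(String[] addrs) throws SQLException {
        SQLException last = null;
        for (String addr : addrs) {
            try {
                return DriverManager.getConnection(url(addr));
            } catch (SQLException e) {
                last = e; // node is down; fall through to the next address
            }
        }
        if (last == null)
            throw new SQLException("no addresses given");
        throw last;
    }
}
```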

—
Denis

> On Jan 8, 2018, at 11:37 PM, rizal123  wrote:
> 
> Hi Val,
> 
> According to this ticket 'IGNITE-7029'.
> How long it will be *Live*?
> At least tell me the estimated time?
> 
> Because I have the same problem, about failover and Jdbc thin can access
> multiple Node (IP Address).
> And I need to go to POC.
> 
> regards,
> -Rizal
> 
> 
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Transaction operations using the Ignite Thin Client Protocol

2018-01-09 Thread Denis Magda
+ dev list

Igniters, Pavel,

I think we need to bring support for key-value transactions to one of the 
future versions. As far as I understand, the server node that a thin client 
connects to will be the transaction coordinator, and the client will simply 
offload everything to it. What do you think?

—
Denis

> On Jan 8, 2018, at 1:12 AM, kotamrajuyashasvi  
> wrote:
> 
> Hi
> 
> I would like to perform Ignite Transaction operations from a C++ program
> using the Ignite Thin Client Protocol. Is it possible to do so ? If this
> feature is not available now, will it be added in future ? 
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Ignite Memory Storage Options

2018-01-09 Thread Denis Magda
1. Have a data region with Ignite persistence enabled and assign a small 
amount of RAM to that region. The more RAM you dedicate to the region, the 
faster your queries will be.

2. Define two data regions - the first won’t have the persistence enabled and 
the second will be defined as in 1.
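A sketch of option 2 in Spring XML (the region names and sizes are illustrative):

```xml
<!-- Two data regions: the first is memory-only, the second is persistent. -->
<property name="dataStorageConfiguration">
  <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <property name="dataRegionConfigurations">
      <list>
        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
          <property name="name" value="inMemoryRegion"/>
          <property name="persistenceEnabled" value="false"/>
        </bean>
        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
          <property name="name" value="persistentRegion"/>
          <property name="persistenceEnabled" value="true"/>
          <property name="maxSize" value="#{256L * 1024 * 1024}"/>
        </bean>
      </list>
    </property>
  </bean>
</property>
```

Each cache then picks its region via CacheConfiguration#setDataRegionName.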

—
Denis

> On Jan 5, 2018, at 12:24 AM, rishi007bansod  wrote:
> 
> Hi,
>I am using Ignite version 2.3.0, I want to know whether it is possible
> to, 
> 
> (1) store all cache data in the disk (no data in memory at all)
> (2) store exclusive set of data in memory and on disk i.e. data stored in
> memory should be available in memory only and data stored on disk should be
> available on disk only(disk should not have the superset of data)
> 
> Thanks,
> Rishikesh
> 
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: JDBC Thin Client with Transactions

2018-01-09 Thread Denis Magda
Hi,

The transactional SQL should be rolled out in Q2 this year.

The "java.sql.SQLFeatureNotSupportedException: Transactions are not supported" 
exception is caused by this missing piece, yes. You can't execute INSERT, 
UPDATE, DELETE, or MERGE queries inside of Ignite transactions.

However, you can run individual INSERTs, UPDATEs, etc. for sure.

—
Denis

> On Jan 4, 2018, at 8:55 PM, Teki  wrote:
> 
> Hi Denis,
> 
> When will this feature be available? Is this why I am getting the error
> mentioned in the post below?
> Please assist. Thanks in advance!!
> 
> http://apache-ignite-users.70518.x6.nabble.com/insert-into-ignite-cache-from-Netezza-database-using-talend-tp19295.html
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Best practice Apache Ignite Data-Grid with Java EE application

2018-01-09 Thread Franz Mathauser
Hi Val,

thanks for the fast feedback ;)

Best Regards,
Franz

vkulichenko wrote on Tue., Jan 9, 2018 at 00:27:

> Hi Franz,
>
> Running an Ignite client node embedded into an application would be the
> best
> way to interact with the cluster. You will have all the APIs available and
> also it's the preferred option from performance standpoint.
>
> Ignite already has thread pool pre configured, there is no need to specify
> anything explicitly unless you have specific performance concerns. I would
> start with creating a simple setup and running performance tests to see how
> it behaves.
>
> -Val
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SQL and backing cache question

2018-01-09 Thread slava.koptilin
Hi Naveen,

> Does it mean we do not need to develop these classes OR
> just dont need to deployed on server node's classpath. 
In case of using BinaryObject, you do not need to create the key and value
classes.

> How do I resolve this issue?? 
It seems that the "CITY_DETAILS_BINARY" cache was not created.
I've just tried a simple example and it works...
Anyway, I would suggest retrieving data through JDBC or the Java API.
Thanks!





ClassCastException using cache and continuous query

2018-01-09 Thread diego.gutierrez
Hello,

I'm trying to upgrade Ignite from 1.7.0 to 2.3.0 and I'm getting this
exception:

java.lang.ClassCastException:
com.company.datastore.ignite.model.IgniteFetchItem cannot be cast to
com.company.datastore.ignite.model.IgniteFetchItem.

No other changes were made, just the ignite dependency upgrade.

More context: An ignite cache is used as a work queue, so some items are
added to the cache and some logic should be executed when a new item is
added, for that a local listener is set:

Consumer<Iterable<CacheEntryEvent<?, IgniteFetchItem>>> localChangeHandler = (evts) -> {
  for (CacheEntryEvent<?, IgniteFetchItem> e : evts) {
 IgniteFetchItem fi = e.getValue();
 ...
  }
};
this.continousQuery.setLocalListener(new
CacheEntryListener<>(localChangeHandler));


The problem happens in the call to *IgniteFetchItem fi = e.getValue();*

Also, the class *IgniteFetchItem* contains only serializable fields.





Re: Re: Some question regards the DataStorageConfiguration when used in multiple data regions.

2018-01-09 Thread Denis Mekhanikov
Aaron,

Could you provide steps to reproduce the issue?
Can it be reproduced when there is only one cache and one data region? Or
when all caches are in a single data region?

Also, please clarify how you modify the caches and what queries you execute.

Denis

On Tue, Jan 2, 2018 at 16:26, aa...@tophold.com wrote:

> This only happens when I shut down the nodes with this persistable cache.
>
> After bringing those nodes back, they either throw this exception or hang
> up forever waiting for the initial partition map.
>
> [WARN ] 2018-01-02 13:19:08.318 [main] [ig]
> GridCachePartitionExchangeManager - Still waiting for initial partition map
> exchange [fut=GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryEvent
> [evtNode=TcpDiscoveryNode [id=bfb39f71-fa1d-4547-9b49-b59aed9ecbef,
> addrs=[10.31.23.18], sockAddrs=[iZuf62zdiq684kn72aatgjZ/10.31.23.18:47502],
> discPort=47502, order=37, intOrder=23, lastExchangeTime=1514899147831,
> loc=true, ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], topVer=37,
> nodeId8=bfb39f71, msg=null, type=NODE_JOINED, tstamp=1514898728282],
> crd=TcpDiscoveryNode [id=237a7983-a2d6-4f39-9f39-75cd8b17299c,
> addrs=[10.26.244.207], sockAddrs=[/10.26.244.207:47500], discPort=47500,
> order=1, intOrder=1, lastExchangeTime=1514898728181, loc=false,
> ver=2.3.0#20171028-sha1:8add7fd5, isClient=false],
> exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
> [topVer=37, minorTopVer=0], discoEvt=DiscoveryEvent
> [evtNode=TcpDiscoveryNode [id=bfb39f71-fa1d-4547-9b49-b59aed9ecbef,
> addrs=[10.31.23.18], sockAddrs=[iZuf62zdiq684kn72aatgjZ/10.31.23.18:47502],
> discPort=47502, order=37, intOrder=23, lastExchangeTime=1514899147831,
> loc=true, ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], topVer=37,
> nodeId8=bfb39f71, msg=null, type=NODE_JOINED, tstamp=1514898728282],
> nodeId=bfb39f71, evt=NODE_JOINED], added=true, initFut=GridFutureAdapter
> [ignoreInterrupts=false, state=DONE, res=true, hash=993782641], init=true,
> lastVer=null, partReleaseFut=PartitionReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=37, minorTopVer=0],
> futures=[ExplicitLockReleaseFuture [topVer=AffinityTopologyVersion
> [topVer=37, minorTopVer=0], futures=[]], TxReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=37, minorTopVer=0], futures=[]],
> AtomicUpdateReleaseFuture [topVer=AffinityTopologyVersion [topVer=37,
> minorTopVer=0], futures=[]], DataStreamerReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=37, minorTopVer=0], futures=[,
> exchActions=null, affChangeMsg=null, initTs=1514898728302,
> centralizedAff=false, changeGlobalStateE=null, done=false, state=SRV,
> evtLatch=0, remaining=[a364148f-c04f-487e-9935-5298662aa7cc,
> e7c123fd-5da8-45ca-a04a-65bafe9ed6c6, e417828d-793f-4f24-bf80-7945321d8f51,
> 237a7983-a2d6-4f39-9f39-75cd8b17299c, bed139ea-9ea9-42e4-b296-79ed51477ad0,
> d3890e42-6003-4d33-a5f2-52c7c4547f27,
> 9ece2c56-ced5-4710-be57-94d6bd529a90],
>
>
>
> Previously, when we used a DB as the backend storage, we never hit this
> issue. Attached are the two configurations.
>
> We have done everything required, like activating Ignite before using it,
> etc.
>
>
>
>
>
> --
> aa...@tophold.com
>
>
> *From:* aa...@tophold.com
> *Date:* 2018-01-02 17:30
> *To:* user 
> *Subject:* Re: Re: Some question regards the DataStorageConfiguration
> when used in multiple data regions.
>
> Hi Denis,
>
> Thanks for so quick response the NPE as:
>
> Caused by: java.lang.NullPointerException
>
> at
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:538)
> ~[ignite-core-2.3.0.jar!/:2.3.0]
>
> at
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$2.metadata(CacheObjectBinaryProcessorImpl.java:194)
> ~[ignite-core-2.3.0.jar!/:2.3.0]
>
> at
> org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1266)
> ~[ignite-core-2.3.0.jar!/:2.3.0]
>
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005)
> ~[ignite-core-2.3.0.jar!/:2.3.0]
>
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:284)
> ~[ignite-core-2.3.0.jar!/:2.3.0]
>
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:183)
> ~[ignite-core-2.3.0.jar!/:2.3.0]
>
> at
> org.apache.ignite.internal.binary.BinaryObjectImpl.reader(BinaryObjectImpl.java:830)
> ~[ignite-core-2.3.0.jar!/:2.3.0]
>
> at
> org.apache.ignite.internal.binary.BinaryObjectImpl.reader(BinaryObjectImpl.java:845)
> ~[ignite-core-2.3.0.jar!/:2.3.0]
>
> at
> org.apache.ignite.internal.binary.BinaryObjectImpl.field(BinaryObjectImpl.java:308)
> ~[ignite-core-2.3.0.jar!/:2.3.0]
>
> at
> org.apache.ignite.internal.processors.query.property.QueryBinaryProperty.fieldValue(QueryBinaryProperty.java:245)
> ~[ignite-core-2.3.0.jar!/:2.3.0]
>
> at
> 

Re: Date type field in select query is returning wrong value when Time zones of Ignite Client and Server are different

2018-01-09 Thread Ilya Kasnacheev
Hello!

I have confirmed that the problem exists, raised
https://issues.apache.org/jira/browse/IGNITE-7363

Regards,

-- 
Ilya Kasnacheev

2018-01-09 11:09 GMT+03:00 kotamrajuyashasvi :

> Hi
>
> Thanks for your response.
>
> Temporary work-around that I found was to set Ignite Servers JVM timezone
> and Ignite Client JVM Timezone to a common value using
> _JAVA_OPTIONS="-Duser.timezone=xxx"
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Apache Ignite & unixODBC and truncating text

2018-01-09 Thread Igor Sapego
I've checked the scenario with pure ODBC, and it seems like the issue is not
with the ODBC driver, as it lets you insert and select varchar data of any
length, but with the tools, which truncate any data the user passes to the
column size, which is set to 64 by default for variable-length columns.

I've created a ticket [1] for this issue so you can track our progress
with it.

[1] - https://issues.apache.org/jira/browse/IGNITE-7362

Best Regards,
Igor

On Tue, Jan 9, 2018 at 11:06 AM, bagsiur 
wrote:

> Yes, and maybe it will be difficult to find a solution for this error:
> https://bugzilla.xamarin.com/show_bug.cgi?id=37368
>
> If necessary, I can try to prepare VS 2017 and compile it one more
> time...
>
> So, here is my simple script:
>
> <?php
> ini_set("display_errors", 1);
> error_reporting(E_ALL);
>
> try {
>
> $ignite = new PDO('odbc:Apache Ignite');
> $ignite->setAttribute(PDO::ATTR_ERRMODE,
> PDO::ERRMODE_EXCEPTION);
>
> $sql = 'CREATE TABLE IF NOT EXISTS test_md5 (id int
> PRIMARY KEY, userkey
> LONGVARCHAR, server LONGVARCHAR, tsession LONGVARCHAR, tpost LONGVARCHAR,
> tget LONGVARCHAR, adddate int) WITH
> "atomicity=transactional,cachegroup=somegroup"';
> $ignite->exec($sql);
>
> for($i=0; $i <= 10; $i++){
> $dbs = $ignite->prepare("INSERT INTO test_md5 (id,
> userkey, server,
> tsession, tpost, tget, adddate) VALUES ($i, 'Lorem ipsum dolor sit amet,
> consectetur adipiscing elit, sed do elit, sed', 'b', 'c', 'd', 'e', 1)");
> $dbs->execute();
> }
>
>
> } catch (PDOException $e) {
> print "Error!: " . $e->getMessage() . "\n";
> die();
> }
>
> ?>
>
> If the TEXT is a little bit longer, I get an error like the one in my second post.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


RE: Why does Ignite de-allocate memory regions ONLY during shutdown?

2018-01-09 Thread Alexey Popov
Hi John,

Could you please re-phrase your question?
What issue are you trying to solve?

Probably, you should address your question directly to the dev list.

Anyway, you can read about the memory region architecture at [1] and [2].
Hope it helps.

[1] https://apacheignite.readme.io/docs/memory-architecture
[2] 
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Durable+Memory+-+under+the+hood

Thank you,
Alexey

From: John Wilson
Sent: Tuesday, January 9, 2018 1:58 AM
To: user@ignite.apache.org
Subject: Why does Ignite de-allocate memory regions ONLY during shutdown?

Hi,

I was looking at the UnsafeMemoryProvider and it looks to me that allocated 
direct memory regions are deallocated only during shutdown.

https://github.com/apache/ignite/blob/c5a04da7103701d4ee95910d90ba786a6ea5750b/modules/core/src/main/java/org/apache/ignite/internal/mem/unsafe/UnsafeMemoryProvider.java#L80

https://github.com/apache/ignite/blob/c5a04da7103701d4ee95910d90ba786a6ea5750b/modules/core/src/main/java/org/apache/ignite/internal/mem/unsafe/UnsafeMemoryProvider.java#L63

My question:

If a memory region has been allocated and, during execution, all data pages 
in the region are removed, why is the memory region not de-allocated then?

Thanks,





Re: xml and java configuration

2018-01-09 Thread Nikolay Izhikov
Hello, Mikael.

Yes, it is possible.

You can load an IgniteConfiguration through the Ignition.loadSpringBean [1]
method.
After that, you can modify the result as you want.
Please, look at example:

```
final String cfg = "modules/yardstick/config/ignite-localhost-
config.xml";

IgniteConfiguration nodeCfg = Ignition.loadSpringBean(cfg, "grid.cfg");

nodeCfg.setIgniteInstanceName("MyNewName");

Ignition.start(nodeCfg);
```

[1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite
/Ignition.html#loadSpringBean(java.lang.String,%20java.lang.String)

On Tue, 09/01/2018 at 15:19 +0100, Mikael wrote:
> Hi!
> 
> Is it possible to mix Ignite configuration so I can have the basic 
> configuration in my xml files and add some extra configuration from
> the 
> Java application or do I have to use one or the other ?
> 
> Mikael
> 
> 


xml and java configuration

2018-01-09 Thread Mikael

Hi!

Is it possible to mix Ignite configuration so I can have the basic 
configuration in my xml files and add some extra configuration from the 
Java application or do I have to use one or the other ?


Mikael




Re: off heap memory usage

2018-01-09 Thread colinc
Thanks for this. The referenced post mentions AllocatedPages rather than
PhysicalPages. Which would you advise is the most appropriate for this
application?





Re: NullPointerException in GridDhtPartitionDemander

2018-01-09 Thread ilya.kasnacheev
Hello!

I think you should seek ways to avoid creating a client for each request. This
is slow and dangerous, since clients are members of the topology and the join
process is non-trivial. Your rebalancing can be postponed by joining clients,
for example.

Regards,





Re: How to access ignite web console in AWS Ec2

2018-01-09 Thread ilya.kasnacheev
Hello!

If we're talking about the same thing, you can specify a non-random port when
running via

./ignite-web-console-linux --server:port 3000

(and open port 3000 in a firewall rule; you can also use any other port)

Regards,





Re: dotnet thin client - multiple hosts?

2018-01-09 Thread Alexey Popov
Hi Colin,

There is a ticket for this improvement.
https://issues.apache.org/jira/browse/IGNITE-7282

Thank you,
Alexey





Apache Ignite best practice

2018-01-09 Thread Borisov Sergey
Hi,
Sorry for my bad English.
I need advice on configuring Ignite, which is used as a SQL grid.
The task is rather simple: store information about connections to services
in real time and be able to search it quickly.
Please tell me in which direction to diagnose, and what the options are for
optimizing performance.
The expected production workload is about ~100-150k RPS and ~1 million rows
in the cache.

*Test Infrastructure:*
3 Ignite nodes (version 2.3) in kubernetes on 3 servers (4 CPUs, 16 GB RAM)
/JVM_OPTS/ = -Xms8g -Xmx8g -server -XX:+AlwaysPreTouch -XX:+UseG1GC
-XX:+DisableExplicitGC -XX:MaxDirectMemorySize=1024M
-XX:+ScavengeBeforeFullGC
/IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE/ = 1
/IGNITE_QUIET/ = false

*Cache structure:*
CREATE TABLE IF NOT EXISTS TEST
(
id varchar (8),
g_id varchar (17),
update_at bigint,
tag varchar (8),
ver varchar (4),
size smallint,
score real,
PRIMARY KEY (id)
) WITH "TEMPLATE = PARTITIONED, CACHE_NAME = TEST,
WRITE_SYNCHRONIZATION_MODE = FULL_ASYNC, BACKUPS = 0, ATOMICITY = ATOMIC";
CREATE INDEX IF NOT EXISTS idx_g_id_v ON TEST (ver ASC, g_id ASC);
CREATE INDEX IF NOT EXISTS idx_size ON TEST (size ASC);
CREATE INDEX IF NOT EXISTS idx_update_at ON TEST (update_at DESC);
CREATE INDEX IF NOT EXISTS idx_tag ON TEST (tag ASC);

*Queries executed while the application is running:*
1) Updating rows (60% of the workload):
MERGE INTO TEST (id, g_id, update_at, tag, ver, size, score) VALUES ()

2) Removing (3% of the workload):
DELETE FROM TEST WHERE id = ?

3) Once a minute, removing stale rows (TTL):
DELETE FROM TEST WHERE update_at <= ?

4) Fetching requested rows (37% of the workload):
(SELECT a.k FROM
  (SELECT id AS k, t.score AS s FROM TEST t
   WHERE t.update_at >= ${u} AND t.ver = ${v}
     AND t.g_id = '${g}' AND t.size >= ${cc1} AND t.size <= ${cc2}
     AND t.tag = `${t}`
     AND id NOT IN ('', '', '', '', , '')
   ORDER BY RAND() LIMIT 64) a
 ORDER BY POWER(${pp} - a.s, 2) ASC LIMIT 16)
UNION ALL
(SELECT b.k FROM
  (SELECT id AS k, t.score AS s FROM TEST t
   WHERE t.update_at >= ${u} AND t.ver = ${v}
     AND t.g_id = '${g}' AND t.size >= ${cc1} AND t.size <= ${cc2}
     AND (t.tag <> `${t}` OR t.tag IS NULL)
     AND id NOT IN ('', '', '', '', , '')
   ORDER BY RAND() LIMIT 64) b
 ORDER BY POWER(${pp} - a.s, 2) ASC LIMIT 16)
LIMIT 16

The first iteration was through the REST API:
https://apacheignite.readme.io/docs#section-sql-fields-query-execute
Up to 20k requests per minute: response times were 4ms for merge, 30ms for
select. Above 20k: merge & select 300ms - *9ms*, then complete degradation
and a crash.

The second iteration was through JDBC and batching:
1) every 3 seconds, 500 to 1000 rows: MERGE INTO T VALUES (...), (...), ... (...);
2) every 3 seconds, 0 to 150 rows: DELETE FROM T WHERE ID IN ('', '', ... '');

The performance increase was approximately 2.5 - 3 times, which is very small.
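The batched writes from the second iteration can be sketched with plain JDBC like this (the connection handling and row shape are illustrative, not the exact production code):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchWriter {
    // Parameterized MERGE, executed once per row via a JDBC batch.
    public static String mergeSql() {
        return "MERGE INTO TEST (id, g_id, update_at, tag, ver, size, score) "
             + "VALUES (?, ?, ?, ?, ?, ?, ?)";
    }

    // Flush a batch of rows every few seconds instead of one MERGE per row.
    public static void flush(Connection conn, Object[][] rows) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(mergeSql())) {
            for (Object[] row : rows) {
                for (int i = 0; i < row.length; i++)
                    ps.setObject(i + 1, row[i]);
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }
}
```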




Using native persistence question

2018-01-09 Thread Mikael
Silly question: if I have native persistence on and a node goes down for
some time (say I have a partitioned cache with backups, or a replicated
cache, so the cache lives on in the other nodes), will the old node handle
this correctly when it comes back online, throwing away any stale cache
entries it has stored on disk and starting to use the correct new entries
from the surviving nodes?


Mikael





Re: Date type field in select query is returning wrong value when Time zones of Ignite Client and Server are different

2018-01-09 Thread kotamrajuyashasvi
Hi

Thanks for your response. 

Temporary work-around that I found was to set Ignite Servers JVM timezone
and Ignite Client JVM Timezone to a common value using 
_JAVA_OPTIONS="-Duser.timezone=xxx"






RE: Apache Ignite & unixODBC and truncating text

2018-01-09 Thread bagsiur
Yes, and maybe it will be difficult to find a solution for this error:
https://bugzilla.xamarin.com/show_bug.cgi?id=37368

If necessary, I can try to prepare VS 2017 and compile it one more
time...

So, here is my simple script:

<?php
ini_set("display_errors", 1);
error_reporting(E_ALL);

try {

        $ignite = new PDO('odbc:Apache Ignite');
        $ignite->setAttribute(PDO::ATTR_ERRMODE,
PDO::ERRMODE_EXCEPTION);

$sql = 'CREATE TABLE IF NOT EXISTS test_md5 (id int PRIMARY 
KEY, userkey
LONGVARCHAR, server LONGVARCHAR, tsession LONGVARCHAR, tpost LONGVARCHAR,
tget LONGVARCHAR, adddate int) WITH
"atomicity=transactional,cachegroup=somegroup"';
$ignite->exec($sql);

for($i=0; $i <= 10; $i++){
$dbs = $ignite->prepare("INSERT INTO test_md5 (id, 
userkey, server,
tsession, tpost, tget, adddate) VALUES ($i, 'Lorem ipsum dolor sit amet,
consectetur adipiscing elit, sed do elit, sed', 'b', 'c', 'd', 'e', 1)");
$dbs->execute();
}


} catch (PDOException $e) {
print "Error!: " . $e->getMessage() . "\n";
die();
}

?>

If the TEXT is a little bit longer, I get an error like the one in my second post.


