Re: IgniteCheckedException: Failed to validate cache configuration. Cache store factory is not serializable.

2018-05-22 Thread rizal123
Are there any solutions?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


IgniteCheckedException: Failed to validate cache configuration. Cache store factory is not serializable.

2018-05-22 Thread rizal123
Dear Ignite Masters,

I am getting the exception "IgniteCheckedException: Failed to validate cache
configuration. Cache store factory is not serializable.".

*Here is my code:*

/**
 *
 */
package com.sybase365.mobiliser.custom.btpn.brand.ignite.custom.config;

import java.io.Serializable;
import java.math.BigDecimal;
import java.sql.SQLException;
import java.sql.Types;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Properties;

import javax.cache.configuration.Factory;
import javax.sql.DataSource;

import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.cache.store.jdbc.JdbcType;
import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
import org.apache.ignite.cache.store.jdbc.dialect.OracleDialect;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

import oracle.jdbc.pool.OracleDataSource;

/**
 * @author 17054072
 */
public class ClientConfigurationFactory implements Serializable {

    private static final long serialVersionUID = 4067320674189197181L;

    private static final Properties props = new Properties();
    /*static {
        try {
            InputStream in = IgniteConfiguration.class.getClassLoader()
                .getResourceAsStream("META-INF/spring/secret.properties");
            props.load(in);
        }
        catch (Exception ignored) {
            // No-op.
        }
    }*/

    public static class DataSources {
        public static final OracleDataSource INSTANCE_dsOracle_Btpndev =
            createdsOracle_Btpndev();

        private static OracleDataSource createdsOracle_Btpndev() {
            try {
                OracleDataSource dsOracle_Btpndev = new OracleDataSource();

                dsOracle_Btpndev.setURL("jdbc:oracle:thin:@10.1.92.63:1521:WOWDEV");
                dsOracle_Btpndev.setUser("BTPN_BM_02");
                dsOracle_Btpndev.setPassword("password");

                /*dsOracle_Btpndev.setURL(props.getProperty("dsOracle_Btpndev.jdbc.url"));
                dsOracle_Btpndev.setUser(props.getProperty("dsOracle_Btpndev.jdbc.username"));
                dsOracle_Btpndev.setPassword(props.getProperty("dsOracle_Btpndev.jdbc.password"));*/

                return dsOracle_Btpndev;
            }
            catch (SQLException ex) {
                throw new Error(ex);
            }
        }
    }

    public static IgniteConfiguration createConfiguration() throws Exception {
        IgniteConfiguration cfg = new IgniteConfiguration();

        cfg.setClientMode(true);
        cfg.setIgniteInstanceName("BrandCluster");

        TcpDiscoverySpi discovery = new TcpDiscoverySpi();

        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();

        ipFinder.setAddresses(Arrays.asList("10.1.92.137:47500"));

        discovery.setIpFinder(ipFinder);

        cfg.setDiscoverySpi(discovery);

        cfg.setCacheConfiguration(cacheSequenceCache());

        return cfg;
    }

    public static CacheConfiguration cacheSequenceCache() throws Exception {
        CacheConfiguration ccfg = new CacheConfiguration();

        ccfg.setName("SequenceCache");
        ccfg.setCacheMode(CacheMode.PARTITIONED);
        ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

        CacheJdbcPojoStoreFactory cacheStoreFactory = new CacheJdbcPojoStoreFactory();

        cacheStoreFactory.setDataSourceFactory(new Factory() {
            private static final long serialVersionUID = -8028910004810254541L;

            /** {@inheritDoc} */
            public DataSource create() {
                return DataSources.INSTANCE_dsOracle_Btpndev;
            }
        });

        cacheStoreFactory.setDialect(new OracleDialect());

        cacheStoreFactory.setTypes(jdbcTypeSequence(ccfg.getName()));

        ccfg.setCacheStoreFactory(cacheStoreFactory);

        ccfg.setReadThrough(true);
        ccfg.setWriteThrough(true);
        ccfg.setSqlSchema("PUBLIC");

        ArrayList qryEntities = new ArrayList();

        QueryEntity qryEntity = new QueryEntity();

        qryEntity.setKeyType("java.lang.String");
        qryEntity.setValueType("com.btpn.rizal.khaerul.model.Sequence");
        qryEntity.setTableName("SEQUENCE");
        qryEntity.setKeyFieldName("seqName");

        HashSet keyFields = new HashSet();

        keyFields.add("seqName");

        qryEntity.setKeyFields(keyFields);
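A note for readers hitting the same error: the factory passed to setDataSourceFactory must serialize, and an anonymous class created in instance context keeps a hidden reference to its enclosing object, which serialization then drags along. (In the code above the factory is built in a static method, so the culprit may be elsewhere, but the mechanics are worth seeing.) A minimal, stdlib-only sketch — all class names here are illustrative, not Ignite's:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class FactorySerializationDemo {
    /** Mimics javax.cache.configuration.Factory: Serializable with a create() method. */
    interface Factory<T> extends Serializable {
        T create();
    }

    /** A field that is NOT serializable, like a live connection pool. */
    static class NonSerializableResource { }

    // Non-serializable state held by the enclosing (outer) object.
    final NonSerializableResource resource = new NonSerializableResource();

    /** Anonymous factory created from an INSTANCE method: it captures `this`,
     *  so serializing it tries to serialize the whole enclosing object and fails. */
    Factory<String> capturingFactory() {
        return new Factory<String>() {
            @Override public String create() { return "ds"; }
        };
    }

    /** Static nested factory: no hidden reference to the enclosing instance. */
    static class SafeFactory implements Factory<String> {
        @Override public String create() { return "ds"; }
    }

    /** Returns true if `o` round-trips through Java serialization. */
    static boolean serializes(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        FactorySerializationDemo demo = new FactorySerializationDemo();
        System.out.println("anonymous (captures outer): " + serializes(demo.capturingFactory()));
        System.out.println("static nested:              " + serializes(new SafeFactory()));
    }
}
```

Replacing the anonymous factory with a static nested class (or keeping only serializable state reachable from it) is the usual fix.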


Re: SQL Transactional Support for Commit and rollback operation.

2018-04-11 Thread rizal123
Dear Ignite Masters,

Yes, I also need this feature. I hope it will be released soon. I hope.. :)

I have had to put my project 'migrate Oracle to Ignite' on hold because of this.

Thanks,
Rizal





Re: No transaction is currently active || Not allowed to create transaction on shared EntityManager - use Spring transactions or EJB CMT instead

2018-01-29 Thread rizal123
Hi Amir,

How do you register Ignite's Spring transaction manager?
Would you mind sharing your code?

Thanks for your help, Amir.





Re: No transaction is currently active || Not allowed to create transaction on shared EntityManager - use Spring transactions or EJB CMT instead

2018-01-29 Thread rizal123
Hi Evgenii,

Yes, I have tried that, without Spring transactions:

private int sequenceManual(String seqName) {
    int seq = 0;
    Query query;
    try {
        em.getTransaction().begin();
        query = this.em.createNativeQuery("UPDATE SEQUENCE SET SEQ_COUNT = SEQ_COUNT + "
            + INCREMENT + " WHERE SEQ_NAME = '" + seqName + "'");
        seq = (Integer) query.executeUpdate();
        em.getTransaction().commit();

And I also get this error:
java.lang.IllegalStateException: Not allowed to create transaction on shared
EntityManager - use Spring transactions or EJB CMT instead

Would you share your code here, Evgenii?
Thanks for your help!





No transaction is currently active || Not allowed to create transaction on shared EntityManager - use Spring transactions or EJB CMT instead

2018-01-26 Thread rizal123
Hi,

First of all: yes, I know Apache Ignite does not support SQL transactions. I
hope this is not a showstopper for my POC.
I'm here to find another way.

1. I have a function for updating the sequence table.

private int sequenceManual(String seqName) {
    int seq = 0;
    Query query;
    try {
        query = this.em.createNativeQuery("UPDATE SEQUENCE SET SEQ_COUNT = SEQ_COUNT + "
            + INCREMENT + " WHERE SEQ_NAME = '" + seqName + "'");
        seq = (Integer) query.executeUpdate();
    } catch (Exception e) {
        LOG.error("An exception was thrown while Update Sequence " + seqName, e);
    }

    try {
        query = this.em.createNativeQuery("SELECT SEQ_COUNT FROM SEQUENCE WHERE SEQ_NAME = '"
            + seqName + "'");
        seq = (int) ((Number) query.getSingleResult()).longValue();
    } catch (Exception e) {
        LOG.error("An exception was thrown while Next Return Sequence " + seqName, e);
    }
    return seq;
}

With this code, I get this error:
javax.persistence.TransactionRequiredException: Exception Description: No
transaction is currently active


Then I modified my code with Spring's @Transactional:

@Transactional(propagation=Propagation.REQUIRED)
private int sequenceManual(String seqName) {
    . . . .

Now I get this error:
java.lang.IllegalStateException: Not allowed to create transaction on shared
EntityManager - use Spring transactions or EJB CMT instead

Are there any suggestions for running my update-sequence SQL?
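A note for later readers on a frequent cause of both errors above: Spring applies @Transactional through a proxy, so a private method — or any self-invocation that bypasses the proxy — never passes through the transactional advice. A stdlib-only sketch of the proxy mechanics (the names here are illustrative, not Spring's API):

```java
import java.lang.reflect.Proxy;

public class ProxyPitfallDemo {
    interface SequenceService { int next(String name); }

    static class SequenceServiceImpl implements SequenceService {
        int calls;
        @Override public int next(String name) { return ++calls; }
    }

    /** Counts how many times our stand-in "transaction" was begun. */
    static int txBegun = 0;

    /** Hand-rolled stand-in for Spring's @Transactional proxy: begins a
     *  "transaction" around every call made THROUGH the proxy. Calls made
     *  directly on the target object never pass through it. */
    static SequenceService transactional(SequenceService target) {
        return (SequenceService) Proxy.newProxyInstance(
            SequenceService.class.getClassLoader(),
            new Class<?>[]{SequenceService.class},
            (proxy, method, args) -> {
                txBegun++;                       // "begin transaction"
                return method.invoke(target, args);
            });
    }

    public static void main(String[] args) {
        SequenceServiceImpl raw = new SequenceServiceImpl();
        SequenceService proxied = transactional(raw);

        proxied.next("SEQ"); // through the proxy: transaction begun
        raw.next("SEQ");     // direct (self-)invocation: proxy bypassed

        System.out.println("transactions begun: " + txBegun);
    }
}
```

The usual fix is to make the transactional method public, move it to a separate Spring-managed bean, and always call it through the injected proxy.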





Re: Data lost when primary nodes down.

2018-01-24 Thread rizal123
Hi Denis,

Would you mind clarifying my statement?
I'm a little bit confused about primary and backup nodes, and where the data
is stored.

Thanks






CacheWriterException: Failed to write entry to database

2018-01-19 Thread rizal123
Hi,

Someone please help me; I have spent a lot of time trying to figure this out.

I get the error below. It happens on insert/update/delete operations.

ignite-4c6985f2.log

Here is my cluster configuration:
IgniteConfiguration cfg = new IgniteConfiguration();

cfg.setIgniteInstanceName("BrandCluster");
cfg.setClientMode(false);

TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47509")); 

TcpDiscoverySpi discovery = new TcpDiscoverySpi();
discovery.setLocalAddress("127.0.0.1");
discovery.setLocalPort(47500);
discovery.setIpFinder(ipFinder);
cfg.setDiscoverySpi(discovery);

And here is my server configuration:
ServerConfigurationFactory.java







Re: Data lost when primary nodes down.

2018-01-19 Thread rizal123
Hi Denis,

Thanks for your reply.

> You didn't configure any backups for your cache, so, there is only one
> node for every key in the cluster. In order to configure backups for
> cache, use CacheConfiguration.setBackups(int) method.

If I have 3 server nodes, do I have to set up which are the primary and which
are the backup nodes?
Is it like the DBMS master/slave paradigm?
I will put an F5 in front of my 3 server nodes, so the F5 will act as a load
balancer and also handle failover. Is that fine?
What if I set the second and third nodes to 'backup', and suddenly the first
node goes down, so the F5 points to the 2nd or 3rd node? What happens to
insert/put/update operations, and where will the data be stored?
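For reference, a minimal sketch of the setting quoted above, assuming the standard Ignite CacheConfiguration API (the cache name is illustrative). With one backup, every partition lives on a primary node and one backup node — no manual primary/backup assignment is needed, and a single node failure loses no data:

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<String, Object> ccfg =
    new CacheConfiguration<>("MMessageLogCache");
ccfg.setCacheMode(CacheMode.PARTITIONED);
// Keep one backup copy of every partition on another node.
ccfg.setBackups(1);
```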


> Another option is to make the cache replicated, but it will make
> insert/put/update operations slower. Covered here:
> https://apacheignite.readme.io/docs/cache-modes

Yes, I have implemented a replicated cache
(https://apacheignite.readme.io/docs/cache-modes).

> Do you mean, that you connected to different Ignite nodes, using JDBC
> driver, and ran queries?
Yes, it is.

> In this case you will get the same result, regardless of where the data is
> stored.
In my case, the data will be stored at this node (IP = 10.5.42.95), right?
>> 2. Dbeaver connect to IP = 10.5.42.95

> If some data is missing on a local node, Ignite sends requests to other
> nodes.
I thought the data would be stored on every node; I was wrong about this.

Would you mind clarifying this?







Data lost when primary nodes down.

2018-01-17 Thread rizal123
Hi,

I have experimented with creating a 2-node server cluster (IPs: 10.5.42.95 and
10.5.42.96), with this Ignite configuration:

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setIgniteInstanceName("BrandCluster");

TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("10.5.42.95:47500..47509",
    "10.5.42.96:47500..47509"));

TcpDiscoverySpi discovery = new TcpDiscoverySpi();
discovery.setLocalPort(47500);
discovery.setIpFinder(ipFinder);
cfg.setDiscoverySpi(discovery);

And this is the cache configuration:

CacheConfiguration ccfg = new CacheConfiguration();
ccfg.setName("MMessageLogCache");
ccfg.setCacheMode(CacheMode.PARTITIONED);
ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
ccfg.setOnheapCacheEnabled(true);
ccfg.setSqlSchema("PUBLIC");
ccfg.setReadThrough(true);
ccfg.setWriteThrough(true);
ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_ASYNC);
ccfg.setCacheStoreFactory(cacheStoreFactory); // Cache store to Oracle
ccfg.setWriteBehindEnabled(true);
ccfg.setWriteBehindFlushSize(3);
ccfg.setWriteBehindFlushFrequency(15000); // 15 seconds

My case:
1. Client uses DBeaver (for testing this case).
2. DBeaver connects to IP = 10.5.42.95 (primary node).
3. Run the statement "Insert Into M_Message ... XYZ ...".
4. Check both nodes with DBeaver by running the statement 'Select * from
M_Message'.
5. Data 'XYZ' is present on both nodes.
6. Shut down the primary node (kill -9).
7. The data 'XYZ' is also lost on the second node.
8. I have set write-behind enabled, with a flush frequency of 15 seconds or a
flush size of 3 inserted rows.
9. So the data is never stored to Oracle, because the data on the second node
is also lost.

Please let me know if there is a misconfiguration.





Re: Migrating from Oracle to Apache Ignite.

2018-01-14 Thread rizal123
Hi Ilya,

Thanks for your reply.

It is solved now; our cluster has distributed the data.
And the node log shows 3 servers:

"[09:45:33] Topology snapshot [ver=9, servers=1, clients=0, CPUs=8,
heap=0.77GB]
[09:47:15] Topology snapshot [ver=10, servers=2, clients=0, CPUs=12,
heap=1.8GB]
[09:47:44] Topology snapshot [ver=11, servers=3, clients=0, CPUs=12,
heap=2.8GB]"

Thanks Ilya.





Re: Migrating from Oracle to Apache Ignite.

2018-01-10 Thread rizal123
Hi Andrew,

Thanks for your reply.

I hope the ticket will make it into 2.4.

Next, a question about replication: I have 3 server nodes on different
machines/IPs. How does Ignite replicate/distribute data between them, given
that my application only accesses one node?

Please let me know if there is something I missed.

Thanks





Migrating from Oracle to Apache Ignite.

2018-01-09 Thread rizal123
Hi,

I have a project/POC about migrating an Oracle database into in-memory Apache
Ignite.

First of all, this is my topology (in case the image is not showing:
https://ibb.co/cbi5cR).

I have done the following:
1. Created a server node cluster and imported the Oracle schema into it.
2. Loaded data from Oracle into the server cluster using loadCache.
3. Changed my application's datasource to the Ignite cluster (just one IP
address). Currently I am using the JDBC thin driver.
4. Started my application, and it is up and running well.

I have the following problems:
1. The JDBC thin driver does not support transactional SQL. I really need this
ticket to be fixed.
2. Multiple connection IP addresses for the JDBC thin driver, or a load
balancer for it.
3. Automatic failover. I have tested 1 machine with a 3-node server cluster.
If the first node (the one that was started first) goes down, the connection
goes down too, even though there are still 2 live nodes.

Please let me know if there is any solution.





Re: JDBC thin client load balancing and failover support

2018-01-09 Thread rizal123
Yes, it is.
I can put in some logic to re-route to another address if the primary node is
down, but I don't think that is a real solution.

By the way, here is another question: what about load balancing?

Currently I have 3 VMs (3 IP addresses), which are used as a 3-node Ignite
cluster. Behind that, I have an Oracle database.
I'm using the JDBC thin driver, and in my datasource I put only one IP address.
How will the other nodes get the data if I only connect to one IP address?
Please let me know if there is something I missed.









Re: JDBC thin client load balancing and failover support

2018-01-08 Thread rizal123
Hi Val,

Regarding the ticket 'IGNITE-7029': when will it go *live*?
Can you at least tell me the estimated time?

I have the same problem: failover, and letting the JDBC thin driver access
multiple nodes (IP addresses). And I need it for my POC.
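For later readers: IGNITE-7029 added multiple-endpoint support to the thin driver's connection string, letting it fail over between the listed addresses. A hedged sketch (hosts/ports are illustrative; 10800 is the thin driver's default port):

```java
// Requires the Ignite JDBC thin driver on the classpath.
Connection conn = DriverManager.getConnection(
    "jdbc:ignite:thin://10.5.42.95:10800,10.5.42.96:10800,10.5.42.97:10800");
```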

regards,
-Rizal







XML Format for Expiry Policies

2018-01-04 Thread rizal123
Hi all,

Would you mind showing the XML format for this Java syntax?

https://apacheignite.readme.io/v2.3/docs/expiry-policies

cfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ONE_MINUTE));
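For what it's worth, an equivalent Spring XML sketch (untested; it relies on the standard setter-to-property mapping and Spring's SpEL to reference the Duration constant):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="expiryPolicyFactory">
        <bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
            <constructor-arg value="#{T(javax.cache.expiry.Duration).ONE_MINUTE}"/>
        </bean>
    </property>
</bean>
```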

Thanks for your help.

Regards.


