Re: link cache to object class?

2016-04-14 Thread Denis Magda
I'm not sure I understand you properly.

However, in any case you can use a code block like the one below to get an
instance of the cache related to a specific class:

String cacheName = object.getClass().getSimpleName().toLowerCase();

IgniteCache cache = ignite.cache(cacheName);

The caches have to be pre-configured with these names in the XML
configuration, or they can be started dynamically with
ignite.getOrCreateCache().
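
For instance, a minimal sketch of the dynamic variant (Person here is a
hypothetical value class; adjust the configuration to your model):

Ignite ignite = Ignition.ignite();

// Derive the cache name from the class, as above.
String cacheName = Person.class.getSimpleName().toLowerCase();

// Returns the existing cache or starts a new one with this configuration.
CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>(cacheName);
IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cacheCfg);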

--
Denis



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/link-cache-to-object-class-tp4119p4211.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Peculiar loopback address on Mac seems to break cluster of linux and mac....

2016-04-14 Thread Denis Magda
Hi Kristian,

Thanks for reporting this. I've opened an issue in the Apache Ignite JIRA:
https://issues.apache.org/jira/browse/IGNITE-3011

As a workaround, as you already noted, you can pass
-Djava.net.preferIPv4Stack=true to the JVM on startup.

Another solution that may work in your case is to explicitly set the host
address to use for network communication in the configuration, using the
IgniteConfiguration.setLocalHost method.
So if the Mac node's IP address is "10.123.24.25" and the Linux node's is
"11.123.24.25", then in the Mac node's configuration you should set
IgniteConfiguration.setLocalHost("10.123.24.25"), and in the Linux node's,
IgniteConfiguration.setLocalHost("11.123.24.25").

--
Denis



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Pecuilar-loopback-address-on-Mac-seems-to-break-cluster-of-linux-and-mac-tp4156p4210.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: multiple value key

2016-04-14 Thread vkulichenko
Yes, what the IDE generates should be fine.
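
For example, a minimal sketch of a composite key class with IDE-style
equals/hashCode (the field names are assumptions based on the earlier
question about a timestamp + id key):

public class TimeKey {
    private final java.sql.Timestamp ts;
    private final Integer id;

    public TimeKey(java.sql.Timestamp ts, Integer id) {
        this.ts = ts;
        this.id = id;
    }

    // Typical IDE-generated implementations based on both fields.
    @Override public boolean equals(Object o) {
        if (this == o)
            return true;
        if (o == null || getClass() != o.getClass())
            return false;

        TimeKey other = (TimeKey)o;

        return java.util.Objects.equals(ts, other.ts) &&
            java.util.Objects.equals(id, other.id);
    }

    @Override public int hashCode() {
        return java.util.Objects.hash(ts, id);
    }
}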


-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/multiple-value-key-tp4138p4209.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: multiple value key

2016-04-14 Thread vkulichenko
You can use the loadCache method to load any set of data from the DB. It
takes an optional list of arguments that you can use to parameterize the
loading process (e.g., use them to provide time bounds and query the DB
based on these bounds).

You can load based on any criteria; there are no limitations, and this
doesn't depend on what your key and value classes look like. Just make sure
that each loaded row is properly mapped to key and value objects. The key
has to be unique for each value, otherwise values will overwrite each other.
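
For example, a rough sketch of passing time bounds (the bound types and the
store-side handling are assumptions; adapt them to your schema):

// Caller side: the extra arguments are passed through to the store.
cache.loadCache(null, 2000, 2012);

// Store side, inside your CacheStore implementation (types are examples):
@Override public void loadCache(IgniteBiInClosure<Object, Object> clo, Object... args) {
    int fromYear = (Integer)args[0];
    int toYear = (Integer)args[1];

    // Query the DB with these bounds, e.g.
    // "SELECT ... WHERE year >= ? AND year <= ?",
    // and hand every loaded row to Ignite:
    // clo.apply(key, value);
}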

Makes sense?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/multiple-value-key-tp4138p4203.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Database transaction updating more than 1 table from CacheStore

2016-04-14 Thread vkulichenko
Hi Binti,

Can you please properly subscribe to the mailing list so that the community
can receive email notifications? Here is the instruction:
http://apache-ignite-users.70518.x6.nabble.com/mailing_list/MailingListOptions.jtp?forum=1


bintisepaha wrote
> We have a use case where multiple database tables/caches need to be updated
> within a transaction. On the cluster side I can achieve this using
> IgniteTransactions, but I want to do the same while it's going to the DB
> store using write-behind. It seems like CacheStoreSession is something I
> should be using, but I could not find examples that wrote to 2 tables
> together in a transaction. Does Ignite have some way to chain this when
> writing to the database, or does this have to be a custom CacheStore
> implementation?
> 
> table A -> cache A
> table B -> cache B
> 
> But when A is updated, B is always updated (ignite transaction). How can
> write-behind achieve this?

This is possible only with write-through, with the help of cache store
session listeners. See the store example [1] for how it is used.

With write-behind, the database is updated asynchronously, so the DB updates
are independent and can't be enlisted in a single DB transaction.
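
As a rough sketch of the write-through setup (the shared DataSource wiring
is an assumption; both caches must use the same one so their writes join a
single JDBC transaction):

// The same listener factory is set on both cache configurations, so updates
// to cache A and cache B made within one Ignite transaction share one DB
// connection and are committed in a single DB transaction.
Factory<CacheStoreSessionListener> lsnrFactory = () -> {
    CacheJdbcStoreSessionListener lsnr = new CacheJdbcStoreSessionListener();
    lsnr.setDataSource(dataSource); // the same javax.sql.DataSource the stores use
    return lsnr;
};

cacheACfg.setCacheStoreSessionListenerFactories(lsnrFactory);
cacheBCfg.setCacheStoreSessionListenerFactories(lsnrFactory);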

[1]
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/store/jdbc/CacheJdbcStoreExample.java



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Database-transaction-updating-more-than-1-table-from-CacheStore-tp4197p4202.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: UPDATE sql query in ignite

2016-04-14 Thread tusharnakra
No, I know that. I want to know whether we can use an UPDATE SQL query with
Ignite or not.

Because I want to do something like this: I have one table in each of 2
different caches, and I want to update a column entry in one table (as well
as in the cache) by performing a cross-cache SqlFieldsQuery join. How do I
do that?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/UPDATE-sql-query-in-ignite-tp4180p4198.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Services and Data Collocation

2016-04-14 Thread Kamal C
Thanks for your quick response Val!

I'll test thoroughly and update here.

--Kamal

On Thu, Apr 14, 2016 at 11:57 PM, vkulichenko  wrote:

> Kamal,
>
> I'm not sure I understood what you're trying to achieve. When you use cache
> API, all affinity mappings are done automatically, so you don't need to
> worry about this.
> In your particular case, the client is not aware of affinity and
> essentially
> sends a request to a random node, so the cache update can require one more
> network hop. But I don't see any way to change this without starting a
> client Ignite node.
>
> Can you provide more details? What is the sequence of events happening when
> you do a request and what you would like to change?
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-Services-and-Data-Collocation-tp4178p4192.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite Services and Data Collocation

2016-04-14 Thread vkulichenko
Kamal,

I'm not sure I understood what you're trying to achieve. When you use cache
API, all affinity mappings are done automatically, so you don't need to
worry about this.
In your particular case, the client is not aware of affinity and essentially
sends a request to a random node, so the cache update can require one more
network hop. But I don't see any way to change this without starting a
client Ignite node.
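
For reference, once a client (or the RMI service itself) runs an Ignite
node, a minimal sketch of routing an update to the data's node could look
like this (the cache name is an example; affKey and newValue stand for
values taken from the incoming RMI call):

// Executes the closure on the node that owns affKey, so the update stays local.
ignite.compute().affinityRun("myCache", affKey, () -> {
    IgniteCache<Object, Object> cache = Ignition.localIgnite().cache("myCache");

    cache.put(affKey, newValue); // Local put on the primary node, no extra hop.
});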

Can you provide more details? What is the sequence of events happening when
you do a request and what you would like to change?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Services-and-Data-Collocation-tp4178p4192.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to configure user data type for sql queries?

2016-04-14 Thread vkulichenko
Kamal,

Generally, the binary format is recommended for all use cases.
OptimizedMarshaller implements the legacy serialization format; it's also
compact and efficient, but it requires classes to be available on all nodes
and does not support dynamic schema changes. If there are any limitations in
the binary format, most likely they are just bugs that should be fixed (like
the one described above).

The JDK marshaller is just a wrapper around native Java serialization; it's
used very rarely.
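
For example, a minimal sketch of switching to the legacy format (binary is
the default in 1.5, so this is only needed for compatibility):

IgniteConfiguration cfg = new IgniteConfiguration();

// Use the legacy OptimizedMarshaller instead of the default binary format.
cfg.setMarshaller(new OptimizedMarshaller());

Ignite ignite = Ignition.start(cfg);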

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-configure-user-data-type-for-sql-queries-tp3867p4190.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Data Grid with write through mode to HDFS with layer of IGFS

2016-04-14 Thread vijayendra bhati
Hi Vladimir,

Not really, we do not want to store historical data in cache, or maybe we
cache it for a few hours and then evict it. But if recent data is missing
from the cache then yes, we want to cache it. So it would require some
custom caching logic to decide which data to cache, and storing historical
data in persistent storage seems reasonable. The only thing is that I have
to use the Ignite cache as a side cache rather than a write-through cache to
HDFS, because I don't think it would be a nice idea to store individual
key-value pairs in HDFS. Maybe we can think about HBase, but not writing
directly to HDFS.

Regards,
Vij

Re: re: querying, sql performance of apache ignite

2016-04-14 Thread vkulichenko
1. Yes, this is possible.
2. In a key-value storage, each entry has to have a unique key. Entries with
equal keys will overwrite each other.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/querying-sql-performance-of-apache-ignite-tp4135p4188.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Off-heap memory usage questions

2016-04-14 Thread vkulichenko
Hi Shaomin,

1. Yes, this is a per-node setting. So if there are two nodes on one box,
it's possible that 20G will be allocated on this box. You should make sure
that this limit correlates with the amount of physical memory you have.
2. In 1.5, IgniteCache.metrics() returns values for the local node only, but
you can use IgniteCache.metrics(ClusterGroup) to get aggregated values for a
set of nodes. This is already fixed in master; in 1.6, aggregated metrics
will be returned by default.
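
For example, a sketch of the 1.5 workaround (this assumes cache statistics
are enabled via CacheConfiguration.setStatisticsEnabled(true), and uses
getOffHeapAllocatedSize as a representative metric):

// Aggregated metrics across all server nodes.
CacheMetrics metrics = cache.metrics(ignite.cluster().forServers());

System.out.println("Off-heap allocated: " + metrics.getOffHeapAllocatedSize());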

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Off-heap-memory-usage-questions-tp4163p4185.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Passing argument in sql query under where clause giving error

2016-04-14 Thread vkulichenko
No, SQL is read-only now. Support for UPDATE and INSERT statements is on the
roadmap, though.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Passing-argument-in-sql-query-under-where-clause-giving-error-tp4164p4184.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Ignite Services and Data Collocation

2016-04-14 Thread Kamal C
Hi all,

I have a cluster of 2 Ignite server nodes + 1 client node (non-Ignite).
I've collocated the data residing in the Ignite cache based on an affinity
key.

e.g.
Server 1 - contains all the data related to the affinity key (A, C, E)
Server 2 - contains all the data related to the affinity key (B, D, F)

I've deployed a service using the node singleton approach, and the same is
also exposed over RMI for backward compatibility.

Clients can add, update and remove data using the service API. I would like
the call made by the client to end up on the node/server where its data is
located, to minimize data serialization within the network.

With an Ignite client, I am able to do it by passing a predicate while
getting the service. But my client works only with RMI. Once I receive a
call, what approaches should I take to redirect the computation to the node
where the data is located?

--Kamal


Re: How to configure user data type for sql queries?

2016-04-14 Thread Kamal C
Val,

Can you explain, with use cases, when to use the Binary, Optimized,
GridOptimized and JDK marshallers?

--Kamal

On Tue, Apr 5, 2016 at 3:41 AM, edwardkblk 
wrote:

> Yes, it works with OptimizedMarshaller.  Thank you.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/How-to-configure-user-data-type-for-sql-queries-tp3867p3912.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Passing argument in sql query under where clause giving error

2016-04-14 Thread tusharnakra
Thanks, it works now!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Passing-argument-in-sql-query-under-where-clause-giving-error-tp4164p4176.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Data Grid with write through mode to HDFS with layer of IGFS

2016-04-14 Thread tomk
Could you tell me more about it? How do I do it?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Data-Grid-with-write-through-mode-to-HDFS-with-layer-of-IGFS-tp4122p4171.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: spark and ignite - possibilities

2016-04-14 Thread tomk
Could you tell me how to do it?
ThriftServer may read data from a filesystem (for example, Parquet files).



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/spark-and-ignite-possibilities-tp4055p4170.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: re: querying, sql performance of apache ignite

2016-04-14 Thread tomk
Please reply to my questions:
1. When my key is a timestamp, is it possible to load into cache memory all
rows from year 2000 to year 2012?
2. When my key is a timestamp plus an id (Integer), is it possible to load
into cache memory all rows from year 2000 to year 2012? Note that I don't
set a value for the id part of the key (only one part of the key).



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/querying-sql-performance-of-apache-ignite-tp4135p4168.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: spark and ignite - possibilities

2016-04-14 Thread Vladimir Ozerov
IGFS is a Hadoop-compatible file-system. If ThriftServer doesn't have
strong dependencies on some HDFS-specific features, then yes - it could be
used instead of HDFS.

Could you please provide more detailed explanation of your use case with
ThriftServer?

On Wed, Apr 13, 2016 at 3:17 PM, tomk  wrote:

> IGFS can work with ThriftServer without HDFS?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/spark-and-ignite-possibilities-tp4055p4126.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite Data Grid with write through mode to HDFS with layer of IGFS

2016-04-14 Thread Vladimir Ozerov
Hi Vij,

Storing hot recent data in cache, and historical data in persistent store
sounds like a perfectly reasonable idea.

If you decide to store historical data in HDFS, then you should be able to
construct HDFS path from the key because store interface accepts keys to
store/load data. If this is possible, then I do not see any obvious
problems with this approach.
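
For illustration, a rough sketch of such a store (the key-to-path mapping,
byte[] values, and lazy FileSystem setup are all assumptions; the usual
java.io, org.apache.hadoop.conf/fs, javax.cache and javax.cache.integration
imports are assumed):

public class HdfsCacheStore extends CacheStoreAdapter<String, byte[]> {
    private transient FileSystem fs;

    // Lazily connect to HDFS using the Hadoop configuration on the classpath.
    private FileSystem fs() throws IOException {
        if (fs == null)
            fs = FileSystem.get(new Configuration());

        return fs;
    }

    // Construct an HDFS path from the cache key.
    private Path path(Object key) {
        return new Path("/ignite-store/" + key);
    }

    @Override public byte[] load(String key) {
        try {
            if (!fs().exists(path(key)))
                return null; // Cache miss.

            try (FSDataInputStream in = fs().open(path(key));
                 ByteArrayOutputStream out = new ByteArrayOutputStream()) {
                byte[] buf = new byte[8192];

                for (int n; (n = in.read(buf)) > 0; )
                    out.write(buf, 0, n);

                return out.toByteArray();
            }
        }
        catch (IOException e) {
            throw new CacheLoaderException(e);
        }
    }

    @Override public void write(Cache.Entry<? extends String, ? extends byte[]> e) {
        try (FSDataOutputStream out = fs().create(path(e.getKey()), true /* overwrite */)) {
            out.write(e.getValue());
        }
        catch (IOException ex) {
            throw new CacheWriterException(ex);
        }
    }

    @Override public void delete(Object key) {
        try {
            fs().delete(path(key), false);
        }
        catch (IOException e) {
            throw new CacheWriterException(e);
        }
    }
}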

On the other hand, do you want this historical data to be cached on access?

Vladimir.

On Thu, Apr 14, 2016 at 3:17 PM, vijayendra bhati 
wrote:

> Thanks Vladimir !!
>
> The drawback with using HDFS as persistent store behind Ignite cache is
> how we will take care of appending single key value pair to HDFS
> file.Ideally we should use some NoSQL store or RDBMS as persistent back up
> behind Ignite cache and then run some scheduled batch to transfer the data
> to HDFS as it happens in normal Lambda Architecture.
>
> Now question comes why we want to use Ignite Cache ? Answer is it gives
> SQL interface that means we can query on any attribute on the fly.Other we
> could have used any other NoSQL. But NoSQL data model is entirely based
> upon query pattern so to bring the flexibility at the time of query we
> think Ignite cache would be better.
>
> For our use case we want to put the latest 2 week data in Ignite cache to
> meet the latency requirements and then for any back date get the data from
> backend persistent storage, for which we are thinking about HDFS.Thats why
> we were thinking if we can make Ignite cache write through cache with HDFS
> as backed up persistent storage it would serve the purpose.
>
> Please let me know whats your view on this.
>
> Many thanks,
> Vij
>
>
> On Wednesday, April 13, 2016 8:58 PM, Vladimir Ozerov <
> voze...@gridgain.com> wrote:
>
>
> Vij,
>
> No, it doesn't. IGFS serves the very different purpose - it is
> Hadoop-compatible file system. It means that, for example, you can load
> data to IGFS and then query it using Hive. But native Ignite SQL is not
> applicable here.
>
> Vladimir.
>
> On Wed, Apr 13, 2016 at 3:55 PM, vijayendra bhati 
> wrote:
>
> Thanks Vladimir,
>
> I have not gone through complete documentation but if you could let me
> know does IGFS provide SQL support like Ignite cache does ?
>
> Regards,
> Vij
>
>
> On Wednesday, April 13, 2016 5:54 PM, Vladimir Ozerov <
> voze...@gridgain.com> wrote:
>
>
> Hi Vijayendra,
>
> IGFS is designed to be a distributed file system which could cache data
> from Hadoop file systems. It cannot be used as cache store by design.
> Ignite doesn't have store implementation for HDFS, so you should implement
> your own if needed. Particularly, you should implement
> org.apache.ignite.cache.store.CacheStore interface.
>
> Vladimir.
>
> On Wed, Apr 13, 2016 at 2:38 PM, vijayendra bhati 
> wrote:
>
> Hi,
>
> Can some body please provide me any pointers regarding how I can use
> Ignite Data Grid/ In Memory caching with write through/write behind mode
> and writing to HDFS ?
>
> I know Ignite provides IGFS but its different from what I am looking for.
>
> The other way could be I can use IGFS as my In Memory store but is it the
> right approach ?
>
> Regards,
> Vijayendra Bhati
>
>
>
>
>
>
>
>


Passing argument in sql query under where clause giving error

2016-04-14 Thread tusharnakra
Hi,

I'm trying to execute a cross-cache SQL fields query with a join, and it
executes fine as long as I don't call setArgs and pass an argument. When I
need to pass an argument with the WHERE clause, it gives this error:

Failed to execute local query: GridQueryRequest [reqId=1, pageSize=1024,
to execute local query: GridQueryRequest [reqId=1, pageSize=1024,
space=PCache, qrys=[GridCacheSqlQuery [qry=SELECT
"PCache".PERSON._KEY __C0,
"PCache".PERSON._VAL __C1
FROM "PCache".PERSON
WHERE (SALARY > ?1) AND (SALARY <= ?2), params=[0, 1000], paramIdxs=[0, 1],
paramsSize=2, cols={__C0=GridSqlType [type=19, scale=0,
precision=2147483647, displaySize=2147483647, sql=OTHER], __C1=GridSqlType
[type=19, scale=0, precision=2147483647, displaySize=2147483647,
sql=OTHER]}, alias=null]], topVer=AffinityTopologyVersion [topVer=1,
minorTopVer=2], extraSpaces=null, parts=null]
class org.apache.ignite.IgniteCheckedException: Failed to execute SQL query.
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery(IgniteH2Indexing.java:832)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(IgniteH2Indexing.java:855)
at
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest(GridMapQueryExecutor.java:454)
at
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onMessage(GridMapQueryExecutor.java:184)
at
org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.send(GridReduceQueryExecutor.java:1065)
at
org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:572)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$2.iterator(IgniteH2Indexing.java:956)
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:61)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$3.iterator(IgniteH2Indexing.java:990)
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:61)
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.getAll(QueryCursorImpl.java:73)
at org.apache.ignite.organization.Demo.sqlQuery(Demo.java:141)
at org.apache.ignite.organization.Demo.main(Demo.java:90)
Caused by: org.h2.jdbc.JdbcSQLException: Hexadecimal string with odd number
of characters: "0"; SQL statement:

Here is my Demo.java:

package org.apache.ignite.organization;

import java.util.List;
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.security.KeyStore.Entry;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import javax.cache.Cache;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteException;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.affinity.AffinityKey;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.SqlQuery;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStore;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.examples.model.Person;
import org.apache.ignite.transactions.Transaction;


import com.mysql.jdbc.jdbc2.optional.MysqlDataSource;


public class Demo {


   static final String JDBC_DRIVER = "com.mysql.jdbc.Driver";  
   static final String DB_URL = "jdbc:mysql://localhost/ORG";
   static final String USER = "root";
   static final String PASS = "mysql";
   private static final String ORG_CACHE = "OrgCache";
   
   private static class MySQLDemoStoreFactory extends CacheJdbcPojoStoreFactory {
       /** {@inheritDoc} */
       @Override public CacheJdbcPojoStore create() {
           MysqlDataSource dataSource = new MysqlDataSource();
           dataSource.setURL("jdbc:mysql://localhost/ORG");
           dataSource.setUser("root");
           dataSource.setPassword("mysql");
           setDataSource(dataSource);

           return super.create();
       }
   }

/**
 * Executes demo.
 */
   public static void main(String[] args) throws IgniteException {
       System.out.println(">>> Start demo...");

       // Start Ignite node.
       try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
           CacheConfiguration cfg = CacheConfig.cache("PCache", new MySQLDemoStoreFactory

Re: Ignite Data Grid with write through mode to HDFS with layer of IGFS

2016-04-14 Thread vijayendra bhati
Thanks Vladimir!!

The drawback with using HDFS as a persistent store behind an Ignite cache is
how we will take care of appending a single key-value pair to an HDFS file.
Ideally we should use some NoSQL store or RDBMS as the persistent backup
behind the Ignite cache and then run some scheduled batch to transfer the
data to HDFS, as happens in a normal Lambda Architecture.

Now the question comes: why do we want to use the Ignite cache? The answer
is that it gives an SQL interface, which means we can query on any attribute
on the fly. Otherwise we could have used any other NoSQL store. But a NoSQL
data model is entirely based upon the query pattern, so to bring flexibility
at query time we think the Ignite cache would be better.

For our use case we want to put the latest 2 weeks of data in the Ignite
cache to meet the latency requirements, and then for any back date get the
data from the backend persistent storage, for which we are thinking about
HDFS. That's why we were thinking that if we can make the Ignite cache a
write-through cache with HDFS as the backing persistent storage, it would
serve the purpose.

Please let me know what's your view on this.

Many thanks,
Vij


Re: do you support json based cache?

2016-04-14 Thread Alexey Kuznetsov
Hi, Ravi!

Not yet, but we have an issue for that:
https://issues.apache.org/jira/browse/IGNITE-962
You can track it in JIRA.

On Thu, Apr 14, 2016 at 6:20 PM, Ravi Puri 
wrote:

> I want to know: do you support JSON-based data being cached and
> implemented?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/do-you-support-json-based-cache-tp4160.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Alexey Kuznetsov
GridGain Systems
www.gridgain.com


do you support json based cache?

2016-04-14 Thread Ravi Puri
I want to know: do you support JSON-based data being cached and implemented?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/do-you-support-json-based-cache-tp4160.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Peculiar loopback address on Mac seems to break cluster of linux and mac....

2016-04-14 Thread Kristian Rosenvold
I was seeing quite substantial instabilities in my newly configured 1.5.0
cluster, where messages like this would pop up, resulting in the
termination of the node:

java.net.UnknownHostException: no such interface lo
at java.net.Inet6Address.initstr(Inet6Address.java:487) ~[na:1.8.0_60]
at java.net.Inet6Address.(Inet6Address.java:408) ~[na:1.8.0_60]
at java.net.InetAddress.getAllByName(InetAddress.java:1181) ~[na:1.8.0_60]
at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[na:1.8.0_60]
at java.net.InetAddress.getByName(InetAddress.java:1076) ~[na:1.8.0_60]
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.openSocket(TcpDiscoverySpi.java:1259)
~[ignite-core-1.5.0.final.jar:1.5.0.final]
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.openSocket(TcpDiscoverySpi.java:1241)
~[ignite-core-1.5.0.final.jar:1.5.0.final]
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.sendMessageAcrossRing(ServerImpl.java:2456)
[ignite-core-1.5.0.final.jar:1.5.0.final]
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processHeartbeatMessage(ServerImpl.java:4432)
[ignite-core-1.5.0.final.jar:1.5.0.final]
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2267)
[ignite-core-1.5.0.final.jar:1.5.0.final]
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:5784)
[ignite-core-1.5.0.final.jar:1.5.0.final]
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2161)
[ignite-core-1.5.0.final.jar:1.5.0.final]
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
[ignite-core-1.5.0.final.jar:1.5.0.final]
08:23:23.189 [tcp-disco-msg-worker-#2%RA-ignite%] WARN
 o.a.i.s.d.tcp.TcpDiscoverySpi  - Local node has detected failed nodes and
started cluster-wide procedure. To speed up failure detection please see
'Failure Detection' section under javadoc for 'TcpDiscoverySpi'

Now in our MySQL discovery database I saw a host called
'0:0:0:0:0:0:0:1%lo' (as well as '0:0:0:0:0:0:0:1'). On a hunch I deleted
the "lo" row from the database and things seem to have stabilized.

It would appear to me that when I start a node on my local Mac, it inserts
a row into the discovery database that does not parse properly on the Linux
node (or vice versa, I have not been able to determine which entirely).
According to the docs on TcpDiscoverySpi, a random entry from the discovery
addresses is used, and it would appear things start breaking down whenever
this address is chosen.


It appears things have stabilized significantly once I switched the entire
cluster to -Djava.net.preferIPv4Stack=true

Is there a known fix for this issue? What would be the appropriate root
problem to fix in a patch here?

Kristian


re: problem of using object as key in cache configurations

2016-04-14 Thread Zhengqingzheng
You are right, Val, I did not read all the examples. Hopefully the problem is solved.

Cheers,
Kevin

-----Original Message-----
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: April 14, 2016 14:34
To: user@ignite.apache.org
Subject: Re: problem of using object as key in cache configurations

We already have such examples. For example, CacheClientBinaryPutGetExample.
But I agree with Andrey that this message is very confusing. I will fix it 
shortly.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/problem-of-using-object-as-key-in-cache-configurations-tp4116p4153.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: problem of using object as key in cache configurations

2016-04-14 Thread vkulichenko
We already have such examples. For example, CacheClientBinaryPutGetExample.
But I agree with Andrey that this message is very confusing. I will fix it
shortly.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/problem-of-using-object-as-key-in-cache-configurations-tp4116p4153.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Schema Import Utility - Mismatch when loading data between Date and Timestamp

2016-04-14 Thread jan.swaelens
Thank you, planning a test session next week!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Schema-Import-Utility-Mismatch-when-loading-data-between-Date-and-Timestamp-tp3790p4152.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.