Is there a QUEUE based messaging?

2020-04-21 Thread xingjl6280
hi team,

I'm looking for queue-based messaging. Compared to a topic, I don't want
all subscribers to receive the message, only one of them.

Please kindly advise.

Thank you



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
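[For completeness: Ignite ships a distributed queue, IgniteQueue, created via ignite.queue(...), which gives exactly this point-to-point semantic — each element is consumed by only one taker. IgniteQueue implements the JDK BlockingQueue interface, so the consumption pattern can be sketched with the plain JDK class; the Ignite-specific setup is omitted here and the names are illustrative:]

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueSemanticsDemo {
    public static void main(String[] args) throws InterruptedException {
        // With Ignite this would be an IgniteQueue, which also implements
        // BlockingQueue, so the consumption pattern below is the same.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);

        queue.put("msg-1");
        queue.put("msg-2");

        // Unlike a topic, each take() removes the element: only ONE consumer sees it.
        String first = queue.take();   // what consumer A would receive
        String second = queue.take();  // what consumer B would receive

        System.out.println(first + " " + second); // msg-1 msg-2
    }
}
```

[With Ignite, the `new ArrayBlockingQueue<>(16)` line would be replaced by something like `ignite.queue("myQueue", 0, new CollectionConfiguration())`, with the rest of the pattern unchanged.]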


Re: Ignite -Option to close the open files

2020-04-21 Thread Evgenii Zhuravlev
Hi,

I don't think it's possible to just close the files. How many caches do
you have?

Evgenii

Tue, 21 Apr 2020 at 12:14, Sriveena Mattaparthi <
sriveena.mattapar...@ekaplus.com>:

> Hi,
>
>
>
> Our Ignite servers in preproduction and production are going down with a
> "too many open files" error.
>
>
>
> *Caused by: java.nio.file.FileSystemException:
> /opt/apache-ignite-fabric-2.5.0-bin/work/db/node00-94e4310a-f450-4bbc-acfd-f84ab29a158c/cache-SQL_PUBLIC_x/part-185.bin:
> Too many open files*
>
>
>
> Based on the suggestion given in
> https://issues.apache.org/jira/browse/IGNITE-11783, we have increased the
> limit.
>
>
>
> But recently, even after increasing the limit to 30, the server has
> crashed again.
>
>
>
> Is there a way to programmatically close the open files, so that the
> threshold limit is not exceeded?
>
>
>
> Please advise; our production servers are facing the same issue as well,
> and it is very critical for us.
>
>
>
> Thanks,
>
> Sriveena
> “Confidentiality Notice: The contents of this email message and any
> attachments are intended solely for the addressee(s) and may contain
> confidential and/or privileged information and may be legally protected
> from disclosure. If you are not the intended recipient of this message or
> their agent, or if this message has been addressed to you in error, please
> immediately alert the sender by reply email and then delete this message
> and any attachments. If you are not the intended recipient, you are hereby
> notified that any use, dissemination, copying, or storage of this message
> or its attachments is strictly prohibited.”
>


Re: joins

2020-04-21 Thread akorensh
No. In this case, you would need to create an index on orgId yourself.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Best way to track if key was read more than X times?

2020-04-21 Thread John Smith
Ok, but the event just tells me that the key was read, correct? I need to
keep a count of how many times each key was read globally.

The other way I was thinking of doing it is by having a cache of
Cache<K, MyTuple> and then using cache.invoke(key, new
CounterEntryProcessor()).

And then in the EntryProcessor...

MyTuple is just a value class that holds 2 values, one being the counter.

class CounterEntryProcessor implements EntryProcessor<K, MyTuple, Integer> {
    @Override public Integer process(MutableEntry<K, MyTuple> e, Object... args) {
        MyTuple newVal = e.getValue();
        newVal.counter++;

        // Update cache.
        e.setValue(newVal);

        return newVal.counter;
    }
}

And then, if cache.invoke(key, new CounterEntryProcessor()) >= 3, remove
the entry from the cache.
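[The atomic bump-and-evict logic described above can be sketched locally with ConcurrentHashMap.compute(), which has the same per-key atomic read-modify-write semantics that cache.invoke() with an EntryProcessor gives cluster-wide. All names here (Tuple, read, MAX_READS) are illustrative, not Ignite API:]

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ReadCounterDemo {
    static final int MAX_READS = 3;

    // Value class holding the payload plus its read counter (the "MyTuple" above).
    static class Tuple {
        final String value;
        final int reads;
        Tuple(String value, int reads) { this.value = value; this.reads = reads; }
    }

    /** Returns the value, or null once the key has been evicted. */
    static String read(ConcurrentMap<Integer, Tuple> cache, int key) {
        String[] out = new String[1];
        // compute() runs atomically per key, like cache.invoke() with an EntryProcessor.
        cache.compute(key, (k, cur) -> {
            if (cur == null) return null;          // absent or already evicted
            out[0] = cur.value;
            int n = cur.reads + 1;
            // Returning null removes the entry once the threshold is reached.
            return n >= MAX_READS ? null : new Tuple(cur.value, n);
        });
        return out[0];
    }

    public static void main(String[] args) {
        ConcurrentMap<Integer, Tuple> cache = new ConcurrentHashMap<>();
        cache.put(1, new Tuple("hello", 0));
        System.out.println(read(cache, 1)); // hello (1st read)
        System.out.println(read(cache, 1)); // hello (2nd read)
        System.out.println(read(cache, 1)); // hello (3rd read; entry evicted)
        System.out.println(read(cache, 1)); // null  (already evicted)
    }
}
```

[In a real cluster the same logic would live inside the EntryProcessor so the count stays correct across nodes.]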


On Tue, 21 Apr 2020 at 16:28, akorensh  wrote:

> Hi,
>
> You can use events: https://apacheignite.readme.io/docs/events
> In particular: EVT_CACHE_OBJECT_READ
> See:
>
> https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/events/EventsExample.java
>
>
>
> Here is an example (a modified localListen() from the example above):
>
> private static void localListen() throws IgniteException, InterruptedException {
>     System.out.println(">>> Local event listener example.");
>
>     Ignite ignite = Ignition.ignite();
>
>     IgnitePredicate<CacheEvent> lsnr = evt -> {
>         System.out.println("Received task event [evt=" + evt.name() + ", keyName=" + evt.key() + ']');
>
>         return true; // Return true to continue listening.
>     };
>
>     ignite.events().localListen(lsnr, EVT_CACHE_OBJECT_READ);
>
>     ignite.getOrCreateCache("test").put("a", "a");
>     ignite.getOrCreateCache("test").put("b", "b");
>
>     for (int i = 0; i < 100; i++) {
>         if (i % 2 == 0) ignite.getOrCreateCache("test").get("a");
>         else ignite.getOrCreateCache("test").get("b");
>         Thread.sleep(1000);
>         System.out.println("sleeping..");
>     }
>
>     // Unsubscribe the local event listener.
>     ignite.events().stopLocalListen(lsnr);
> }
>
>
> You can also use continuous queries to listen for updates/inserts:
> https://apacheignite.readme.io/docs/continuous-queries
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: joins

2020-04-21 Thread narges saleh
Thanks Alex for the explanation.
I guess I will need to set the distributed join option to false if
affinity is in action.
But don't the primary keys result in indexes? If so, then the requirement
for having an index should be satisfied, right? I am joining the two tables
on the field orgid.

On Tue, Apr 21, 2020 at 2:43 PM akorensh  wrote:

> Hi,
>   You are doing a join between a PARTITIONED and a REPLICATED cache with
> distributedJoin=true.
>   In this case an index is required. If you look at the full log, it will
> tell you where to place the index.
>   See:
>
> https://apacheignite-sql.readme.io/docs/distributed-joins#non-collocated-joins
>
>   Either add the index or change the PERSON cache to be REPLICATED as well.
>
>   The PERSON cache is created as PARTITIONED because that is the default
> when you don't explicitly set the cacheMode.
>
> Thanks, Alex
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Best way to track if key was read more than X times?

2020-04-21 Thread akorensh
Hi,

You can use events: https://apacheignite.readme.io/docs/events
In particular: EVT_CACHE_OBJECT_READ
See:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/events/EventsExample.java



Here is an example (a modified localListen() from the example above):

private static void localListen() throws IgniteException, InterruptedException {
    System.out.println(">>> Local event listener example.");

    Ignite ignite = Ignition.ignite();

    IgnitePredicate<CacheEvent> lsnr = evt -> {
        System.out.println("Received task event [evt=" + evt.name() + ", keyName=" + evt.key() + ']');

        return true; // Return true to continue listening.
    };

    ignite.events().localListen(lsnr, EVT_CACHE_OBJECT_READ);

    ignite.getOrCreateCache("test").put("a", "a");
    ignite.getOrCreateCache("test").put("b", "b");

    for (int i = 0; i < 100; i++) {
        if (i % 2 == 0) ignite.getOrCreateCache("test").get("a");
        else ignite.getOrCreateCache("test").get("b");
        Thread.sleep(1000);
        System.out.println("sleeping..");
    }

    // Unsubscribe the local event listener.
    ignite.events().stopLocalListen(lsnr);
}


You can also use continuous queries to listen for updates/inserts:
https://apacheignite.readme.io/docs/continuous-queries







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: joins

2020-04-21 Thread akorensh
Hi,
  You are doing a join between a PARTITIONED and a REPLICATED cache with
distributedJoin=true.
  In this case an index is required. If you look at the full log, it will
tell you where to place the index.
  See:
https://apacheignite-sql.readme.io/docs/distributed-joins#non-collocated-joins

  Either add the index or change the PERSON cache to be REPLICATED as well.

  The PERSON cache is created as PARTITIONED because that is the default
when you don't explicitly set the cacheMode.

Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
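[For reference, the missing index from the thread above can be created with Ignite DDL roughly as follows; the table and column names are illustrative, not taken from the poster's schema:]

```sql
-- Hypothetical names; adjust to the actual table/column in the schema.
CREATE INDEX IF NOT EXISTS person_orgid_idx ON Person (orgId);
```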



Ignite -Option to close the open files

2020-04-21 Thread Sriveena Mattaparthi
Hi,

Our Ignite servers in preproduction and production are going down with a
"too many open files" error.

Caused by: java.nio.file.FileSystemException: 
/opt/apache-ignite-fabric-2.5.0-bin/work/db/node00-94e4310a-f450-4bbc-acfd-f84ab29a158c/cache-SQL_PUBLIC_x/part-185.bin:
 Too many open files

Based on the suggestion given in 
https://issues.apache.org/jira/browse/IGNITE-11783, we have increased the limit.

But recently, even after increasing the limit to 30, the server has crashed again.

Is there a way to programmatically close the open files, so that the
threshold limit is not exceeded?

Please advise; our production servers are facing the same issue as well, and
it is very critical for us.

Thanks,
Sriveena
"Confidentiality Notice: The contents of this email message and any attachments 
are intended solely for the addressee(s) and may contain confidential and/or 
privileged information and may be legally protected from disclosure. If you are 
not the intended recipient of this message or their agent, or if this message 
has been addressed to you in error, please immediately alert the sender by 
reply email and then delete this message and any attachments. If you are not 
the intended recipient, you are hereby notified that any use, dissemination, 
copying, or storage of this message or its attachments is strictly prohibited."


Best way to track if key was read more than X times?

2020-04-21 Thread John Smith
Hi, I want to store a key/value pair, and if that key has been accessed
more than 3 times, for example, remove it. What is the best way to do this?


Re: JDBC Connection

2020-04-21 Thread Denis Magda
I would advise using the thin JDBC driver, which is more lightweight and
supports all the latest capabilities of the SQL engine:
https://apacheignite-sql.readme.io/docs/jdbc-driver

With that driver, you switch streaming on/off using the SET command:
https://apacheignite-sql.readme.io/docs/jdbc-driver#section-streaming
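[Put together, a streaming session over the thin driver looks roughly like this; the host and table names are illustrative:]

```sql
-- Connect with the thin driver, e.g. jdbc:ignite:thin://127.0.0.1/
SET STREAMING ON;
INSERT INTO Person (orgName, firstName, lastName) VALUES ('Acme', 'Jane', 'Doe');
-- ... many more inserts ...
SET STREAMING OFF;  -- flushes the streamer so the data becomes visible
```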

-
Denis


On Tue, Apr 21, 2020 at 4:27 AM narges saleh  wrote:

> Denis,
> I am setting streaming on in my JDBC connection URL, and I try to insert
> data.
> Here is the sequence:
> Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://
> *streaming=true*@file:///opt/ignite/config/config.xml");
> PreparedStatement stmt = conn.prepareStatement(
>   "INSERT INTO PERSON.PERSON(orgName,firstName, lastName, resume, salary)
> VALUES(?,?, ?, ?, ?)");
> Isn't this sufficient? Or am I supposed to set streaming on explicitly in
> the code, rather than in the connection?
>
> thanks.
>
> On Mon, Apr 20, 2020 at 11:25 PM Denis Magda  wrote:
>
>> You need to issue SET STREAMING ON/OFF commands after opening a
>> connection with the driver:
>> https://apacheignite-sql.readme.io/docs/set
>>
>> -
>> Denis
>>
>>
>> On Sun, Apr 19, 2020 at 7:55 AM narges saleh 
>> wrote:
>>
>>> Actually, I still have an issue with this.
>>> It seems I am not able to open a JDBC connection with streaming
>>> enabled if the cache is not specified.
>>> I get the following error:
>>> SQLException: Cache cannot be null when streaming is enabled.
>>>
>>> Is there a way out of this, i.e., specifying a JDBC connection with
>>> streaming enabled and leaving the cache out? The cache will be specified in the SQL.
>>>
>>> thanks.
>>>
>>> On Fri, Apr 10, 2020 at 6:47 AM narges saleh 
>>> wrote:
>>>
 Hi All,
 I think I have figured it out. I just need to specify the schema.
 thanks.

 On Thu, Apr 9, 2020 at 9:30 PM narges saleh 
 wrote:

> Hi All,
> 1) How would one use ignite's JDBC connection to query multiple
> tables, both in case of joins and separately? I.e., Can one get a JDBC
> connection, and use it to query multiple caches? Another use case, is 
> where
> a JDBC connection is used to query caches whose name is known at runtime.
> The client can be thin or thick.
>
> 2) The same for inserts. Can one use the same JDBC connection with
> different JDBC statements to load data into different caches?
>
> I'd appreciate link and/or examples.
>
> thanks.
>



Re: Unable to run several ContinuousQuery due to: Failed to unmarshal discovery data for component: CONTINUOUS_PROC

2020-04-21 Thread Evgenii Zhuravlev
Why does the client need to be serializable? Have you tried the suggestion
from this answer?
https://stackoverflow.com/questions/61293343/failed-to-unmarshal-discovery-data-for-component-continuous-proc-with-more-than/61318360#61318360

Evgenii

Tue, 21 Apr 2020 at 00:36, AlexBor :

> Hi Denis,
>
> Both clients are connecting to the same server.
> Here are code samples:
>
> Server:
>
> public class IgniteServerCacheBootstrap {
>
>     final static Logger logger = LoggerFactory.getLogger(IgniteCacheClient.class);
>
>     public static void main(String[] args) throws IgniteCheckedException, InterruptedException {
>
>         IgniteConfiguration serverConfig = new IgniteConfiguration()
>             .setGridLogger(new Log4J2Logger("log4j2.xml"));
>
>         Ignite server = Ignition.start(serverConfig);
>         Thread.currentThread().join();
>     }
> }
>
>
> Client (I run two such clients in parallel). The code is mostly taken from
> Ignite samples:
>
> public class IgniteCacheClient implements Serializable {
>
>     Logger logger = LoggerFactory.getLogger(IgniteCacheClient.class);
>
>     private IgniteCache<Integer, String> igniteCache;
>
>     public IgniteCacheClient() throws IgniteCheckedException {
>         IgniteConfiguration clientConfig = new IgniteConfiguration()
>             .setGridLogger(new Log4J2Logger("log4j2.xml"))
>             .setClientMode(true);
>
>         Ignite client = Ignition.getOrStart(clientConfig);
>         igniteCache = client.getOrCreateCache("MY_CACHE");
>     }
>
>     public void run() throws InterruptedException {
>
>         // Create new continuous query.
>         ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
>
>         qry.setInitialQuery(new ScanQuery<>(new IgniteBiPredicate<Integer, String>() {
>             @Override
>             public boolean apply(Integer key, String val) {
>                 return key > 10;
>             }
>         }));
>
>         // Callback that is called locally when update notifications are received.
>         qry.setLocalListener(new CacheEntryUpdatedListener<Integer, String>() {
>             @Override
>             public void onUpdated(Iterable<CacheEntryEvent<? extends Integer, ? extends String>> evts) {
>                 for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
>                     logger.info("Updated entry [key=" + e.getKey() + ", val=" + e.getValue() + ']');
>             }
>         });
>
>         // This filter will be evaluated remotely on all nodes.
>         // Entries that pass this filter will be sent to the caller.
>         qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Integer, String>>() {
>             @Override
>             public CacheEntryEventFilter<Integer, String> create() {
>                 return new CacheEntryEventFilter<Integer, String>() {
>                     @Override
>                     public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> e) {
>                         return e.getKey() > 10;
>                     }
>                 };
>             }
>         });
>
>         // Execute query.
>         QueryCursor<Cache.Entry<Integer, String>> cur = igniteCache.query(qry);
>
>         // Iterate through existing data.
>         for (Cache.Entry<Integer, String> e : cur)
>             logger.info("Queried existing entry [key=" + e.getKey() + ", val=" + e.getValue() + ']');
>
>         Thread.currentThread().join();
>     }
> }
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


SQL queries returning incorrect results during High Load on Ignite V2.7.6

2020-04-21 Thread neerajarora100


I have a table into which, during the performance runs, inserts happen at
the beginning when the job starts. During the insertion there are also
parallel operations (GET/UPDATE queries) happening on that table. The GET
operation also updates a value in a column, marking that record as picked.
However, the next GET performed on the table would again return the same
record, even though the record was marked in progress.

P.S.: both operations are done by the same single thread in the system.
Logs below for reference: the record is marked in progress at line 1 at
**20:36:42,864**, yet it is returned in the result set of the query
executed after **20:36:42,891** by the same thread.

We also observed that during high load (usually in the same scenario as
above) some update operations (intermittent) were not applied to the table
even though the update executed successfully (validated using the returned
result and by doing a GET just afterwards to check the updated value),
without throwing an exception.


13 Apr 2020 20:36:42,864 [SHT-4083-initial] FINEST  -
AbstractCacheHelper.markContactInProgress:2321 -  Action state after mark in
progresss contactId.ATTR=: 514409 for jobId : 4083 is actionState : 128

13 Apr 2020 20:36:42,891 [SHT-4083-initial] FINEST  -
CacheAdvListMgmtHelper.getNextContactToProcess:347 - Query : select
priority, contact_id, action_state, pim_contact_store_id, action_id
, retry_session_id, attempt_type, zone_id, action_pos  from pim_4083 where
handler_id = ? and attempt_type != ?  and next_attempt_after <= ? and
action_state = ? and exclude_flag = ?  order
by attempt_type desc, priority desc, next_attempt_after asc,contact_id asc   
limit 1


This usually happens during the performance runs when parallel JOBs are
started against Ignite. Can anyone suggest what can be done to avoid such a
situation?

We have 2 Ignite data nodes deployed as a Spring Boot service in the
cluster, accessed by 3 client nodes, with 6 GB of RAM and persistence
enabled.
Ignite version: 2.7.6. The cache configuration is as follows:

IgniteConfiguration cfg = new IgniteConfiguration();
CacheConfiguration cachecfg = new CacheConfiguration(CACHE_NAME);
cachecfg.setRebalanceThrottle(100);
cachecfg.setBackups(1);
cachecfg.setCacheMode(CacheMode.REPLICATED);
cachecfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
cachecfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
cachecfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

// Defining and creating a new cache to be used by the Ignite Spring Data repository.
CacheConfiguration ccfg = new CacheConfiguration(CACHE_TEMPLATE);
ccfg.setStatisticsEnabled(true);
ccfg.setCacheMode(CacheMode.REPLICATED);
ccfg.setBackups(1);

DataStorageConfiguration dsCfg = new DataStorageConfiguration();
dsCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
dsCfg.setStoragePath(storagePath);
dsCfg.setWalMode(WALMode.FSYNC);
dsCfg.setWalPath(walStoragePath);
dsCfg.setWalArchivePath(archiveWalStoragePath);
dsCfg.setWriteThrottlingEnabled(true);
cfg.setAuthenticationEnabled(true);
dsCfg.getDefaultDataRegionConfiguration().setInitialSize(Long.parseLong(cacheInitialMemSize) * 1024 * 1024);
dsCfg.getDefaultDataRegionConfiguration().setMaxSize(Long.parseLong(cacheMaxMemSize) * 1024 * 1024);
cfg.setDataStorageConfiguration(dsCfg);

cfg.setClientConnectorConfiguration(clientCfg);
// Run the command to alter the default user credentials:
// ALTER USER "ignite" WITH PASSWORD 'new_passwd'
cfg.setCacheConfiguration(cachecfg);
cfg.setFailureDetectionTimeout(Long.parseLong(cacheFailureTimeout));
ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
ccfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
ccfg.setRebalanceThrottle(100);
int pool = cfg.getSystemThreadPoolSize();
cfg.setRebalanceThreadPoolSize(2);
cfg.setLifecycleBeans(new MyLifecycleBean());
logger.info(methodName, "Starting ignite service");
ignite = Ignition.start(cfg);
ignite.cluster().active(true);
// Get all server nodes that are already up and running.
Collection<ClusterNode> nodes = ignite.cluster().forServers().nodes();
// Set the baseline topology that is represented by these nodes.
ignite.cluster().setBaselineTopology(nodes);
ignite.addCacheConfiguration(ccfg);







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


joins

2020-04-21 Thread narges saleh
Hi All,
I have defined two caches/tables: person and org.
They are defined via query entities, and both have the same key fields.
I have set the affinity on both tables.
When I try to join the two tables, I get the following message
[Failed to prepare distributed join query. Join condition does not use
index.]
My query is:
select a.org_id, a.person_id, a.name, b.name from
 person.person a, org.org b
where a.org_id = b.org_id and
  a.person_id = ?

[The Spring XML QueryEntity configuration that followed was stripped by the
mail archive; only the key field names survive: ORG_ID for the org table,
and PERSON_ID, ORG_ID for the person table.]


Re: JDBC Connection

2020-04-21 Thread narges saleh
Denis,
I am setting streaming on in my JDBC connection URL, and I try to insert
data.
Here is the sequence:
Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://
*streaming=true*@file:///opt/ignite/config/config.xml");
PreparedStatement stmt = conn.prepareStatement(
  "INSERT INTO PERSON.PERSON(orgName,firstName, lastName, resume, salary)
VALUES(?,?, ?, ?, ?)");
Isn't this sufficient? Or am I supposed to set streaming on explicitly in
the code, rather than in the connection?

thanks.

On Mon, Apr 20, 2020 at 11:25 PM Denis Magda  wrote:

> You need to issue SET STREAMING ON/OFF commands after opening a
> connection with the driver:
> https://apacheignite-sql.readme.io/docs/set
>
> -
> Denis
>
>
> On Sun, Apr 19, 2020 at 7:55 AM narges saleh  wrote:
>
>> Actually, I still have an issue with this.
>> It seems I am not able to open a JDBC connection with streaming
>> enabled if the cache is not specified.
>> I get the following error:
>> SQLException: Cache cannot be null when streaming is enabled.
>>
>> Is there a way out of this, i.e., specifying a JDBC connection with
>> streaming enabled and leaving the cache out? The cache will be specified in the SQL.
>>
>> thanks.
>>
>> On Fri, Apr 10, 2020 at 6:47 AM narges saleh 
>> wrote:
>>
>>> Hi All,
>>> I think I have figured it out. I just need to specify the schema.
>>> thanks.
>>>
>>> On Thu, Apr 9, 2020 at 9:30 PM narges saleh 
>>> wrote:
>>>
 Hi All,
 1) How would one use ignite's JDBC connection to query multiple tables,
 both in case of joins and separately? I.e., Can one get a JDBC connection,
 and use it to query multiple caches? Another use case, is where a JDBC
 connection is used to query caches whose name is known at runtime. The
 client can be thin or thick.

 2) The same for inserts. Can one use the same JDBC connection with
 different jdbc statements to load data into different caches?

 I'd appreciate link and/or examples.

 thanks.

>>>


Re: Ignite.net caching a item to specific machine

2020-04-21 Thread Sudhir Patil
Thanks Pavel.

I will look at code and get back on it...

Regards
Sudhir

On Tuesday, April 21, 2020, Pavel Tupitsyn  wrote:

> I've prepared the example:
> https://github.com/ptupitsyn/ignite-net-examples/tree/
> master/CacheNodeFilter
>
> We don't actually need to write any Java, because Ignite ships with
> predefined AttributeNodeFilter.
>
> In the example there are two caches: "user" and "company", and two server
> nodes.
> Every cache stores data only on one of the server nodes, which is
> demonstrated with Affinity API.
>
>
> On Mon, Apr 20, 2020 at 8:43 AM Sudhir Patil 
> wrote:
>
>> Hi Pavel,
>>
>> Thanks. Yes, maybe sample around this would be helpful.
>> Requirement-wise, what we want is to store specific cache items on a
>> machine so that they are served in better ways. I am not sure whether it
>> will be helpful or not...
>>
>> Regards
>> Sudhir
>>
>> On Saturday, April 18, 2020, Pavel Tupitsyn  wrote:
>>
>>> Hi,
>>>
>>> There is a CacheConfiguration.NodeFilter property.
>>> The filter defines which nodes should store given cache data.
>>>
>>> Unfortunately, this property is not available in Ignite.NET.
>>> You can define a filter in Java, then use it in your Ignite.NET
>>> application:
>>> * Write a filter in Java and compile
>>> * Prepare Spring XML config file with  CacheConfiguration.NodeFilter set
>>> to your Java filter
>>> * Add compiled class/jar path to IgniteConfiguration.JvmClasspath in
>>> .NET
>>> * Set IgniteConfiguration.SpringConfigUrl in .NET
>>>
>>> Let me know if this works for you. I can prepare a working example as
>>> well.
>>>
>>> PS It may not be a good idea to separate data like that. What is the use
>>> case here?
>>>
>>> On Sat, Apr 18, 2020 at 4:59 AM Sudhir Patil 
>>> wrote:
>>>
 Hi All,

 I am using Ignite.NET. The question is: does it support caching a
 cache item on a specific machine?
 E.g., I want to store Employee records on machine A and Employer
 records on machine B.

 Regards
 Sudhir


 --
 Thanks & Regards,
 Sudhir Patil,
 +91 9881095647.

>>>
>>
>> --
>> Thanks & Regards,
>> Sudhir Patil,
>> +91 9881095647.
>>
>

-- 
Thanks & Regards,
Sudhir Patil,
+91 9881095647.
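[The predefined filter Pavel mentions above can be wired into the Spring XML roughly as follows; the attribute name and value are illustrative, and each server node that should host the cache would set the matching entry in IgniteConfiguration.userAttributes:]

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="user"/>
    <!-- Store this cache only on nodes started with user attribute cache.group=user -->
    <property name="nodeFilter">
        <bean class="org.apache.ignite.util.AttributeNodeFilter">
            <constructor-arg value="cache.group"/>
            <constructor-arg value="user"/>
        </bean>
    </property>
</bean>
```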


Re: Regarding EVT_NODE_SEGMENTED event

2020-04-21 Thread VeenaMithare
Thanks Monal,

What is the best way to generate an EVT_NODE_SEGMENTED event on the client
side for testing the event handler? (I am able to generate this on the
server side.)

regards,
Veena.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Unable to initiate IgniteContext in spark-shell

2020-04-21 Thread ameyakulkarni00
Hi 
I am trying to do a POC with Apache Ignite and Spark to improve our Spark
application's performance.
I have a 10-node dev cluster (CentOS 7, HDP 3.1, Spark 2.3.2).
I have installed Apache Ignite (2.8.0) on 5 of those servers. The
installation was smooth; all 5 nodes came up with the default configuration
and discovered each other.

I am simply trying to test Apache Ignite using spark-shell from one of the
nodes where Ignite is installed.
I am getting the below error on executing:

val ic = new IgniteContext(sc,()=> new IgniteConfiguration())

sparkShellError.txt

And the below error in the Ignite process:
igniteProcessError.txt

Kindly help me fix this, or point me in the right direction.

Regards
Ameya



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Is strong consistency supported in SQL mode?

2020-04-21 Thread priyank
Hi,
I see according to this article:
https://www.gridgain.com/resources/blog/apache-cassandra-vs-apache-ignite-strong-consistency-and-transactions
that Apache Ignite supports strong consistency. The code example in it uses
the key-value API.

Is this true even when running Ignite in SQL mode? 

Thanks for your time!
Regards,
Priyank





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Unable to run several ContinuousQuery due to: Failed to unmarshal discovery data for component: CONTINUOUS_PROC

2020-04-21 Thread AlexBor
Hi Denis,

Both clients are connecting to the same server.
Here are code samples:

Server:

public class IgniteServerCacheBootstrap {

    final static Logger logger = LoggerFactory.getLogger(IgniteCacheClient.class);

    public static void main(String[] args) throws IgniteCheckedException, InterruptedException {

        IgniteConfiguration serverConfig = new IgniteConfiguration()
            .setGridLogger(new Log4J2Logger("log4j2.xml"));

        Ignite server = Ignition.start(serverConfig);
        Thread.currentThread().join();
    }
}


Client (I run two such clients in parallel). The code is mostly taken from
Ignite samples:

public class IgniteCacheClient implements Serializable {

    Logger logger = LoggerFactory.getLogger(IgniteCacheClient.class);

    private IgniteCache<Integer, String> igniteCache;

    public IgniteCacheClient() throws IgniteCheckedException {
        IgniteConfiguration clientConfig = new IgniteConfiguration()
            .setGridLogger(new Log4J2Logger("log4j2.xml"))
            .setClientMode(true);

        Ignite client = Ignition.getOrStart(clientConfig);
        igniteCache = client.getOrCreateCache("MY_CACHE");
    }

    public void run() throws InterruptedException {

        // Create new continuous query.
        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

        qry.setInitialQuery(new ScanQuery<>(new IgniteBiPredicate<Integer, String>() {
            @Override
            public boolean apply(Integer key, String val) {
                return key > 10;
            }
        }));

        // Callback that is called locally when update notifications are received.
        qry.setLocalListener(new CacheEntryUpdatedListener<Integer, String>() {
            @Override
            public void onUpdated(Iterable<CacheEntryEvent<? extends Integer, ? extends String>> evts) {
                for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
                    logger.info("Updated entry [key=" + e.getKey() + ", val=" + e.getValue() + ']');
            }
        });

        // This filter will be evaluated remotely on all nodes.
        // Entries that pass this filter will be sent to the caller.
        qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Integer, String>>() {
            @Override
            public CacheEntryEventFilter<Integer, String> create() {
                return new CacheEntryEventFilter<Integer, String>() {
                    @Override
                    public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> e) {
                        return e.getKey() > 10;
                    }
                };
            }
        });

        // Execute query.
        QueryCursor<Cache.Entry<Integer, String>> cur = igniteCache.query(qry);

        // Iterate through existing data.
        for (Cache.Entry<Integer, String> e : cur)
            logger.info("Queried existing entry [key=" + e.getKey() + ", val=" + e.getValue() + ']');

        Thread.currentThread().join();
    }
}






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Subquery or Joins query not returning correct result Ignite V2.7.6

2020-04-21 Thread siva
Hi All,
I am using Apache Ignite v2.7.6 with the .NET client and server.
- Tables (*Company*, *CompanyTypes*) are created with QueryEntities at
cache configuration time.

I have two model classes, and in both classes all properties are SQL query
fields with private and public modifiers.

Company class (EntityId is the pk) and CompanyTypes (CompanyId and
CapabilityId are both pk fields).

*Here is the SQL query:*
[the query did not survive the mail archive]

So I am facing an issue where the number of records returned from Ignite
is wrong.

The above query works fine on SQL Server, but the result returned from
Ignite is wrong.

For example: expected selected rows: 100 records;
SQL Server result rows: 100 records;
Ignite result rows: sometimes fewer, sometimes more rows.

Please let me know if any other information is needed.

Thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

