lock and unlock the cache key?

2016-02-11 Thread Ravi
How can this be performed? I want to lock a particular key and then unlock
it. Also, please suggest how I can check whether a key is locked or not.


cache.lock(key) - it locks the key,
but how do I unlock the key?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/lock-and-unlock-the-cache-key-tp2950.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: MyBatis-Ignite integration release

2016-02-11 Thread Dmitriy Setrakyan
Thanks Roman, this is awesome news!

Do you know what the process at MyBatis is for moving from beta to a final
GA release?

D.

On Wed, Feb 10, 2016 at 11:41 PM, Roman Shtykh  wrote:

> Dear community,
>
> I would like to share the news about MyBatis-Ignite integration release,
> as a result of the collaboration between Ignite and MyBatis teams.
>
> http://blog.mybatis.org/2016/02/mybatis-ignite-released.html
>
> From now on, please consider using Apache Ignite as your second-level
> MyBatis cache.
>
> Best regards,
> Roman
>
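
For anyone who wants to try it, here is a rough sketch of enabling the integration
on a mapper (the adapter class name is taken from the mybatis-ignite project; the
mapper and query are made up):

import org.apache.ibatis.annotations.CacheNamespace;
import org.apache.ibatis.annotations.Select;
import org.mybatis.caches.ignite.IgniteCacheAdapter;

// Hypothetical mapper; the query and result type are placeholders.
@CacheNamespace(implementation = IgniteCacheAdapter.class)
public interface PersonMapper {
    @Select("SELECT * FROM person WHERE id = #{id}")
    java.util.Map<String, Object> selectPerson(int id);
}

The same can also be declared in mapper XML with a <cache> element pointing at the
adapter class.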


Re: Performance drops when ContinuousQuery starts

2016-02-11 Thread flob
Hey Val,

Thanks for your reply, I'll try playing with the page size and see if that
helps.

Regarding the measurements, I have a metric in the app that measures the
time spent updating the cache whenever a new event is received from the DB.
In this update I'm not only propagating the update to the cache, but also
running some quick SQL queries against the cache to compute some values.
Normally the queries are quick.
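
For reference, this is roughly what I plan to try (the page size value is picked
arbitrarily; the cache name and types are placeholders):

import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class ContinuousQueryPageSize {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("events");

        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

        // Larger pages batch more updates per network hop (throughput),
        // smaller pages deliver updates sooner (latency).
        qry.setPageSize(512);

        qry.setLocalListener(evts -> {
            for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
                System.out.println("Updated: " + e.getKey() + " -> " + e.getValue());
        });

        // The continuous query stays active while the cursor is open.
        try (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {
            cache.put(1, "first event");
        }
    }
}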

Thanks!




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Performance-drops-when-ContinuosQuery-starts-tp2927p2953.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Migrating From Hazelcast Service Interface To Apache Ignite Service Interface

2016-02-11 Thread vkulichenko
Hi Gareth,

Ignite's Service Grid really differs from Hazelcast SPI. From what I see
these are completely different features.

Can you please elaborate what your service does? If we know more details
about your use case, it would be easier to propose the best solution.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Migrating-From-Hazelcast-Service-Interface-To-Apache-Ignite-Service-Interface-tp2970p2971.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Transactions with READ WRITE through and Spring

2016-02-11 Thread vkulichenko
amit2103 wrote
> I get the idea (make the caches transactional). But I still have my initial
> doubt. Let me explain. My cluster has, say, 3 nodes, and I start the below
> process on 1 node. All caches are transactional.
> 
> 
> 1) Open transaction
> 2) Write to 1 Person cache (which is supposed to do read write through,
> the item is not in the cache)
> 3) Write to 1 Address cache (which is also read write through).
> 
> 4) Close transaction
> 
> The problem I faced was that the data for the Person cache was sent to
> another node, probably due to partitioning, and while doing write-through for
> the Address cache we were facing an error, as the Person record was supposed
> to be inserted into the DB before the Address record.
> 
> Will this be solved if I use a transactional cache?

Transactional caches perform all store updates from the node that initiated
the transaction, during its commit, regardless of how the entries involved in
the transaction are distributed across nodes. So the answer is YES: you're not
going to have any consistency issues if you run the transaction properly.
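
For example, a minimal sketch of such a transaction (assuming both caches are
TRANSACTIONAL and already configured with a write-through store; names and values
are placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class CrossCacheTx {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        IgniteCache<Integer, String> personCache = ignite.cache("person");
        IgniteCache<Integer, String> addressCache = ignite.cache("address");

        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
            personCache.put(1, "John Doe");        // enlisted in the tx, store not touched yet
            addressCache.put(1, "1 Main Street");  // enlisted in the tx

            tx.commit(); // store writes happen here, from the node that started the tx
        }
    }
}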

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Transactions-with-READ-WRITE-through-and-Spring-tp2964p2974.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Migrating From Hazelcast Service Interface To Apache Ignite Service Interface

2016-02-11 Thread Dmitriy Setrakyan
In my view, the Ignite service grid supports a lot more than Hazelcast and
provides a much simpler API for it.

   1. You can deploy a service on any single node in the cluster, on any
   cluster group, or on the whole grid.

   2. An Ignite service is not a wrapper around a cache. It can contain any
   user logic and access any kind of data structure provided by Ignite. For
   example, your service can access several Ignite caches or queues, issue
   computations on the grid, etc.

   3. The Ignite service grid supports a rich deployment model. In addition to
   deploying any number of services on any node, you can also use convenient
   service deployment methods on the Ignite API to deploy cluster-singletons,
   node-singletons, and key-singletons (see the sketch after this list).

   4. Ignite services can be accessed locally on the servers they are
   deployed on, as well as remotely. In case of remote access, Ignite will
   still provide you with the same deployed service API, but will proxy the
   calls internally to the servers on which the services are deployed.

   5. You can use Ignite services in sticky or non-sticky mode. In sticky
   mode, Ignite will always proxy service API calls to the same remote server.
   For non-sticky usage, Ignite will automatically load-balance service
   invocations among the nodes that have the service deployed.
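
A minimal sketch of points 3-5 (the counter service below is a made-up example;
only the IgniteServices calls are the actual API):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

// Hypothetical service interface and implementation (placeholders).
interface CounterService {
    int increment();
}

class CounterServiceImpl implements CounterService, Service {
    private int cnt;

    @Override public void init(ServiceContext ctx) { /* allocate resources */ }
    @Override public void execute(ServiceContext ctx) { /* long-running logic, if any */ }
    @Override public void cancel(ServiceContext ctx) { /* clean up */ }

    @Override public int increment() { return ++cnt; }
}

public class ServiceDeployExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Point 3: a single instance in the whole cluster, redeployed on node failure.
        ignite.services().deployClusterSingleton("counterService", new CounterServiceImpl());

        // Points 4 and 5: remote access through a proxy; sticky = false load-balances calls.
        CounterService svc =
            ignite.services().serviceProxy("counterService", CounterService.class, false);

        System.out.println("Counter: " + svc.increment());
    }
}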

As Valentin suggested, if you describe your use case, we may be able to
suggest in more detail how it can be implemented with Ignite.

D.

On Thu, Feb 11, 2016 at 9:16 PM, vkulichenko 
wrote:

> Hi Gareth,
>
> Ignite's Service Grid really differs from Hazelcast SPI. From what I see
> these are completely different features.
>
> Can you please elaborate what your service does? If we know more details
> about your use case, it would be easier to propose the best solution.
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Migrating-From-Hazelcast-Service-Interface-To-Apache-Ignite-Service-Interface-tp2970p2971.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


initialize the cache manager?

2016-02-11 Thread Ravi
How do I initialize a cache manager by passing the path to an ignite.xml
configuration file?
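
This is roughly what I'm trying (assuming the JCache CachingProvider accepts the
XML path as the manager URI; the path below is a placeholder):

import java.net.URI;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.spi.CachingProvider;

public class CacheManagerFromXml {
    public static void main(String[] args) throws Exception {
        CachingProvider provider = Caching.getCachingProvider();

        // The URI is expected to point at the Ignite Spring XML configuration.
        CacheManager mgr = provider.getCacheManager(new URI("file:///path/to/ignite.xml"), null);

        // ... create and use caches through the standard JCache API ...

        mgr.close();
    }
}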



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/intialize-the-cache-manager-tp2972.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Export/download documentation (plan/exist)

2016-02-11 Thread vkulichenko
Hi Tomas,

Can you please properly subscribe to the mailing list so that the community
receives email notifications? Please follow the instruction here:
http://apache-ignite-users.70518.x6.nabble.com/mailing_list/MailingListOptions.jtp?forum=1


tomascejka wrote
> Hello, I would like to know whether you plan to support (or already
> support) exporting the documentation from readme.io.
> I would like to read the docs on my ebook reader.

Currently this doesn't exist, but as far as I know readme.io allows exporting
to Markdown, which can then be converted to PDF. I believe this would be
useful, but it will require some effort. If you want to do this, I recommend
sending an email to the dev@ list and someone will help you.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Export-download-documentation-plan-exist-tp2948p2963.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: lock and unlock the cache key?

2016-02-11 Thread Ravi
Thanks to all!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/lock-and-unlock-the-cache-key-tp2950p2958.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: lock and unlock the cache key?

2016-02-11 Thread Ravi
Lock.unlock(long) - here a long is needed, but I want to unlock using the
object I used as a key, since I locked it with IgniteCache.lock(key)?





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/lock-and-unlock-the-cache-key-tp2950p2954.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Sharing Spark RDDs with Ignite

2016-02-11 Thread Dmitriy Morozov
Hi Valentin,

Sorry, I realize I didn't get it right. I'm now using IgniteRDD to save the
RDD values and IgniteCache to cache the StructType.
I'm using a ~1 MB Parquet file for testing, which has ~75K rows. I noticed
that saving the IgniteRDD is expensive; it takes about 4 seconds on my laptop.
I tried both client and server mode for IgniteContext, but still couldn't
make it faster.

Here's the code that I tried. I'd appreciate it if somebody could give a hint
on how to make it faster.
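
For reference, the approach looks roughly like this (a simplified Java sketch;
the config path, cache name, and data are placeholders):

import java.util.Arrays;
import org.apache.ignite.spark.JavaIgniteContext;
import org.apache.ignite.spark.JavaIgniteRDD;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SaveRddToIgnite {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[*]", "ignite-rdd-sketch");

        JavaIgniteContext<Integer, String> ic = new JavaIgniteContext<>(sc, "ignite-config.xml");

        JavaIgniteRDD<Integer, String> igniteRdd = ic.fromCache("partitioned");

        JavaPairRDD<Integer, String> pairs = sc
            .parallelize(Arrays.asList(1, 2, 3))
            .mapToPair(i -> new Tuple2<>(i, "value-" + i));

        // Writes the pairs into the underlying Ignite cache partition by partition,
        // instead of storing the whole RDD as a single cache entry.
        igniteRdd.savePairs(pairs);
    }
}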

Thanks!

On 10 February 2016 at 21:55, vkulichenko 
wrote:

> Hi Dmitry,
>
> What are you trying to achieve by putting the RDD into the cache as a
> single
> entry? If you want to save RDD data into the Ignite cache, it's better to
> create IgniteRDD and use its savePairs() or saveValues() methods. See [1]
> for details.
>
> [1]
>
> https://apacheignite-fs.readme.io/docs/ignitecontext-igniterdd#section-saving-values-to-ignite
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Sharing-Spark-RDDs-with-Ignite-tp2805p2941.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Kind regards,
Dima


Re: lock and unlock the cache key?

2016-02-11 Thread Vladimir Ozerov
Ravi,

Please clarify: which long do you mean?

Here is a short code snippet showing the lock/unlock cycle:

IgniteCache cache = ...;
Lock lock = cache.lock(key);
lock.unlock();

Vladimir.


On Thu, Feb 11, 2016 at 1:12 PM, Ravi  wrote:

> Lock.unlock(long) here long is needed but i want unlock object as key??
> as i locked using IgniteCache.lock(key).?
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/lock-and-unlock-the-cache-key-tp2950p2954.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: lock and unlock the cache key?

2016-02-11 Thread Alexey Goncharuk
Ravi,

A small typo sneaked into the code snippet: the lock() call was omitted. It
should be like this (I'm also omitting the try-finally block for
simplicity):

IgniteCache cache = ...;
Lock lock = cache.lock(key);

lock.lock();
// ... process while lock is held
lock.unlock();
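
For completeness, a fuller sketch with the try-finally block and a transactional
cache (explicit locks require CacheAtomicityMode.TRANSACTIONAL; names and values
are placeholders):

import java.util.concurrent.locks.Lock;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheLockExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("myCache");
            cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);

            Lock lock = cache.lock(1); // a lock handle for key 1; nothing is locked yet

            lock.lock();
            try {
                cache.put(1, "updated while locked"); // work done while the key is held
            }
            finally {
                lock.unlock(); // always release, even if the update throws
            }
        }
    }
}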


Re: Immediate flush of cache into backing store

2016-02-11 Thread Kobe
Sorry, I see a FileSystemConfiguration defaultMode property:

fs.setDefaultMode(IgfsMode.DUAL_SYNC);

What is the default value of this setting?

Thanks!

/Kobe


Kobe wrote
> Val,
> 
> I assume you are referring to CacheWriteSynchronizationMode? I do not see
> the
> mode DUAL_ASYNC. I see only FULL_ASYNC, FULL_SYNC and PRIMARY_ASYNC.
> 
> Am I missing something?
> 
> 
>  
> CacheConfiguration datacache = new CacheConfiguration();
> datacache.setName("igfs-data");
> datacache.setCacheMode(CacheMode.REPLICATED);
> datacache.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> datacache.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
> datacache.setBackups(0);
> datacache.setAffinityMapper(new IgfsGroupDataBlocksKeyMapper(512));
> 
> /Kobe
> vkulichenko wrote
>> Hi Kobe,
>> 
>> It sounds like you're using DUAL_ASYNC IGFS mode, is this the case? To
>> update the secondary file system synchronously, simply switch to
>> DUAL_SYNC in the configuration. Is this what you're looking for?
>> 
>> -Val
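
For reference, this is roughly how I plan to apply it (a sketch; the IGFS name is
a placeholder):

IgniteConfiguration cfg = new IgniteConfiguration();

FileSystemConfiguration fsCfg = new FileSystemConfiguration();
fsCfg.setName("igfs");
fsCfg.setDefaultMode(IgfsMode.DUAL_SYNC); // writes go to the secondary file system synchronously

cfg.setFileSystemConfiguration(fsCfg);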





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Immediate-flush-of-cache-into-backing-store-tp2942p2967.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Node discovery behind NAT

2016-02-11 Thread vkulichenko
Hi,

No, this socket can't be reused. And you're right, the server's communication
SPI can try to establish a connection with the client (on port 47100 by
default).

If the client is behind NAT, you can set up port forwarding on the router. You
will also have to implement the AddressResolver interface and provide it via
IgniteConfiguration.setAddressResolver().

For example, assume that server can access the router by IP 1.2.3.4 and your
laptop has address 192.168.10.10. In this case the router should be able to
forward 1.2.3.4:47100 to 192.168.10.10:47100. And address resolver should
map 192.168.10.10 to 1.2.3.4, so that the server node knows which address to
use to connect to the client.
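
A rough sketch of such a resolver, using the example addresses above (the class
name and mapping logic are made up):

import java.net.InetSocketAddress;
import java.util.Collection;
import java.util.Collections;
import org.apache.ignite.IgniteCheckedException;
import org.apache.ignite.configuration.AddressResolver;

public class NatAddressResolver implements AddressResolver {
    // Map the laptop's private address to the router's public one, keeping the port,
    // so that 1.2.3.4:47100 forwarded by the router reaches 192.168.10.10:47100.
    @Override public Collection<InetSocketAddress> getExternalAddresses(InetSocketAddress addr)
        throws IgniteCheckedException {
        if ("192.168.10.10".equals(addr.getHostString()))
            return Collections.singleton(new InetSocketAddress("1.2.3.4", addr.getPort()));

        return Collections.singleton(addr);
    }
}

// On the client node:
// IgniteConfiguration cfg = new IgniteConfiguration();
// cfg.setAddressResolver(new NatAddressResolver());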

Will this work for you?

Actually, I think we should add proper routing for client nodes. This is a
known issue, I will check if we already have a ticket for this.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Node-discovery-behind-NAT-tp2956p2965.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Transactions with READ WRITE through and Spring

2016-02-11 Thread vkulichenko
Hi Amit,

Can you please properly subscribe to the mailing list so that the community
receives email notifications? Please follow the instruction here:
http://apache-ignite-users.70518.x6.nabble.com/mailing_list/MailingListOptions.jtp?forum=1


amit2103 wrote
> Many thanks for the wonderful project.
> 
> I have two caches for two entities, say Person and Account. Now we need to
> update Person and Account within the same transaction. We are using Spring
> transactions (SpringTransactionManager) and a partitioned cache in the
> same VM. Now it works fine sometimes, but sometimes the first entity goes
> to another node while the second tries to write through on the first node
> itself. The second should be inserted into the DB after the first.
> 
> Naturally it causes an integrity error. We cannot group these together
> using an affinity key, as we have another separate JVM service which updates
> Address (a separate entity and cache) and Person.
> 
> How do we solve the first scenario, and if we use affinity keys, how do we
> keep the cache values consistent so that the data in all JVM nodes is in
> sync? We want that, in the first case, all data in the Person and Account
> entities is written synchronously to the DB through the same JVM, and that
> the same happens in the other case.
> 
> Can you guys please suggest an alternative or the way to configure this?
> Please note that my caches are currently atomic and transactional, and are
> partitioned and in-memory for each service.

Ignite transactions are thread-local, i.e., to enlist two updates into one
transaction, you need to execute both updates from one thread. But you say
that you update them from different nodes, which means that you have two
independent updates, and it's always possible that one of them succeeds and
the other does not.

Also, did I understand correctly that one of the caches is atomic? Both
caches need to be transactional to execute such a cross-cache transaction.
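
For example, a minimal sketch with both caches configured as transactional and
both updates executed from the same thread (cache names and values are
placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

public class TwoTransactionalCaches {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<Integer, String> personCfg = new CacheConfiguration<>("person");
        personCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

        CacheConfiguration<Integer, String> accountCfg = new CacheConfiguration<>("account");
        accountCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

        IgniteCache<Integer, String> persons = ignite.getOrCreateCache(personCfg);
        IgniteCache<Integer, String> accounts = ignite.getOrCreateCache(accountCfg);

        // Both updates happen in one thread, inside one transaction.
        try (Transaction tx = ignite.transactions().txStart()) {
            persons.put(1, "John Doe");
            accounts.put(1, "account-001");

            tx.commit();
        }
    }
}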

Makes sense?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Transactions-with-READ-WRITE-through-and-Spring-tp2964p2968.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Node discovery behind NAT

2016-02-11 Thread pshomov
Hello,

I have an Ignite node (IGFS caching HDFS) running on one of my servers,
close to where my data is. And I would like to have a node on my laptop to
have cached data close by. I set up the config for the node on my laptop to
have a discoverySpi like this:

<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
        <property name="addresses">
          <list>
            <value>XX.XX.XX.XX</value>
          </list>
        </property>
      </bean>
    </property>
  </bean>
</property>
The node on the server however reports this

[10:30:08,910][INFO ][disco-event-worker-#49%null%][GridDiscoveryManager]
Added new node to topology: TcpDiscoveryNode
[id=d527106b-9286-4218-95aa-2558c9ea31e5, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1,
172.16.208.1, 172.16.216.1, 192.168.1.42], sockAddrs=[/172.16.216.1:47500,
/0:0:0:0:0:0:0:1:47500, /172.16.208.1:47500, /127.0.0.1:47500,
/192.168.1.42:47500, /172.16.208.1:47500, /172.16.216.1:47500,
/192.168.1.42:47500], discPort=47500, order=28, intOrder=15,
lastExchangeTime=1455186602893, loc=false, ver=1.5.0#20151229-sha1:f1f8cda2,
isClient=false]
[10:30:08,911][WARN ][disco-event-worker-#49%null%][GridDiscoveryManager]
Node FAILED: TcpDiscoveryNode [id=d527106b-9286-4218-95aa-2558c9ea31e5,
addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 172.16.208.1, 172.16.216.1,
192.168.1.42], sockAddrs=[/172.16.216.1:47500, /0:0:0:0:0:0:0:1:47500,
/172.16.208.1:47500, /127.0.0.1:47500, /192.168.1.42:47500,
/172.16.208.1:47500, /172.16.216.1:47500, /192.168.1.42:47500],
discPort=47500, order=28, intOrder=15, lastExchangeTime=1455186602893,
loc=false, ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false]
   
[10:30:08,939][INFO
][exchange-worker-#52%null%][GridCachePartitionExchangeManager] Skipping
rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=28,
minorTopVer=0], evt=NODE_JOINED, node=d527106b-9286-4218-95aa-2558c9ea31e5] 
 

[10:30:08,949][INFO
][exchange-worker-#52%null%][GridCachePartitionExchangeManager] Skipping
rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=29,
minorTopVer=0], evt=NODE_FAILED node=d527106b-9286-4218-95aa-2558c9ea31e5]

If I get this right, the node on the server is trying to initiate a
connection back to my laptop on the IP address the node is running on. And
that is a problem, since on my laptop I have a local IP provided by a DHCP
server on the local network.

Is there a way to persuade the communicationSpi to re-use the socket we have
already opened from the laptop?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Node-discovery-behind-NAT-tp2956.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


ApacheCon NA 2016 - Important Dates!!!

2016-02-11 Thread Melissa Warnkin
Hello everyone!
I hope this email finds you well. I hope everyone is as excited about
ApacheCon as I am!
I'd like to remind you all of a couple of important dates, as well as ask for 
your assistance in spreading the word! Please use your social media platform(s) 
to get the word out! The more visibility, the better ApacheCon will be for 
all!! :)
CFP Close: February 12, 2016
CFP Notifications: February 29, 2016
Schedule Announced: March 3, 2016
To submit a talk, please visit:  
http://events.linuxfoundation.org/events/apache-big-data-north-america/program/cfp

Link to the main site can be found here:  
http://events.linuxfoundation.org/events/apache-big-data-north-america

Apache: Big Data North America 2016 Registration Fees:
Attendee Registration Fee: US$599 through March 6, US$799 through April 10, US$999 thereafter
Committer Registration Fee: US$275 through April 10, US$375 thereafter
Student Registration Fee: US$275 through April 10, US$375 thereafter
Planning to attend ApacheCon North America 2016 May 11 - 13, 2016? There is an 
add-on option on the registration form to join the conference for a discounted 
fee of US$399, available only to Apache: Big Data North America attendees.
So, please tweet away!!
I look forward to seeing you in Vancouver! Have a groovy day!!
~Melissa
on behalf of the ApacheCon Team