Questions regarding Ignite as hdfs cache

2017-11-01 Thread shailesh prajapati
Hello,

I am evaluating Ignite for use as an HDFS cache to speed up my Hive
queries. I am using Hive with Tez. Below are my cluster and Ignite
configurations.

*Cluster: *
4 data nodes with 32 GB RAM each, 1 edge node
4 Ignite servers, one on each data node. The Ignite servers were started
with -Xmx10g.

*Setup done using:*
https://apacheignite-fs.readme.io/docs/installing-on-hortonworks-hdp
https://apacheignite-fs.readme.io/docs/running-apache-hive-over-ignited-hadoop

*Ignite configuration file (provided to each ignite server): *

[The XML configuration was stripped by the list archive; the TcpDiscoverySpi
address list it contained was:]

node1:47500..47509
node2:47500..47509
node3:47500..47509
node4:47500..47509

*Dataset used for the experiment: *
TPCH
customer 150 rows
lineitem 59986052 rows
nation 25 rows
orders 1500 rows
part 200 rows
partsupp 800 rows
region 5 rows
supplier 10 rows

and using standard TPCH queries

*Querying from hive shell with below properties:*
set fs.default.name=igfs://igfs@node1:10500/;



I now have the following questions:

1) My queries run fine with the above configuration. I want to verify
whether the data is actually being cached and served from the cache. How
should I check this? I used Ignite Visor to see if the data is available in
a cache, but I did not find any cache there.

However, in the Ignite server logs I can see messages with local node
metrics, like the ones shown below. The heap usage continuously increases
while a query is running. What does this mean?

Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=e38943b2, name=null, uptime=03:02:18:866]
^-- H/N/C [hosts=4, nodes=4, CPUs=32]
^-- CPU [cur=0.23%, avg=0.13%, GC=0%]
^-- PageMemory [pages=7381]
^-- Heap [used=1050MB, free=88.46%, comm=3343MB]
^-- Non heap [used=83MB, free=98.45%, comm=84MB]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=6, qSize=0]
^-- Outbound messages queue [size=0]


2) I ran queries on both hive+tez+hdfs and hive+tez+ignite+hdfs. I found
that the queries are slower when using Ignite as a cache layer. For
example, consider the standard TPCH query below:

select
    n_name,
    sum(l_extendedprice * (1 - l_discount)) as revenue
from
    customer,
    orders,
    lineitem,
    supplier,
    nation,
    region
where
    c_custkey = o_custkey
    and l_orderkey = o_orderkey
    and l_suppkey = s_suppkey
    and c_nationkey = s_nationkey
    and s_nationkey = n_nationkey
    and n_regionkey = r_regionkey
    and r_name = 'AFRICA'
    and o_orderdate >= '1993-01-01'
    and o_orderdate < '1994-01-01'
group by
    n_name
order by
    revenue desc;

Hive+tez avg time: 35.542s
Hive+tez+ignite avg time: 38.221s

Am I using the wrong configuration?

3) I tried running queries with Ignite MR with the following properties set in Hive:
set hive.rpc.query.plan = true;
set hive.execution.engine = mr;
set mapreduce.framework.name = ignite;
set mapreduce.jobtracker.address = node1:11211;

The queries were even slower than hive+tez+ignite. Is there any other
configuration for Ignite MR that I need to set?

4) Are my configurations optimal? If not, can you please suggest a better one?

5) Which serialization algorithm (Kryo, native Java, ...) does Ignite use?

Thanks


Re: Node failed to startup due to deadlock

2017-11-01 Thread naresh.goty
Hi Alexey,

Actually, I have shared thread dumps from both nodes earlier. I have
re-created the scenario and attached the log and thread dumps from
Node1.

Note:
Multiple thread dumps were taken at different times for Node1 while Node2
was down and after it was started again.
 

Regards,
Naresh

Node1_AfterNode2_shutdown.tdump
Node1_AfterNode2_shutdown_afterlittleDelay.tdump
Node1_AfterNode2_StartedAgain.tdump
Node1_Logs.txt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite 2.3 still does not support group_concat

2017-11-01 Thread james wu
The Ignite 2.3 release documentation says the GROUP_CONCAT aggregation is
supported (https://apacheignite-sql.readme.io/docs/group_concat), but
looking into the source code, this function is still not supported:
/** {@inheritDoc} */
@Override public String getSQL() {
    String text;

    switch (type) {
        case GROUP_CONCAT:
            throw new UnsupportedOperationException();

        case COUNT_ALL:
            return "COUNT(*)";

        default:
            text = type.name();

            break;
    }

    if (distinct)
        return text + "(DISTINCT " + child().getSQL() + ")";

    return text + StringUtils.enclose(child().getSQL());
}



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: map types in Cassandra

2017-11-01 Thread Denis Magda
Considering that this feature is not supported by the Cassandra
integration, I would suggest replacing Cassandra with Ignite altogether.

Just enable Ignite persistence [1] and you will get all the features
Cassandra offers, plus those supported by Ignite only - ACID transactions,
SQL with joins, and full-fledged in-memory storage.

[1] https://apacheignite.readme.io/docs/distributed-persistent-store
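
As an illustration of "just enable persistence", here is a minimal sketch of
a node configuration using the 2.3-style data storage API (region sizing and
other tuning are omitted; this is not from the thread above):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- Persist the default data region to disk. -->
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```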
 
—
Denis

> On Oct 25, 2017, at 9:14 AM, Igor Rudyak  wrote:
> 
> Ignite-Cassandra module doesn't support Cassandra complex types like
> map. Only BLOB and simple types which could be directly mapped
> to appropriate java types are supported.
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



[ANNOUNCE] Apache Ignite 2.3.0 Released

2017-11-01 Thread Denis Magda
The Apache Ignite Community is pleased to announce the release of Apache Ignite 
2.3.0.

Apache Ignite [1] is the in-memory computing platform that is durable, strongly 
consistent and highly available 
with powerful SQL, key-value and processing APIs.

This release adds more SQL capabilities and Ignite persistence improvements
to the project. The major changes are covered in our regular blog post:
https://blogs.apache.org/ignite/entry/apache-ignite-2-3-more

As a short summary of the article:
- ALTER TABLE is now supported for adding columns.
- The SQLLine tool is now included in every Ignite release.
- New SQL documentation has been released [2].
- A screencast has been prepared that shows how to interact with Ignite as
with an SQL database [3].
  
The full list of the changes can be found here [4].

Here is where you can grab the latest Ignite version:
https://ignite.apache.org/download.cgi

Please let us know [5] if you encounter any problems.

Regards,
The Apache Ignite Community

[1] https://ignite.apache.org
[2] https://apacheignite-sql.readme.io/docs
[3] https://www.youtube.com/watch?v=FKS8A86h-VY&feature=youtu.be
[4] https://ignite.apache.org/releases/2.3.0/release_notes.html
[5] https://ignite.apache.org/community/resources.html#ask



Re: Code deployment through UriDeploymentSpi

2017-11-01 Thread daivanov
Hi.
I have found another work around. 
I change thread context classloader to parent classloader before proxy
method invoke and swich it back after. 

So the code look like this:

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.Arrays;

import org.apache.ignite.internal.processors.service.GridServiceProxy;

public class ServiceHelper {
    public static <T> T serviceMethodInvoke(Object service, final String methodName,
        final Object... args) {
        ClassLoader initClassloader = Thread.currentThread().getContextClassLoader();
        Thread.currentThread().setContextClassLoader(getCorrectClassloader(initClassloader));
        try {
            Method method = service.getClass().getMethod(methodName,
                Arrays.stream(args).map(Object::getClass).toArray(Class[]::new));
            if (service instanceof GridServiceProxy) {
                return (T)((GridServiceProxy)service).invokeMethod(method, args);
            } else {
                return (T)method.invoke(service, args);
            }
        } catch (NoSuchMethodException | IllegalAccessException |
            InvocationTargetException e) {
            throw new IllegalStateException("Error while method invoke: " + methodName, e);
        } finally {
            Thread.currentThread().setContextClassLoader(initClassloader);
        }
    }

    private static ClassLoader getCorrectClassloader(ClassLoader initClassloader) {
        ClassLoader correctClassloader = initClassloader;
        // Walk up past any GridUriDeploymentClassLoader to its parent.
        while (correctClassloader.getParent() != null &&
            correctClassloader.getClass().getName().equals(
                "org.apache.ignite.spi.deployment.uri.GridUriDeploymentClassLoader")) {
            correctClassloader = correctClassloader.getParent();
        }
        return correctClassloader;
    }
}


But I don't understand what you mean by "locally-deployed job with same
dependency"?

Yours sincerely, Dmitry.









implement shared RDD in spark streaming in scala

2017-11-01 Thread roshan joe
I am very new to Ignite and went through the documentation and samples on
the web. Below is the use case I am hoping to solve using shared Spark
RDDs.

   - Spark Streaming App-1 writes incremental records that do not already
   exist (detected by using a hash value generated inside the app) to a
   shared RDD.

   - Spark Streaming App-2 looks up the data in this shared RDD using key
   columns and gets additional column values from the shared RDD populated
   by App-1 above.

   - Spark Streaming App-3, Spark Streaming App-4, etc. do the same lookup
   against the shared RDD, same as the above.

I am sure this use-case has been implemented before and am trying to avoid
spending time to build this from scratch. Is there some sample code to do
this using scala that can be shared? Thank you very much!


Re: Is WAL a memory-mapped file?

2017-11-01 Thread John Wilson
Excellent! Thanks!

On Wed, Nov 1, 2017 at 12:19 PM, Alexey Kukushkin  wrote:

> John,
>
> The default mode is the slowest of the 3 WAL write modes available. The
> other two are OS buffered write "LOG_ONLY" and Ignite buffered write
> "BACKGROUND".
>
> You might need some benchmarking in your real environment (hardware and
> software) with your specific task to understand if it is "slow". Often you
> can significantly speed the things up with native persistence performance
> tuning like adjusting page size, using separate disk for WAL, SSD
> over-provisioning, etc. Please read about some tricks here
> 
> .
>
>


Re: Is WAL a memory-mapped file?

2017-11-01 Thread Alexey Kukushkin
John,

The default mode is the slowest of the 3 WAL write modes available. The
other two are the OS-buffered write mode "LOG_ONLY" and the Ignite-buffered
write mode "BACKGROUND".

You might need some benchmarking in your real environment (hardware and
software) with your specific workload to understand whether it is "slow".
Often you can significantly speed things up with native persistence
performance tuning, like adjusting the page size, using a separate disk for
the WAL, SSD over-provisioning, etc. Please read about some tricks here.
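
As a sketch of switching WAL modes in the 2.3-style data storage
configuration (the value shown is just an example; benchmark before
committing to a mode):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- LOG_ONLY = OS-buffered writes; BACKGROUND = Ignite-buffered writes. -->
            <property name="walMode" value="LOG_ONLY"/>
        </bean>
    </property>
</bean>
```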


Unable to query Ignite 2.3 table using 2.2 JDBC thin client

2017-11-01 Thread blackfield
Hey,

I run *Ignite 2.3* locally for testing...just one node, persistence is
enabled.

Using DBeaver with the *Ignite 2.2* JDBC thin client:
1. Create two simple tables, City and Person, from the Getting Started
example.
2. Insert rows into those tables.

When I execute 'select * from Person', I get an "Unknown SQL listener
request ID;[request ID=5]" error in the DBeaver UI.

If I use the 2.3 JDBC thin client, I can perform the above query.


Is this a bug, or does Ignite simply not support backward compatibility
here?







Re: Node failed to startup due to deadlock

2017-11-01 Thread rajivgandhi
Hi Alexey,
We do have event listeners to handle data-loss events. We do 3 things
related to Ignite (amongst other unrelated things) in the listener (btw, an
instance of the local listener is running on each node, and these listeners
have remote filters as well):
1. Read an Ignite cache
2. Publish a message to another listener on the local (same) node using
IgniteMessage
3. Create a new thread which reloads (writes) the cache

Is anything above objectionable?

Naresh will send you the requested details. He mentioned to me that he has
already provided thread dumps for both nodes. He will shortly provide the
other logs.

thanks,
Rajeev








Re: Is WAL a memory-mapped file?

2017-11-01 Thread John Wilson
With WALMode = DEFAULT, which does a full-sync disk write, individual put
operations on a cache are then written straight to disk. That would make
them slow, right?

Thanks.

On Wed, Nov 1, 2017 at 10:58 AM, Dmitry Pavlov 
wrote:

> Hi John,
>
> No, the WAL consists of several constant-size, append-only files
> (segments).
> Segments are rotated and deleted after (WAL_History_size) checkpoints.
> The WAL is common to all caches.
>
> If you are interested in low-level details of implementation, you can see
> it here in draft wiki page for Ignite developers https://cwiki.
> apache.org/confluence/display/IGNITE/Ignite+Persistent+
> Store+-+under+the+hood#IgnitePersistentStore-underthehood-WALstructure
>
> Sincerely,
> Dmitriy Pavlov
>
> ср, 1 нояб. 2017 г. в 20:36, John Wilson :
>
>> Hi,
>>
>> Is the WAL a memory mapped file? Is it defined per cache?
>>
>> Thanks.
>>
>


Re: Is WAL a memory-mapped file?

2017-11-01 Thread Dmitry Pavlov
Hi John,

No, the WAL consists of several constant-size, append-only files (segments).
Segments are rotated and deleted after (WAL_History_size) checkpoints.
The WAL is common to all caches.

If you are interested in the low-level implementation details, you can see
them in this draft wiki page for Ignite developers:
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-WALstructure


Sincerely,
Dmitriy Pavlov

ср, 1 нояб. 2017 г. в 20:36, John Wilson :

> Hi,
>
> Is the WAL a memory mapped file? Is it defined per cache?
>
> Thanks.
>


Is WAL a memory-mapped file?

2017-11-01 Thread John Wilson
Hi,

Is the WAL a memory mapped file? Is it defined per cache?

Thanks.


Where is release notes for 2.3

2017-11-01 Thread Timofey Fedyanin
Hi!

The release notes URL on the downloads page is broken
(https://ignite.apache.org/releases/2.3.0/release_notes.html).

Where else can I see them?


Re: Ignite/Cassandra failing to use supplied value for where clause

2017-11-01 Thread Andrey Mashenkov
Got it.

Seems CassandraStore doesn't support the storeKeepBinary(true) flag.

On Wed, Nov 1, 2017 at 8:04 PM, Andrey Mashenkov  wrote:

> Hi Kenan,
>
> Looks like annotation configuration is not allowed together with XML
> configuration.
> I wonder why there was no exception. I've made an example with all
> mappings configured in XML and got no errors.
>
> Please check an example that works for me [1].
>
>
> [1] https://github.com/AMashenkov/ignite-with-cassandra-sample1
>
> On Fri, Oct 27, 2017 at 6:20 PM, Kenan Dalley  wrote:
>
>> So can someone try to take my example and actually get it to work?  It's
>> baffling to me why this fails as it does.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite/Cassandra failing to use supplied value for where clause

2017-11-01 Thread Andrey Mashenkov
Hi Kenan,

Looks like annotation configuration is not allowed together with XML
configuration.
I wonder why there was no exception. I've made an example with all mappings
configured in XML and got no errors.

Please check an example that works for me [1].


[1] https://github.com/AMashenkov/ignite-with-cassandra-sample1

On Fri, Oct 27, 2017 at 6:20 PM, Kenan Dalley  wrote:

> So can someone try to take my example and actually get it to work?  It's
> baffling to me why this fails as it does.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Using two ignite contexts with spark streaming

2017-11-01 Thread Denis Mekhanikov
Hi!

I don't really understand what you are trying to achieve.
In the example that you provided, the second IgniteContext, called
ignitec2, is not used. Do you mean that when you start the second
IgniteContext, both of them stop working?

When ignitec.fromCache("Data2") is executed, the cache gets created if it
hasn't been created yet. If you see in the log that SQL execution fails
because of missing tables, then the query entities are probably not
configured correctly.

Try to find a minimal example that doesn't work for you, and attach it here
as an archived project. It will be easier to tell what is wrong.

Denis

сб, 28 окт. 2017 г. в 23:47, manuelmourato :

> Hello there,
>
> My use case is relatively simple: I want to be able to save an RDD in Spark
> to two different caches in Ignite, inside the same Spark Context.
>
> When I try to save an RDD to a single IgniteCache, everything works well:
>
>   case class Sensor_Att(
>  @(QuerySqlField @field)(index = false)active:
> String,
>  @(QuerySqlField @field)(index = false)`type`:
> String,
>  @(QuerySqlField @field)(index = true)name:
> String,
>)
>
> val sqlContext: SparkSession =
>
> SparkSession.builder().master("local[*]").appName("DataProcessing").getOrCreate()
> val sc: SparkContext = sqlContext.sparkContext
>  val ssc: StreamingContext = new StreamingContext(sc, Seconds(5))
>
> val ignitec:IgniteContext = new IgniteContext(sc,()=>new
>
> IgniteConfiguration().setClientMode(false).setLocalHost("127.0.0.1").setActiveOnStart(true).
>   setCacheConfiguration(new
>
> CacheConfiguration[String,Sensor_Att]().setIndexedTypes(classOf[String],classOf[Sensor_Att]).setName("sensorData").
>
>
> setCacheMode(CacheMode.PARTITIONED).setAtomicityMode(CacheAtomicityMode.ATOMIC)).setDiscoverySpi(new
> TcpDiscoverySpi().
>   setLocalAddress("127.0.0.1").setLocalPort(47090).setIpFinder(new
> TcpDiscoveryMulticastIpFinder().
>   setAddresses(new util.ArrayList[String]))).setCommunicationSpi(new
> TcpCommunicationSpi().setLocalPort(47000)))
>
>
>   val cachedRDD:IgniteRDD[String,Sensor_Att]=ignitec.fromCache("Data1")
>   val  RDD_with_key: RDD[(String, Sensor_Att)]
> =df_RDD_NEW_CLASS.map(x=>(x.name,x))
>   cachedRDD.savePairs(RDD_with_key)
>   val df=cachedRDD.sql("select * from Sensor_Att")
>   df.show()
>
>
> If however I try to add a second IgniteContext, using the same class as an
> index, and try to save an RDD to its cache, like so:
>
> (code above...)
>  val ignitec=...
>  val ignitec2:IgniteContext = new IgniteContext(sc,()=>new
>
> IgniteConfiguration().setClientMode(false).setLocalHost("127.0.0.1").setActiveOnStart(true).
>   setCacheConfiguration(new
>
> CacheConfiguration[String,Sensor_Att]().setIndexedTypes(classOf[String],classOf[Sensor_Att]).setName("historicsensorData").
>
>
> setCacheMode(CacheMode.PARTITIONED).setAtomicityMode(CacheAtomicityMode.ATOMIC)).setDiscoverySpi(new
> TcpDiscoverySpi().
>   setLocalAddress("127.0.0.1").setLocalPort(47091).setIpFinder(new
> TcpDiscoveryMulticastIpFinder().
>   setAddresses(new util.ArrayList[String]))).setCommunicationSpi(new
> TcpCommunicationSpi().setLocalPort(47007)))
>
> (code above)
>   val df=cachedRDD.sql("select * from Sensor_Att")
>   df.show()
>
>   val
> cachedRDD2:IgniteRDD[String,Sensor_Att]=ignitec.fromCache("Data2")
>   cachedRDD2.savePairs(RDD_with_key)
>   val df2=cachedRDD2.sql("select * from Sensor_Att")
>   df2.show()
>
>
> I get the following error:
>
> javax.cache.CacheException: class
> org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to
> parse query: select * from Sensor_Att
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:807)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:765)
> at org.apache.ignite.spark.IgniteRDD.sql(IgniteRDD.scala:147)
> at
> sensorApp.SensorDataProcessing$.sensorApp$SensorDataProcessing$$data_proces
> (...)
>
>
> It seems that I can't derive a second IgniteContext from the same
> SparkContext, because it seems that the "Data2" cache was not created.
> Do you have any suggestions about this?
>
> Thank you.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Failed to create string representation of binary object.

2017-11-01 Thread Alexey Popov
Ankit,

That looks very strange.
Your class does not have the registrationInfoResponse field which is
mentioned in the error.

Please confirm that the node with id="b2df236f-4fba-4794-b0e4-4e040581ba9d"
is a part of your load-testing cluster.

Do you have peerClassLoadingEnabled=true in your configs?

Thanks,
Alexey








Re: What's the maximum number of nodes for an Apache Ignite cluster?

2017-11-01 Thread MrAsanjar .
thanks Alexey

On Wed, Nov 1, 2017 at 4:32 AM, Alexey Kukushkin 
wrote:

> Hi,
>
> As I know there is no limit on the number of nodes. The largest cluster
> one of Ignite users is building that I know about will have 2000 nodes -
> may be that was what caused confusion.
>
> Please note that the default affinity function
> (RendezvousAffinityFunction) manages 1024 partitions by default. That
> means, for example, having a cluster of more than 1024 nodes without
> backups might not be the most efficient use of resources, although it still
> provides additional fault tolerance. You can customise number of partitions
> in the configuration.
>


Re: Code deployment through UriDeploymentSpi

2017-11-01 Thread Ilya Kasnacheev
Hello again!

I have found one work-around; it is not very sound, but it seems reliable.

Before the first invocation of a UriDeploymentSpi-deployed compute job,
first run a locally-deployed job with the same dependency. In this case,
ServiceTopologyCallable is called outside of the UriDeploymentSpi context,
is deployed properly, and doesn't cause problems later.

Work is underway for proper fix.

-- 
Ilya Kasnacheev

2017-10-29 14:46 GMT+03:00 Dmitriy Ivanov :

> Hi!
> In our project we need to deploy custom compute tasks into the cluster
> without cluster restart and p2p class loading.
> I am trying to use org.apache.ignite.spi.deployment.uri.UriDeploymentSpi
> for that purpose, but I have a problem.
>
> I have a simple Ignite Service and an Ignite Compute Task which uses it
> through @ServiceResource.
> This ComputeTask is located in a .gar file which was deployed via
> UriDeploymentSpi.
>
> If I have a service implementation on each node (node singleton service),
> then it works great.
> But if I deploy the service as a cluster singleton, then the task executes
> correctly only on the node with this service.
>
> On other nodes @ServiceResource returns a ServiceProxy that throws an
> exception on remote service method invocation (a lambda with the service
> call cannot be deployed):
>
> SEVERE: Failed to execute job 
> [jobId=68a96d76f51-7919c34c-9a48-4068-bcd6-70dad5595e86,
> ses=GridJobSessionImpl [ses=GridTaskSessionImpl [taskName=task-one,
> dep=GridDeployment [ts=1509275650885, depMode=SHARED, 
> clsLdr=GridUriDeploymentClassLoader
> [urls=[file:/C:/IdeaProjects/dmp_code_deployment/test/out/
> deployment/gg.uri.deployment.tmp/428ec712-e6d0-4eab-97f9-
> ce58d7b3e0f5/dirzip_task-one6814855127293591501.gar/]],
> clsLdrId=7eb15d76f51-428ec712-e6d0-4eab-97f9-ce58d7b3e0f5, userVer=0,
> loc=true, sampleClsName=com.gridfore.tfedyanin.deploy.Task1,
> pendingUndeploy=false, undeployed=false, usage=1], 
> taskClsName=com.gridfore.tfedyanin.deploy.Task1,
> sesId=38a96d76f51-7919c34c-9a48-4068-bcd6-70dad5595e86,
> startTime=1509275650601, endTime=9223372036854775807,
> taskNodeId=7919c34c-9a48-4068-bcd6-70dad5595e86, 
> clsLdr=GridUriDeploymentClassLoader
> [urls=[file:/C:/IdeaProjects/dmp_code_deployment/test/out/
> deployment/gg.uri.deployment.tmp/428ec712-e6d0-4eab-97f9-
> ce58d7b3e0f5/dirzip_task-one6814855127293591501.gar/]], closed=false,
> cpSpi=null, failSpi=null, loadSpi=null, usage=1, fullSup=false,
> internal=false, subjId=7919c34c-9a48-4068-bcd6-70dad5595e86,
> mapFut=IgniteFuture [orig=GridFutureAdapter [ignoreInterrupts=false,
> state=INIT, res=null, hash=1254296516]], execName=null],
> jobId=68a96d76f51-7919c34c-9a48-4068-bcd6-70dad5595e86]]
> class org.apache.ignite.IgniteDeploymentException: Failed to auto-deploy
> task (was task (re|un)deployed?): class org.apache.ignite.internal.
> processors.service.GridServiceProcessor$ServiceTopologyCallable
> at org.apache.ignite.internal.util.IgniteUtils$8.apply(
> IgniteUtils.java:833)
> at org.apache.ignite.internal.util.IgniteUtils$8.apply(
> IgniteUtils.java:831)
> at org.apache.ignite.internal.util.IgniteUtils.
> convertException(IgniteUtils.java:952)
> at org.apache.ignite.internal.processors.service.
> GridServiceProxy.invokeMethod(GridServiceProxy.java:208)
> at org.apache.ignite.internal.processors.service.GridServiceProxy$
> ProxyInvocationHandler.invoke(GridServiceProxy.java:356)
> at com.sun.proxy.$Proxy31.println(Unknown Source)
> at com.gridfore.tfedyanin.deploy.Task1$MyComputeJobAdapter.
> execute(Task1.java:49)
> at org.apache.ignite.internal.processors.job.GridJobWorker$
> 2.call(GridJobWorker.java:566)
> at org.apache.ignite.internal.util.IgniteUtils.
> wrapThreadLoader(IgniteUtils.java:6608)
> at org.apache.ignite.internal.processors.job.GridJobWorker.
> execute0(GridJobWorker.java:560)
> at org.apache.ignite.internal.processors.job.GridJobWorker.
> body(GridJobWorker.java:489)
> at org.apache.ignite.internal.util.worker.GridWorker.run(
> GridWorker.java:110)
> at org.apache.ignite.internal.processors.job.GridJobProcessor.
> processJobExecuteRequest(GridJobProcessor.java:1181)
> at org.apache.ignite.internal.processors.job.GridJobProcessor$
> JobExecutionListener.onMessage(GridJobProcessor.java:1908)
> at org.apache.ignite.internal.managers.communication.
> GridIoManager.invokeListener(GridIoManager.java:1556)
> at org.apache.ignite.internal.managers.communication.GridIoManager.
> processRegularMessage0(GridIoManager.java:1184)
> at org.apache.ignite.internal.managers.communication.
> GridIoManager.access$4200(GridIoManager.java:126)
> at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(
> GridIoManager.java:1097)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: class org.apache.ignite.internal.IgniteDeploymentCheckedException:
> Failed to auto-deploy task (was task (re|un)deployed?): class
> o

Re: Ignite Nodes not connecting to cluster in docker swarm mode

2017-11-01 Thread Andrey Mashenkov
Hi Rishi,

Check if port 31100 is reachable.
Also it is possible you have outdated linux kernel with a bug that return
error instead of dropping a packet [1] e.g. network buffer is full.
Can you try to start a grid on newer linux\kernel version?

[1] https://github.com/playframework/play-plugins/issues/64

On Wed, Nov 1, 2017 at 2:25 PM, rishi007bansod 
wrote:

> Hi,
>  When I connect ignite multiple containers in --net=host mode then they
> get connected. But when I try to connect Ignite containers in
> swarm(cluster)
> mode, they are unable to connect. I am getting following error messege[in
> verbose(-v) mode]:
>
> *[10:46:41,947][SEVERE][grid-time-coordinator-#102%null%][
> GridClockSyncProcessor]
> Failed to send time request to remote node
> [rmtNodeId=41514e41-97f1-4cda-970f-cb17d0c611d2, addr=/10.255.0.5,
> port=31100]
> class org.apache.ignite.IgniteCheckedException: Failed to send datagram
> message to remote node [addr=/10.255.0.5, port=31100, msg=GridClockMessage
> [origNodeId=94cf9c90-285e-4afb-959c-fe649d665ae3,
> targetNodeId=41514e41-97f1-4cda-970f-cb17d0c611d2, origTs=1509533201848,
> replyTs=0]]
> at
> org.apache.ignite.internal.processors.clock.GridClockServer.sendPacket(
> GridClockServer.java:162)
> at
> org.apache.ignite.internal.processors.clock.GridClockSyncProcessor$
> TimeCoordinator.requestTime(GridClockSyncProcessor.java:458)
> at
> org.apache.ignite.internal.processors.clock.GridClockSyncProcessor$
> TimeCoordinator.body(GridClockSyncProcessor.java:385)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Operation not permitted (sendto failed)
> at java.net.PlainDatagramSocketImpl.send(Native Method)
> at java.net.DatagramSocket.send(DatagramSocket.java:693)
> at
> org.apache.ignite.internal.processors.clock.GridClockServer.sendPacket(
> GridClockServer.java:158)
> ... 4 more
> *
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Ignite Nodes not connecting to cluster in docker swarm mode

2017-11-01 Thread rishi007bansod
Hi,
 When I connect multiple Ignite containers in --net=host mode, they get
connected. But when I try to connect Ignite containers in swarm (cluster)
mode, they are unable to connect. I am getting the following error message
[in verbose (-v) mode]:

*[10:46:41,947][SEVERE][grid-time-coordinator-#102%null%][GridClockSyncProcessor]
Failed to send time request to remote node
[rmtNodeId=41514e41-97f1-4cda-970f-cb17d0c611d2, addr=/10.255.0.5,
port=31100]
class org.apache.ignite.IgniteCheckedException: Failed to send datagram
message to remote node [addr=/10.255.0.5, port=31100, msg=GridClockMessage
[origNodeId=94cf9c90-285e-4afb-959c-fe649d665ae3,
targetNodeId=41514e41-97f1-4cda-970f-cb17d0c611d2, origTs=1509533201848,
replyTs=0]]
at
org.apache.ignite.internal.processors.clock.GridClockServer.sendPacket(GridClockServer.java:162)
at
org.apache.ignite.internal.processors.clock.GridClockSyncProcessor$TimeCoordinator.requestTime(GridClockSyncProcessor.java:458)
at
org.apache.ignite.internal.processors.clock.GridClockSyncProcessor$TimeCoordinator.body(GridClockSyncProcessor.java:385)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Operation not permitted (sendto failed)
at java.net.PlainDatagramSocketImpl.send(Native Method)
at java.net.DatagramSocket.send(DatagramSocket.java:693)
at
org.apache.ignite.internal.processors.clock.GridClockServer.sendPacket(GridClockServer.java:158)
... 4 more
*
Thanks





Ignite Nodes not connecting to cluster in docker swarm mode

2017-11-01 Thread rishi007bansod
Hi,
 When I connect multiple Ignite containers in --net=host mode, they get
connected. But when I try to connect Ignite containers in swarm (cluster)
mode, they are unable to connect. I am getting the following error message:

[10:46:41,947][SEVERE][grid-time-coordinator-#102%null%][GridClockSyncProcessor]
Failed to send time request to remote node
[rmtNodeId=41514e41-97f1-4cda-970f-cb17d0c611d2, addr=/10.255.0.5,
port=31100]
class org.apache.ignite.IgniteCheckedException: Failed to send datagram
message to remote node [addr=/10.255.0.5, port=31100, msg=GridClockMessage
[origNodeId=94cf9c90-285e-4afb-959c-fe649d665ae3,
targetNodeId=41514e41-97f1-4cda-970f-cb17d0c611d2, origTs=1509533201848,
replyTs=0]]
    at org.apache.ignite.internal.processors.clock.GridClockServer.sendPacket(GridClockServer.java:162)
    at org.apache.ignite.internal.processors.clock.GridClockSyncProcessor$TimeCoordinator.requestTime(GridClockSyncProcessor.java:458)
    at org.apache.ignite.internal.processors.clock.GridClockSyncProcessor$TimeCoordinator.body(GridClockSyncProcessor.java:385)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Operation not permitted (sendto failed)
    at java.net.PlainDatagramSocketImpl.send(Native Method)
    at java.net.DatagramSocket.send(DatagramSocket.java:693)
    at org.apache.ignite.internal.processors.clock.GridClockServer.sendPacket(GridClockServer.java:158)
    ... 4 more
Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: What's the maximum number of nodes for an Apache Ignite cluster?

2017-11-01 Thread Alexey Kukushkin
Hi,

As far as I know, there is no limit on the number of nodes. The largest
cluster being built by an Ignite user that I know of will have 2000 nodes;
maybe that is what caused the confusion.

Please note that the default affinity function (RendezvousAffinityFunction)
manages 1024 partitions by default. That means, for example, that a
cluster of more than 1024 nodes without backups might not be the most
efficient use of resources, although the extra nodes still provide additional
fault tolerance. You can customise the number of partitions in the configuration.
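For example, a minimal Spring XML sketch of such a configuration (the cache name and the partition count of 4096 are illustrative assumptions; `RendezvousAffinityFunction` exposes a `partitions` property for this):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="affinity">
        <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
            <!-- Raise the partition count above the default of 1024. -->
            <property name="partitions" value="4096"/>
        </bean>
    </property>
</bean>
```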


Re: Benchmark results questions

2017-11-01 Thread Ray
Hi Dmitry,

It's been a while now; did you find out what happened?






Re: Out of memory streaming to Ignite Apache 2.0

2017-11-01 Thread ilya.kasnacheev
You may be hitting this scenario, from my experience:

Since you have three nodes, you begin to get deadlocks between load tasks.

These deadlocks cause tasks to be postponed, but the real trouble happens when
they survive past 30 seconds and are dumped to the logs. There is a massive amount
of data in your tasks, and it is pretty-printed for logging, leading to
massive spikes in RAM usage and eventually the OOM.

The solution here is to use smaller batches. How much data do you pass with one
task currently?
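To illustrate the batching idea, here is a minimal sketch (the `Batching` class, the `split` helper, and the batch size are assumptions for illustration; the actual streamer/compute call is omitted because it needs a running cluster): each oversized load is cut into bounded slices that can then be submitted one at a time.

```java
import java.util.ArrayList;
import java.util.List;

public class Batching {
    /**
     * Cuts a large list into slices of at most batchSize entries,
     * so each load task carries a small, bounded amount of data.
     */
    public static <T> List<List<T>> split(List<T> all, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < all.size(); i += batchSize)
            batches.add(new ArrayList<>(all.subList(i, Math.min(i + batchSize, all.size()))));
        return batches;
    }
}
```

Each slice would then be handed to one load task instead of passing the whole data set at once.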

Regards,





Re: Node failed to startup due to deadlock

2017-11-01 Thread Alexey Popov
Naresh, Rajeev,

I've looked through the logs.
Node2 is stuck because it is waiting for a partition map exchange response
from Node1 (id=96eea214-a2e3-4e0a-b8c6-7fdf3f29727c).
So it looks like the real deadlock is on Node1
(id=96eea214-a2e3-4e0a-b8c6-7fdf3f29727c).

Can you also send Node1's logs and thread dumps?

I will check them and let you know if it is a known/fixed issue.

BTW, please ensure that your event listeners on Node1 do not update Ignite
caches etc. It could be a case similar to
[http://apache-ignite-users.70518.x6.nabble.com/Threads-waiting-on-cache-reads-tt17802.html]
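To illustrate that advice with a generic, JDK-only sketch (`handleEvent` is a hypothetical stand-in for a real Ignite event callback, which is omitted here): instead of updating caches from the event notification thread, the callback returns quickly and hands the work off to its own executor.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class OffloadListener {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    /** Results land here; in a real listener this would be the effect of the cache update. */
    public final BlockingQueue<String> processed = new LinkedBlockingQueue<>();

    /**
     * Stand-in for the event callback: it must return quickly, so the
     * (potentially blocking) work runs on a separate executor instead
     * of the notification thread.
     */
    public boolean handleEvent(String evt) {
        worker.submit(() -> processed.add("processed " + evt));
        return true; // keep the listener registered
    }
}
```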

Thanks,
Alexey





Re: when client node connects to server node, server node throws NotSerializableException

2017-11-01 Thread Andrey Mashenkov
Hi Jeff,

The CacheStore class should be on the classpath of all nodes, including clients.
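To make that concrete, here is a hedged client-side sketch (the store factory class `com.example.MyCacheStore` is a placeholder, and the cache name mirrors the one from the quoted server config below; this is an illustration, not a verified config): the client declares the same cache with the same store factory, and the factory's class must also be on the client's classpath.

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Start this node as a client. -->
    <property name="clientMode" value="true"/>
    <property name="cacheConfiguration">
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <!-- Must match the cache name used on the server nodes. -->
            <property name="name" value="IPIgniteTestBOImmutable"/>
            <!-- The factory class (and the CacheStore it creates) must be
                 on this client's classpath as well. -->
            <property name="cacheStoreFactory">
                <bean class="javax.cache.configuration.FactoryBuilder" factory-method="factoryOf">
                    <constructor-arg value="com.example.MyCacheStore"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```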


On 1 Nov 2017 at 4:52, "Jeff Jiao" 
wrote:

> Hi Andrew,
>
> Thanks a lot for all the replies.
>
> Yes, BoConverter implements Serializable; otherwise Ignite would throw
> NotSerializableException.
> The "Class" here is actually for Hibernate to get data from the DB:
> org.hibernate.Session.get(Class clazz, Serializable id).
> After getting the data, BoConverter converts it to a BinaryObject and
> then puts it into Ignite.
>
>
> What do you mean by "is the class present on all nodes"?
> If I have a server node with the config below, and I want a
> client node to connect to it, do I need to add the cacheStore config to the client
> node config too? Can you show me what a suggested client node config should
> look like in this situation? Thanks~
>
>  class="org.apache.ignite.configuration.IgniteConfiguration">
> 
> 
> 
>  />
> 
> 
> 
> 
> 
> 
> 
>
>  class="org.apache.ignite.configuration.CacheConfiguration">
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
>  factory-method="forName">
>  value="com.pingan.pilot.
> ignite.test.bo.otw.IgniteTestBO_OTW" />
> 
> 
> 
> 
> 
> 
> 
>  value="java.lang.String" />
>  value="IPIgniteTestBOImmutable"
> />
> 
> 
>  value="java.lang.Integer" />
>  value="java.lang.Long" />
>  value="java.lang.Double" />
>  value="java.lang.String" />
>  value="java.lang.Float" />
>  value="java.util.Array" />
> 
> 
> 
> 
>  class="org.apache.ignite.cache.QueryIndex">
>  value="intf" />
> 
>  class="org.apache.ignite.cache.QueryIndex">
>  value="longf" />
> 
>  class="org.apache.ignite.cache.QueryIndex">
>  value="doublef" />
> 
>  class="org.apache.ignite.cache.QueryIndex">
>  value="stringf" />
> 
>  class="org.apache.ignite.cache.QueryIndex">
>  value="floatf" />
> 
> 
> 
> 
> 
> 
> 
>
>
>
>