horizontal scaling

2018-01-27 Thread Rajarshi Pain
Hello,

We are trying to scale up our application to achieve more throughput. Right
now we are running 4 nodes (all on the same physical system) and getting
almost 1200 tps.

But when we add more nodes, performance decreases: with more than 6 nodes we
are not able to get more than 600 tps.

Our application is running on each node. We have used *compute.run* to
perform the activity on the local node.
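
For context, the pattern is roughly the following (a minimal sketch with an
illustrative task body, not our actual code):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class LocalComputeExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Restrict the compute projection to the local node and run the work there.
        ignite.compute(ignite.cluster().forLocal()).run(() -> {
            // Stand-in for one unit of our per-node activity.
            System.out.println("processing one transaction locally");
        });
    }
}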

We are trying to investigate why tps drops as we add more nodes, but so far we
have not been able to figure out whether we are making a mistake or something
else is going wrong.

Would someone be able to help me investigate and see whether each node is
working as expected, or whether there is some bottleneck that keeps the
cluster from processing properly?

I am not able to share the code, as it is on the client's machine.
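
Would something like this be a reasonable way to check per-node load (a
sketch, not our actual client code)?

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterMetrics;
import org.apache.ignite.cluster.ClusterNode;

public class NodeMetricsDump {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Print basic load metrics for every server node to spot an outlier.
        for (ClusterNode node : ignite.cluster().forServers().nodes()) {
            ClusterMetrics m = node.metrics();
            System.out.printf("node=%s cpu=%.2f executedJobs=%d avgJobTime=%.1f ms%n",
                node.id(), m.getCurrentCpuLoad(), m.getTotalExecutedJobs(),
                m.getAverageJobExecuteTime());
        }
    }
}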

-- 
Regards,
Raj


Re: setNodeFilter throwing a CacheException

2018-01-27 Thread Shravya Nethula
Hi Dmitry,

Please find the logs below:

[2018-01-26 07:52:28,563][INFO ][exchange-worker-#42][time] Started exchange init [topVer=AffinityTopologyVersion [topVer=29, minorTopVer=0], crd=true, evt=NODE_JOINED, evtNode=649c5360-7060-40cf-9454-ad6d08be2a7c, customEvt=null, allowMerge=true]
[2018-01-26 07:52:28,563][INFO ][exchange-worker-#42][GridDhtPartitionsExchangeFuture] Finish exchange future [startVer=AffinityTopologyVersion [topVer=29, minorTopVer=0], resVer=AffinityTopologyVersion [topVer=29, minorTopVer=0], err=null]
[2018-01-26 07:52:28,564][INFO ][exchange-worker-#42][time] Finished exchange init [topVer=AffinityTopologyVersion [topVer=29, minorTopVer=0], crd=true]
[2018-01-26 07:52:28,564][INFO ][exchange-worker-#42][GridCachePartitionExchangeManager] Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=29, minorTopVer=0], evt=NODE_JOINED, node=649c5360-7060-40cf-9454-ad6d08be2a7c]
[2018-01-26 07:52:29,963][INFO ][grid-nio-worker-tcp-comm-0-#25][TcpCommunicationSpi] Accepted incoming communication connection [locAddr=/10.0.0.3:47100, rmtAddr=/183.82.140.186:31996]
[2018-01-26 07:52:41,461][ERROR][tcp-disco-msg-worker-#3][TcpDiscoverySpi] Failed to unmarshal discovery custom message.
class org.apache.ignite.IgniteCheckedException: Failed to find class with given class loader for unmarshalling (make sure same versions of all classes are available on all nodes or enable peer-class-loading) [clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6, cls=net.aline.cloudedh.inmemorydb.ignite.DataNodeFilter]
    at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:126)
    at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
    at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:143)
    at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
    at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9795)
    at org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryCustomEventMessage.message(TcpDiscoveryCustomEventMessage.java:81)
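
For completeness, this is roughly how we set the node filter (a sketch; the
cache name "myCache" is a placeholder, and DataNodeFilter is our own
IgnitePredicate<ClusterNode> implementation):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

import net.aline.cloudedh.inmemorydb.ignite.DataNodeFilter;

public class NodeFilterConfig {
    public static void main(String[] args) {
        CacheConfiguration<Object, Object> cacheCfg = new CacheConfiguration<>("myCache");

        // Per the exception above, the filter class must be available (same
        // version) on every node in the topology: the discovery custom message
        // carrying it is unmarshalled with the JDK marshaller on each node.
        cacheCfg.setNodeFilter(new DataNodeFilter());

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCacheConfiguration(cacheCfg);

        Ignition.start(cfg);
    }
}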

Please let us know if we are missing any configurations!

Regards,
Shravya Nethula.






RE: Question about data distribution

2018-01-27 Thread svonn
Hi!

The class for my keys looks like this:

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class IgniteKey {

    private String deviceId;

    // Entries sharing a measurementId are collocated on the same node.
    @AffinityKeyMapped
    private long measurementId;

    private long timestamp;

    public IgniteKey(String deviceId, long measurementId, long timestamp) {
        this.deviceId = deviceId;
        this.measurementId = measurementId;
        this.timestamp = timestamp;
    }
}


One device can have multiple measurements, but as of now any calculation only
requires other entries from the same measurement, so only the measurementId
should be relevant for collocation.

One measurement contains 100k - 200k entries in one stream, and 500-1000 in
the other stream. Both streams use the same class for keys.

Whenever a new measurementId arrives, I print some output on the node it is
being processed on. I've had the following case:
Measurement 1 (M1 for short) -> node1
M2 -> node1
M3 -> node2
M4 -> node1
M5 -> node1
M6 -> node1

I expected that M2 would already be placed on node2. Performance-wise, I don't
think either node is close to its limit, though I'm not sure whether that is
relevant. Due to the 5-minute expiry policy, I can end up with one node
holding ~1 million cache entries while the other one has 0.
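
For reference, this is how I check which node a given key maps to (a sketch;
the cache name "measurements" and the key values are made up):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

public class AffinityCheck {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // "measurements" is a placeholder cache name.
        Affinity<IgniteKey> aff = ignite.affinity("measurements");

        // Only measurementId is @AffinityKeyMapped, so keys differing only in
        // deviceId or timestamp map to the same partition and thus the same node.
        IgniteKey key = new IgniteKey("device-1", 2L, 0L);
        ClusterNode node = aff.mapKeyToNode(key);

        System.out.println("partition=" + aff.partition(key) + " node=" + node.id());
    }
}

As far as I understand it, each measurementId hashes to a single partition, so
with only a handful of measurements an uneven spread across nodes would not be
surprising.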

- svonn




