CVE-2021-28163/28164/28165 (Jetty) vulnerabilities and Ignite 2.8.1

2021-10-27 Thread Dana Milan
Hi,
Can anyone clarify how Jetty is used by Ignite 2.8.1, and whether there is
a way to avoid its vulnerabilities when using Ignite other than upgrading
to a newer Ignite version?
To be more specific: if I don't enable the REST API (by not moving
ignite-rest-http from the libs/optional to the libs directory), will that
eliminate these vulnerabilities from my Ignite node?

Thanks a lot,
Dana


Ignite's memory consumption

2020-08-26 Thread Dana Milan
Hi all Igniters,

I am trying to minimize Ignite's memory consumption on my server.

Some background:
My server has 16GB RAM, and is supposed to run applications other than
Ignite.
I use Ignite to store a cache. I use the TRANSACTIONAL_SNAPSHOT mode and I
don't use persistence (configuration file attached). To read and update the
cache I use SQL queries, through ODBC Client in C++ and through an
embedded client-mode node in C#.
My data consists of a table with 5 columns, and I guess around tens of
thousands of rows.
Ignite metrics tell me that my data takes 167MB ("CFGDataRegion region
[used=167MB, free=67.23%, comm=256MB]"; this region contains mainly this
one cache).

At the beginning, when I didn't tune the JVM at all, the Apache.Ignite
process consumed around 1.6-1.9GB of RAM.
After I've done some reading and research, I use the following JVM options
which have brought the process to consume around 760MB as of now:
-J-Xms512m
-J-Xmx512m
-J-Xmn64m
-J-XX:+UseG1GC
-J-XX:SurvivorRatio=128
-J-XX:MaxGCPauseMillis=1000
-J-XX:InitiatingHeapOccupancyPercent=40
-J-XX:+DisableExplicitGC
-J-XX:+UseStringDeduplication

Ignite has currently been up for 29 hours on my server. When I first started
the node, the Apache.Ignite process consumed around 600MB (after my data
insertion, which doesn't change much afterwards), and as stated, it now
consumes around 760MB. I've been monitoring it every once in a while, and
this is not a sudden rise: it has been rising slowly but steadily ever since
the node started.
I used DBeaver to look into the node metrics system view, and I turned on the
garbage collector logs. The garbage collector log shows that heap is
constantly growing, but I guess this is due to the SQL queries and their
results being stored there. (There are a few queries in a second, the
results normally contain one row but can contain tens or hundreds of rows).
After every garbage collection the heap usage is between 80-220MB, which is
in accordance with what I see in the HEAP_MEMORY_USED system view metric.
Also, I can see that NONHEAP_MEMORY_COMMITTED is around 102MB and
NONHEAP_MEMORY_USED is around 98MB.

My question is, what could be causing the constant growth in memory usage?
What else consumes memory that doesn't appear in these metrics?

Thanks for your help!

[Attachment: Spring beans XML configuration (http://www.springframework.org/schema/beans); the archive stripped the markup, leaving only the schema references.]

Serialize a char array member as part of class serialization into cache - C++ API

2020-07-30 Thread Dana Milan
Hi,

I couldn't find an answer anywhere else, hopefully you can help me.

I have the following class:

class Pair {
    friend struct ignite::binary::BinaryType<Pair>;
public:
    Pair() {
        _len = 0;
        _buff = nullptr;
    }

    Pair(char* buff, int len) {
        _len = len;
        _buff = new char[len];
        for (int i = 0; i < len; i++) {
            _buff[i] = buff[i];
        }
    }

    ~Pair() {
        delete[] _buff;
    }

private:
    char* _buff;
    int _len;
};
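Unrelated to the compile errors, but worth noting: as written, Pair is exposed to a double delete[], because GetNull() copy-assigns a Pair while the compiler-generated copy operations only duplicate the _buff pointer. A rule-of-three sketch of the fix, with deep copies added (the Length/At accessors are my additions for illustration, not code from the thread):

```cpp
#include <algorithm>
#include <cassert>

class Pair {
public:
    Pair() : _buff(nullptr), _len(0) {}

    Pair(const char* buff, int len) : _buff(new char[len]), _len(len) {
        std::copy(buff, buff + len, _buff);
    }

    // Deep copy, so two Pairs never share (and later double-delete) one buffer.
    Pair(const Pair& other) : _buff(new char[other._len]), _len(other._len) {
        std::copy(other._buff, other._buff + other._len, _buff);
    }

    // Copy-and-swap assignment handles self-assignment for free.
    Pair& operator=(Pair other) {
        std::swap(_buff, other._buff);
        std::swap(_len, other._len);
        return *this;
    }

    ~Pair() { delete[] _buff; }

    // Accessors added only so the behavior can be demonstrated.
    int Length() const { return _len; }
    char At(int i) const { return _buff[i]; }

private:
    char* _buff;
    int _len;
};
```

With these in place, `dst = Pair();` in GetNull() no longer leaves two objects owning the same allocation.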

I try to serialize the class into cache in the following manner:

template<>
struct ignite::binary::BinaryType<Pair>
{
    static int32_t GetTypeId()
    {
        return GetBinaryStringHashCode("Pair");
    }

    static void GetTypeName(std::string& name)
    {
        name = "Pair";
    }

    static int32_t GetFieldId(const char* name)
    {
        return GetBinaryStringHashCode(name);
    }

    static bool IsNull(const Pair& obj)
    {
        return false;
    }

    static void GetNull(Pair& dst)
    {
        dst = Pair();
    }

    static void Write(BinaryWriter& writer, const Pair& obj)
    {
        BinaryRawWriter rawWriter = writer.RawWriter();

        int len = obj._len;
        char* buff = obj._buff;

        rawWriter.WriteInt32(len);

        auto binWriter = rawWriter.WriteArray<char>();
        for (int i = 0; i < len; i++) {
            binWriter.Write(buff[i]);
        }
        binWriter.Close();
    }

    static void Read(BinaryReader& reader, Pair& dst)
    {
        BinaryRawReader rawReader = reader.RawReader();

        dst._len = rawReader.ReadInt32();

        dst._buff = new char[dst._len];
        auto binReader = rawReader.ReadArray<char>();
        for (int i = 0; i < dst._len; i++) {
            dst._buff[i] = binReader.GetNext();
        }
    }
};
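For reference, the length-prefixed layout that Write() and Read() produce can be exercised standalone. The sketch below round-trips a char buffer through a flat byte vector that stands in for Ignite's raw writer/reader (ByteSink is my stand-in, not an Ignite API; Ignite's BinaryRawWriter also exposes WriteInt8Array/ReadInt8Array, which may avoid the per-element loop, but verify that against the 2.8.1 headers):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Stand-in for the raw writer/reader pair: a flat byte buffer plus a cursor.
// It only mirrors the length-prefix + payload layout used by Write()/Read().
struct ByteSink {
    std::vector<int8_t> bytes;
    size_t cursor = 0;

    void WriteInt32(int32_t v) {
        const int8_t* p = reinterpret_cast<const int8_t*>(&v);
        bytes.insert(bytes.end(), p, p + sizeof(v));
    }
    int32_t ReadInt32() {
        int32_t v;
        std::memcpy(&v, bytes.data() + cursor, sizeof(v));
        cursor += sizeof(v);
        return v;
    }
    void WriteBytes(const char* src, int32_t len) {
        bytes.insert(bytes.end(), src, src + len);
    }
    void ReadBytes(char* dst, int32_t len) {
        std::memcpy(dst, bytes.data() + cursor, len);
        cursor += len;
    }
};

// Round-trips a buffer the same way Write()/Read() do and reports success.
// The embedded length prefix means the payload may contain NUL bytes freely.
bool RoundTrips(const char* src, int32_t len) {
    ByteSink sink;
    sink.WriteInt32(len);       // length prefix first, as in Write()
    sink.WriteBytes(src, len);  // then the raw bytes

    int32_t readLen = sink.ReadInt32();  // read the length back, as in Read()
    if (readLen != len) return false;
    std::vector<char> restored(readLen);
    sink.ReadBytes(restored.data(), readLen);
    return std::memcmp(restored.data(), src, readLen) == 0;
}
```

The same pattern (int32 length, then the payload bytes) is what the Write/Read pair above encodes, independently of which Ignite array API ends up compiling.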

When I try to compile, I get errors like the following:
[image: image.png]
If I comment out the parts that read and write the array, it compiles
successfully.

Does this happen because I also need to serialize the 'char' type itself? (If
so, how do I do it?)
Am I using ReadArray and WriteArray correctly?
Is there another way of storing the char buffer in the cache?

If someone can provide a working code snippet that would be amazing, but
any help would be appreciated.
Thanks a lot!


Fwd: Exceptions in C++ Ignite Thin Client on process exit

2020-07-27 Thread Dana Milan
Hi,

I am using C++ Ignite Thin Client (2.8.1 version) to store values in cache
on an Ignite local node.
On Ignite log, I get many of the following error messages:

[09:41:13,542][WARNING][grid-nio-worker-client-listener-3-#33][ClientListenerProcessor]
Client disconnected abruptly due to network connection loss or because the
connection was left open on application shutdown. [cls=class
o.a.i.i.util.nio.GridNioException, msg=An existing connection was forcibly
closed by the remote host]
[09:41:14,663][SEVERE][grid-nio-worker-client-listener-1-#31][ClientListenerProcessor]
Failed to process selector key [ses=GridSelectorNioSessionImpl
[worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0
lim=8192 cap=8192], super=AbstractNioClientWorker [idx=1, bytesRcvd=0,
bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
[name=grid-nio-worker-client-listener-1, igniteInstanceName=null,
finished=false, heartbeatTs=1595832071759, hashCode=709103909,
interrupted=false, runner=grid-nio-worker-client-listener-1-#31]]],
writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null,
closeSocket=true, outboundMessagesQueueSizeMetric=null,
super=GridNioSessionImpl [locAddr=/127.0.0.1:0, rmtAddr=/127.0.0.1:54986,
createTime=1595832071696, closeTime=0, bytesSent=4320, bytesRcvd=5881,
bytesSent0=4320, bytesRcvd0=5881, sndSchedTime=1595832071759,
lastSndTime=1595832071759, lastRcvTime=1595832071759, readsPaused=false,
filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter,
GridNioCodecFilter [parser=ClientListenerBufferedParser, directMode=false],
SSL filter], accepted=true, markedForClose=false]]]
java.io.IOException: An existing connection was forcibly closed by the
remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(Unknown Source)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.read(Unknown Source)
at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
at
org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1162)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2449)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2216)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1857)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Unknown Source)

To me it looks a lot like this problem:
http://apache-ignite-users.70518.x6.nabble.com/exceptions-in-Ignite-node-when-a-thin-client-process-ends-td28970.html

though that issue is supposed to be fixed in Ignite 2.8, so I'm not sure it's
the same one.

Is there a way to properly disconnect the client before the process dies when
using the C++ API?
Is this a real error, unlike in the question referred to above? If not, is
there a way to stop these messages from being printed?
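On the disconnect question: in the C++ thin client the connection is normally torn down when the client object is destroyed, so the usual remedy is to make sure the client goes out of scope before the process exits, rather than dying with the socket open. The idea, sketched with a hypothetical stand-in connection type (FakeClient is not an Ignite API; the assumption is that the real IgniteClient likewise closes its socket in its destructor):

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Hypothetical stand-in for a client connection. It records its lifecycle in
// a log so the RAII ordering can be observed.
class FakeClient {
public:
    explicit FakeClient(std::vector<std::string>& log) : _log(log) {
        _log.push_back("connect");
    }
    // Graceful close in the destructor, instead of an abrupt socket reset.
    ~FakeClient() { _log.push_back("close"); }
private:
    std::vector<std::string>& _log;
};

// Doing the cache work inside an inner scope guarantees the destructor (and
// thus the close handshake) runs before anything that follows the scope,
// e.g. process exit.
void RunWithClient(std::vector<std::string>& log,
                   const std::function<void()>& work) {
    {
        FakeClient client(log);
        work();
    }  // client destroyed here: connection closed cleanly
    log.push_back("exit");
}
```

If the client is instead held in a global or leaked pointer, its destructor never runs before exit, which would produce exactly the "connection was left open on application shutdown" warnings in the log above.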

Thank you for your help!