>> Why can't we use GridNioServer for java thin clients?
Yes, we can. Despite the naming, it can be used as a client (set the port
to -1), but it doesn't have the same set of advantages as Netty. Netty has
much better support (performance) for native transports and SSL than the
default Java NIO.

But its API is much, much worse.

If our goal is to keep the thin client in the core module under any
circumstances, then this is the only choice.
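
For what it's worth, the resource-usage point raised below in the thread
(sharing one EventLoopGroup across all connections of an IgniteClient,
instead of one receiver thread per TcpClientChannel) doesn't strictly
require Netty -- the JDK's asynchronous channels can share a fixed pool
too. A minimal self-contained sketch of the idea (class and thread names
here are made up for illustration, this is not Ignite code):

```java
import java.net.InetSocketAddress;
import java.nio.channels.AsynchronousChannelGroup;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class SharedGroupDemo {
    public static void main(String[] args) throws Exception {
        // One small named pool shared by every client channel -- the same idea
        // as sharing a single Netty EventLoopGroup within one IgniteClient.
        AtomicInteger idx = new AtomicInteger();
        AsynchronousChannelGroup group = AsynchronousChannelGroup.withFixedThreadPool(
            2, r -> new Thread(r, "shared-io-" + idx.getAndIncrement()));

        // Loopback server so the sketch is self-contained.
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel
            .open().bind(new InetSocketAddress("127.0.0.1", 0));
        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            @Override public void completed(AsynchronousSocketChannel ch, Void a) {
                server.accept(null, this); // keep accepting
            }
            @Override public void failed(Throwable t, Void a) { /* server closed */ }
        });

        // Open several connections; all of them are served by the same 2 threads.
        for (int i = 0; i < 3; i++) {
            AsynchronousSocketChannel ch = AsynchronousSocketChannel.open(group);
            CountDownLatch connected = new CountDownLatch(1);
            ch.connect((InetSocketAddress) server.getLocalAddress(), null,
                new CompletionHandler<Void, Void>() {
                    @Override public void completed(Void r, Void a) {
                        System.out.println("connected on " + Thread.currentThread().getName());
                        connected.countDown();
                    }
                    @Override public void failed(Throwable t, Void a) {
                        connected.countDown();
                    }
                });
            connected.await();
            ch.close();
        }

        server.close();
        group.shutdownNow();
    }
}
```

All three connections report a shared-io-* thread, i.e. N connections are
multiplexed over the same two-thread pool -- the economy a shared
EventLoopGroup gives, just with a worse API.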

But let's look, for example, at Lettuce (a Netty-based async Redis
client) [1]:
1. It supports reactive streams (additional module)
2. It supports Kotlin coroutines (additional module)
I hardly believe that we could support this in our core module.

Why not consider separation? Why should a user of our thin client have
megabytes of unnecessary bytecode on his classpath?


[1] -- https://lettuce.io/core/release/reference/index.html
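
By the way, the transaction concern mentioned earlier in this thread
(channel.write being async, breaking the logic around
TcpClientCache#writeCacheInfo) is easy to demonstrate with plain JDK
async channels, no Netty required: the write's completion handler runs
on an I/O pool thread, not on the caller's thread, so any state tied to
the calling thread is not visible where the write actually finishes. A
self-contained sketch (class name is made up for illustration):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

public class AsyncWriteThreadDemo {
    public static void main(String[] args) throws Exception {
        // Loopback server that just accepts one connection.
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel
            .open().bind(new InetSocketAddress("127.0.0.1", 0));
        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            @Override public void completed(AsynchronousSocketChannel ch, Void a) { /* keep open */ }
            @Override public void failed(Throwable t, Void a) { /* server closed */ }
        });

        AsynchronousSocketChannel client = AsynchronousSocketChannel.open();
        client.connect((InetSocketAddress) server.getLocalAddress()).get();

        AtomicReference<String> completionThread = new AtomicReference<>();
        CountDownLatch done = new CountDownLatch(1);

        // The write completes on an I/O pool thread, not the caller's thread,
        // so logic that assumes the write happens on the user thread (e.g.
        // thread-local tx context) would not see its state here.
        client.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)), null,
            new CompletionHandler<Integer, Void>() {
                @Override public void completed(Integer bytes, Void a) {
                    completionThread.set(Thread.currentThread().getName());
                    done.countDown();
                }
                @Override public void failed(Throwable t, Void a) { done.countDown(); }
            });

        done.await();
        System.out.println("caller thread:     " + Thread.currentThread().getName());
        System.out.println("completion thread: " + completionThread.get());

        client.close();
        server.close();
    }
}
```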

On Mon, Oct 19, 2020 at 10:06, Alex Plehanov <plehanov.a...@gmail.com> wrote:

> Pavel,
>
> Why can't we use GridNioServer for java thin clients?
> It has the same advantages as Netty (future-based async API, SSL, etc.) but
> without an extra dependency.
> GridClient (control.sh), for example, uses GridNioServer for communication.
>
> On Sat, Oct 17, 2020 at 11:21, Ivan Daschinsky <ivanda...@gmail.com> wrote:
>
> > Hi.
> > >> Potentially reduced resource usage - share EventLoopGroup across all
> > >> connections within one IgniteClient.
> > Not potentially, definitely. The current approach (one receiver thread per
> > TcpClientChannel and a shared FJP for continuations) requires too many
> > threads.
> > When TcpClientChannel is the only one, it's ok. But if we use multiple
> > addresses, things become worse.
> >
> > >> The obvious downside is an extra dependency in the core module.
> > There is another downside -- we should rework our transaction API a
> > little bit. (Actually, in Netty the socket write is performed in another
> > thread (channel.write is async), and the current tx logic will not work:
> > org.apache.ignite.internal.client.thin.TcpClientCache#writeCacheInfo.)
> >
> > A little bit of offtopic.
> > I suppose that the java thin client (and other thin clients) should be
> > separated from the main ignite repo and have a separate release cycle.
> > For example, the java thin client depends on the default binary
> > protocol implementation, which is notorious for heavy usage of internal
> > JDK APIs; this, for example, prevents usage of our thin client in a
> > GraalVM native image.
> >
> >
> > On Fri, Oct 16, 2020 at 20:00, Pavel Tupitsyn <ptupit...@apache.org> wrote:
> >
> > > Igniters,
> > >
> > > I'm working on IEP-51 [1] to make the Java thin client truly async
> > > and make sure user threads are never blocked
> > > (right now socket writes are performed from user threads).
> > >
> > > I've investigated potential approaches and came to the conclusion
> > > that Netty [2] is our best bet.
> > > - Nice Future-based async API => will greatly reduce our code complexity
> > >   and remove manual thread management
> > > - Potentially reduced resource usage - share EventLoopGroup across all
> > > connections within one IgniteClient
> > > - SSL is easy to use
> > > - Proven performance and reliability
> > >
> > > Other approaches, like AsynchronousSocketChannel or selectors, seem to be
> > > too complicated, especially when SSL comes into play.
> > > We should focus on Ignite-specific work instead of spending time on
> > > reinventing async IO.
> > >
> > > The obvious downside is an extra dependency in the core module.
> > > However, I heard some discussions about using Netty for GridNioServer in
> > > the future.
> > >
> > > Let me know your thoughts.
> > >
> > > Pavel
> > >
> > > [1]
> > > https://cwiki.apache.org/confluence/display/IGNITE/IEP-51%3A+Java+Thin+Client+Async+API
> > > [2] https://netty.io
> > >
> >
> >
> > --
> > Sincerely yours, Ivan Daschinskiy
> >
>


-- 
Sincerely yours, Ivan Daschinskiy
