Re: [Vysper] Server Cache requirements

2009-08-04 Thread Bernd Fondermann
On Tue, Aug 4, 2009 at 06:24, Ashishpaliwalash...@gmail.com wrote:
 While implementing the Presence Cache for Vysper, I was wondering: is this
 the only cache needed by the server?

 So, if other modules need caching, they may have to re-implement a
 cache like the Presence Cache.

 How about having the Cache implementation as part of the server, so that
 required modules can either extend it or use the global implementation.

 wdyt?

There are a few things the server needs to have quick access to:
rosters, active sessions, resource ids, etc.
Some XMPP extensions might need this, too. They will have very
specific requirements, though.

Caching is an optimization strategy to improve throughput and scalability.
Currently, we have no idea where we need to optimize, do we?

What do you think would be the benefit of introducing such a global
implementation? What would be its features and benefits?

  Bernd


 --
 thanks
 ashish

 Blog: http://www.ashishpaliwal.com/blog
 My Photo Galleries: http://www.pbase.com/ashishpaliwal



Re: [Vysper] Server Cache requirements

2009-08-04 Thread Ashish
On Tue, Aug 4, 2009 at 1:13 PM, Bernd
Fondermannbernd.fonderm...@googlemail.com wrote:
 On Tue, Aug 4, 2009 at 06:24, Ashishpaliwalash...@gmail.com wrote:
 While implementing the Presence Cache for Vysper, I was wondering: is this
 the only cache needed by the server?

 So, if other modules need caching, they may have to re-implement a
 cache like the Presence Cache.

 How about having the Cache implementation as part of the server, so that
 required modules can either extend it or use the global implementation.

 wdyt?

 There are a few things the server needs to have quick access to:
 rosters, active sessions, resource ids, etc.
 Some XMPP extensions might need this, too. They will have very
 specific requirements, though.

 Caching is an optimization strategy to improve throughput and scalability.
 Currently, we have no idea where we need to optimize, do we?

Yup, it is. However, we have to start thinking about this. We know how
an XMPP server works, so we can at least start putting our thoughts in place.


 What do you think would be the benefit of introducing such a global
 implementation? What would be its features and benefits?

Let me simplify it a bit. We have a cache implementation. How about
having that same implementation used by all? Modules like presence,
roster, etc. can just customize the way they see the cache.

Here is what I feel the requirements should be:
1. CacheProvider - an SPI-like mechanism to plug in custom caching
implementations. Our server users can choose what works best for them.
2. Modules don't rewrite specific cache implementations, unless a
situation demands it. They rely on the global implementation. Please
note that a global implementation doesn't mean a single cache. There can
be multiple caches, with or without replication. Even as of today we
use two cache instances for Presence, one based on Entity and the other
based on JID.

The benefit is code reuse and ease of maintenance :-)
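To make the idea concrete, here is a minimal, self-contained sketch of what such an SPI could look like. All names here (CacheProvider, MapCacheProvider, Cache) are hypothetical, not existing Vysper classes; a replicated provider could be plugged in behind the same interface:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch only - these are not existing Vysper classes.
interface Cache<K, V> {
    void put(K key, V value);
    V get(K key);
}

interface CacheProvider {
    <K, V> Cache<K, V> createCache(String name);
}

// Default in-memory provider; a deployment could swap in a replicated one.
class MapCacheProvider implements CacheProvider {
    public <K, V> Cache<K, V> createCache(String name) {
        final Map<K, V> map = new ConcurrentHashMap<K, V>();
        return new Cache<K, V>() {
            public void put(K key, V value) { map.put(key, value); }
            public V get(K key) { return map.get(key); }
        };
    }
}

public class CacheProviderDemo {
    public static void main(String[] args) {
        CacheProvider provider = new MapCacheProvider();
        // Two independent caches, as with the Entity- and JID-based
        // presence caches mentioned above.
        Cache<String, String> byJid = provider.createCache("presence-by-jid");
        byJid.put("user@vysper.org/res", "available");
        System.out.println(byJid.get("user@vysper.org/res")); // available
    }
}
```

Modules would only depend on the two interfaces, so the choice of backing store becomes a deployment decision.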

-- 
thanks
ashish


Re: [Vysper] Server Cache requirements

2009-08-04 Thread Bernd Fondermann
Ashish wrote:
 On Tue, Aug 4, 2009 at 1:13 PM, Bernd
 Fondermannbernd.fonderm...@googlemail.com wrote:
 On Tue, Aug 4, 2009 at 06:24, Ashishpaliwalash...@gmail.com wrote:
 While implementing the Presence Cache for Vysper, I was wondering: is this
 the only cache needed by the server?

 So, if other modules need caching, they may have to re-implement a
 cache like the Presence Cache.

 How about having the Cache implementation as part of the server, so that
 required modules can either extend it or use the global implementation.

 wdyt?
 There are a few things the server needs to have quick access to:
 rosters, active sessions, resource ids, etc.
 Some XMPP extensions might need this, too. They will have very
 specific requirements, though.

 Caching is an optimization strategy to improve throughput and scalability.
 Currently, we have no idea where we need to optimize, do we?
 
 Yup, it is. However, we have to start thinking about this. We know how
 an XMPP server works, so we can at least start putting our thoughts in place.
 
 What do you think would be the benefit of introducing such a global
 implementation? What would be its features and benefits?
 
 Let me simplify it a bit. We have a cache implementation. How about
 having that same implementation used by all? Modules like presence,
 roster, etc. can just customize the way they see the cache.
 
 Here is what I feel the requirements should be:
 1. CacheProvider - an SPI-like mechanism to plug in custom caching
 implementations. Our server users can choose what works best for them.
 2. Modules don't rewrite specific cache implementations, unless a
 situation demands it. They rely on the global implementation. Please
 note that a global implementation doesn't mean a single cache. There can
 be multiple caches, with or without replication. Even as of today we
 use two cache instances for Presence, one based on Entity and the other
 based on JID.
 
 The benefit is code reuse and ease of maintenance :-)

You mean you want to share code and build something like a cache
abstraction layer? I found the current implementations to be quite
lightweight and don't yet see the benefit.

Do you have some code, or can you outline what would be shared between
cache implementations?

  Bernd


Re: [Vysper] Server Cache requirements

2009-08-04 Thread Ashish
 Caching is an optimization strategy to improve throughput and scalability.
 Currently, we have no idea where we need to optimize, do we?

 Yup, it is. However, we have to start thinking about this. We know how
 an XMPP server works, so we can at least start putting our thoughts in place.

 What do you think would be the benefit of introducing such a global
 implementation? What would be its features and benefits?

 Let me simplify it a bit. We have a cache implementation. How about
 having that same implementation used by all? Modules like presence,
 roster, etc. can just customize the way they see the cache.

 Here is what I feel the requirements should be:
 1. CacheProvider - an SPI-like mechanism to plug in custom caching
 implementations. Our server users can choose what works best for them.
 2. Modules don't rewrite specific cache implementations, unless a
 situation demands it. They rely on the global implementation. Please
 note that a global implementation doesn't mean a single cache. There can
 be multiple caches, with or without replication. Even as of today we
 use two cache instances for Presence, one based on Entity and the other
 based on JID.

 The benefit is code reuse and ease of maintenance :-)

 You mean you want to share code and build something like a cache
 abstraction layer? I found the current implementations to be quite
 lightweight and don't yet see the benefit.

Not yet, but I shall share the details as soon as I have a more concrete design.

-- 
thanks
ashish

Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal


[jira] Assigned: (VYSPER-91) Implement Jabber Component Protocol (XEP-0114)

2009-08-04 Thread Bernd Fondermann (JIRA)

 [ 
https://issues.apache.org/jira/browse/VYSPER-91?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bernd Fondermann reassigned VYSPER-91:
--

Assignee: (was: Bernd Fondermann)

 Implement Jabber Component Protocol (XEP-0114)
 --

 Key: VYSPER-91
 URL: https://issues.apache.org/jira/browse/VYSPER-91
 Project: VYSPER
  Issue Type: New Feature
  Components: extension
Reporter: Bernd Fondermann
Priority: Minor

 See 
   http://xmpp.org/extensions/xep-0114.html 
 A related XEP is
   http://xmpp.org/extensions/xep-0225.html
 Components are trusted functional extensions to a server, residing on their 
 own subdomain.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (VYSPER-169) Implement Ad-Hoc Commands XEP-0050

2009-08-04 Thread Bernd Fondermann (JIRA)
Implement Ad-Hoc Commands XEP-0050
--

 Key: VYSPER-169
 URL: https://issues.apache.org/jira/browse/VYSPER-169
 Project: VYSPER
  Issue Type: New Feature
  Components: extension
Reporter: Bernd Fondermann


Ad-Hoc Commands specifies a framework for invoking server-side functions.
see http://xmpp.org/extensions/xep-0050.html

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (VYSPER-170) Implement Service Administration XEP-0133

2009-08-04 Thread Bernd Fondermann (JIRA)
Implement Service Administration XEP-0133
-

 Key: VYSPER-170
 URL: https://issues.apache.org/jira/browse/VYSPER-170
 Project: VYSPER
  Issue Type: New Feature
  Components: extension
Reporter: Bernd Fondermann


Service Administration specifies ad-hoc commands to administer the server (manage 
users, sessions, rosters, etc.).
see http://xmpp.org/extensions/xep-0133.html

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (VYSPER-10) Implement Service Discovery XEP-0030

2009-08-04 Thread Bernd Fondermann (JIRA)

 [ 
https://issues.apache.org/jira/browse/VYSPER-10?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bernd Fondermann resolved VYSPER-10.


Resolution: Fixed

Implemented and in use.

 Implement Service Discovery XEP-0030
 

 Key: VYSPER-10
 URL: https://issues.apache.org/jira/browse/VYSPER-10
 Project: VYSPER
  Issue Type: Bug
  Components: core protocol, extension
Reporter: Bernd Fondermann
Assignee: Bernd Fondermann

 Service Discovery is a central XMPP feature specified at 
 http://www.xmpp.org/extensions/xep-0030.html

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (DIRMINA-733) The profile method in the ProfilerTimerFilter can cause more methods to be profiled than intended.

2009-08-04 Thread Christopher Popp (JIRA)
The profile method in the ProfilerTimerFilter can cause more methods to be 
profiled than intended.
---

 Key: DIRMINA-733
 URL: https://issues.apache.org/jira/browse/DIRMINA-733
 Project: MINA
  Issue Type: Bug
  Components: Filter
Affects Versions: 2.0.0-M6
 Environment: Mina M6 and verified in the trunk.
Reporter: Christopher Popp
Priority: Minor


Issue affects the ProfilerTimerFilter.

Some of the case statements in the profile method are missing a return/break.  
This makes it possible for profiling to be enabled for methods other than 
the one specified. See the method below:

public void profile(IoEventType type) {
    switch (type) {
        case MESSAGE_RECEIVED :
            profileMessageReceived = true;

            if (messageReceivedTimerWorker == null) {
                messageReceivedTimerWorker = new TimerWorker();
            }

            return;

        case MESSAGE_SENT :
            profileMessageSent = true;

            if (messageSentTimerWorker == null) {
                messageSentTimerWorker = new TimerWorker();
            }

            return;

        case SESSION_CREATED :
            profileSessionCreated = true;

            if (sessionCreatedTimerWorker == null) {
                sessionCreatedTimerWorker = new TimerWorker();
            }
            // no return here: execution falls through into the cases below

        case SESSION_OPENED :
            profileSessionOpened = true;

            if (sessionOpenedTimerWorker == null) {
                sessionOpenedTimerWorker = new TimerWorker();
            }
            // falls through

        case SESSION_IDLE :
            profileSessionIdle = true;

            if (sessionIdleTimerWorker == null) {
                sessionIdleTimerWorker = new TimerWorker();
            }
            // falls through

        case SESSION_CLOSED :
            profileSessionClosed = true;

            if (sessionClosedTimerWorker == null) {
                sessionClosedTimerWorker = new TimerWorker();
            }
    }
}
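For readers unfamiliar with this failure mode, here is a self-contained sketch (hypothetical names, not the actual MINA classes) demonstrating how a missing return makes the later cases execute as well:

```java
public class FallThroughDemo {
    enum IoEventType { SESSION_CREATED, SESSION_OPENED, SESSION_IDLE }

    static boolean created, opened, idle;

    // Mirrors the bug: no return/break after SESSION_CREATED.
    static void profile(IoEventType type) {
        switch (type) {
            case SESSION_CREATED:
                created = true;
                // missing return: falls through
            case SESSION_OPENED:
                opened = true;
                // missing return: falls through
            case SESSION_IDLE:
                idle = true;
        }
    }

    public static void main(String[] args) {
        profile(IoEventType.SESSION_CREATED);
        // Only SESSION_CREATED was requested, but all three flags are set.
        System.out.println(created + " " + opened + " " + idle); // true true true
    }
}
```

Adding a `return` (or `break`) at the end of each case restores the intended one-event-per-call behavior.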


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



FTPS with passive mode is slow

2009-08-04 Thread Sai Pullabhotla
I've been noticing that the passive data connections are taking quite
some time when using SSL. I finally got some time to look into this
and noticed the following while debugging through the code. This issue
might have been introduced with the fix we put in for FTPSERVER-241.

The code that wraps the plain socket into an SSL socket uses the
following line:

SSLSocket sslSocket = (SSLSocket) ssocketFactory.createSocket(
        serverSocket,
        serverSocket.getInetAddress().getHostName(),
        serverSocket.getPort(), true);

Based on the JavaDocs, InetAddress.getHostName() performs a
reverse name lookup, which was taking about 1.5 seconds on every
system on our network. I'm not sure if this is an issue with the way
our network is set up. Someone please let me know if this is in fact an
issue with our network.

We are not seeing this lag when the client and server are running on the
same system. Things work fast in that case, probably because the
system already knows about itself.

Just to try it out, I changed the code to simply use the IP address
rather than the host name, and I was able to get rid of the lag;
things seem to be working much faster. Below is the change to the
above line:

SSLSocket sslSocket = (SSLSocket) ssocketFactory.createSocket(
        serverSocket,
        serverSocket.getInetAddress().getHostAddress(),
        serverSocket.getPort(), true);

Could someone test the current code base with the client and server
running on different systems and tell me if they notice the lag when
creating the passive data connection? If this can be reproduced in one
of your environments, we should probably apply the above fix. I don't
think this suggested fix should cause any other issues, do you?
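As a quick illustration of why the change helps: getHostAddress() only formats the raw address bytes, while getHostName() can hit the resolver. This sketch uses only the standard library:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ReverseLookupDemo {
    // Formats the raw bytes into dotted-quad form; never touches DNS.
    static String ip(byte[] raw) {
        try {
            return InetAddress.getByAddress(raw).getHostAddress();
        } catch (UnknownHostException e) { // only thrown for bad array lengths
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) throws UnknownHostException {
        InetAddress addr = InetAddress.getByAddress(new byte[] {127, 0, 0, 1});

        // getHostAddress() just formats the bytes - no network I/O, no lag.
        System.out.println(addr.getHostAddress()); // 127.0.0.1

        // getHostName() may perform a reverse DNS lookup - the likely
        // source of the ~1.5 s delay seen on the passive data connection.
        System.out.println(addr.getHostName());
    }
}
```

So passing the result of getHostAddress() into createSocket() sidesteps the reverse lookup entirely.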

Regards,

Sai Pullabhotla
www.jMethods.com


Re: FTPS with passive mode is slow

2009-08-04 Thread Niklas Gustavsson
I believe this problem has been reported multiple times. Please open a
JIRA and apply the patch; it makes perfect sense.

/niklas

On Tue, Aug 4, 2009 at 10:02 PM, Sai
Pullabhotlasai.pullabho...@jmethods.com wrote:
 I've been noticing that the passive data connections are taking quite
 some time when using SSL. I finally got some time to look into this
 and noticed the following while debugging through the code. This issue
 might have been introduced with the fix we put in for FTPSERVER-241.

 The code that wraps the plain socket into an SSL socket uses the
 following line:

 SSLSocket sslSocket = (SSLSocket) ssocketFactory.createSocket(
         serverSocket,
         serverSocket.getInetAddress().getHostName(),
         serverSocket.getPort(), true);

 Based on the JavaDocs, InetAddress.getHostName() performs a
 reverse name lookup, which was taking about 1.5 seconds on every
 system on our network. I'm not sure if this is an issue with the way
 our network is set up. Someone please let me know if this is in fact an
 issue with our network.

 We are not seeing this lag when the client and server are running on the
 same system. Things work fast in that case, probably because the
 system already knows about itself.

 Just to try it out, I changed the code to simply use the IP address
 rather than the host name, and I was able to get rid of the lag;
 things seem to be working much faster. Below is the change to the
 above line:

 SSLSocket sslSocket = (SSLSocket) ssocketFactory.createSocket(
         serverSocket,
         serverSocket.getInetAddress().getHostAddress(),
         serverSocket.getPort(), true);

 Could someone test the current code base with the client and server
 running on different systems and tell me if they notice the lag when
 creating the passive data connection? If this can be reproduced in one
 of your environments, we should probably apply the above fix. I don't
 think this suggested fix should cause any other issues, do you?

 Regards,

 Sai Pullabhotla
 www.jMethods.com



[jira] Closed: (VYSPER-103) Discovering Rooms

2009-08-04 Thread Niklas Gustavsson (JIRA)

 [ 
https://issues.apache.org/jira/browse/VYSPER-103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niklas Gustavsson closed VYSPER-103.


Resolution: Fixed

Implemented in rev 800979

 Discovering Rooms
 -

 Key: VYSPER-103
 URL: https://issues.apache.org/jira/browse/VYSPER-103
 Project: VYSPER
  Issue Type: Sub-task
  Components: XEP-0045 MUC
Reporter: Niklas Gustavsson



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (DIRMINA-734) Regression with flushing in MINA 2.0.0-M7 trunk

2009-08-04 Thread Serge Baranov (JIRA)
Regression with flushing in MINA 2.0.0-M7 trunk
---

 Key: DIRMINA-734
 URL: https://issues.apache.org/jira/browse/DIRMINA-734
 Project: MINA
  Issue Type: Bug
  Components: Core
Affects Versions: 2.0.0-M7
Reporter: Serge Baranov


It looks like the current trunk of MINA in the repository has a bug.
Updating from the M7 build of 06.06.2009 to the current trunk broke
some of our tests.

I built it today and ran our application tests; some of them failed.

Reverting back to the 06.06.2009 build fixed the problem.

At first look it appears that session.close(false) behaves like
session.close(true); as a result, some messages are truncated (not
being flushed on session.close(false)).

If I comment out the call to session.close(false) (i.e. not closing the
session at all when needed), the problem goes away.

This behavior is inconsistent; the test fails/succeeds randomly. On the
first run it may pass, on the second run it may fail. It fails in about
10% of runs. The part which is not flushed is also random. This could be
a race condition somewhere in MINA introduced in the last 2 months.

I can't provide a test case yet, but our application is a kind of
proxy, so it behaves like this:

1. the client connects to the app and sends a request
2. the app connects to another server and sends a request
3. the app gets a reply from the server and sends it back to the client
4. when the other server closes the connection, the app closes the
   connection with the client using session.close(false)

The app may still be flushing data to the client when closing the
connection. As I said, it worked fine with previous MINA versions and
broke only recently.

We are using OrderedThreadPoolExecutor and CumulativeProtocolDecoder
if that matters.
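For clarity, here is a toy model of the close(false) vs. close(true) contract as described above (a sketch of the expected semantics only, not MINA's actual implementation):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of the close(boolean) contract - not MINA code.
public class CloseSemanticsDemo {
    static class Session {
        final Queue<String> writeQueue = new ArrayDeque<String>();
        final StringBuilder wire = new StringBuilder(); // what the peer sees
        boolean closed;

        void write(String msg) { writeQueue.add(msg); }

        // close(false): flush pending writes first; close(true): drop them.
        void close(boolean immediately) {
            if (!immediately) {
                while (!writeQueue.isEmpty()) {
                    wire.append(writeQueue.poll());
                }
            }
            writeQueue.clear();
            closed = true;
        }
    }

    public static void main(String[] args) {
        Session graceful = new Session();
        graceful.write("reply");
        graceful.close(false);             // expected: "reply" reaches the peer
        System.out.println(graceful.wire); // reply

        Session immediate = new Session();
        immediate.write("reply");
        immediate.close(true);             // pending data is discarded
        System.out.println(immediate.wire.length()); // 0
    }
}
```

The reported bug is that the real close(false) sometimes behaves like the second case, truncating the tail of the flush.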

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.