[jira] [Created] (FELIX-4662) WebConsole Xdialog javascript function is not working correctly
Valentin Valchev created FELIX-4662: --- Summary: WebConsole Xdialog javascript function is not working correctly Key: FELIX-4662 URL: https://issues.apache.org/jira/browse/FELIX-4662 Project: Felix Issue Type: Bug Components: Web Console Affects Versions: webconsole-4.2.2 Reporter: Valentin Valchev Assignee: Valentin Valchev Priority: Blocker When jQuery and jQuery UI were updated, it seems nobody tested the Xdialog function. The problem is that it tries to destroy the dialog without checking whether the dialog object has already been created. Previously that seemed to be ignored, but now jQuery UI will throw an exception, and that will break the normal operation of the JavaScript code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (FELIX-4660) Security problem in WebConsoleUtil.getParameter() method
[ https://issues.apache.org/jira/browse/FELIX-4660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Valentin Valchev resolved FELIX-4660. - Resolution: Fixed Fix Version/s: webconsole-4.2.4 Assignee: Valentin Valchev Fixed in rev. 1629129 Security problem in WebConsoleUtil.getParameter() method Key: FELIX-4660 URL: https://issues.apache.org/jira/browse/FELIX-4660 Project: Felix Issue Type: Bug Components: Web Console Affects Versions: webconsole-4.2.2 Reporter: Valentin Valchev Assignee: Valentin Valchev Fix For: webconsole-4.2.4 The mentioned method is used to get simple parameters as well as FileItems, if the request is multipart. If a big file has been uploaded, Apache Commons FileUpload will store the file in a temporary folder instead of keeping it in memory. That folder is specified by the system property 'java.io.tmpdir'. When running with security, the file upload will require the bundle to have the following permission: (java.util.PropertyPermission java.io.tmpdir read) But in order to read/write/delete in that folder the bundle will also require (java.io.FilePermission ALL FILES read,write,delete) Because we don't know where the file will be stored and cannot express that using system properties, we would need to give the bundle permission to read any file on the system, and that is, well... bad. In OSGi, however, it is guaranteed that the bundle has permission to read/write/delete files in its data folder. So all we need is to set the repository path:
{code}
DiskFileItemFactory factory = new DiskFileItemFactory();
factory.setSizeThreshold( 256000 );
factory.setRepository( dataFolder ); // a folder inside the bundle's data area
{code}
To keep compatibility with existing version(s) I suggest that we add a new constant: AbstractWebConsolePlugin.ATTR_FILEUPLOAD_DIR The value of that attribute is a File object - a folder, which plugins obtain using BundleContext.getDataFile(). If the attribute is set, the getParameter() method will set that file as the repository of the DiskFileItemFactory.
That wouldn't require any changes to the API, though any plugins that use FileUpload are recommended to update their code and set that attribute.
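A minimal sketch of how a plugin might use the proposed attribute. The constant's string value, and the plain map standing in for the servlet request, are assumptions for illustration only; real code would call request.setAttribute() with a folder obtained from BundleContext.getDataFile().

```java
import java.io.File;
import java.util.HashMap;
import java.util.Map;

public class UploadDirExample {
    // Proposed constant; the string value here is hypothetical
    static final String ATTR_FILEUPLOAD_DIR = "org.apache.felix.webconsole.fileupload.dir";

    public static void main(String[] args) throws Exception {
        // Stand-in for the servlet request's attribute map; a real plugin
        // would use request.setAttribute(ATTR_FILEUPLOAD_DIR, dir) with a
        // directory returned by BundleContext.getDataFile("upload")
        Map<String, Object> requestAttributes = new HashMap<>();
        File dir = java.nio.file.Files.createTempDirectory("upload").toFile();
        requestAttributes.put(ATTR_FILEUPLOAD_DIR, dir);

        // getParameter() would then pass this File to
        // DiskFileItemFactory.setRepository() instead of using java.io.tmpdir
        System.out.println(requestAttributes.get(ATTR_FILEUPLOAD_DIR) instanceof File);
    }
}
```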
[jira] [Resolved] (FELIX-4662) WebConsole Xdialog javascript function is not working correctly
[ https://issues.apache.org/jira/browse/FELIX-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Valentin Valchev resolved FELIX-4662. - Resolution: Fixed Fix Version/s: webconsole-4.2.4 Fixed in rev. 1629131 WebConsole Xdialog javascript function is not working correctly --- Key: FELIX-4662 URL: https://issues.apache.org/jira/browse/FELIX-4662 Project: Felix Issue Type: Bug Components: Web Console Affects Versions: webconsole-4.2.2 Reporter: Valentin Valchev Assignee: Valentin Valchev Priority: Blocker Fix For: webconsole-4.2.4
[jira] [Created] (FELIX-4663) Potential memory leak in AsyncDeliveryTask
Hartmut Lang created FELIX-4663: --- Summary: Potential memory leak in AsyncDeliveryTask Key: FELIX-4663 URL: https://issues.apache.org/jira/browse/FELIX-4663 Project: Felix Issue Type: Bug Components: Event Admin Affects Versions: eventadmin-1.3.2 Reporter: Hartmut Lang EventAdmin 1.3.2 can create an OutOfMemory condition caused by undelivered async events. The problem can occur if an interrupted thread issues an async event (e.g. a log event). In EventAdmin 1.3.2 the async delivery uses DefaultThreadPool, which is based on PooledExecutor. If the already-interrupted thread enters the execute method of PooledExecutor, an InterruptedException is thrown before the TaskExecuter is added to the thread pool. This exception is caught (not handled, only logged) in the DefaultThreadPool. As a result the TaskExecuter is not scheduled in the thread pool but is still part of m_running_threads. All new events are added to the pool of that TaskExecuter, accumulating in an ever-growing LinkedList. The TaskExecuter is never started again. Memory is leaking. It seems that 1.4.x is not vulnerable to interrupted threads, but the same catch-and-not-handle block is used in 1.4.x.
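The failure mode described above can be reproduced in miniature with java.util.concurrent standing in for the old PooledExecutor. Here the submission fails with a RejectedExecutionException rather than an InterruptedException, and all names are illustrative; the point is the same: the task is registered in a bookkeeping map first, submission fails, and catch-and-only-log leaves the stale entry behind.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LeakDemo {
    public static void main(String[] args) {
        Map<Thread, Runnable> running = new HashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(1);
        pool.shutdown(); // make execute() fail, like the failed submission in the report

        Runnable task = () -> {};
        running.put(Thread.currentThread(), task); // registered before submission
        try {
            pool.execute(task);
        } catch (Throwable t) {
            // "caught, not handled, only logged": the map entry is never
            // cleaned up, so later events pile onto a task that never runs
            System.out.println("swallowed: " + t.getClass().getSimpleName());
        }
        System.out.println("stale entries: " + running.size());
    }
}
```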
[jira] [Commented] (FELIX-4663) Potential memory leak in AsyncDeliveryTask
[ https://issues.apache.org/jira/browse/FELIX-4663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14157815#comment-14157815 ] Hartmut Lang commented on FELIX-4663: - The critical code parts in 1.3.2 are:
{code:title=AsyncDeliverTask.java}
final Thread currentThread = Thread.currentThread();
TaskExecuter executer = null;
synchronized ( m_running_threads )
{
    final TaskExecuter runningExecutor = (TaskExecuter) m_running_threads.get(currentThread);
    if ( runningExecutor != null )
    {
        runningExecutor.add(tasks, event);
    }
    else
    {
        executer = new TaskExecuter( tasks, event, currentThread );
        m_running_threads.put(currentThread, executer);
    }
}
if ( executer != null )
{
    m_pool.executeTask(executer);
}
{code}
and
{code:title=DefaultThreadPool.java}
public void executeTask(final Runnable task)
{
    try
    {
        super.execute(task);
    }
    catch (final Throwable t)
    {
        LogWrapper.getLogger().log(
            LogWrapper.LOG_WARNING,
            "Exception: " + t, t);
        // ignore this
    }
}
{code}
Potential memory leak in AsyncDeliveryTask -- Key: FELIX-4663 URL: https://issues.apache.org/jira/browse/FELIX-4663 Project: Felix Issue Type: Bug Components: Event Admin Affects Versions: eventadmin-1.3.2 Reporter: Hartmut Lang
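One possible fix for the quoted pattern, sketched with standard library types rather than the Felix classes: roll back the bookkeeping entry when the pool refuses the task, so later events cannot queue onto an executer that will never run. This is an illustration of the idea, not the committed change.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RollbackDemo {
    static final Map<Thread, Runnable> running = new HashMap<>();

    static void deliver(ExecutorService pool, Runnable task) {
        Thread current = Thread.currentThread();
        synchronized (running) {
            running.put(current, task); // register, as AsyncDeliverTask does
        }
        try {
            pool.execute(task);
        } catch (Throwable t) {
            synchronized (running) {
                // roll back the registration so no stale entry remains
                running.remove(current);
            }
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        pool.shutdown(); // force the submission to fail, as in the report
        deliver(pool, () -> {});
        System.out.println("entries after failed delivery: " + running.size());
    }
}
```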
[jira] [Commented] (FELIX-4663) Potential memory leak in AsyncDeliveryTask
[ https://issues.apache.org/jira/browse/FELIX-4663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14157817#comment-14157817 ] Hartmut Lang commented on FELIX-4663: - Please also check whether 1.4.2 can cause the same situation if the ExecutorService of the DefaultThreadPool throws a RejectedExecutionException. Potential memory leak in AsyncDeliveryTask -- Key: FELIX-4663 URL: https://issues.apache.org/jira/browse/FELIX-4663 Project: Felix Issue Type: Bug Components: Event Admin Affects Versions: eventadmin-1.3.2 Reporter: Hartmut Lang
[jira] [Created] (FELIX-4664) [SSLFilter] Support for pre-3.0 Servlet API
Felix Meschberger created FELIX-4664: Summary: [SSLFilter] Support for pre-3.0 Servlet API Key: FELIX-4664 URL: https://issues.apache.org/jira/browse/FELIX-4664 Project: Felix Issue Type: Bug Components: HTTP Service Affects Versions: http-sslfilter-0.1.0, http-2.3.0 Reporter: Felix Meschberger The SSL Filter bundle currently inherits the Servlet API dependency from the parent project, which sets it to Servlet API 3.0. Actually, the filter itself does not depend on Servlet API 3.0 at all, and the dependency should probably be lowered -- maybe even to the lowest version that still compiles... There is one caveat: the unit tests currently use the HttpServletResponse.getHeader method, which was introduced with Servlet API 3.0. So the tests need to be tweaked.
[jira] [Updated] (FELIX-4664) [SSLFilter] Support for pre-3.0 Servlet API
[ https://issues.apache.org/jira/browse/FELIX-4664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Felix Meschberger updated FELIX-4664: - Attachment: FELIX-4664.patch Proposed patch. [SSLFilter] Support for pre-3.0 Servlet API --- Key: FELIX-4664 URL: https://issues.apache.org/jira/browse/FELIX-4664 Project: Felix Issue Type: Bug Components: HTTP Service Affects Versions: http-2.3.0, http-sslfilter-0.1.0 Reporter: Felix Meschberger Attachments: FELIX-4664.patch
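One way to tweak the tests without Servlet API 3.0 is to record headers in the test's own response wrapper instead of reading them back via getHeader. A minimal sketch of that idea, with a plain class standing in for an HttpServletResponseWrapper subclass; the class and method names are illustrative, not from the patch.

```java
import java.util.HashMap;
import java.util.Map;

public class HeaderRecorder {
    private final Map<String, String> headers = new HashMap<>();

    // In a real test this would override HttpServletResponseWrapper.setHeader,
    // capturing what the filter writes
    public void setHeader(String name, String value) {
        headers.put(name, value);
    }

    // Replacement for the Servlet 3.0-only getHeader in test assertions
    public String recordedHeader(String name) {
        return headers.get(name);
    }

    public static void main(String[] args) {
        HeaderRecorder response = new HeaderRecorder();
        response.setHeader("Location", "https://example.org/");
        System.out.println(response.recordedHeader("Location"));
    }
}
```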
[jira] [Commented] (FELIX-4656) Improve memory usage of the resolver
[ https://issues.apache.org/jira/browse/FELIX-4656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158115#comment-14158115 ] Richard S. Hall commented on FELIX-4656: From mailing list: Guillaume Nodet Tue, 30 Sep 2014 01:45:07 -0700 I recently ran into OutOfMemory problems when using the resolver, in addition to the slowness I already raised. I worked a bit on the resolver and pushed my results to a private branch for review: https://github.com/gnodet/felix/commits/resolver-improvements The memory problem was addressed by replacing the internal data structures held by the Candidates class. This class was using a HashMap<Requirement, List<Capability>> to store the candidates, and for each permutation the whole data structure was copied. Unfortunately, the HashMap object is very memory intensive, and the List<Capability> values were duplicated a lot during resolutions. Therefore, I introduced a few collections which are optimized for memory consumption. Those are the OpenHashMap (derived from the Mahout collections) and the CopyOnWriteList/CopyOnWriteSet. The OpenHashMap uses open addressing and two Object[] arrays internally instead of tons of entry objects. Copying should also be faster, as far fewer objects are created. The values in this map are now CopyOnWriteList instead of ArrayList, which has the big benefit of not duplicating the entire arrays when the structure is copied. Those two collections work roughly the same way as CopyOnWriteArrayList but without any thread safety. In addition, creating a new list from a CopyOnWriteList does not lead to creating a new Object[] to store the data, but merely assigns the same pointer internally. In terms of memory consumption this means, overall, that copying a Candidates object will lead to the creation of two Object[] arrays (for the OpenHashMap) and of small objects for the lists (but with no copy of the data itself).
This is mainly commit https://github.com/gnodet/felix/commit/0bf1523f21f9983b21b2737b4f78bb8d78cd35fd The slowness problem was partially addressed because I found out that the resolver was attempting the same resolution multiple times. I think this is due to the order of removing possible candidates, so that multiple paths to the same resolution were executed. In order to solve this problem, the Candidates object now holds a m_path structure which contains all the removed candidates. This object is used in the resolver loop to make sure we haven't already tried the same resolution. For big resolutions, I think it will improve things a lot. This is fixed by commit https://github.com/gnodet/felix/commit/090a67a7fc05170291ad9cff808229a0292b6fb2 I'm going to do further testing, and I'm going to add a big resolution test to measure performance and memory consumption. In the meantime, I'd like others to review and test it if possible. Improve memory usage of the resolver Key: FELIX-4656 URL: https://issues.apache.org/jira/browse/FELIX-4656 Project: Felix Issue Type: Improvement Components: Resolver Reporter: Guillaume Nodet Assignee: Guillaume Nodet Fix For: resolver-1.2.0 During big resolutions (> 100 bundles), the memory consumption can become huge, mostly through keeping a lot of copies of the Candidates object. I want to lower the memory requirements of the resolver without touching the algorithm at all (which would be a different improvement). This can be done by: * using less memory-intensive collections * doing smart copies of those collections (where they would only actually copy the data when modified) The second item is slightly more difficult to achieve, as the maps in the Candidates objects contain Sets and Lists, which means those must be copied too. So the two approaches could actually be complementary, if achievable. For the first one, the HashMap and HashSet are very memory intensive. I'll introduce two new collections which will lower the requirements.
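The copy-on-write behaviour described above can be sketched as follows: copying the list merely shares the backing array pointer, and only a mutation pays the cost of allocating a new array. This is a simplified, non-thread-safe illustration of the idea, not the Felix CopyOnWriteList itself.

```java
import java.util.Arrays;

public class CowList<E> {
    private Object[] data;

    public CowList() { this.data = new Object[0]; }

    // Copy constructor: shares the backing array, no data is copied
    public CowList(CowList<E> other) { this.data = other.data; }

    public void add(E e) {
        // Mutation allocates a fresh array; any list sharing the old
        // array (the copy source) is left untouched
        Object[] copy = Arrays.copyOf(data, data.length + 1);
        copy[data.length] = e;
        data = copy;
    }

    @SuppressWarnings("unchecked")
    public E get(int i) { return (E) data[i]; }

    public int size() { return data.length; }

    public static void main(String[] args) {
        CowList<String> a = new CowList<>();
        a.add("x");
        CowList<String> b = new CowList<>(a); // cheap copy, shared array
        b.add("y");                           // only now is an array allocated
        System.out.println(a.size() + " " + b.size());
    }
}
```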