[jira] [Commented] (TS-3016) High CPU in 5.0
[ https://issues.apache.org/jira/browse/TS-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487600#comment-14487600 ]

Thomas Jackson commented on TS-3016:
------------------------------------

Should we merge such a patch? If it breaks PFS, then it doesn't seem like a good idea... If we wanted to get fancy, we could try to change that option on a per-context basis, based on whether the cipher suite we selected supports PFS or not. But as [~bcall] mentioned, it is enabled everywhere else ;)

> High CPU in 5.0
> ---------------
>
>            Key: TS-3016
>            URL: https://issues.apache.org/jira/browse/TS-3016
>        Project: Traffic Server
>     Issue Type: Bug
>     Components: Core
>       Reporter: Sudheer Vinukonda
>    Attachments: TS-3016.diff
>
> After the 5.0 rollout, our production systems are seeing much higher CPU usage compared to the pre-upgrade ats4 levels.
> For example, on some of our hosts, below is the comparison before and after the upgrade:
> {code}
> cpu %util: 88.7% (version: ats5)
> cpu %util: 70%   (version: ats4)
> {code}
> Running perf top on traffic_server shows most CPU spent in SSL API calls, initially causing us to suspect SSL-related changes. However, after some analysis, it looks like the issue might be caused by the changes made in TS-1365. After reverting the changes in TS-1365, the CPU usage comes back to pre-upgrade levels again.
> {code}
> ats5:
> -----
> Samples: 374K of event 'cycles', Event count (approx.): 29126701697
>  26.44%  libcrypto.so.1.0.1e  [.] bn_sqr4x_mont
>  16.26%  libcrypto.so.1.0.1e  [.] bn_mul_mont            <--
>   6.74%  libcrypto.so.1.0.1e  [.] bn_mul4x_mont_gather5
>   4.95%  libcrypto.so.1.0.1e  [.] BN_usub
>   1.54%  libcrypto.so.1.0.1e  [.] sha256_block_data_order
>   1.37%  libcrypto.so.1.0.1e  [.] BN_mod_mul_montgomery
>
>  80.89  libcrypto.so.1.0.1e
>   7.09  traffic_server
>   5.02  [kernel]
>   2.71  libc-2.12.so
>   0.59  libpthread-2.12.so
>   0.42  libssl.so.1.0.1e
>   0.32  libtsutil.so.5
>   0.32  [bnx2]
>   0.02  libstdc++.so.6.0.13
>   0.02  libpcre.so.0.0.1
>   0.01  quick_filter.so
>   0.01  libtcl8.5.so
>   0.01  librt-2.12.so
>   0.01  [kernel].vsyscall_fn
>   0.01  [kernel].vsyscall_1
>   0     regex_remap.so
>   0     libresolv-2.12.so
>   0     libgtlocip.so.1.7.4.B86
>   0     header_rewrite.so
>   0     conf_remap.so
>   0     [sd_mod]
>   0     [mpt2sas]
>   0     [kernel].vsyscall_0
>   0     [jbd]
>   0     [ipv6]
>   0     [ext3]
>   0     [dm_mod]
>
> ats4:
> -----
> Samples: 249K of event 'cycles', Event count (approx.): 18282595621
>  39.34%  libcrypto.so.1.0.1e  [.] bn_sqr4x_mont
>  10.23%  libcrypto.so.1.0.1e  [.] bn_mul4x_mont_gather5
>   6.25%  libcrypto.so.1.0.1e  [.] bn_mul_mont            <--
>   2.19%  libcrypto.so.1.0.1e  [.] sha256_block_data_order
>   2.03%  libcrypto.so.1.0.1e  [.] BN_usub
>   1.50%  [kernel]             [k] find_busiest_group
>   1.08%  libcrypto.so.1.0.1e  [.] sha1_block_data_order_ssse3
>
>  80.93  libcrypto.so.1.0.1e
>   6.9   [kernel]
>   4.96  traffic_server
>   3     libc-2.12.so
>   0.84  libpthread-2.12.so
>   0.81  libssl.so.1.0.1e
>   0.38  [bnx2]
>   0.32  libtsutil.so.4
>   0.02  libtcl8.5.so
>   0.02  librt-2.12.so
>   0.02  libpcre.so.0.0.1
>   0.02  [kernel].vsyscall_fn
>   0.01  quick_filter.so
>   0.01  [kernel].vsyscall_1
>   0.01  [kernel].vsyscall_0
>   0     regex_remap.so
>   0     libstdc++.so.6.0.13
>   0     libresolv-2.12.so
>   0     header_rewrite.so
>   0     conf_remap.so
>   0     [sd_mod]
>   0     [mpt2sas]
>   0     [jbd]
>   0     [ipv6]
>   0     [ext3]
>   0     [dm_mod]
> {code}

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
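The per-context idea above could be sketched roughly as follows. This is an illustrative Python sketch using the stdlib `ssl` module (which mirrors OpenSSL's `SSL_OP_*` flags), not ATS code; the prefix-based PFS heuristic and the function name are invented for illustration, and `getattr()` guards against OpenSSL builds where the flag is absent (in OpenSSL 1.1.0+ the option is a no-op defined as 0).

```python
import ssl

# Assumption: suites offering forward secrecy start with one of these
# OpenSSL cipher-name prefixes (ephemeral (EC)DH key exchange).
PFS_PREFIXES = ("ECDHE", "DHE", "EDH")

def make_ctx(cipher_list: str) -> ssl.SSLContext:
    """Build a server context, opting into single-use ECDH keys only
    when the configured cipher list contains PFS suites."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    wants_pfs = any(c.startswith(PFS_PREFIXES) for c in cipher_list.split(":"))
    if wants_pfs:
        # OP_SINGLE_ECDH_USE mirrors OpenSSL's SSL_OP_SINGLE_ECDH_USE.
        ctx.options |= getattr(ssl, "OP_SINGLE_ECDH_USE", 0)
    return ctx
```

The point of the sketch is only that the option is a per-`SSL_CTX` bit, so it can in principle differ between contexts; whether the extra complexity is worth it over enabling it globally is exactly the trade-off debated above.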
[ https://issues.apache.org/jira/browse/TS-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14104337#comment-14104337 ]

Sudheer Vinukonda commented on TS-3016:
---------------------------------------

FWIW, we have a local patch that adds a configurable setting to disable the single EC/DH key-generation options (defaulting to OFF). Attaching the patch here, in case it makes sense to pull it into trafficserver.
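The shape of such a switch can be sketched as below. This is a minimal, hedged illustration of gating the option bits on a boolean setting with a default of OFF; the function and parameter names are invented here, and the attached TS-3016.diff defines the actual configuration knob. Python's `ssl` constants stand in for OpenSSL's `SSL_OP_*` macros.

```python
import ssl

def ssl_ctx_options(single_use_keys_enabled: bool = False) -> int:
    """Compute the context option bitmask; single-use (EC)DH keys are
    only requested when the (hypothetical) setting is turned on."""
    opts = ssl.OP_ALL  # baseline bug-workaround options
    if single_use_keys_enabled:
        # getattr() mirrors the #ifdef guards in the original patch:
        # skip the bit on builds where the macro is not defined.
        opts |= getattr(ssl, "OP_SINGLE_DH_USE", 0)
        opts |= getattr(ssl, "OP_SINGLE_ECDH_USE", 0)
    return int(opts)
```

With the default of OFF, the bitmask is unchanged from the baseline, which matches the patch's goal of restoring pre-5.0 CPU behavior unless an operator opts in.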
[ https://issues.apache.org/jira/browse/TS-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14099204#comment-14099204 ]

Bryan Call commented on TS-3016:
--------------------------------

Other servers enable SSL_OP_SINGLE_ECDH_USE by default:

{code}
[bcall@homer tmp]$ grep -r SSL_OP_SINGLE_ECDH_USE httpd-2.4.7 nginx-1.5.8 stunnel-4.56
httpd-2.4.7/modules/ssl/ssl_engine_init.c:        SSL_CTX_set_options(ctx, SSL_OP_SINGLE_ECDH_USE);
nginx-1.5.8/src/event/ngx_event_openssl.c:        SSL_CTX_set_options(ssl->ctx, SSL_OP_SINGLE_ECDH_USE);
stunnel-4.56/src/options.c:#ifdef SSL_OP_SINGLE_ECDH_USE
stunnel-4.56/src/options.c:        {"SINGLE_ECDH_USE", SSL_OP_SINGLE_ECDH_USE},
{code}
[ https://issues.apache.org/jira/browse/TS-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14099189#comment-14099189 ]

Sudheer Vinukonda commented on TS-3016:
---------------------------------------

[~jpeach] - But isn't this newly added to trafficserver via commit d7bb4cd3c6ec6c1fc5e70251257e2e10e450c92f? We missed this commit in our ats4 version (probably a mistake to begin with), and now that the ats5 version includes it, we ran into this high-CPU problem. We are definitely going to discuss re-enabling those options, but since this has a performance impact, is there a better way to address the problem? For example, the OpenSSL documentation says the following:

"If ``strong'' primes were used to generate the DH parameters, it is not strictly necessary to generate a new key for each handshake but it does improve forward secrecy. If it is not assured, that ``strong'' primes were used (see especially the section about DSA parameters below), SSL_OP_SINGLE_DH_USE must be used in order to prevent small subgroup attacks. Always using SSL_OP_SINGLE_DH_USE has an impact on the computer time needed during negotiation, but it is not very large, so application authors/users should consider to always enable this option."

Is there a way to confirm/check whether "strong" primes are indeed being used?
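On the question of checking for "strong" primes: the usual reading is a safe prime, i.e. a prime p for which (p-1)/2 is also prime, which rules out the small-subgroup structure the documentation warns about. A hedged, self-contained sketch of such a check (Miller-Rabin primality test on both p and (p-1)/2) is below; for real parameters, `openssl dhparam -check` is the authoritative tool, and this is only an illustration of the math.

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

def is_safe_prime(p: int) -> bool:
    """A 'safe' prime p has (p - 1) / 2 prime as well."""
    return is_probable_prime(p) and is_probable_prime((p - 1) // 2)
```

For example, 23 is safe (11 is prime) while 29 is not (14 is composite). The DH prime can be extracted from a parameter file with `openssl dhparam -in dhparams.pem -text -noout` and fed to a check like this.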
[ https://issues.apache.org/jira/browse/TS-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14099137#comment-14099137 ]

James Peach commented on TS-3016:
---------------------------------

I'm pretty sure that I added {{SSL_OP_SINGLE_DH_USE}} and {{SSL_OP_SINGLE_ECDH_USE}} because that's what mod_ssl does. It's also unconditionally enabled there.
[ https://issues.apache.org/jira/browse/TS-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14099084#comment-14099084 ]

Bryan Call commented on TS-3016:
--------------------------------

We should have SSL_OP_SINGLE_ECDH_USE enabled by default and take the CPU hit. OpenSSL does not provide per-handshake PFS unless this option is enabled; without it, forward secrecy only holds across restarts of the application.

Reference, Section 2.3: http://homes.esat.kuleuven.be/~fvercaut/papers/CTRSA2011.pdf
[ https://issues.apache.org/jira/browse/TS-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098962#comment-14098962 ]

Sudheer Vinukonda commented on TS-3016:
---------------------------------------

[~zwoop] - Yes, this is a blocker for our ats 5.0 rollout, so for the immediate production issue we are reverting commit d7bb4cd3c6ec6c1fc5e70251257e2e10e450c92f. But we do want to understand and investigate how to resolve this without impacting security. Based on OpenSSL's documentation (https://www.openssl.org/docs/ssl/SSL_CTX_set_tmp_dh_callback.html), enabling the SINGLE_USE options is recommended (although it does warn about the CPU impact). It does, however, say that if "strong" primes are used, these options may not be needed. I am investigating whether the OpenSSL versions we use do in fact use the "strong" primes referred to here. But in the meantime, any suggestions on how to address this?

{code}
When using a cipher with RSA authentication, an ephemeral DH key exchange can take place. Ciphers with DSA keys always use ephemeral DH keys as well. In these cases, the session data are negotiated using the ephemeral/temporary DH key and the key supplied and certified by the certificate chain is only used for signing. Anonymous ciphers (without a permanent server key) also use ephemeral DH keys.

Using ephemeral DH key exchange yields forward secrecy, as the connection can only be decrypted, when the DH key is known. By generating a temporary DH key inside the server application that is lost when the application is left, it becomes impossible for an attacker to decrypt past sessions, even if he gets hold of the normal (certified) key, as this key was only used for signing.

In order to perform a DH key exchange the server must use a DH group (DH parameters) and generate a DH key. The server will always generate a new DH key during the negotiation, when the DH parameters are supplied via callback and/or when the SSL_OP_SINGLE_DH_USE option of SSL_CTX_set_options(3) is set. It will immediately create a DH key, when DH parameters are supplied via SSL_CTX_set_tmp_dh() and SSL_OP_SINGLE_DH_USE is not set. In this case, it may happen that a key is generated on initialization without later being needed, while on the other hand the computer time during the negotiation is being saved.

If ``strong'' primes were used to generate the DH parameters, it is not strictly necessary to generate a new key for each handshake but it does improve forward secrecy. If it is not assured, that ``strong'' primes were used (see especially the section about DSA parameters below), SSL_OP_SINGLE_DH_USE must be used in order to prevent small subgroup attacks. Always using SSL_OP_SINGLE_DH_USE has an impact on the computer time needed during negotiation, but it is not very large, so application authors/users should consider to always enable this option.

As generating DH parameters is extremely time consuming, an application should not generate the parameters on the fly but supply the parameters. DH parameters can be reused, as the actual key is newly generated during the negotiation. The risk in reusing DH parameters is that an attacker may specialize on a very often used DH group. Applications should therefore generate their own DH parameters during the installation process using the openssl dhparam(1) application. In order to reduce the computer time needed for this generation, it is possible to use DSA parameters instead (see dhparam(1)), but in this case SSL_OP_SINGLE_DH_USE is mandatory.

Application authors may compile in DH parameters. Files dh512.pem, dh1024.pem, dh2048.pem, and dh4096.pem in the 'apps' directory of current version of the OpenSSL distribution contain the 'SKIP' DH parameters, which use safe primes and were generated verifiably pseudo-randomly. These files can be converted into C code using the -C option of the dhparam(1) application. Authors may also generate their own set of parameters using dhparam(1), but a user may not be sure how the parameters were generated. The generation of DH parameters during installation is therefore recommended.

An application may either directly specify the DH parameters or can supply the DH parameters via a callback function. The callback approach has the advantage, that the callback may supply DH parameters for different key lengths.
{code}
[ https://issues.apache.org/jira/browse/TS-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098864#comment-14098864 ]

Leif Hedstrom commented on TS-3016:
-----------------------------------

Ok, that makes more sense. Should we keep this for v5.1.0 to fix, or move it out?
[ https://issues.apache.org/jira/browse/TS-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098844#comment-14098844 ]

Sudheer Vinukonda commented on TS-3016:
---------------------------------------

So, it looks like the higher CPU is caused by enabling the SSL options SSL_OP_SINGLE_DH_USE and SSL_OP_SINGLE_ECDH_USE via commit d7bb4cd3c6ec6c1fc5e70251257e2e10e450c92f in TS-2372. I kept overlooking TS-2372 all this while, since we do have TS-2372 back-ported into our ats4 - but, (un)fortunately, it turns out our back port missed this particular commit from TS-2372. Note that I confirmed the option SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION doesn't impact CPU much (just the SINGLE_EC/DH_USE options).

{code}
+  // Enable ephemeral DH parameters for the case where we use a cipher with DH forward security.
+#ifdef SSL_OP_SINGLE_DH_USE
+  ssl_ctx_options |= SSL_OP_SINGLE_DH_USE;
+#endif
+
+#ifdef SSL_OP_SINGLE_ECDH_USE
+  ssl_ctx_options |= SSL_OP_SINGLE_ECDH_USE;
+#endif
{code}
[jira] [Commented] (TS-3016) High CPU in 5.0
[ https://issues.apache.org/jira/browse/TS-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098589#comment-14098589 ] Sudheer Vinukonda commented on TS-3016: --- [~zwoop] - My apologies; it looks like TS-1365 may not be the issue after all. I was running multiple versions and probably misread the data. Unfortunately, I ran out of the peak-hour window and couldn't continue further yesterday. I am continuing the tests today and will update once I have more concrete evidence.
[jira] [Commented] (TS-3016) High CPU in 5.0
[ https://issues.apache.org/jira/browse/TS-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098581#comment-14098581 ] Leif Hedstrom commented on TS-3016: --- Hmmm, which commit is the actual problem?