HAP, Modsecurity and SSL

2016-01-22 Thread Phil Daws
Hello: 

Are any of you running an architecture like 
http://blog.haproxy.com/2012/10/12/scalable-waf-protection-with-haproxy-and-apache-with-modsecurity/
 but with SSL termination in the mix ? Would be interested to hear how you have 
done it please. 
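
(For illustration, a minimal sketch of the first hop of that architecture with SSL terminated on HAProxy -- the names, addresses and certificate path below are placeholders, not taken from the referenced article:)

frontend ft_waf_ssl
    mode http
    bind 0.0.0.0:443 ssl crt /etc/haproxy/certs/site.pem
    option forwardfor
    default_backend bk_waf

backend bk_waf
    mode http
    # Apache + ModSecurity instances; they relay accepted requests on to the
    # application tier (a second HAProxy frontend in the referenced article)
    server waf1 192.168.0.11:81 check
    server waf2 192.168.0.12:81 check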

Thanks, Phil 


Odd SSL performance

2015-06-18 Thread Phil Daws
Hello all:

we are rolling out a new system and are testing the SSL performance with some 
strange results.  This is all being performed on a cloud hypervisor instance 
with the following:

HA-Proxy version 1.5.11 2015/01/31
8GM RAM / 8 CPUs

when we run 'ab' with nbproc set to '1' we see the following:

ab -n 40000 -c 2000 https://localhost/status
This is ApacheBench, Version 2.3 $Revision: 1528965 $
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 4000 requests
Completed 8000 requests
Completed 12000 requests
Completed 16000 requests
Completed 20000 requests
Completed 24000 requests
Completed 28000 requests
Completed 32000 requests
Completed 36000 requests
Completed 40000 requests
Finished 40000 requests


Server Software:        nginx
Server Hostname:        localhost
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256

Document Path:          /status
Document Length:        16 bytes

Concurrency Level:      2000
Time taken for tests:   101.824 seconds
Complete requests:      40000
Failed requests:        0
Total transferred:      17400000 bytes
HTML transferred:       640000 bytes
Requests per second:    392.83 [#/sec] (mean)
Time per request:       5091.206 [ms] (mean)
Time per request:       2.546 [ms] (mean, across all concurrent requests)
Transfer rate:          166.88 [Kbytes/sec] received

Now the documentation does say that one should not need to raise nbproc, as it 
can make debugging difficult, but to see what would happen we gave it a try:

ab -n 40000 -c 2000 https://localhost/
This is ApacheBench, Version 2.3 $Revision: 1528965 $
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 4000 requests
Completed 8000 requests
Completed 12000 requests
Completed 16000 requests
Completed 20000 requests
Completed 24000 requests
Completed 28000 requests
Completed 32000 requests
Completed 36000 requests
Completed 40000 requests
Finished 40000 requests


Server Software:        nginx
Server Hostname:        localhost
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256

Document Path:          /
Document Length:        0 bytes

Concurrency Level:      2000
Time taken for tests:   45.011 seconds
Complete requests:      40000
Failed requests:        0
Total transferred:      8880000 bytes
HTML transferred:       0 bytes
Requests per second:    888.67 [#/sec] (mean)
Time per request:       2250.558 [ms] (mean)
Time per request:       1.125 [ms] (mean, across all concurrent requests)
Transfer rate:          192.66 [Kbytes/sec] received

So that is certainly better, but now look what happens if we use plain HTTP:

ab -n 40000 -c 2000 http://localhost/status
This is ApacheBench, Version 2.3 $Revision: 1528965 $
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 4000 requests
Completed 8000 requests
Completed 12000 requests
Completed 16000 requests
Completed 20000 requests
Completed 24000 requests
Completed 28000 requests
Completed 32000 requests
Completed 36000 requests
Completed 40000 requests
Finished 40000 requests


Server Software:        nginx
Server Hostname:        localhost
Server Port:            80

Document Path:          /status
Document Length:        16 bytes

Concurrency Level:      2000
Time taken for tests:   7.152 seconds
Complete requests:      40000
Failed requests:        0
Total transferred:      17400000 bytes
HTML transferred:       640000 bytes
Requests per second:    5592.99 [#/sec] (mean)
Time per request:       357.591 [ms] (mean)
Time per request:       0.179 [ms] (mean, across all concurrent requests)
Transfer rate:          2375.93 [Kbytes/sec] received

Have tried adding the option prefer-last-server but that did not make a great 
deal of difference.  Any thoughts please as to what could be wrong ?
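
(For reference, a minimal sketch of the kind of multi-process SSL setup being benchmarked here -- assuming HAProxy 1.5 syntax; the process count, CPU pinning and certificate path are illustrative only:)

global
    nbproc 2
    cpu-map 1 0
    cpu-map 2 1

frontend fe_https
    mode http
    bind 0.0.0.0:443 ssl crt /etc/haproxy/certs/site.pem
    bind-process 1 2
    default_backend be_web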

Thanks, Phil





Re: Odd SSL performance

2015-06-18 Thread Phil Daws
Baptiste,

as requested:

openssl speed rsa2048
Doing 2048 bit private rsa's for 10s: 1189 2048 bit private RSA's in 10.00s
Doing 2048 bit public rsa's for 10s: 50993 2048 bit public RSA's in 10.00s
OpenSSL 0.9.8w 23 Apr 2012
built on: Mon Feb 17 16:11:28 PST 2014
options:bn(64,64) md2(int) rc4(ptr,int) des(idx,cisc,16,int) aes(partial) 
idea(int) blowfish(ptr2) 
compiler: gcc -fPIC -DOPENSSL_PIC -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN 
-DHAVE_DLFCN_H -fPIC -DPIC -O2 -DNDEBUG -Wl,-z,noexecstack -Wa,--noexecstack 
-m64 -DL_ENDIAN -DTERMIO -O3 -Wall -DMD32_REG_T=int
available timing options: TIMES TIMEB HZ=100 [sysconf value]
timing function used: times
                  sign    verify    sign/s verify/s
rsa 2048 bits 0.008410s 0.000196s    118.9   5099.3

we were concerned that using '-k' would invalidate the results; is that a 
founded concern ?
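
(For comparison, the keep-alive variant of the same run only differs by the '-k' flag; the other parameters below simply mirror the earlier tests:)

ab -k -n 40000 -c 2000 https://localhost/status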

Thanks, Phil


- On 18 Jun, 2015, at 14:26, Baptiste bed...@gmail.com wrote:

 Phil,
 
 without -k, HAProxy spends its time computing TLS keys.
 Can you run 'openssl speed rsa2048' and report here the number?
 My guess is that it shouldn't be too far from 400 :)
 
 Baptiste
 
 
 On Thu, Jun 18, 2015 at 3:20 PM, Phil Daws ux...@splatnix.net wrote:
 Hello Baptiste:

 we were seeing lower tps from a remote system to the front-end LB, hence trying
 to exclude client-side issues by using the LB interface.  Yes, when we use
 '-k', we do see a huge difference, but it's interesting that we pretty much
 always get 390 tps for a single core, and when we go to nbproc 2 then 780.

 Appreciate the input Baptiste & Lukas.

 Thanks, Phil.

 - On 18 Jun, 2015, at 14:15, Baptiste bed...@gmail.com wrote:

 Phil,

 First, use '-k' option on ab to keep connections alive on ab side.

 From a pure benchmark point of view, using the loopback is useless!
 Furthermore if all VMs are hosted on the same hypervisor.
 You won't be able to get any accurate conclusion from your test,
 because the injector VM is impacting the HAProxy VM, which might be
 mutually impacting the server VMs...

 Baptiste


 On Thu, Jun 18, 2015 at 2:41 PM, Phil Daws ux...@splatnix.net wrote:
 Hello Lukas:

 Path is as follows:

 Internet -> HAProxy [Frontend:443 -> Backend:80] -> 6 x NGINX

 Yeah, unfortunately due to the application behind NGINX our benchmarking 
 has to
 be without keep-alives :(

 Thanks, Phil

 - On 18 Jun, 2015, at 13:38, Lukas Tribus luky...@hotmail.com wrote:

 Hi Phil,


 Hello all:

 we are rolling out a new system and are testing the SSL performance with
 some strange results. This is all being performed on a cloud hypervisor
 instance with the following:

 You are saying nginx listens on 443 (SSL) and 80, and you connect to those
 ports directly from ab. Where in that picture is haproxy?



 Have tried adding the option prefer-last-server but that did not make a
 great deal of difference. Any thoughts please as to what could be wrong ?

 Without keepalive it won't make any difference. Enable keepalive with ab 
 (-k).



 Lukas




Re: Odd SSL performance

2015-06-18 Thread Phil Daws
Hello Lukas:

Path is as follows:

Internet -> HAProxy [Frontend:443 -> Backend:80] -> 6 x NGINX

Yeah, unfortunately due to the application behind NGINX our benchmarking has to 
be without keep-alives :(

Thanks, Phil

- On 18 Jun, 2015, at 13:38, Lukas Tribus luky...@hotmail.com wrote:

 Hi Phil,
 
 
 Hello all:

 we are rolling out a new system and are testing the SSL performance with
 some strange results. This is all being performed on a cloud hypervisor
 instance with the following:
 
 You are saying nginx listens on 443 (SSL) and 80, and you connect to those
 ports directly from ab. Where in that picture is haproxy?
 
 
 
 Have tried adding the option prefer-last-server but that did not make a
 great deal of difference. Any thoughts please as to what could be wrong ?
 
 Without keepalive it won't make any difference. Enable keepalive with ab (-k).
 
 
 
 Lukas




Re: Odd SSL performance

2015-06-18 Thread Phil Daws
Hello Baptiste:

we were seeing lower tps from a remote system to the front-end LB, hence trying 
to exclude client-side issues by using the LB interface.  Yes, when we use 
'-k', we do see a huge difference, but it's interesting that we pretty much 
always get 390 tps for a single core, and when we go to nbproc 2 then 780.

Appreciate the input Baptiste & Lukas.

Thanks, Phil.

- On 18 Jun, 2015, at 14:15, Baptiste bed...@gmail.com wrote:

 Phil,
 
 First, use '-k' option on ab to keep connections alive on ab side.
 
 From a pure benchmark point of view, using the loopback is useless!
 Furthermore if all VMs are hosted on the same hypervisor.
 You won't be able to get any accurate conclusion from your test,
 because the injector VM is impacting the HAProxy VM, which might be
 mutually impacting the server VMs...
 
 Baptiste
 
 
 On Thu, Jun 18, 2015 at 2:41 PM, Phil Daws ux...@splatnix.net wrote:
 Hello Lukas:

 Path is as follows:

 Internet -> HAProxy [Frontend:443 -> Backend:80] -> 6 x NGINX

 Yeah, unfortunately due to the application behind NGINX our benchmarking has 
 to
 be without keep-alives :(

 Thanks, Phil

 - On 18 Jun, 2015, at 13:38, Lukas Tribus luky...@hotmail.com wrote:

 Hi Phil,


 Hello all:

 we are rolling out a new system and are testing the SSL performance with
 some strange results. This is all being performed on a cloud hypervisor
 instance with the following:

 You are saying nginx listens on 443 (SSL) and 80, and you connect to those
 ports directly from ab. Where in that picture is haproxy?



 Have tried adding the option prefer-last-server but that did not make a
 great deal of difference. Any thoughts please as to what could be wrong ?

 Without keepalive it won't make any difference. Enable keepalive with ab 
 (-k).



 Lukas




Re: send-proxy and x-forward-for

2015-05-18 Thread Phil Daws
Hello Willy,

- On 17 May, 2015, at 14:16, Willy Tarreau w...@1wt.eu wrote:

 Hello Phil,
 
 On Tue, May 12, 2015 at 07:54:35AM +0100, Phil Daws wrote:
 (...)
 the issue is that if I go to the web site via HTTPS, which does not pass
 through a CDN, then the correct client IP is being passed through, but if I go
 via HTTP it's the CDN's IP which is being presented.  When I was using
 real_ip_header X-Forwarded-For then it would work fine, but that broke the
 HTTPS side of things.  Somehow I need to get the X-Forwarded-For IP, if it's
 present, into the proxy_protocol one.  Is that possible ?
 
 For now I don't see how to do this. While it is possible to spoof
 the original IP address extracted from the x-forwarded-for header,
 I'm not seeing a way to do that for proxy-proto. In fact we could
 imagine having an http-request rule to replace the incoming
 connection's source with something extracted from a header; that
 would solve most use cases I think.
 
 Regards,
 Willy

I believe a rule for performing a replacement would be very good indeed.  While 
Nenad has suggested using two NGINX servers, which is also a good idea, it would 
provide great flexibility if this could be done within HAP.
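
(Purely for illustration -- a sketch of the kind of http-request rule being imagined above; the set-src directive shown here did not exist in the 1.5 builds discussed in this thread and is written out only to make the idea concrete:)

frontend frontend-web-http
    mode http
    bind 192.168.8.70:80
    # hypothetical: replace the connection's source with the client address
    # the CDN put in X-Forwarded-For, so that send-proxy then carries the
    # real client IP to the backend
    http-request set-src hdr(X-Forwarded-For)
    option forwardfor except 127.0.0.0/8
    default_backend backend-web-http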

Regards, Phil



Re: send-proxy and x-forward-for

2015-05-16 Thread Phil Daws
Any thoughts please ?

- Original Message -
From: Phil Daws ux...@splatnix.net
To: haproxy@formilux.org
Sent: Tuesday, 12 May, 2015 07:54:35
Subject: send-proxy and x-forward-for

Hello:

am testing NGINX behind HAP 1.5.11 and having trouble understanding how 
send-proxy should be used in combination with x-forwarded-for.  What I have so 
far in my haproxy.cfg is as follows:

frontend frontend-web-http
mode http
bind 192.168.8.70:80
default_backend backend-web-http
option forwardfor except 127.0.0.0/8
option http-server-close
option httplog

frontend frontend-web-https
mode tcp
bind 192.168.8.70:443
default_backend backend-web-https

backend backend-web-http
mode http
stick-table type string len 64 size 100k expire 15m
stick store-response res.cook(PHPSESSID)
stick match req.cook(PHPSESSID)
option forwardfor
option http-server-close
server web01 192.168.10.70:80 check send-proxy
server web02 192.168.10.71:80 check send-proxy backup

backend backend-web-https
mode tcp
server web01.gos.innovot.com 192.168.10.70:443 check send-proxy
server web02.gos.innovot.com 192.168.10.71:443 check send-proxy backup

and within NGINX:

# HAProxy
set_real_ip_from 192.168.8.70;

# Fastly Proxy Networks
set_real_ip_from 23.235.32.0/20;
set_real_ip_from 43.249.72.0/22;
set_real_ip_from 103.244.50.0/24;
set_real_ip_from 103.245.222.0/23;
set_real_ip_from 103.245.224.0/24;
set_real_ip_from 104.156.80.0/20;
set_real_ip_from 185.31.16.0/22;
set_real_ip_from 199.27.72.0/21;
set_real_ip_from 202.21.128.0/24;
set_real_ip_from 203.57.145.0/24;
set_real_ip_from 10.1.8.0/24;

real_ip_header proxy_protocol;

the issue is that if I go to the web site via HTTPS, which does not pass 
through a CDN, then the correct client IP is being passed through, but if I go 
via HTTP it's the CDN's IP which is being presented.  When I was using 
real_ip_header X-Forwarded-For then it would work fine, but that broke the HTTPS 
side of things.  Somehow I need to get the X-Forwarded-For IP, if it's present, 
into the proxy_protocol one.  Is that possible ?
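
(A sketch of one possible approach, untested and purely illustrative: keep the PROXY protocol address as the source of truth on the :443 server block, and trust X-Forwarded-For from HAProxy plus the CDN ranges on the :80 server block instead -- the addresses are the ones already listed above, the structure is an assumption:)

server {
    listen 443 ssl proxy_protocol;
    set_real_ip_from 192.168.8.70;
    real_ip_header proxy_protocol;
    # ... rest of the HTTPS vhost ...
}

server {
    listen 80 proxy_protocol;
    set_real_ip_from 192.168.8.70;      # HAProxy
    set_real_ip_from 23.235.32.0/20;    # Fastly ranges, as above
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    # ... rest of the HTTP vhost ...
}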

Thanks, Phil



send-proxy and x-forward-for

2015-05-12 Thread Phil Daws
Hello:

am testing NGINX behind HAP 1.5.11 and having trouble understanding how 
send-proxy should be used in combination with x-forwarded-for.  What I have so 
far in my haproxy.cfg is as follows:

frontend frontend-web-http
mode http
bind 192.168.8.70:80
default_backend backend-web-http
option forwardfor except 127.0.0.0/8
option http-server-close
option httplog

frontend frontend-web-https
mode tcp
bind 192.168.8.70:443
default_backend backend-web-https

backend backend-web-http
mode http
stick-table type string len 64 size 100k expire 15m
stick store-response res.cook(PHPSESSID)
stick match req.cook(PHPSESSID)
option forwardfor
option http-server-close
server web01 192.168.10.70:80 check send-proxy
server web02 192.168.10.71:80 check send-proxy backup

backend backend-web-https
mode tcp
server web01.gos.innovot.com 192.168.10.70:443 check send-proxy
server web02.gos.innovot.com 192.168.10.71:443 check send-proxy backup

and within NGINX:

# HAProxy
set_real_ip_from 192.168.8.70;

# Fastly Proxy Networks
set_real_ip_from 23.235.32.0/20;
set_real_ip_from 43.249.72.0/22;
set_real_ip_from 103.244.50.0/24;
set_real_ip_from 103.245.222.0/23;
set_real_ip_from 103.245.224.0/24;
set_real_ip_from 104.156.80.0/20;
set_real_ip_from 185.31.16.0/22;
set_real_ip_from 199.27.72.0/21;
set_real_ip_from 202.21.128.0/24;
set_real_ip_from 203.57.145.0/24;
set_real_ip_from 10.1.8.0/24;

real_ip_header proxy_protocol;

the issue is that if I go to the web site via HTTPS, which does not pass 
through a CDN, then the correct client IP is being passed through, but if I go 
via HTTP it's the CDN's IP which is being presented.  When I was using 
real_ip_header X-Forwarded-For then it would work fine, but that broke the HTTPS 
side of things.  Somehow I need to get the X-Forwarded-For IP, if it's present, 
into the proxy_protocol one.  Is that possible ?

Thanks, Phil





HAP 1.5.11 and SSL

2015-04-16 Thread Phil Daws
Hello all!

Long time no post, but I have lost some of my old notes and am hitting an issue 
with SSL.  In my haproxy.conf I have:

frontend frontend-zimbra-zwc-http
mode http
bind 10.1.8.73:80
redirect scheme https if !{ ssl_fc }

frontend frontend-zimbra-zwc-https
bind 10.1.8.73:443 ssl crt /etc/haproxy/certs/mydomain.pem ciphers RC4:HIGH:!aNULL:!MD5
option tcplog
reqadd X-Forwarded-Proto:\ https
default_backend backend-zimbra-zwc

backend backend-zimbra-zwc
mode http
server zwc01 10.1.8.40:443 maxconn 1000 check-ssl verify none
server zwc02 10.1.8.41:443 maxconn 1000 check-ssl verify none backup

the HTTP connections are being redirected to HTTPS as desired, but when they hit 
the backend I see:

(NGINX) The plain HTTP request was sent to HTTPS port

If I have redirected at the frontend, then why is plain HTTP being sent to the 
backend ?
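
(For reference, a sketch of the distinction involved here: check-ssl only makes the health checks speak SSL, it does not encrypt the proxied traffic itself, so the backend still receives plain HTTP on port 443. Re-encrypting would mean adding the 'ssl' keyword to the server lines, roughly as below -- an untested sketch:)

backend backend-zimbra-zwc
    mode http
    server zwc01 10.1.8.40:443 maxconn 1000 ssl verify none check
    server zwc02 10.1.8.41:443 maxconn 1000 ssl verify none check backup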

Thanks, Phil



Capture IP Address

2013-10-17 Thread Phil Daws
Hello,

have searched but did not find an answer on whether it's possible to pass the 
connecting IP address (src) as a variable on a redirect ?  This would be used 
with an ACL for certain access to URLs, e.g.:

acl SEC_Admin url_dir -i /secure
acl ViaNOC src XXX.XXX.XXX.XX
redirect location http://internal.site?{SRC_IP} if SEC_Admin !ViaNOC

Is that possible ? Thank you.
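
(For what it's worth, later HAProxy releases -- 1.6 and up, so not necessarily the version in play here -- accept log-format expressions in redirect rules, which would make this expressible roughly as the sketch below; treat it as illustrative only:)

http-request redirect location http://internal.site/?src=%[src] if SEC_Admin !ViaNOC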



Backend Failover

2013-09-03 Thread Phil Daws
Hello,

I have a configuration where I am proxying front-end connections to a back-end 
service:

frontend security-frontend
   bind 192.168.1.10:3307
   maxconn  2000
   default_backend security-backend

backend security-backend
  mode tcp
  balance roundrobin
  option httpchk
  server sec1 192.168.2.10:3307 check port 1

but now I would like to add a backup to the security-backend.  Is the only 
option to have something like:

server sec2 X.X.X.X backup

or is it possible to use another back-end group that has already been defined, 
e.g.:

backend mysql-backend
  mode tcp
  balance roundrobin
  option httpchk
  server mysq1 192.168.2.20:3307 check port 1
  server mysq2 192.168.2.21:3307 check port 1
  server mysq3 192.168.2.22:3307 check port 1
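
(For illustration, one way to fall back to an entirely separate backend is to switch on the number of usable servers from the frontend -- an untested sketch reusing the backend names above:)

frontend security-frontend
    bind 192.168.1.10:3307
    mode tcp
    maxconn 2000
    acl security_down nbsrv(security-backend) lt 1
    use_backend mysql-backend if security_down
    default_backend security-backend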

Thanks.



Sticky Session Help

2013-07-02 Thread Phil Daws
Hello all,

I have built a small environment which has two web servers sat behind HAProxy 
(1.5) plus three MariaDB servers clustered using Galera.  I am finding that 
some web applications' admin panels, e.g. Wordpress/Joomla, do not work if the 
MySQL session is constantly redirected to another node.  I thought I could use 
a stick table and the source IP, but as I am proxying the web servers as well, 
all traffic gets directed to one server :(

Any thoughts on how to resolve this conundrum ? Is it even possible to resolve ?

Thanks.



Send-Proxy Checking

2013-04-11 Thread Phil Daws
Hello,

am working on setting up HAProxy and would like it to LB connections to our 
Zimbra server.  So far I have the following:

frontend zimbra-mta-frontend
  bind 172.30.8.22:25
  mode tcp
  no option http-server-close
  timeout client 1m
  log global
  option tcplog
  default_backend zimbra-mta-backend

backend zimbra-mta-backend
  mode tcp
  no option http-server-close
  log global
  option tcplog
  option smtpchk HELO mydomain.com
  timeout server 1m
  timeout connect 5s
  server zmta1 zm1.mydomain.com:1025 send-proxy

This works fine and proxies the connections through to Postscreen on the Zimbra 
server very nicely indeed.  The problem I am having is: how does one check that 
the service is running okay ?  When I view the statistics page, the zmta1 line 
is greyed out.  I see that there is also a check-send-proxy option but am not 
entirely sure how it works.

Any help appreciated please.



Re: Send-Proxy Checking

2013-04-11 Thread Phil Daws
Resolved; I had not specified the correct options.  With the following set, all is okay:

server zmta1 zm1.mydomain.com:1025 check check-send-proxy send-proxy


- Original Message -
From: Phil Daws ux...@splatnix.net
To: haproxy@formilux.org
Sent: Thursday, 11 April, 2013 1:51:59 PM
Subject: Send-Proxy Checking

Hello,

am working on setting up HAProxy and would like it to LB connections to our 
Zimbra server.  So far I have the following:

frontend zimbra-mta-frontend
  bind 172.30.8.22:25
  mode tcp
  no option http-server-close
  timeout client 1m
  log global
  option tcplog
  default_backend zimbra-mta-backend

backend zimbra-mta-backend
  mode tcp
  no option http-server-close
  log global
  option tcplog
  option smtpchk HELO mydomain.com
  timeout server 1m
  timeout connect 5s
  server zmta1 zm1.mydomain.com:1025 send-proxy

This works fine and proxies the connections through to Postscreen on the Zimbra 
server very nicely indeed.  The problem I am having is: how does one check that 
the service is running okay ?  When I view the statistics page, the zmta1 line 
is greyed out.  I see that there is also a check-send-proxy option but am not 
entirely sure how it works.

Any help appreciated please.



HAProxy and Zimbra

2013-04-10 Thread Phil Daws
Hello,

have just started to explore HAProxy and am finding it amazing!  As a long time 
Zimbra user I wanted to see how one could balance the front-end web client so 
had a play around.  What I have at present is the following configuration:

frontend zimbra-zwc-frontend-https
bind 172.30.8.21:443 ssl crt /etc/haproxy/certs/zimbra.pem
mode tcp
option tcplog
reqadd X-Forwarded-Proto:\ https
default_backend zimbra-zwc-backend-http

backend zimbra-zwc-backend-http
   mode http
   balance roundrobin
   stick-table type ip size 200k expire 30m
   stick on src
   server zwc1 zm1:80 check port 80
   server zwc2 zm2:80 check port 80

I admit that the configuration has been cobbled together from other people's 
thoughts and ideas; though it does actually work!  I did try to go the route of 
HTTPS -> HTTPS but that completely fell apart due to Zimbra using NGINX and 
automatically re-routing HTTP -> HTTPS.  The other stumbling block was I could 
not see how to check that the remote HTTPS (443) port was available.  I have 
seen check port and check id used but neither worked as expected.  So at 
present I have HAProxy acting as the SSL terminator and backing off the 
requests to an HTTP backend.  I can take one backend node down, upgrade it, and 
restart it without affecting any new connections, against a single destination 
IP address; NICE! :)
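
(For reference, a sketch of one way to health-check the HTTPS port while still sending traffic to port 80 -- this assumes a 1.5-dev build with check-ssl support and is untested:)

server zwc1 zm1:80 check port 443 check-ssl verify none
server zwc2 zm2:80 check port 443 check-ssl verify none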

This is all very new to me so any expert advice, or directions to further 
reading, would be gratefully received.

Thank you.

P.



Re: Two HAProxy instances with a shared IP

2013-04-09 Thread Phil Daws
Thank you Jerome.  Am looking at Keepalived, UCARP and VRRP, though not sure 
which way to go at the moment from a pros/cons perspective.  Thanks.
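
(For illustration, a minimal keepalived VRRP sketch for the shared IP -- the interface name, router id, priorities and the address itself are placeholders:)

vrrp_instance VI_HAPROXY {
    state MASTER          # BACKUP on the second node, with a lower priority
    interface eth0
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        192.168.1.100
    }
}
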
- Original Message -
From: Jérôme Benoit jerome.ben...@grenouille.com
To: haproxy@formilux.org
Cc: Phil Daws ux...@splatnix.net
Sent: Tuesday, 9 April, 2013 9:42:53 PM
Subject: Re: Two HAProxy instances with a shared IP

On Mon, 8 Apr 2013 14:36:52 +0100 (BST) in 
564091370.1183442.1365428212786.javamail.r...@innovot.com, 
Phil Daws ux...@splatnix.net wrote:

 Hello,

Hello, 

 am making my first foray into setting up a test lab to play around with 
 HAProxy.  Ideally I am hoping to build an environment which consists of two 
 HAProxy nodes that share an IP address, and each node then offloads HTTP 
 connections to two backend web servers. Basically build a meshed architecture 
 for no single point of failure.  

A mesh for a shared IP ? How ? Encapsulating the IP datagram ? 

It looks like the best route would be to use Wackamole and Spread for the 
shared IP address. 

or keepalived or carp.  

 Am building on CentOS 6.4 so would be grateful for your thoughts on this 
 setup or whether there is a more appropriate one.  If all works well then 
 hopefully can start to look at LB/HA for other services.

 The wackamole solution seems only valuable if you have more than one shared IP. 

++. 

-- 
Jérôme Benoit aka fraggle
La Météo du Net - http://grenouille.com
OpenPGP Key ID : 9FE9161D
Key fingerprint : 9CA4 0249 AF57 A35B 34B3 AC15 FAA0 CB50 9FE9 161D