
Update: SQUID transparent SSL interception : Squid v3.2

To keep this blog post relevant: there have been some improvements since the original post below was written. Squid v3.2 was released earlier this year, making SSL interception more seamless and easier to configure. The new features for HTTPS interception are described in the man page for http_port:


More specifically:

1. The “transparent” keyword has been renamed to “intercept”:

           intercept    Rename of old 'transparent' option to indicate proper functionality.

INTERCEPT is now better described as:

intercept	Support for IP-Layer interception of
			outgoing requests without browser settings.
			NP: disables authentication and IPv6 on the port.

2. To avoid further certificate errors when intercepting HTTPS sites, Squid can now dynamically generate SSL certificates, using generate-host-certificates. This means the CN of the presented certificate now matches that of the origin server, though the certificate is still signed with SQUID’s private key:

SSL Bump Mode Options:
	    In addition to these options ssl-bump requires TLS/SSL options.

	   generate-host-certificates[=<on|off>]
			Dynamically create SSL server certificates for the
			destination hosts of bumped CONNECT requests. When
			enabled, the cert and key options are used to sign
			generated certificates. Otherwise generated
			certificate will be selfsigned.
			If there is a CA certificate lifetime of the generated
			certificate equals lifetime of the CA certificate. If
			generated certificate is selfsigned lifetime is three
			years.
			This option is enabled by default when ssl-bump is used.
			See the ssl-bump option above for more information.
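Putting the two features together, a v3.2 interception port definition might look like the following sketch. The port number, CA filename and paths are placeholders of my own, not values from the original post:

```
# squid.conf (v3.2) - placeholder port and paths, adapt to your environment
https_port 3129 intercept ssl-bump generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB \
    cert=/etc/squid/ssl_cert/myCA.pem \
    key=/etc/squid/ssl_cert/private/myCA.pem
```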

Looks like the above is an offshoot of the excellent work here: http://wiki.squid-cache.org/Features/DynamicSslCert

Make sure to use the above two features for smoother HTTPS interception. Though remember: always warn users that their SSL traffic is being decrypted; privacy is a highly-valued right…


SQUID transparent SSL interception

July 2012: a small update covering newer versions of Squid (v3.2) is here

There seems to be a bit of confusion about configuring SQUID to transparently intercept SSL (read: HTTPS) connections. Some sites say it’s simply not possible:


Recent developments in SQUID have made this possible. This article explores how to set it up at a basic level. The SQUID proxy will essentially act as a man in the middle. The motivation behind this setup is to decrypt HTTPS connections in order to apply content filtering and so on.

There are concerns that transparently intercepting HTTPS traffic is unethical and can raise legal issues. True, and I agree that monitoring HTTPS connections without properly and explicitly notifying the user is bad, but we can use technical means to ensure that the user is properly notified and even prompted to accept monitoring or back out. More on this towards the end of the article.

So, on to the technical details of setting up the proxy. First, install the dependencies. We will need to compile SQUID from source, since by default it is not compiled with the necessary switches. I recommend downloading the latest 3.1 version, especially if you want to notify users about the monitoring. In Ubuntu:

apt-get install build-essential libssl-dev

Note : for CentOS users, use openssl-devel rather than libssl-dev

build-essential installs the compilers, while libssl-dev provides the SSL headers and libraries that enable SQUID to intercept the encrypted traffic. This package (libssl-dev) is needed during compilation; without it, running make will produce errors similar to the following:

error: ‘SSL’ was not declared in this scope

Download and extract the SQUID source code from their site. Next, configure, compile and install the source code using:

./configure --enable-icap-client --enable-ssl
make install

Note the switches I included in the configure command:

* enable-icap-client : we’ll need this to use ICAP to provide a notification page to clients that they are being monitored.

* enable-ssl : this is a prerequisite for SslBump, which squid uses to intercept SSL traffic transparently

Once SQUID has been installed, a very important step is to create the certificate that SQUID will present to the end client. In a test environment, you can easily create a self-signed certificate using OpenSSL by using the following:

openssl req -new -newkey rsa:1024 -days 365 -nodes -x509 -keyout www.sample.com.pem -out www.sample.com.pem
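Before wiring the certificate into Squid, you can sanity-check what was generated. A quick sketch of the same idea: the -subj flag is my addition to make the command non-interactive, and I use separate key and certificate files for clarity (all filenames here are placeholders):

```shell
# Generate a throwaway self-signed cert; -subj avoids the interactive prompts.
# The CN "www.sample.com" is just an example value.
openssl req -new -newkey rsa:1024 -days 365 -nodes -x509 \
    -subj "/CN=www.sample.com" \
    -keyout www.sample.com.key -out www.sample.com.crt

# Confirm the CN and validity period of the certificate Squid will present
openssl x509 -in www.sample.com.crt -noout -subject -dates
```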

This will of course cause the client browser to display an error:


In an enterprise environment you’ll probably want to generate the certificate using a CA that the clients already trust. For example, you could generate the certificate using Microsoft’s CA and use certificate auto-enrolment to push the certificate out to all the clients in your domain.

Onto the actual SQUID configuration. Edit the /etc/squid.conf file to show the following:

always_direct allow all
ssl_bump allow all

http_port transparent

#the below should be placed on a single line
https_port transparent ssl-bump cert=/etc/squid/ssl_cert/www.sample.com.pem key=/etc/squid/ssl_cert/private/www.sample.com.pem

Note that you may need to change the “cert=” and “key=” paths to point to the correct files in your environment. You will of course also need to change the IP address.

The first directive (always_direct) is needed because of SslBump. By default, ssl_bump operates in accelerator mode; in that mode the proxy does not know which backend server to retrieve content from, and you would see “failed to select source for” in the debug log (cache.log). This directive instructs the proxy to bypass accelerator mode and go direct. More details on this here:


The second directive (ssl_bump) instructs the proxy to allow all SSL connections, but this can be modified to restrict access. You can also use “sslproxy_cert_error” to deny access to sites with invalid certificates. More details on this here:


Start squid and check for any errors. If no errors are reported, run:

netstat -nap | grep 3129

to make sure the proxy is up and running. Next, configure iptables to perform destination NAT, basically to redirect the traffic to the proxy:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination
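For reference, with the destination filled in, a complete pair of rules might look like this. The proxy address 10.0.0.1 and the ports 3128/3129 are placeholders of my own; substitute the values from your http_port and https_port lines:

```
# Redirect client web traffic arriving on eth0 to the proxy's HTTP and HTTPS ports
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j DNAT --to-destination 10.0.0.1:3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.1:3129
```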

The last thing to be done is to either place the proxy physically inline with the traffic or redirect traffic to it using a router. Keep in mind that the proxy will change the source IP address of requests to its own IP; in other words, by default it does not reflect the client IP.

That was it in my case. I also tried implementing something similar using explicit mode. This was my squid.conf; note that only one port is needed for both HTTP and HTTPS, since HTTPS is tunnelled over HTTP using the CONNECT method:

always_direct allow all
ssl_bump allow all

#the below should be placed on a single line

http_port 8080 ssl-bump cert=/etc/squid/ssl_cert/proxy.testdomain.deCert.pem key=/etc/squid/ssl_cert/private/proxy.testdomain.deKey_without_Pp.pem

As regards my previous discussion of notifying users that they are being monitored, consider using greasyspoon:


With this in place, you can instruct greasyspoon to send a notification page to clients. If they accept this page, a cookie (let’s say it is called “NotifySSL”) is set. GreasySpoon can then check for the presence of this cookie in subsequent requests and, if present, allow the connection. If the cookie is not present, users again get the notification page. For security reasons, cookies are normally valid for only one domain, so users may end up having to accept the notification for each different domain they visit. However, you can use greasyspoon in conjunction with a backend MySQL database or similar to record source IP addresses that have already been notified, and perform IP-based notifications. Anything is possible :)
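The accept-once flow described above boils down to a simple check on the Cookie request header. A minimal sketch of that decision logic (the cookie name NotifySSL comes from the example above; the function name and return values are hypothetical illustrations, not GreasySpoon’s actual API):

```shell
# Decide whether a request may pass or must receive the notification page,
# based on the raw Cookie header: "allow" if the NotifySSL cookie is present,
# "notify" otherwise.
notify_decision() {
    case "$1" in
        *NotifySSL=*) echo "allow" ;;
        *)            echo "notify" ;;
    esac
}

notify_decision "JSESSIONID=abc123; NotifySSL=accepted"   # prints "allow"
notify_decision "JSESSIONID=abc123"                       # prints "notify"
```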

SSL session ID & IPS

Intermittent access issues to HTTPS sites…

Issue :

Randomly, the same HTTPS site would sometimes not respond. IE would show its very unhelpful “page cannot be displayed”, while Firefox displayed the slightly more descriptive “peer received a valid certificate but access denied”.

Cause (in this case) :

An upstream Fortigate IPS was dropping “unknown” SSL session IDs

Troubleshooting :

In wireshark, run the following filter:


In this case we saw the following:


Usually, the “access denied” message means that the client is missing a client-side certificate used for authentication. So of course the first step is to check whether the site requires any client-side authentication. That wasn’t the case here, so we expanded the above wireshark filter to see the whole SSL handshake:

ssl.alert_message or ssl.handshake

After isolating a TCP stream of interest, we saw both successful and unsuccessful handshakes.

A successful one:


An unsuccessful one:


So, the problem has to be in the “client hello”.

Comparing the client hellos, the problem becomes apparent:


There seems to be a problem with the session ID. Every time the client tries to resume an SSL session by re-using the session ID, something blocks it.

After some digging around, we found the following, which solved the issue:


Apparently Fortigate has an inbuilt IPS that drops any unknown session IDs. There’s some good theory in the above link :)
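If you need to reproduce or verify this behaviour, openssl can exercise session re-use directly: the -reconnect option completes one handshake and then reconnects five times, attempting to resume the same session ID each time. With an upstream device dropping unknown session IDs, the resumed connections fail or fall back to full handshakes. The hostname below is a placeholder:

```
# Handshake once, then reconnect 5 times re-using the same SSL session ID
openssl s_client -connect www.example.com:443 -reconnect < /dev/null
```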

Redirecting HTTPS sites using ProxySG

Customers often ask whether, when using a proxy, it’s possible to redirect one HTTPS site to another. IE will not accept a non-2xx response code to an HTTPS request, so officially there is nothing more to it: it’s not possible…

I have a workaround/hack for this. Please be aware that I provide it with no guarantees. I have tested this and I know it works, but officially it is not supported. Keep in mind that the reason browsers do not accept non-2xx responses is security advisories, so it’s not exactly a Bluecoat issue. Nevertheless, here is my workaround.

In this example I will be redirecting https://www.google.com to https://www.hsbc.co.uk/1/2/.

1. Enable SSL interception. In both transparent and explicit proxy mode, SSL interception is needed, since without it the URLs are encrypted and the proxy cannot read them.
2. Create a “web access” layer and add a new rule.

In my example, this is what the rule looks like:


The source is set to any. You can of course change this to your needs. The destination is an object of “Request URL”, and is simple:


Basically any HTTPS connection going to google will be redirected. The action is an object of type “Notify User”. This is the important part. It will look like this:


The “hack” is in the title bar. Note how I manually close the title tag (</title>) and insert a <meta> tag. This tag forces the client browser to “refresh” after 0 seconds, with the refresh destination set to https://www.hsbc.co.uk/1/2/. The final <title> tag is there to properly close the </title> tag that is automatically inserted by the proxy. The net effect is to redirect the client.
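To illustrate, the value entered into the title field would look something like this single line (the URL is the example target from above; the title tags that appear around it in the rendered page are the ones the proxy adds by itself):

```
</title><meta http-equiv="refresh" content="0;url=https://www.hsbc.co.uk/1/2/"><title>
```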

The last two points to note are that I chose “notify on every host”, so that every host triggers the redirect every time, and similarly set the notification interval to “After 1 min”.

Using client certificate authentication w/ BC ProxySG

Had to deal with an interesting case lately. This is what the customer wanted:


As you can see, the link between the client and the ProxySG is negotiated over HTTPS, while the link between the ProxySG and the OCS is plain old HTTP. This is easily handled by the ProxySG using the reverse proxy configuration; I won’t go into that here. The interesting part was the client certificate.

The idea is that only computers possessing this client certificate would be able to reach the OCS (Original Content Server) in question. The documentation available gave good clues but was not totally complete (hopefully we’ll get this content published in the BC KB to rectify this).

Turns out that the above scenario is possible using “certificate authentication” on the ProxySG (configuration > authentication > certificate). There are four major steps to achieve this:

1. Define an appropriate CCL (CA Certificate List)

2. Define an HTTPS reverse proxy service

3. Define a certificate authentication realm

4. Define appropriate CPL

Going into a bit more detail on the above steps:

1. Define an appropriate CCL (CA Certificate List)

In order for this to work, the appropriate certificates must be issued and imported into the participating network nodes. A brief summary of the certificates needed:

  • A certificate of type “webserver” must be issued to the ProxySG. Note: if you do not use a certificate of type “webserver” (i.e. with the correct constraints) IE will work, but FF will complain. This webserver certificate is known as the “keyring”. In my example, I chose to name this certificate and keyring as “proxy123”
  • The root CA certificate, and any intermediate CA certificates, must be imported into the ProxySG under SSL > CA certificates.
  • The correct client certificate of type “user”, issued by the same CA, should be given to the client and imported into their certificate store. This certificate is usually distributed in “pfx” format and requires a password to import.

It’s important, for IE to work, that a separate CCL is created under “SSL > CA certificates > CA Certificate Lists”, and that within this CCL you add only the CA used above. This is critical for IE. The reason: during the SSL/TLS negotiation, the proxy sends a list of all the CAs it trusts, so that the browser can in turn send the correct client certificate. If this CA list is too long, it spans multiple handshake messages, which IE doesn’t like. So we instruct the proxy to send only one CA in the list (via the CCL), which fits in one message, and IE won’t complain. In this example, I’ve called the CCL “reverse_sg”.
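You can confirm what CA list the proxy actually advertises during the handshake with openssl. Look for the “Acceptable client certificate CA names” section of the output; with the single-CA CCL in place it should contain exactly one entry. The hostname below is a placeholder for your reverse proxy address:

```
# Inspect the CA list sent in the CertificateRequest handshake message
openssl s_client -connect proxy.example.com:443 < /dev/null
```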

2. Define an HTTPS reverse proxy service

As previously noted, we need a reverse proxy setup to convert between HTTPS and HTTP. There is plenty of documentation on how to set this up properly, but one thing that differs from normal reverse proxy deployments is that the Verify Client option must be enabled. In other words, under configuration > services > proxy services, edit the HTTPS reverse proxy service and you should see something similar to:


Note how the verify client option is enabled, and how the appropriate keyring (webserver certificate) and CCL created in step 1 is chosen.

3. Define a certificate authentication realm

This is relatively straightforward. Simply create a new certificate realm, with any name needed (authentication > certificate > certificate realms). It’s interesting to note that the certificate realm is different from other realms in that it splits authentication and authorization into two distinct processes. Recall that authentication is the process of validating that a user is really who they say they are, while authorization decides what resources that user can use. In the certificate realm, authorization is optional. That’s to say, so long as a user can produce a valid certificate, they can access anything they need (also called a system-high access model).

This is the model I’ll be following in this example, so all that’s needed is to enter the correct OID under the “Extended Key Usage”. Under the “Certificate Main” tab, click the “ADD” button. This will ask for an OID that the proxy will check for in the client certificate. To obtain this OID, open a client certificate via MMC or IE, click the “Details” tab and select the “Enhanced Key Usage”. You’ll see the following:


Note the string of numbers near “client authentication”: this is the OID. It may differ depending on the CA, of course.
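If you prefer the command line to the MMC, openssl can read the same extension. A sketch, assuming OpenSSL 1.1.1 or later (for the -addext and -ext options); the certificate here is a throwaway one generated on the spot for illustration, not one of the certificates from this article:

```shell
# Create a throwaway self-signed client cert carrying the clientAuth EKU
# (OID 1.3.6.1.5.5.7.3.2), then print the extension back out
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
    -subj "/CN=test-client" \
    -addext "extendedKeyUsage=clientAuth" \
    -keyout client.pem -out client.pem

# Shows the Extended Key Usage ("TLS Web Client Authentication")
openssl x509 -in client.pem -noout -ext extendedKeyUsage
```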

4. Define appropriate CPL

The hard part’s been done. All that’s left is to enforce it, via CPL. Open the local policy file, or create a new CPL layer in VPM, and insert the following:

<Proxy> ssl.proxy_mode=https-reverse-proxy
    authenticate(tester_dv)

The first line instructs the proxy to use certificate authentication only for HTTPS reverse proxy users, not for all users.

The second line instructs the proxy which certificate realm to use. In my case, the certificate realm created in step 3 was named “tester_dv”, which is what I used here.


