Enforcing encryption at scale – Facebook Engineering

Our infrastructure supports thousands of services that process billions of requests per second. We previously discussed how we built our service encryption infrastructure to keep these globally distributed services running securely and efficiently. This post covers the system we developed to enforce encryption policies on our network and shares some of the lessons we learned along the way. The goal of this enforcement is to quickly catch and stop any regression in protecting our internal application-level traffic with TLS.

Organizational challenges

Implementing a policy that enforces encryption in transit at Facebook's scale requires careful planning and communication, on top of the technical challenges we're about to discuss. We want the site to stay up and reliable, so people using our services should be neither affected by nor even aware of infrastructure changes.

Communicating our intent, concrete schedules, and rollout strategy went a long way toward minimizing potential disruption for the thousands of teams running services at Facebook. Internally we use Workplace, which lets us share that information with a variety of groups with a single share button and consolidate feedback and concerns in one place for all employees. We made sure to include:

  • A description of the impact of our enforcement mechanism and what it might look like at the application level
  • A dashboard for engineers to see if their traffic is being impacted
  • The rollout and monitoring plan
  • Dedicated points of contact and a Workplace group where users can ask questions about the impact and troubleshoot issues

The announcement post itself required multiple discussions within the team to settle on a rollout plan, dashboard requirements, and realistic timelines for achieving the project's goals. This level of communication proved valuable, as the team received important feedback early on.

Building our SSLWall

Hardware chokepoints are a natural approach to transparent enforcement. Options like Layer 7 firewalls would let us do deep packet inspection, but the need for fine-grained rollouts and the complexity of Facebook's network would make implementing such a solution a nightmare. Additionally, working at the network firewall level would create a much larger blast radius of affected traffic, where a single configuration problem could drop traffic we shouldn't be touching.

Our team decided to develop and deploy a system known internally as SSLWall, which cuts non-SSL connections that cross boundaries. Let's dive into the design decisions behind this solution.


We had to be thorough in considering the requirements for a system that could potentially block traffic at such a large scale. The team settled on the following requirements for SSLWall, all of which influenced our design decisions:

  • Visibility into what traffic is being blocked. Service owners needed a way to assess impact, and our team needed to be proactive and reach out when we saw a problem brewing.
  • A passive monitoring mode, with a knob we could turn to switch to active enforcement. This helps us recognize impact early and prepare teams.
  • A mechanism to let certain use cases bypass enforcement, e.g., BGP, SSH, and approved network diagnostic tools.
  • Support for cases like HTTP CONNECT and STARTTLS. These protocols exchange some plaintext before performing a TLS handshake. We have many use cases for them in our infrastructure, such as HTTP tunneling, MySQL, and SMTP, so they must not break, especially since they ultimately encrypt the data with TLS.
  • Extensible configurability. Depending on the environment SSLWall operates in, we may have different requirements. In addition, key controls that can be tuned with minimal disruption let us roll features forward or backward at our own pace.
  • Transparent to the application. Applications shouldn't have to rewrite code or take on additional library dependencies for SSLWall to run. The team needed the ability to iterate quickly and change configuration options independently. This transparency also means SSLWall must be performant, using minimal resources without adding latency.
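To make the monitoring-mode and bypass requirements concrete, here is a minimal user-space sketch of the kind of decision such a system could make per connection. The mode names, bypass ports, and function names are illustrative assumptions, not SSLWall's actual implementation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical sketch: a mode knob (monitor vs. enforce) combined with
 * a bypass list for exempted protocols such as SSH (22) and BGP (179).
 * All names and values here are illustrative. */

enum mode { MODE_MONITOR, MODE_ENFORCE };

static const unsigned short bypass_ports[] = { 22, 179 }; /* SSH, BGP */

static bool is_bypassed(unsigned short port) {
    for (size_t i = 0; i < sizeof(bypass_ports) / sizeof(bypass_ports[0]); i++)
        if (bypass_ports[i] == port)
            return true;
    return false;
}

/* Returns true if the connection should be cut. In monitor mode nothing
 * is ever cut, only logged; in enforce mode, plaintext flows on
 * non-bypassed ports are terminated. */
static bool should_block(enum mode m, unsigned short port, bool is_tls) {
    if (is_tls || is_bypassed(port))
        return false;           /* compliant or explicitly exempted */
    if (m == MODE_MONITOR) {
        printf("would block plaintext flow on port %u\n", port);
        return false;           /* log only */
    }
    return true;                /* enforce: cut the connection */
}
```

Flipping the knob from `MODE_MONITOR` to `MODE_ENFORCE` changes only the final branch, which is what makes a staged rollout with an early observation phase practical.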

These requirements all pointed us toward running a host-level daemon with a user-space and a kernel-space component. We needed a computationally cheap way to transparently inspect and react to all connections.


Because we wanted to inspect every connection without making any application-level changes, we had to do some work in the kernel context. We use eBPF extensively, and it offers all the capabilities SSLWall needs to achieve its goals. We took advantage of a number of technologies that eBPF offers:

  • tc-bpf: We used Linux traffic control (TC) and implemented a filter with eBPF. At this level we can do per-packet computations for packets entering and leaving the box. TC lets us work with a wider range of kernels within the Facebook fleet. It wasn't the perfect solution, but it worked for our needs at the time.
  • kprobes: With eBPF we can attach programs to kprobes, letting us execute code in the kernel context every time certain functions are called. We were interested in the tcp_connect and tcp_v6_destroy_sock functions, which are called when a TCP connection is established or torn down. Older kernels also factored into our use of kprobes.
  • Maps: eBPF provides access to a number of map types, including arrays, bounded LRU maps, and perf events.
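To illustrate the kind of cheap per-packet check a tc-bpf filter can make, here is a user-space sketch of TLS detection based on the record header: a TLS connection's first payload bytes carry content type 0x16 (handshake) and a 0x03 version-major byte. This is an assumption-laden sketch of the general technique, not SSLWall's actual filter logic.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative user-space version of a per-packet classification:
 * the first bytes of a TLS connection form a record header with
 * content type 0x16 (handshake) followed by a 0x03 version-major
 * byte (SSL3/TLS 1.x). Anything else is treated as plaintext. */
static bool looks_like_tls_handshake(const uint8_t *payload, size_t len) {
    if (len < 2)
        return false;           /* early exit: too short to classify */
    return payload[0] == 0x16   /* record type: handshake */
        && payload[1] == 0x03;  /* legacy version major byte */
}
```

In a real BPF program, early exits like the length check above matter twice: they keep the per-packet CPU cost down and help stay within the verifier's complexity limits.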

Diagrams showing how kprobes, the tc filter, and our maps interact with each other when determining if a connection needs to be blocked.

The management daemon

We built a daemon that manages the eBPF programs we install and emits logs to Scribe from our perf events. The daemon also handles updates to our TC filter, processes configuration changes (using Facebook's Configerator), and monitors health.

Our eBPF programs are also bundled with this daemon. This makes releases easier to manage, as we only have to monitor one software unit instead of tracking separate daemon and eBPF releases. In addition, we can change the schema of our BPF tables, which are consulted from both user space and kernel space, without compatibility problems between releases.
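The benefit of bundling both sides in one release can be sketched with a hypothetical shared flow-table schema like the one below. Because the kernel-space program and the daemon always ship together, a struct like this can change shape freely between releases; all field and struct names here are illustrative, not SSLWall's actual schema.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical schema for a BPF map shared between the kernel-space
 * eBPF programs and the user-space daemon. Shipping both sides in one
 * release means this layout can evolve without cross-release
 * compatibility concerns. */
struct flow_key {
    uint8_t  saddr[16];  /* source IPv6 address */
    uint8_t  daddr[16];  /* destination IPv6 address */
    uint16_t sport;      /* source port */
    uint16_t dport;      /* destination port */
};

struct flow_state {
    uint64_t bytes_seen; /* payload bytes observed so far */
    uint8_t  classified; /* 0 = unknown, 1 = TLS, 2 = plaintext */
    uint8_t  blocked;    /* nonzero once the flow has been cut */
};
```

The key is what the tc filter looks up per packet; the value is what the daemon reads back to build dashboards and logs.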

Technical challenges

Unsurprisingly, we encountered a number of interesting technical challenges in rolling out SSLWall at Facebook's scale. Some highlights:

  • TCP Fast Open (TFO): We hit an interesting challenge with the execution order of kprobes and TC filters that was exposed by our use of TFO in the infrastructure. In particular, we had to move part of our flow-tracking code to a kprobe pre-handler.
  • Size restrictions for BPF programs: All BPF programs are subject to size and complexity limits, which can vary depending on the kernel version.
  • Performance: We spent many development cycles tuning our BPF programs, especially the TC filter, so that SSLWall's CPU impact on some of our critical high-QPS and high-fanout services remained negligible. Identifying conditions for early exit and using BPF arrays over LRU maps where possible proved effective.

Transparent TLS and the long tail

With enforcement in place, we needed a way to handle non-compliant services without significant development time. These included things like torrent clients, open source message queues, and some Java applications. While most applications use common internal libraries into which we could build this logic, those that don't needed another solution.

Essentially, the team settled on the following requirements for what we call Transparent TLS (TTLS for short):

  • Encrypt connections transparently without application changes.
  • Avoid double encryption for existing TLS connections.
  • Performance can be suboptimal for this long tail.

It's clear that a proxy solution would help here, but we had to make sure that application code didn't need to change and that configuration stayed minimal.

We decided on the following architecture:

The challenge with this approach is transparently redirecting application connections to the local proxy. Again, we used BPF to solve this problem. Thanks to the cgroup/connect6 hook, we can intercept all connect(2) calls from the application and redirect them to the proxy when necessary.

Diagram showing application and proxy logic for transparent connections.

Aside from keeping the application unchanged, the BPF program makes policy decisions about routing through the proxy. For example, we optimized this flow so that any TLS connections the application creates itself bypass the proxy, avoiding double encryption.
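The redirect-or-bypass decision above can be sketched in user space as a pure function over the connection's destination. The proxy port, the struct, and the way "already TLS" is determined are all illustrative assumptions; a real cgroup/connect6 program would rewrite the destination in the BPF socket-address context instead.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of the policy a cgroup/connect6 BPF program
 * could apply to a connect(2) call: plaintext connections leaving the
 * host are rewritten to a local TTLS proxy; connections the
 * application already encrypts with TLS, and loopback traffic, are
 * left alone to avoid double encryption and needless hops. */

#define TTLS_PROXY_PORT 9090   /* illustrative local proxy port */

struct dest {
    bool     loopback;  /* destination is the local host */
    uint16_t port;      /* original destination port */
};

/* Returns the port the connection should actually be sent to. */
static uint16_t rewrite_port(struct dest d, bool app_uses_tls) {
    if (app_uses_tls)
        return d.port;          /* already encrypted: bypass the proxy */
    if (d.loopback)
        return d.port;          /* local traffic: no redirect needed */
    return TTLS_PROXY_PORT;     /* plaintext leaving the host: proxy it */
}
```

Keeping this decision in the BPF layer is what lets the application stay completely unchanged: from its point of view, connect(2) simply succeeds as usual.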

This enforcement work has put us in a state where we can confidently say our traffic is encrypted at our scale. But our work isn't finished. For example, there are many new BPF capabilities we want to take advantage of as we drop support for older kernels. We can also improve our transparent proxy solutions, using custom protocols to multiplex connections and improve performance.

We thank Takshak Chahande, Andrey Ignatov, Petr Lapukhov, Puneet Mehra, Kyle Nekritz, Deepak Ravikumar, Paul Saab, and Michael Shao for their work on this project.
