Safe Linking

Starting from glibc 2.32, a new Safe-Linking mechanism was implemented to protect the singly-linked lists (the fastbins and tcachebins). The idea is to protect the fd pointer of free chunks in these bins with a mangling operation, making it more difficult to overwrite it with an arbitrary value.

Every single fd pointer is protected by the PROTECT_PTR macro, which is undone by the REVEAL_PTR macro:

#define PROTECT_PTR(pos, ptr) \
  ((__typeof (ptr)) ((((size_t) pos) >> 12) ^ ((size_t) ptr)))
#define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)

Here, pos is the location of the fd pointer being protected (a field inside the current chunk) and ptr is the location of the chunk we are pointing to (which is NULL if the chunk is the last in the bin). Once again, we are using ASLR to protect! The >> 12 discards the low 12 bits of pos - the page offset, which is not randomized - keeping only the upper 52 bits (effectively around 28, really, as the topmost bits are pretty predictable).

It's a very rudimentary protection - we use the current location and the location we point to in order to mangle it. From a programming standpoint, it has virtually no overhead or performance impact. PROTECT_PTR is used in tcache_put() and in two locations in _int_free() (for the fastbins), and you can find REVEAL_PTR used wherever those pointers are read back.

So, what does this mean to an attacker?

Again, heap leaks are key. If we get a heap leak, we know both parts of the XOR in PROTECT_PTR, and we can easily recreate it to fake our own mangled pointer.


It might be tempting to say that a partial overwrite is still possible, but there is a new security check that comes along with this Safe-Linking mechanism: the alignment check. This check ensures that chunks are 16-byte aligned and is only relevant to singly-linked lists (like all of Safe-Linking). A quick Ctrl-F for unaligned in malloc.c will bring up plenty of different locations. The most important ones for us as attackers are probably the one in tcache_get() and the ones in _int_malloc().

When trying to get a chunk e out of the tcache, its alignment is checked first:

if (__glibc_unlikely (!aligned_OK (e)))
  malloc_printerr ("malloc(): unaligned tcache chunk detected");

You may notice some of them use !aligned_OK while others use misaligned_chunk().

#define aligned_OK(m)  (((unsigned long)(m) & MALLOC_ALIGN_MASK) == 0)

#define misaligned_chunk(p) \
  ((uintptr_t)(MALLOC_ALIGNMENT == 2 * SIZE_SZ ? (p) : chunk2mem (p)) \
   & MALLOC_ALIGN_MASK)

The macros are defined side-by-side, but the difference is that aligned_OK takes a memory address while misaligned_chunk takes a chunk pointer.

MALLOC_ALIGN_MASK is defined as such:

#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)

MALLOC_ALIGNMENT is defined as 16 for i386 (and x86-64 alike). In binary that's 10000, so MALLOC_ALIGN_MASK is 1111, meaning the final 4 bits of the address are checked. This results in 16-byte alignment, as expected.

This alignment check makes blind attacks much less reliable: the demangled pointer must be 16-byte aligned, so its low 4 bits have to come out as zero, and without knowing the relevant bits of pos >> 12 an attacker guessing them has at most a 1-in-16 chance of passing the check.
