Safe Linking

Starting from glibc 2.32, a new Safe-Linking mechanism was implemented to protect the singly-linked lists (the fastbins and tcachebins). The theory is to protect the fd pointer of free chunks in these bins with a mangling operation, making it more difficult to overwrite it with an arbitrary value.

Every single fd pointer is protected by the PROTECT_PTR macro, which is undone by the REVEAL_PTR macro:

```c
#define PROTECT_PTR(pos, ptr) \
  ((__typeof (ptr)) ((((size_t) pos) >> 12) ^ ((size_t) ptr)))
#define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)
```

Here, pos is the location the pointer is stored at (the fd field of the current chunk) and ptr the location of the chunk we are pointing to (which is NULL if the chunk is the last in the bin). Once again, we are using ASLR to protect! The >>12 gets rid of the predictable last 12 bits (the page offset), keeping only the upper 52 bits (or effectively 28, really, as the uppermost ones are pretty predictable):

[Image: the mangling operation, courtesy of https://research.checkpoint.com/2020/safe-linking-eliminating-a-20-year-old-malloc-exploit-primitive/]
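To make the arithmetic concrete, here is a minimal standalone sketch of the same mangling (the addresses are made up; only their shapes matter):

```c
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    /* Hypothetical heap addresses for two free chunks in the same bin. */
    size_t pos = 0x55555555b2a0;   /* where the fd pointer is stored */
    size_t ptr = 0x55555555b2e0;   /* the chunk being pointed to     */

    size_t mangled  = (pos >> 12) ^ ptr;        /* what PROTECT_PTR stores */
    size_t revealed = (pos >> 12) ^ mangled;    /* what REVEAL_PTR reads   */
    printf("mangled:  0x%zx\n", mangled);
    printf("revealed: 0x%zx\n", revealed);      /* == ptr again */

    /* The last chunk in a bin points to NULL, so it simply stores
       pos >> 12 - reading it leaks the upper bits of a heap address. */
    printf("mangled NULL: 0x%zx\n", (pos >> 12) ^ (size_t) 0);
    return 0;
}
```

Note the NULL case: the last chunk in a bin stores PROTECT_PTR(pos, NULL), which is just pos >> 12, so a read primitive on a freed chunk hands you (shifted) heap address bits for free.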

It's a very rudimentary protection - we use the current location and the location we point to in order to mangle it. From a programming standpoint, it has virtually no overhead or performance impact. We can see that PROTECT_PTR has been implemented in tcache_put() and in two locations in _int_free() (for the fastbins). You can find REVEAL_PTR used as well, wherever the mangled pointers are read back.

So, what does this mean to an attacker?

Again, heap leaks are key. If we get a heap leak, we know both parts of the XOR in PROTECT_PTR, and we can easily recreate it to fake our own mangled pointer.
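For example, a minimal sketch of faking the mangled pointer (all names and addresses here are hypothetical; in a real exploit heap_leak would come from a read primitive, and the offsets depend on the target):

```c
#include <stdio.h>
#include <stddef.h>

/* Same operation as PROTECT_PTR: XOR the target with the storage
   location shifted right by 12. */
static size_t mangle(size_t fd_addr, size_t target)
{
    return (fd_addr >> 12) ^ target;
}

int main(void)
{
    size_t heap_leak = 0x55555555b000;   /* hypothetical leaked chunk address      */
    size_t fd_addr   = heap_leak + 0x10; /* hypothetical: where the fd we corrupt lives */
    size_t target    = 0x7ffff7dd1230;   /* hypothetical address we want malloc to
                                            return; must be 16-byte aligned
                                            (see the alignment check below)        */

    printf("write 0x%zx over the fd\n", mangle(fd_addr, target));
    return 0;
}
```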


It might be tempting to say that a partial overwrite is still possible, but there is a new security check that comes along with this Safe-Linking mechanism, the alignment check. This check ensures that chunks are 16-byte aligned and is only relevant to singly-linked lists (like all of Safe-Linking). A quick Ctrl-F for unaligned in malloc.c will bring up plenty of different locations. The most important ones for us as attackers are probably the one in tcache_get() and the ones in _int_malloc().

When trying to get a chunk e out of the tcache, alignment is checked:

```c
if (__glibc_unlikely (!aligned_OK (e)))
  malloc_printerr ("malloc(): unaligned tcache chunk detected");
```

There are three checks in _int_malloc(). The first is in REMOVE_FB, the macro for removing a chunk from a fastbin:

```c
if (__glibc_unlikely (pp != NULL && misaligned_chunk (pp)))       \
  malloc_printerr ("malloc(): unaligned fastbin chunk detected");
```

Once on the first chunk returned from the fastbin:

```c
if (__glibc_unlikely (misaligned_chunk (victim)))
  malloc_printerr ("malloc(): unaligned fastbin chunk detected 2");
```

And lastly on every fastbin chunk during the movement over to the respective tcache bin:

```c
if (__glibc_unlikely (misaligned_chunk (tc_victim)))
  malloc_printerr ("malloc(): unaligned fastbin chunk detected 3");
```

_int_free() checks the alignment if the tcache_entry key is already set to the value it's meant to be, and a whole double-free iteration check has to be done:

```c
if (__glibc_unlikely (e->key == tcache))
  {
    tcache_entry *tmp;
    LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
    for (tmp = tcache->entries[tc_idx]; tmp; tmp = REVEAL_PTR (tmp->next))
      {
        if (__glibc_unlikely (!aligned_OK (tmp)))
          malloc_printerr ("free(): unaligned chunk detected in tcache 2");
        if (tmp == e)
          malloc_printerr ("free(): double free detected in tcache 2");
        /* If we get here, it was a coincidence.  We've wasted a
           few cycles, but don't abort.  */
      }
  }
```

When all the fastbins are consolidated into the unsorted bin in malloc_consolidate(), they are checked for alignment:

```c
if (__glibc_unlikely (misaligned_chunk (p)))
  malloc_printerr ("malloc_consolidate(): "
                   "unaligned fastbin chunk detected");
```

Not super important functions for attackers, but fastbin chunks are also checked for alignment in int_mallinfo(), __malloc_info() and do_check_malloc_state(), and tcache chunks in tcache_thread_shutdown():

```c
if (__glibc_unlikely (misaligned_chunk (p)))
  malloc_printerr ("<funcname>(): "
                   "unaligned fastbin chunk detected");
```

```c
if (__glibc_unlikely (!aligned_OK (e)))
  malloc_printerr ("tcache_thread_shutdown(): "
                   "unaligned tcache chunk detected");
```

You may notice some of them use !aligned_OK while others use misaligned_chunk().

The macros are defined side-by-side, but really aligned_OK is for addresses while misaligned_chunk() is for chunks:

```c
#define aligned_OK(m)  (((unsigned long)(m) & MALLOC_ALIGN_MASK) == 0)

#define misaligned_chunk(p) \
  ((uintptr_t)(MALLOC_ALIGNMENT == 2 * SIZE_SZ ? (p) : chunk2mem (p)) \
   & MALLOC_ALIGN_MASK)
```
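As a quick illustration of the difference, here is a toy sketch with made-up addresses, assuming the common 64-bit case where MALLOC_ALIGNMENT == 2 * SIZE_SZ == 16 (so misaligned_chunk() reduces to masking the chunk pointer directly):

```c
#include <stdio.h>
#include <stdint.h>

#define MALLOC_ALIGNMENT 16
#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)

/* Simplified forms for the 64-bit MALLOC_ALIGNMENT == 2 * SIZE_SZ case. */
#define aligned_OK(m)        (((unsigned long)(m) & MALLOC_ALIGN_MASK) == 0)
#define misaligned_chunk(p)  ((uintptr_t)(p) & MALLOC_ALIGN_MASK)

int main(void)
{
    uintptr_t good = 0x55555555b2a0;  /* low 4 bits are 0 -> passes   */
    uintptr_t bad  = 0x55555555b2a8;  /* low 4 bits are 8 -> rejected */

    printf("aligned_OK(good) = %d, misaligned_chunk(good) = %#lx\n",
           (int) aligned_OK(good), (unsigned long) misaligned_chunk(good));
    printf("aligned_OK(bad)  = %d, misaligned_chunk(bad)  = %#lx\n",
           (int) aligned_OK(bad), (unsigned long) misaligned_chunk(bad));
    return 0;
}
```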

MALLOC_ALIGN_MASK is defined as such:

```c
#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
```

MALLOC_ALIGNMENT is defined for i386 as 16. In binary that's 10000, so MALLOC_ALIGN_MASK is 1111, meaning the low 4 bits of the address are checked. This results in 16-byte alignment, as expected.

This alignment check means that even a partial overwrite has to produce a pointer that decodes to a 16-byte-aligned address: the last 4 bits of the revealed pointer must be zero, so a blind guess only passes the check with a 1/16 chance.
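To put a number on the 1/16 figure, here is a quick self-contained simulation (the mask mirrors the macros above; the RNG and trial count are arbitrary):

```c
#include <stdio.h>
#include <stdlib.h>

#define MALLOC_ALIGN_MASK 0xf
#define aligned_OK(m) (((unsigned long)(m) & MALLOC_ALIGN_MASK) == 0)

int main(void)
{
    int passes = 0, trials = 1000000;
    srand(1337);
    for (int i = 0; i < trials; i++)
      {
        /* A blind guess decodes to an effectively random address. */
        unsigned long guess = ((unsigned long) rand() << 32) ^ (unsigned long) rand();
        if (aligned_OK (guess))
          passes++;
      }
    /* Only guesses whose low 4 bits happen to be zero survive: ~1/16. */
    printf("%d / %d guesses pass the alignment check\n", passes, trials);
    return 0;
}
```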
