Starting from glibc 2.32, a new Safe-Linking mechanism protects the singly-linked lists (the fastbins and tcache bins). The idea is to guard the `fd` pointer of free chunks in these bins with a mangling operation, making it more difficult to overwrite with an arbitrary value.
Every single `fd` pointer is protected by the `PROTECT_PTR` macro, which is undone by the `REVEAL_PTR` macro:
Here, `pos` is the location of the current chunk and `ptr` the location of the chunk we are pointing to (`NULL` if the chunk is the last in the bin). Once again, we are using ASLR to protect! The `>> 12` gets rid of the predictable last 12 bits of the address, keeping only the random upper 52 bits (or effectively 28, really, as the uppermost ones are pretty predictable).
It's a very rudimentary protection - we use the current location and the location we point to in order to mangle it, and from a programming standpoint it has virtually no overhead or performance impact. `PROTECT_PTR` is used in `tcache_put()` and in two locations in `_int_free()` (for the fastbins), and you can find `REVEAL_PTR` used wherever those pointers are read back.
So, what does this mean to an attacker?
Again, heap leaks are key. If we get a heap leak, we know both halves of the XOR in `PROTECT_PTR`, so we can easily recreate the mangling and forge our own protected pointer.
It might be tempting to say that a partial overwrite is still possible, but a new security check comes along with the Safe-Linking mechanism: the alignment check. This check ensures that chunks are 16-byte aligned and, like all of Safe-Linking, is only relevant to the singly-linked lists. A quick Ctrl-F for `unaligned` in `malloc.c` will bring up plenty of different locations. The most important ones for us as attackers are probably the one in `tcache_get()` and the ones in `_int_malloc()`.
When trying to get a chunk `e` out of the tcache, its alignment is checked.
There are three checks in `_int_malloc()`. The first is in `REMOVE_FB`, the macro for removing a chunk from a fastbin; the next is on the first chunk returned from the fastbin; and the last is on every fastbin chunk as it is moved over to the respective tcache bin.
`_int_free()` checks the alignment when the `tcache_entry`'s `key` is already set to the value it's meant to hold and the full double-free iteration check has to run.
When all the fastbins are consolidated into the unsorted bin, each chunk is checked for alignment.
Not super important functions for attackers, but chunks are also checked for alignment in `int_mallinfo()`, `__malloc_info()`, `do_check_malloc_state()` and `tcache_thread_shutdown()`.
You may notice that some of these checks use `!aligned_OK` while others use `misaligned_chunk()`. The macros are defined side-by-side; the difference is that `aligned_OK` takes an address while `misaligned_chunk()` takes a chunk pointer.
`MALLOC_ALIGN_MASK` is defined as such:
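From `malloc.c`:

```c
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
```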
`MALLOC_ALIGNMENT` is defined for i386 as `16`. In binary that's `10000`, so `MALLOC_ALIGN_MASK` is `1111`, meaning the last four bits of the address are checked. This results in 16-byte alignment, as expected.
This alignment check means a blind partial overwrite now has to guess 4 bits of entropy, giving only a 1/16 chance that the demangled pointer passes the alignment check.