Hardening My Web Server: A Layered Security Approach
Published on May 16, 2025
Introduction
Running my own web server means dealing with a constant stream of hostile traffic. I administer the Debian box almost exclusively over SSH (with zsh as my shell), authenticating with a passphrase-protected private key, but I still need to make sure nobody can hammer away at password attempts 24/7. NGINX serves the static/dynamic content, logs every request, and won't serve anything sensitive. It's protected by fail2ban and UFW, as is the domain itself, which is mostly probed for common exploit paths. My web server gets hundreds of malicious scans per day. They're filtered out, logged, and blocked now, but I figured I'd give you a glimpse into not just my approach, but the attackers' as well.
Security Stack Overview
Before diving into details, here's the complete security approach I'll be covering:
•Log Analysis: Manual inspection with grep/awk and automated analysis with GoAccess
•Intrusion Prevention: Using fail2ban to automatically block malicious IPs
•Firewall Configuration: UFW setup with custom rules and rate limiting
•SSH Hardening: Key-based authentication with password attempts disabled
•Service Monitoring: Automatic restart for critical services like fail2ban and NGINX
This multi-layered strategy provides comprehensive protection against the continuous stream of automated attacks targeting web servers today.
Manual Log Analysis Techniques
If I check the NGINX logs at /var/log/nginx/access.log, I can see that every HTTP/HTTPS request is logged there: IP, path, method, status code, user agent, and more.
I can view them raw with:
tail -n 30 /var/log/nginx/access.log
to grab the most recent 30 entries.

Not very readable. An easy way to ascertain the most common attack vector paths is by grepping for it:
grep '"GET ' /var/log/nginx/access.log
This will give us all of the GET requests on our most recent log file to our web server.
We can make it even more readable by piping our output into an awk command, which will split each line into fields and then grab the 7th field with $7 (which will be the requested path) so that only the requested paths will get printed:
grep '"GET ' /var/log/nginx/access.log | awk '{print $7}'

And then let's filter out those bare / lines by piping our last command's output into another grep filter to get rid of that junk. ^/$ matches a line that's only / (the root path), -v means invert (exclude) that match, and -E enables extended regex syntax (for ^/$ in our case).
grep '"GET ' /var/log/nginx/access.log | awk '{print $7}' | grep -vE '^/$'

Now for kicks, let's get the count of the most attempted paths in these GET commands. We can tack on a sort and uniq -c:
grep '"GET ' /var/log/nginx/access.log | awk '{print $7}' | grep -vE '^/$' | sort | uniq -c | sort -nr
Sort puts the paths in order so our duplicates are matched up, uniq -c counts how many times each unique path appears, and then sort -nr sorts the counts numerically descending.
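To see exactly what that pipeline does without touching real logs, here's the same chain run against a few fabricated combined-format log lines (the IPs and paths below are invented for illustration):

```shell
# Three fake probes plus one legitimate request for the root path,
# which the grep -vE '^/$' filter should drop from the output.
printf '%s\n' \
  '203.0.113.9 - - [16/May/2025:10:00:00 +0000] "GET /.env HTTP/1.1" 404 150 "-" "bot"' \
  '203.0.113.9 - - [16/May/2025:10:00:01 +0000] "GET /.env HTTP/1.1" 404 150 "-" "bot"' \
  '198.51.100.4 - - [16/May/2025:10:00:02 +0000] "GET /wp-login.php HTTP/1.1" 404 150 "-" "bot"' \
  '198.51.100.4 - - [16/May/2025:10:00:03 +0000] "GET / HTTP/1.1" 200 512 "-" "curl"' \
  | grep '"GET ' | awk '{print $7}' | grep -vE '^/$' | sort | uniq -c | sort -nr
```

Two /.env hits collapse into one counted line at the top, the root request disappears, and you get a ranked list of probed paths.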

Pretty cool right? You can see exactly what the hostile scanners are looking for. Lastly, we can run:
grep '"GET ' /var/log/nginx/access.log | awk '{print $7, $9}' | grep -vE '^/ ' | sort | uniq
to show not just each path, but also its HTTP response code. (Note the filter becomes '^/ ' here, since each output line is now a path followed by a status code.) 200s and 304s most likely succeeded, while everything else (400, 403, 404) means blocked, missing, or refused.

You can see that nothing dangerous slipped through. Our legit frontend content (/blog, /certifications, /love, /projects, etc.) is succeeding along with our dynamic content (/blog?..., /certifications?..., etc.) and some of our static JS assets from Next.js, while everything else is a clean fail: .env, .git, /admin, /wp-admin, /api/env, /login.jsp, and the like, all exploit vectors that failed.
GoAccess: Streamlined Log Analysis
Now, there is a much easier way to do this than grep and regex against the NGINX access logs. GoAccess (https://goaccess.io/) describes itself as an "open source real-time web log analyzer and interactive viewer that runs in a terminal". I'm using it to parse /var/log/nginx/access.log into an HTML report generated daily via cron. It gives me an easy-to-read view of IPs, bots, hits, and patterns. It can be installed via apt simply with:
sudo apt install goaccess
You can check the documentation on the website or the man pages, but we can use:
goaccess /var/log/nginx/access.log -o ~/reports/report.html --log-format=COMBINED
to save an HTML report to my reports directory. The COMBINED log format includes IP address, timestamp, HTTP method (GET/POST), path, status code, referrer, and user agent; it matches NGINX's default logging format, so it parses cleanly.
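The post mentions that this report is regenerated daily via cron but doesn't show the entry itself, so here's a plausible sketch. The 06:00 schedule and output path are my assumptions, and cron won't expand ~, so an absolute path is used:

```shell
# /etc/cron.d/goaccess-report (hypothetical filename):
# rebuild the HTML report daily at 06:00 as root.
# Output path is assumed; point it at your own reports directory.
0 6 * * * root goaccess /var/log/nginx/access.log -o /root/reports/report.html --log-format=COMBINED
```

Dropping a file in /etc/cron.d (rather than a user crontab) lets you specify the user column explicitly and keeps the job visible in version control or backups.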
I also aliased the keyword accesslog to open GoAccess in interactive terminal mode instead of creating an HTML report:
goaccess /var/log/nginx/access.log --log-format=COMBINED -a

goaccess invokes the tool, /var/log/nginx/access.log points it at our active NGINX log, we already know what the COMBINED log format covers, and -a (--agent-list) adds a detailed breakdown of user agents to the output: browsers, bot names, etc. Now we have a live, in-terminal dashboard. We can use the arrow keys to scroll, Tab to switch modules, and q to quit. Perfect for quick log reviews without generating files.
Insights from GoAccess Dashboard
The first page shows us unique visitors per day, requested files, and static requests, revealing some interesting information about interactions with my site. Keep in mind I do not have a robots.txt for Google crawlers and I only very recently put this URL on my resume and LinkedIn.
We have 484 total requests and 77 unique visitors, mostly bots and scans. 112 of those requests were 404s, meaning over 23% of our traffic is hitting invalid URLs. Scanning behavior! There were also 10 static file hits (JS chunks, stylesheets), so someone finally loaded the frontend properly.
The most requested file is of course /, with 81 hits. Totally normal; every browser and bot hits the root. We can also see \x16\x03\x02\x01o..., the raw bytes of a TLS handshake, likely HTTPS spoken to the plain-HTTP port or an exploit probe on the wrong port.
Section 4 shows us the top 404'd URLs:
112 failed paths. That's everything from /.env to /admin/config.php. Scripted scanners.

69 hits to / returning 404? That means something is hitting the root with weird methods or malformed headers: think POST, HEAD, or raw HTTPS on port 80. 24 hits to a non-existent robots.txt are bots looking for crawl rules; legit crawlers hit this, some attackers do too, but it's mostly good bots checking what not to index. I don't have a favicon, so that 404s too, which is standard bot behavior. /sitemap.xml is normal bots scraping the site structure; it's not present, so another 404. And lastly in today's top 7, POST / with a 404: most likely misconfigured form spammers, XSS probes, or just junk recon. Notice the 0% visitors on all of them. These aren't users, just automation.
Section 5 shows us the top IPs and hostnames hitting NGINX. Sections 6 and 7 tell us the operating systems and browsers used. Generally, Windows, iOS, Macintosh, and Android mean real people; notice that crawlers and "unknown" combined account for almost half my traffic. Automated! The same trend occurs with browsers, where we have 151 hits from 15 Chrome users, a mixed bag of browsers trailing that, then more Crawlers, Unknown, and Others, which are just botnet soup.

The last section, 13, gives us the HTTP status codes given out:

This seals it: our traffic is mostly hostile, but our config is tight. Over 70% of the status codes returned are 4xx errors, just bots hammering non-existent paths (.env, .git, /admin, etc.); our earlier grep commands confirm this. At least 22% of our hits are 2xx, successful returns. These are real users (66 of them!) hitting valid routes like /blog, /certifications, etc. The 3xx codes are redirects, think slash corrections (/webui -> /webui/) and HTTP->HTTPS upgrades.
Implementing Fail2ban for Automated Protection
These 404s used to arrive in much greater volume and risked DDoS-like load when I first launched the server. I set up fail2ban to watch for brute-force SSH attempts, repeated 404s, etc. and auto-ban the IPs abusing my server. We can install it with apt:
sudo apt install fail2ban
This gets us the service, config files, and system hooks. I adjusted the config at /etc/fail2ban/jail.conf to make sure sshd was enabled and set to ban offenders:
[sshd]
enabled = true
port = ssh
logpath = /var/log/auth.log
maxretry = 5
findtime = 300
bantime = 3600
In the above conf file, I declare a jail named sshd, enable it, set the port to SSH (22), and tell fail2ban which logpath to watch (/var/log/auth.log is where Debian logs every SSH auth attempt). I set maxretry to five, so if someone fails to authenticate five times within the findtime (300 seconds, or 5 minutes), it triggers a ban for the bantime (3600 seconds, or 1 hour). I later changed this to 24 hours.
After changing the config file, restart the service and verify the status. I use Debian so systemd:
sudo systemctl restart fail2ban
sudo fail2ban-client status sshd
This will confirm our jail is up. Next, I can configure one for our NGINX 404 errors. Same process. I'll add a new jail conf file at /etc/fail2ban/jail.d/nginx-404.conf with content:
[nginx-404]
enabled = true
port = http,https
filter = nginx-404
logpath = /var/log/nginx/access.log
maxretry = 5
findtime = 600
bantime = 3600
nginx-404 is a jail defined specifically to track 404 spam. It watches both port 80 (HTTP) and 443 (HTTPS). The filter is nginx-404.conf, located in /etc/fail2ban/filter.d, which defines which patterns count as a strike, most typically GETs returning 404. maxretry is 5, so if an IP racks up five 404s within the findtime (600 seconds, a 10-minute window), it gets banned for the bantime (3600 seconds, or 1 hour).
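The filter file itself isn't reproduced in this post, so here's a minimal sketch of what /etc/fail2ban/filter.d/nginx-404.conf typically looks like. The exact failregex is an assumption on my part; test it against your own log format with fail2ban-regex before relying on it:

```conf
# /etc/fail2ban/filter.d/nginx-404.conf (sketch)
[Definition]
# Count any request from <HOST> that NGINX answered with a 404 as one strike
failregex = ^<HOST> .* "(GET|POST|HEAD).*" 404
ignoreregex =
```

You can dry-run a filter with fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/nginx-404.conf to see how many lines it would have matched.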
Restart fail2ban and confirm both jails are running:
fail2ban-client status
fail2ban-client status sshd
fail2ban-client status nginx-404
Now I can monitor fail2ban activity through the /var/log/fail2ban.log file and more specifically bans by grepping:
sudo grep 'Ban' /var/log/fail2ban.log
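The same log also lends itself to a quick leaderboard of repeat offenders. The log lines below are fabricated stand-ins for real fail2ban entries (real ones follow this general shape), and since the banned IP is the last field on a ban line, awk's $NF grabs it:

```shell
# Hypothetical fail2ban.log excerpt; 'Ban ' (capital B) matches ban events
# but not the 'Unban' line, then we count occurrences per IP.
printf '%s\n' \
  '2025-05-16 10:00:00,101 fail2ban.actions [512]: NOTICE [sshd] Ban 203.0.113.9' \
  '2025-05-16 10:05:00,102 fail2ban.actions [512]: NOTICE [sshd] Ban 198.51.100.4' \
  '2025-05-16 10:10:00,103 fail2ban.actions [512]: NOTICE [sshd] Ban 203.0.113.9' \
  '2025-05-16 11:00:00,104 fail2ban.actions [512]: NOTICE [sshd] Unban 203.0.113.9' \
  | grep 'Ban ' | awk '{print $NF}' | sort | uniq -c | sort -nr
```

Against the live log, the equivalent would be sudo grep 'Ban ' /var/log/fail2ban.log | awk '{print $NF}' | sort | uniq -c | sort -nr.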
In case I wanted to block anything manually, I setup two aliases for my UFW (Uncomplicated Firewall) for quick retaliation:
alias block='sudo ufw deny from'
alias unblock='sudo ufw delete deny from'
I can adjust these conf files to be as aggressive as I desire, banning someone indefinitely for even one incorrect SSH attempt. The joke is on them, though: I set PasswordAuthentication to no in the SSH config at /etc/ssh/sshd_config, so no matter how many password attempts they blast at port 22, without a valid private key they automatically fail. The reason I even set up fail2ban's sshd jail is so I can still ban the people trying to pick the lock on a door that doesn't even have a handle.
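For reference, the relevant sshd_config directives look like this (PubkeyAuthentication defaults to yes on modern OpenSSH, so listing it explicitly is belt-and-braces):

```conf
# /etc/ssh/sshd_config (relevant excerpt)
PasswordAuthentication no    # password guesses always fail
PubkeyAuthentication yes     # key-based login only
```

After editing, restart the daemon (sudo systemctl restart ssh on Debian) and keep your current session open while verifying in a second terminal that key-based login still works, so a typo can't lock you out.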
I set up the alias alias jail='sudo fail2ban-client status' to check in on who's currently serving time. I can simply type jail [jailname] to check.

You can see our server is doing active threat triage. Our sshd jail has 2248(!) failed logins. 19 IPs are currently banned, with 42 total bans in the current log cycle; 23 of those have served their time. This jail is mostly about catching and punishing live attacks. As for our nginx-404 jail, we have 398 total failures and 14 total bans, none currently active. That means these hits are spread out, or at least slow enough not to re-trigger bans. That's normal: scripted bots frequently rotate IPs and spray slowly to avoid detection. Luckily, I can just make my /etc/fail2ban/jail.d/nginx-404.conf configuration harsher.
Since grabbing these logs, I've lowered my maxretry to 3 and my findtime to 1800 (30 minutes). I'm considering permabanning offenders by setting bantime to -1. Right now, though, the setup is working well. SSH is firewalled, hardened, and watched; web scanners are logged, processed, and crushed when needed.
Firewall Configuration with UFW
My UFW firewall was one of the first things I set up, and here's my current config:

My inbound strategy:
•22/tcp ALLOW - SSH is open, not a problem since we've got PasswordAuthentication no and fail2ban watching it like a hawk
•Nginx Full ALLOW - Covers ports 80 and 443 for HTTP/HTTPS traffic. Standard and necessary
•443/tcp + 80/tcp LIMITED - I ran ufw limit 80/tcp and ufw limit 443/tcp to apply connection rate limiting at the firewall level. If an IP hammers my site (by default, six or more new connections within 30 seconds), UFW starts dropping those connections. Legitimate users will never trigger it; they browse like normal humans.
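Those rate-limit rules from the last bullet come from UFW's built-in limit action, which needs no extra configuration:

```shell
# UFW's limit action denies an IP after repeated connection attempts
# in a short window (six within 30 seconds by default)
sudo ufw limit 80/tcp    # throttle plain-HTTP connection floods
sudo ufw limit 443/tcp   # same for HTTPS
sudo ufw status verbose  # confirm both rules now show as LIMIT
```

Under the hood this uses iptables' recent-match tracking, so it operates per source IP rather than globally.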
I also have some targeted blocking I did earlier with my block alias. I'm hiding the addresses, but they're part of the 218.92.0.0/16 block (a legacy class B range) belonging to China Telecom in Jiangsu, one of the worst offenders globally for SSH brute-force scanning. Its addresses land in my fail2ban constantly. If you want to kill the whole block, just run:
sudo ufw deny from 218.92.0.0/16
A crucial part of securing web traffic is SSL/TLS. I use Certbot with Let's Encrypt for HTTPS, and a cron entry (0 */12 * * * root certbot renew --quiet) handles automatic certificate renewal to ensure uninterrupted service.
Service Monitoring Implementation
To ensure critical services stay running even if they encounter problems, I've implemented automatic service recovery using systemd. This is particularly important for services like fail2ban and NGINX that are essential to my security posture.
For fail2ban, I added an override configuration:
sudo systemctl edit fail2ban
Then added these service parameters:
[Service]
Restart=always
RestartSec=5
This tells systemd to always restart fail2ban if it crashes or stops unexpectedly, with a 5-second delay between restart attempts. After saving this configuration, I applied the changes:
sudo systemctl daemon-reload
sudo systemctl restart fail2ban
I repeated the same process for NGINX:
sudo systemctl edit nginx
Added the same restart parameters:
[Service]
Restart=always
RestartSec=5
And reloaded the configuration:
sudo systemctl daemon-reload
sudo systemctl restart nginx
With these changes, both services will automatically recover from failures without manual intervention. This is crucial for maintaining security, as any downtime in fail2ban could create a window for brute force attacks, and NGINX outages would disrupt site availability. The 5-second restart delay prevents excessive CPU usage if the service is consistently failing to start properly.
On the Horizon: Future Security Enhancements
While my current setup provides robust protection, security is always evolving. Here's what's next on my agenda:
•Log Rotation Implementation - Setting up logrotate to properly manage log files, preventing disk space issues while maintaining a comprehensive history of events.
•SSH Port Obfuscation - Moving SSH from the standard port 22 to a non-standard port to reduce automated scanning noise. While security through obscurity isn't a complete solution, it significantly reduces the volume of automated attacks.
•Log Redundancy - Improving the backup and redundancy solution for logs to ensure forensic data is preserved in case of compromise attempts.
•SSH Session Notifications - Setting up pam_exec for SSH session notifications to get real-time alerts when someone logs into the server.
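Of these, log rotation is the most mechanical to set up. Debian's nginx package already ships a logrotate config, so this is only a sketch of what a tightened /etc/logrotate.d/nginx might look like; the daily cadence and 30-day retention are my assumptions, not recommendations from this post:

```conf
# /etc/logrotate.d/nginx (sketch; adjust retention to your disk budget)
/var/log/nginx/*.log {
    daily               # rotate every day
    rotate 30           # keep 30 rotated logs for forensic history
    compress            # gzip old logs to save disk
    delaycompress       # keep yesterday's log uncompressed for quick grepping
    missingok
    notifempty
    sharedscripts
    postrotate
        # ask NGINX to reopen its log files (Debian's stock hook)
        invoke-rc.d nginx rotate >/dev/null 2>&1
    endscript
}
```

You can preview what a config would do without touching anything via logrotate -d /etc/logrotate.d/nginx.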
These improvements will further harden my server against both automated attacks and sophisticated intrusion attempts. While no system is ever 100% secure, a well-layered defense makes it significantly harder for attackers to succeed.
Conclusion
This is the bulk of my layered approach to hardening my web server:
•SSH access is locked to private keys with PasswordAuthentication no, eliminating the threat of password brute-force.
•Logs from access.log, auth.log, and fail2ban.log are regularly parsed using grep and custom pipelines to isolate patterns, status codes, and attack vectors.
•GoAccess runs daily via cron, generating structured HTML reports stored in ~/reports/report.html and pulled locally through SCP.
•Aliases like logreport, accesslog, webreport, block, and unblock streamline all interaction.
•Fail2ban enforces security with active jails on sshd and nginx-404, automatically banning IPs that trip thresholds.
•UFW allows only essential ports, applies connection limits on HTTP and HTTPS, and blocks known malicious IPs including entire hostile CIDR ranges like 218.92.0.0/16.
•SSL/TLS is properly configured with Certbot and Let's Encrypt with automatic certificate renewal via crontab.
•Service Monitoring ensures critical services like fail2ban and NGINX automatically restart if they fail, maintaining continuous protection.
There is no GUI dependency. Everything is configured, logged, and controlled directly through shell and config files. The system is fast, visible, and precise.