If you have a website, or are thinking about making one, you need to keep it secure. Here are some things you need to know about website security.
For one, attacking is easier than defending. You need to keep everything secure, but a hacker only needs to find one security issue.
I do more non-web coding than web development, but I’ve had many different websites over the years. Since 2012, I’ve made 9 different websites, 5 of which are still up. These websites were for a variety of purposes and ran different kinds of software, though I tend to focus on cPanel and the LAMP stack (Linux, Apache, MySQL, PHP). I once made a MEAN stack web project for a college class, though I’m not as experienced with it. However, many of these tips are general and even apply to other tech stacks.
Won’t my web host keep things secure for me?
It really depends. For some server offerings, you’re on your own. For managed hosting, the web host will do some security-related things, but other stuff is your responsibility.
Don’t assume someone else will keep things secure for you! When in doubt, contact a company’s customer service and ask.
People who want a website but have no technical background are better off using a website-building service. These services do everything for you, but you pay more for what you get, and your options for what your site can be are more limited. Because I’ve studied IT and computer science in college, I feel comfortable with more in-depth technical stuff. But if you don’t know how to secure something, don’t use it. That being said, I often use managed options instead of doing everything 100% on my own. Keeping a hands-off VPS secure is a much more laborious process than using a managed host where the web host does a lot of the work for you.
In addition to security responsibility, there might also be responsibility for reliability. With a typical cheap web host for a personal site, your server might go down every now and then. For a corporate website, that would not be acceptable, so a company might use an SLA, or Service Level Agreement, which states that the server must be online a certain percentage of the time or else the client is entitled to compensation. A “five 9s” SLA means 99.999% uptime. For a small website, you probably won’t get an SLA.
Do developers need to be security people?
In a large-scale company, there will be dedicated security employees, separate from software developers. But for a personal or small-scale site, the developer is also in charge of security. Either way, all developers need to at least know the basics of security.
There is no silver bullet for security
If someone tells you that security will be solved if you just do this one thing, they’re either lying or don’t know what they’re talking about. There is no such thing as perfect security, and security is a very complicated and multi-faceted topic. Security is also constantly evolving as time goes on. Every year, we see new trends, new apps, new programming frameworks, and new security concerns.
Laws and regulatory compliance standards
If you’re making a personal website, like a blog or portfolio, then there isn’t much regulation involved. But for ecommerce shops or corporate websites which have PII (personally identifiable information), payment info, or healthcare info, there are laws which stipulate that you need to make sure things are secure. For healthcare, there’s HIPAA. For payment info, there’s PCI-DSS. For general privacy (and website cookies) in the EU, there’s GDPR. For government websites, there’s FISMA.
If you don’t take customer payments or info, you don’t really need to worry about being compliant. Companies that take customer payment info need compliance audits. To be compliant, you need to meet certain criteria for security and privacy. Passing a compliance audit doesn’t mean your security is perfect, but it does mean you have at least a minimum level of security.
This article doesn’t go over in-depth compliance info; it’s about security for personal websites. Just know that, if you want a website for your business, or you want to make an online store, you need to be aware of compliance. However, you can build an ecommerce shop on someone else’s platform (e.g. Shopify or WooCommerce). In those situations, the software might be PCI-compliant, but you should always double-check anyway.
Do I really need to care about security?
Yes. Everyone who makes a website needs to think about security. People WILL attempt to hack your sites. 100% guaranteed. They don’t know or care who you are. They just want to make money, and hacked sites can be monetized by sending email spam, running DDoS extortion with a botnet, or hosting profitable malware (e.g. ransomware) to distribute to victims. I check my server logs and there are failed hacking attempts on a daily basis. Luckily, most of them are very basic, and many of them don’t even apply to my server at all. What hackers will do is find a way to hack a specific version of a specific piece of software, then write a bot which will attempt that exploit on many different web servers.
People think of hackers as people frantically typing away at a keyboard, sometimes wearing a hoodie and face mask for some reason. People also think hackers only target certain people. In reality, someone might write a script that attempts to hack thousands or even millions of sites, then go relax while the script does its thing. Some attacks are manual, but many are automated.
Where are hackers located?
All over the world. Every country has hackers. In some cases, a hacker will use a VPN, Tor, or other method of hiding their true IP address and location. In some cases, a hacker will hack a server and then use that to launch attacks on other servers. A hacker in country A might use a VPS or hacked server in country B, then use it to do nefarious things. Then people might look through their server logs and think the hacks were coming from country B, even though they really weren’t. The process of figuring out who is behind a hack is referred to as attribution, and it’s very difficult to do.
Not all bots are bad. Some are crawlers for search engines. But some bots are used for automated hacking.
[Screenshot: some of the bots that crawl my servers.]
Quite often, people run bots instead of manually attempting to hack things. I’ve checked my server logs and see that Go or Python bots (based on their user agent) often attempt things like finding .env files, XSS exploits, automatic login attempts with weak passwords, etc.
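As a rough sketch of what checking for this looks like, here is a small shell function that counts requests for commonly-probed paths, grouped by client IP. The probe patterns and the sample log are just examples; on a real server you would point it at your actual access log instead.

```shell
# Count requests for commonly-probed paths, grouped by client IP.
# The patterns below are examples of what bots frequently look for.
probe_hits() {
  grep -E '\.env|wp-login\.php|phpMyAdmin|wso\.php' "$1" \
    | awk '{print $1}' | sort | uniq -c | sort -rn
}

# Sample log lines in the common Apache/nginx combined format:
printf '%s\n' \
  '203.0.113.5 - - [10/Jan/2024:03:00:01 +0000] "GET /.env HTTP/1.1" 404 196' \
  '203.0.113.5 - - [10/Jan/2024:03:00:02 +0000] "GET /wp-login.php HTTP/1.1" 404 196' \
  '198.51.100.7 - - [10/Jan/2024:03:01:00 +0000] "GET /index.html HTTP/1.1" 200 512' \
  > sample_access.log

probe_hits sample_access.log   # 203.0.113.5 shows up twice; the normal visitor does not
```

On a real server, a steady stream of 404s for paths like these is the signature of automated scanning, not human visitors.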
Server logs and web application firewalls
A web application firewall is something all web servers need. You can use a typical server WAF like ModSecurity, often used for the Apache web server. But some CMSs also have WAF plugins available, like WordFence for WordPress. WAFs are different from network firewalls. A WAF will block different kinds of malicious requests, like SQL injection, XSS, etc. Some more fully-featured CMS WAF offerings will do blocking based on IP blacklists of known malicious IPs.
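To give a feel for what a WAF rule looks like, here is a minimal sketch of a ModSecurity v2 rule using its built-in libinjection SQL injection detector. The rule id is arbitrary, and a real deployment would use a maintained rule set (like the OWASP Core Rule Set) rather than hand-written rules like this.

```apache
# Deny any request whose parameters look like SQL injection,
# using ModSecurity's libinjection-based detector. Rule id is arbitrary.
SecRule ARGS "@detectSQLi" "id:100001,phase:2,deny,status:403,log,msg:'SQL injection attempt'"
```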
Of course, IP blacklisting isn’t perfect. To get around IP blacklisting, a hacker might hack a legitimate website, and then use that server to attack other servers. For example, there might be some local business that barely has a website at all, and they’re not very tech-savvy. Maybe it’s a local deli shop, which is a small business that has very little to do with technology. Someone sets up a website for them, and then they never do much of anything with it, including updates. Then a hacker finds their site, sees that it has outdated and insecure software, and then uses it to attack other sites. When they first hack the site and start using it for hacking other sites, it won’t be on any blacklist. One bad thing about a company getting their site hacked is that it might end up in blacklists. The deli shop owners aren’t malicious, but because their website got hacked, their site gets blacklisted because of how it was used maliciously. Rinse and repeat with many other websites.
I look through my WordPress server WAF logs and find that many of the attack attempts are for WordPress plugins that I don’t even have installed. But hackers don’t care if most of their attempts are unsuccessful. If you cast a wide enough net, you’ll get something eventually. For example, the last time I checked the logs for one of my WordPress sites, someone was making requests for a real estate-related plugin. I don’t have any real estate websites. But apparently they know about a security vulnerability in a somewhat old version of a real estate CMS plugin. Does it matter to them that their hacking attempt failed on my server? No, because they’ll try the exact same thing on thousands of other servers. Eventually, they’ll find one which actually does have that specific version of the real estate plugin, and they’ll be able to hack it.
Some hackers even look for hacking tools, like web shells. WSO is an example of a web shell: a PHP file that allows an attacker to run shell commands on a server. Web shells are typically written in PHP and target LAMP servers, specifically ones with things like remote file inclusion vulnerabilities, such as old versions of software with known security issues (tracked as CVEs). In some cases, hackers will look for already-hacked websites. If a website has a WSO shell on it (which might be called wso.php), then they might also be able to use the hacked site, assuming the web shell password hasn’t been changed from the default. It can be an easier way for lazy hackers to use an insecure server. Not every hacker does this though.
[Screenshot: malicious requests blocked by my server’s security software.] The blocked requests in the screenshot show hackers looking for files that don’t exist on my server, including a WSO shell. Again, this is because they are casting a wide net and looking for specific things. It doesn’t matter if they fail 99.9% of the time, because they will eventually find a server with what they’re looking for.
Use a server distro, not a consumer one

Examples of server distros: CloudLinux, CentOS, Red Hat, Amazon Linux, etc. If you use a managed host, they might use something like CloudLinux, and at that point, you aren’t really in charge of configuring the OS anymore. You will really just interact with the server using a dashboard, such as cPanel. Other hosts offer similar things, even if they aren’t the exact same distro or dashboard program. But they will all have a web-based interface for managing the server and the software on it. Even if you use a managed/shared web host, and they deal with the OS setup and security, you are still responsible for things like SSL, the web application security itself (e.g. any PHP/Laravel/Django/NodeJS/Rails files you make on top of the OS), and other configuration-related stuff. So they do some of the security work for you, but not everything.
Something like a barebones VPS or Amazon AWS EC2 instance is much more hands-off on the provider’s side, so you will have to do a lot of in-depth OS configuration yourself. I don’t recommend that for beginners. You have to secure basically everything aside from the hypervisor the VM is running on. It’s a lot more responsibility. Just because you use something on Amazon’s servers doesn’t mean they’re going to make sure it stays safe. That’s on you. And that’s why I don’t recommend the more intensive AWS offerings for beginners. It’s something you might want to get into later, but it’s not good for a new developer who is learning the ropes.
If you want to mess around with VMs and OSs at home, you can just get a cheap computer and install VMware ESXi on it, then make any old random VM. You could do this to learn more about servers and operating systems, as well as use it as your development environment before you push to your production server. It doesn’t really matter too much that your local server is not super security-centric because it’s only on your local network, not internet-facing. But for a web server, which is accessible by anyone on the internet, your options are more limited due to the additional security requirements.
There is a big difference between consumer Linux distros (Mint, elementaryOS, Ubuntu, etc) and server distros, like the ones I mentioned previously.
Firewalls (not web application firewalls)
If you want to learn more about Linux, be sure to brush up on bash, SSH, and whatever firewall you’re using, like iptables or nftables (or pf if you’re using FreeBSD). Firewall configuration is complicated, but important to get right. Again, not something that beginners should be responsible for.
A shared web host will be in charge of setting up the firewall correctly so that you don’t have to.
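If you do manage your own firewall, the goal is simple: deny everything inbound by default, then allow only the services you actually run. As an illustration, here is a minimal sketch of an nftables config (e.g. /etc/nftables.conf) that allows only SSH, HTTP, and HTTPS. Treat it as a starting point under those assumptions, not a complete ruleset.

```
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept    # allow replies to our own traffic
    iif lo accept                          # allow loopback
    tcp dport { 22, 80, 443 } accept       # SSH, HTTP, HTTPS
  }
}
```

Be careful when applying rules like this over SSH: if you get the SSH allowance wrong, you can lock yourself out of your own server.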
For shared web hosts, where multiple websites run on a single server (to save money), sites might be run as separate isolated VMs, jails (e.g. on FreeBSD), or containers. It is possible, though not super likely, for a vulnerability in the hypervisor or host OS to allow customers in one container or VM to see or do things in someone else’s VM or container on the same server. So it’s important that the hosting company keeps their virtualization software up to date. You can usually see info about a server with uname -a, or a cPanel dashboard might let you see version info even without using SSH.
It’s important for customers of a shared host to be isolated from one another so that they can’t do anything malicious with someone else’s server. I doubt many people paying for a server would hack someone else’s VM/container on the same server, considering that their name and payment info is associated with their account. But the real issue is that, if one customer’s VM/container/jail shell gets hacked, a hacker could then try to pivot to other customers’ stuff on the same server. It’s unlikely though, as there’s much more low-hanging fruit, and hackers tend to take the path of least resistance. But you need to make sure you use a reputable host, because a lazy one might not be diligent about installing security updates on their hypervisors/etc.
SSL stands for Secure Sockets Layer. It encrypts traffic between a user of a website and the server itself. This type of encryption protects “data in motion,” as opposed to file encryption on a computer, which protects “data at rest.” Encryption is very important. SSL certificates are issued by companies called certificate authorities, or CAs.
Also, even though people still say SSL, the successor to SSL is TLS, or Transport Layer Security. TLS is what newer certs use, but people still call them SSL certificates.
You can tell a website uses SSL when it has https:// as the protocol. A website with http:// is not using SSL, which is bad. Browsers typically show a lock icon to indicate SSL.
All websites need SSL. It makes traffic between users and the web server more secure and private. This is especially important for login information and payment details. It used to be the case that a lot of the web only used HTTP instead of HTTPS, except maybe for login pages. Nowadays, it’s more common to see SSL used everywhere on a site. If you have a website, it needs SSL, not only for security and privacy, but also for better SEO. Sites without SSL get deranked on search engines.
Let’s Encrypt is a certificate authority that issues free SSL certificates, but it can be a hassle to set them up. I prefer using my web host’s SSL certificates, which are less than $10 per year. If you’re a less technical web developer, you can even contact a hosting company’s customer support to help you set it up.
SSL certificates are only valid for a certain amount of time. They will eventually expire, at which time you either need to renew the certificate or buy a new one. A website with an expired SSL certificate will give the users a big security warning in their browser, which might scare them off. So it’s important to stay on top of renewing your SSL.
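You can check a certificate’s expiry date with openssl. For a live site you would inspect the certificate the server actually serves (e.g. pipe `openssl s_client -connect yoursite.com:443` into the x509 command below); here a throwaway self-signed cert is generated so the example is self-contained.

```shell
# Generate a short-lived self-signed cert purely for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 90 -subj "/CN=example.com" 2>/dev/null

# Print the expiry date. For a real site, feed it the cert from s_client instead.
openssl x509 -noout -enddate -in cert.pem
```

Checking this periodically (or scripting it) helps you renew before the browser warnings start scaring off visitors.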
That said, SSL is not magic. You can still have security issues on your site even if you’re using SSL. SSL only deals with security and privacy of traffic between the user and the website. A website could still have a CSRF vulnerability, or could have been hacked and is now distributing ransomware. An SSL certificate does not necessarily mean a site is 100% secure.
Old versions of software have known vulnerabilities in them. Be sure to install the latest updates for everything. Installing software updates is extremely important, but it can also be complicated, because there are many different things that need to be updated, often separately. Linux distros typically have a package manager that you can use to quickly and easily install updates for the OS and any packages you’ve installed through it, but there can be some caveats to that, especially with web development. Sometimes, software needs to be installed outside of a package manager, and updating it can be manual and tedious. And in some cases, migrating from an old version of something to a new one can be more difficult than you might expect, like going from an old version of PHP to a new one. There can sometimes be compatibility issues. For example, PHP 7 breaks compatibility with a lot of PHP 5.6 code, so in order to update PHP, a developer would first need to modify their code, which is a time-consuming process. Similarly, Java 8 was a major release of Java, but subsequent Java releases broke compatibility with a lot of things, so many people stuck with Java 8 for a while because they didn’t feel like changing things to be compatible with the latest version.
It can sometimes be difficult to migrate your code so that it will be compatible with the latest stuff, but it’s still vitally important for security.
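For the package-manager portion of updates, the routine is usually one or two commands run as root on the server; which ones apply depends on your distro family:

```shell
# Debian/Ubuntu-family:
sudo apt update && sudo apt upgrade

# RHEL/CentOS/Amazon Linux-family:
sudo yum update    # or `sudo dnf upgrade` on newer releases
```

Remember that this only covers software installed through the package manager. Anything installed by hand, or through a language-level tool like Composer or pip, has to be updated separately.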
Of course, it is possible for new updates to introduce new security or stability issues, as opposed to just fixing existing ones. That’s why some places don’t do automatic updates, and some people will update their development and testing environments before updating their production environment. But I’ve personally never had any severe issues with installing updates on my servers, like for Linux, PHP, WordPress, and so on. For a personal website, skipping thorough testing before updating is not a big deal, because a personal website is just a portfolio and not exactly business-critical. But in an enterprise setting, you’d need to be more thorough with testing.
If you can’t update it, don’t use it. If something is unmaintained, don’t use it. If you don’t understand what something is, don’t use it.
Open source software is great most of the time, but one issue is if you use an open source project that isn’t actively maintained. There could be a new security issue that someone found in it, and nobody will fix it. So make sure you only use active software projects that have a lot of users, and more importantly, lots of developers working on it.
Inventory and asset management
Keep a list of all your registrars, web hosts, servers, accounts, installed software, etc. A list might be something like AWS EC2, GoDaddy, Amazon Linux, Apache, MySQL, PHP, SSH, WordPress, phpMyAdmin, cPanel, Yoast, WooCommerce, sendmail, exim, clamscan, etc. Once you have figured out all that you have, you need to keep up to date with the news about these things. You don’t need to care about feature updates, but you need to pay attention to articles about security issues relating to all the things your websites use.
Every now and then, people find new security problems in software. Then the developers issue a fix in the form of a software update. So how do you find out about it?
Newsletters and alerts for known security vulnerabilities will help you know when there is a new security issue relating to the software you use on your servers. There are really two types of updates: in-band (routine) and out-of-band (emergency). An in-band update can wait until it’s convenient for you to install it, though organizations typically have patch schedules, like Microsoft’s “Patch Tuesday.” But when a new high-severity security issue is found in software running on your server (e.g. remote code execution), you need to perform out-of-band updates. In a large-scale enterprise, there might be on-call operations (“ops”) employees who have to fix out-of-band security issues, even if it’s something like 2am. But for a small website made by a single person, just do it when you have time. Keep in mind that hackers will attempt to hack your site at all hours of the day though.
One particular source I like for security news is US-CERT, or the United States Computer Emergency Readiness Team. You can sign up for emails about security issues.
That being said, don’t fall for FUD. There are tons of articles that act like the world is going to end because some hard-to-exploit security issue was found. Read into the details. Something like local privilege escalation is less severe than a vulnerability that can be exploited without any authentication. In fact, CVEs have scores on a scale of 0.0 to 10.0 to indicate how severe they are. US-CERT is not sensationalist, and it’s a good source of information about security issues. But there are many less technical media outlets that write bad articles trying to scare readers in order to get views and ad revenue.
You might think “okay, I installed this new security update that fixes the security problem. Now my security problems are solved!” But that’s wrong. Security is never finished. There will always be a new security vulnerability that you need to update your software to fix. Security will always be a problem, and that’s why you always need to stay diligent with updates and whatnot. Nothing will ever be 100% secure.
Aside from US-CERT, another great source for security information is CVE Details. It aggregates CVEs, or Common Vulnerabilities and Exposures, so you can look up vulnerabilities for specific pieces of software.
When you have a website, you need to devote some time to periodically running maintenance-related tasks, such as software updates. A website is emphatically not something you can just “set and forget.” If you can’t devote time to maintaining your website, maybe you shouldn’t have one.
If you don’t need it, don’t install it. If there’s a feature in a program and you don’t use it, disable it. The more software or features you have on your server, the more things there are for hackers to attack. Minimize attack surface to make it harder to be hacked.
Know something very well before you put it on the internet
One problem I’ve noticed is that many ambitious developers will try out new technology, like new server programs, operating systems, cloud offerings, programming languages, frameworks, etc. without really understanding it in-depth. This is because people rush to be the first to be experienced with some new and trendy piece of tech, so as to modernize their skill set. However, when you start learning some new technology, it’s important to really understand it well before you use it on an internet-facing server. It’s better to mess around with new stuff on your own desktop/laptop and maybe a home server, or perhaps a remote server that can only be accessed by people who log in. But don’t learn just the basics of some new tech and then immediately make a new site or app with it. As soon as you put an app or website on the internet, people will attempt to attack it. That doesn’t mean they’ll succeed, but they’ll still try anyway.
Scheduled maintenance with cron jobs
If you want to do things like make backups, install updates, run a virus scan, or any number of other useful maintenance-related tasks, doing it manually can be a chore. But task scheduling is a very useful feature of many operating systems. cron is a way that you can schedule tasks in Linux. For the root cron table, it’s /etc/crontab. For user cron tables, it’s /var/spool/cron/crontabs/. And by the way, if you’re new to Linux, you really need to learn the basics of it before you have a website.
Check cron to make sure there are no malicious tasks being scheduled. An attacker could maintain persistent access to a machine by making something like a cron task for creating a reverse shell or bind shell. So even though you can use cron for good, someone who hacks your server could use it for bad things.
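As a sketch of both uses: the first two lines below are example crontab entries for legitimate maintenance (the script path is hypothetical), and the commands after them audit what is actually scheduled on a server. The audit loop needs root to read other users’ crontabs.

```shell
# Example crontab entries (minute hour day-of-month month day-of-week command):
#   0 3 * * *   /usr/local/bin/backup.sh        # nightly backup at 3:00am
#   30 4 * * 0  clamscan -r /var/www --quiet    # weekly malware scan, Sundays 4:30am

# Audit: list every user's crontab, plus the system-wide cron locations.
for u in $(cut -d: -f1 /etc/passwd); do
  crontab -l -u "$u" 2>/dev/null | sed "s/^/$u: /"
done
cat /etc/crontab 2>/dev/null
ls /etc/cron.d /etc/cron.daily 2>/dev/null
```

Anything in that output you don’t recognize deserves investigation, since persistence via cron is a common post-compromise move.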
If you use cPanel for a web administration dashboard, that also needs to be updated. However, many web hosts that offer turnkey cPanel usage will manage it for you so that you don’t have to. But if you use it in a VPS or whatever, you’re responsible for updating it.
You need to keep your database software up to date.
MySQL user account permissions
If something is only being used for SELECT queries, why should it be allowed to do more than that? Assign permissions on an as-needed basis. It’s all too easy to be lax and just give something more permissions than it needs. If an account can do anything, then you don’t need to think hard about what kind of permissions it really needs. But that way of thinking is not good for security.
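For example, a read-only MySQL account for code that only ever runs SELECT queries might be created like this (the database name, user name, and password here are made up; substitute your own):

```shell
# Create a least-privilege MySQL account: SELECT only, nothing else.
mysql -u root -p <<'SQL'
CREATE USER 'app_read'@'localhost' IDENTIFIED BY 'use-a-long-random-password';
GRANT SELECT ON shop.* TO 'app_read'@'localhost';
SQL
```

If that account’s credentials ever leak, the attacker can read the shop database but can’t modify it, drop tables, or touch other databases, which limits the damage considerably.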
If you store passwords in a database table, they need to be hashed. But aside from passwords stored in databases, there are other times that passwords might be stored on a server. Sometimes, people hard-code passwords or other credentials in a program, such as if someone is interacting with a remote API that requires an API key. Sometimes, people store credentials in configuration files. Be sure that these things aren’t publicly-facing.
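At minimum, a credentials file should be readable only by its owner and kept out of the public web root. A self-contained sketch, using a .env file as the example:

```shell
# Create a demo credentials file and restrict it to its owner.
touch .env
chmod 600 .env            # owner read/write; group and others get nothing

stat -c '%a' .env         # prints: 600
```

Also make sure your web server is configured not to serve dotfiles, so that a request for /.env returns nothing; bots probe for exactly that path.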
If someone can perform SQL injection on your server, they can do something called data exfiltration, meaning getting private data from your database. It can be something like a SELECT statement to get all the rows from a table. Sometimes, hackers might even try to be sneaky with how they exfiltrate data, employing a technique called steganography in order to make it harder to detect. But a lot of the time, data exfiltration doesn’t require any special stealthiness.
Linux file permissions are read, write, and execute, for user, group, and other. Make sure you use minimal permissions so that people can’t do things for files they shouldn’t be allowed to access.
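A quick way to audit this is to search for world-writable files, which are almost never what you want on a web server. A self-contained sketch using demo files:

```shell
# Set up two demo files: one with sane permissions, one world-writable.
mkdir -p permdemo
touch permdemo/safe.txt permdemo/risky.txt
chmod 644 permdemo/safe.txt    # owner rw, everyone else read-only
chmod 666 permdemo/risky.txt   # anyone can write -- a misconfiguration

# Find files that "other" users can write to.
find permdemo -type f -perm -o+w    # prints: permdemo/risky.txt
```

On a real server you’d run the find against your web root and home directories, and tighten anything it flags.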
If you have a LAMP stack server, such as for WordPress or anything else PHP-related, you might have used phpMyAdmin. It’s an easy web-based way to manage databases and database tables. However, phpMyAdmin can have security issues too, which is why you need to keep it updated if you have it installed on your server. Every piece of software, whether the OS, a package, a container, DBMS, or database frontend, needs to be updated.
Django is used for Python web development. You need to keep Django up to date. Another alternative is Flask, which also needs to be updated. The Python programming language itself also needs to stay updated. Python 2.7 is obsolete. Use the latest version of Python 3. Maybe by the time you’re reading this, there will be Python 4. But just use the latest version.
Ruby on Rails
Just like anything else I’ve listed, if you’re a Ruby developer, you need to keep Rails updated.
Laravel and Composer
Laravel is a PHP framework and Composer is a PHP package manager. Be sure to keep these up to date too, if you use them. And on the topic of package managers, make sure you only install packages you trust, and make sure you’re not installing something with a typo. There have been cases of hackers submitting malicious packages to a package manager repo that have names that are very similar to legitimate ones.
Yes, even programming languages can have security vulnerabilities. You need to keep them updated. Many people still use PHP 5.6, but it’s not secure. Whether it’s Python, Java, PHP, C#, or whatever else, it’s a piece of software and thus needs to be updated.
Use a file manager to look through the files on your server and see if anything is suspicious. You can also write a shell script to see if there are files other than the ones you’re expecting to be there. You can view files via SSH or sometimes via a web-based file manager in your web server administration dashboard. It depends on the kind of software you’re using for your server.
For a LAMP server, check for new PHP files on your server, as they might be web shells (such as WSO). These are essentially backdoors that will allow a hacker to have persistent access to your server. Web shells are often PHP files that target servers running PHP with security issues like remote file inclusion or remote code execution.
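A simple sketch of that kind of check: list PHP files modified recently under your document root. The webroot/ directory here is a stand-in for something like /var/www/html, and the files are created just so the example runs on its own.

```shell
# Simulate a web root with one freshly-changed file and one old one.
mkdir -p webroot
touch webroot/index.php                    # modified just now
touch -d '30 days ago' webroot/old.php     # untouched for a month

# Any PHP file modified within the last 7 days is worth a closer look.
find webroot -name '*.php' -mtime -7       # prints: webroot/index.php
```

If a file shows up that you didn’t change yourself (and no update ran), that is exactly the kind of thing a web shell drop looks like.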
I’ve also heard of instances where PHP files on WordPress sites are modified in a malicious way. So it can be useful to use diff tools to see the difference between a file on your server and the legitimate file from a git repo (if applicable).
Sometimes, a difference between files might not be malicious, and instead it’s just the way you’ve changed the software on your server. But diff tools are very useful for finding malware, even if not every single difference is malicious.
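A minimal sketch of the idea: compare a deployed file against a known-good copy. The file contents here are contrived, with a command-execution call injected into the “deployed” version to stand in for malicious tampering.

```shell
# Known-good copy versus a tampered deployed copy (contents are contrived).
printf '<?php echo "hello"; ?>\n' > clean.php
printf '<?php echo "hello"; system($_GET["c"]); ?>\n' > deployed.php

# diff exits non-zero and shows the changed line when the files differ.
diff clean.php deployed.php || echo "deployed.php has been modified"
```

For a whole site, `diff -r` against a clean checkout (or `git status`/`git diff` if the site is a git working tree) does the same job at scale.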
Accounts and theme editing in content management systems
If you use a CMS, make sure other users of the site (who have accounts) aren’t able to change the site theme. An account that can edit a PHP CMS theme can sometimes change PHP code, and in doing so, add a PHP web shell, or even just straight-up shell_exec() calls. Giving someone the ability to edit PHP files on a server is pretty much giving them blanket access to execute arbitrary code.
Popular software projects are more likely to get security fixes
Use big software projects. There might be some cool server program or alternative CMS on GitHub that like 5 people use, and you might like its features, but you’re really better off using the big-name CMSs and server programs because those are what get a lot of community support, security fixes, etc. Same goes for CMS plugins. Stick with plugins that have lots of users and lots of good ratings and reviews, not some random thing that hasn’t proven itself to be trustworthy or well-maintained.
Some people think using obscure tech is a way to express themselves, but popular tech has a lot of benefits, specifically when it comes to security and documentation. If there’s no community around a server platform or framework, don’t use it. You want the option to discuss it with other people when you’re having issues (or conversely, so you can help other people with it, once you’re experienced with it).
CMS updates (WordPress, Joomla, etc)
WordPress is a program that gets installed on a computer. That computer is acting as a web server, but it’s still a computer nonetheless. All programs on computers need to be updated.
The appeal of a content management system is that you don’t have to write everything from scratch. But every now and then, people find security issues in the CMS code, and thus a security update is made. You need to install updates if you’re using a CMS. Hackers go after websites that have outdated versions of WordPress, Joomla, Drupal, and so on.
If you can get away with it, use a static site generator instead of a CMS to reduce attack surface. Jekyll is an example of a static site generator.
Some software will email you when there are security incidents, like failed login attempts. Some software might also send email notifications when something needs to be updated. However, an even better solution is automatic updates.
CMS plugin and theme updates
Just like a CMS itself, plugins and themes installed within the CMS need updates. Limit the number of CMS plugins you use, and only use well-known, highly-used, frequently-updated ones.
CMS security plugins
For my WordPress sites, I like using a plugin called WordFence. It has a lot of features that can make a WordPress site more secure. Even if you don’t use WordFence, you should use some sort of CMS security software.
Logging and monitoring
It’s important to log what’s happening on your server, and to also actually monitor said logs. For a small-scale website or app, manual log review is usually fine. But for a large-scale enterprise, you’d want to look into software that automatically parses logs and heuristically analyzes them for security incidents or otherwise unusual behavior.
Some examples of server logs include /var/log/httpd/access.log, /usr/local/cpanel/logs, /var/log/mysqld.log or /var/log/mysql.log, /var/log/dmesg, phpMyAdmin logs (Status → Binary Log), /var/log/auth.log, and many other things in /var/log. If you use a piece of software on your server, google “[software name] log location” and see where the logs are. It’s good to periodically review them.
Logs generate so many entries that it can be difficult or even impossible to manually review all of them, but it can still be good to look at them from time to time. There are some log analysis programs out there, or you can write your own.
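As a toy illustration of what "writing your own" log analysis can look like, here's a Python sketch that tallies, per client IP, requests for commonly probed paths. The sample log lines and the list of suspicious paths are made up for illustration; a real script would read your actual access log:

```python
from collections import Counter

# A few sample access-log lines (Apache "combined" format, shortened).
# In practice you'd read these from /var/log/httpd/access.log or similar.
SAMPLE_LOG = """\
203.0.113.5 - - [10/Oct/2023:13:55:36 +0000] "GET /wp-login.php HTTP/1.1" 403 199
203.0.113.5 - - [10/Oct/2023:13:55:37 +0000] "POST /wp-login.php HTTP/1.1" 403 199
198.51.100.7 - - [10/Oct/2023:13:56:01 +0000] "GET /index.html HTTP/1.1" 200 5120
"""

def suspicious_hits(log_text, patterns=("wp-login.php", "xmlrpc.php", ".env", ".git")):
    """Count, per client IP, requests that touch commonly probed paths."""
    counts = Counter()
    for line in log_text.splitlines():
        ip = line.split(" ", 1)[0]  # client IP is the first field
        if any(p in line for p in patterns):
            counts[ip] += 1
    return counts

print(suspicious_hits(SAMPLE_LOG))  # Counter({'203.0.113.5': 2})
```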
AWStats is a log analysis tool included in many cPanel-based web hosting plans, though it is somewhat limited in features.
Log rotation is the practice of moving old log entries into separate, archived files, rather than keeping a zillion lines in a single log file. Archived logs can still be useful for historical purposes. Depending on the specific software, the oldest entries may simply get deleted to save space.
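On Linux, rotation is usually handled by logrotate. A minimal sketch of a drop-in config (the log path is an example; adjust for your distro and software):

```
# /etc/logrotate.d/mysite: rotate weekly, keep 8 compressed archives
/var/log/httpd/access.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
```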
Some software lets you visually see activity from logs. These kinds of programs are referred to as dashboards. Dashboards are often used in SOCs, or security operations centers. For a small/personal website, you probably won’t have a monitoring dashboard. cPanel is an admin dashboard, with only limited log analysis tools.
2FA for all accounts
If you get phished or have a keylogger on your computer, someone will then be able to log into your accounts, such as an account for a web server, which they can then use to distribute malware or create a scam/phishing page on your website. That would be really bad. So 2-factor authentication is important because someone who knows your password still can’t log in to your accounts unless they also have your phone. Authenticator app OTP/TOTP 2FA is better than SMS-based 2FA. Some people even recommend having a dedicated authenticator device, like an Android tablet you never put on the internet. It’s actually quite a hassle to move from an old phone to a new one if you have a lot of authenticator stuff on it. The way to go from an old phone to a new one, assuming you use OTP 2FA, is to deactivate 2FA temporarily on all accounts, then set it up again and re-enable it on your new phone.
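As an aside, the codes that authenticator apps generate aren't magic; they're defined by RFC 6238 (TOTP). This Python sketch implements the algorithm and checks it against one of the RFC's published test vectors. Don't roll your own 2FA for production; use an existing app or library:

```python
import hmac, hashlib, struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the 30-second time counter."""
    counter = struct.pack(">Q", for_time // step)          # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T=59s, 8 digits -> "94287082"
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```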
Many people like to remotely administer servers via SSH. If you do that, make sure you use a strong password (or better yet, key-based authentication), use rate limiting with something like fail2ban, keep SSH and the server OS updated, and change the port so it’s not on the default one.
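As a sketch, a minimal fail2ban jail for SSH might look like this. The values are illustrative, and the shorthand time syntax like "10m" requires a reasonably recent fail2ban:

```ini
# /etc/fail2ban/jail.local (illustrative values)
[sshd]
enabled = true
# ban an IP for an hour after 5 failed logins within 10 minutes
maxretry = 5
findtime = 10m
bantime = 1h
```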
If you don’t use rate limiting, people can brute-force a login by trying many different combinations (when the attempts come from lists of common passwords, this is referred to as a dictionary attack). A way to do rate limiting for SSH is to use fail2ban. You can do rate limiting in WordPress using Loginizer. Even if you use some other CMS, you still need some sort of rate limiting; otherwise, people will keep brute-forcing the login until they eventually find a valid username and password combination. I also recommend not using usernames like “admin” for an administrator account. It’s also a good idea to have separate administrator and content accounts. For example, if you have a blog, don’t make posts with the admin account. Make a separate account with limited privileges that can only post articles. The administrator account, with a hard-to-guess name like SgdsfgdfgF45345a stored in a password manager, should only be used for software updates.
If your content-posting account and admin account are the same, people will try to guess what the username is based on your name. If your name is John Smith, and your username is johnsmith, then hackers will attempt to log in with that. You don’t want people to easily get access to admin privileges within a CMS, so make it harder by separating content vs. administration, even if you’re just one person running a personal site.
Separation of daily usage accounts vs. admin privilege accounts is an important security concept that extends beyond just websites.
Many hackers will attempt to log into your websites every single day. wp-login.php is the login page for WordPress sites, and my own server logs show constant automated requests for it, all of which get blocked by WordFence. XML-RPC is a way for an app to communicate with a WordPress server, as opposed to just viewing WordPress site content in a web browser.
Both XMLRPC and APIs (such as the JSON API in WordPress) need to be secured. If you use GraphQL, you need to secure that too.
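If nothing you rely on (such as the WordPress mobile app or certain plugins like Jetpack) actually needs XML-RPC, one common hardening step is to block it at the web server. A sketch for Apache 2.4, assuming .htaccess overrides are enabled:

```
# .htaccess: deny all requests for WordPress's XML-RPC endpoint
<Files "xmlrpc.php">
    Require all denied
</Files>
```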
Get rid of inactive accounts
If you’ve made an account for someone, like an email account, WordPress account, Linux user account, service account (an account used for software instead of a person), etc. and they no longer use it, delete or disable it. If you set up SSH keys but no longer use SSH on that server, get rid of it. Old and unused accounts/keys/etc are a way for attackers to compromise you.
Malware scans
Yes, there is malware even for Linux. While malware might not be a huge problem for Linux home computers (though it still exists), it’s a bigger problem for Linux web servers. As such, it’s good to run malware scans. ClamAV is a popular free and open source anti-malware solution. Some people say it’s not very effective, but it’s better than nothing, and it doesn’t cost anything. Of course, scanning will increase your server’s CPU usage, so you might want to schedule it for times when you don’t have a lot of traffic. Additionally, some shared hosts oversell capacity: if every customer on the server maxed out their allotted CPU (typically a single core for budget hosting), the server wouldn’t be able to handle it. So doing CPU-intensive stuff all the time on a web server could irritate your web host, and they might tell you to stop. But if you only do it every now and then, it should be fine.
Malware scans are like turn signals: most of the time, you’d be okay without them, but you use them all of the time for the rare occasions when you really do need them.
Of course, if you find malware on your server, getting rid of it isn’t enough. It means there’s a security issue that allowed someone to put malware on your server to begin with. Getting rid of the malware without making any changes to the software or accounts on the server means you’ll have a high chance of getting malware again. If there is malware on your server, get rid of it, install software updates, and reassess the PHP code on your site for vulnerabilities that could have let an attacker plant the malware, like remote code execution, file inclusion, or injection attacks. Then change your passwords.
VirusTotal is a malware scanner aggregation tool that can be used to scan files or websites for malware. You can try running VirusTotal on your own website to see if it finds anything malicious. One limitation of VirusTotal is that it will only scan what you tell it to scan, and for a URL it only sees the page as served to visitors (for example, the PHP source code on the server is different from the HTML that gets sent to the end user).
If a hacker puts up a malicious web page on a new page rather than maliciously modifying existing pages on your site, or adds a reverse shell or other thing that won’t be visible to people just browsing your website, then VirusTotal won’t find it. So it could be useful to look through your public www folder in Apache or whatever and see all the stuff that’s there, and scan it. Or just find checksums and compare them to known good ones of your files that you know aren’t malicious.
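A minimal sketch of the checksum idea in Python: snapshot SHA-256 digests of a known-good tree once, store the snapshot somewhere off-server, and diff against it later. The paths in the comments are examples:

```python
import hashlib, os

def hash_tree(root):
    """Map each file under `root` (relative path) to its SHA-256 hex digest."""
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digests[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return digests

# Snapshot a known-good tree once, then diff against it later, e.g.:
# baseline = hash_tree("/var/www/html")   # save this somewhere off-server
# current  = hash_tree("/var/www/html")
# changed  = {p for p in current if p in baseline and current[p] != baseline[p]}
# added    = set(current) - set(baseline)
```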
Polymorphic malware is malware that changes itself slightly to avoid traditional signature-based detection. Variants might have different file sizes, checksums, and randomized strings. Primitive means of detecting malware can be thwarted this way.
CMS vs. DIY coding
If you use a CMS, you can have a website with little to no coding: there’s still some configuration and technical stuff, but not really “real” web development. Alternatively, you could make your own web app from scratch (or at least with something like an MVC framework), but then you have more responsibility for keeping things secure.
If you’re writing your own code instead of using a CMS, make sure you know how to keep it secure. For example, if you’re using PHP and getting user input to put into a SQL query, make sure you use prepared statements in order to be secure. Some people try to do their own version of escaping/sanitizing/validating, but nothing beats prepared statements.
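PDO prepared statements are the canonical way to do this in PHP; the sketch below shows the same principle in Python with SQLite, since the idea is identical in any language: parameters are passed separately from the SQL, so user input can never be interpreted as SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# BAD: string concatenation lets the payload rewrite the query:
#   "SELECT * FROM users WHERE name = 'alice' OR '1'='1'"  -> dumps every row

# GOOD: a parameterized query treats the input strictly as data.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the payload matched nothing instead of dumping the table
```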
Learn about OWASP top 10 vulnerabilities
There are a handful of security vulnerability categories that are found in many different web apps. They include injection attacks (such as but not limited to SQL injection), broken authentication, sensitive data exposure, XML external entities, broken access control, security misconfigurations, cross-site scripting (XSS), insecure deserialization, using components with known vulnerabilities (as in, a lack of software updates), and insufficient logging and monitoring. The OWASP wiki has a ton of great information about web application security.
There are far more than just 10 different kinds of security issues that websites will face, but it’s important to at least know about the 10 most common ones. Some additional security issues you should be aware of are remote code execution, resource exhaustion, privilege escalation, remote file inclusion, local file inclusion, cross-site request forgery, and directory traversal.
It’s not really worth it to be concerned with hypothetical, obscure, difficult-to-exploit security issues, or stuff like zero-day vulnerabilities. Get the fundamentals right before focusing on more advanced stuff.
For a static website that is merely content displayed for users to view, there’s less of an attack surface. But if you’re developing a CRUD system (Create, Read, Update, Delete), where you take user input and put it into a database, then you need to be more careful. Not everyone has the same threat model though.
Backups
It’s important to back up your website’s data and software. I personally like to use git repos; on your server, you can pull from a repo to get the newest version. Just make sure the .git folder and any sensitive config files aren’t public (enforce this with something like .htaccess, or only put the public files in the www folder and keep everything else in a private place on the server). Many shared host dashboards will let you easily back everything up, typically as a .tar.gz file (a compressed tape archive, kind of like a .zip or .rar). On a VPS or EC2 instance without a graphical dashboard, you can write a shell script and add it to your cron table to automatically compress certain directories, then either keep the archives in a backup directory or upload them to a remote FTP server or private git repo.
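The backup script described above is usually a few lines of shell; here's the same idea sketched in Python with the standard tarfile module (the paths in the comment are examples):

```python
import os, tarfile, time

def backup_dir(src, dest_dir):
    """Compress `src` into a timestamped .tar.gz under `dest_dir`; return its path."""
    os.makedirs(dest_dir, exist_ok=True)
    name = time.strftime("site-backup-%Y%m%d-%H%M%S.tar.gz")
    path = os.path.join(dest_dir, name)
    with tarfile.open(path, "w:gz") as tar:
        tar.add(src, arcname=os.path.basename(src))
    return path

# e.g. backup_dir("/var/www/html", "/home/me/backups"), run daily from cron
```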
If your site somehow gets hacked and there’s malware on it, such as malicious modifications of legitimate files, one thing you can do is just get rid of everything and restore from a backup. Then update your software and make sure it’s not vulnerable so that the same exact thing can’t happen a second time.
Another reason to do site backups is because of ransomware. I’ve heard of hackers encrypting a website’s files, or encrypting databases, and then demanding a ransom from the website owner in order to get the decryption key. If you have site backups, you’ll never have to pay a ransom. Ransomware seems more common on desktops and laptops, but it’s possible on web servers too.
CDNs such as Cloudflare
A content delivery network can increase performance for delivering static assets to a user (such as an image or CSS file, but not for when the user wants to log in or perform a search). Some CDNs offer free tiers for personal sites, though they cost a little money for commercial sites. A CDN can increase performance in many circumstances, such as if your main server is far away from a user, but the user will instead fetch the static contents from a closer CDN server, as CDNs typically have many servers all around the world and deliver stuff to users based on their location.
In order to use Cloudflare (or any CDN) on your website, you’ll need to change your DNS so that your domain name points to the CDN servers. The CDN goes in between the users and your actual web server.
But aside from performance, CDNs can also help with protecting sites from DDoS attacks, and also block bad requests made from malicious users. CDNs attempting to block malicious requests won’t stop everything, but it’s an additional layer to help make your site a little more secure.
Google Analytics isn’t really a security-focused tool, but if you have a website, you might as well use it anyway.
A DDoS (Distributed Denial of Service) attack will strain your servers by overwhelming them with lots of traffic. The difference between a DDoS and a DoS attack is that a DDoS attack involves many computers, whereas a DoS attack can be carried out by just one device. DDoS attacks are typically more effective at overwhelming a server, though.
You can tell if you’re being DDoSed by checking resource usage of your servers. If you have a CDN, such as Cloudflare, it can help protect you from this kind of attack. However, DDoS attacks are really tame compared to other kinds of attacks. All they can do is temporarily make your server unavailable to users. DDoSing can’t get any sort of personal information or code execution or whatever. So it’s not a big deal.
Monitor your resource usage on your server. If you see anything unusually high, it might be because a post on social media went viral and linked to your site, or it might be because you’re getting DDoSed.
DDoSing can be done with lots of hacked devices in a botnet, all making lots of network requests. In a three-way handshake, there’s SYN, SYN-ACK, and ACK. Flooding a site with SYN packets to initiate lots of TCP handshakes is called SYN flooding, which is just one of many ways to do a denial of service attack. Another type of denial of service attack, called Slowloris, doesn’t even need many computers: a single computer opens many different connections with a server while using very little bandwidth. Web server programs can only handle a certain number of concurrent connections, so using up all the available connections means nobody else can use the server. Other types of denial of service attacks include reflection and amplification attacks, which abuse UDP-based protocols such as DNS or NTP.
Aside from network DDoSing, another way to slow down a site is with something called a fork bomb, which is a program that invokes itself multiple times, leading to exponential growth of resource usage. Of course, in order for someone to run a fork bomb, they’d need code execution, such as with a user account or a remote code execution exploit. If they have either of those, there are far worse things they can do than just a fork bomb. Of course, one way to use a lot of resources on a server without the need for a fork bomb would be CGI scripts with no rate limiting for their usage. CGI or Common Gateway Interface allows for a user to click something on a website that runs software on the back-end. If that CGI script uses a lot of resources, and there’s no limit to how often the user can run it, then they can increase the CPU usage of the server significantly by repeatedly making CGI requests.
Security issues like remote code execution, file inclusion, privilege escalation, or SQL injection are very severe. DDoSing is tame by comparison.
DNS and domain names
Even if your website is secure, you can still be compromised via DNS. DNS is like a phone book, but for servers: a domain name corresponds to an IP address. When you go to example.com, your computer performs what’s called a DNS lookup, and the DNS server tells it where to go for that site.
To get a domain name, you use a domain registrar, such as GoDaddy or Namecheap. You pay yearly to register it. After you register a domain name, you can change the DNS records for it so that it will point to your new web server.
Under normal circumstances, your domain name will point to your actual web server, or a CDN that points to your server. But if your domain registrar account or registrar itself gets compromised, then hackers can change the DNS records so that your domain name points to a malicious server instead.
Not only that, but there is another DNS issue called subdomain takeover. A domain name can have multiple subdomains, like mail.example.com, www.example.com, and myapp.example.com, and each subdomain can serve a different purpose and point to a different server. Maybe www.example.com is your landing page, but myapp.example.com is your new web app. For example, if you’re using Amazon AWS, the myapp.example.com subdomain might point to myserver.compute-1.amazonaws.com. But what if you eventually shut down the app? If you didn’t remove the DNS record, the subdomain still points to myserver.compute-1.amazonaws.com. Someone could then claim an AWS server at myserver.compute-1.amazonaws.com, and people who go to your subdomain would be taken to the attacker’s server instead.
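The takeover risk boils down to bookkeeping: which records point at targets you no longer control? A toy Python sketch of that check (the hostnames reuse the examples above; a real tool would pull records from your DNS zone):

```python
def dangling_subdomains(dns_records, active_targets):
    """Flag CNAME-style records whose targets you no longer control."""
    return {sub: tgt for sub, tgt in dns_records.items() if tgt not in active_targets}

# subdomain -> target, as pulled from your DNS zone
records = {
    "www.example.com": "landing.example-hosting.net",
    "myapp.example.com": "myserver.compute-1.amazonaws.com",  # app was shut down
}
# hostnames that still point at infrastructure you control
still_ours = {"landing.example-hosting.net"}

print(dangling_subdomains(records, still_ours))
# {'myapp.example.com': 'myserver.compute-1.amazonaws.com'}
```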
When you register a domain name, some registrant information is associated with it, and it’s publicly accessible. This is a big problem. What seemed like a good idea when they designed whois turned out to be a disaster. People use whois information for scams, phishing, and invasion of privacy. A regular domain name, with no whois privacy feature, will show your name, address, phone number, and email address. That’s not good. So you need to make sure you use some sort of whois privacy service such as WhoisGuard or DomainsByProxy. They will replace your contact info with their own.
Fake sites, phishing, typosquatting, and domain squatting
Some people make fake websites. If your company is coolcompany.com, someone could make a website called cool-company.com or wwwcoolcompany.com instead. They might use these kinds of fake sites for phishing. A related issue is typosquatting: if your website is example.com, someone could register the domain examlpe.com, and people who typed the domain name wrong would land on the malicious site instead.
If you let your domain name expire, someone else can register it, and then it’s no longer yours. So even if you pull the plug on your website, like getting rid of the server, it can be worth it to keep on paying the $10 or so per year to keep the domain name registered, in case you ever want to do something with it again.
Instead of squatting domains to extract money from the person who originally registered one and wants it back, another use of domain re-registration is to distribute malware. Say a company makes an app with a button that opens a web page at www.example.com, and then the company goes bankrupt. Some people might still be using the app, but the domain name will eventually expire. An attacker could then re-register www.example.com, and people who tap the button in the app would be taken to the attacker’s site instead. The same is true for social media links. If a YouTube video has a link in the description, the site owners might pull the plug on that site years later, as nothing lasts forever. People who find the video might still click the link and get taken to a malicious site instead, made by someone who knew people were still watching that video.
I’ve heard of WordPress sites that were modified in a malicious way in order to redirect visitors away from the legitimate site and to a scam site. One such example is WordPress sites that had some of their PHP code modified to redirect people to “you’re the billionth user, input your info to win a prize” kind of scams.
Even if a website itself isn’t malicious, the ads on it could be. So it’s best to go with more reputable advertising platforms. But even then, sometimes things slip through the cracks.
Can you get malware just from going to a website?
If someone hacks your site, they can use it to spread malware to other people. That would harm the visitors of your site, and also harm your reputation. That’s why it’s important to take security seriously BEFORE something bad happens. Many people wait until their site gets hacked, and then they commit to taking security more seriously. But that’s a reactive approach. You need to be proactive about security.
Use a password manager
When dealing with websites, you’ll have many different credentials. You could have a domain registrar account, web hosting account, SSH keys and Linux user accounts, a root account, API keys for APIs you use with your apps, a cPanel account, a WordPress admin account, a WordPress user account, a CDN account, a Google account for Google Analytics and Google AdSense, reCAPTCHA keys, an Amazon account for AWS S3 or EC2, accounts associated with plugins or services you pay for, and more. That’s a lot to manage. I’ve heard some people jokingly talk about keeping track of accounts with sticky notes, but that’s no good. Use a password manager. There are cloud-based ones like LastPass or 1Password, or if you’re really paranoid, you can use an offline, open source one like KeePass. Some people are concerned that cloud-based password managers could be compromised, but they do encrypt your data. The downside to an offline one is that if the storage device holding your KeePass database dies, you lose all your passwords, so back it up. There are pros and cons to cloud vs. offline password managers. But whatever you do, don’t just write your passwords on a piece of paper or in a text file/word document.
That being said, it’s not enough to simply use a password manager to store passwords. You also need to use strong passwords. Many password managers have built-in random password generators. Even if you don’t want to use those, you still need to use passwords that are long and complex. Never reuse passwords, and change passwords every now and then. If you have a security compromise, make sure you get rid of the issue, and then change all your passwords. If a website gets compromised in a data breach (i.e. SQL injection and database table exfiltration), then you need to change your password there. If you reuse the same password for multiple accounts, then a hacker who knows your password to one account will try it on many different sites/servers.
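Most password managers will generate passwords for you, but as a sketch, here's how you could generate one yourself in Python using the secrets module (which, unlike random, is designed for cryptographic use):

```python
import secrets
import string

def random_password(length=24):
    """Generate a password from letters, digits, and punctuation using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # -> a 24-character random string; store it in your password manager
```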
Separation of concerns
If you’re only making a personal website and you’re on a tight budget, it’s fine to have multiple things on the same server, like multiple websites and even a mail server. If a personal website gets hacked, it sucks, but it’s not the end of the world. But for a serious business, having everything on one server would not be acceptable. In an enterprise environment, you’d want one server to be a web server, and another to be a mail server. That way, if one server gets compromised, they’re not all compromised. Just one thing at a time. These servers might be virtual servers, which are all VMs on the same hypervisor. But even then, they are still technically separate servers because they have their own operating systems and they’re isolated from each other.
Pen testing
Pen testing is seeing if you can hack your own stuff. This is done in order to find security weaknesses that you can then fix. Sometimes, a company might even pay someone else to do a pen test. But you should really only do pen testing in your dev/testing environment, not in production, due to server providers’ terms of service. You might be allowed to use nmap/Metasploit/WPScan on your local server at home, but the same is not true of a server you’re renting from a hosting company. You’re paying to use it, but you don’t own it, so you’re not allowed to do just any old thing on it. If you really want to do pen testing in production, ask your server provider first to see if you’re allowed, so you won’t get in any trouble.
Geoblocking for login attempts
It’s possible to set up geoblocking for a number of things. The most common example that most people have heard of is media companies blocking certain countries due to copyright issues. Someone in one country might not be able to watch a video or streaming service if there are issues with intellectual property rights and how the media company can distribute the material. But geoblocking can also be useful for security. If you make a site for a local business, maybe you want to block all IPs outside of your own country. You might also want to limit which countries or subnets can attempt to log into your site. That being said, IP addresses are typically leased, not 100% static. So don’t set up your site so that it can only be logged into from a single IP, because the next time you reset your modem and router (such as in a power outage), you might get a new IP address. But if you live in the US, you can get away with blocking all non-US IPs from being able to attempt to log in.
What software you use for geoblocking really depends on the kind of software you’re running on your server.
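As one illustration, with Apache and the legacy mod_geoip module enabled server-wide, a commonly shown pattern for restricting a WordPress login page to US IPs looks something like this. Directive details vary by module and Apache version, so treat this as a sketch:

```
# .htaccess sketch: only US IPs may reach wp-login.php
# (assumes mod_geoip is installed and GeoIPEnable is on server-wide)
<Files "wp-login.php">
    SetEnvIf GEOIP_COUNTRY_CODE US allow_country
    Require env allow_country
</Files>
```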
Make sure you know how to contact your host
If there’s an issue with your web server or domain registrar, make sure you know how to contact customer support. They might not be responsible for security, but they might be able to help you anyway.
Debug or development features should not be enabled in production
When you’re developing software, you might turn on debug features that are helpful for developers, but should not be seen by end users. It’s possible to make the mistake of having the same debug features you used in your dev and testing environments enabled in your production environment. But this is bad for security. Debug features might let users see stack traces when there’s an exception, which could then allow hackers to learn more about the software that’s on the server. That’s not good.
Looking through my own server logs, it seems like some hackers look for backup files or even developer-related files (i.e. an environment configuration file, git repo config files, etc). These types of things don’t exist on my servers, but that doesn’t stop people from making HTTP requests for them anyway. I guess some people accidentally put backups in a public place, or maybe have a .git directory in a public www folder without the proper .htaccess stuff for it.
Don’t make things public if they don’t need to be. It’s a common mistake though. If you’re using the Apache web server, it uses files called .htaccess. These can be used to make a folder inaccessible to the public, like by adding a username and password to access it. If you don’t configure .htaccess files correctly, it could allow hackers to view files that they’re not supposed to see.
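For example, password-protecting a directory with Apache basic auth looks roughly like this. The paths are examples, and the .htpasswd file must live outside the public www folder:

```
# .htaccess in the directory you want to protect
AuthType Basic
AuthName "Restricted area"
# created beforehand with: htpasswd -c /home/me/.htpasswd myuser
AuthUserFile /home/me/.htpasswd
Require valid-user
```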
robots.txt can be a double-edged sword
If you have a website, search engine bots, called crawlers, will visit it, and your web pages can end up in search engine results. Sometimes, people don’t want parts of their site to be indexed in search engines. The way to limit what a search engine can index is with a file called robots.txt. In theory, search engine crawlers will honor it and ignore the files you tell them to leave alone. In reality, bots can still crawl your website even if you tell them not to, and hackers specifically look for robots.txt files to see what the website owner doesn’t want to show up in a search engine. That could include private APIs and other stuff that could have private info or be hacked. When your robots.txt file says “don’t index somepage.html,” hackers interpret it as “somepage.html is of interest.”
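For illustration, here's what that leak looks like. The paths are made up, but every Disallow line is readable by anyone, crawler or not:

```
User-agent: *
Disallow: /admin/
Disallow: /internal-api/
Disallow: /old-backups/
```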
In reality, if you really want to keep something private from search engines, you’re better off enforcing that with .htaccess. In more advanced cases, some developers can even make it so that, instead of getting a 403 forbidden error when someone is attempting to visit a private file or directory, it will give a fake 404 instead, because a 403 forbidden message is still an acknowledgment that it exists. There might be sensitive information even in the path or filename of a file on a website.
GitHub is an example of a site that does the fake 404 thing. If you have a private repository, you don’t want anyone to know that it exists. But if someone were to visit the URL for it, a 403 response would indicate that a repo with that name is owned by you, which is divulging some info that you don’t want to be public. So whether you go to an actual 404 page of a repo that doesn’t exist, or you just go to a private repo page that you don’t have permission to view, you will get the same 404 page each time.
You might think that you can put a file or folder on your server and that it’ll be private because there are no links to it. But that’s wrong. Someone can brute-force directories and files, even when there are no links to them anywhere, using tools that do something called directory enumeration. DirBuster is an example of a directory enumeration tool. So really, set your file permissions such that, even if someone uses a directory enumeration tool, they will not be allowed to view a file or folder. And make sure there is no private information in the names of the files or folders.
AWS has many different services, but one seems to garner media attention due to how many developers mess it up. If you want to store files in AWS, you can use AWS S3, which stands for Simple Storage Service. Files in S3 are stored in things called buckets, and a bucket can be public or private. Many people accidentally make their S3 buckets public, even when they have private data that obviously isn’t supposed to be seen. S3 has an XML API which can be queried to get a list of files in a bucket: the response contains many <Key> fields, each with a filename. If you add max-keys=2147483647 to the query string, you can request more than the default of 1,000 keys per response. I wrote a proof-of-concept S3 bucket scraper that can find all files in a public bucket and then download all of them. It’s not publicly available because I don’t want to encourage people to do anything bad, but it was actually really easy to write, so I’m sure other people have made similar tools.
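As a sketch of why public buckets leak so easily, here's how little code it takes to pull filenames out of an S3-style listing. The XML below is a trimmed, made-up example; real S3 responses wrap everything in an XML namespace, which the parsing would need to account for:

```python
import xml.etree.ElementTree as ET

# Trimmed example of the XML a "list objects" request returns for a public bucket.
LISTING = """\
<ListBucketResult>
  <Name>example-bucket</Name>
  <Contents><Key>reports/2020-q1.pdf</Key></Contents>
  <Contents><Key>backups/db-dump.sql.gz</Key></Contents>
</ListBucketResult>
"""

# Each <Contents> element holds one <Key> (the file's name/path in the bucket).
keys = [c.findtext("Key") for c in ET.fromstring(LISTING).findall("Contents")]
print(keys)  # ['reports/2020-q1.pdf', 'backups/db-dump.sql.gz']
```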
If you accidentally make your S3 bucket public, people can and will download all the files within it.
If you don’t know how to use something securely, don’t use it at all.
Make sure your computer itself is secure
As a web developer, you don’t just need to secure your web server. You also need to secure your own computer. There are many misconceptions about desktop security, like that only Windows gets malware (which is not true) or that antivirus software is useless (also not true). I recommend using uBlock Origin, HTTPS Everywhere, Malwarebytes, Ninite Updater, Acronis, and a VPN to help you stay more secure. Just make sure you’re not using a shady free VPN. If a VPN is free, your data is the product being sold. Use a paid VPN instead, such as ExpressVPN.
Home network security
In addition to securing your web server and computer, you also need to secure your home network. For a router, I recommend using DD-WRT or OpenWrt, which are third-party router firmware projects that can get security updates for longer than first-party router firmware. Or you can install something called pfSense on a regular computer, turning it into a router. The caveats with pfSense are that the computer needs two ethernet ports (one to your modem, and one to your switch), and that you’ll also need a separate switch and wireless access point (such as a Ubiquiti UniFi). Getting a regular home router and installing either DD-WRT or OpenWrt on it might be a cheaper option, though not every router is compatible with them. You could also try a Fortinet FortiGate, which is a router/firewall with a lot of cool security features. FortiGates run FortiOS, which has a browser-based management interface. Although FortiOS gets security updates, updates and support are subscription-based rather than free, and FortiOS is geared more towards businesses than home users.
To remain secure, it’s best not to use port forwarding or remote access on your home router. And be sure to install updates. If your router gets compromised, your entire network is compromised, including your computer that you use to administrate your website.
One way to improve your security is with a bug bounty program. A bug bounty program means that a security researcher can find a security issue on your website, then report it to you, and you give them money for helping you. Instead of dealing directly with security researchers, you can use a company like HackerOne, which is a liaison between researchers and companies.
A bug bounty doesn’t have to be a huge amount, especially for a small project. But any sort of bug bounty program is better than nothing at all.
Contact info for security researchers (security.txt)
robots.txt is for search engines, but some people are proposing the idea of a security.txt file on a website, which would have info for how security researchers can contact you if they find a security problem with your site. Security researchers are not the same as malicious hackers, though people who are not well-versed in security might conflate the two. security.txt isn’t really a standard yet, but it might be useful. Or if you’re using a CMS, you could just make a section about security in an “about” page.
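Under the proposal, the file lives at /.well-known/security.txt and might look something like this (contact details here are placeholders):

```
Contact: mailto:security@example.com
Encryption: https://example.com/pgp-key.txt
Preferred-Languages: en
```

The Contact field tells a researcher where to report issues, and the optional Encryption field points to a PGP key so reports can be sent confidentially.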
When a security researcher finds a security issue with your site or app, they will tell you about it privately first (e.g., by email) instead of making it public, giving you some time to fix the issue before mentioning it openly. This is called responsible disclosure. You get time to patch the problem, and the researcher eventually publishes the details, proving they did real research and know a thing or two about security, but only once the information no longer applies because you’ve fixed it.
Responsible disclosure might look something like this: a security researcher emails you saying there’s a SQL injection vulnerability in your website’s contact form, so you research how to make a contact form that isn’t vulnerable to it and change your code accordingly. Then, once your site is no longer vulnerable, the researcher publicly announces the SQL injection vulnerability they found. They do this to build up a portfolio of people they’ve helped.
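The standard fix for SQL injection is parameterized queries. Here’s a minimal sketch using Python and SQLite to show the difference (the same idea applies to PHP’s PDO prepared statements on a LAMP stack):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (name TEXT, body TEXT)")

def save_message_vulnerable(name, body):
    # DON'T do this: user input is pasted straight into the SQL string,
    # so input like  x'); DROP TABLE messages;--  changes the query itself.
    conn.execute(f"INSERT INTO messages VALUES ('{name}', '{body}')")

def save_message_safe(name, body):
    # Parameterized query: the driver treats the values as data, not SQL,
    # so quotes and semicolons in user input are harmless.
    conn.execute("INSERT INTO messages VALUES (?, ?)", (name, body))
```

With the safe version, even a name like O'Brien or a deliberate injection payload gets stored as plain text instead of being executed.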
Not all disclosures are responsible disclosures.
Captchas are ways to make it harder for bots to use your website or app. A very popular captcha offering is Google reCAPTCHA, which makes it easy to add captcha to your site. Captchas can be useful for things like account signup, comments, contact forms, and more.
Older captchas required users to input things, like typing text from distorted images of words (which bots would struggle with), or selecting all pictures of traffic lights. But modern reCAPTCHA allows for “invisible captcha” which simply analyzes the user and gives them a score to determine how likely it is that they’re human vs. a bot.
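Whichever variant you use, the important part is verifying the user’s captcha token server-side against reCAPTCHA’s siteverify endpoint. A sketch (the secret key is a placeholder, and the `post` parameter is only there so the function can be exercised without a real network call):

```python
import json
import urllib.parse
import urllib.request

# Google's official server-side verification endpoint.
VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def recaptcha_passed(secret_key, token, post=None):
    """Send the captcha token your form received to Google and return
    whether it passed. `post` can be swapped for a stub in tests."""
    body = urllib.parse.urlencode({"secret": secret_key, "response": token}).encode()
    if post is None:
        post = lambda url, data: urllib.request.urlopen(url, data).read()
    result = json.loads(post(VERIFY_URL, body))
    return bool(result.get("success"))
```

Never trust a captcha that is only checked in client-side JavaScript; a bot can simply skip that check, so the server must do its own verification.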
Red team vs. blue team security
Red team security is offense (learning how to hack); blue team security is defending from attackers. During an organization’s security audits, a red team might probe for security weaknesses. This article is mostly defense-centric, but you can learn a great deal about security by studying offensive security too. I’d recommend making a Kali Linux VM in VirtualBox and then using something like Virtual Hacking Labs (costs money) or Hack The Box (free, at least for some of their content). Red team work gives you better insight into how hackers think and what they do. Security labs involve hacking intentionally insecure servers, and they’re not only educational, they’re really fun.
Some people even say “purple team” to describe skills that combine offensive and defensive security.
In addition to a website, you might want your own email. It looks more professional to have email@yourdomain.com than a generic firstname.lastname address from a free provider. However, email can be tricky. Running your own mail server comes with lots of problems, so you’re better off paying for email service from either your web host or a dedicated email provider. You can use webmail options like Horde, RoundCube, G Suite, or Office 365, or alternatively go with a traditional mail client. Shared hosting providers typically have an inexpensive email offering where you get a decent number of email accounts under your own domain name. With such a service, you don’t have to worry about configuring sendmail, postfix, exim, or anything like that, and you can even administrate email accounts via cPanel.
I have my own mail servers, but I basically pay a company so that they’re in charge of configuring and securing them, so that all I have to do is log into my email accounts and not worry about anything else.
If you have a contact form on your website, you can set it up to send emails to an email address on your mail server. You absolutely will get spam unless you use captcha and spam traps. I once made the mistake of setting up a contact form on a website and forgot to add captcha to it. I got lots of spam as a result.
It’s also not a good idea to post your email address directly on your website, because bots crawl websites looking for email addresses to add to spam mailing lists. One option is to only display the address after someone completes a captcha. Another is to open something like MS Paint, Photoshop, or GIMP, write the address as text in an image, and save it as a PNG or JPEG. Visitors to your site can read the address, but most bots can’t, because they only parse text and don’t analyze images for the text inside them (it’s possible, but less common due to the added sophistication required).
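A lighter-weight trick in the same spirit (hiding the address from naive crawlers, not a substitute for a captcha gate or an image) is to emit the address as HTML numeric character entities. Browsers render it normally, but scrapers matching the literal @ pattern miss it. A sketch:

```python
def entity_encode(address):
    """Turn each character of an email address into an HTML numeric
    entity. Browsers display it as normal text, but a regex-based
    scraper looking for '@' in the page source won't find one."""
    return "".join(f"&#{ord(ch)};" for ch in address)
```

This only defeats unsophisticated bots; a scraper that decodes entities will still find the address, so treat it as a speed bump rather than real protection.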
For WordPress, a good contact form plugin is Contact Form 7, but if you use it by itself, you will get a lot of spam messages. Use it in conjunction with Akismet, Google reCAPTCHA, and/or Contact Form 7 Honeypot to cut the spam down.
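The honeypot trick behind that plugin is simple enough to sketch in any language: add a form field that’s hidden from humans with CSS, then reject any submission that fills it in. The field name below is made up:

```python
def looks_like_spam(form_data, honeypot_field="website_url"):
    """Honeypot check. Humans never see the hidden field, so a
    non-empty value almost certainly came from a bot that auto-fills
    every input it finds."""
    return bool(form_data.get(honeypot_field, "").strip())
```

Honeypots are cheap and invisible to real users, which makes them a nice complement to a captcha rather than a replacement for one.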
Many people think spam isn’t a thing anymore because they encountered it more in the early 2000s. Spam is very much alive; it’s just that popular email services like Gmail are good at filtering it. Once you have your own website, contact form, and mail server, you’ll see that spammers still exist.
Don’t display images in emails. An <img> tag can be used maliciously for a CSRF attack: simply loading the image sends a request to a URL of the sender’s choosing. Text is the only safe thing to display in email. Email attachments can also be malicious, even if the attachment is a PDF, DOC, or DOCX file.
Hackers take the path of least resistance
Most people take the easy way out. This even applies to hackers. Most hackers will try the easiest types of attack vectors: weak passwords, known security issues, common misconfigurations, port and vulnerability scanners, publicly available security exploits (such as on exploit-db), and OWASP top 10 stuff (often using a tool to carry out an attack rather than doing it on their own). If you concentrate on getting the essential, simple stuff secure, you should be good, for the most part. Sophisticated attacks are comparatively rare. The most common thing I notice on my servers is just people trying to log in. Nothing complex about that.
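A quick way to see this for yourself is to count failed logins per source IP in your server’s auth log. A sketch, with made-up sample lines in the style of an SSH auth log:

```python
import collections
import re

# Hypothetical sample lines resembling /var/log/auth.log entries.
SAMPLE_LOG = """\
Jan 10 03:12:44 web sshd[811]: Failed password for root from 203.0.113.5 port 52211 ssh2
Jan 10 03:12:50 web sshd[811]: Failed password for admin from 203.0.113.5 port 52214 ssh2
Jan 10 03:13:02 web sshd[815]: Failed password for root from 198.51.100.7 port 40100 ssh2
"""

def failed_logins_by_ip(log_text):
    """Count 'Failed password' lines per source IP address."""
    counts = collections.Counter()
    for line in log_text.splitlines():
        m = re.search(r"Failed password for \S+ from (\S+)", line)
        if m:
            counts[m.group(1)] += 1
    return counts
```

Run something like this against a real auth log and you’ll usually find a handful of IPs hammering away at common usernames, which is exactly the low-effort attack the paragraph above describes.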
The reason why hackers can take the path of least resistance is that, even if your website doesn’t have easy security issues, plenty of other websites will. What might be blocked by your site’s WAF might work on some other less secure site.
What should I do if I think my site has been hacked?
There are companies you can pay to help you recover after you’ve been hacked, and it’s also good to let your service provider know. A good strategy is to do routine backups; once you’ve been hacked, notify your host, reset your server, and restore from a backup, making sure the backup itself isn’t backdoored. If you merely revert to a previous backup after getting hacked, you’ll get hacked again, because the underlying security issue is still there. So install software updates, review your code for anything insecure, and be sure to change your passwords, API keys, and SSH keys.
There are different ways to get hacked, so your response really depends on what happened. You figure out what happened by looking through server logs, which is why logging and monitoring are so important; without them, you have no insight into how you got hacked or what the hackers did on your server.
If you just fell for a phishing site and gave out login info, that doesn’t necessarily mean there’s malware or a security issue on your site, only that someone stole your credentials. Of course, they could have used those credentials to make malicious modifications within the CMS, but that’s not a guarantee. Now suppose someone found a remote file inclusion (RFI) vulnerability on your site and used it to upload a web shell. Even if you’ve been making routine backups for a while, restoring a backup that still contains the web shell won’t help, because the site is still insecure. This is why inventory and asset management are important: know what’s on your servers so that, if something bad happens, you can tell the difference between the normal files and the malicious modifications.
The above covers what to do if you use a shared host for a CMS site. If you’re using something more hands-off like a VPS or AWS EC2, you might want to periodically take and save VM snapshots so you can revert to a known-secure version if there’s a security issue (though, again, it can sometimes take a while to discover that your site has been compromised, in which case your most recent snapshots might also contain the rootkit or web shell). Another approach is to ditch the VM or EC2 instance entirely and start fresh, although that’s more complicated when you also have data in a database.
Sometimes, hacking doesn’t mean malware. Sometimes, hacking can mean that someone exfiltrated private data from your server because a form or search feature on your site had a SQL injection vulnerability. In that case, if you have a website where there are users who have signed up with your site, then you might want to email them letting them know there was a security issue and that they should change their passwords. But there’s no point in changing a password before you’ve made sure that you got rid of the security issue to begin with. Transparency is important following a data breach.
Sometimes, it can be hard to tell whether a site has been hacked. Maybe one of your users noticed weird behavior, and it was caused by malvertising rather than malware hosted on your server. Malvertising is malicious advertising served through Google AdSense (or another ad network), and it wouldn’t actually be on your server at all.
Another thing is that a user might think your site gave them malware when they actually got it from another source; some malware even changes DNS settings to lead people to fake websites. And even when there is malware on a web server, it can try to be stealthy. Hackers sometimes use traffic distribution systems (TDSes), which serve visitors different content based on attributes like OS, browser, and browser version. One person might visit a hacked website and see no difference at all, whereas someone else gets redirected to a different site (something that happened with some insecure WordPress websites in recent months). Maybe a hacker has a browser code execution exploit that only affects 64-bit Firefox 71 on Windows 10, and Firefox 72 fixed the issue; a TDS can check whether a visitor is even vulnerable to the exploit and only deliver it when it can actually run on their system.
Other types of website malware include cryptominers, which drive up CPU usage on a visitor’s computer to mine cryptocurrency for the hacker; ransomware; mass mailers; and browser lockers, which prevent the user from closing the browser and tell them to visit a fake tech support website. However, some website malware won’t make itself known to the user at all; some criminals prefer to be stealthy.
Again, your response to a hack depends on what happened. Sometimes your site might be hacked without you even knowing it, because some people put a lot of effort into not getting caught. This is why logging and monitoring are important, as well as keeping your software up to date, using strong passwords, and making sure the code you write is secure against at least the OWASP Top 10.
Attribution for a hack is very difficult, and it’s usually not worth trying to track down whoever hacked your site. In all likelihood, the person who hacked it neither knows nor cares who the site owner is. In rarer circumstances, people engage in “hacktivism” and the like, but for the most part, hacking is just petty crime for making cash. It’s nothing personal.
If you wait until you’ve been hacked to start taking security seriously, then you’re doing it wrong. An ounce of prevention is worth a pound of cure.
Security is hard. There’s no getting around it. But it’s still important to try and follow security best practices. If you don’t feel confident with your ability to secure a website, do more research before putting anything online. Maybe start with a local web server that can only be accessed from your home network, not the internet. Play around with that until you feel more competent. A website is both an asset and a liability.