Use a blacklist of bad IPs on your linux firewall (tutorial)

edited April 23 in Technical

This has probably been discussed already, but I decided to write a mini-guide for anyone interested, because a) I am bored and b) I've seen ipset being used less efficiently than it could be.

Well, if you are like me, you run fail2ban but are still annoyed by all the failed ssh and nginx attempts in your logs. It would be nice to feed a blacklist like AbuseIPDB into your firewall, but how do you keep up with thousands of IPs while keeping things fast and clean at the same time?

This is where a marvelous feature of the Linux kernel comes into play: IP sets.

ipset maintains a set of entries, most commonly IP addresses, stored as a hash table, which makes look-ups O(1) while using very little memory (a couple of MB for 100,000 IPs). The whole set can then be referenced by a single iptables rule.
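To make the mechanics concrete, here is a minimal hand-run example (needs root; the set name testlist and the addresses are just placeholders):

```shell
# Create a set of single IPv4 addresses and put a couple of entries in it
ipset create testlist hash:ip
ipset add testlist 203.0.113.7
ipset add testlist 198.51.100.23

# One iptables rule now covers every address in the set
iptables -I INPUT -m set --match-set testlist src -j DROP

# Membership check: exit status 0 if the address is in the set
ipset test testlist 203.0.113.7

# Clean up the experiment
iptables -D INPUT -m set --match-set testlist src -j DROP
ipset destroy testlist
```

Adding or removing an entry never touches the iptables rule itself, which is what keeps updates cheap.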

So here is how to use it:

1. Install ipset:

# apt install ipset

2. Choose your favorite list

There are various lists out there. For example, this repository aggregates a few sources along with AbuseIPDB: https://github.com/borestad/blocklist-abuseipdb, and there is also https://www.blocklist.de/en/export.html

I aggregate my own here: http://abuse.myip.cam/allips.txt. It's a bit aggressive, so use it if you like, but at your own risk.

Take some extra care when you are using external sources to avoid unintended blocks.
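One cheap sanity check before loading an external list is to drop anything that isn't a plain IPv4 address and anything in a private or loopback range, so you can't lock out your own LAN or the host itself. A sketch, assuming GNU grep; the filenames and sample addresses are illustrative:

```shell
# Demo with a tiny sample list (filenames are just examples)
printf '10.0.0.1\n192.168.1.5\n203.0.113.9\nnot-an-ip\n' > iplist.txt

# Keep only well-formed IPv4 addresses, then drop private/loopback ranges
grep -P '^([0-9]{1,3}\.){3}[0-9]{1,3}$' iplist.txt \
  | grep -vE '^(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.|127\.|0\.)' \
  > iplist.clean.txt

cat iplist.clean.txt   # only 203.0.113.9 survives
```

It's also worth grepping the cleaned list for your own server's and home connection's IPs before loading it.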

3. Let it roll

The following script is what I use. You can put it in cron and run it as root fairly often, say every 10-15 minutes. The idea is to download the IPv4 list and build an ipset restore file; the set is flushed and reloaded in less than a second for 70k IPs. Finally the iptables rule is added, if it's not already there:

#!/bin/bash

# URL of the IP list, use whatever you like
URL="http://abuse.myip.cam/allips.txt"

# Temporary file where the IP list will be stored
IP_LIST="/tmp/iplist.txt"

# Temporary file for ipset restore commands
IPSET_RESTORE_FILE="/tmp/ipset_restore.txt"

# Download the latest IP list
if ! wget --compression=gzip -nv -O "$IP_LIST" "$URL"; then
    echo "Failed to download IP list from $URL"
    exit 1
fi

# Prepare the ipset restore file
echo "create blacklist hash:ip maxelem 200000 -exist" > "$IPSET_RESTORE_FILE"
echo "flush blacklist" >> "$IPSET_RESTORE_FILE"

# Append the add commands, keeping only lines that look like an IPv4 address
# (sed is much faster than a shell read loop for tens of thousands of lines)
grep -P '^([0-9]{1,3}\.){3}[0-9]{1,3}$' "$IP_LIST" \
    | sed 's/^/add blacklist /' >> "$IPSET_RESTORE_FILE"

# Apply ipset changes
ipset restore < "$IPSET_RESTORE_FILE"

# Check if the iptables rule exists, if not, add it
if ! iptables -C INPUT -m set --match-set blacklist src -j DROP 2>/dev/null; then
    iptables -I INPUT -m set --match-set blacklist src -j DROP
fi
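To run it from cron every 15 minutes, assuming you saved the script as /usr/local/sbin/update-blacklist.sh (path and schedule are just examples):

```shell
# /etc/cron.d/update-blacklist
*/15 * * * * root /usr/local/sbin/update-blacklist.sh >/dev/null 2>&1
```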

You can verify what's in the ipset by using:

# ipset list | more
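A few more ways to poke at it (needs root; 203.0.113.7 is a placeholder address):

```shell
# The header of `ipset list` includes the entry count
ipset list blacklist | head -n 8

# Check whether a specific address is currently blocked
ipset test blacklist 203.0.113.7

# Watch the packet counters on the DROP rule grow
iptables -vnL INPUT | grep blacklist
```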

If you have any good blacklist source to suggest, be my guest. It's a pity that abuseipdb charges so much for the basic plan.

Feel free to ask if there is anything confusing in the script.

Comments

  • @itsdeadjim said: If you have any good blacklist source to suggest (...)

    I run the CrowdSec-plugin on OPNsense. I am under the impression that it comes with a (daily updated) blocklist, and that it adds source IPs from attacks to my firewall to their list.

    I don't pay for it, but I'm not sure (read: no idea) where the blocklist is or how to download it manually.

  • I kept my VPS shut down! Also works super fine.

  • edited April 24

    fail2ban ftw
    Trust no list; personally, I don't think dropping traffic just because an IP is on a list is the way to go.

    Tutorial useful for some people, thanks for your contribution.

  • Thanks for this, I've found it very useful in our environment.

    One issue I am having: when I use your IP list it adds all the IPs to the ipset, but if I use another txt file it only adds the last IP in the file for some reason.

    Do you perhaps know why this would be?

  • Extra spaces or CRLF line endings, which make the lines fail the pattern?
    Try downloading the file manually and running grep -P '^([0-9]{1,3}\.){3}[0-9]{1,3}$' on it to see.
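    That failure mode can be reproduced: Windows line endings leave an invisible \r on every line, so the regex's $ anchor fails to match, except possibly on the last line when the file lacks a final newline. Stripping CRs first fixes it (demo input below is illustrative):

```shell
# Two valid IPs, but saved with CRLF endings
printf '203.0.113.9\r\n198.51.100.4\r\n' > iplist.txt

# Strip carriage returns before matching; this counts 2 matches,
# while grepping the raw file would count 0
tr -d '\r' < iplist.txt | grep -cP '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
```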

  • edited October 6

    There are too many "bad" IPs on the internet; putting them all in RAM will exhaust it and draw a lot of CPU processing that particular netfilter rule whenever traffic arrives.

    Besides, static lists are not a solution for dynamic behavior. An IP could be good, then turn bad, then turn good again... Nothing is written in stone.
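    One way to soften the staleness problem, for what it's worth: ipset supports per-entry timeouts, so entries age out on their own unless the feeder script keeps re-adding them. A sketch (needs root; the 24-hour timeout and the address are arbitrary):

```shell
# Entries expire after 86400 seconds unless refreshed
ipset create blacklist hash:ip timeout 86400 -exist
ipset add blacklist 203.0.113.7 timeout 86400 -exist
```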

  • skhron Hosting Provider

    @Janevski said:
    There are too many "bad" IPs on the internet; putting them all in RAM will exhaust it and draw a lot of CPU processing that particular netfilter rule whenever traffic arrives.

    If your task is to drop all traffic from "bad" networks, aggregate them first (so, less RAM consumption), and if performance is your concern (it is not for most of the LE* world), I recommend using the routing table to blackhole them.
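    For the record, a blackhole route looks like this (needs root; the prefix is a placeholder). Note that it works by refusing to route packets to those destinations, so replies to "bad" sources are suppressed rather than inbound packets being filtered rule-by-rule:

```shell
# Null-route an aggregated network
ip route add blackhole 198.51.100.0/24

# Inspect and remove
ip route show type blackhole
ip route del blackhole 198.51.100.0/24
```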


  • As noted, these kinds of lists have to be updated routinely. FireHOL's update-ipsets (and its blocklists) can conveniently assist with that task; iprange can reduce your ipsets into a form the kernel matches faster.
    It's not a bulletproof solution; at best it removes some unwarranted "noise" from the logs. The same applies to blocking some rogue ASNs or known scanners. There are freemium and/or commercial services out there offering lists/ipsets of the latest Zmap scanners, attackers, and the like, but those still aren't the bulletproof solution you may desire. Some people may want to pick country IP blocks for some of their apps/servers.
    Once you've built your ipsets and deployed a mechanism to keep them updated, you may want to use netfilter's recent module to build dynamic lists of IPs hitting ports they weren't supposed to hit, and fail2ban to deal with misbehaving clients that eventually reach your applications.
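    A minimal sketch of the recent-module idea described above (needs root; the trap port 23 and the 10-minute window are arbitrary choices):

```shell
# Remember anyone touching a trap port, and drop all of their
# traffic for 10 minutes afterwards
iptables -A INPUT -p tcp --dport 23 -m recent --name portprobe --set -j DROP
iptables -I INPUT -m recent --name portprobe --rcheck --seconds 600 -j DROP
```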
