
Hardening OpenBSD Internet Servers
Packet Filter and IP Filter on Non Firewalls

Traditionally, firewall software has run on computers with two or more network interfaces to control the flow of traffic between them. Increasingly, firewall software is also run on a single machine to protect only that machine. On this page I've tried to cover everything you need to know to use IP Filter or Packet Filter as a single-host firewall, and to discuss when such use may or may not be appropriate.

Packet Filter and IP Filter

Firewall software such as IP Filter and its 3.0 replacement, Packet Filter, has traditionally been used to create firewalls on computers with two or more network interfaces. In this configuration it protects computers on the inside from unauthorized access from the outside, usually the Internet. A firewall may also control the types of network traffic that are allowed to leave the local network. Adequate coverage of the issues in setting up a firewall would take a book. If you are going to use Packet Filter, it's probably best to read the OpenBSD documentation at http://www.openbsd.org/faq/pf/ . If for some reason you want to use the older IP Filter firewall, you should visit the IP Filter home page, which has a concise set of examples that cover most of IP Filter's capabilities. There was an "ipf HOWTO" but that seems to have disappeared. When I wrote this, PF had only recently replaced IP Filter and documentation was somewhat sparse. After more than a decade as OpenBSD's official firewall, PF is well documented and accepted. I certainly would not wish to switch to an older product that has not been an integral part of OpenBSD for more than a decade. When the change was made I went with what came with OpenBSD. Specific adjustments were necessary, and some changes were not documented; see the OpenBSD 2.9 to 3.0 page for details. The OpenBSD developers were quite helpful when I asked about the issues I encountered. The concepts and filter rules are very similar but not identical, the handling of state related rules is entirely different, and the controlling programs and options are also very different.

Depending on the OpenBSD version, Packet Filter or IP Filter can also be used as a more flexible and powerful replacement for TCP Wrappers, protecting only the computer on which it runs. The advantage of a firewall is that it allows complete control of network traffic before it reaches any IP port. Incoming UDP and ICMP traffic, as well as any outgoing traffic, can be controlled. TCP Wrappers cannot control anything but incoming TCP connections, and then only to sshd, Sendmail, and services started by inetd. The disadvantage of a firewall is that its rules are much more complex than TCP Wrappers' simple hosts.allow and hosts.deny files.
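
To get a feel for the difference in complexity, here is a minimal illustrative sketch using the addresses from the example developed later on this page: restricting SSH to a single management workstation takes two short TCP Wrappers lines, while the roughly equivalent stateful firewall rule carries considerably more syntax.

# /etc/hosts.allow
sshd: 145.58.94.107
# /etc/hosts.deny
sshd: ALL

pass in quick on dc0 proto tcp from 145.58.94.107/32 to 145.58.94.75/32 port = 22 flags S keep state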

In 2.9 and earlier, if you are using the GENERIC kernel or a custom kernel with the IPFILTER and IPFILTER_LOG options left on, IP Filter is in your kernel and ready to go. You can turn it on from the command line with "# ipf -Fa -f /etc/ipf.rules -E". In 3.0, the kernel equivalents are:

pseudo-device   pf   1   # packet filter
pseudo-device   pflog   1   # pf log

The 3.0 command line equivalent is "# pfctl -Fa -R /etc/pf.conf -e". In both cases, the first time you don't need the "-Fa" (flush all) option, but making this a habit ensures that any rules you load from the specified file will be the only active rules. With Packet Filter, "-Fr" would flush only the rules and keep NAT rules, state and statistics. /etc/ipf.rules is the default location for IP Filter rules and /etc/pf.conf for Packet Filter rules. In 2.9 and earlier, if you change "ipfilter=NO" in /etc/rc.conf to "ipfilter=YES", or in 3.0 change "pf=NO" to "pf=YES", after rebooting you will have exactly the same active ruleset as the command line examples.
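
While developing a rule set, the edit and reload cycle amounts to the commands above; a minimal sketch of the two variants, using the default rule file locations:

# IP Filter (2.9 and earlier): flush all rules and load the file
ipf -Fa -f /etc/ipf.rules
# Packet Filter (3.0): flush only the rules, keeping NAT, state and statistics, then load the file
pfctl -Fr -R /etc/pf.conf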

Sample Rules

I've included an example firewall rule set suitable for use on a single NIC web server. In the examples, a backslash indicates a continued long line; the backslash will not work with either Packet Filter or IP Filter, but the sample rules, except as noted in the comments, will work with both. I'll cover the rules in the order they were developed, not their current physical order in the file. The first rules were

pass in quick on lo0 all
pass out quick on lo0 all

which were added to the automatically installed default rules, before the original pass in and out from any to any rules. The normal processing order for both Packet Filter and IP Filter is to evaluate all the rules and apply the last rule that matches the packet being checked. The "quick" keyword causes rule evaluation to stop as soon as the packet matches a rule with the quick keyword. lo0 is the loopback interface. Soon rules that block everything not explicitly allowed will be added, and unintentionally blocking loopback traffic can cause problems.

The purpose of using a firewall is to control network traffic between the local computer and any other networked computers. Conceptually it's simpler to ensure that all loopback traffic is allowed and focus our attention on the connection between the local computer and the network, i.e., the NIC, which happens to be the dc0 interface in this example.

Next,

pass in quick on dc0 proto tcp from 145.58.94.107/32 to 145.58.94.75/32 port = 22 flags S keep state

is added. In our example we are pretending to be on the 145.58.94.0 network, subnetted as a class C or slash 24 ("/24") network. The local machine that the IP Filter rules are being created for is a public web server at 145.58.94.75; the /32 notation indicates a single host. "/31" would be two machines, "/30" four, "/29" eight, etc. The management workstation from which we are working is 145.58.94.107. "port = 22" allows traffic to the sshd server running on the web server. If telnet were still being used, the port would be 23 instead.

The "flags S" in this rule matches a packet that is the begining of a new TCP session, i.e., only the SYN flag is set. The "keep state" causes all subsequent packets that are part of the same session in both directions to be matched as well. The firewall's understanding of IP headers allows it to determine which packets are part of the same session. With "keep state" there is no need to figure out the reverse rules, i.e., those that allow reply traffic. This rule will not match packets that are part of an already existing SSH (or telnet) session between these two machines. It only allows inbound connections from one machine and does not allow outbound SSH sessions.

In the example we are using, a host with a single NIC and not a real firewall with multiple network interfaces, Packet Filter and IP Filter handle state rules in essentially the same manner. The same is not true when two or more network interfaces are present. With IP Filter, a state rule applied to the firewall as a whole: once a stream of packets was allowed by a state rule on any interface, it would pass through the other interfaces, even if there were rules that would otherwise have blocked it. Packet Filter state rules are interface specific. A state rule on the inner interface of a dual homed firewall will allow return packets back through the inner interface, but the inner interface will never see the packets if the outer interface has a rule that blocks them.

One place this will surely cause problems switching from 2.9 to 3.0 is bridged firewalls. IP Filter did not allow "out" rules to be defined, so "in" rules had to be used on both or all interfaces to control the flow of traffic. A simple mechanical transformation will allow an IP Filter bridged firewall rule set to be adapted for Packet Filter: simply change all "in" rules on an inner interface to "out" rules on the outside interface. The net results will be the same, except that some packets might pass into (but not out of) the firewall that would formerly have been blocked on the inner interface before entering the firewall. You could add a rule on the inner interface to block these, but before doing so be sure the rule won't block return packets that are part of a stateful connection established on the outer interface.

The rule above was set up first so that we would have an open path for SSH. As long as this is available, we can work remotely and do all additional configuration from our normal workstation. This rule was set up remotely. Once set up, if it's right, we should be able to open a new SSH session anytime we need one from our management workstation, regardless of any other rules that may be added (unless we put a more general and contradictory rule with the "quick" keyword before this rule).

One disadvantage to the preceding rule is that each time a new rule set is loaded for testing, any open SSH connections will be broken and we will need to log into the web server again. This can be solved with a pair of rules:

pass in quick on dc0 proto tcp from 145.58.94.107/32 to 145.58.94.75/32 port = 22
pass in quick on dc0 proto tcp from 145.58.94.75/32 port = 22 to 145.58.94.107/32

These allow any packets between the management workstation and sshd on the web server, so SSH sessions may be kept active even when the active rule set is replaced, as long as these rules are kept in the new rule set, ahead of any other rule that blocks this traffic.

IP Filter Logs and Ipmon

This web page was first written when IP Filter was the current OpenBSD firewall, so I'll cover IP Filter logging first and then cover the same issues for Packet Filter. When the rule above was first added, the keyword "log" immediately followed "pass in" and preceded "quick". This allowed verification, by checking the ipf logs, that the new SSH (or telnet) session was actually using this rule and not the still present pass all to all rules.

When debugging a new IP Filter rule set, the ipf logs can be a very helpful debugging aid. By turning on logging for the new rules, regardless of whether they are pass or block rules, you can see exactly what the rule or rules are doing. Leaving logging off for all other rules helps isolate the results of the rules you are working on. When the rule set nears completion, the "log" keyword can easily be added or removed as appropriate to log only the traffic you wish to log.

The default ipmon setup makes reading the logs unnecessarily difficult. Adding about 35 unnecessary bytes to the beginning of each log entry causes most log entries to run to about two full lines on standard text displays, making it very hard to tell where one log entry ends and the next begins. Syslogd adds date and time, hostname, process name and process ID to every log entry, but ipmon already includes complete date and time data. Syslogd's extra information makes sense when it is being used to send logs to a remote logging host or to combine logs from multiple programs in the same log file. Syslogd is, however, set up to use a dedicated log file for ipmon, /var/log/ipflog.

Since syslogd's facilities are not being used to advantage, it makes better sense to have ipmon log directly to a file. Replacing the default flags, "ipmon_flags=-Ds", in /etc/rc.conf with ipmon_flags="-aD /var/log/ipflog" will eliminate the syslogd additions and make the logs much more readable. You can stop logging at any time with "# kill -TERM `cat /var/run/ipmon.pid`" and restart it with "# ipmon -aD /var/log/ipflog". You could use "-Do I" instead of "-aD" if you didn't want to see state entries in the logs but only normal IP Filter entries. The state entries are helpful in understanding what's going on when you're setting up rules with state. To truncate (clear) the log file while ipmon is running you can use "# echo > /var/log/ipflog". This makes it easy to see the effects of just changed rules without scrolling through old entries. This is not recommended once IP Filter is being used in a real environment.

If you use IP Filter you should know about "# ipfstat -hion". This shows all active rules, not the contents of the rules file. Each rule is preceded by a hit ("h") count so you can quickly see which rules are being matched and which are not. The "i" and "o" specify that both in and out rules be listed. The "n" causes the rule number to be listed. The listed rules are always grouped into out rules and in rules, with out rules first, and shown in the order the individual rules are applied. Output and input each have their own rule numbers, starting with 1 and preceded by an at sign ("@"). These rule numbers match the rule numbers listed in the logs. Most real rule sets will require piping the output through less.
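
For a rule set of any size, that means running something like:

# ipfstat -hion | less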

Packet Filter and pflogd

The most significant change between IP Filter and Packet Filter is logging. I don't think it's an exaggeration to say everything related to logging is different between the two products. IP Filter logged to a highly structured text format that focused mainly on what the firewall did with the packet and contained only a few key pieces of information describing key packet header settings, with no actual packet contents. Packet Filter instead logs to the standard tcpdump binary format, which includes the entire packet contents, or by default the first 96 bytes if the packet is longer; the length of packet data logged can be changed. The format also includes the time the firewall processed the packet and the same rule number, disposition (passed or blocked) and interface information that IP Filter includes in its logs.

To see the log content in a human readable form you need to use tcpdump with some format options (discussed below) to interpret the binary file. With the right format options most of the key information can be the same as with IP Filter, but the actual format is very different.

The logging daemon ipmon has been replaced by pflogd. Where ipmon logged by default through syslogd and only optionally to a file, pflogd always logs to a file. If you want standard log rotation, newsyslog will work with pflogd; in fact the rotation is already set up as part of the default install. With the standard setup, a log is replaced when it exceeds 250K and only 4 generations are kept. On many firewalls this would keep only 4 hours of logs, because with aggressive logging and the binary format, 250K can easily be exceeded in an hour, the default check interval for newsyslog.

In my opinion, the most valuable content on a firewall is the firewall logs. Controlling network traffic is only half the firewall's job. Understanding what the firewall has responded to, is equally important and only the logs will reveal that. My /var filesystems are 2 - 8GB to allow a significant amount of logs to be readily available for online analysis. Further, new logs are transferred daily to a remote machine where they are written to CD-Rs.

Newsyslog is a friend to any overworked system administrator, and OpenBSD's install defaults pretty much assure there will be no disk space problems if the defaults are not changed. Any administrator who reviews or uses scripts to analyze logs will likely want to change or increase the newsyslog.conf defaults depending on what they regard as important and on disk space availability. Newsyslog allows logs to be rotated based on size and/or interval since the previous rotation. Unfortunately it has no provision for rotating logs at specific times such as midnight, or midnight each Sunday, or the first day of the month. For logs that I care about, which include firewall and web server logs in addition to some custom logs I create with cron and scripts, I do not use newsyslog but write my own scripts that include the date as part of the log's filename. Depending on the log I use one of four date formats: YYMM, YYMMDD, MMDDhhmm, YYMMDDhhmmss.
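
A minimal sketch of the sort of script described above, run nightly from cron with the YYMMDD format (the file naming is only an example); because the pflog file is in tcpdump binary format, pflogd is stopped before the rename and restarted afterwards:

#!/bin/sh
# datestamp the current Packet Filter log and start a fresh one
cd /var/log
kill -TERM `cat /var/run/pflogd.pid`
mv pflog pflog.`date +%y%m%d`
pflogd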

With ipmon's simple text format, when you wanted to quickly clear a log while debugging new rules, you could simply truncate the existing log with "# echo > /var/log/ipflog" at any time. Doing the same to /var/log/pflog corrupts the tcpdump binary format written by pflogd, making the log unusable. I have not found any way to clear Packet Filter logs except by stopping pflogd, moving or removing the existing /var/log/pflog, and restarting pflogd. The following will clear the current log:

# kill -TERM `cat /var/run/pflogd.pid`; rm /var/log/pflog; pflogd -d 5

If you want to examine the old log's contents, you'd rename the log instead of deleting it. The above could easily be put into a simple shell script, adding a few lines such as "cp pflog1 pflog2" and "cp pflog pflog1" to preserve as many "generations" of test logs as desired. In the above example "-d 5" sets the "delay" to 5 seconds, i.e., it forces pflogd to write the log to disk every 5 seconds. For performance reasons this may not be desirable in a production environment, but if you are using logs as a debugging aid while developing new rules, you don't want to sit waiting 60 seconds (the default delay) at a time to see results. When pflogd terminates it will flush its buffer to disk, but if you want to look at the live log, the 5 second delay will facilitate this.
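
Put into the simple shell script suggested above, keeping two extra generations of test logs, it might look something like this (run as root; paths and generation count are only an example):

#!/bin/sh
# clear the current Packet Filter log, preserving two older test logs
cd /var/log
kill -TERM `cat /var/run/pflogd.pid`
cp pflog1 pflog2
cp pflog pflog1
rm pflog
pflogd -d 5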

To view either a saved Packet Filter log or the current one, you'll need to use tcpdump. The example from the pflogd man page is a good starting point:

tcpdump -n -e -ttt -r /var/log/pflog

or the similar:

tcpdump -netttvr /var/log/pflog

The second is the same as the first with the options run together and the "-v" (verbose) option added. "-netttvvr", with a second "v", will provide some additional information for some packets. Once you decide which format you prefer, I'd suggest putting it in a shell script named pfl (list) or pfv (view) in /usr/local/sbin or wherever you put your own system admin scripts. In any case, the output will also need to be piped to less.
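
Such a script is little more than a one liner; a sketch of the pfl (list) version mentioned above:

#!/bin/sh
# /usr/local/sbin/pfl -- show the Packet Filter log in readable form
tcpdump -netttvr /var/log/pflog | less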

The "-n" option prevents tcpdump from doing a DNS lookup on IP address in the logs. For firewall log analysis, besides adding an enormous processing time overhead and adding significant network traffic, DNS lookups serve no useful purpose. Except for idle curiosity, who is sending packets is irrelevant. The IP addresses for most real probes or attacks won't provide a DNS name in the first place, and when they do, they will normally be some cable or DSL connection or perhaps a dialup. In many cases these will be compromised systems. You don't need DNS until after you have already determined that you have significant activity that represents an intense or prolonged probe or actual attack. I'd start with whois or dig on the command line to determine the ISP or owner of the netblock from which the attack is coming. If you don't provide an alternate server these should go to your default DNS servers, i.e., those provided by your ISP. If for some reason these do not work you'll need to locate a DNS server that returns useful information, possibly a free web whois service.

The "-e" option is probably essential as this causes the matching rule set and number, the packet disposition (passed or blocked), the interface name and the direction, to be included in the output. The format is rather more verbose than IP Filter's counterparts.

The "-ttt" option causes the abbreviated month, day, and formated time to start each log line. The default with no "t" option includes the formated time but not the month or day. There is no year option. "-tt" prints the unformatted time, i.e. the number of seconds since midnight, Jan. 1, 1970. "-t" suppresses any time output in the logs.

A single "-v" adds additional information about the packets that may be useful. A second "v" adds more information and changes the format of the information generated by the first "v" for some packets. Many packets have identical output whether one or two v's are used.

Once you settle on a time format and include the "-e" option, the text output from tcpdump is roughly analogous to the IP Filter log format as far as the source and destination IP addresses and ports. After that everything changes. Not counting STATE log entries and certain ICMP error messages, where IP Filter had a limited amount of information about the packet in a very standardized format, the Packet Filter / tcpdump output is highly variable depending on just what the packet contained. Besides being highly variable, there tends to be a lot more information, especially with the "v" options.

If you have reason to have Packet Filter logs scroll on a terminal screen as "tail -f" would do for IP Filter logs, the following command will work:

tcpdump -netttvi pflog0

This is the same as before except that "-r /var/log/pflog" has been replaced by "-i pflog0", which causes tcpdump to listen on the pflog0 pseudo interface instead of reading a file. Whatever Packet Filter logs will appear on the screen.

More Rules

After the stateful SSH or telnet connection is working,

block in log all
block out log all

replaced the previous default pass everything rules at the end of the rules file. This will typically break the current telnet or SSH session when first implemented but a new session can be started if the previous rule for SSH or telnet is right. If logging is on, the log file will start to grow. I watched what appeared and set up rules to allow the traffic which I thought should be allowed.

There is a good bit of NTP (Network Time Protocol, for synchronizing computer clocks) traffic on my network, so NTP related entries were among the first in the logs. NTP is a UDP protocol that uses port 123 at both ends. The following rules were added to allow NTP traffic:

pass in quick on dc0 proto udp from 145.58.94.0/24 port = 123 to 145.58.94.0/24 port = 123
pass out quick on dc0 proto udp from 145.58.94.0/24 port = 123 to 145.58.94.0/24 port = 123

Two rules cover everything both ways and no state information is needed. It's sometimes considered poor practice to use a source port as part of a filtering rule, because whoever controls the source machine can typically change the port at will, and you cannot rely on the expected protocols being on the standard ports. When you control both ends of a connection, there seems to be no reason not to specify the standard port in the filter rules. If you happen to be running a version of ntpd that has buffer overflow issues (spring 2001) and an attacker gains control of one of your machines, no rule that allows the local machine to perform its normal NTP communication with the now compromised machine can prevent the attacker from taking advantage of the ntpd weakness.

Both Packet Filter and IP Filter can track "stateful" UDP connections even though UDP is a stateless protocol. I chose not to use stateful inspection in this case because my web servers typically also act as NTP servers, and thus are clients to some computers and servers to others, so stateful connections would need to be set up in both directions. Since it's my intent to allow all NTP packets in both directions, there is no reason to examine state.

There is some ICMP traffic in connection with NTP. Also, on a LAN it's handy to be able to ping other machines and so all local ICMP traffic is allowed with the following two rules:

pass in quick on dc0 proto icmp from 145.58.94.0/24 to 145.58.94.0/24
pass out quick on dc0 proto icmp from 145.58.94.0/24 to 145.58.94.0/24

To allow the web server to ping anywhere and receive responses replace the second rule with:

pass out quick on dc0 proto icmp from 145.58.94.0/24 to any keep state

The "keep state" rule will create temporary rules that allow any response to a ICMP packet sent out the dc0 interface to come back in. Outside attempts to probe the web server will be denied because the don't have the state to match the second rule nor the correct address to match the first and will be blocked when they don't match any other explicitly allowed traffic.

In these ICMP rules, the second IP address and netmask in the first rule, and the first IP address and netmask in the second rule, could have been specific to the current host, 145.58.94.75/32. Below there are rules that allow specific services to make connections between individual hosts on the local network, and these use host specific addresses. As both ICMP and NTP traffic are considered routine on our LAN, these rules are written more generally so that this rule set can be used on a different host without the need to reevaluate every single rule and IP address.
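
For reference, the host specific versions described above would look like this:

pass in quick on dc0 proto icmp from 145.58.94.0/24 to 145.58.94.75/32
pass out quick on dc0 proto icmp from 145.58.94.75/32 to 145.58.94.0/24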

There are periodic outbound transfers of backup and log data to an FTP server at 145.58.94.23. To allow these, the following rules are created:

pass out quick on dc0 proto tcp from 145.58.94.75/32 port > 1024 to 145.58.94.23/32 port = 21 flags S keep state
pass in quick on dc0 proto tcp from 145.58.94.23 port = 20 to 145.58.94.75/32 port > 1024 flags S keep state

These are typical TCP high port to low port connections, with the standard or active FTP complication added to the mix. The client, in this case the local web server, makes a control connection to the FTP server on port 21. For data transfers, the FTP server then opens a connection from its port 20 to a high numbered port on the client that has been communicated over the control connection. Here "keep state" saves on the number of rules by eliminating the need to specify the reverse connections.

After setting up these rules the FTP transfers began working, but I found blocked high port to high port connections from the client to the FTP server. These consistently came between the opening of the control connection and the data connection. Apparently this combination of client and server was defaulting to passive mode FTP and, when that failed, reverting to active mode. After adding a rule (below) to allow connections from high ports on the web server client to high ports on the FTP server, the active connections from the FTP server's port 20 to high client ports stopped:

pass out quick on dc0 proto tcp from 145.58.94.75/32 port > 1024 to 145.58.94.23/32 port > 1024 flags S keep state

Allowing data to be transferred to the web server from the management / development workstation via FTP required another set of rules in which the web server was also the ftp server, reversing the role performed by the web server in the previous set of rules:

pass in quick on dc0 proto tcp from 145.58.94.107/32 port > 1024 to 145.58.94.75/32 port = 21 flags S keep state
pass out quick on dc0 proto tcp from 145.58.94.75 port = 20 to 145.58.94.107/32 port > 1024 flags S keep state

Note that all the FTP related rules are to and from specific machines ("/32"), which are the only ones on the LAN that perform these functions. After all the rules to allow the various administrative traffic were in place, a rule to enable the computer to serve as a public web server was set up:

pass in quick on dc0 proto tcp from any to 145.58.94.75/32 port = 80 flags S keep state

The preceding rule will allow new web connections from any computer, using any port, to the local computer's port 80, the standard HTTP port, and all return traffic.

Finally, for IP Filter only, some standard rules from another firewall are added to block all source routed packets:

block in log quick on dc0 all with opt lsrr
block in log quick on dc0 all with opt ssrr

The previous rules will cause syntax errors with Packet Filter. These options are unnecessary, as Packet Filter blocks all IP packets with any IP options set, including all source routed packets. A new "allow-opts" Packet Filter keyword will allow packets with options set; this is less safe than the default. Also, the keywords "head" and "group", which were allowed by IP Filter, are gone and will cause errors with Packet Filter.
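
If you ever did need to accept packets with IP options on an interface, a Packet Filter rule using this keyword might look like the following sketch; it is shown only to illustrate the keyword and is less safe than the default behavior:

pass in quick on dc0 all allow-opts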

Port Scans With Nmap

Because the test machine with the sample IP Filter ruleset runs an active, publicly accessible web server, there is no way to hide the existence of this computer. The defined rules do, however, hide nearly all other information regarding the test web server. Nmap was used from another computer on the same network segment to scan the test web server. (This testing has not been repeated with OpenBSD 3.0 and Packet Filter.) No other firewall or protective device intervened in any way. With the IP Filter rules turned off, nmap accurately and almost immediately identifies all open ports, including NTP if a UDP scan is performed. Besides the public web server, ftpd, telnetd and sshd servers were available. Once the rules are turned on, the only port that nmap can reliably find is the public web server on port 80.

A standard nmap ping scan (-sP) concludes "Host seems down" and suggests trying "-P0". "-sP -P0" correctly concludes the host is up. Standard "-sT" and "-sS" scans report all TCP ports except 80 as blocked; nmap cannot tell its user whether any other TCP services are actually running or not.

A "-sU" UDP scan misses that 123 is open because nmap normally uses high ports to scan from. As the rules only allow traffic from 123 as well as to 123 the nmap probe is blocked. Without the source port restriction nmap would likely find the open port. Surprisingly, even using the source port option "-g 123", nmap still fails to find the open NTP port. Attempts to limit the range of ports searched, "-sU -p 123-127" repeatedly result in false positive open port reports for all scanned ports. This occurs whether or not a source port is specified or not and whether a useful (123) or arbitrary high port is specified. In the meantime, the computer from which nmap is running has an active NTP connection with the test web server that's being probed. The nmap machine was actually a time server for the test web server but no combination of command line settings that I could find, allowed nmap to find this ntpd server once the IP Filter rules were active.

In short, nmap, generally acknowledged as the most sophisticated network scanner available, is not able to learn more about the test web server, when it's protected by strong IP Filter rules, than any ordinary user with ping and a web browser could. It's able to learn exactly what the IP Filter rules on the test web server allow the machine from which nmap is run to know. Its attempt at OS fingerprinting completely fails, listing 9 possibilities, none of which are even in the right OS family. With the IP Filter rules disabled, nmap determines the right OS but is off by three version numbers.

Packet Filter or IP Filter, running on a single host, can provide the same kind of strong firewall protection to that host that one would normally expect from a dedicated firewall running Packet Filter or IP Filter.

Some Assumptions

If this were a standard firewall, additional rules to block all the non-routable addresses such as 10., 192.168., 127., etc. would be added to prevent these invalid packets from coming in or escaping out to the Internet. Most are already blocked by the existing rules because there are no rules allowing them, except the "any" to the web server at port 80. Adding a dozen or so rules that should already be in place on another firewall, just to protect one port, seems excessive.
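
If you did want them anyway, the inbound half of such rules might look something like the following sketch; the list of non-routable ranges is illustrative, not complete:

block in quick on dc0 from 10.0.0.0/8 to any
block in quick on dc0 from 172.16.0.0/12 to any
block in quick on dc0 from 192.168.0.0/16 to any
block in quick on dc0 from 127.0.0.0/8 to any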

The preceding paragraph rests on an assumption that applies throughout the Hardening OpenBSD Internet Servers section: that the steps discussed are being applied to a machine that will become a dedicated firewall or will be behind a dedicated firewall.

A Single Home Machine

This does not need to be the case. There are at least two scenarios where using the techniques discussed here without a dedicated firewall makes sense. One is a single home machine, perhaps a dual or multi-boot system with other operating systems. I expect that the large majority of persons with an OpenBSD system at home have multiple machines, but there must be some with only one. There are now several million active cable and DSL connections, the large majority of them connected to Windows machines. The most secure of these are using Zone Alarm or another "personal" firewall. There is no reason that a single OpenBSD machine using Packet Filter or IP Filter should not be more secure.

If you have two or more machines, I'd strongly recommend putting a 486 or low end Pentium to use as a dedicated firewall with NAT. Even a 486 should be quite adequate for a cable or DSL connection. It's much easier to develop a tight rule set on a dedicated firewall, where there is a clear cut distinction between inside and outside, than to set up comparably tight rules on a machine where you want secure connections with the outside world and fairly open connections between local machines. Using dynamic NAT, where multiple internal IP addresses are dynamically translated into a single valid external IP address, on a dedicated firewall allows flexibility not possible otherwise. You can put pretty much whatever you want behind NAT and those on the outside are not likely to ever know.

Regarding services, unless this is purely a learning experiment and the machine has no valuable or sensitive data, I'd stay away from SMTP (accepting outside connections) and DNS, given Sendmail and BIND's histories, and get these services from your ISP. I'd run web and FTP servers only if they were part of the reason for having an Internet connection. Running your own web server would let you do and learn things not feasible on any hosted site. The more adventurous your web experiments become, the more likely they are to expose your machine to risks, at least if the web site is public or semi-public. In the firewall rules, I'd block everything inbound unless you had a public web or FTP server.

Outbound, I'd make everything stateful and seriously think about blocking everything except specific services you actually know you use. If your IP address is 213.187.49.65 and your network interface is xx0, the fast way to let anything out while preserving some degree of safety is:

pass out quick on xx0 proto tcp from 213.187.49.65/32 to any keep state
pass out quick on xx0 proto udp from 213.187.49.65/32 to any keep state
pass out quick on xx0 proto icmp from 213.187.49.65/32 to any keep state
block in quick on xx0 all

I'd be extremely wary of active FTP and would not use the following rule, though it's about as good as you can do to enable active FTP to any potential site:
pass in quick on xx0 proto tcp from any port = 20 to 213.187.49.65/32 port > 1024 flags S keep state

This is an example of the kind of poor rule that depends on trusting a source port. Any attacker who controls his own machine can set the source port for almost anything to 20. It leaves the X Window System and other software that uses higher ports exposed. Instead, I'd wait until I encountered a site that had something I really wanted and did not support passive FTP, and then create a custom rule for that site. I'd also remove the rule later unless I expected to reuse the site.
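
Such a per site rule is just the rule above with "any" narrowed to the one FTP server you need to reach; the remote address here is purely hypothetical:

pass in quick on xx0 proto tcp from 198.51.100.7/32 port = 20 to 213.187.49.65/32 port > 1024 flags S keep state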

A Small Organization Web Server

The other time that an Internet connected computer not behind a dedicated firewall might make sense can occur at a small organization. Such an organization may already have a permanent Internet connection behind dedicated hardware; examples would include a Novell LAN behind an Instant Internet, or any LAN behind other Internet appliances. After evaluating the alternatives, it's decided that available hosting options are not acceptable and that the organization needs to host its own web site locally. Placing a public web server in the same network space as the LAN would be a very poor choice, since the LAN would be unprotected if the web server were compromised. If the security device supported three or more network segments, placing the new web server on a separate segment would make the best sense.

Assuming the security device lacks the ability to protect both the LAN and the new web server while keeping them separate, it makes better sense to put the web server outside the security device. If there are two or more computers outside the security device, a dedicated firewall will almost surely be appropriate. If there is only one computer outside, running Packet Filter or IP Filter on the web server might make better sense than setting up a separate machine as a dedicated firewall.

If there is only one outside computer and no dedicated firewall, then taking all the steps discussed in these pages makes good sense. The computer will be exposed to a full time Internet connection and will advertise its presence, with only its own resources to protect it. I would definitely take advantage of extensive file removal, immutable files (including on the main system executable directories), and security level 2. As the system might have content developers who cannot be counted on for strong passwords, very close attention should be paid to file and directory permissions and users' group memberships. Assigned passwords might be appropriate.

Clearly TCP Wrappers is not sufficient, and Packet Filter or IP Filter with a tight custom ruleset is a must. The rule set would look a lot like the example above. Additional rules to cover all the non-routable addresses not actually used locally should be included. If the inner LAN is NATed, the web server would only see traffic from the outer interface of the router/firewall/NAT machine and not any inside LAN addresses.

With the right firewall rule set, an attack through the web application itself should be the only avenue available to the whole Internet. With a public web server, this path cannot be closed. Thus, close attention should also be paid to the security features of the web server, presumably Apache. Any capabilities not actually used by the web applications should be removed. A custom compilation including all necessary, and only necessary, modules would be a good idea. Even if unwanted modules can be excluded by editing httpd.conf, it's more secure if unwanted modules are not available for loading at all.

An IP Filter Bug

Returning to the more general issues related to Packet Filter and IP Filter, it should be clear that these firewalls are much more powerful and flexible as network access control tools than TCP Wrappers. Should every OpenBSD computer connected to the Internet be running Packet Filter or IP Filter, even if it is behind a strong firewall with a custom rule set? Considering that one of the primary arguments in favor of using firewalls is the ability to centralize a major part of the security policy, and considering the complexity of Packet Filter and IP Filter rules compared to TCP Wrappers, for most sites I expect the answer is no.

In some circumstances, two layers of IP Filter, or of any other single firewall, can actually provide less security than TCP Wrappers behind IP Filter. Consider the early April 2001 IP Filter fragment caching vulnerability. This was described as a "serious vulnerability." A careful reading of the details of the problem suggests that it would be a major technical challenge to actually use this in a real life situation. In essence, if a stateful session could be established with a computer, for example a public web server, IP packets could be constructed that would allow the computer with the connection to the web server to send those packets to any other port on the web server, or even to switch UDP with TCP traffic. With the right packets, firewall rules prohibiting connections to other ports or with other protocols would be ignored.

Constructing the necessary packets was not a trivial technical task. To actually do anything with the vulnerability, however, also requires that the public web server be running other services that have some vulnerability, and that an exploit for the second vulnerability exists and can be wrapped inside the packets used to bypass the firewall rules. I've seen nothing to suggest there were any real security breaches as a result of the caching vulnerability.

My understanding of the vulnerability is that if a public web server can be reached through an IP Filter firewall, successfully constructing the right packets to take advantage of the vulnerability would allow these packets to pass through a second or third IP Filter firewall as easily as the first. This bug, or any other bug in IP Filter that allows circumventing the firewall rules, could very well negate any value of stacked IP Filters. Since the vulnerability allows reaching services that are by definition not public (they were believed to be protected by IP Filter rules), if a second IP Filter also denied access to the service, TCP Wrappers placed at the same location would also be expected to deny access to the service. The second IP Filter is likely to have the same bugs as the first, but TCP Wrappers, being a totally different product, almost certainly would not. Unless the attacking computer happened to be in TCP Wrappers' hosts.allow file, the attack would not be expected to get through.

How Much Is Too Much

In the case of bugs in Packet Filter or IP Filter, TCP Wrappers will almost certainly provide better protection than a second layer of the outer firewall. Of course there is nothing technical to stop TCP Wrappers from running on the same computer as the second copy of Packet Filter or IP Filter. Still more protection, but at what cost? Each of these protections makes a network more difficult and expensive to change. Making change more difficult is the entire point of many of these measures, so when an event such as a hardware failure requires rapid change, that change is hindered. Suitable replacement hardware might be available, but the more customized each machine's rule sets and configuration, the more work will be required to make a substitution.

Everything has a point of diminishing returns. Defense in depth means taking multiple approaches to protecting resources, not duplicating effort. No one puts two different makes of firewalls serially in line with each other. Superficially this is more secure, but it doubles the work for at best a marginal security improvement. In practice it's likely to hurt security. Staff can't learn either firewall as well as they would if there were only one. Unless at least one is a bridged firewall, back to back firewalls create routing issues. It creates two back to back single points of failure. If both were routing firewalls, removing one would require changes to the other before traffic could flow. If either firewall is down, the network is disconnected. Instead of being careful with one rule set, staff may rush changes with two because there is twice the work, or they may confuse syntax between the different rule sets.

No one should confuse these hypothetical back to back firewalls with the real and common situation where an outer firewall protects a DMZ with public Internet servers and a second firewall, likely of the same make as the first but with different rules and probably supporting NAT, protects the LAN and DMZ from each other.

A proliferation of IP address or port restrictions around a network looks more like stacking firewalls than like a carefully planned mixture of backups, user and password management, disabled services, firewalls, file access controls, etc., which complement but don't duplicate each other. For myself, taking a few minutes to change a few bytes in inetd.conf and add half a dozen short lines to hosts.allow and hosts.deny is worth an extra layer of protection for critical services on a few hosts in a DMZ. Widespread use of Packet Filter or IP Filter on individual hosts already protected by a strong custom rule set on a dedicated Packet Filter or IP Filter firewall is overkill for most situations. Still, it is worth knowing how to set up a firewall on a computer with a single network interface for those situations where it is appropriate.


Copyright © 2000 - 2014 by George Shaffer. This material may be distributed only subject to the terms and conditions set forth in http://GeodSoft.com/terms.htm (or http://GeodSoft.com/cgi-bin/terms.pl). These terms are subject to change. Distribution is subject to the current terms, or at the choice of the distributor, those in an earlier, digitally signed electronic copy of http://GeodSoft.com/terms.htm (or cgi-bin/terms.pl) from the time of the distribution. Distribution of substantively modified versions of GeodSoft content is prohibited without the explicit written permission of George Shaffer. Distribution of the work or derivatives of the work, in whole or in part, for commercial purposes is prohibited unless prior written permission is obtained from George Shaffer. Distribution in accordance with these terms, for unrestricted and uncompensated public access, non profit, or internal company use is allowed.

 

