The Little Network That Could


Gather ’round, folks – Grandpa has a story to tell. (OK, I’m not a grandpa and I don’t expect to be one any time soon, but as I’ve journeyed back in my memory to write this post, I sure feel old…)

For as long as it has been possible to have a full-time Internet connection in a residential home, I’ve been running my own home network, and I want to share its story: it’s The Little Network That Could, or TLNTC, as I sometimes call it.

(This will be the first in a series of posts covering design and implementation of a range of technologies in the Linux and networking space. I expect it to take me a while to finish. If you like this part of the story and want to read another network’s story while you’re waiting, check out Tom Eastep’s Shorewall documentation. Tom has done a great job over many years in providing detailed documentation of his configuration and design choices, and teaching a lot of people about networking in the process.)

Origins

Back in the bad old days (the late 1980s and early 1990s, in my case), those of us who were technical enough would connect to the Internet (which was only used for research and development purposes at the time) by a dial-up modem for long enough to download our emails and news spools into our offline text-based readers. This is because access was billed by the hour (or hours per month) at prices which meant that almost no one could have a full-time Internet connection – most connections were maintained by universities and commercial research labs. So my knowledge about networking was gained at work where I administered large commercial Unix systems, and mostly from the Unix host side (the network infrastructure was handled by another team).

In the early-to-mid 1990s, the Internet was opened for use by commercial organisations. This sparked a burst of growth which resulted in commercial ISPs providing more affordable Internet access (although it was still really only usable for people with reasonable technical skills), which eventually got to the point where it was affordable to have a full-time Internet connection over a 56K modem. If my memory serves me correctly, this was around 1998, and I think my first full-time connection was via APANA.

I had been using Linux since around kernel version 0.97 (on the long-defunct SLS), and my gateway machine at the time when I first connected full-time was probably running Red Hat Linux 5.x (not to be confused with Red Hat Enterprise Linux 5.x, which was about 9 years later). I’m a little hazy on the details of this, although I do distinctly remember in 2000 deciding to upgrade from Pinstripe (6.9.5) to Guinness (7.0) without downloading the whole distribution first – it took 3 days of continuous downloading over my 56K modem, and failed in the middle twice, but eventually worked!

Around 2001-2002, Linux became well-enough accepted in the enterprise that I was able to shift my focus at work from what I considered to be increasingly unwieldy commercial Unix (and its overpriced proprietary hardware) to the much more frequently-updated and flexible Linux distributions – mostly Red Hat Enterprise Linux and SUSE Linux Enterprise Server in my work life, and Debian at home (Ubuntu was still a gleam in Mark Shuttleworth’s eye). Around the same time, Internet connectivity options were improving, and at home we switched from 56K dialup to cable modem and later to ADSL (which was actually a bit slower, but gave us a wider selection of ISPs).

Once our family had downstream connectivity in the order of megabits per second and upstream in the order of hundreds of kilobits per second, the benefits of local network infrastructure really showed themselves: we could have a local HTTP proxy (squid) and DNS cache/forwarder (BIND), which significantly improved web browsing performance (especially when multiple users were browsing), whilst still having enough bandwidth to receive and send email to and from our local server behind the scenes.

Raison d’être

A change in my home network’s role came in late 2006, when I went into business for myself – what had been a fairly organic start became a rather more focused strategy, and the first part of my thinking around TLNTC was born. I was consulting on Linux, networking, and software development, and I dedicated around 15-20% of my time to research and professional development in order to keep my skills sharp. I needed more from my network than just a gateway for faster Internet and email access for my family – it had to be a proving ground for the technologies I was recommending, installing, troubleshooting, and maintaining for my clients. What was needed on site had to be demonstrated at home first, and workable over the long term.

Even though I’m not doing independent consulting at the moment, TLNTC still fills this role in my technical professional development, and this is why I’ve continued pondering its purpose and characteristics.

More than a home lab

Home labs are discussed regularly on various blogs, forums, and dedicated sites, and on the podcasts to which I listen, especially those on the Packet Pushers Network:

A lot of engineers who work with Cisco, Juniper, Microsoft, or VMware products at work tend to have a bunch of gear at home, which they fire up when needed to study for a certification or build a proof-of-concept. Such home labs are often composed of retired enterprise equipment pulled out of a data centre rack or acquired on eBay from second-hand vendors, although, depending on budget and type of equipment, more dedicated labbers might buy new gear. They often live in a full 42RU 19-inch rack in the garage, and are so noisy as to make other members of the household complain about jet engines and such when they are fired up. Because the gear is only powered on when it’s needed, its configurations can be short-lived and don’t necessarily need to be practical to maintain in the medium-to-long term.

I do use TLNTC as a test lab and I have some equipment that only gets turned on when I need it, but my focus is on applying learning to create a usable, reliable infrastructure rather than learning for its own sake. In short, it is designed to be an enterprise network in miniature. To that end, I’ve implemented a number of components which I generally encounter in enterprises, but would never recommend to home users unless they have similar goals:

  • 802.1X authentication for wireless networks
  • multiple layers of packet filtering firewalls, with OSPF and BGP for routing
  • OpenVPN and IPsec VPNs
  • IDS using full packet capture from a monitor port
  • GPS-synced stratum 1 NTP server
  • IPv6 through most of the network (more on this later in the series)
  • URL filtering using a dedicated proxy server
  • Network Monitoring System (LibreNMS) integrated with Opsgenie for alerting

Despite these similarities with enterprise networks, there are also differences:

  • I strive as much as possible to only use well-maintained Free Software. I do run some proprietary software, including Junos on my main EX2200-C switch and Ruckus Unleashed for wireless, but these are the exception rather than the rule. When I first started consulting, this was sometimes a limitation, but it’s becoming less and less so. Nowadays I can usually find an Open Source technology for almost any enterprise software niche if I look hard enough.
  • Performance and availability are generally lower priority for me than cost and noise. That’s not to say I don’t care about them at all, but there’s a balance to be struck. All my servers run RAID and some of those RAID sets are on SSD for speed, but generally I aim for small, quiet, and cheap. If I need more storage space, I’ll generally go for a spinning rust drive, due to the lower cost per TB. If a piece of server or network hardware dies, my family waits until I can get it repaired or replaced. If they urgently need Internet access, they use the hotspot on their phone. If they need email, they fall back to their gmail account temporarily.
  • Core routing and firewalling happens in software rather than hardware. This is partially because VMs and containers are easy to modify and adapt, but also because firewall and router vendors have so consistently failed to produce platforms which are easily and frequently updated. I may take this point up in a later post in the series, but for now I’ll just say that I have found image-based software distribution such as that used by Cisco and Juniper much harder to manage and update than standard Linux distributions based on dpkg/apt or rpm/yum. Because of this, I don’t use dedicated firewall appliances, but build them from standard Linux distributions.

But it’s great for learning, too

I think there is also a learning benefit to taking the “mini-enterprise” approach to the home network: not only does the learning serve the infrastructure, but the process of implementing the infrastructure cements the learning. This means when I put technology on my resume, I do so knowing that I can confidently answer questions about it from experience rather than rote learning.

How mini is my mini-enterprise network?

To give an idea of scale, here’s a quick overview of what comprises TLNTC:

  • 23 VMs or containers running across 3 dual-core VM/container hosts; 36 GB RAM total
  • 3 small switches (all routing-capable, but running as L2 switches), total of 27 ports
  • About 10 VLANs, each of which (for the most part) maps to an IPv4 and an IPv6 subnet and thence to a firewall zone

So clearly this is not a large network, but it’s considerably more complex than the average home network. Based on my experience in consulting, it’s probably similar in size and complexity to the network of a small business with 25-100 employees, depending on how technical their work is.
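
To make that VLAN-to-subnet-to-zone mapping a little more concrete, here is a minimal Python sketch of the kind of table I have in mind. The VLAN IDs, prefixes, and zone names below are invented purely for illustration; the real mapping is of course maintained in the network’s actual configuration rather than in a script.

    import ipaddress

    # Invented example of the mapping described above: each VLAN carries an
    # IPv4 subnet, an IPv6 subnet, and a firewall zone name.
    VLANS = {
        10: ("192.0.2.0/24",    "2001:db8:0:10::/64", "servers"),
        20: ("198.51.100.0/24", "2001:db8:0:20::/64", "clients"),
        30: ("203.0.113.0/24",  "2001:db8:0:30::/64", "guest-wifi"),
    }

    def zone_for(address):
        """Return the firewall zone whose subnet contains the given address."""
        addr = ipaddress.ip_address(address)
        for v4, v6, zone in VLANS.values():
            for prefix in (v4, v6):
                net = ipaddress.ip_network(prefix)
                if addr.version == net.version and addr in net:
                    return zone
        return "unknown"

    print(zone_for("198.51.100.42"))     # clients
    print(zone_for("2001:db8:0:10::5"))  # servers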

Why not cloud?

A big question I’ve recently asked myself (and been asked many times, particularly when I tell people I run my own mail server) is: why aren’t you just putting this all in the cloud? Given that my day job involves working with AWS & Azure public clouds and container technologies like Docker and Kubernetes, I did seriously consider doing this, but decided against it on two grounds:

  1. I would still need most of the same on-premises hardware, and
  2. cost.

I used the AWS and Azure pricing tools to work out how much my infrastructure would cost to run in their respective clouds. Azure’s pricing tool told me that my virtualised workloads would cost $60K to run in their cloud over 5 years, and $11K on-prem. AWS’ tool told me that I would save $273K over 5 years by moving my workload to their cloud. In reality, I’ve spent less than $7K on hardware in the past 10 years, and if I’m generous, maybe $5K on power over the same period.
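
For what it’s worth, the arithmetic behind that conclusion is simple enough to show. Here is a back-of-the-envelope Python sketch using only the round figures quoted above, with my actual 10-year spend scaled down to 5 years for comparison:

    # Back-of-the-envelope comparison using the round numbers quoted above.
    azure_quote_5yr = 60_000            # Azure calculator: my workloads in Azure, 5 years
    azure_onprem_estimate_5yr = 11_000  # the same calculator's estimate of my on-prem cost

    hardware_10yr = 7_000               # actual hardware spend over ~10 years
    power_10yr = 5_000                  # generous estimate of power over the same period
    actual_onprem_5yr = (hardware_10yr + power_10yr) / 2   # scale 10 years down to 5

    print(f"Azure quote, 5 years:    ${azure_quote_5yr:,}")
    print(f"Actual on-prem, 5 years: ${actual_onprem_5yr:,.0f}")
    print(f"Cloud / on-prem ratio:   {azure_quote_5yr / actual_onprem_5yr:.0f}x")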

Obviously this is not an apples-to-apples comparison since public clouds offer many features and services which my network doesn’t, but clearly if I don’t need all those services and I continue to prioritise cost over availability and performance, cloud is not the right answer. VMs and containers work pretty much the same on-prem as they do in cloud, so I’m not backing myself into a corner if one day I end up putting some of my home network’s workloads in public cloud. (This web site would likely be one of the prime candidates.)

[Edit: I couldn’t resist throwing this in – I just listened to IPv6 Buzz episode 055, where (around 49:30) Geoff Huston was heard to utter: “Folks who are running computers in their basement … are the dinosaurs of today’s age, and the enterprise networks that go with it are equally … dinosaur-reptilian-based bits of infrastructure.” I may circle around and respond to Geoff’s views in a future post, but in the meantime I only hope I can be a thoughtful, well-informed dinosaur. Triceratops is my favourite – can I be one of those?]

So that’s the beginning of the story of TLNTC – I hope it was informative. The next part of the story will be about TLNTC’s adventures in IPv6-land.

I’m happy to receive questions and comments in the section below, or via the usual social media channels (over on the right).

NAT is evil, but not bad

2011-09-20: Edited to add section about IPv6 options; minor cleanup; references added.

This is kind of a follow-on from my post about the subnet addressing design differences between IPv4 and IPv6. Recently, Tom Hollingsworth started a little Twitter conversation about NAT where I mentioned that I liked NAT for the purpose of decoupling my internal and external address spaces; 140-character limits got in my way there, and I realised I needed to clarify my logic more, so this is my attempt to do that.  I’m very interested in feedback – have I missed something important?

A bit of context

I’ve never worked for a service provider and I don’t work in large data centres at the moment.  So I don’t have in mind huge, publicly-addressed networks.  I have in mind “corporate” or “enterprise” networks, which might include campus networks on one site with a few thousand ports, or organisations spread across 40 or 50 sites. In such organisations, the “data centre” might comprise something like 4 or 5 racks, usually on one or two sites, with maybe 100-200 gigabit ports or so.

Exposing only what is necessary 

If I have a network of, say, 2000 devices, including desktops, servers, printers, tablets, mobiles, etc., there is a variety of different access requirements.  The servers which largely serve clients on the LAN or internal WAN have limited web access requirements.  Some clients might talk to local servers for most of their applications.  For other clients (especially mobile devices), accessing the web (and perhaps email) is the only thing they need to do.  Another whole range of devices (printers, security cameras, etc.) have no need for inbound or outbound Internet traffic at all – if they need updates or configuration changes, that usually happens through a local management server.

For performance, bandwidth control, security, and auditing purposes, web browsing on most of these devices is forced through a local proxy server. Doing this eliminates most reasons for client devices to directly contact any system in the outside world. This significantly changes the security posture of the devices in question (cf. Greg Ferro’s comments in Packet Pushers #47 about inline load balancers allowing the web servers they balance to have no default route).  Of course, that’s not perfect security, and we still have to be careful that we’re doing the right checks in the proxy server, but it cuts out a whole range of possible attack vectors, with the result that only a tiny portion of a corporate network actually needs to be addressable globally.  This is not in itself justification for NAT, but rather justification for exposure of only a small external address range.

Internal addressing plans

I haven’t yet seen a corporate IP addressing plan that didn’t use the organisational unit, or the geographical location, or both.  In many cases, they are the only real-world entities represented by the 2nd or 3rd IPv4 octet, even if there are not 256 organisational units or locations.  This is a little inefficient, and I’m sure that if everyone thought in binary, we could pack things in more tightly and save 3 or 4 bits in many cases, but for the most part it’s a good practice because it saves support costs by allowing everyone to use 8-bit boundaries.  (I suspect when we go to IPv6 people will work on 16-bit boundaries, and burn even more bits on internal subnet addressing.)
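
As a rough illustration of the kind of octet-based plan I mean (the site and organisational unit numbers below are invented), it boils down to something as simple as this Python sketch:

    import ipaddress

    # Hypothetical octet-based plan: second octet = site, third octet =
    # organisational unit, both kept on 8-bit boundaries.  The numbers used
    # here are made up for illustration.
    def site_subnet(site, org_unit):
        """Return the /24 for a given site and organisational unit."""
        if not (0 <= site <= 255 and 0 <= org_unit <= 255):
            raise ValueError("site and organisational unit must each fit in one octet")
        return ipaddress.ip_network(f"10.{site}.{org_unit}.0/24")

    print(site_subnet(20, 5))   # e.g. site 20, finance -> 10.20.5.0/24
    print(site_subnet(20, 6))   # e.g. site 20, library -> 10.20.6.0/24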

The relevance of this to the NAT question is that most corporate networks would prefer that the internal structure of the network is not disclosed when client PCs contact outside addresses during day-to-day tasks, and NAT achieves this rather nicely. Of course, any determined attacker can learn lots about clients by passively watching their traffic, but funneling client traffic through a NAT gateway is one component of the solution.

NAT not a security mechanism?

It’s almost a truism in the networking industry that “NAT is not a security mechanism”.  This is at least somewhat true: a great deal can still be discovered about a host behind a NAT gateway using passive packet sniffing, and if a vulnerable service is exposed through a port forward, then all bets are off.  But in one sense, saying that NAT is not a security mechanism is a misrepresentation, because NAT provides a significant level of protection against active attacks.

For example, if a Windows PC’s file sharing service is open on the internal network but it’s behind a NAT gateway, it cannot be compromised by external hosts through a buffer overrun vulnerability in its SMB protocol handler.  Similarly, if a server has an ssh daemon which allows password-based access, it cannot be compromised by the (very common) ssh password brute-forcing worms that infest the Internet if it’s behind a NAT gateway which does not port-forward to that ssh daemon.  So whilst NAT is not a tool designed to provide security, the address space conservation that it’s designed for also provides some security against common types of attack as a useful by-product.

Most of the discussion about hating on NAT in Packet Pushers episode #61 (starting at about the 40 minute mark) was set in the context of a web hosting or large data centre environment (to which the issue of public vs. private address space does not apply), and assumed that those who deploy NAT do so along with thoughtless port forwarding and without suitable DMZ design. [1]  But NAT and poor network security design need not go hand-in-hand.

NAT fails closed, not open

One aspect of NAT makes it desirable from a security perspective, and this is why the majority of SOHO routers in the world are deployed with NAT enabled by default: NAT is closed to outside access by default.  That is, unless you take active steps to open up outside access to ports and/or hosts behind a NAT gateway, their normal TCP and UDP ports cannot be accessed.  I don’t dispute the possibility of attacks which could exploit weaknesses in the packet forwarding algorithms used by NAT gateways in order to attack the hosts behind them, nor suggest that spear phishing or drive-by downloads are not a significant risk to those hosts, nor suggest that the security of the gateway itself is not essential.  But these risks apply equally to hosts behind routed firewalls.

Designing for things to fail is part of good network design, and in many (most?) corporate networks, it’s preferable to fail closed rather than open.  On a NAT gateway, if there is a failure in the routing or firewalling engine, only one host remains open to external attack: the gateway itself.  On the other hand, if a routed firewall’s ACLs fail to be applied for any reason – say, during a system restart after a software update – the default scenario for many operating systems is that routing keeps working even if the firewall does not.  So in a failure scenario, NAT’s security posture is more desirable than that of a similarly-configured non-NATed network.

Similarly, if I make a mistake in specifying a netmask on an ACL in a routed network (as a colleague recently did on a client’s network), I might accidentally allow outside access to double the number of systems I intended to.  Using NAT means that I’m less likely to do this, because such ACLs usually only apply in an outbound direction.
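
To show the size of that mistake, here is a tiny Python example (using documentation addresses rather than anything real): being one bit short on the prefix length silently doubles the address range the ACL matches.

    import ipaddress

    # A /23 where a /24 was intended matches twice as many addresses.
    intended     = ipaddress.ip_network("192.0.2.0/24")
    fat_fingered = ipaddress.ip_network("192.0.2.0/23")

    print(intended.num_addresses)       # 256
    print(fat_fingered.num_addresses)   # 512
    # 192.0.3.7 was never meant to be reachable, but the sloppy ACL matches it:
    print(ipaddress.ip_address("192.0.3.7") in fat_fingered)   # True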

NAT simplifies problems where scale overwhelms the administrator

This is the part where the networking high-flyers are going to start laughing at me.  But please, read and understand first.  There are factors in many organisations (usually at layer 8 or 9 of the OSI network model) which mean that we don’t always have access to the best people.  Someone with a deep understanding of how all the components of a network hang together is actually hard to come by in many places.

For those of us who are left, NAT is a helpful tool in cutting down the size of a network design or management problem from immense to manageable.  If we can provide Internet access to a large number of systems using a much smaller number of external addresses, we will have a much greater chance of understanding the configuration and producing a good result for our employers and/or clients.

But the naysayers are still right…

In many cases, NAT is only an obscurity mechanism which is fundamentally a waste of time in terms of security.  It adds complexity to the troubleshooting process, often for no additional value. But NAT can and in many cases should be part of a network administrator’s toolkit, when applied rightly.

Thinking IPv6

How this applies to IPv6 is where I start to get uneasy.  The internal-external decoupling that NAT provides seems not to be on the radar for IPv6.  The suggestions I’ve seen so far are either to use unique local addressing internally and do one-to-one translation between these and provider-independent addresses at the border router (which seems to me to provide no benefit at all over straight routed firewalling), or to use only unique local addresses and not bother with providing external addresses for corporate end-user PCs at all [2] (which will cease being practical as soon as the sales manager decides he or she needs Skype).
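
For the first option, the one-to-one translation is essentially a prefix swap.  A toy Python sketch of the idea follows; both prefixes are made up, and it ignores the checksum-neutral adjustment that a real NPTv6-style translator performs.

    import ipaddress

    # Toy illustration of one-to-one translation between a unique local /48
    # and a globally routable /48: swap the prefix, keep the subnet ID and
    # interface ID.  Both prefixes are invented; a real implementation
    # (e.g. NPTv6) also keeps the mapping checksum-neutral.
    ULA_PREFIX    = ipaddress.ip_network("fd12:3456:789a::/48")
    GLOBAL_PREFIX = ipaddress.ip_network("2001:db8:abcd::/48")

    def translate(addr):
        """Map an internal ULA address to its external equivalent, one-to-one."""
        inside = ipaddress.ip_address(addr)
        if inside not in ULA_PREFIX:
            raise ValueError("address is not in the internal ULA prefix")
        low_80_bits = int(inside) & ((1 << 80) - 1)   # subnet ID + interface ID
        return ipaddress.ip_address(int(GLOBAL_PREFIX.network_address) | low_80_bits)

    print(translate("fd12:3456:789a:42::10"))   # -> 2001:db8:abcd:42::10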


[1] When listening to that episode, one could be forgiven for thinking that connection tracking of FTP had never been invented…

[2] At about the 9:00 mark in the video.



Pondering subnet allocations

Edit, 2011-05-03: To all those poor souls who have been directed here by Google in their search for best practices on IPv4 and/or IPv6 subnet allocations (or worse, the HP A5500’s NAT capabilities), please accept my sincere apologies.  This page is more about asking questions than providing answers.

Edit, 2011-08-07: Network World has an interesting blog post by Jeff Doyle talking about issues in IPv6 address space design. Good reading.

This is my third go at writing this post.  I started in the middle of the night, because I woke up with IPv4 allocations and VLAN assignments running around in my head and couldn’t get back to sleep.  After writing what seemed to me a reasonably coherent post, I accidentally hit the back button instead of the left arrow (surely they could have found somewhere better to put that on the ThinkPad keyboard).  Dismal failure 1 for the day.  After that I just threw a few notes in here as a draft and went back to bed.

I’m in the middle of a network redesign for a major client, a medium-sized K-12 private school.  We have about 70 switches, and a little over 2000 ports.  It’s nowhere near the scale of a university, enterprise data centre, or service provider network, but it requires significantly more design, planning, and implementation effort than your average small network.

The campus houses a few loosely-coupled related entities spread across about 25 buildings, all connected by Gigabit fibre.  A few years ago, when we upgraded the phone system and switched to VoIP, I made an allocation plan for subnets using the 10.0.0.0/8 IPv4 space.  We have quite a few VLANs, using /16 and /24 subnet sizes.

The network upgrade I’m working on has a number of goals: getting all client systems off the server VLAN (which has been progressing slowly over the last 18-24 months), providing redundant routing via a new pair of HP A5500 switches using IRF (HP/3Com’s equivalent to Cisco stacking), and moving routing between VLANs from an old cluster of Linux servers to the new switches.

At the same time, I’m planning a move from switch-based VLANs to building-based VLANs, and I thought to myself: since we’re going to need IPv6 on the outside network soon, I’d better make sure my new plan allows for IPv6 on the inside.  I want to keep the IPv6 structure pretty much identical to IPv4, since our subnet plan mirrors the physical structure of the network.

Selecting the subnet size on IPv6 is easy, since it’s pretty much fixed to /64 (insert appropriate mind-boggling about why we would want to burn half of our addressing bits on the local subnet here), but there’s another complication: because there’s no NAT (yet), my IPv6 subnet plan must fit within our external address range.  This is a big difference for many (most?) organisations using IPv4 only: at the moment, we have complete logical decoupling of our internal and external address ranges; under IPv6 we must tie the two together.

This is my big concern with the lack of NAT in IPv6: it places constraints on internal network design that do not exist in the IPv4+NAT world.  I don’t dispute the wisdom of the designers in leaving out NAT – it is unquestionably a complicating hack.  But in my limited understanding of IPv6, I’m not aware of an equivalent to the useful part of IPv4 NAT (the internal/external address decoupling).  When we implement IPv6, I’m guessing that I’ll implement one-to-one address translation at our network edge to achieve equivalent functionality.

So what happens with our external address space?  I’m not fully clear on APNIC’s IPv6 allocation rules, but as far as I can tell, an existing holder of an IPv4 /23 can expect a maximum IPv6 allocation of a /48.  This means that we would have 16 bits of subnets, which is exactly the same number of /24s we have available in the 10.0.0.0/8 address space.  My first reaction to this was, “Sweet – I’ll use the exact same subnet numbers in hexadecimal, and I’ll have my IPv6 subnet plan.”  But I wonder whether that’s all there is to it.  And having exactly the same number of subnets at our disposal doesn’t seem like much of a leap forward in terms of protocol functionality…
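
To make the “same subnet numbers in hexadecimal” idea concrete, here is one possible reading of it as a Python sketch: take the second and third octets of each existing 10.X.Y.0/24 and pack them into the 16-bit subnet ID under the /48.  (The /48 below is a documentation prefix, not our real allocation.)

    import ipaddress

    V6_PREFIX = ipaddress.IPv6Network("2001:db8:1234::/48")   # placeholder, not a real allocation

    def v6_for(v4_subnet):
        """Map 10.X.Y.0/24 to <prefix>:XXYY::/64, packing X and Y as two hex bytes."""
        net = ipaddress.ip_network(v4_subnet)
        if net.prefixlen != 24 or not net.subnet_of(ipaddress.ip_network("10.0.0.0/8")):
            raise ValueError("expected a /24 inside 10.0.0.0/8")
        x, y = net.network_address.packed[1:3]   # second and third octets
        subnet_id = (x << 8) | y                 # 16-bit IPv6 subnet ID
        base = int(V6_PREFIX.network_address)
        return ipaddress.IPv6Network((base | (subnet_id << 64), 64))

    print(v6_for("10.20.30.0/24"))   # -> 2001:db8:1234:141e::/64  (0x14 = 20, 0x1e = 30)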

What are other people’s thoughts?  Are the issues for IPv6 subnet allocation different from IPv4?  Is there a best practice for this sort of thing?

