I’ve liked VyOS (and its predecessor Vyatta Core) for a long time. It’s always my first choice when I want to test a new BGP or OSPF scenario or set up an IPsec VPN. Its compelling value proposition to me is that it turns Debian Linux into a network appliance with a Juniper-like CLI. Or to put it another way, VyOS is to routing as Cumulus Linux is to switching – a router that makes sense to both network engineers and Linux geeks.
The certification is different from most others I’ve done, being 100% practical. There are no written examination requirements, no multiple-choice questions. It presents a practical scenario with a number of broken configurations, which need to be fixed in order to pass the certification. (I’ve been told this is how the Red Hat Certified Engineer test is structured as well, though I haven’t experienced it first-hand.) It uses a browser-based VNC client to hook up to a dedicated training/certification scenario platform (find all the details in their blog announcing the certification).
The announcement claimed:
We tried to avoid obscure options so that an experienced VyOS user and network admin can pass it without preparing specially for that certification, but it still requires a pretty broad experience.
I think the exam stands up pretty well to that claim. To prepare, I read through the blueprint, made sure I could get at least 90% of the sample questions right without additional study, labbed up a BGP + IPsec/VTI scenario between my home network and AWS (learning a little about compatibility between IKEv1 and IKEv2 along the way!), and then booked the exam. Experienced network and Linux admins should find the certification relatively straightforward, and easily achievable within the allotted two hours.
I had a couple of administrative difficulties (mostly due to my time zone being a long way from theirs) and a couple of very minor technical gotchas in the exam. (I never realised I was so dependent upon Ctrl-W when it comes to VyOS command-line editing, and this doesn’t work in a browser-based emulator.) The VyOS team were very apologetic about the administrative dramas, but honestly they were not really even an inconvenience. Typos and errors and failed technology are quite common in certification exams, but because the VCNE exam is based on actual VyOS running in a VM, there’s not a lot of text to get wrong, and you don’t get the level of quirkiness that simulations offer.
Contrast this with the AWS Certified Solutions Architect – Associate, which is a traditional multiple-choice exam administered by Pearson. I studied it from a paper book (I’ve never really learned well from the video training that many people swear by) for about 3 months off & on, and although I passed well, I never felt that it tested my knowledge in the right ways. And the multiple-choice format has given rise to the whole question-dumping industry which lurks in the shadows of many vendor certification studies.
On the negative side for the VyOS exam, there was no IPv6, which I think is a serious gap in any network-oriented certification nowadays. I also found the IPsec problem a little on the easy side. It’s hard for me to judge, but I think that the difficulty might be on the low end of intermediate level, which is where this certification is aimed.
Overall I think the VyOS CNE exam was my most pleasant certification experience yet, and one which demonstrates skills which actually matter in real life. I’m really glad to see Sentrium getting enough traction in the marketplace that a certification platform is commercially viable, and I’m keen to keep going with the certifications they offer.
root@maas:~# tail -n 1 /var/log/chrony/measurements.log
2019-04-09 09:42:54 172.22.160.61 N 2 111 111 1111 6 6 0.00 5.516e-04 1.142e-03 1.005e-05 8.240e-04 3.461e-02 AC16FE35 4B D K
I find chrony’s switching of units from milliseconds to microseconds (and in some cases nanoseconds) and its floating point formatting in logs difficult to parse visually. However, chrony’s excellent security track record means it has been adopted as the default NTP implementation on both Ubuntu and Red Hat. Both implementations claim excellent accuracy, and although I’ve never compared them empirically, they both seem to deliver on that claim. I tend to use whichever is the default for the distribution I’m working on.
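When I do need to eyeball those columns, mechanically rescaling the scientific-notation seconds into friendlier units helps. Here’s a quick Python sketch (the regex and the choice of units are mine, not anything chrony provides) that rewrites the floating-point fields of a measurements.log line:

```python
import re

line = ("2019-04-09 09:42:54 172.22.160.61 N 2 111 111 1111 6 6 0.00 "
        "5.516e-04 1.142e-03 1.005e-05 8.240e-04 3.461e-02 AC16FE35 4B D K")

def humanise_seconds(tok):
    """Render a value in seconds using whichever of s/ms/us/ns reads best."""
    val = float(tok)
    for unit, scale in (("s", 1), ("ms", 1e3), ("us", 1e6), ("ns", 1e9)):
        if abs(val * scale) >= 1 or unit == "ns":
            return f"{val * scale:.3f}{unit}"

# Only rewrite tokens that look like scientific notation; leave the rest alone.
sci = re.compile(r"^-?\d\.\d+e[+-]\d+$")
print(" ".join(humanise_seconds(t) if sci.match(t) else t
               for t in line.split()))
```

This turns, for example, 5.516e-04 into 551.600us and 3.461e-02 into 34.610ms, which I find much quicker to scan.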
As I’ve mentioned before, Network Time Protocol is one of those oft-ignored-but-nonetheless-essential subsystems which is largely unknown, except to a select few. Those who know it well generally fall into the following categories:
time geeks who work on protocol standardisation and implementation,
enthusiasts who tinker with GPS receivers or run servers in the NTP pool, or
sysadmins who have dealt with the consequences of inaccurate time in operational systems. (I fall mostly into this third category, with some brief forays into the others.)
One of the consequences of NTP’s low profile is that many important best practices aren’t widely known and implemented, and in some cases, myths are perpetuated.
Fortunately, Ubuntu & other major Linux distributions come out of the box with a best-practice-informed NTP configuration which works pretty well. So sometimes taking a hands-off approach to NTP is justified, because it mostly “just works” without any special care and attention. However, some environments require tuning the NTP configuration to meet operational requirements.
When best practices require more
One such environment is Canonical’s managed OpenStack service, BootStack. A primary service provided in BootStack is the distributed storage system, Ceph. Ceph’s distributed architecture requires the system time on all nodes to be synchronised to within 50 milliseconds of each other. Ordinarily NTP has no problem achieving synchronisation an order of magnitude better than this, but some of our customers run their private clouds in far-flung parts of the world, where reliable Internet bandwidth is limited, and high-quality local time sources are not available. This has sometimes resulted in time offsets larger than Ceph will tolerate.
A technique for dealing with this problem is to select several local hosts to act as a service stratum between the global NTP pool and the other hosts in the environment. The Juju ntp charms have supported this configuration for some time, and historically in BootStack we’ve achieved this by configuring two NTP services: one containing the manually-selected service stratum hosts, and one for all the remaining hosts.
We select hosts for the service stratum using a combination of the following factors:
Bare metal systems are preferred over VMs (but the latter are still workable). Containers are not viable as NTP servers because the system clock is not virtualised; time synchronisation for containers should be provided by their host.
There should be no “choke points” in the NTP strata – these are bad for both accuracy and availability. A minimum of 3 (but preferably 4-6) servers should be included in each stratum, and these should point to a similar number of higher-stratum NTP servers.
Because consistent time for Ceph is our primary goal, the Ceph hosts themselves should be clients rather than part of the service stratum, so that they get a consistent set of servers offering reliable response at local LAN latencies.
A manual service stratum deployment
Here’s a diagram depicting what a typical NTP deployment with a manual service stratum might look like.
To deploy this in an existing BootStack environment, the sequence of commands might look something like this (application names are examples only):
# Create the two ntp applications:
$ juju deploy cs:ntp ntp-service
# ntp-service will use the default pools configuration
$ juju deploy cs:ntp ntp-client
$ juju add-relation ntp-service:ntpmaster ntp-client:master
# ntp-client uses ntp-service as its upstream stratum
# Deploy them to the cloud nodes:
$ juju add-relation infra-node ntp-service
# deploys ntp-service to the existing infra-node service
$ juju add-relation compute-node ntp-client
# deploys ntp-client to the existing compute-node service
Updating the ntp charm
It’s been my desire for some time to see this process made easier, more accurate, and less manual. Our customers come to us wanting their private clouds to “just work”, and we can’t expect them to provide the ideal environment for Ceph.
One of my co-workers, Stuart Bishop, started me thinking with this quote:
“[O]ne of the original goals of charms [was to] encode best practice so software can be deployed by non-experts.”
That seemed like a worthy goal, so I set out to update the ntp charm to automate the service stratum host selection process.
My goals for this update to the charm were to:
provide a stable NTP service for the local cloud and avoid constantly changing upstream servers, and
improve testability of the charm code and increase test suite coverage.
What it does
This functionality is enabled using the auto_peers configuration option; this option was previously deprecated, because it could be better achieved through juju relations.
On initial configuration of auto_peers, each host tests its latency to the configured time sources.
The charm inspects the machine type and the software running on the system, using this knowledge to reduce the likelihood of a Ceph, Swift, or Nova compute host being selected, and to increase the likelihood that bare metal hosts are used. (This usually means that the Neutron gateways and infrastructure/monitoring hosts are more likely to be selected.)
The above factors are then combined into an overall suitability score for the host. Each host compares its score to the other hosts in the same juju service to determine whether it should be part of the service stratum.
The results of the scoring process are used to provide feedback in the charm status message, visible in the output of juju status.
If the charm detects that it’s running in a container, it sets the charm state to blocked and adds a status message indicating that NTP should be configured on the host rather than in the container.
The charm makes every effort to restrict load on the configured NTP servers by testing connectivity a maximum of once per day if configuration changes are made, or once a month if running from the update-status hook.
All this means that you can deploy a single ntp charm across a large number of OpenStack hosts, and be confident that the most appropriate hosts will be selected as the NTP service stratum.
Here’s a diagram showing the resulting architecture:
How it works
The new code uses ntpdate in test mode to test the latency to each configured source. This results in a delay in seconds for each IP address responding to the configured DNS name.
The delays for responses are combined using a root mean square, then converted to a score using the negative of the natural logarithm, so that delays approaching zero result in a higher score, and larger delays result in a lower score.
The scores for all host names are added together. If the charm is running on a bare metal machine, the overall score is given a 25% increase in weighting. If the charm is running in a VM, no weight adjustment is made. If the charm is running in a container, the above scoring is skipped entirely and the weighting is set to zero.
The weight is then reduced by between 10% and 25% based on the presence of the following running processes: ceph, ceph-osd, nova-compute, or swift.
Each unit sends its calculated scores to its peer units on the peer relation. When the peer relation is updated, each unit calculates its position in the overall scoring results, and determines whether it is in the top 6 hosts (by default – this value is tunable). If so, it updates /etc/ntp.conf to use the configured NTP servers and flags itself as connecting to the upstream stratum. If the host is not in the top 6, it configures those 6 hosts as its own servers and flags itself as a client.
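The scoring described above can be sketched roughly in Python. The RMS-then-negative-log combination, the 25% bare-metal bonus, and the top-6 selection come from the description above; the per-process penalty values here are illustrative guesses within the stated 10–25% range, so consult the merge proposal for the authoritative numbers:

```python
import math

def source_score(delays):
    """Combine per-response delays (seconds) for one source: root mean
    square, then the negative natural log, so that delays approaching
    zero give a high score and large delays give a low one."""
    rms = math.sqrt(sum(d * d for d in delays) / len(delays))
    return -math.log(rms)

# Illustrative penalties only (the charm uses values in the 10-25% range).
PENALTIES = {"ceph-osd": 0.25, "ceph": 0.10, "nova-compute": 0.10, "swift": 0.10}

def host_score(delays_by_source, machine_type, services):
    """Sum per-source scores, then apply machine-type and workload weightings."""
    if machine_type == "container":
        return 0.0                          # containers never serve NTP
    score = sum(source_score(d) for d in delays_by_source.values())
    if machine_type == "bare-metal":
        score *= 1.25                       # prefer bare metal over VMs
    penalty = max((PENALTIES[s] for s in services if s in PENALTIES), default=0.0)
    return score * (1.0 - penalty)          # nudge busy workload hosts down

def select_service_stratum(peer_scores, n=6):
    """Pick the top-n hosts by score to form the service stratum."""
    return sorted(peer_scores, key=peer_scores.get, reverse=True)[:n]
```

Each unit would run the equivalent of host_score locally, share the result over the peer relation, and then apply select_service_stratum to the combined results to decide its own role.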
How to use it
This updated ntp charm has been tested successfully with production customer workloads. It’s available now in the charm store. Those interested in the details of the code change can review the merge proposal – if you’d like to test and comment on your experiences with this feature, that would be the best place to do so.
Here’s how to deploy it:
# Create a single ntp service:
$ juju deploy --channel=candidate cs:ntp ntp
# ntp service still uses default pools configuration
$ juju config ntp auto_peers=true
# Deploy to existing nodes:
$ juju add-relation infra-node ntp
$ juju add-relation compute-node ntp
I recently wanted to look at some packet captures on my NTP pool servers and find out if any NTP clients hitting my servers use extension fields or legacy MACs. Because the overall number of NTP packets is quite large, I didn’t want to spool all NTP packets to disk then later filter with a Wireshark display filter – I wanted to filter at the capture stage.
I started searching and found that not many quick guides exist to do this in the capture filter. However, the capability is there in both tcpdump and tshark, using either indexing into the UDP header, or using the overall captured frame length. Here’s an example of tcpdump doing the former (displaying it to the terminal), and tshark doing the latter (writing it to a file):
tcpdump -i eth0 -n -s 0 -vv 'udp port 123 and udp[4:2] > 56'
tshark -i eth0 -n -f 'udp port 123 and greater 91' -w file.pcap
Both of the above filters are designed to capture NTP packets greater than the most common 48-byte UDP payload. In the case of udp[4:2], we’re using the UDP header’s 16-bit length field, which includes the header itself. In the case of greater, it uses the overall captured frame length, and actually means greater-than-or-equal-to (i.e. the same as the >= operator); see the pcap-filter(7) man page for more details.
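The two magic numbers fall straight out of the header sizes. A standard NTP packet is a 48-byte UDP payload; the UDP length field counts its own 8-byte header on top of that, while the frame-length filter additionally counts the Ethernet and IPv4 headers (assuming untagged Ethernet and no IP options). A few lines of Python make the arithmetic explicit:

```python
NTP_BASE_PAYLOAD = 48   # standard NTPv4 packet, no extension fields or MAC
UDP_HEADER = 8
IPV4_HEADER = 20        # assumes no IP options
ETHERNET_HEADER = 14    # assumes untagged Ethernet

# udp[4:2] is the UDP length field: header plus payload, so anything
# strictly greater than 56 carries more than the base 48-byte payload.
udp_length_threshold = UDP_HEADER + NTP_BASE_PAYLOAD

# 'greater N' means frame length >= N, so the threshold sits one byte
# above the standard 90-byte frame.
frame_threshold = (ETHERNET_HEADER + IPV4_HEADER + UDP_HEADER
                   + NTP_BASE_PAYLOAD + 1)

print(f"udp[4:2] > {udp_length_threshold}")   # udp[4:2] > 56
print(f"greater {frame_threshold}")           # greater 91
```

Note that the frame-length variant would need adjusting for VLAN-tagged interfaces or IPv6, since both change the fixed header overhead.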