Building a Stratum 1 NTP Server with a Raspberry Pi 4 and Adafruit Ultimate GPS Hat

I'm going to be honest with you. There are a lot of posts on the internet about how to do this, but there's a lot of misinformation out there too. My goal in this post is to give you what you need to get this set up, explain why you need to do what you need to do, and give you the tools you'll need to study up if you want to know more. In the past week, I've been troubleshooting a GPS hat and I've built an NTP server on a fresh Raspbian build at least 5 times to perfect this, and yesterday I ran through it again, writing down the steps as I went. That's where this how-to came from.

There are two schools of thought on how to do this post-RPi2: software UART or hardware UART. Software UART is for people who still want to use bluetooth on the RPi. I don't know why you would want bluetooth on your NTP server, but some folks might, and this post will not be for them. I'm in the hardware UART crowd.

The Ultimate GPS hat delivers its data over a serial port to GPIO pins 14 and 15 at 9600 bps. On the RPi2, this went directly to the hardware UART, but on the RPi 3 and 4, the hardware UART is taken up by the bluetooth subsystem, and the serial port for pins 14 and 15 is emulated in software. Luckily, we can disable bluetooth and use the hardware UART for the GPS. So, let’s get started.

Parts List and Cost:

Raspberry Pi 4 2GB: $34.99 (The 1GB version will work fine, I just had the 2GB version laying around)
Adafruit Industries Ultimate GPS HAT: $44.99
Waterproof GPS Active Antenna 28dB Gain: $14.99 (Optional, but you’ll really need this for best results)
Raspberry Pi 4 Power Supply with ON/Off Switch: $9.99
SanDisk Ultra 32GB microSDHC: $7.99
Total Cost: $112.95 (you can usually find things cheaper too). Compare this to commercial NTP servers that range from $1500 to $5000+

I'm going to assume that this is a fresh build and that you're doing this headless and over wifi. I also assume you're using Raspbian. The build I used for this how-to was 2020-02-13-raspbian-buster-lite, but unless something major changes with Raspbian, it should work with any build. I use apt rather than apt-get because I like the little progress bar at the bottom of the screen, but you can use apt-get if you like. I also use nano as my editor; if you're one of those people who's into self-harm and wants to use vi, more power to you. Without further ado…

1. Burn the Raspbian image to an SD card
2. Add a blank ssh file and your wpa_supplicant.conf to /boot. (standard stuff for headless RPi access; there's a sample right after this list if you've never done it)
3. Put a battery in the hat, attach the hat, plug in your SD card, and either attach an external active GPS antenna or just put the whole RPi outside and power it up.
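If you haven't done the headless prep before, the two files from step 2 look roughly like this (a sketch; the country code, SSID, and passphrase below are placeholders, so use your own):

# On the /boot partition of the freshly burned card:
# - create an empty file named "ssh" (no extension) to enable the SSH server
# - create wpa_supplicant.conf with your wifi details, something like:

country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourWifiPassword"
}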

When I was writing out the instructions for this yesterday, I just plugged my RPi4 in and set it on my back patio for about 30 minutes. You really can’t do the later steps until the GPS has a fix. You’ll know it has a fix when the red LED stops blinking once per second and starts giving you a brief flash every 15 seconds.

While we're waiting for a fix, go ahead and ssh to raspberrypi.local using your favorite terminal program (I'm on a Mac, so I use Terminal; if you're on Windows, I recommend PuTTY). Let's get this thing updated while waiting…

sudo apt update
sudo apt upgrade

Once we have a GPS fix, we'll move forward. The first thing we want to do is disable the serial console getty, because we want /dev/ttyAMA0 for the GPS and the getty is currently using it. While we're at it, we'll also disable the hciuart service, since it tries to talk to the UART as well.

sudo systemctl stop serial-getty@ttyAMA0.service
sudo systemctl disable serial-getty@ttyAMA0.service
sudo systemctl disable hciuart

Even though we've stopped the console getty from starting, we need to stop the kernel from trying to use the serial port too. Edit /boot/cmdline.txt and remove the console entry. While we're in this file, also append nohz=off to the end of the line; it's a kernel parameter (so it belongs here rather than in /boot/config.txt) and it disables the kernel's tickless NO_HZ mode, which hurts timekeeping.

sudo nano /boot/cmdline.txt
remove this: console=serial0,115200
add this to the end of the line: nohz=off
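For reference, a stock Buster cmdline.txt ends up looking roughly like this after the edit, all on one line (the PARTUUID is a placeholder; yours will be different):

console=tty1 root=PARTUUID=xxxxxxxx-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait nohz=off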

Now we need to actually disable bluetooth and take over the hardware UART, which will let us use /dev/ttyAMA0 for our GPS. While we're in here, we'll also enable the PPS pin, which is GPIO pin 4. We're doing all of this now so we only have to reboot once.

sudo nano /boot/config.txt
# At the bottom of the file, add the following:

# Use the /dev/ttyAMA0 hardware UART for the GPS instead of Bluetooth
dtoverlay=disable-bt

# Enable GPS PPS on GPIO 4
dtoverlay=pps-gpio,gpiopin=4

We also need to clean up one more thing before we move on. DHCP can be configured to deliver NTP server info on some networks, but that doesn’t work very well with NTP servers themselves. We want to make sure that this doesn’t interfere with us, so we’ll disable it. If we don’t, it could cause your ntp.conf file to be edited or ignored completely.

sudo rm /etc/dhcp/dhclient-exit-hooks.d/ntp
sudo rm /lib/dhcpcd/dhcpcd-hooks/50-ntp.conf

sudo nano /etc/dhcp/dhclient.conf
In the "request" block, remove dhcp6.sntp-servers and ntp-servers
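For reference, the stock request block ends up looking roughly like this once those two options are gone (your file may list the options in a slightly different order):

request subnet-mask, broadcast-address, time-offset, routers,
        domain-name, domain-name-servers, domain-search, host-name,
        dhcp6.name-servers, dhcp6.domain-search, dhcp6.fqdn,
        netbios-name-servers, netbios-scope, interface-mtu,
        rfc3442-classless-static-routes;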


Finally, we want to change our CPU scaling governor to keep the CPU at its maximum speed for continuous use. Normally the power-saving features are a good thing: they save you power. But when the CPU changes power-saving modes, the impact on PPS timing is noticeable, and the kernel's NO_HZ mode has a similarly bad effect on timekeeping (that's the nohz=off we added to /boot/cmdline.txt earlier). To switch the scaling governor from the default ondemand to performance:

echo performance | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
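Note that the governor set this way doesn't survive a reboot, so you'll either need to re-run that command after the reboot in the next step or make it persistent. One way to make it stick is the cpufrequtils package, which applies a governor at every boot:

sudo apt install cpufrequtils
echo 'GOVERNOR="performance"' | sudo tee /etc/default/cpufrequtils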

Now that we’ve edited those files, removed the DHCP configurations, and set our performance level, we need to reboot.

sudo reboot

Give it a couple minutes, then SSH back in and let’s check to see if we have communication from the GPS on /dev/ttyAMA0

sudo cat /dev/ttyAMA0

You should see a steady stream of raw NMEA sentences (lines starting with $GPGGA, $GPRMC, $GPGSV, and so on) scrolling past.

Great, we have communication between the RPi and the GPS Hat! Awesome! Now, let’s add some tools to make this whole thing work.

sudo apt install gpsd gpsd-clients python-gps pps-tools ntp

GPSD is the service we're going to use to decode the NMEA data coming from the GPS. Before it will work, we need to edit its configuration file. You'll want your options to match the ones below: we're not using a USB GPS, so USBAUTO gets turned off; the devices are /dev/ttyAMA0 (the UART we stole from bluetooth) and /dev/pps0 (the PPS pin we requested earlier in /boot/config.txt); and the -n option tells GPSD to start talking to the GPS on startup rather than waiting for the first client to attach.

sudo nano /etc/default/gpsd

START_DAEMON="true"
USBAUTO="false"
DEVICES="/dev/ttyAMA0 /dev/pps0"
GPSD_OPTIONS="-n"

After we save the config file, we need to restart the gpsd service so it can pick up the config.

sudo systemctl restart gpsd

Once gpsd is restarted, we’ll run gpsmon to see how we’re looking.

gpsmon

You should see the gpsmon display, with live fix data, the satellites in view, and the PPS offsets.

YAY! Your GPS is now passing data and GPSD is processing that data properly, but this is only half the battle. You should see PPS offsets in the gpsmon window, but to verify we have good communication on /dev/pps0, we run the following command.

sudo ppstest /dev/pps0

The output should show a new assert line roughly once per second, with the sequence number incrementing each time.

Great, we have a good PPS signal. The GPS is working, PPS is working, now all that’s left to do is to edit the ntp.conf file and add some pretty important stuff. Before we do that, I want to explain a few things as to why we’re going to do what we do.

NTP gets precise time from GPSD via a shared memory driver. That shared memory driver uses the magic pseudo-IP address of 127.127.28.X. 127.127.28.0 identifies unit 0 of the ntpd shared-memory driver (NTP0); 127.127.28.1 identifies unit 1 (NTP1). Unit 0 is used for in-band message timestamps and unit 1 for the (more accurate, when available) time derived from combining in-band message timestamps with the out-of-band PPS synchronization pulse. Splitting these notifications allows ntpd to use its normal heuristics to weight them.

Different units, 2 (NTP2) and 3 (NTP3) respectively, must be used when gpsd is not started as root. We've told our GPS HAT to put PPS time on GPIO pin 4, so we will also use unit 2 (NTP2) for the PPS time correction. You can verify this by running the command ntpshmmon, which will show you that NTP2 is our primary shared memory clock source. Run that command with sudo, and you should see NTP0 and NTP1 show up as well.
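If you want to see those shared memory segments for yourself, ntpshmmon (it comes with gpsd-clients) will dump a few samples; something like this works (the -n flag just limits how many samples it collects before exiting):

ntpshmmon -n 5
sudo ntpshmmon -n 5

Run as a regular user you should only see NTP2 samples; run with sudo, NTP0 and NTP1 show up too.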

Another thing to note: even though you're building a highly accurate GPS-based stratum 1 NTP server, you're going to want more than one time source. If something happens to the GPS, the antenna breaks, or anything else goes wrong, it's best to have a few sources and let NTP handle the rest. I recommend adding servers that are close to you, and having a few of them available.

Now, we’ll get on with editing the ntp.conf file and adding a few NTP servers as well as a log file, our PPS reference and our GPS reference. Your ntp.conf file should look something like this:

sudo nano /etc/ntp.conf

# /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help

driftfile /var/lib/ntp/ntp.drift
logfile /var/log/ntp.log

# Leap seconds definition provided by tzdata
leapfile /usr/share/zoneinfo/leap-seconds.list

# Enable this if you want statistics to be logged.
statsdir /var/log/ntpstats/

statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable

# You do need to talk to an NTP server or two (or three).
#server ntp.your-provider.example
server time.nist.gov iburst minpoll 5 maxpoll 5
server tick.usno.navy.mil iburst minpoll 5 maxpoll 5
server 0.us.pool.ntp.org iburst minpoll 5 maxpoll 5
server 1.us.pool.ntp.org iburst minpoll 5 maxpoll 5
server 2.us.pool.ntp.org iburst minpoll 5 maxpoll 5
server 3.us.pool.ntp.org iburst minpoll 5 maxpoll 5

# GPS PPS reference (NTP2)
server 127.127.28.2 minpoll 4 maxpoll 4 prefer
fudge 127.127.28.2 refid PPS

# GPS Serial data reference (NTP0)
server 127.127.28.0 minpoll 4 maxpoll 4
fudge 127.127.28.0 time1 0.500 refid GPS

# pool.ntp.org maps to about 1000 low-stratum NTP servers. Your server will
# pick a different set every time it starts up. Please consider joining the
# pool: <http://www.pool.ntp.org/join.html>
#pool 0.debian.pool.ntp.org iburst
#pool 1.debian.pool.ntp.org iburst
#pool 2.debian.pool.ntp.org iburst
#pool 3.debian.pool.ntp.org iburst

# Access control configuration; see /usr/share/doc/ntp-doc/html/accopt.html for
# details. The web page <http://support.ntp.org/bin/view/Support/AccessRestrictions>
# might also be helpful.
#
# Note that "restrict" applies to both servers and clients, so a configuration
# that might be intended to block requests from certain clients could also end
# up blocking replies from your own upstream servers.

# By default, exchange time with everybody, but don't allow configuration.
restrict -4 default kod notrap nomodify nopeer noquery limited
restrict -6 default kod notrap nomodify nopeer noquery limited

# Local users may interrogate the ntp server more closely.
restrict 127.0.0.1
restrict ::1

# Clients from this (example!) subnet have unlimited access, but only if
# cryptographically authenticated.
#restrict 192.168.123.0 mask 255.255.255.0 notrust

# If you want to provide time to your local subnet, change the next line.
# (Again, the address is an example only.)
#broadcast 192.168.123.255

As you can see, I've added a log file at the top to send logs to /var/log/ntp.log, and I enabled the statsdir at /var/log/ntpstats/. I added 6 NTP servers to make sure I had quite a bit of redundancy, but you can add however many you like; I'd suggest a minimum of 3. I added minpoll 5 and maxpoll 5 because, by default, ntp polls remote servers every 64 seconds, but Linux by default only keeps an ARP table entry for 60 seconds. If the ARP table has flushed the entry for a remote peer or server, then when the NTP server sends a request to that server, an entire ARP cycle gets added to the NTP packet round trip time (RTT). This will throw off the time measurements to servers on the local LAN. On a Raspberry Pi, ARP has been shown to impact the remote offset by up to 600 microseconds in some rare cases. The solution is the same for both ntpd and chronyd: add "maxpoll 5" to any "server" or "peer" directive. This caps the polling period at 32 seconds, well under the 60 second ARP timeout.

Next we added our GPS data, first the PPS reference using the server 127.127.28.2, and we’re making PPS our preferred server. Next, we added the GPS signal from 127.127.28.0. We’re fudging that one by 500 ms as a start because in my experience, the GPS signal is usually around 500ms off. This will need to be tuned for it to be accurate. More information on that later.

Finally, I commented out all the standard Debian pools, since we're using our own servers. You can leave them in instead if you'd rather use the pools.

Now we need to restart the ntp service for it to pick up the config.

sudo systemctl restart ntp

Once NTP restarts, we can check the status by using the ntpq -p (or -pn if you don’t want name resolution) command.

ntpq -p

It will take a few moments for NTP to connect to the servers in your list and sort things out. You're looking for the single character (the tally code) just to the left of the name or IP.

(blank) Discarded as not valid
x Discarded by the intersection algorithm as a falseticker
- Discarded by the cluster algorithm as an outlier
+ Included by the combining algorithm
# Backup time source
* System peer (This is what we’re looking for)
o Indicates a PPS peer whose driver support is directly compiled into ntpd (NA for us)

Ultimately, you'll end up with the SHM(2) PPS source selected as the system peer (the * in the first column) and your remote servers showing up as + or - candidates.

And now, you have a working Stratum 1 NTP Server! Your next steps should be to go ahead and configure your RPi properly by setting localization options, giving it a static IP (you’ll definitely want to do this if you’re making an NTP server), and anything else I’ve completely skipped over in the making of this how-to, especially if you haven’t done this already. I would not recommend making this a public server as legacy NTP has some security issues with it. There is a hardened version of NTP called NTPSec that is available for Raspbian, but I haven’t gotten around to messing with it yet. I would assume that the steps would be the same though.

Update: I just installed NTPSec. It removes NTP, the ntp.conf file looks like it lives at /etc/ntpsec/ntp.conf now, and the service is obviously called ntpsec rather than ntp. If you want to make your server publicly available, I'd suggest using NTPSec rather than regular ntp.
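If you want to try it yourself, the swap is roughly just:

sudo apt install ntpsec
sudo nano /etc/ntpsec/ntp.conf
sudo systemctl restart ntpsec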

Remember earlier where I said that we were fudging the GPS signal by 500ms, but that it needed to be tuned? Yeah, well, that's a world all its own. As it sits, the time you receive right now will be just fine, but if you want more accurate time, you can fiddle around with it and tune things to become incredibly accurate. Here's a link you can use to learn about that tuning, as it's something that's a little too deep to get into in this post. There's more info about tuning in the references below as well.

GPSD Performance Tuning

There is one tool that comes with gpsd called ntpoffset. It's mentioned in the link above and can be found in /usr/share/doc/gpsd-clients/examples/ for those of you that want to play with it (check out the README in that directory too). If you're going to try to tune this thing, I would recommend removing the 500ms fudge and letting it settle to get an accurate offset number, at least a day to be safe. I'm doing that right now myself. If you don't mind, please let me know in the comments what your offset comes out to and how long you let it settle before running the ntpoffset script. You won't have to create the directory or chown it as described in the tuning link above; it will already be there for you. Just run the script and let me know your offset. Also, check periodically to see if your offset is changing. The script gives you an AVERAGE, and it's probably going to change.

Remember, once you set the offset (the fudge), the number the script reports shifts by the amount of your fudge. So if you set it to 0.500, let it run for a day, and the actual offset needed to be 0.540, the script is going to tell you it's now -40.xxx. If you set it to 0.500 and let it run for a bit, then changed it to 0 and let it run for, say, a couple hours, and the offset is really 0.540, the ntpoffset script is going to spit out something like -220.xxx, because it's averaging samples from both periods. When I ran it (the number is a little low because I had removed the fudge from the config and only let it run for a short while), it gave me -465.353, so my fudge time1 number would be 0.465. In the live ntpq screen, it would be 0.553. If your offset is positive, say ntpoffset gives you 465.353 (no negative sign), then your fudge time1 would be -0.465. Got it? Told you it's a world all its own…
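To keep the sign convention straight, the arithmetic above boils down to: new time1 = current time1 minus the ntpoffset average converted from milliseconds to seconds. A quick one-liner sanity check (just an illustration of that math, using made-up numbers):

# current_time1 is whatever "fudge ... time1" is in ntp.conf right now (0 if you removed it)
# ntpoffset_ms is the average the ntpoffset script printed
current_time1=0.500
ntpoffset_ms=-40.0
echo "$current_time1 $ntpoffset_ms" | awk '{ printf "fudge 127.127.28.0 time1 %.3f refid GPS\n", $1 - ($2 / 1000) }'
# prints: fudge 127.127.28.0 time1 0.540 refid GPS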

I hope this helps clear up some of the confusion out there and helps some folks out. Be sure to check out the references below. I couldn't have done this without them.

Resources:

BIPM 2018 Annual Report: Scroll to page 65 for the Time Dissemination Services section. This contains the NTP servers of the National Metrology Laboratories of countries around the world and is a great resource for other stratum 1 NTP servers, many of which are updated directly from atomic sources.
NTPSec.org

References:
Steve Friedl’s Unixwiz.net Tech Tips: Building a GPS Time Server with the Raspberry Pi 3
Gary E. Miller and Eric S. Raymond: GPSD Time Service HOWTO
David Taylor @ Satsignal.eu: The Raspberry Pi as a Stratum-1 NTP Server
Ax0n’s Den: Stratum-1 NTP Server
Adafruit: Adafruit Ultimate GPS HAT for Raspberry Pi

Custom PRTG Sensor with Speedtest.Net CLI (Windows)

There are a few different options out there offering insight into how to create a custom speed test sensor for PRTG, but today I'm going to use this one from Nicolai Pederson as my jumping-off point. Nicolai was using an .exe file from GitHub that hadn't been updated in some time, and when I started messing with it, I noticed that the speed test really didn't run long enough to give a valid result. Also, Ookla's Speedtest.net recently released their own CLI tool, so I wanted to take what Nicolai did and make it work with the new tool from Ookla, which is actually pretty easy. So, we're going to follow his instructions with a few changes to his .bat file, and I'm going to make one change to keep the results consistent.

    1. Download the Speedtest.net CLI app from Ookla.
    2. Copy those files to “C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML”. For sake of simplicity, that’s going to be our working directory.
    3. Open up a command prompt, cd to “C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML”, and run “speedtest.exe -L”. This is going to give you a list of servers close to you. I would recommend picking the server of your ISP if it’s on the list.
    4. Once you have your server picked, make note of server ID. We’re going to be using that in our .bat file shortly. In my case, I’m using the Spectrum server, ID 16969.
    5. Open up Notepad and copy the following. We’re going to create a .bat file with it.
        @ECHO off
        SETLOCAL EnableDelayedExpansion
        SET "Latency="
        SET "Download="
        SET "Upload="
        FOR /F "tokens=4,7,8 delims=," %%A IN ('"C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML\speedtest.exe" --accept-license -s 16969 -f csv') DO (
        SET Latency=%%~A
        SET Download=%%~B
        SET Upload=%%~C
        )
        ECHO ^<PRTG^>
        ECHO ^<result^>
        ECHO ^<Channel^>Ping Latency^</Channel^>
        ECHO ^<value^>%Latency%^</value^>
        ECHO ^<Mode^>Absolute^</Mode^>
        ECHO ^<Unit^>TimeResponse^</Unit^>
        ECHO ^<Float^>1^</Float^>
        ECHO ^<ShowChart^>1^</ShowChart^>
        ECHO ^<ShowTable^>1^</ShowTable^>
        ECHO ^</result^>
        ECHO ^<result^>
        ECHO ^<Channel^>Download^</Channel^>
        ECHO ^<value^>%Download%^</value^>
        ECHO ^<Mode^>Absolute^</Mode^>
        ECHO ^<volumeSize^>MegaBit^</volumeSize^>
        ECHO ^<float^>0^</float^>
        ECHO ^<unit^>SpeedNet^</unit^>
        ECHO ^<ShowChart^>1^</ShowChart^>
        ECHO ^<ShowTable^>1^</ShowTable^>
        ECHO ^</result^>
        ECHO ^<result^>
        ECHO ^<Channel^>Upload^</Channel^>
        ECHO ^<value^>%Upload%^</value^>
        ECHO ^<Mode^>Absolute^</Mode^>
        ECHO ^<volumeSize^>MegaBit^</volumeSize^>
        ECHO ^<float^>0^</float^>
        ECHO ^<unit^>SpeedNet^</unit^>
        ECHO ^<ShowChart^>1^</ShowChart^>
        ECHO ^<ShowTable^>1^</ShowTable^>
        ECHO ^</result^>
        ECHO ^</PRTG^>

    6. Replace the server 16969 with the server ID of your choice. The reason we’re going to use the same server, preferably from your ISP, is to have consistency with your speed test. If you’re using multiple servers, you could get varying results as you don’t know what kind of bandwidth each server has. And, if you’re using your own ISP, they’re a lot more likely to give you truly accurate results and less likely to block you if you run the test a lot.
    7. Save the file as something like speedtest.bat in the working directory, “C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML”. Just make sure you remember what you saved it as.
    8. Go to PRTG and create a new sensor. The sensor type will be “EXE / Script Advanced”, then name it and select your “speedtest.bat” for EXE/Script under Sensor Settings.
    9. Once you have the sensor created and you gather some data, go in change the scanning interval. You obviously don’t want this thing scanning every 60 seconds or so. I set mine to scan every 6 hours, but you can set yours as you see fit.

So, why did I do this when Nicolai had already done the work? Well, the GitHub .exe that Nicolai uses only runs for a few seconds, which isn't long enough to give an accurate reading, and it also tried to use servers that don't exist anymore. If you check his website, you'll see there are some comments from people complaining that they were getting incorrect results. The "official" Speedtest CLI app solves that problem. Also, the official app can spit out JSON, but PRTG doesn't like the format, and I'm not smart enough to know how to parse the data into a format that it does like, so I had to figure out a way to get the data I wanted into a format that PRTG wanted.

Now, for those of you like me that aren't smart and want to figure out what that .bat file is doing, I'll explain. The speedtest.exe command spits out its data in CSV format (the "-f csv" at the end of the command sets the formatting). The "FOR /F "tokens=4,7,8 delims=,"" in the .bat tells the loop that the output is comma-delimited and that we want the 4th, 7th, and 8th fields. YOU MAY NEED TO CHANGE THIS! The reason it's set to 4,7,8 on mine is that the very first field of the output is the server name, "Spectrum - Columbus, OH", and the comma before OH gets read as, well, a comma, which pushes every later field over by one. If the output of your command doesn't have a comma in the server name, you may have to change the token numbers. To find out for sure, you can run the following:

speedtest.exe --accept-license -s #YOURSERVERNUMBER# -f csv

Then count the commas. If you're not sure what data is where, you can run the following and it will tell you.

speedtest.exe --accept-license -s #YOURSERVERNUMBER# -f csv --output-header

That will tell you what data is in which location. You’ll get something like this:

"server name","server id","latency","jitter","packet loss","download","upload","download bytes","upload bytes","share url"

"Spectrum - Columbus, OH","16969","7.495","0.786","N/A","110863914","4621941","1425470568","32103608","https://www.speedtest.net/result/c/73cf23fa-84cd-4473-a816-4154424fd027"

Of course, now that you know what's being parsed and how, you can add more data to this if you want, like packet loss, jitter, download bytes, etc. You just need to follow the example set in the .bat file, and make sure you test it out: you can run the .bat from the CLI and see the data or check for errors before creating the sensor. Since I first posted this, I've gone ahead and created an example that pulls all the information from the CLI output except packet loss into one place. You can download that here, and just rename it to .bat to run it. Don't forget to change your server ID too!

One more note on the changes I made between his .bat file and mine: I removed his remarks, added a ~ to the variables (%%~A, etc.) to strip the quotes from the fields in the CSV output, cleaned up the formatting a bit, and removed the "00" from the upload and download values (they're not needed). I should also note that I spent over 2 hours trying to figure out why I was seeing good, clean data at the CLI, but only zeros in PRTG. Let's just say there's a very good reason the "--accept-license" option is in the command now <grrr…>. Once you're done, you'll end up with a working sensor!

Update 3/11/20: As Roman in the comments found out, if your country or area requires that you accept other types of licenses or data protection regulations (like the GDPR in the EU), you may need to feed that into the command. It took me 2 hours to realize I needed to feed the "--accept-license" option, and it took Roman 3 days to figure out he needed to feed the "--accept-gdpr" option. Whenever you first run the command from the CLI, you will be asked to accept certain things, like the license and possibly the GDPR and anything else. REMEMBER WHAT IT IS YOU ACCEPT. PRTG is going to run this command as a different user, which is why you have to feed the "--accept-license" option to the command; just because you accepted the license doesn't mean PRTG did. If you're getting zeros on your sensor, try to figure out what other options need to be accepted in your area when you issue the command. Then go into the comments below and thank Roman for chasing this down over 3 days so you didn't have to.

IPv6, Time Warner / Spectrum, and the Juniper SRX.

I’ve had an IPv6 tunnel from HE.net for quite some time now. Back when I was running the ASA 5505 as my edge, I had to put a router behind it to create the tunnel. Then, when I replaced the ASA with an SRX 220 back in December 2015, I was able to build the tunnel natively on the SRX. Since that time, Time Warner has gotten around to providing IPv6 in my area and I’ve tried a couple different times to get it working with no luck. Now, I’ve finally decided that I wasn’t going to stop working on it until I got it working, and I’ve done just that, so it’s time to tell you guys how to do it yourself.

First, a few caveats… Obviously, Time Warner (now Spectrum) needs to provide IPv6 in your area, and your modem needs to support it. I don't remember how I found out that they finally had it here, but it was probably a fellow network engineer at TWC that told me. Second, realize that you're going to have to reboot the SRX, so you're going to lose connectivity for a bit. The reason you'll need to reboot is that we need to enable IPv6 flow mode; otherwise the SRX will just drop IPv6 traffic. Let's start with that…

Obviously, ssh into the SRX and enter config mode. Then enter the following command:

set security forwarding-options family inet6 mode flow-based

Then you’ll need to reboot with “request system reboot”. Once it comes back up, you’re ready to move on.
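Before moving on, a quick sanity check (optional, but handy) is to confirm that inet6 is now actually being handled in flow mode. The exact output wording varies by Junos version, but you're looking for the inet6 forwarding mode to show flow based:

greg@SRX220H# run show security flow status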

Your ge-0/0/0.0 interface probably looks something like this at present:

greg@SRX220H# show interfaces ge-0/0/0.0
description "Uplink to Cable Modem";
family inet {
    dhcp;
}

We're going to need to change the DHCP client running on that interface, because the DHCPv6 client uses the newer DHCP client daemon, and if we continued on with the legacy dhcp config still in place, you'd get an error at commit. Then we're going to add the IPv6 dhcpv6-client config to the same interface. Here are your commands:

delete interfaces ge-0/0/0 unit 0 family inet dhcp
set interfaces ge-0/0/0 unit 0 family inet dhcp-client
set interfaces ge-0/0/0 unit 0 family inet6 dad-disable
set interfaces ge-0/0/0 unit 0 family inet6 dhcpv6-client client-type statefull
set interfaces ge-0/0/0 unit 0 family inet6 dhcpv6-client client-ia-type ia-na
set interfaces ge-0/0/0 unit 0 family inet6 dhcpv6-client client-ia-type ia-pd
set interfaces ge-0/0/0 unit 0 family inet6 dhcpv6-client client-identifier duid-type duid-ll
set interfaces ge-0/0/0 unit 0 family inet6 dhcpv6-client update-router-advertisement interface vlan.0

Now we need to set our firewall to allow some traffic:

set security zones security-zone untrust interfaces ge-0/0/0.0 host-inbound-traffic system-services dhcpv6
set security zones security-zone untrust interfaces ge-0/0/0.0 host-inbound-traffic protocols router-discovery

That should be pretty self-explanatory. You need to allow dhcpv6 through the firewall for all this to work, and we're going to use router-discovery to figure things out. Once you commit that, the SRX should ask TWC for an IPv6 address. Let's check to see if we got one…

greg@SRX220H# run show dhcpv6 client binding
IP/prefix                                  Expires  State  ClientType  Interface   Client DUID
2607:fcc8:ffc0:5:14c9:b140:XXXX:XXXX/128   600553   BOUND  STATEFUL    ge-0/0/0.0  LL0x3-54:e0:32:ec:XX:XX
2605:a000:XXXX:XXXX::/64                   600553   BOUND  STATEFUL    ge-0/0/0.0  LL0x3-54:e0:32:ec:XX:XX

It looks like we have an address! Now, we need to add a route… we’ll find our next hop by running the previous command and adding detail:

greg@SRX220H# run show dhcpv6 client binding detail
Client Interface: ge-0/0/0.0
Hardware Address: 54:e0:32:ec:XX:XX
State: BOUND(DHCPV6_CLIENT_STATE_BOUND)
ClientType: STATEFUL
Lease Expires: 2017-07-14 08:01:52 EDT
Lease Expires in: 600551 seconds
Lease Start: 2017-07-07 08:01:52 EDT
Bind Type: IA_NA IA_PD
Client DUID: LL0x3-54:e0:32:ec:XX:XX
Rapid Commit: Off
Server Ip Address: fe80::201:5cff:fe78:XXXX
Client IP Address: 2607:fcc8:ffc0:5:14c9:b140:XXXX:XXXX/128
Client IP Prefix: 2605:a000:XXXX:XXXX::/64
DHCP options:
Name: server-identifier, Value: LL_TIME0x1-0x1d7c50b0-00:50:56:XX:XX:XX

Yes, the lease started about an hour before I posted this. I was so excited that I had to post immediately! Anyway, we’re looking for that Server IP Address. Once we have that, let’s add a static route to it.

set routing-options rib inet6.0 static route ::/0 qualified-next-hop fe80::201:5cff:fe78:XXXX interface ge-0/0/0.0

The qualified-next-hop gives you a lot more control than a standard next-hop. Commit the config. Once everything is committed, it's time to test, so we'll ping Google's DNS server.

# run ping 2001:4860:4860::8888

You should get a response. IPv6 is now working! W00T! In order to get your network clients talking to the internet on IPv6, you'll have to configure them to use IPv6. As you can see up above in the dhcpv6 client binding detail, there's a "Client IP Prefix". That's the prefix assigned to you. If you do a "run show interfaces vlan.0 terse", you'll see that it now has an inet6 address that looks like 2605:a000:XXXX:XXXX:1::1/80. That's going to be your IPv6 router/gateway address. You can statically assign IPs by just counting up from that last ::1, so assign 2605:a000:XXXX:XXXX:1::2/80 to your workstation and try to ping 2001:4860:4860::8888. If you get a response, you're good to go.

So, those are the commands I had to enter to get IPv6 working on my SRX. YMMV depending on TWC's configuration in your area, but this should get you pretty damn close.

Upgrading the gaming rig, for what feels like the 4173rd time.

I've been working on a plan to upgrade my gaming rig lately, especially since I had a 4790K come available with the upgrade of the FreeNAS box. For Christmas, I got a pair of Gigabyte R9-390Xs to upgrade my dual HD7970s. I've been running 32GB of RAM in my 4770K box for quite a while now, so there was nowhere to go from there. The plan was to replace the Gigabyte Z87X-OC and i7-4770K with the ASRock Z97 Extreme6 and i7-4790K from the FreeNAS box. The Z97 board has a PCIe 3.0 x4 M.2 slot on it, so I wanted to get the speed increase from using it in addition to everything else. But, here's the problem…

The LGA1150 processors (meaning ALL the Haswell processors not considered "Enthusiast / High End": the Core i3, i5, and i7s with a 4xxx model number or a G3xxx model number) only have 16 PCIe lanes. The Core i7-5820K (6 core) has 28 lanes, and the 5930K (6 core) and 5960X (8 core) both have 40 lanes. Now, let's do some math on what I wanted to go into that computer:

  • Two R9-390x’s - 32 Lanes
  • M.2 4x - 4 Lanes
  • Thunderbolt 2 AIC - 4 lanes

Well, I’m no mathematician, but I know 40 lanes when I see them. That only left me one option if I wanted to stick with Haswell, and that’s to go with the 5930K. The 5960X is still a $1000 processor and I just wasn’t going to drop that kind of coin on a CPU. So, today I went ahead and placed the order and here’s the new specs for the new gaming rig:

  • Intel Core i7-5930K CPU
  • Corsair H110i GTX Liquid CPU Cooler
  • ASRock X99 Extreme6/3.1 Motherboard
  • 32GB Corsair Dominator Platinum DDR4-2666 (4 x 8GB)
  • 2x Gigabyte R9-390X 8GB
  • ASRock Thunderbolt 2 AIC
  • Samsung 950 PRO 512 GB M.2 4x SSD
  • Samsung 850 EVO 1TB SATA6 SSD
  • Corsair Obsidian 750D Full Tower Chassis
  • Corsair AX1200i PSU
  • Corsair Professional Blue Individually Sleeved PSU cables
  • 5x Corsair SP140 Blue LED Case Fans (2 front, 2 on the radiator, 1 rear)
  • Corsair Link Commander Mini
  • 2x Corsair RGB Light kit

No spinning platters in this baby! We’re going ALL SSD. Spinning platters are for the NAS. The motherboard has dual gigabit NICs, and my network devices all support link aggregation, so I’ll be able to get 2 gigabit network access to the NAS, and that should be more than enough for pulling documents or anything else I need off of it. All this will be displayed on my current triple AOC i2769Vm 27″ monitors, which give me nearly 6 linear feet of monitor space. Yes, they are only 1080, but I’m not quite ready to dump out the money for three 27″ 4K monitors and they don’t make ultra-wide curved monitors big enough for me yet.

Today’s order was for the CPU, motherboard, and RAM. Everything else is already here. I’m hoping that everything will be here by the end of the week and I’ll be able to finish the build this weekend, so I can finish working on the FreeNAS / Plex Automation series.

Here’s a sneak peek of what it’ll look like. I was mounting the lights and fans in the case last night, and since the X99 Extreme6 and the Z97 Extreme6 look nearly identical, it’ll give you an idea of what we’ll be dealing with when it’s all said and done.

[Image: IMG_5327]

FreeNAS, Plex, and Plex Automation – Part 3 – The Build

Now the fun starts! All the parts arrived and it was time to put them on the test bench to burn things in. This is my first ever dual processor build, so it was definitely a learning experience. Nothing is really different, it’s just twice as much. When the motherboard arrived, it was absolutely beautiful.

[Image: IMG_5252]

I couldn’t wait to get everything put together and get it on the bench. The RAM had arrived a few days earlier and I knew the processors were supposed to be arriving via USPS later that day. I was antsy with anticipation! Then the letter carrier arrived and my CPUs were ready to go into the board.

[Image: IMG_5254]

Now that I've got the processors and RAM installed, let's put on the coolers, get it mounted to the bench, and get it wired up.

[Image: IMG_5255]

Thank God I sprung for the EATX version of this test bench. For those of you that are curious, this is the Highspeed PC Half-Deck Tech Station XL-ATX and it's a great little test bench. There are no metal parts to come in contact with the motherboard, so no worries about shorting things out.

Now that everything was together, it was time for the smoke test. In case you didn’t know, computers actually run on smoke. If the smoke escapes, it stops working, and the first POST of a new computer, especially one with an open-box motherboard and CPUs from eBay, is the time when that smoke is most likely to escape. Luckily, this one passed the smoke test.

[Image: IMG_5259]

I slapped the 6x 6TB drives into a carrier and put the LSI 9211-8i HBA in to start burning everything in. I added a USB fan to keep the HBA cool since there wasn't any airflow on that side of the board. The HDD rack has its own fan. Getting the HBA flashed to the IT firmware was quite the pain, but I'll save that for its own post.

First up was to run memtest86 and check the 128GB of ECC DDR3. I ran this for a few days to really beat up the memory, as memtest86 runs over and over and over again until you stop it. After the first pass, I knew I was going to be good because there were no errors found, but I let it run for a while just to be safe. I love this picture because it shows 32 CPUs found and 16 started (It’s 16 physical cores with hyper-threading for a total of 32)!

[Image: IMG_5266]

After burning in the machine for a while, it was time to transplant it into its permanent home, the Rosewill rackmount chassis. Problem was, there was already a computer in there.

[Image: IMG_5239]

So, I pulled out the old motherboard (which will actually end up being my new gaming rig) to have a fresh case to start with.

[Image: IMG_5267]

After moving some standoffs around, the motherboard fit in perfectly.

[Images: IMG_5272, IMG_5273, IMG_5271]

My original plan was to use the onboard SAS ports for the six 3TB drives and use the LSI HBA for the six 6TB drives, then use the onboard SATA3 ports for the two SSDs. I ended up using all 8 onboard SAS ports instead. FreeNAS doesn’t care what controller the drives are plugged into. I’m not sure if that was a good idea or not, and I plan on looking into it more. If it turns out it is a bad idea, I’ll just move all the 6TB drives to the same controller.

Once everything was put together, it was time to boot it up in the chassis for the first time. I hit the power button and… nothing. The fans spun up for a second, then the whole thing shut down. I had no idea what was going on. The first thing that came to mind was the fact that I couldn't find the second CPU power cable for the EVGA power supply, so I had "borrowed" one from a Corsair PSU I had. I went ahead and unplugged all the drives to see if maybe something there was shorted, but that wasn't it. I grabbed the Corsair PSU, plugged it into the second CPU connector, and the computer booted. Ok, maybe it was the cable…

I pulled the EVGA PSU out, put the Corsair PSU in, kinda redid all the cable management, and hit the power button…

[Image: IMG_5275]

Nothing. WTF??? This thing was working fine on the test bench! I did a little more troubleshooting and figured that if it was working fine on the test bench, I’d just go grab that PSU and use it. Out with the Corsair PSU, in with a Rosewill 1000W that I use for the test bench. I hit the power button and… IT’S ALIVE!

[Image: IMG_5274]

The drives are all recognized, FreeNAS boots up without a problem, and we’re good to go. My wife actually did the cable management in the chassis because I was fed up with dealing with it. I was originally going to start with a fresh install of FreeNAS, but since it booted up with no issues, I decided to just stick with the current install, though I found out pretty quick that I needed to delete all the tunables created by autotune as they didn’t update to the new hardware. My ARC was still limited to 12GB.

The box has been up and running damn good for over a week now, minus a few reboots with me doing stuff.

[Screenshot: Screen Shot 2016-01-15 at 10.24.11 AM]

I built the new volume with the six 6TB drives and started moving some stuff to that new pool.

[Screenshots: Screen Shot 2016-01-15 at 10.28.41 AM, Screen Shot 2016-01-15 at 10.26.56 AM]

So, that’s the hardware build of my new FreeNAS server. Next, we’ll get into the software part of the whole thing. Even though I already have FreeNAS installed and running on this machine, I’ll run through the install procedure using another box and we’ll get into the meat and potatoes of getting FreeNAS, Plex, and all the Plex Automation setup.

FreeNAS, Plex, and Plex Automation – Part 2 – The Hardware

WARNING: The hardware specs you are about to read are NOT needed and are complete overkill for a normal FreeNAS build. It is simply me living by the adage of “anything worth doing is worth overdoing.” You can find the FreeNAS hardware recommendations in this thread on the FreeNAS forums. I suggest you spend some time doing your own research into what will be best for you and your situation. I’ve also gotten a lot of heat from folks on the forum for some of my choices. I’ll admit that some choices aren’t ideal, but I’m also trying to reuse the hardware I already own as much as possible to lower the cost.

This will be the 3rd (actually the 4th) hardware iteration of my FreeNAS server and it’s taken me quite some time to decide on what I wanted to be in this build. When I first decided to build a NAS for my home, I wanted to use some of the hardware still laying around from my bitcoin/litecoin/altcoin mining days. I had sold off many of the GPUs, but still had a few different CPU/Motherboard combos that were collecting dust. This video gives you a very small idea of what things were like back then. After taking a quick inventory of what was available, I decided to go with this:

  • Intel G3220 CPU
  • ASUS Z87-A Motherboard
  • 8GB DDR3-1600
  • Thermaltake Commander G42 mid-tower case
  • 6x WD Red 3TB HDD in RAIDZ2 (~12TB useable storage space)
  • EVGA SuperNOVA 1000G2 80+ Gold PSU

The only things I needed to buy were four of the hard drives, since I already had everything else. Two of the WD Reds were sitting in my ESXi host, unused. After I put everything together and got it running, I realized I needed more RAM due to ZFS's use of RAM for ARC, so 32 gigs went in. I then realized that the G3220 wasn't powerful enough to handle multiple Plex streams, so I wanted to upgrade it. When I was swapping it for a Core i7-4790K, I bent some pins on the motherboard, so while waiting for a new motherboard to arrive, I put in an AMD FX-4130 CPU and a Gigabyte GA-990FXA-UD3 mobo in order to keep things running. That was technically iteration #2, but it was only that way for about 10 days. At the same time I ordered the new motherboard, I also ordered a rackmount chassis for it.

Iteration #3 is what is running currently. Here’s those specs:

  • Intel Core i7-4790K CPU
  • ASRock Z97 Extreme6 Motherboard (bought because it had 10 SATA ports)
  • 32GB DDR3-1600
  • Rosewill RSV-L4412 Rackmount Chassis
  • 2x A-Data Premier Pro SP900 64GB 2.5″ SSD
  • 6x WD Red 3TB HDD in RAIDZ2 (~12TB useable storage space)
  • EVGA SuperNOVA 1000G2 80+ Gold PSU

I moved to SSDs for the boot device because I was having issues with the USB drives constantly getting errors. I had two 64GB SSDs that were purchased for a previous project and ended up not being used, so I threw those in there and I haven’t had any errors on my boot devices since. You DO NOT need SSDs for your boot devices. A couple high quality USB drives will be fine. Even though I have those drives in the computer and mirrored, I can’t use that space for anything other than the FreeNAS operating system, so it’s wasted. As you can see, I’m currently only using 1GB of space.

[Screenshot: Screen Shot 2016-01-06 at 11.23.11 AM]

I have multiple reasons for creating iteration #4 and for picking the parts I ultimately chose.

  • I want to be able to consolidate my ESXi host and my FreeNAS server into one unit
    • The ESXi host is also running an i7-4790K maxed out with 32GB of RAM.
    • Haswell can’t handle more than 32GB of RAM and I need more than that to run the VMs currently installed.
    • FreeNAS can act as a VirtualBox host. I don’t know how well it works, but we’ll soon find out.
  • I want to be able to handle anything I can throw at Plex.
  • I want to be able to use this server for a long time.
    • I should build something that has enough horsepower that I don’t need to build another one in 3 months. That stuff is starting to get old pretty quick.
    • Making it last means making sure I can add more CPU and RAM in the future. The i7-4790K’s are the most powerful processors I can use with the Z87/Z97 chipset and 32GB is the most Haswell can handle. I can’t upgrade further without changing out motherboards and RAM.
  • I don’t want to have to worry about running out of storage space anytime soon.

With all these things in mind, I spent some time looking into what would not only fit my need, but also be able to use as much of the gear I already have as possible. I knew I was going to have to buy something that can handle ECC RAM and I wanted dual CPUs for the 4K transcoding. So, without further ado, here’s the hardware that will be going into Greg’s FreeNAS v 4.0:

  • Dual Intel Xeon E5-2660 CPUs (used hardware)
  • Dual Supermicro 4U Active CPU Heatsink Cooling for X9 UP/DP Systems SNK-P0050AP4
  • SuperMicro X9DR3-F Motherboard (open box)
  • 128GB (2x 64GB kits of 4x16GB) Kingston KVR16R11D4K4/64 DDR3-1600 Registered ECC RAM
  • 2x A-Data Premier Pro SP900 64GB 2.5″ SSD*
  • 6x WD Red 3TB HDD*
  • 6x WD Red 6TB HDD**
  • LSI SAS9211-8i HBA**
  • 24 port expansion card for the 9211-8i (don't eBay while drinking, kids)**
  • Rosewill RSV-L4412 4u chassis*
  • EVGA SuperNOVA 1000G2 PSU

*Reused Hardware
** Purchased before I decided on the CPU/Mobo upgrade

Yes, I’m going with dual Xeons and 128GB of RAM. Complete overkill and I love it. The only “new” hardware is the CPU, motherboard, RAM, and coolers. Everything else was already purchased with the idea to upgrade the old server. The parts have already started arriving and should finish getting here next week, which means I’ll probably build it out on a test bench on the weekend of January 16th. The plan is to build the new box with the six 6TB and a spare PSU on the test bench, do some testing and burn-in, then move everything into the rackmount chassis. I’ll use my current FreeNAS config on the new server, add a new zpool with the 6TB drives and go from there.

The guys on the FreeNAS forums are giving me a hard time about two things, the chassis and the power supply. They really think I should have redundant power supplies in the server, and while I’ll probably look into it, I doubt I’ll do it. First off, I have redundant power going to that PSU in the form of dual APC SMX1500RM2UNC UPS systems and a Tripp-Lite PDUMH15ATNET Auto Transfer Switch. Secondly, a redundant 600w power supply isn’t cheap. Even if you have dual PSUs, you only have 1 motherboard and 1 set of wires from the PSU chassis, not to mention the backplane of that chassis. You still have multiple single points of failure. As far as the chassis is concerned, they think it’s garbage. Here’s a couple quotes from the forums:

“I do have to say that building out that much of a server and not going with a better case and redundant power seems like dropping the ball.”

“Considering the amount of money you’re sinking into this, why not just return or resell the Rosewill clunker and find a nice Supermicro 846 or 847 chassis on eBay? It would be a shame to build a Ferrari powertrain and drop it in a Pinto chassis.”

Well, they’re not paying for all this hardware, and adding redundant power and one of those Supermicro chassis would add another $1,000 to the cost. If I need to add more hard drives in the future, there are ways of doing that. I could use an HP SAS expander and put another chassis with nothing but hard drives in it, or I can get one of those Supermicro cases at that point and transfer all this hardware into it. I just don’t foresee needing more than 12 bays. Also, I’m starting to think that Supermicro must secretly pay the people on the FreeNAS forums. Those guys absolutely LOVE Supermicro hardware. It’s the only thing they ever want to talk about. The reason I picked the Supermicro X9 motherboard was because I realized it wouldn’t be hard to get support for it from the forums. That’s something you might want to keep in mind too. If something doesn’t work in your build, you’ll wish that you had picked hardware common on the forums, otherwise you’ll spend a ton of time trying to figure out the problem.

Well, that’s where we stand as of today. I’m thinking about documenting the build on YouTube as well as here, so keep your eyes peeled for links, should I decide to do that. In the meantime, head over to the FreeNAS forums and start reading so you can be informed enough to pick out your own hardware. The decisions you make on hardware will be the most important decisions you make with the whole thing. It can be the difference between a relatively quick and painless setup or an absolute nightmare. Whatever you decide, make damn sure it’ll support ECC RAM!!!

FreeNAS, Plex, and Plex Automation - Part 1 - Getting ready

With the start of a new year, I’ve decided to start a series on setting up a FreeNAS server at home along with setting up Plex as a media server and various other applications to help automate Plex. By the time I’m done with this series, you’ll be able to setup all the following:

  • Install FreeNAS on a bare metal server
  • Install the following programs to manage your media
    • Plex Media Server
    • Transmission - A Bittorrent Client
    • Sonarr - Automatically downloads TV Shows
    • Couchpotato - Automatically downloads Movies
    • PlexPy - Provides in depth monitoring and reporting for Plex
    • PlexEmail - Send newsletters to your Plex users
  • Setup your own domain using dynamic dns from Dyn.org
  • Install and configure nginx to work as a reverse proxy and act as a traffic cop for incoming requests

I’ve been working on this setup for a few months now, and I’ve done quite a bit of customization to all these different items to make them work. As this series goes on, I’m going to try to recreate what I’ve done in a VM so I can have screenshots to show you exactly what you should be doing. In the meantime, I’d suggest you start thinking about what hardware you’d like to use for your build.

Home NAS Refresh

I think that, in this day and age, everyone should have a NAS at their house. For those of you that don't know what I'm talking about, NAS stands for 'Network Attached Storage'. A NAS is handy for storing all sorts of things, primarily backups of your computers and your media. In my case, I have a lot of movies and TV shows for my various media players. I also have a ton of photos and videos from over the years, as well as from my drones. Having a large NAS means that I don't have to delete anything. My NAS also acts as a server for various other things that I'll get into in another post.

For your NAS to be effective, it needs to have lots of space and have enough room to expand. You also need to have an effective operating system running the NAS. For this build, I’m going to use FreeNAS. I had been planning to build this thing for a while, but didn’t get around to finally getting everything setup and running until July 31, 2015. Since then it’s been running pretty stable, but I used an Intel G3220 and 8GB of RAM when I first put it together and I’ve outgrown that processor and RAM, so it’s time for an upgrade. Here’s the hardware list of everything that’s going into the machine:

  • Intel Core i7-4790K CPU
  • ASRock Z97 EXTREME6 ATX LGA1150 Motherboard
  • G.Skill Ripjaws X Series 32GB (4 x 8GB) DDR3-1600 Memory
  • 6x WD Red 3TB 3.5″ 5400RPM HDD
  • Rosewill RSV-L4412 4U Rackmount Server Chassis, 12 SATA / SAS Hot-swap Drives
  • EVGA SuperNOVA 1000G2 1000W 80+ Gold Certified Fully-Modular ATX Power Supply

The only things carrying over from the previous build are the 6 WD Red 3TB hard drives and the actual FreeNAS install. I was going to just upgrade the CPU and the RAM, but some pins got bent on the ASUS Z87-A motherboard I had, so it needed to get upgraded too. I also figured that while I was at it, I'd put it all in a nice rackmount chassis.

The build went rather smoothly. I pulled the hardware out of the old mid-tower case and moved it into the rackmount chassis. I had originally planned on using some M.2 SSDs for boot drives, but ran into some issues. First, the drives I bought weren't compatible with the Ultra M.2 slot on the motherboard. Second, the other M.2 slot ate two of my SATA ports on the motherboard. Because I didn't bother to read the manual, it took me quite a while to figure out why those two drives weren't being seen by the BIOS. Ultimately, I got everything put together and all 6 drives were being recognized. FreeNAS booted right up without any issues. I'll probably pick up an Ultra M.2 SSD in the future to use as L2ARC since it's so freaking FAST.

More info will be posted soon on how I’m going to automate my media collection and sharing.

An addendum to the addendum of “The Rules of Professional Speeding”

Yesterday on The Drive, Alex Roy published an article entitled "The Rules of Professional Speeding". Shortly thereafter, Ed Bolian published his own list, building upon what Alex already wrote. Having known both of these gentlemen for a number of years now, as well as being a charter member of the "Fraternity of Lunatics™", I felt it my civic duty to build upon both already excellent lists.

Before reading any further, I suggest you take the time to go read both articles. When you come back, you’ll have a much better idea of what I’m talking about.


The Backstory:

On the morning of October 29, 2013 I received a phone call from the aforementioned Mr. Roy. He had himself received a call earlier that morning from Matt Hardigree, editor-in-chief at Jalopnik.com. Matt was doing due diligence for an article that Doug DeMuro was writing about a guy who claimed to have driven from New York to LA in something like 28 hours. It was funny, because I had just finished the same drive in 31 hours 17 minutes on October 13th. Alex thought that this Bolian guy had supposedly run the same weekend as me, and he wanted to put me in touch with Matt so Matt could ask some questions. When Matt called, he told me something along the lines of "this guy from Atlanta claims that he made the drive in 28 hours 50 minutes", to which I immediately responded "bullshit." I had just texted Alex a message of "31:17. Long Live the King" a few days prior because I didn't think his record could be beat, and now this used car salesman from Georgia was claiming to have not only beaten it, but destroyed it by over 2 hours? I called bullshit loudly and proudly.

I spoke with Ed on the phone that afternoon. We spoke for about an hour and I started to believe his story. It wasn’t until 1 year later, when I went to Atlanta to meet with Ed, Dave, Dan, and the rest of the team, that I was fully convinced.

The Present:

Since that day towards the end of October 2013, when I found out that there were other people out there in this world who share my penchant for disobeying traffic laws, the number of people in our little Fraternity has grown. Not a month goes by that I don't meet someone new via social media who tells me about their dreams of beating Ed's record. Some have dreamed it since it was Alex's record… some even before that. Just like in any group of people that share something in common, there are different levels of seriousness amongst the members, from the guys that love the idea of the whole thing and are only casual in their speeding, all the way up to folks that have spent thousands on countermeasures and countless hours of study on how to not get caught.

Most people think that driving 20 over PSL (that’s ‘posted speed limit’ for the uninitiated) is “real” speeding. After all, many jurisdictions tier their speeding tickets in such a way that 20 over is a pretty serious fine and a mandatory court appearance. In Virginia, if you get popped doing 20 over PSL, or simply 80+ MPH ANYWHERE, you don’t just get a speeding ticket, you can be charged with the crime of Reckless Driving, which is the same level offense as DUI. Surely speeding at a rate where it goes from being a traffic violation to an actual misdemeanor is “serious”, right? Let’s put it this way… if you drove 80 MPH the entire way from New York to LA, without ever stopping for gas or bathroom breaks, you would make it there in about 35 hours, or over 6 hours slower than Bolian and Black. If you drove 75 MPH on I-285 outside of Atlanta, where there’s a 55 MPH speed limit, you’d actually be passed like you were sitting still by people on their morning commute.

The Addendum:

As Alex mentioned in his article, it makes no sense to speed at less than 100 MPH. You gain so little time at 10 or 15 over that it’s not really worth it. Both Alex and Ed make some very valid points in their articles, and I will simply build upon what they have already said.

Pay attention!!!: This is the A#1, most important thing you need to do when speeding at the levels we’re talking about. 90% of the time, you won’t be saved from a ticket by your radar detector or your laser jammers. You’ll be saved by your eyes. You’ll notice the brake lights on the vehicles in front of you. The traffic pattern will change. Waze is good and all, but it’s not flawless. This is why Alex tells you to pull the radio out of your car and disable text notifications. If you are too busy singing “Hello” by Adele or checking to see what your girlfriend just texted you, you can’t pay attention to the road. When traveling at 100 MPH, you cover a football field every 2 seconds.

Practice makes perfect: Malcolm Gladwell tells us in his book “Outliers” that it takes 10,000 hours of practice to become an expert at something. Even people who are experts in their field still need to practice their craft. Lewis Hamilton doesn’t show up on a race weekend, go out on track, and set a time that would put him on pole on his first lap. There are three practice sessions in every F1 race weekend so the drivers can re-learn the track and how the car handles on it. Don’t expect to get in the car and be fast, because you won’t. It takes years of practice to do what we do.

Don’t underestimate: Making a 1,000-mile trip with an average speed of just 85 MPH is exceedingly difficult. Just because you have a car that can do 205, that doesn’t mean you’ll be able to do 205. There’s a lot of traffic out there and a lot of people who don’t like to abide by the “slower traffic keep right” laws. Making a 1,000-mile drive at 85 while going solo is exponentially harder.

When I drive home to Louisiana, it’s right at 1,000 miles, especially if I’m going to Houma. My best time from near the New Orleans Airport in Kenner to my house in Powell, OH is 10 hours 25 minutes. That was probably the hardest 10 and a half hours I’ve ever driven because I did it solo. Look at the bar graph in this image and see how much time I spent over 100 MPH. It looks like the vast majority of the run was well over 100, but my average was only 90. This is what I mean by “don’t underestimate”. For you to keep a 90 MPH average over 1,000 miles, you can’t drive 90. You have to drive 110+ to make up for all the time you’re going to be stopped filling up or slowed down behind traffic.
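If you want to see why the cruising speed has to sit so far above the target average, here’s a rough time budget. The stop and traffic numbers below are purely illustrative assumptions on my part, not figures from any actual run:

$$ t_{\text{budget}} = \frac{1{,}000\ \text{miles}}{90\ \text{mph}} \approx 11.1\ \text{hours} $$

$$ v_{\text{cruise}} \approx \frac{1{,}000 - (2\ \text{h} \times 65\ \text{mph})}{11.1 - 0.5\ \text{(fuel stops)} - 2\ \text{(traffic)}} = \frac{870\ \text{miles}}{8.6\ \text{hours}} \approx 101\ \text{mph} $$

Even with only half an hour of fuel stops and two hours stuck behind traffic at 65, you already need to cruise over 100 just to hold a 90 average; add a bathroom break or a construction zone and the required cruising speed climbs into the 110+ range in a hurry.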

Have an escape route: This is probably the #1 reason why I was so much slower than Bolian and Black. I NEVER make a pass unless I have a way out should something bad happen. Always assume that the person driving the car you are about to pass is a teenager too busy texting to pay attention to what’s going on around him. Eventually, you will make a “bad” pass of someone, and when you do, something like this is bound to happen. Before you commit to the pass, make sure you have enough room to avoid an accident without endangering someone else, or that you have enough brakes in the car to bring it down from speed safely. Want to know why cars from BMW and Mercedes dominate these records? They have great brakes.

Have more than you need: Have more of EVERYTHING than you need. More information, more fuel, more catheters, more everything. You don’t want to find yourself in the middle of a drive and not have something you need.

Cleanliness is next to Godliness: Face it, when you’re moving at triple-digit speeds, you are committing bug genocide with the front of your vehicle, and the largest surface area on the front of your car is the windshield. If you can’t see out the windshield properly, you can’t drive properly. Every time the car stops for fuel, the windshield gets cleaned. No excuses. Bring your own tools to do this, because most gas stations don’t bother keeping a decent squeegee at the pumps.

Stealthiness is greater than Godliness: The whole point of being a “professional speeder” is that we don’t get caught. To maximize your ability to not get caught, it’s best to not be seen, and definitely not remembered. If you pass someone at a speed where, had someone passed you at that speed, you’d consider calling the cops on that “maniac”, you might want to rethink that pass. The biggest fear of the professional speeder isn’t the cop and his lidar gun hiding just on the other side of the hill; we’re not stupid enough to crest a hill at full speed. It’s the soccer mom calling Johnny Law to tell him that a black BMW with antennas on the back just “ran her off the road” and is “driving like a maniac”. The kids in the back of her minivan are terrified now because of this psycho on the roadways. They won’t roll one unit to find you, they’ll roll 5, and heaven forbid they actually clock you doing 115 in a 70 after they got that phone call. When that happens, you do not pass go, you do not collect $200.

Amazing things happen at 125: Your muscles tense, colors become more vivid, background noise deadens, you feel every crack and bump in the road; you become hyper-focused. We all have speeds at which we’re “comfortable” driving. Speed limits are supposed to be set at the 85th percentile speed, the speed at or below which 85 percent of drivers travel, on the theory that most drivers are reasonable and prudent, don’t want to have a crash, and want to reach their destination in the shortest possible time. When many of the maximum speed limits in this country were set, cars were horrible compared to today. If you were guaranteed to have an accident doing 100, would you rather be in a 1981 Corvette or a 2015 model? Knowing that you’re in a safe car can actually make you a worse driver. You don’t focus on what you’re doing because you’ve got GPS telling you when to turn, lane departure warnings telling you that you’re drifting, blind spot warnings, brake assist, and even autopilot. You know in your heart of hearts that if you were to wreck at 70 MPH in your modern car, you’d likely be shaken up a bit, but you’d probably escape with minor injuries. That changes as you go faster.

Remember the days before GPS, when you’d actually have to look for a house number to know where you were going? What’s the first thing you did when you turned into the neighborhood? You turned down the radio. Then you leaned forward towards the steering wheel to get a better look. No one taught you this, it’s instinct. You want as few distractions as possible so you can focus on the task at hand. Well, as the speed climbs, you subconsciously know that the level of danger rises. You’ll turn down the radio. You’ll stop paying attention to everything else in your life. You won’t think about the argument you had with your girlfriend that morning or the important meeting with the big boss next week. Everything else disappears and the only thing in life for that moment is the drive. It really is cathartic. It’s also very addictive.

Rest… a lot: One thing you’ll underestimate is how draining driving at high speeds can be, especially when driving solo. Your brain has to take in all the information from the car, the road, Waze, the countermeasures, the trip computer, and everything else. It has to process information at a much higher rate than normal. You can liken it to having a very mild seizure, but for a very extended period of time. Your neurons are firing at an abnormal and excessive rate, and that is physically and mentally draining. Whereas you might be fine to drive 16 hours straight at the normal speed limit, driving 16 hours at 150% of the speed limit is going to have a major effect on your performance. You will instinctively slow down. Your reaction times will get longer. Your focus will diminish. It’ll have the same effect on your driving as a couple of beers. If you think you’re going to wake up at 8am, prep the car, get some stuff done, then get on the road at 2pm for a 12-hour drive, you’re going to have a bad time. Have everything ready to go the night before you plan to leave for a long drive. You should wake up and be on the road within an hour or two to maximize your wakefulness on the roads ahead.

Don’t be cheap: Being a professional speeder is not cheap. If you want to drive at triple-digit speeds and be “safe” doing it, be prepared to open your wallet. The cost to fully prepare a vehicle and make an attempt at a transcontinental record currently stands at roughly $25,000, and that’s not including the cost of the vehicle itself. Here’s a spreadsheet I put together to track the costs involved when preparing for a run. You’ll see there’s over $2,000 just in the AL Priority and radar detectors. Wheels and tires are another $2,400. Fuel cell design and install is over $3,000. When Ed Bolian brought the record-holding CL55 AMG to Mercedes to have them do the maintenance, the bill for that was over $12,000. There’s no telling how much Alex spent on his runs…

If you try to save money, you’re going to increase the likelihood of both an accident and failure. The minute you decide “oh, I’ll just put the laser jammers on the front of the car and leave the back off”, a cop is going to hit you from the rear. If you think that you’ll save money by getting H-rated tires rather than W/Y/Z-rated ones, you’re increasing the chance that a tire will blow out at speed. Skip out on replacing all the fuel filters on your car and you’ll find yourself on the side of the Will Rogers Turnpike with a stalled car and a bill for towing and shipping that’s going to be much more than if you had just replaced them to begin with. Buy only the best, because when your life and the lives of others are on the line, second best just doesn’t cut it. Keep in mind that quite often, the best money you’ll ever spend will be on the thing you’ll hopefully never use.

Prepare to make frenemies: If there’s one thing about the community of really fast drivers, it’s that there’s more than enough ego to go around. After all, it takes a certain level of narcissism to do this sort of thing. People are going to talk shit about you. They’ll call you a liar. They’ll question your sanity. They’ll want insane levels of proof of your deeds. And they’ll never do any of this to your face. You will inevitably make some close friends if you choose this path. The number of people who speed at this level is low, so when you find someone you have this in common with, there will be an instant bond.

Prepare to be hated: If you should ever make a record-setting drive and the story makes its way to the press, you will be hated. The vast majority of the public has the mindset that speed equates to danger, so people who drive fast are a menace to society. After Alex and I perpetrated the 26:28 April Fools’ Day Hoax, I read dozens of comments comparing us to Hitler, the Columbine shooters, al-Qaeda, and any other horrible thing you can think of. People were calling for us to be jailed with rapists, and in one case, for us to be crucified. All that for simply driving fast. People will talk about the busload of nuns on their way to the orphanage that you could have killed.

Be safe: If “Pay attention!!!” is the A#1 rule, then this is rule #0. Everything that Alex, Ed, and I have said all comes down to this one thing. You need to understand that what you are doing is inherently unsafe and you need to do everything in your power to mitigate risk. I’m a firm believer that if we made speed limits high enough to be outside of the majority of people’s comfort zone, we’d have much safer roads. When you are driving a vehicle at a rate of speed where you fear dying, you’re going to be a much better driver. You won’t be texting or fiddling with the radio because you’ll be too busy trying to not die. That’s why I feel like I’m a safer driver at 115 than I am at 70. When I’m tooling along with the flow of traffic, I’m complacent. I trust everyone around me, I trust the road, I trust the car, and I trust that I’m not going to die in a giant ball of flame. But, when I’m moving at an excessive rate of speed, I trust no one and nothing other than my own abilities and vehicle preparation to make sure I’m not delivered home in a ziplock bag.

The Conclusion:

I am not here to tell you that you should go out and break the law by driving exceedingly fast. I don’t think Alex or Ed were telling you to do that either. What we are saying is that no matter what the speed limit is, there will be people out there who will want to go faster, and if you’re one of those people then there are ways to go about it that will lower the risks involved. Alex and Ed did an excellent job of covering nearly all the “rules”, so I just wanted to touch on a few things I thought they missed or didn’t give enough attention to. But then again, why should you listen to me? I’ve never set any records you’ve ever heard of…

The Death of the Eisenhower Republican

(This post was from my old blog and written in 2011. I’ve decided to repost it here today for others to read since the old blog is no longer active.)

There was a time, barely remembered today, when the idea of bipartisanship really seemed reasonable. There was once a kind of Republican, now driven to the verge of extinction, called the “Eisenhower Republican.” Today, the equivalent beast would be called a “Moderate Democrat.” The Republican Party has largely purged itself of Eisenhower Republicans like me in its radical shift to the right.

I have always been a Republican. But even the earliest President I remember, Ronald Reagan, though a crazy old actor with a penchant for placating the religious, wasn’t as bad as some of the Republicans of today. It was Nixon, though, probably unintentionally, who began the decline of the Eisenhower Republican. Some of those he brought into government are the very same “barking crazy rightwingers” who systematically set about destroying our nation under Bush. That, combined with Nixon’s spectacular and televised downfall, discredited the reasonable, moderate Republican. The Democrats, then more liberal than now, were ready to take advantage of Nixon’s downfall, and the far-right Republicans, then marginalized but poised to strike, were ready to begin their plans to take over the nation through lying, stealing and cheating.

One man had a small chance of saving the Eisenhower Republican: President Gerald Ford.

Gerald Ford had been a well-respected Congressman, someone who could work with both parties to get things done. As criminal charges consumed Nixon and his administration, Gerald Ford was the last chance Republicans had of restoring respectability. A centrist, a traditionalist, and an all-around nice guy, Ford might have been the only person who could have saved the Republican Party from being taken over by extremists or lapsing into obscurity.

Pardoning Nixon and the stagflation Ford inherited from Nixon pretty much made it impossible for Ford to succeed. In the end, a moderate Democrat (Jimmy Carter) defeated Ford for President, and the right-wing fringe of the Republican Party swept in to destroy the Eisenhower Republicans and take over. Those right-wing nutcases have not only gone to great lengths to destroy our Constitution and to run up the biggest budget deficits in history, but have also by now alienated moderate Republicans. The death of the Eisenhower branch of the Republican Party was one reason why Democrats won the last presidential election.

Just because the Republican Party is now nearly completely dominated by anti-democracy, right wing fools, and the Democrats are winning by appealing to American moderates, don’t think that the Democrats are doing fine. As you can tell from the last mid-term elections, Obama has done a good job of alienating many of those moderates because of his extremely left policies.

America has always been and should remain a two-party system. Why? Because we, as a culture, divide pretty solidly into Federalist and States’ Rights camps… strict interpretation vs. loose interpretation of the Constitution… These are very real ambiguities within our system, left ambiguous by those who formed our government, and it is the give and take between these two views of government that has made our nation strong. The big danger now is that one party, the Republicans, has been taken over by a group that believes in neither of these philosophies of government except as a way of fooling voters. Instead, the barking crazy rightwingers have, in essence, thrown the whole Constitutional dichotomy out the window and have tried instituting a one-party, Soviet-style system of crony capitalism, corruption and war profiteering.

I have always been a Republican and almost certainly will remain a Republican for life. Why? Because I like the fact that the Republican Party represents America’s diversity in almost every way and, by and large, is more representative of the average American than the more leftist, pro-socialist Democrat Party. I’m not talking about Sarah Palin’s America either.

I want a healthy, moderate Republican Party, the Eisenhower Republicans, to balance the two-party American system. That is why Ford’s failure to hold the line against the right wing extremists within the Republican Party is a shame and why I was saddened by Ford’s death the day after Christmas in 2006.

Since Ford’s presidency, the entire trajectory of the Republican Party has been towards more and more extremism, more and more lies, more and more greed, and more and more corruption. Almost every traditional, Eisenhower Republican ideal has been thrown out by the barking crazy right-wingers, as the three largest deficits in our history came from Reagan, the elected Bush and the little Bush, and as the idea of “small government” has been thrown out the window in a greedy rush to publicly fund the corrupt military-industrial-religious extremist complex.

I can only hope that the Republican Party can rediscover its Gerald Ford/Dwight Eisenhower side and reject the extremists who currently control our Party.