Pet Issues

Everyone has their own pet political issues that they’re particularly passionate about. My political interests, like my hobbies, are many and varied, but two particularly stand out as critical in my mind:

  • Gun rights.
  • Strong cryptography.

Indeed, crypto rights are something I’ve been passionate about since before I got involved with guns. These are the two issues on which I will never agree to compromise, since I believe both to be fundamental to liberty.

Both topics make great litmus tests to determine how a government regards its citizenry: a government that respects its citizens and treats them as reasonable, honest adults will trust them to be responsible with potentially-dangerous items like firearms and with private (and potentially-dangerous) communications and thoughts that it cannot monitor.

A government that doesn’t, won’t.

Without privacy and the ability to defend oneself from threats, how can any individual or civilization survive?

What about you? What issues do you think are critical? Why?


Whoops.

As mentioned earlier, I use Varnish to cache static pages on this site to improve performance. I use a WordPress plugin that detects when parts of the site are added or modified (e.g. a new post is published, or an existing one is edited) and purges the cache for that particular page, so the cached copy will be refreshed with the new content.
Unfortunately, it was not purging the RSS/Atom feed, so subscribers weren’t getting any updates for several days. That’s very odd.
Until things get resolved with the plugin, I’m manually purging the cache for the feed so subscribers will get more timely updates.
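(For the technically curious: purging a page from Varnish typically amounts to sending it an HTTP PURGE request, which only works if the Varnish configuration has been set up to accept such requests. Here’s a minimal sketch of a manual purge; the hostname and feed path are placeholders, not this site’s actual configuration.)

```python
# Minimal sketch of a manual Varnish purge. Assumes the VCL has been
# configured to honor PURGE requests from this machine; the hostname
# and feed path below are placeholders.
import http.client

VARNISH_HOST = "127.0.0.1"     # assumed: Varnish listening locally
VARNISH_PORT = 80
SITE_HOST = "www.example.com"  # hypothetical site hostname
FEED_PATH = "/feed/"           # the standard WordPress RSS/Atom feed path

conn = http.client.HTTPConnection(VARNISH_HOST, VARNISH_PORT)
# PURGE is a custom HTTP method; Varnish only honors it if the VCL says so.
conn.request("PURGE", FEED_PATH, headers={"Host": SITE_HOST})
response = conn.getresponse()
print(response.status, response.reason)  # 200 typically means the object was purged
conn.close()
```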
Sorry for the trouble.
Update: What luck! A new version of the cache-purging plugin was just released today and fixes the problem. This pleases me.

On Caching

A flurry of new visitors from Tam’s (hi everyone, welcome!) got me thinking a bit about the performance of this site. Combined with not much going on with gun-related topics here in Switzerland, I figured I’d write a bit about tech.
This site runs WordPress on an S-sized Simple Hosting instance at Gandi’s Baltimore facility. As a sort of hybrid of shared hosting and a VPS, it has a surprising amount of “oomph”: it has dedicated Apache, PHP, and MySQL processes, uses APC to cache PHP opcodes, and sits behind a series of load-balanced Varnish cache servers which cache static content. It’d make sense to take advantage of those resources to ensure things are speedy.
Out of the box, WordPress dynamically generates each page entirely from scratch: this involves about a hundred database calls and a bunch of PHP work. Considering that content here changes relatively infrequently (yeah, I should write more), it doesn’t make sense to generate each page anew for every visitor, since that takes a fair bit of resources.
That’s why I use Lite Cache to cache static content: the first time a visitor requests a page, it’s dynamically generated and sent to them as normal, but the now-generated page is saved on the server in the cache. If a second visitor requests that same page, the cached version is sent to them; since this is just a static file that doesn’t need to be generated from scratch, the server can send it right away, so it loads faster for the visitor. In addition, the static file is stored on the Varnish caches, which are specifically designed for high-performance caching. If the content ever changes (e.g. I make a new post, someone leaves a comment, etc.), the cached version is updated to reflect those changes.
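To illustrate the general pattern (this is a toy sketch, not Lite Cache’s actual code; the function names and cache directory are made up):

```python
# Toy sketch of the "generate once, serve from cache" pattern described
# above. Not Lite Cache's actual code; names and paths are illustrative.
import hashlib
import os

CACHE_DIR = "/tmp/page-cache"  # hypothetical cache location

def cache_path(url: str) -> str:
    """Map a URL to a file on disk via a hash of the URL."""
    return os.path.join(CACHE_DIR, hashlib.sha256(url.encode()).hexdigest() + ".html")

def serve_page(url: str, generate_page) -> str:
    """Serve from cache if possible; otherwise generate, store, and serve."""
    path = cache_path(url)
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            return f.read()        # cache hit: no database or PHP work needed
    html = generate_page(url)      # cache miss: do the expensive work once
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        f.write(html)
    return html
```

When content changes, invalidation is just a matter of deleting the relevant cached file so the next request regenerates it.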
Combined with the front-end Varnish caches, this can swiftly serve up content to large numbers of users — I’ve load-tested it with over 1,000x the typical daily traffic and it doesn’t break a sweat.
That’s cool and all, but why bring it up now?
Because I forgot a critical detail: I only tested it with desktop browsers, where it worked great. However, once someone hit the site with a mobile browser (perhaps on their smartphone or tablet), things went wonky: the site correctly detected that the visitor was using a mobile browser and generated a mobile-friendly version, which the cache dutifully stored for other visitors. Unfortunately, the cache wasn’t smart enough to tell that all the other readers were not using mobile devices, and started serving up the mobile version to desktop browsers. Whoops.
I’ll make a note to test for this sort of stuff in the future.
Since mobile users make up a tiny fraction of the already-small number of readers here, I’ve disabled the mobile-friendly theme until I can get things sorted out.
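(When I do get it sorted out, the fix will presumably be to make the device type part of the cache key, which is what a Vary header accomplishes in HTTP terms, so mobile and desktop visitors get separate cached copies. A rough sketch of the idea, with a deliberately crude user-agent check:)

```python
# Sketch of the fix: key the cache on (URL, device class) instead of URL
# alone, so mobile and desktop visitors never see each other's copies.
# The user-agent test here is deliberately crude and illustrative only.
def device_class(user_agent: str) -> str:
    """Classify the visitor as 'mobile' or 'desktop' from the User-Agent."""
    mobile_markers = ("Mobile", "Android", "iPhone", "iPad")
    return "mobile" if any(m in user_agent for m in mobile_markers) else "desktop"

def cache_key(url: str, user_agent: str) -> str:
    """Separate cache entries per device class."""
    return f"{device_class(user_agent)}:{url}"
```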
Since this is nominally a gun blog, I suppose I should try to connect this situation to guns in some way. Here goes: don’t assume everything will always work the way you think it will. Train for a variety of situations. If your training consists only of calmly standing upright in a well-lit range shooting at stationary targets with a full-sized pistol, you’re not well-prepared for a situation when, for example, a bad guy mugs you with a knife outside your office when all you have is a Beretta Jetfire and a cup of coffee. It definitely doesn’t prepare you for things that go thump in the night. Whether you’re adjusting web caches, training at the range, or sending a rocket to the moon, it’s wise to keep in mind that the universe has a perverse sense of humor.

Breaking stuff for fun and profit.

I spent a bit of time over the last day or two making some changes to the back-end around here, enabling SSL/TLS for the admin interface, and so on.
As far as I can tell, things should be good, but if you find that some functionality has broken please let me know.
For now, only the admin interface is accessible over HTTPS; all other content should automatically redirect back to the HTTP version and work normally. In some odd cases, though, browsers seem to ignore the redirects and show formatting issues (cause: the page is loaded over HTTPS but the CSS is loaded over HTTP), and may report redirect loops. I’ve been unable to replicate this with testing tools, and it may just be an issue with my browsers. If you see similar issues, please let me know what page you’re visiting, the date and time of the error, and the page source (Ctrl-U) so I can see what might have caused the issue.
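(If you want to poke at it yourself, a quick-and-dirty diagnostic along these lines will show the symptom; the URL below is a placeholder:)

```python
# Quick diagnostic for the symptom described above: fetch a page over
# HTTPS, follow any redirects, and flag stylesheets pulled over plain
# HTTP (mixed content). The URL is a placeholder.
import re
import urllib.request

url = "https://www.example.com/"  # hypothetical page to test

with urllib.request.urlopen(url) as resp:
    final_url = resp.geturl()  # where the redirect chain ended up
    html = resp.read().decode("utf-8", errors="replace")

print("Final URL after redirects:", final_url)
# Look for <link> tags that load CSS over plain HTTP from an HTTPS page.
for match in re.finditer(r'<link[^>]+href="(http://[^"]+\.css[^"]*)"', html):
    print("Mixed-content stylesheet:", match.group(1))
```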

You’re Not Helping

The new Fox & Friends host, Elisabeth Hasselbeck (formerly the lone conservative on ABC’s The View) suggested during the Tuesday morning show that “the left” was trying to make Monday’s mass shooting at the Washington Navy Yard about “gun control.” Instead she pointed out that the country doesn’t need a national registry for guns, it needs one for to [sic] track video game purchases.

– GamePolitics
As a gun owner and a gamer, I find remarks like this to be firmly in the “you’re not helping” category. Millions of people in the country (and many more all over the world) — including myself — enjoy playing video games, including those with violent content. The vast, overwhelming majority of gamers are ordinary people who go about their lives without harming anyone.
Is there some overlap between violent madmen and those who play video games? Almost certainly, just as there’s some overlap between violent madmen and those who use toothpaste, watch movies, hold particular religious beliefs, listen to certain musical groups, hold a specific political view, etc. However, as far as I’m aware, there’s no conclusive evidence that any of these things have a causal relationship with violent outcomes.
As fellow gun-rights supporters have pointed out, violent crime rates have dropped over the last few decades while the number of privately-owned guns has increased. Over the same time period the sale of video games, including violent ones, has also increased as has their realism and detail.
Blaming video games for violent crime is a bold claim. Is it possible? Perhaps, but if I may quote Carl Sagan, “extraordinary claims require extraordinary evidence.” Such evidence is not forthcoming. Making unsupported claims of this type is silly, counterproductive, and makes gun-rights advocates look absurd by association.

Fear me, for I am root

Google Authenticator Plugin: I’m sorry, but it is not possible for you to import an existing shared secret. You must generate a new one.
Me: Really? That’s annoying.
GAP: Yup. Sucks to be you.
Me: Fine. *generates a new secret* Oh, there’s something I ought to tell you.
GAP: Tell me.
Me: I have root access to the database in which the secret is stored. *edits the appropriate entry in the database, thus restoring the previous shared secret*
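(For the curious: Google Authenticator implements TOTP, RFC 6238, in which one-time codes are computed from a base32-encoded shared secret and the current time. Anyone holding the secret can reproduce the codes, which is exactly why restoring the old database entry works. A minimal sketch using only Python’s standard library; the secret below is a made-up example:)

```python
# Minimal TOTP (RFC 6238) sketch: the scheme Google Authenticator uses.
# Codes are derived from a base32 shared secret plus the current time,
# so anyone holding the secret (say, via root access to the database row
# storing it) can reproduce them. The secret below is a made-up example.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints the current 6-digit code
```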

Technical Independence

The internet has contributed enormously to freedom of expression and global communications. Technical measures like encrypted VPNs have enabled people in restrictive, repressive societies to be heard by the rest of the world and access information otherwise prohibited to them.
This is fantastic, but there is one major drawback: the internet relies upon physical infrastructure. While there’s no getting around the necessity to lay cables or have wireless communications that terminate at various physical points (be they cable landing points, satellites and their ground stations, microwave towers, etc.), the issue of physical presence and legal jurisdiction for key internet infrastructure has been a concern of mine for a while.
Take, for example, the DNS root zone: due to the hierarchical structure of the Domain Name System (DNS), there needs to be a “root” from which all names are delegated. Consider the name of this website, www.arizonarifleman.com: the server is named “www”, which is a subdomain of “arizonarifleman”, which is in turn a subdomain of “com”, which is in turn a subdomain of the root (( The root name is not normally seen in day-to-day lookups, but is represented as a trailing dot. My domain would more properly be written as “www.arizonarifleman.com.” — note the trailing dot after com; this is the root.)).
All top-level domains like “com”, “net”, “org”, “uk”, “au”, and so on are delegated directly from the root. While alternative roots have come and gone over the years, the official root is the de facto standard. To put it bluntly, the root zone is critical to the operation of the entire global internet.
Due to the US’s role in creating the modern internet, the DNS root zone is under the authority of the US Department of Commerce’s National Telecommunications and Information Administration (NTIA), which has delegated technical operations (but not ownership) of the root to IANA, operated by ICANN (a California non-profit that evolved out of the early technical management of the DNS root). The root zone is distributed by hundreds of redundant, load-balanced physical servers representing 13 logical DNS root servers (the limit of 13 is a technical one, stemming from how many server addresses fit in a single DNS response packet). These servers are located all around the world.
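To make the hierarchy concrete, here’s a rough sketch of how a resolver walks a name down from the root, one delegation at a time. It uses the third-party dnspython library and hard-codes a single root server address, both of which are illustrative choices; real resolvers also handle TCP fallback, CNAMEs, missing glue records, caching, and more:

```python
# Walk a DNS lookup down the hierarchy described above, starting at a
# root server and following each delegation. Requires the third-party
# dnspython library. Real resolvers handle many cases (TCP fallback,
# CNAMEs, caching, missing glue) that this toy version ignores.
import dns.message
import dns.query
import dns.rdatatype

ROOT_SERVER = "198.41.0.4"  # a.root-servers.net

def resolve(name: str) -> None:
    server = ROOT_SERVER
    while True:
        query = dns.message.make_query(name, dns.rdatatype.A)
        reply = dns.query.udp(query, server, timeout=5)
        if reply.answer:                   # reached an authoritative answer
            for rrset in reply.answer:
                print("Answer:", rrset)
            return
        next_server = None
        for rrset in reply.additional:     # glue records for the next zone down
            if rrset.rdtype == dns.rdatatype.A:
                next_server = rrset[0].address
                print(f"Delegated to {rrset.name} ({next_server})")
                break
        if next_server is None:
            print("No glue records in the reply; stopping.")
            return
        server = next_server

resolve("www.arizonarifleman.com.")  # note the trailing dot: the root
```

Run against this site’s name, it would show a delegation from the root to the “com” servers, then to the domain’s own nameservers, and finally the answer.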
The DoC and NTIA have been remarkably hands-off when it comes to the actual management of the root zone and have worked to build a “firewall” between the administrative/political and technical sides of managing the DNS root.
Even so, many people (including myself) have concerns about a single country having administrative authority over such a key part of global infrastructure. The US government has recently been seizing domain names of sites accused of copyright infringement, as they claim jurisdiction over generic top-level domains like “com”, “net”, and “org” regardless of where the domains are registered or where the registrant is physically located. What would prevent the US government from turning off country-level domains like “uk”, “fr”, or “se” ((The Pirate Bay is a big target for authorities, and operates in Sweden under the “se” top level domain. )) in the root? What about “ir” (Iran) or other countries that the US has various issues with?
Obviously if this happened there would be massive international outcry and a fracturing of the unified DNS system currently in place — this would likely be catastrophic to the internet.
What, then, could be done? Perhaps the authority for the root could be moved to another country? Sweden and Switzerland are both well-known for their political neutrality and freedoms, but again one runs into the problem of the authority being subject to the laws of a single nation.
Perhaps the UN? That’s been proposed as well, but there are definitely some drawbacks: many UN members are not exactly well-known for their support of free speech and would be more likely to manipulate the DNS for their own purposes. The US, even with its myriad legal issues as of late, has some of the strongest free speech protections in the world and a history of non-interference with the root zone.
Personally, I wonder if it’d be possible to raise the technical management and authority of the root zone above that of any particular country — a technical “declaration of independence”, if you will. If the root zone could be abstracted from any particular physical or political jurisdiction, I think that would be a great benefit to the world.
Of course, that would involve a change in the status quo and is unlikely to succeed. The US government has made it quite clear that it has no intention of relinquishing authority over the root zone, and any organization (such as ICANN) that operates the root must be physically located somewhere and thus fall under the jurisdiction of some government.
Nevertheless, it’s interesting to consider.
Update (about an hour later): The US government just seized a .com domain name registered through a Canadian registrar, owned by a Canadian, operating a legal-in-Canada online gambling site, because it violated US and Maryland state laws. (They seized it by issuing a court order to Verisign, the operator of the “com” registry.) This serves to highlight my concerns above.

CloudFlare Followup

A few days ago I posted about how I was going to be testing CloudFlare on this site.
Here’s a snippet of the stats generated since then:

[Screenshot: CloudFlare statistics chart]
By caching static content (images, CSS files, JavaScript, etc.) at various datacenters around the world, the service has substantially sped up the response of my site (by 50% to 67%, depending on the day), as well as saving a not-insubstantial amount of bandwidth (which is nice, as I pay for bandwidth used).
About 10% of visits were known threats: usually comment spammers, but occasionally automated exploit attempts and botnet zombies. These are blocked from getting to the site.
I’ve received no complaints from legitimate users, either by email or through the CloudFlare messaging system (it shows up for blocked visitors), which is an extra plus.
So far, things look quite promising. It may be more effective for more traffic-heavy sites than my own, but even for a small site like this one it’s saved a bunch of resources.