You’re Not Helping

Quote

The new Fox & Friends host, Elisabeth Hasselbeck (formerly the lone conservative on ABC’s The View) suggested during the Tuesday morning show that “the left” was trying to make Monday’s mass shooting at the Washington Navy Yard about “gun control.” Instead she pointed out that the country doesn’t need a national registry for guns, it needs one for to [sic] track video game purchases.

GamePolitics

As a gun owner and a gamer, I find remarks like this to be firmly in the “you’re not helping” category. Millions of people in this country (and many more all over the world), myself included, enjoy playing video games, among them games with violent content. The vast, overwhelming majority of gamers are ordinary people who go about their lives without harming anyone.

Is there some overlap between violent madmen and those who play video games? Almost certainly, just as there’s some overlap between violent madmen and those who use toothpaste, watch movies, hold particular religious beliefs, listen to certain musical groups, hold a specific political view, etc. However, as far as I’m aware, there’s no conclusive evidence that any of these things have a causal relationship with violent outcomes.

As fellow gun-rights supporters have pointed out, violent crime rates have dropped over the last few decades while the number of privately owned guns has increased. Over the same period, sales of video games, including violent ones, have also increased, as have their realism and detail.

Claiming that video games cause violent crime is a bold assertion. Is it possible? Perhaps, but if I may quote Carl Sagan, “extraordinary claims require extraordinary evidence.” Such evidence is not forthcoming. Making unsupported claims of this type is silly, counterproductive, and makes gun-rights advocates look absurd by association.

Fear me, for I am root

Google Authenticator Plugin: I’m sorry, but it is not possible for you to import an existing shared secret. You must generate a new one.

Me: Really? That’s annoying.

GAP: Yup. Sucks to be you.

Me: Fine. *generates a new secret* Oh, there’s something I ought to tell you.

GAP: Tell me.

Me: I have root access to the database in which the secret is stored. *edits the appropriate entry in the database, thus restoring the previous shared secret*
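For the curious, the actual “fix” was nothing more than a single database update. Here’s a minimal Python sketch of the idea, assuming a MySQL-backed site and entirely hypothetical table, column, and meta-key names (the real plugin’s schema may differ), with the PyMySQL library standing in for whatever database client you prefer:

```python
# Restore a previously-working TOTP shared secret by editing it directly
# in the database. Table, column, and key names below are illustrative guesses.
import pymysql

OLD_SECRET = "JBSWY3DPEHPK3PXP"  # the base32 secret already loaded in the phone app

conn = pymysql.connect(host="localhost", user="root",
                       password="********", database="blog")
with conn.cursor() as cur:
    cur.execute(
        "UPDATE user_meta SET meta_value = %s "
        "WHERE user_id = %s AND meta_key = %s",
        (OLD_SECRET, 1, "ga_shared_secret"),  # hypothetical meta key
    )
conn.commit()
conn.close()
```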

Technical Independence

The internet has contributed enormously to freedom of expression and global communications. Technical measures like encrypted VPNs have enabled people in restrictive, repressive societies to be heard by the rest of the world and access information otherwise prohibited to them.

This is fantastic, but there is one major drawback: the internet relies upon physical infrastructure. There’s no getting around the need to lay cables or to operate wireless links that terminate at various physical points (be they cable landing points, satellites and their ground stations, microwave towers, etc.), and so the physical presence and legal jurisdiction of key internet infrastructure has been a concern of mine for a while.

Take, for example, the DNS root zone: due to the hierarchical structure of the Domain Name System (DNS), there needs to be a “root” from which all names are delegated. As an example, consider the name of this website, www.arizonarifleman.com: this server is named “www”, which is a subdomain of “arizonarifleman”, which is in turn a subdomain of “com”, which is in turn a subdomain of the root1.
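To make the delegation chain concrete, here’s a trivial Python sketch that just walks the labels of this site’s name from the root downward; it performs no actual DNS queries and simply illustrates the hierarchy:

```python
# Walk the labels of a fully-qualified name from the root down.
# The trailing dot represents the root zone itself (see footnote 1).
name = "www.arizonarifleman.com."
labels = name.rstrip(".").split(".")

zone = ""
for label in reversed(labels):
    parent = zone if zone else "the root"
    zone = f"{label}.{zone}" if zone else label
    print(f"{zone} is delegated from {parent}")

# Output:
#   com is delegated from the root
#   arizonarifleman.com is delegated from com
#   www.arizonarifleman.com is delegated from arizonarifleman.com
```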

All top-level domains like “com”, “net”, “org”, “uk”, “au”, and so on are delegated directly from the root. While alternative roots have come and gone over the years, the official root is the de facto standard. To put it bluntly, the root zone is critical to the operation of the entire global internet.

Due to the US’s role in creating the modern internet, the DNS root zone is under the authority of the US Department of Commerce’s National Telecommunications and Information Administration (NTIA), which has delegated technical operations (but not ownership) of the root to IANA, operated by ICANN (a California non-profit that evolved out of early technical management of the DNS root). The root zone is served by hundreds of redundant, load-balanced physical servers representing 13 logical DNS root servers (the limit of 13 logical servers is a technical constraint stemming from limits on the size of DNS response packets). These servers are located all around the world.
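For reference, the 13 logical servers are named a.root-servers.net through m.root-servers.net, with each name answered by many anycast instances around the world. A quick Python one-off (standard library only) to look up their published IPv4 addresses:

```python
# Resolve the IPv4 address of each of the 13 logical root servers.
import socket
import string

for letter in string.ascii_lowercase[:13]:   # a through m
    host = f"{letter}.root-servers.net"
    print(f"{host}: {socket.gethostbyname(host)}")
```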

The DoC and NTIA have been remarkably hands-off when it comes to the actual management of the root zone and have worked to build a “firewall” between the administrative/political and technical sides of managing the DNS root.

Even so, many people (including myself) have concerns about a single country having administrative authority over such a key part of global infrastructure. The US government has recently been seizing domain names of sites accused of copyright infringement, as they claim jurisdiction over generic top-level domains like “com”, “net”, and “org” regardless of where the domains are registered or where the registrant is physically located. What would prevent the US government from turning off country-level domains like “uk”, “fr”, or “se”2 in the root? What about “ir” (Iran) or other countries that the US has various issues with?

Obviously if this happened there would be massive international outcry and a fracturing of the unified DNS system currently in place — this would likely be catastrophic to the internet.

What, then, could be done? Perhaps the authority for the root could be moved to another country? Sweden and Switzerland are both well-known for their political neutrality and freedoms, but again one runs into the problem of the authority being subject to the laws of a single nation.

Perhaps the UN? That’s been proposed as well, but there are definitely some drawbacks: many UN members are not exactly well-known for their support of free speech and would be more likely to manipulate the DNS for their own purposes. The US, even with its myriad legal issues as of late, has some of the strongest free speech protections in the world and a history of non-interference with the root zone.

Personally, I wonder if it’d be possible to raise the technical management and authority of the root zone above that of any particular country: a technical “declaration of independence”, if you will. If the root zone could be abstracted from any particular physical or political jurisdiction, I think that would be a great benefit to the world.

Of course, that would involve a change in the status quo and is unlikely to succeed. The US government has made it quite clear that it has no intention of relinquishing authority over the root zone, and any organization (such as ICANN) that intends to operate the root must be physically located somewhere and thus falls under the jurisdiction of some government.

Nevertheless, it’s interesting to consider.

Update (about an hour later): The US government just seized a .com domain name registered through a Canadian registrar, owned by a Canadian, operating a legal-in-Canada online gambling site, because it violated US and Maryland state laws. (They seized it by issuing a court order to Verisign, the operator of the “com” registry.) This serves to highlight my concerns above.

  1. The root name is not normally seen in day-to-day lookups, but is represented by a trailing dot. My domain would more properly be written as “www.arizonarifleman.com.” (note the trailing dot after “com”; that dot is the root).
  2. The Pirate Bay is a big target for authorities, and operates in Sweden under the “se” top-level domain.

CloudFlare Followup

A few days ago I posted about how I was going to be testing CloudFlare on this site.

Here’s a snippet of the stats generated since then:

[Screenshot: CloudFlare statistics]
By caching static content (images, CSS files, JavaScript, etc.) at various datacenters around the world, the service has substantially sped up my site’s response times (by 50-67%, depending on the day) and saved a not-insubstantial amount of bandwidth (which is nice, as I pay for the bandwidth I use).

About 10% of visits were known threats, usually comment spammers but occasionally automated exploit attempts and botnet zombies. These are blocked before they reach the site.

I’ve received no complaints from legitimate users, either by email or through the CloudFlare messaging system (it shows up for blocked visitors), which is an extra plus.

So far, things look quite promising. It may be more effective for more traffic-heavy sites than my own, but even for a small site like this one it’s saved a bunch of resources.

CloudFlare Testing

I’ve decided to test CloudFlare service on my blog.

It’s basically a DDoS-resistant caching service that should increase page loading speed for visitors.

In addition, it detects potentially malicious traffic (ranging from spammers to botnet members) headed for the blog and blocks it with a “challenge” page that describes why the visitor was blocked and offers a CAPTCHA to proceed. While it’s supposedly quite good at not blocking legitimate users, it may inadvertently challenge ordinary visitors. If this happens to you, please let me know (either by email or by filling in the appropriate field on the challenge page).

Followup on Spam Filtering

I figured that several readers are also bloggers in their own right, and might be interested in some information that I’ve gathered about spam and my efforts to block it.

This blog, which is not a terribly popular one, gets a substantial amount of comment spam. For example, here’s the amount of spam that was received for the last few months:

December 2010: 5,028
January 2011: 6,544
February 2011: 4,712
March 2011: 5,596

Compare that to the 25-30 legitimate comments made monthly, and you see that the ratio is extremely skewed in favor of spam. Since this blog was founded in 2008, 53,881 spams have been received, compared to 854 total legitimate messages: roughly 63 spam messages for every legitimate one.

Ideally, there would be no comment spam. Since that isn’t realistic, I want to block as much spam as I can, inconvenience users as little as possible, and keep the spam queue in the WordPress administrative interface as empty as I can.

Now, WordPress comes with an outstanding spam filter called Akismet. When it’s activated, all incoming comments are sent to Akismet for a spam/not-spam review. Since the service is centralized, they’re able to accumulate a huge amount of data about spammy and legitimate messages, adapt to changing spam patterns, and do remarkably well (99.96% according to my calculations) at detecting spam while allowing legitimate messages to pass. If it misses spam, or mistakenly flags a legitimate comment as spam, I can override the Akismet decision (and that override is sent to Akismet so it can adapt).
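For those curious about what happens under the hood, Akismet exposes a simple REST API; the WordPress plugin handles all of this for you, but a rough Python sketch of a comment-check call (with a placeholder API key and made-up comment details) looks something like this:

```python
# Ask Akismet's comment-check endpoint whether a comment looks like spam.
# The API key and comment details below are placeholders.
import urllib.parse
import urllib.request

API_KEY = "your-akismet-api-key"
ENDPOINT = f"https://{API_KEY}.rest.akismet.com/1.1/comment-check"

def is_spam(user_ip, user_agent, author, content):
    """Returns True if Akismet thinks the comment is spam."""
    data = urllib.parse.urlencode({
        "blog": "http://www.arizonarifleman.com",
        "user_ip": user_ip,
        "user_agent": user_agent,
        "comment_type": "comment",
        "comment_author": author,
        "comment_content": content,
    }).encode()
    with urllib.request.urlopen(ENDPOINT, data=data) as response:
        return response.read().decode().strip() == "true"
```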

Messages flagged as spam by Akismet go into the spam queue for my review. Unfortunately, this means that more than 150 spams a day get shunted there. Reviewing these messages is tedious and time-consuming. What if I could block the spam from even being submitted, thus reducing the amount of spam that I need to wade through?

Since all WordPress blogs have the same comments.php file, spammers don’t even need to fill in the normal comment form on the website: they can submit their spam directly to the comments.php file with the appropriate fields already filled in. Of course, since this is all done automatically by software, a slight change to the comments.php file will leave the spambots unable to submit messages. Enter NoSpamNX, a very handy plugin that makes such changes, breaking spambots without affecting humans. Specifically, it adds certain fields to the human-readable comment form that are pre-filled with randomly generated text (to keep spammers from adapting, it changes these random values every 24 hours).

If a comment does not include these hidden fields with that day’s random text, that means that the comment was not submitted through the ordinary human-readable form, and therefore must be spam. One can elect to then mark the message as spam, or simply delete it outright.
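The concept is easy to sketch. This isn’t NoSpamNX’s actual code (it’s a PHP plugin, and its real field names and token scheme differ); it’s just a Python illustration of the hidden-field check described above:

```python
# Conceptual sketch of a rotating hidden-field ("honeypot token") check.
# Field name, secret, and rotation scheme are illustrative assumptions.
import hashlib
import time

SERVER_SECRET = "some-long-random-server-side-secret"

def todays_token():
    """A value embedded in the real comment form; changes every 24 hours."""
    day = int(time.time()) // 86400
    return hashlib.sha256(f"{SERVER_SECRET}:{day}".encode()).hexdigest()[:16]

def submitted_via_real_form(form_fields):
    """Bots posting straight to comments.php won't include today's token."""
    return form_fields.get("nospamnx_token") == todays_token()
```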

This simple plugin has blocked 37,775 spams since I installed it in June 2010. During that same period, a total of 39,113 spams were submitted to my site. This means that NoSpamNX alone would have blocked about 96.6% of spam. Not bad, particularly for something that does not burden legitimate commenters with any additional steps like CAPTCHAs.

In my particular case, I like contributing spam messages to Akismet since it improves their statistics, so I elected to have NoSpamNX simply mark messages as spam rather than deleting them (the deletion would occur before the messages get submitted to Akismet). Thus, my spam queue had lots of messages for me to review. I needed something more, something that would provide a second opinion to Akismet and NoSpamNX.

In my December 14th post, I mentioned that I was testing out a plugin called Conditional CAPTCHA. This one is particularly useful: it waits for messages to get reviewed by existing spam filters such as Akismet. If Akismet says the message is legitimate, Conditional CAPTCHA does nothing, and the message is posted immediately. However, if the message is flagged as spam, then Conditional CAPTCHA presents a reCAPTCHA. If the CAPTCHA is solved incorrectly or no attempt to solve it is made within 10 minutes, the message is silently deleted and not added to the spam queue. If the CAPTCHA is solved correctly, the message is then placed into the moderation queue (I’m a bit suspicious, as it was marked as spam, so I want to review it prior to it being posted).
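Put together, the flow is straightforward. Here’s a rough Python sketch of the decision logic described above; the function names and return values are placeholders, since the real behavior lives inside the Akismet and Conditional CAPTCHA plugins:

```python
# Rough sketch of the comment-handling flow: Akismet decides first, and only
# comments it flags as spam ever see a CAPTCHA.
def handle_comment(comment, akismet_says_spam, captcha_passed):
    """akismet_says_spam and captcha_passed are callables supplied by the host app."""
    if not akismet_says_spam(comment):
        return "publish"            # legitimate: posted immediately
    if captcha_passed(comment):     # solved correctly within the time window
        return "moderation-queue"   # suspicious but human: hold for review
    return "delete-silently"        # failed or ignored CAPTCHA: discard
```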

Using Conditional CAPTCHA means that the vast majority of legitimate commenters are never inconvenienced by a CAPTCHA. Only commenters whose messages are flagged as spam are presented with such a challenge.

So far, Conditional CAPTCHA has stopped 18,589 spams since it was installed, essentially 100% of the spam submitted to this site. There have been exactly four messages that were flagged as spam and resulted in the CAPTCHA being solved correctly. All of these have been spam, and never made it out of the moderation queue.

In my particular case, NoSpamNX is a bit redundant: I use it simply to keep a measure of how many spammers submit spam directly to the comments.php file versus how many submit comments using the human-readable form.

In conclusion, if you are a WordPress blogger and are inundated with spam, both on your site and in your spam queue, I heartily recommend using both Akismet (which you should already be using) and Conditional CAPTCHA. Doing so should reduce your spam to practically nothing.

If other bloggers out there have some statistics on the spam they receive, what they use to combat it, and how effective those measures are, I would be quite interested in hearing about it.