Using ExpressVPN with Ubuntu, Linux Mint or Debian Linux

ExpressVPN is one of the highest-rated and fastest VPNs. Debian-based flavours of Linux such as Ubuntu, Linux Mint, and Debian itself are among the most widely used desktop distros in the world. Putting these two things together provides fast downloads with a very high degree of anonymity and privacy.

ExpressVPN supports Windows, Linux, and macOS. The Windows and macOS versions are fairly similar, but the Linux version is a wide departure: it is a command-line tool instead of a polished graphical application, and it comes in 32-bit and 64-bit flavours to accommodate both types of processors typically seen in PCs today. You can run the following command to see the bitness of your Linux distribution:
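A minimal check, assuming a typical GNU/Linux system with the standard coreutils installed:

```shell
# Print the machine hardware name: x86_64 indicates a 64-bit
# system, while i386 or i686 indicates 32-bit.
uname -m

# The full kernel string also contains the architecture.
uname -a
```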

Anything with ‘64’ in the output is running a 64-bit Linux distribution and you should use the 64-bit ExpressVPN download. If you’re running a 32-bit distribution you’ll likely see ‘i386’ or ‘i686’ in the output instead. For example, the uname output on my 64-bit system contains ‘x86_64’ (note the 64).

While Ubuntu, Linux Mint, and Debian are all based on the same package manager system, there are some differences in installation. I used 64-bit versions of Ubuntu 16.10, Linux Mint 18, and Debian 8.6.0 for this article.

Installing ExpressVPN

ExpressVPN is available from http://expressvpn.com. You’ll need to log in to your ExpressVPN account and navigate to the setup page.

ExpressVPN account

Linux should be preselected for you so you’ll just have to ensure you get one of the Ubuntu downloads.

ExpressVPN Linux download

Ubuntu 16.10 installation

Continue reading “Using ExpressVPN with Ubuntu, Linux Mint or Debian Linux”

Machines don’t guess.

It’s hard for many people to understand how account hacking works. “How can someone guess my password that’s made up of my kids’ names? It would have to be someone who knows me, and I add a number at the end to make it even harder.” Using your kids’ names is sure to create a weak password, but if I personally were to try to guess your password I wouldn’t have much luck, so the weakness is hard to demonstrate [4]. The basic problem at work here is the “faulty analogy” fallacy: things that look alike must be alike.

We’ve all seen humans attempt to guess various things in our lives, so we know guessing is error prone and inefficient. We then project those difficulties onto machine guessing and arrive at the incorrect conclusion that guessing is universally hard, that we don’t need strong passwords, and that we certainly don’t need to bother with having different passwords on different sites.

The specific fallacy here is that machines don’t guess.

Machines don’t need us

The implementation of a user interface (UI) is one of the last things done before a device ships to market. The UI is the thing we humans interact with to use the machine; it is the layer that gives the machine a way to send and receive data from us two-eyed, two-eared, 10-fingered life forms. It’s almost a pity layer. We’re so slow and limited that the eager machine has to bolt on a slow and limited set of buttons for us to interact with. Slow and limited we may be, but we’re also the only ones with money to buy things, so the machine grudgingly gets over it. Grudgingly, because it doesn’t give up its fast and smart machine layer when the human UI is bolted on. It lurks beneath, working non-stop, which implies that you can also choose to bypass the UI and communicate with the machine at its own level if you have the skills and the desire.

In some sense, that is what hacking is. It’s the ability to subvert the intended interaction method (the UI) to get at the machine below. In the case of account hacking the goal is to copy account usernames and passwords from the machine. Most websites have protections at the UI level to prevent attacks such as repeated attempts to guess passwords. If you were to go to your bank website and input an incorrect set of credentials repeatedly your account would eventually be locked out and your IP address temporarily blocked. If machines had to use that same human UI with all its safeguards in place then they’d have the same problem. It’s much easier for me to try to steal a copy of the user database and download it to my own machine. I then have access to the user database without all those UI constraints and can just hack away at it at my leisure to try to derive all the username and password combinations within.
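To make that concrete, here is a minimal sketch of what machine “guessing” looks like once the database is on the attacker’s machine. The leaked hash and wordlist are invented for illustration (real crackers use optimized tools such as hashcat rather than a shell loop), but the principle holds: no UI, no lockouts, no rate limits.

```shell
#!/bin/sh
# Pretend this SHA-256 hash came from a stolen user database.
# (Here we derive it from "jennifer" so the example is self-contained.)
leaked_hash="$(printf '%s' 'jennifer' | sha256sum | cut -d' ' -f1)"

# A tiny stand-in wordlist; real ones contain millions of entries.
printf '%s\n' michael jennifer jordan harley > wordlist.txt

# Hash each guess and compare -- no lockout between attempts.
while read -r guess; do
  if [ "$(printf '%s' "$guess" | sha256sum | cut -d' ' -f1)" = "$leaked_hash" ]; then
    echo "cracked: $guess"   # prints: cracked: jennifer
    break
  fi
done < wordlist.txt
```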

User database breaches are legendary these days. The Have I Been Pwned website verifies and catalogues these types of breaches and has almost 2 billion accounts listed so far. If you consider that only about 3.5 billion people even have access to the Internet, that’s a lot of data breaches [1]. And most of these breaches are for sale. Multiple times.

That brings us back to your password.
Continue reading “Machines don’t guess.”

Proper names in the top 10,000 most commonly used passwords

This post came from data I compiled for another post and I thought it was interesting enough to keep. Out of the top 10,000 most commonly used passwords in this list at the time of this writing, these 30 names appear in the top 100.

Let me say that another way: 30 of the 100 most common passwords on the entire planet are proper names. Stop using names, people:

michael
jennifer
jordan
harley
hunter
buster
thomas
robert
george
charlie
andrew
michelle
jessica
daniel
joshua
maggie
william
ashley
amanda
nicole
ginger
heather
taylor
austin
merlin
matthew
martin
chelsea
patrick
richard

Slumped Over Keyboard Dead Glossary

I write about technology a lot. I don’t consider this a beginner tech blog, but I’m also keenly aware that many technology words and acronyms are not well known. I thought it prudent to build a glossary that I can link to when I use these terms so we can all learn together. I’ll try to keep it in alphabetical order; let’s see how that goes. I’ll add to this as life goes on and bump it back to the top whenever I do.

DDoS

Distributed Denial of Service attack. We generally drop the word “attack” today and just refer to the attack as “a DDoS” or say “they were DDoSed”. It’s pronounced Dee Doss, not Dee Dee Oh Ess. I will go to the grave saying Dee Doss.

The DDoS of today has its roots in the DoS, meaning simply “Denial of Service” attack. The added D is for “Distributed”. When the Internet was small and towney, we saw plain DoS attacks, which were easy to mitigate: a DoS attack is perpetrated by one or two IP addresses, so you just block that IP or two and the attack is over. Today’s Distributed DoS attacks are much harder to mitigate because they come from a wide range of attacking IP addresses.
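For a single-source DoS, the fix really can be one firewall rule. A sketch using iptables, with a made-up attacker address from the reserved documentation range:

```shell
# Drop all traffic from the (hypothetical) attacking IP.
iptables -A INPUT -s 203.0.113.7 -j DROP
```

With tens of thousands of distributed sources, per-IP rules like this stop scaling, which is exactly what makes a DDoS so much harder to mitigate.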

The first significant DDoS was recorded in 1999, when 227 servers were knocked offline for days. On October 21st 2016, over 10,000,000 IPs were recruited to attack the Dyn DNS servers, which made thousands of websites unavailable for a few hours. These times, they are a-changin’.

IoT

Internet of Things. This term is kind of snobbish. It considers “proper” Internet devices to be computers, routers, and maybe smartphones. Anything else is a “thing”, and the proliferation of these Internet-connected “things” has spawned the term Internet of Things.

I’ve heard this pronounced both as Eye Oh Tee and plainly spoken as “Internet of Things”. It works both ways for now, mostly because the term is very new. Language is built on consensus, and there may be a preferred way to pronounce IoT soon.

The list of things is almost endless now and I am sure it will grow to include every device on the planet within the next decade. Fridges, televisions, lightbulbs, and toasters are all available in wifi connected models for your amusement. The first Internet was populated by people. The current Internet forces us to share the Internet with things.

I’ve written more about the problems with IoT here.

Mirai botnets: the vanishing upper limit of DDoS attacks.

There is a lot of blame to go around in the aftermath of the Dyn DDoS attack on Oct 21st. A good chunk of the bots look like Internet of Things (IoT) devices that were recruited by the Mirai botnet code. Mirai has dropped the traditionally high costs of building a botnet to near zero which means we’re seeing progressively larger and more effective DDoS attacks each week.

Sucuri discovered the first IoT botnet using CCTV devices in June. It was not long after that we started to see significantly larger DDoSes occurring and breaking all existing records for DDoS volume to date.

Why is Mirai such a big deal?

As I alluded to in the introduction, the cost of building a botnet used to be high. All those spam and phishing emails we’ve become numb to over the years were part of that effort. Hackers had to painstakingly trick each of us into clicking a malicious link which installed their malware on our (usually Windows) PC. It would take thousands of emails to get one or two suckers to click the link. It often took months to build a really powerful botnet with hundreds or thousands of zombie computers. And once it was built, it had to be carefully guarded to ensure it did not get dismantled by anti-virus software and other measures.

The reason this was so hard is that it was a person-against-person attack. Hacker guy had an agenda to trick you into clicking the link, and you had a very good reason not to do that. That is why it took so many attempts to net one or two clicks. IoT botnets are a different beast altogether. It’s smart humans against painfully dumb machines that have no way to even know what is happening to them, much less any sentient desire to protect themselves. The most significant contributing factor is the sheer number of these devices deployed with the factory username and password, which means they may as well have no authentication system at all.

Mirai makes composing a botnet of tens of thousands of devices even easier by automating the process. Mirai will even find the devices out on the Internet. So now we have a situation where millions of dumb devices can be successfully exploited en masse within a short time frame. It’s the perfect storm.

Why was the Dyn DDoS attack significant?

Continue reading “Mirai botnets: the vanishing upper limit of DDoS attacks.”

Remote work: the last meritocracy

The general idea of remote work is that you do the same job you would do in the office, but you don’t have to actually go to the office. This removes all the problems with people and politics of the office. That’s viewed as a huge benefit, but the reality is that many people only keep their jobs because of the people and politics of the office. Remote work strips all that away and leaves you standing naked in a meritocracy where only your skills matter.

I’ve worked remotely for 7 out of the last 9 years. For 4 years I was a remote contractor left to my own devices. I spent 2 years working as a remote worker for a non-remote company, and I’ve spent the last year-ish working as a remote worker for a remote company. While sitting at home looks the same in all cases, each of those situations was very different from the others.

Here’s what I have learned from each of those situations:

Remote work as a contractor

Unless you want to spend a lot of time chasing business, chasing cheques, and schmoozing on the phone, you’re screwed. The vast majority of remote “employers” are really just guys with ideas who want the cheapest possible labour to see if their idea has legs. They’re not invested in the idea of building a remote workforce for any reason other than that they see it as the cheapest way to get going. They’ll work the shit out of you to see if you’re good “startup material” (which really means “I have no money because nobody but me believes in my idea”) and discard you when you’re so exhausted you trip. If they have no backers, be wary. Don’t know if they have backers? Google it; angels and VCs love to talk about who they’re backing.

I spent about 25% of my time actually working and the rest of the time doing these tasks in no particular order:

  • Trying to find new work.
  • Trying to get paid for completed work.
  • Trying to figure out the best way to acquire gear and services (from a tax perspective).
  • Learning how to do my taxes properly.
  • Mourning the loss of my skill set because I was not using it.

Continue reading “Remote work: the last meritocracy”

The problem with the Internet of Things is the things

The “Internet of Things”, or IoT, refers to the ever-expanding offerings of traditionally non-Internet-connected things that can now be connected to the Internet. The array of things you can connect to your home wifi network is staggering and, to be honest, pretty dumb. Internet-connected toasters, light bulbs and even hot tubs are all available to lurk on your home network and send god only knows what data about you to god only knows where.

Your home network should be a safe place where only trusted devices have access. Traditionally, this has meant your own computers, your own smartphones and perhaps a few other devices such as gaming consoles. The problem with attaching a new device to your trusted network is two-fold: does it make attacking your network easier, and what is it doing with the data it collects?

The attack vectors

Any device attached to your network can see all the other devices and, potentially, have access to them. If you’re sharing your budget and medical documents with your wife’s computer, that’s fine. But is it possible to really keep track of a large number of often innocuous Internet-connected devices that you’ve introduced to your network over time?
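One small aid, assuming a Linux machine on the network: the kernel’s neighbour (ARP) table lists every device your machine has recently exchanged packets with, which makes a quick, free inventory. A tool such as nmap can do a fuller sweep (e.g. `nmap -sn 192.168.1.0/24`).

```shell
# List recently seen devices on the local network with their
# IP addresses, MAC addresses, and reachability state.
ip neigh show
```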

Additionally, each device connected to your network that talks to the outside world introduces a new attack vector and heightens the vulnerability of your safe network to some degree. Most of us run anti-virus, ad-blockers, and possibly even firewalls on our PCs to keep bad guys out, but what does that toaster come with? Does it have any security software installed to prevent itself from becoming the weakest link in your network?

IoT devices are built by device manufacturers. This may seem like a self-evident statement, and perhaps it is, but the point is that light bulb people build light bulbs and hot tub people build hot tubs. Their area of expertise is in the thing, not in the Internet, which means the Internet part of their device is a secondary concern. Internet-connected CCTV networks, printers, and even cars have been hacked over the Internet, largely because manufacturers do not have the Internet mindset that is born and flourishes under a healthy paranoia level of 11.

Continue reading “The problem with the Internet of Things is the things”

Troubleshooting SSL certificates with openssl

A big chunk of the problems I tackle every day surround SSL connections. I’ve written a few articles on SSL that cover its main jobs, encryption and non-repudiation, and some ways to determine if your SSL certificate is non-functioning. The tool I use 99% of the time to diagnose SSL problems is openssl, so that is the topic of this post.

I am a Linux guy; if you’re using Windows you may find a binary here you can use.

An SSL connection needs two things: a private key, which you likely won’t have for websites you don’t own, and a public certificate, which is necessarily available to the whole world. It’s the certificate we’re interested in, and here’s how to get it:
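Assuming openssl is installed, that looks like this (the echo closes the connection once the handshake completes):

```shell
# Connect to the web server and print the handshake details,
# including the server's certificate in PEM form.
echo | openssl s_client -connect slumpedoverkeyboarddead.com:443
```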

This spits out a lot of info and you can pipe the output into openssl again to extract specific data like the valid date range:
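For example, the validity window (using the same host as above):

```shell
# Extract just the notBefore/notAfter dates from the certificate.
echo | openssl s_client -connect slumpedoverkeyboarddead.com:443 2>/dev/null \
  | openssl x509 -noout -dates
```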

Or the name the certificate is made out for:
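The subject field holds that name:

```shell
# Extract the subject, i.e. who the certificate was issued for.
echo | openssl s_client -connect slumpedoverkeyboarddead.com:443 2>/dev/null \
  | openssl x509 -noout -subject
```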

Or both!
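The x509 subcommand accepts multiple flags at once:

```shell
# Subject and validity dates in one shot.
echo | openssl s_client -connect slumpedoverkeyboarddead.com:443 2>/dev/null \
  | openssl x509 -noout -subject -dates
```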

Most of your SSL problems will fall into two categories: the subject name of the certificate does not match the domain name or the certificate is expired.

Note that in my case it looks like I asked for the certificate for slumpedoverkeyboarddead.com but I ended up with the certificate for *.sucuri.net. This is kind of misleading. I didn’t ask for the slumpedoverkeyboarddead.com certificate; rather, I told openssl to connect to slumpedoverkeyboarddead.com. It did, and since I did not supply a domain name, the server responded with its default certificate. This will happen on any server that is configured to serve more than one domain, which includes things like my firewall or any shared hosting server. To get a specific certificate you must supply the servername directive:
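The -servername flag sends the domain inside the TLS handshake (Server Name Indication), letting the server pick the matching certificate:

```shell
# Ask specifically for slumpedoverkeyboarddead.com's certificate.
echo | openssl s_client -connect slumpedoverkeyboarddead.com:443 \
  -servername slumpedoverkeyboarddead.com 2>/dev/null \
  | openssl x509 -noout -subject
```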

If your domain name does not resolve directly to your web host as is the case with slumpedoverkeyboarddead.com, you can specify the real hosting IP address in the connect directive to get the certificate from that host, instead of the intermediate proxy or firewall:
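Something like this, where 203.0.113.10 is a placeholder standing in for the real hosting IP (substitute your own host’s address):

```shell
# Connect to the hosting server directly by IP, while still
# requesting the certificate for the domain via SNI.
echo | openssl s_client -connect 203.0.113.10:443 \
  -servername slumpedoverkeyboarddead.com 2>/dev/null \
  | openssl x509 -noout -subject
```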

Note that I have used the same IP address that slumpedoverkeyboarddead.com resolves to instead of my real hosting IP because I don’t want to divulge that. But, it works the same way.

This is usually enough to diagnose SSL connection issues, and resolving them should be straightforward: renew the certificate if it is expired, or replace it with a valid certificate if the domain name does not match.

My website is down. Now what? Part 5 – SSL/HTTPS Issues

This is part of a series on diagnosing your website outage issues. This is part five; links to the other parts are here.

In Part 1 of this series we covered the overview of what could have broken to cause your website to go down. In Part 2, we started working through those possible issues by diagnosing DNS issues. In Part 3 we diagnosed routing issues. In Part 4 we looked at how to diagnose problems with any architectural layers such as firewalls. Now that we know all that is good, we need to look at what is going on with the web host itself. If your site runs over HTTPS, there are a myriad of issues that broken certificates or broken code can cause and that is the subject of this article.

This is not an article on what SSL is or how it works, but some basic terms and knowledge are necessary to understand the content of this article so I will lay them out.

Although secure web sessions are referred to as ‘SSL’ and the certificates that provide this security are called ‘SSL certificates’, the more correct term is TLS. The Transport Layer Security (TLS) standard replaced the Secure Sockets Layer (SSL) standard. But to avoid confusion I will use SSL since it is in more common use, even though this guy will kill me.

SSL certificates are the mechanism by which secure Hypertext Transfer Protocol (HTTP) sessions are created. Those secure HTTP sessions are referred to as HTTPS (note the ‘S’ denoting Secure). Therefore, the proper way to think of this is that traffic between your website and your visitor is encrypted when they connect to your web server using https:// links, and that encryption is implemented by means of the SSL certificate installed on your host.

Lastly before we jump in, it’s important to understand what SSL certificates actually do. They have two jobs:

  1. Encrypt the traffic between your website visitor and your website so that it cannot be read if it is intercepted by bad guys. Intercepting traffic is easier than you probably think, but if the requests are encrypted, the bad guy only gets a bunch of encrypted blobs.
  2. Provide non-repudiation to your browser, meaning that it assures your browser that it is connecting to the website it asked for. Imagine if you told your browser to connect to your bank, but it connected to some other bad site and you entered your username and password into that bad site. SSL non-repudiation prevents that. I wrote an article on the other things SSL certificates do for the Sucuri blog here if you’d like more information.

So, knowing the two main jobs SSL does, what can go wrong on your SSL-enabled site? Here are some of the most common:

Continue reading “My website is down. Now what? Part 5 – SSL/HTTPS Issues”

My website is down. Now what? Part 4 – Layers

This is part of a series on diagnosing your website outage issues. This is part four; links to the other parts are here.

In Part 1 of this series we covered the overview of what could have broken to cause your website to go down. In Part 2, we started working through those possible issues by diagnosing DNS issues. In Part 3 we diagnosed routing issues. Now that we know your domain’s DNS is good, the routes are good, we’re going to start looking at any layers you may have in your architecture.

The term “layers” refers to things like firewalls or Content Distribution Networks (CDNs) that may be present in your architecture. If you don’t use these things, you can skip to the next section, which I will link to here when it is ready.

A typical website architecture looks like this:

website visitor -> web hosting server

There are no layers involved in this architecture. Your visitor simply hits your website directly. That works just fine and represents probably 80% of the use cases out there, but an increasing number of website owners are starting to employ firewalls and CDNs to secure and speed up their sites. If you employ a firewall such as The Mighty Sucuri CloudProxy, your architecture changes to look like this:

website visitor -> Sucuri CloudProxy -> web hosting server

If you harken back to Part 3 where we discussed routing, you will recognize that this change in architecture introduces another point of failure for your website. How do you test those parts to ensure they are functioning? There are a few options.

Continue reading “My website is down. Now what? Part 4 – Layers”