Planet SysAdmin

November 25, 2015

Evaggelos Balaskas

Sender Policy Framework

There is a very simple way to add SPF check support to your Postfix setup.
Below are my notes on CentOS 6.7

Step One: install python policy daemon for spf

# yum -y install pypolicyd-spf

Step Two: Create a new postfix service, called spfcheck

# vim + /etc/postfix/master.cf

spfcheck     unix  -       n       n       -       -       spawn
	user=nobody argv=/usr/libexec/postfix/policyd-spf

Step Three: Add a new smtp daemon recipient restrictions

# vim +/^smtpd_recipient_restrictions /etc/postfix/main.cf
smtpd_recipient_restrictions =
	check_policy_service unix:private/spfcheck
policy_time_limit = 3600
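
After editing both files, Postfix has to pick up the changes; on CentOS 6 something along these lines does it (the syntax check is optional but cheap):

# postfix check
# service postfix reload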

And this is what we see in the end in the source view of a received email:

Received-SPF: Pass (sender SPF authorized) identity=mailfrom;;
helo=server.mydomain.tld; envelope-from=user@mydomain.tld;

where client-ip is the IP of the sending mail server,
server.mydomain.tld is the name of the sending mail server,
user@mydomain.tld is the sender's email address,
and receiver is of course the receiver's mail address.

You can take a closer look at the Postfix Python SPF policy daemon here: python-postfix-policyd-spf

Tag(s): postfix, spf

November 25, 2015 09:35 PM

Everything Sysadmin

Why I don't care that Dell installs Rogue Certificates On Laptops

In recent weeks Dell has been found to have installed rogue certificates on laptops they sell. Not once, but twice. The security ramifications of this are grim. Such a laptop can have its SSL-encrypted connections sniffed quite easily. Dell has responded by providing uninstall instructions and an application that will remove the cert. They've apologized and that's fine... everyone makes mistakes, don't let it happen again. You can read about the initial problem in "Dell Accused of Installing 'Superfish-Like' Rogue Certificates On Laptops" and the recurrence in "Second Root Cert-Private Key Pair Found On Dell Computer".

And here is why I don't care.

November 25, 2015 06:00 PM

We forget how big "big" is

Talk with any data-scientist and they'll rant about how they hate the phrase "big data". Odds are they'll mention a story like the following:

My employer came to me and said we want to do some 'big data' work, so we're hiring a consultant to build a Hadoop cluster. So I asked, "How much data do you have?" and he replied, "Oh, we never really measured. But it's big. Really big! BIIIIG!!"

Of course I did some back-of-the-envelope calculations and replied, "You don't need Hadoop. We can fit that in RAM if you buy a big enough Dell." He didn't believe me. So I went to Dell's site and showed him a server that could support twice that amount, for less than the cost of the consultant.

We also don't seem to appreciate just how fast computers have gotten.

November 25, 2015 03:00 PM

Steve Kemp's Blog

A transient home-directory?

For the past few years all my important work has been stored in git repositories. Thanks to the mr tool I have a single configuration file that allows me to pull/maintain a bunch of repositories with ease.
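
For anyone who hasn't used mr: that configuration file is a simple INI-style list of repositories, one stanza each. A minimal sketch (paths and URL made up for illustration) looks like:

# ~/.mrconfig - example stanzas only
[src/dotfiles]
checkout = git clone '' 'dotfiles'

[src/scripts]
checkout = git clone '' 'scripts'

With that in place, a single "mr update" walks every stanza and pulls each repository.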

Having recently wiped & reinstalled a pair of desktop systems I'm now wondering if I can switch to using a totally transient home-directory.

The basic intention is that:

  • Every time I login "rm -rf $HOME/*" will be executed.

I see only three problems with this:

  • Every time I login I'll have to reclone my "dotfiles", passwords, bookmarks, etc.
  • Some programs will need their configuration updated, post-login.
  • SSH key management will be a pain.

My dotfiles contain my bookmarks, passwords, etc. But they don't contain setup for GNOME, etc.

So there might be some configuration that will become annoying - for example I like "Ctrl-Alt-t" to open a new gnome-terminal. That's something I configure the first time I log in to each new system.

My images/videos/books are all stored beneath /srv and not in my home directory - so the only thing I'll be losing is program configuration, caches, and similar.

Ideally I'd be using a smartcard for my SSH keys - but I don't have one - so for the moment I might just have to rsync them into place, but that's grossly bad.
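
Pulled together, the whole thing could live in a small script kept outside $HOME (so it survives the wipe) and run at login. The following is only a sketch with made-up host and paths, and the key rsync carries exactly the ugliness just mentioned:

#!/bin/sh
# hypothetical /usr/local/bin/fresh-home - start each login from a clean slate
set -e
find "$HOME" -mindepth 1 -delete        # the "rm -rf $HOME/*" step, hidden files included
rsync -a keyhost:.ssh/ "$HOME/.ssh/"    # the grossly-bad key copy (needs password/agent auth)
chmod 700 "$HOME/.ssh"
git clone keyhost:repos/dotfiles.git "$HOME/.dotfiles"
"$HOME/.dotfiles/"             # hypothetical script that links configs into place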

It'll be interesting to see how well this works out, but I see a potential gain in portability and discipline at the very least.

November 25, 2015 12:18 PM

Chris Siebenmann

Why speeding up (C)Python matters

The simple answer to why speeding up Python matters is that speed creates freedom.

Right now there are unquestionably a lot of things that you can't do in (C)Python because they're too slow. Some of these are large macro things, like certain sorts of programs; others are more micro things, like using complicated data structures or doing in pure Python things that are normally implemented in C. Speeding up Python creates the straightforward freedom to do more and more of these things.

In turn, this creates another freedom, the freedom to write code without having to ask yourself 'is this going to be too slow because of Python?'. Again, this is both at the macro level of what problems you tackle in Python and the micro level of how you attack problems. With speed comes the freedom to write code in whatever way is natural, to use whatever data structure is right for the problem, and so on. You don't have to contort your code to optimize for what your Python environment makes fast (in CPython, C level modules; in PyPy, whatever the JIT can do a good job recognizing and translating).

There is a whole universe of Python code that reasonably experienced CPython programmers know you simply don't write because it would be too slow, create too much garbage, and so on. Speed creates the freedom to use more and more of that universe too, in addition to the Python universe that we already program in. This freedom matters even if none of the programs we write are ever CPU-bound to any particular degree.

(Admittedly there's a peculiar kind of freedom that comes from having a slow language, but that's another entry.)

(This is not particularly novel and I've orbited the core of this thought in earlier entries. I just feel like writing it down explicitly today after writing about why I care about how fast CPython is, partly because a rejoinder to that entry is 'why do we care if CPython isn't really fast?'.)

by cks at November 25, 2015 06:27 AM


{git, hg} Custom Log Output

The standard log output for both Git and Mercurial is a bit verbose for my liking. I keep my terminal at ~50 lines, which results in only getting about 8 to 10 log entries depending on how verbose the commit was. This isn't a big deal if you are just i...
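
The post is truncated in this feed, but the usual cure - not necessarily the author's exact recipe - is a one-line-per-commit format, for example:

$ git log --oneline -20
$ hg log --template '{node|short} {desc|firstline}\n' -l 20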

by Scott Hebert at November 25, 2015 06:00 AM

November 24, 2015

Carl Chenet

db2twitter: Twitter out of the browser

You have a database and a tweet pattern, and want to automatically tweet on a regular basis? No need for RSS, fancy tricks, or a 3rd-party website to translate RSS to Twitter or whatever. Just use db2twitter.

db2twitter is pretty easy to use!  First define your Twitter credentials:


Then your database information:


Then the pattern of your tweet, a Python-style formatted string:

tweet={} hires a {}{}

Add db2twitter in your crontab:

*/10 * * * * db2twitter db2twitter.ini

And you’re all set! db2twitter will generate and tweet the following tweets:

MyGreatCompany hires a web developer
CoolStartup hires a devops skilled in Docker

db2twitter is developed by and run for the job board of the French-speaking Free Software and Open Source community.


db2twitter also has cool options like:

  • only tweet during user-specified times (e.g. 9AM-6PM)
  • use a user-specified SQL filter in order to get data from the database (e.g. only fetch rows where status == « edited »)

db2twitter is coded in Python 3.4, uses SQLAlchemy (see supported database types) and Tweepy. The official documentation is available on readthedocs.

by Carl Chenet at November 24, 2015 06:00 PM

Chris Siebenmann

PC laptop and desktop vendors are now clearly hostile parties

You may have heard of Lenovo's SuperFish incident, where Lenovo destroyed HTTPS security on a number of their laptops by pre-installing root certificates with known private keys. Well, now Dell's done it too, and not just on consumer laptops, and it turns out not just one bad certificate but several. One could rant about Dell here, but there's a broader issue that's now clear:

PC vendors have become hostile parties that you cannot trust.

Dell has a real brand. It sells to businesses, not just consumers. Yet Dell was either perfectly willing to destroy the security of business oriented desktops or sufficiently incompetent to not understand what they were doing, even after SuperFish. And this was not just a little compromise, where a certificate was accidentally included in the trust store, because a Dell program that runs on startup puts the certificate back in even when it's removed. This was deliberate. Dell decided that they were going to shove this certificate down the throat of everyone using their machines. The exact reasons are not relevant to people who have now had their security compromised.

If Dell can do this, anyone can, and they probably will if they haven't already done so. The direct consequence is that all preinstalled vendor Windows setups are now not trustworthy; they must be presumed to come from a hostile party, one that has actively compromised your security. If you can legally reinstall from known good Microsoft install media, you should do that. If you can't, well, you're screwed. And by that I mean that we're all screwed, because without trust in our hardware vendors we have nothing.

Given that Dell was willing to do this to business desktops, I expect that sooner or later someone will find similar vendor malware on preinstalled Windows images on server hardware (if they haven't already). Of course, IPMIs on server hardware are already an area of serious concern (and often security issues all on their own), even before vendors decide to start equipping them with features to 'manage' the host OS for you in the same way that the Dell startup program puts Dell's terrible certificate back even if you remove it.

(Don't assume that you're immune on servers just because you're running Linux instead of Windows. I look forward to the grim meathook future (tm jwz) where server vendors decide to auto-insert their binary kernel modules on boot to be helpful.)

Perhaps my gloomy cloud world future without generic stock servers is not so gloomy after all; if we can't trust generic stock servers anyways, their loss is clearly less significant. Smaller OEMs are probably much less likely to do things like this (for multiple reasons).

by cks at November 24, 2015 04:06 AM

November 23, 2015


Engine Building Lesson 2: An Example MD5 Engine

Coming back after a month and two weeks, it’s time to resume with the next engine lesson, this time building an engine implementing a digest.

It doesn't matter much what digest algorithm we choose. Being lazy, I've chosen one with a well defined reference implementation, MD5 (the reference implementation is found in RFC 1321).

In this example, I’ve extracted the three files global.h, md5.h and md5c.c into rfc1321/. According to the license, I need to identify them as “RSA Data Security, Inc. MD5 Message-Digest Algorithm”, hereby done. [^1]

From an engine point of view, there are three things that need to be done to implement a digest:

  1. Create an OpenSSL digest method structure with pointers to the functions that will be OpenSSL’s interface to the reference implementation.
  2. Create the interface functions.
  3. Create a digest selector function.

Let’s begin with the structure by example:

The digest method
#include <openssl/evp.h>
#include "rfc1321/global.h"
#include "rfc1321/md5.h"

static const EVP_MD digest_md5 = {
  NID_md5,                      /* The name ID for MD5 */
  0,                            /* IGNORED: MD5 with private key encryption NID */
  16,                           /* Size of MD5 result, in bytes */
  0,                            /* Flags */
  md5_init,                     /* digest init */
  md5_update,                   /* digest update */
  md5_final,                    /* digest final */
  NULL,                         /* digest copy */
  NULL,                         /* digest cleanup */
  EVP_PKEY_NULL_method,         /* IGNORED: pkey methods */
  64,                           /* Internal blocksize, see rfc1321/md5.h */
  sizeof(EVP_MD *) + sizeof(MD5_CTX), /* Size of ctx->md_data, preallocated by OpenSSL */
  NULL                          /* IGNORED: control function */
};

NOTE: the EVP_MD will become opaque in future OpenSSL versions, starting with version 1.1, and the structure init above will have to be replaced with a number of function calls to initialise the different parts of the structure. More on that later.

A slightly complicating factor with this structure is that it also involves private/public key ciphers. I’m ignoring the associated fields in this lesson, along with the control function (all marked “IGNORED:”) and will get back to them in a future lesson where I’ll get into public/private key algo implementations.

The numerical ID is an OpenSSL number that identifies the digest algorithm, and is an index to the OID database.
The flags are not really used for pure digests and can be left zero.
The copy and cleanup functions are hooks to be used if there's more to copying and cleaning up an EVP_MD_CTX for our implementation than OpenSSL can handle on its own. The MD5_CTX structure in the reference MD5 implementation has nothing magic about it, and there is therefore no need to use the copy and cleanup hooks.
The internal block size is present for optimal use of the algorithm.

On to implementing the interface functions md5_init, md5_update and md5_final:

The interface functions
static int md5_init(EVP_MD_CTX *ctx)
{
  MD5Init(ctx->md_data);        /* set up the RFC 1321 context in the preallocated md_data */
  return 1;
}

static int md5_update(EVP_MD_CTX *ctx, const void *data, size_t count)
{
  MD5Update(ctx->md_data, data, count);
  return 1;
}

static int md5_final(EVP_MD_CTX *ctx, unsigned char *md)
{
  MD5Final(md, ctx->md_data);
  return 1;
}

With the reference implementation, things are very straightforward: no errors can happen, and all we do is pass data back and forth. A note: ctx->md_data is preallocated by OpenSSL using the size given in the EVP_MD structure. In a more complex implementation, these functions are expected to return 0 on error, 1 on success.

The selector function deserves some special attention, as it really performs two separate functions. The prototype is as follows:

Digest selector prototype
int digest_selector(ENGINE *e, const EVP_MD **digest,
                    const int **nids, int nid);

OpenSSL calls it in the following ways:

  1. with digest being NULL. In this case, *nids is expected to be assigned a zero-terminated array of NIDs and the call returns with the number of available NIDs. OpenSSL uses this to determine what digests are supported by this engine.
  2. with digest being non-NULL. In this case, *digest is expected to be assigned the pointer to the EVP_MD structure corresponding to the NID given by nid. The call returns with 1 if the requested NID is one supported by this engine, otherwise 0.

The implementation would be this for our little engine:

Digest selector
static int digest_nids[] = { NID_md5, 0 };
static int digests(ENGINE *e, const EVP_MD **digest,
                   const int **nids, int nid)
{
  int ok = 1;
  if (!digest) {
    /* We are returning a list of supported nids */
    *nids = digest_nids;
    return (sizeof(digest_nids) - 1) / sizeof(digest_nids[0]);
  }

  /* We are being asked for a specific digest */
  switch (nid) {
  case NID_md5:
    *digest = &digest_md5;
    break;
  default:
    ok = 0;
    *digest = NULL;
    break;
  }
  return ok;
}

What remains to do is the following call as part of setting up this engine:

Setting up
  if (!ENGINE_set_digests(e, digests)) {
    printf("ENGINE_set_digests failed\n");
    return 0;
  }

Combining this with the code from the last lesson results in this:

(md5-engine.c) download
#include <stdio.h>
#include <string.h>

#include <openssl/engine.h>

#include <openssl/evp.h>
#include "rfc1321/global.h"
#include "rfc1321/md5.h"

static int md5_init(EVP_MD_CTX *ctx)
{
  MD5Init(ctx->md_data);        /* set up the RFC 1321 context in the preallocated md_data */
  return 1;
}

static int md5_update(EVP_MD_CTX *ctx, const void *data, size_t count)
{
  MD5Update(ctx->md_data, data, count);
  return 1;
}

static int md5_final(EVP_MD_CTX *ctx, unsigned char *md)
{
  MD5Final(md, ctx->md_data);
  return 1;
}

static const EVP_MD digest_md5 = {
  NID_md5,                      /* The name ID for MD5 */
  0,                            /* IGNORED: MD5 with private key encryption NID */
  16,                           /* Size of MD5 result, in bytes */
  0,                            /* Flags */
  md5_init,                     /* digest init */
  md5_update,                   /* digest update */
  md5_final,                    /* digest final */
  NULL,                         /* digest copy */
  NULL,                         /* digest cleanup */
  EVP_PKEY_NULL_method,         /* IGNORED: pkey methods */
  64,                           /* Internal blocksize, see rfc1321/md5.h */
  sizeof(EVP_MD *) + sizeof(MD5_CTX), /* Size of ctx->md_data, preallocated by OpenSSL */
  NULL                          /* IGNORED: control function */
};

static int digest_nids[] = { NID_md5, 0 };
static int digests(ENGINE *e, const EVP_MD **digest,
                   const int **nids, int nid)
{
  int ok = 1;
  if (!digest) {
    /* We are returning a list of supported nids */
    *nids = digest_nids;
    return (sizeof(digest_nids) - 1) / sizeof(digest_nids[0]);
  }

  /* We are being asked for a specific digest */
  switch (nid) {
  case NID_md5:
    *digest = &digest_md5;
    break;
  default:
    ok = 0;
    *digest = NULL;
    break;
  }
  return ok;
}

static const char *engine_id = "MD5";
static const char *engine_name = "A simple md5 engine for demonstration purposes";
static int bind(ENGINE *e, const char *id)
{
  int ret = 0;

  if (!ENGINE_set_id(e, engine_id)) {
    fprintf(stderr, "ENGINE_set_id failed\n");
    goto end;
  }
  if (!ENGINE_set_name(e, engine_name)) {
    printf("ENGINE_set_name failed\n");
    goto end;
  }
  if (!ENGINE_set_digests(e, digests)) {
    printf("ENGINE_set_digests failed\n");
    goto end;
  }

  ret = 1;
 end:
  return ret;
}

IMPLEMENT_DYNAMIC_BIND_FN(bind)
IMPLEMENT_DYNAMIC_CHECK_FN()


Building is just as easy as last time:

$ gcc -fPIC -o rfc1321/md5c.o -c rfc1321/md5c.c
$ gcc -fPIC -o md5-engine.o -c md5-engine.c
$ gcc -shared -o md5-engine.so -lcrypto md5-engine.o rfc1321/md5c.o

Let’s start with checking that OpenSSL loads it correctly. Remember what Lesson 1 taught us, that we can pass an absolute path for an engine? Let’s use that again.

$ openssl engine -t -c `pwd`/md5-engine.so
(/home/levitte/tmp/md5-engine/md5-engine.so) A simple md5 engine for demonstration purposes
Loaded: (MD5) A simple md5 engine for demonstration purposes
     [ available ]

Finally, we can compare the result of a run with OpenSSL’s own implementation.

$ echo whatever | openssl dgst -engine `pwd`/md5-engine.so -md5
engine "MD5" set.
(stdin)= d8d77109f4a24efc3bd53d7cabb7ee35
$ echo whatever | openssl dgst -md5
(stdin)= d8d77109f4a24efc3bd53d7cabb7ee35

And there we have it, a functioning implementation of an MD5 engine.

Just as in the previous lesson, I made a slightly more elaborate variant of this engine, again using the GNU auto* tools. This variant also has a different way of initialising digest_md5 by taking a copy of the OpenSSL builtin structure and just replacing the appropriate fields. This allows it to be used for operations involving public/private keys as well.

Next lesson will be about adding a symmetric cipher implementation.

[^1] I needed to make a small change in rfc1321/global.h

 /* UINT2 defines a two byte word */
-typedef unsigned short int UINT2;
+typedef uint16_t UINT2;

 /* UINT4 defines a four byte word */
-typedef unsigned long int UINT4;
+typedef uint32_t UINT4;

The reason is that I’m running on a 64-bit machine, so UINT4 ended up not being 32 bits, and the reference MD5 gave me faulty results.

November 23, 2015 09:19 PM

Errata Security

Some notes on the eDellRoot key

It was discovered this weekend that new Dell computers, as well as old ones with updates, come with a CA certificate ("eDellRoot") that includes the private key. This means hackers can eavesdrop on the SSL communications of Dell computers. I explain how in this blog post, just replace the "ca.key" with "eDellRoot.key".

If I were a black-hat hacker, I'd immediately go to the nearest big city airport and sit outside the international first class lounges and eavesdrop on everyone's encrypted communications. I suggest "international first class", because if they can afford $10,000 for a ticket, they probably have something juicy on their computer worth hacking.

I point this out in order to describe the severity of Dell's mistake. It's not a simple bug that needs to be fixed, it's a drop-everything and panic sort of bug. Dell needs to panic. Dell's corporate customers need to panic.

Note that Dell's spinning of this issue has started, saying that they aren't like Lenovo, because they didn't install bloatware like Superfish. This doesn't matter. The problem with Superfish wasn't the software, but the private key. In this respect, Dell's error is exactly as bad as the Superfish error.

by Robert Graham ( at November 23, 2015 07:24 PM

Chris Siebenmann

Why I care about how fast CPython is

Python is not really a fast language. People who use Python (me included) more or less accept this (or else it would be foolish to write things in the language), but still we'd often like our code to be faster. For a while the usual answer to this has been that you should look into PyPy in order to speed up Python. This is okay as far as it goes, and PyPy certainly can speed up some things, but there are reasons to still care about CPython's speed and wish for it to be faster.

The simple way of putting the big reason is to say that CPython is the universal solvent of Python. To take one example, Apache's mod_wsgi uses CPython; if you want to use it to deploy a WSGI application in a relatively simple, hands-off way, you're stuck with however fast CPython is. Another way that CPython is a universal solvent is that CPython is more or less everywhere; most Unix systems have a /usr/bin/python, for example, and it's going to be some version of CPython. Finally, CPython is what most people develop with, test against, write deployment documentation for, and so on; this is both an issue of whether a package will work at all and an issue of whether it's doing things that defeat much of PyPy's speedups.

Thus, speeding up CPython speeds up 'all' Python in a way that improvements to PyPy seem unlikely to. Maybe in the future PyPy will be so pervasive (and so much a drop in replacement for CPython) that this changes, but that doesn't seem likely to happen any time soon (especially since PyPy doesn't yet support Python 3 and that's where the Python world is moving).

(Some people will say that speeding up things like Django web apps is unimportant, because web apps mostly don't use CPU anyways but instead wait for IO and so on. I disagree with this view on Python performance in general, but specifically for Django web apps it can be useful if your app uses less CPU in order to free up more of it for other things, and that's what 'going fast' translates to.)

by cks at November 23, 2015 05:37 AM

Everything Sysadmin

What JJ Abrams just revealed about Star Wars

Last night (Saturday, Nov 21) I attended a fundraiser for the Montclair Film Festival where (I kid you not) for 90 minutes we watched Stephen Colbert interview J.J. Abrams.

What I learned:

  • He finished mixing The Force Awakens earlier that day. 2:30am California time. He then spent all day traveling to Newark, New Jersey for the event.
  • After working on it for so long, he's sooooo ready to get it in the theater. "The truth is working on this movie for nearly three years, it has been like living with the greatest roommate in history for too long. It's time for him to get his own place. It's been great and I can't tell you how much I want him to get out into the world and meet other people because we know each other really well. But really, 'Star Wars' is bigger than all of us. So I'm thrilled beyond words (to be involved) and terrified more than I can say."
  • When they played the Force Awakens trailer, J.J. said he had seen it before, but this was the first time he saw it with a live audience.
  • J.J. was influenced at an early age by "The Force" as being a non-denominational power for good.
  • Stephen Colbert saw the original Star Wars 3 weeks early thanks to a contest. He gloated that he's been excited about Star Wars for 3 weeks longer than anyone here.
  • Jennifer Garner worked for Colbert as a nanny when she was starting out in acting and needed the money.
  • Stephen Colbert auditioned for J.J.'s first film but didn't get the part. The script was called Filofax but was called "Taking Care of Business" when it was released. Colbert said he remembered auditioning for Filofax and then seeing TCoB in the theater and thinking, "Gosh this seems a lot like Filofax!"
  • J.J. acted in a film. He had a cameo in "Six Degrees of Separation". They showed a clip off YouTube.
  • While filming the pilot for "Lost" the network changed presidents. The new one wasn't very confident in the new series and asked them to film an ending for the pilot that would permit them to show it as a made-for-TV movie instead. He pretended to "forget" the request and the president never brought it up again.

The fundraiser was a total win. 2800 people were there (JJ said "about 2700 more than I expected"). If you are in the NY/NJ area, I highly recommend you follow them on Facebook and check out the next film festival on April 29 to May 8, 2016.

November 23, 2015 01:15 AM

November 21, 2015

Colin Percival

A FreeBSD AMI Builder AMI

I've been working on the FreeBSD/EC2 platform for a long time; five years ago I finally had it running, and for the past few years it has provided the behaviour FreeBSD users expect — stability and high performance — across all EC2 instance types. Making the platform work was just the first step though; next comes making it usable.

Some people are happy with simply having a virtual machine which runs the base FreeBSD system; for them, the published FreeBSD/EC2 images (which, as of FreeBSD 10.2-RELEASE, are built by the FreeBSD Release Engineer) will be sufficient. For users who want to use "stock" FreeBSD but would like to have some extra setup performed when the instance launches — say, to install some packages, edit some configuration files, and enable some services — I wrote the configinit tool. And for users who need to make changes to FreeBSD itself, I added code for building AMIs into the FreeBSD source tree, so you can take a modified FreeBSD tree and run make ec2ami to generate a reusable image.

There was one group for whom I didn't have a good solution yet, however: Users who want to create FreeBSD AMIs with minor changes, without wanting to go to the effort of performing a complete FreeBSD release build. Ironically, I am exactly such a user: All of the EC2 instances I use for my online backup service make use of spiped to protect sshd and provide encrypted and authenticated tunnels to my mailserver and package server; and so having spiped preinstalled with the appropriate keys would significantly streamline my deployment process. While it's possible to launch a FreeBSD EC2 instance, make some changes, and then ask EC2 to create a new AMI out of it, this rarely produces a "clean" AMI: A lot of code runs when an EC2 instance first launches — creating the ec2-user user, installing the appropriate SSH public key, creating SSH host keys, growing the root filesystem if launched with a larger root disk, downloading and installing updates to FreeBSD, downloading and installing packages... — and much of this needs to be manually reverted before a reusable AMI can be created; not to mention command histories and log files written during the configuration process, which the more fastidious among us may wish to avoid publishing. To solve this problem, I present the FreeBSD AMI Builder, now available as ami-28682f42 in the EC2 US-East-1 region.

November 21, 2015 12:15 PM

November 20, 2015


Implementing 'git show' in Mercurial

One of my frequently used git commands is 'git show <rev>'. As in, "show me what the heck this guy did here." Unfortunately, Mercurial doesn't have the same command, but it's easy enough to implement it using an alias in your .hgrc. The command y...
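
The post is truncated here as well, but an alias in the spirit it describes (not necessarily the author's exact one) can be as short as this in your .hgrc:

[alias]
show = log --patch --rev

After that, "hg show <rev>" prints the changeset metadata plus its diff, much like "git show <rev>".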

by Scott Hebert at November 20, 2015 02:00 PM

November 17, 2015


Factory Reset Broken OnePlus One

This is a sad story. A few weeks ago, I set my OnePlus One down on a table outside so that it would not be in my pocket while I worked on the pool. I have done enough work on the pool to know that at any moment, I might go tumbling in! After successful...

by Scott Hebert at November 17, 2015 07:55 PM

November 16, 2015

Steve Kemp's Blog

lumail2 nears another release

I'm pleased with the way that Lumail2 development is proceeding, and it is reaching a point where there will be a second source-release.

I've made a lot of changes to the repository recently, and most of them boil down to moving code from the C++ side of the application, over to the Lua side.

This morning, for example, I updated the handling of index.limit to be entirely Lua based.

When you open a Maildir folder you see the list of messages it contains, as you would expect.

The notion of the index.limit is that you can limit the messages displayed, for example:

  • See all messages: Config:set( "index.limit", "all")
  • See only new/unread messages: Config:set( "index.limit", "new")
  • See only messages which arrived today: Config:set( "index.limit", "today")
  • See only messages which contain "Steve" in their formatted version: Config:set( "index.limit", "steve")

These are just examples that are present as defaults, but they give an idea of how things can work. I guess it isn't so different to Mutt's "limit" facilities - but thanks to the dynamic Lua nature of the application you can add your own with relative ease.

One of the biggest changes, recently, was the ability to display coloured text! That was always possible before, but a single line could only be one colour. Now colours can be mixed within a line, so this works as you might imagine:

Panel:append( "$[RED]This is red, $[GREEN]green, $[WHITE]white, and $[CYAN]cyan!" )

Other changes include a persistent cache of "stuff", which is Lua-based, the inclusion of at least one luarocks library to parse Date: headers, and a simple API for all our objects.

All good stuff. Perhaps time for a break in the next few weeks, but right now I think I'm making useful updates every other evening or so.

November 16, 2015 10:04 PM

The Geekess

Graphics linkspam: Bugs, bugs, I’m covered in bugs!

Reporting bugs to Intel graphics developers (or any open source project) can be intimidating. You want the right developers to pay attention to your bug, so you need to provide enough information to help them classify the bug. Ian Romanick describes what makes a good Mesa bug report.

One of the things Ian talks about is tagging your bug report with the right Intel graphics code name, and providing PCI ID information for the graphics hardware. Chad Versace provides a tool to find out which Intel graphics you have on your system. That tool is also useful for translating the marketing names to code names and hardware details (like whether your system is a GT2 or GT3).

In the “omg, that’s epic” category, Adrian  analyzes the graphics techniques used in Grand Theft Auto V on PS3. It’s a great post with a lot of visuals. I love the discussion of deleting every-other-pixel to improve performance in one graphics stage, and then extrapolating them back later. It’s an example of something that’s probably really hardware specific, since Kristen Hogsberg mentioned he doesn’t think it will be much help on Intel graphics hardware. When game designers know they’re only selling into one platform, they can use hardware-specific techniques to improve graphics performance. However, it will bite them later if they try to port their game to other platforms.

by sarah at November 16, 2015 08:07 PM

November 15, 2015

Evaggelos Balaskas

dns opennic dnscrypt

A few days ago, I gave a presentation on fosscomm 2015 about DNS, OpenNic Project and DNScrypt

So without further ado, here it is: dns_opennic_dnscrypt.pdf

November 15, 2015 07:21 PM


November 12, 2015

Cryptography Engineering

Why the Tor attack matters

Earlier today, Motherboard posted a court document filed in a prosecution against a Silk Road 2.0 user, indicating that the user had been de-anonymized on the Tor network thanks to research conducted by a "university-based research institute".

Source: Motherboard.
As Motherboard pointed out, the timing of this research lines up with an active attack on the Tor network that was discovered and publicized in July 2014. Moreover, the details of that attack were eerily similar to the abstract of a (withdrawn) BlackHat presentation submitted by two researchers at the CERT division of Carnegie Mellon University (CMU).

A few hours later, the Tor Project made the allegations more explicit, posting a blog entry accusing CMU of accepting $1 million to conduct the attack. A spokesperson for CMU didn't exactly deny the allegations, but demanded better evidence and stated that he wasn't aware of any payment. No doubt we'll learn more in the coming weeks as more documents become public.

You might wonder why this is important. After all, the crimes we're talking about are pretty disturbing. One defendant is accused of possessing child pornography, and if the allegations are true, the other was a staff member on Silk Road 2.0. If CMU really did conduct Tor de-anonymization research for the benefit of the FBI, the people they identified were allegedly not doing the nicest things. It's hard to feel particularly sympathetic.

Except for one small detail: there's no reason to believe that the defendants were the only people affected.

If the details of the attack are as we understand them, a group of academic researchers deliberately took control of a significant portion of the Tor network. Without oversight from the University research board, they exploited a vulnerability in the Tor protocol to conduct a traffic confirmation attack, which allowed them to identify Tor client IP addresses and hidden services. They ran this attack for five months, and potentially de-anonymized thousands of users. Users who depend on Tor to protect them from serious harm.

It's quite possible that the CMU researchers exercised strict protocols to ensure that they didn't accidentally de-anonymize innocent bystanders. This would be standard procedure in any legitimate research involving human subjects, particularly research that has the potential to do harm. If the researchers did take such steps, it would be nice to know about them. CMU hasn't even admitted to the scope of the research project, nor have they published any results, so we just don't know.

While most of the computer science researchers I know are fundamentally ethical people, as a community we have a blind spot when it comes to the ethical issues in our field. There's a view in our community that Institutional Review Boards are for medical researchers, and we've somehow been accidentally caught up in machinery that wasn't meant for us. And I get this -- IRBs are unpleasant to work with. Sometimes the machinery is wrong.

But there's also a view that computer security research can't really hurt people, so there's no real reason for this sort of ethical oversight machinery in the first place. This is dead wrong, and if we want to be taken seriously as a mature field, we need to do something about it.

We may need different machinery, but we need something. That something begins with the understanding that active attacks that affect vulnerable users can be dangerous, and should never be conducted without rigorous oversight -- if they must be conducted at all. It begins with the idea that universities should have uniform procedures for both faculty researchers and quasi-government organizations like CERT, if they live under the same roof. It begins with CERT and CMU explaining what went on with their research, rather than treating it like an embarrassment to be swept under the rug.

Most importantly, it begins with researchers looking beyond their own research practices. So far the response to the Tor news has been a big shrug. It's wonderful that most of our community is responsible. But none of that matters if we look the other way when others in our community fail to act responsibly.

by Matthew Green ( at November 12, 2015 04:07 PM


Bureaucracy ~= Automation

That was one of the big points in this morning's keynote by Mickey Dickerson. Laws and the federal system are designed to be a big hunk of automation, executed by humans. Failures in the processes are entirely predictable, and like CPUs, the human processors likely can't do much about it.

So, so true.

He also pointed out that the US Federal System is a huge open-source project kicked off a couple hundred years ago, with 538 committers at the moment. There are lessons in big open-source project management in here somewhere, I'm sure.

It turns out one of the biggest goals of the US Digital Service is to ferret out the worst of the bad automation and try to turn it around. was their first big project, and right now they're making inroads with the Veterans Administration. They're doing some other things they can't talk about yet because government, but this is all worthy-cause kinds of things.

During my last job-hunt I considered applying, but didn't go there due to where I wanted to live and an unclear answer on whether they'd take a remote worker. Still, this is good stuff and I wish them all the luck and support.

by SysAdmin1138 at November 12, 2015 03:38 AM

November 10, 2015

Racker Hacker

Talking to college students about information security

I was recently asked to talk to Computer Information Systems students at the University of the Incarnate Word here in San Antonio about information security in the business world. The students are learning plenty of the technical parts of information security and the complexity that comes from dealing with complicated computer networks. As we all know, it's the non-technical things that are often the most important in those tough situations.

My talk, “Five lessons I learned about information security”, lasted for about 30 minutes and then I took plenty of technical and non-technical questions from the students. I’ve embedded the slides below and I’ll go through the lessons within the post here.

Lesson 1: Information security requires lots of communication and relationships

Most of what information security professionals do involves talking about security. There are exceptions to this, however. For example, if your role is highly technical in nature and you’re expected to monitor a network or disassemble malware, then you might be spending the majority of your time in front of a screen doing highly technical work.

For the rest of us, we spend a fair amount of time talking about what needs to be secured, why it needs to be secured, and the best way to do it. Information security professionals shouldn’t be alone in this work, though. They must find ways to get the business bought in and involved.

I talked about three general buckets of mindsets that the students might find in an average organization:

“Security is mission critical for us and it’s how we maintain our customers’ trust.”

These are your allies in the business and they must be “read into” what’s happening in the business. Share intelligence with them regularly and highlight their accomplishments to your leadership as well as theirs.

“Security is really important, but we have lots of features to release. We will get to it.”

These people often see security as a bolt-on, value added product feature. Share methods of building in security from the start and make it easier for them to build secure products. Also, this is a great opportunity to create technical standards as opposed to policies (more on that later).

“I opened this weird file from someone I didn’t know and now my computer is acting funny.”

There’s no way to sugar-coat this one: this group is your biggest risk. Take steps to prevent them from making mistakes in the first place and regularly send them high-level security communications. Your goal here is to send them information that is easy to read and keeps security front of mind for them without inundating them with details.

Lesson 2: Spend the majority of your time and money on detection and response capabilities

This is something that my coworker Aaron and I talk about in our presentation called “The New Normal”. Make it easier to detect an intruder and respond to the intrusion. Don’t allow them to quietly sneak through your network undetected. Force them to be a bull in a china shop. If they cross a network segment, make sure there’s an alert for that. Ensure that they have to make a bunch of noise if they wander around in your network.

When something is detected, you need to do two things well: respond to the incident and communicate with the rest of the organization about it.

The response portion requires you to have plenty of information available when it’s time to assess a situation. Ensure that logs and alerts are funneled into centralized systems that can aggregate and report on the events in real time (or close to real time). Take that information and understand where the intruders are, what data is at risk, and how severe the situation really is.

From there, find a way to alert the rest of the organization. The United States Department of Defense uses DEFCON for this. They can communicate the severity of a situation very quickly to thousands of people by using a simple number designation. That number tells everyone what to do, no matter where they are. Everyone has an idea of the gravity of the situation without needing to know a ton of details.

This is also a good opportunity to share limited intelligence with your allies in the business. They may be able to go into battle with you and get additional information that will help the incident response process.

Lesson 3: People, process, and technology must be in sync

Everything revolves around this principle. If your processes and technology are great, but your people never follow the process and work around the technology, you have a problem. Great technology and smart people without process is also a dangerous mix. Just like a three-legged stool, all three legs must be strong to keep it stable. The same goes for any business.

When an incident happens, don’t talk about people, what could have been done, or vendors. Why? Because no matter how delicate you are, you will eventually “call the baby ugly”. Calling the baby ugly means that you insult someone’s work or character without intending to, and then that person withdraws from the process or approaches the situation defensively. That won’t lead to a good outcome and will usually create plenty of animosity.

Assume the worst will happen again and make your processes and technologies better over time. This is an iterative process, so keep in mind that a thousand baby steps will always deliver more value than one giant step.

Lesson 4: Set standards, not policies

Policies are inevitable. We get them from our compliance programs, our governments, and other companies. They’re required, but they’re horribly annoying. Have you ever read through ISO 27002 or NIST 800-53? If you have, you know what I mean. Don’t get me started on PCI-DSS 3.1, either.

What’s my point? Policies are dry. They’re long. They’re often chock-full of requirements that are really difficult to translate into technical changes. There’s no better way to clear out a room of technical people than to say “Let’s talk about PCI-DSS.” (Seriously, try this at least once. It’s amazing.)

You need to use the right kind of psychology to get the results you want. Threatening someone with policy is like getting someone to go in for a root canal. They know they need it, but they know how much it will hurt.

Instead, create technical standards that are actionable and valuable. If you know you need to meet PCI-DSS and ISO 27002 for your business, create a technical standard that allows someone in the business to design systems that meet both compliance programs. Make it actionable and then show them the results of their labor when they’re done.

Also, give them a method for checking their systems against the standard in an automated way. Nobody wants The Spanish Inquisition showing up at the end of a project to say “Hey, you missed something!”. They’ll be able to check their progress along the way.

Lesson 5: Don’t take security incidents personally

This one is still a challenge for me. Security incidents will happen. They certainly won’t be fun. However, when the smoke clears, look at the positive aspects of the incident. These situations highlight two critical things:

  1. Room for improvement (and perhaps additional spending)
  2. What attackers really want from your business

Take the time to understand what type of attacker you just dealt with and what their target really was. If a casual script kiddie found a weakness, you obviously need to invest in more security basics, like network segmentation and hardening standards. If a nation state or some other type of determined attacker found a weakness, you need to understand what they were trying to get. This can be challenging and sometimes third parties can help give an unbiased view.

Required reading

There are three really helpful books I mentioned in the presentation:

These three books help you figure out how to make change, build relationships, and work around challenges in IT.

Final thoughts

If you haven’t been to your local university to meet the next generation of professionals, please take the time to do so. There’s nothing more exciting than talking with people who have plenty of knowledge and are ready to embrace something new. In addition, they yearn to talk to people who have more experience in the real world.

Thanks to John Champion from UIW for asking me to do a talk! It was a fun experience and I can’t wait to do the next one.


by Major Hayden at November 10, 2015 02:50 PM

Encrypt and decrypt files to public keys via the OpenSSL Command Line

This small tutorial will show you how to use the openssl command line to encrypt and decrypt a file using a public key. We will first generate a random key, encrypt that random key against the public key of the other person, and use that random key to encrypt the actual file using symmetric encryption.
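
Condensed, that workflow looks roughly like the following (file names are only examples):

$ openssl rand -base64 32 > key.bin
$ openssl rsautl -encrypt -pubin -inkey recipient-pub.pem -in key.bin -out key.bin.enc
$ openssl enc -aes-256-cbc -salt -in file.pdf -out file.pdf.enc -pass file:./key.bin

The recipient reverses it with "openssl rsautl -decrypt" on the key and "openssl enc -d -aes-256-cbc" on the file.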

November 10, 2015 12:00 AM

November 09, 2015

Sign and verify text/files to public keys via the OpenSSL Command Line

This small guide will show you how to use the OpenSSL command line to sign a file, and how to verify the signing of this file. You can do this to prove ownership of a key, or to prove that a file hasn't been modified since you signed it. This works both with small text files as well as huge photos, documents or PDF files.
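
In short, with illustrative file names, the two operations boil down to:

$ openssl dgst -sha256 -sign private.key -out document.pdf.sig document.pdf
$ openssl dgst -sha256 -verify public.pem -signature document.pdf.sig document.pdf

The second command prints "Verified OK" when the signature matches.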

November 09, 2015 12:00 AM

November 08, 2015


Security profiling: TSA

Being of a gender-nonconforming nature has revealed certain TSA truths to me.

Yes, they do profile.

It's a white-list, unlike the police profiling that gets people into trouble. There is a 'generic safe-traveler' that they compare everyone to. If you conform, you get the minimum screening everyone gets. If you don't conform, you get some extra attention. Some ways to earn extra attention:

  • Don't look like your government ID.
  • Wear your hair up, or in braids (they've seen those kung-fu movies too)
    • Yes, they put their gloved hands in your hair and feel round. Anyone with dreads knows this all too damn well.
  • Fly with a name other than the one on your government issued ID.
  • Have body-parts replaced with things, such as a prosthetic leg, or knee (if going through metal detectors).
  • Have junk when there shouldn't be junk (or so they think).
  • Have breasts when there shouldn't be breasts (or so they think).
  • Have breast prosthesis instead of actual breasts (mastectomy patients love this).
  • And many more.

Here is an exercise you can try the next time you fly in the US. When you get to the other side of the scanner (this only works for the porno-scanners, not the metal-detectors), while you are waiting for your stuff to come out of the X-ray machine, look at the back of the scanner. Watch the procedure. Maybe put your shoes on slow to catch it all. You'll notice something I've noticed:

There are always two officers back there, a man and a woman. When someone steps in to get scanned, they have to either hit a button to indicate the gender of the person being scanned, or are presented with a side-by-side with both genders and the officer has to choose which to look at. They have a second, maybe two, to figure out which body baseline to apply to you, and those of us who are genderqueer confuse things. I fail the too-much-junk test all the time and get an enhanced patdown in my inner-thighs.

Yes, but with PreCheck you can skip that.

This actually proves my point. By voluntarily submitting to enhanced screening, I can bypass the flight-day screen annoyances. It's admitting that I no longer fit the profile of 'generic safe traveler' and need to achieve 'specific safe traveler' status. That, or I can have my bits rearranged and conform that way. Whichever.

by SysAdmin1138 at November 08, 2015 11:08 PM

Ansible-cmdb v1.7: New columns and togglable columns

I've just released ansible-cmdb v1.7. Ansible-cmdb takes the output of Ansible's fact gathering and converts it into a static HTML overview page containing system configuration information. It supports multiple templates and extending information gathered by Ansible with custom data.

This is a feature and bugfix release, including the following changes:

  • Support for showing only specific columns using the -c / --columns commandline argument (see the example after this list). This must be supported by the template. Currently html_fancy and txt_table are supported.
  • The html_fancy template now properly automatically focusses the search box.
  • Two new columns were added to the html_fancy template: kernel and timestamp. (By Rowin Andruscavage)
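
As a quick illustration of the column selection (host pattern and paths are examples): gather facts with Ansible, then feed the output directory to ansible-cmdb with the columns you care about:

$ ansible -m setup --tree out/ all
$ ansible-cmdb -t html_fancy -c name,fqdn,main_ipv4,os,kernel out/ > overview.html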

Get the new release from the Github releases page.

Screenshot showing the output of ansible-cmdb -c name,fqdn,main_ipv4,os,kernel


by admin at November 08, 2015 09:38 AM

November 07, 2015

Let's Encrypt with DirectAdmin or other Web Control Panels

Let's Encrypt is a new certificate authority, recognized by all major browsers. They make it a breeze to set up TLS certificates for your web server. And for free! Let's Encrypt is supported by major players like Mozilla, Akamai, Cisco, the EFF, the Internet Security Research Group and others. Let's Encrypt provides free, automatic and secure certificates so that every website can be secured with an SSL certificate. This article shows you how to set up Let's Encrypt with the DirectAdmin web control panel. The guide is generic, so it works for other control panels as well. For now it works with the beta, and requires some Linux knowledge and root access to the hosting server.
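
To give a sense of the moving parts (the client is still in beta, so flags may change; the domain and webroot path below are examples), fetching a certificate with the webroot method looks roughly like this:

$ git clone
$ cd letsencrypt
$ ./letsencrypt-auto certonly --webroot -w /home/user/domains/ -d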

November 07, 2015 12:00 AM

November 05, 2015

Steve Kemp's Blog

lumail2 approaches readiness

So the work on lumail2 is going well, and already I can see that it is a good idea. The main reason for (re)writing it is to unify a lot of the previous ad-hoc primitives (i.e. lua functions) and to try and push as much of the code into Lua, and out of C++, as possible. This work is already paying off with the introduction of new display-modes and simpler implementation.

View modes are an important part of lumail, because it is a modal mail-client. You're always in one mode:

  • maildir-mode
    • Shows you lists of Maildir-folders.
  • index-mode
    • Shows you lists of messages inside the maildir you selected.
  • message-mode
    • Shows you a single message.

This is nothing new, but there are two new modes:

  • attachment-mode
    • Shows you the attachments associated with the current message.
  • lua-mode
    • Shows you your configuration-settings and trivia.

Each of these modes draws lines of text on the screen, and those lines consist of things that Lua generated. So there is a direct mapping:

Mode      Lua Function
maildir   function maildir_view()
index     function index_view()
message   function message_view()
lua       function lua_view()

With that in mind it is possible to write a function to scroll to the next line containing a pattern like so:

function find()
   local pattern = Screen:get_line( "Search for:" )

   -- Get the global mode.
   local mode = Config:get("global.mode")

   -- Use that to get the lines we're currently displaying
   loadstring( "out = " .. mode .. "_view()" )()

   -- At this point "out" is a table containing lines that
   -- the current mode wishes to display.

   -- .. do searching here.
end
Thus the whole thing is dynamic and mode-agnostic.

The other big change is pushing things to lua. So to reply to an email, populating the new message, appending your ~/.signature, is handled by Lua. As is forwarding a message, or composing a new mail.

The downside is that the configuration-file is now almost 1000 lines long, thanks to the many little function definitions, and key-binding setup.

At this rate the first test-release will be out at the weekend, but API documentation, and sample configuration file might make interesting reading until then.

November 05, 2015 09:52 PM

toolsmith #110: Sysinternals vs Kryptic

26 OCT 2015 marked some updates for the venerable Windows Sysinternals tool kit, presenting us with the perfect opportunity to use them in a live malware incident response scenario. Immediately relevant updates include Autoruns v13.5, Sigcheck v2.30, RAMMap v1.4, and Sysmon 3.11.
Quoting directly from the Sysinternals Site Discussion, the updates are as follows:
  • Autoruns: the most comprehensive autostart viewer and manager available for Windows, now shows 32-bit Office addins and font drivers, and enables re-submission of known images to Virus Total for a new scan.
  • Sigcheck: displays detailed file version information, image signing status, catalog and certificate store contents, and now includes updated Windows 10 certificate OIDs, support for checking corresponding MUI (internationalization strings) files for more accurate version data, the version company name, and the signature publisher for signed files.
  • RAMMap: a tool that reports detailed information about physical memory usage, is now compatible with Windows 10 and includes a bug fix.
  • Sysmon: logs security relevant process, network and file events to the event log. This update fixes a memory leak for DLL image load event monitoring and removes a misleading warning when processing configuration files. 
For Sysmon we'll include use of the 20 JULY 2015 update to 3.1 as well, given that, since last we discussed Sysmon, @markrussinovich and team have added information about the thread initialization function for CreateRemoteThread events, including the DLL and function name and address.
I grabbed a Kryptik sample, a Zbot/FakeAV variant, from VirusShare that exhibits well using these tools. VirusTotal reference to this sample is here. A Windows 7 x64 virtual machine fell victim to this malware and gave up perfect indicators via these Sysinternals specialists.

Beginning with Autoruns v13.5, an immediate suspect presented itself per Figure 1.

Figure 1

I am pretty darned sure that C:\users\malman\appdata\roaming\ibne\haho.exe does not serve a legitimate purpose, and more importantly, sure as hell doesn't belong in HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Run,  malware's favorite start-me-up. Right click the entry in Autoruns, select Check VirusTotal, and you'll get a fairly quick answer. You'll see a Scanning File status under the VirusTotal column header while its scanning. Turns out haho.exe is not an OLE server and is a password stealing Kryptik variant. :-)

I first ran sigcheck C:\users\malman\appdata\roaming\ibne\haho.exe and received the expected details regarding the version company name as well as the signature publisher, specifically the lack thereof here. The sample was not signed, and sigcheck validated its lack of legitimacy by confirming the Company as eSafe (it's not), the Publisher as n/a, and the Description as OLE Server, all noted in Figure 2.

Figure 2

As a counterpoint to a malicious file, when you run sigcheck against a legitimate (usually) signed file such as C:\Windows\System32\notepad.exe, you receive much more robust, comforting results, as seen in Figure 3.

Figure 3
Once you've established some useful indicators such as haho.exe, its installation path, and the subsequent registry key, you can derive related and useful information using RAMMap live on the victim system, as done with Sigcheck. You're likely accustomed to thinking that RAMMap is just a pretty picture of memory usage, but it does offer more. You can use the RAMMap Processes tab to determine active sessions, PID, and memory utilization, which I've done for our malicious process in Figure 4.

Figure 4
Similar results are derived from the File Summary tab including total and standby memory utilized.

Figure 5
Both views were sorted, then searched specifically for C:\users\malman\appdata\roaming\ibne\haho.exe as discovered with Autoruns.

With the 20 JUL 2015 update to Sysmon comes the addition of Event ID 8 for CreateRemoteThread events. This is an absolute bonus as malware often injects DLLs using the CreateRemoteThread function. There's a great reference to this method at the War Room.
For Sysmon reference, I'll point you back to my post, Sysmon 2.0 & EventViz.
To help elaborate on Sysmon results I took advantage of @matt_churchill's Sysmon_Parse script, which in turn utilizes Microsoft's essential Log Parser, @TekDefense's TekCollect, @nirsoft's IPNetInfo, and @woanware's virustotalchecker. Its installation and use guidelines are written right into the script as remarks; it's incredibly straightforward. You may run into a glitch with httplib2, a Python dependency for the TekCollect script. Overcome it by downloading the source, copying it to the Lib folder of your Python interpreter installation (commonly C:\Python27 on Windows), changing directory to C:\Python27\Lib\httplib2, and running python setup.py install. Thereafter you need only run sysmon_parse.cmd and make use of the results found in the Results_date directory, in individual text files.

Sysmon_Parse first robocopies the Sysmon event log to a temporary .evtx, then parses it to a CSV file, sysmon_parsed.txt, with Log Parser. Note that you can supply an event log from other machines for Sysmon_Parse to crunch, on an analysis workstation rather than the victim host. Sysmon_Parse then calls TekCollect to extract artifacts such as MD5, SHA1, and SHA256 hashes, assuming you've enabled all hash types with sysmon -c -n -l -h md5,sha1,sha256. TekCollect also parses out all domains, URLs, IPs, and executables noted in the Sysmon log. The IPs should match your Sysmon Event ID 3s (Network Connection) quite nicely. I also found the SHA1 for this sample, dc965d0a38505001c800049a6c39817aec3616f0, in the Sysmon_Parse SHA1 text results, so clearly that TekCollect parser works well.
In this case, we're curious to determine if haho.exe used CreateRemoteThread by filtering for Event ID 8. The answer is, of course, yes; an example result reads as follows:
19564,2015-11-05 02:07:21,8,Microsoft-Windows-Sysmon,WIN-QN9NR3Q1MNR,S-1-5-18,2015-11-05 02:07:21.631|{2F96EDF1-B473-563A-0000-00106D1E1000}|1756|C:\Users\malman\AppData\Roaming\Ibne\haho.exe|{2F96EDF1-B9D8-563A-0000-0010BF9B2000}|568|C:\Windows\SysWOW64\WerFault.exe|1008|0x00000000000A45E0||
The source PID is 1756, the source image is our Kryptik friend haho.exe, the target PID is 568, and the target image is WerFault.exe, used for Windows Error Reporting.
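
If you'd rather slice the Log Parser CSV output programmatically than grep it, a few lines of Python will do. This is a minimal sketch, not part of Sysmon_Parse itself; it assumes the column layout shown in the record above (record number, timestamp, event ID, provider, host, SID, then the pipe-delimited event data), which may differ with your Sysmon version and configuration.

import csv

# Pull CreateRemoteThread (Event ID 8) records out of sysmon_parsed.txt,
# the CSV that Sysmon_Parse produces with Log Parser. Column positions are
# inferred from the sample record above.
with open("sysmon_parsed.txt", newline="") as f:
    for row in csv.reader(f):
        if len(row) < 7 or row[2] != "8":      # row[2] holds the event ID
            continue
        data = row[6].split("|")               # pipe-delimited event data
        utc, _, src_pid, src_img, _, tgt_pid, tgt_img = data[:7]
        print("{}  {} (PID {}) -> {} (PID {})".format(
            utc, src_img, src_pid, tgt_img, tgt_pid))

The same loop is easy to repoint at Event ID 3 if you want to cross-check the network connections and IPs that TekCollect extracts.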
CreateRemoteThread tracking is a great addition to Sysmon, already an outstanding tool.

Wrap Up

You've got a lot to take a look at packed in here. In addition to the Sysinternals updates, Sysmon_Parse and its dependencies give you a lot to consider, or reconsider, for your DFIR toolkit.
You've gotten a look at using them all to inspect a host pwned by a Kryptik/Zbot sample and learned that you can quickly discover quite a bit about malware behavior using just a few simple but powerful tools. Hopefully you're inspired to grab this sample (I'll send it to you if you ask) and experiment with this excellent collection of DFIR goodness.

Ping me via email or Twitter if you have questions: russ at holisticinfosec dot org or @holisticinfosec.

Cheers…until next month.

by Russ McRee at November 05, 2015 05:14 PM

Errata Security

We should all follow Linus's example

Yet another Linus rant has hit the news, where he complains about how "your shit code is fucking brain damaged". Many have complained about his rudeness, how it's unprofessional, and part of the culture of harassment in tech. They are wrong. Linus Torvalds is the nicest guy in tech. We should all try to be more like him.

The problem in tech isn't bad language ("your shit code"), but personal attacks ("you are shit").

A good example is Brendan Eich, who was fired from his position as Mozilla CEO because people disagreed with his political opinions. Another example is Nobel prize winner Tim Hunt, who was fired because people took his pro-feminist comments out of context and painted him as a misogynist. Another example is Pax Dickinson, who was fired as CTO of Business Insider because of jokes he made before founding the company. A programmer named Curtis Yarvin was booted from a tech conference because he's some sort of monarchist. Yet more examples are the doxing and bomb threats that censor both sides of the GamerGate fiasco. The entire gamer community is a toxic cesspool of personal attacks. We have another class of people, the "SJW"s, who viciously attack those they disagree with, trying to get them fired. They treat those who make a mistake or disagree with them as evil misogynists who need to be punished, rather than as people who need to be corrected -- or debated.

What all these things have in common is that they attack the person. They strive to dehumanize the person, to make them an un-person, so that we stop empathizing with them. These attacks seek to intimidate and punish, not to educate or inform.

Linus punishes nobody. He intimidates nobody. He has that power, to act as a dictator to ban people for life from the Linux kernel, but doesn't use that power. Indeed, his use of power demonstrates extreme humility. Even in this case, he declares he doesn't "want" to accept the code into the kernel, not that he "won't".

His rant is designed to inform. The Linux kernel is a work of art based on certain consistent principles, such as not needlessly obfuscating implementation details. He takes the time to yet again lay out his philosophy that guides the kernel. Yes, his language is strong, but I'm not sure how else he'd communicate the unreasonableness of the code in question.

Personal attacks are increasingly the norm in our society. Pick any political debate -- the basis of people's arguments is not that the opposition is wrong, but that they are unreasonable and sub-human. Such discourse would improve if everyone emulated Linus's example, as we'd start debating the issues (albeit with coarse language) rather than attacking each other.

Seriously, try it. Can you debate issues like abortion, anti-vaxx, taxes, immigration, or global warming in such a way that it doesn't include dehumanizing personal attacks? Can you debate such issues with the overwhelming belief that your opponents are reasonable people? Linus displays that belief. You don't.

Linus could, of course, be even nicer. But the thing is, Linus doesn't ruin careers, unlike these harassers feted by WIRED.

by Robert Graham at November 05, 2015 12:29 AM

November 04, 2015

Errata Security

The Godwin fallacy

As Wikipedia says:
Godwin's law and its corollaries would not apply to discussions covering known mainstays of Nazi Germany such as genocide, eugenics, or racial superiority, nor to a discussion of other totalitarian regimes or ideologies, if that was the explicit topic of conversation, because a Nazi comparison in those circumstances may be appropriate, in effect committing the fallacist's fallacy, or inferring that an argument containing a fallacy must necessarily come to incorrect conclusions.
An example is a discussion of whether waving the Confederate flag was "hate speech" or "fighting words", and hence undeserving of First Amendment protections.

Well, consider the famous march by the American Nazi party through Skokie, Illinois, displaying the Swastika flag, where 1 in 6 residents was a survivor of the Holocaust. The Supreme Court ruled that this was free speech, that the Nazis had a right to march.

Citing the Skokie incident isn't Godwin's Law. It's exactly the precedent every court will cite when deciding whether waving a Confederate flag is free-speech.

I frequently discuss totalitarianism, as it's something that cyberspace can both enable and defeat. Comparisons with other totalitarian regimes, notably Soviet Russia and Nazi Germany, are inevitable. They aren't Godwin hyperbole, they are on point. Those who quickly cite Godwin's Law are committing the "fallacist's fallacy", or the "Godwin's Fallacy".

by Robert Graham at November 04, 2015 06:32 PM

November 03, 2015


The H-1B and I-140 labor-market

A former employer of mine was a big user of H-1B and I-140 visa-workers. They were big enough that we had international development centers, and often those employees came to the US HQ. As an Ops-type, this was pretty awesome since we had follow-the-sun support for our stuff. It also meant holidays in other countries impacted our operations in ways that I'd never experienced before. It was a very diverse workplace, which I enjoyed immensely.

However, the big flaw revealed itself when said company had a really bad quarter and decided to reorganize.

What 'reorganization' involved:

  • A near complete musical-chairs round in the Executive VP, and VP offices.
  • A restructuring of the company into completely new divisions.
  • Completely closing one of our foreign development offices.
  • A serious downsizing of the US workforce.


Those of us who are full citizens have a completely fluid job-market. This company was in one of the major tech-hubs (the DC metro area, a.k.a. NOVA, a.k.a. 'Ashburn VA', a.k.a. "us-east-1") where cloud talent was in short supply and salaries were rising quickly. Unsurprisingly, a large percentage of our senior cloud-trained talent left the company rather than deal with working for a convulsing organization. Most other senior systems-engineer types and I got cold-calls from both Google and Amazon. Some of us took them up on that offer (not me).

It was during this time that I learned about how different the labor-market was for visa-workers.

Both the H-1B and I-140 visa require employer sponsorship. If you lose your employer, you need to get another job (transfer the visa sponsorship) within a specific time or you have to leave the US. Per my fellow coworkers, that time is four weeks. And that's four weeks to first-day-on-the-job, not four weeks to acceptance-letter-is-signed. Four weeks in which to do a complete job hunt with a company big enough to be able to deal with visas and the Department of Immigration. That is nearly impossible.

As a result, all of my coworkers here on visa were job-hunting just in case they got a layoff. All of them. Half of them had married (always other visa-workers) and bought houses here. Due to the DC metro area being one of the most expensive housing markets in the US, you need two full-time incomes in order to afford anything of any size; so having one of the earners leave the country was a recipe for financial disaster.

My department was one making buckets of money for the company, so we didn't take a layoff at all. Even so, we lost about a quarter of our people due to better offers coming in on that just-in-case job-hunt. The other quarter left due to not wanting to put up with the reorganizations.

Some visa-workers in other departments got layoffs. The rumor mill said they were offered a paid ticket back to their home country in addition to whatever other severance benefits they were giving out. No help with the expense of an international move.


After the layoffs were done and the reorganizations had completed to the point that the org-chart wasn't being updated on a weekly basis, upper management got down to the serious business of trying to keep the people they had. Part of this effort was to rebase salaries to market averages. There were some big movements.

One coworker had been out of the job market for about seven years to raise her son. When she got back into it, she was hired on as an 'associate systems engineer' at a salary of around $70K/year. Fast-forward five years, and she was still at that job-title, but doing a senior systems-engineer's work and getting paid $73K. They reallocated her up to a 'lead systems engineer' title and a salary of $100K.

Another coworker was here on visa and hired as a 'systems engineer'. He was being paid $85K. The full citizen in the chair next to him, also a 'systems engineer', was being paid $102K. After discussing salaries one memorable afternoon (this is legal, don't let anyone tell you otherwise), we learned that all of our visa-people were underpaid versus the citizens. When the reallocation came around, most of this disparity went away.

Why did it get that bad?

Capitalism, of course. The job-market for visa-workers is limited to those companies with skilled enough HR departments to deal with immigration paperwork. This greatly reduces the number of companies they can work for, which in turn reduces upward pressure on salaries. It also means that those companies who have figured out the paperwork problem have access to a skilled job market with less salary pressure than the greater one, so they tend to go deep into it.

The other factor at play here is the internal raise process. As has been mentioned elsewhere, you get your best raises when you change companies. While salaries in the overall job market had been rising, internal ones were not rising as much. People who don't job-hunt until provoked were falling behind their peers and not noticing because talking salary is taboo.

How can we prevent this disparity?

By talking salaries more often. The visa job-market is fundamentally different than the one for full citizens, but our costs-of-living are still the same. By talking about salaries and visa status, we workers (the labor market's supply) can learn which companies are known to exploit visa workers and which are more likely to treat them the same as the citizens. It means being pushy during negotiations, but that's how you stop this kind of exploitation.

by SysAdmin1138 at November 03, 2015 06:56 PM

Anton Chuvakin - Security Warrior

Monthly Blog Round-Up – October 2015

Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month:
  1. “Why No Open Source SIEM, EVER?” contains some of my SIEM thinking from 2009. Is it relevant now? Well, you be the judge. Current popularity of open source log search tools, BTW, does not break the logic of that post. Succeeding with SIEM requires a lot of work, whether you paid for the software, or not. That – and developing a SIEM is much harder than most people think. BTW, this post has an amazing “staying power” that is hard to explain – I suspect it has to do with people wanting “free stuff” … [261 pageviews]
  2. “Simple Log Review Checklist Released!” is often at the top of this list – the checklist is still a very useful tool for many people. “On Free Log Management Tools” is a companion to the checklist (updated version) [112 pageviews]
  3. “New SIEM Whitepaper on Use Cases In-Depth OUT!” (dated 2010) presents a whitepaper on select SIEM use cases described in depth with rules and reports [using now-defunct SIEM product]; also see this SIEM use case in depth and this for a more current list of popular SIEM use cases as well as this new post. [111 pageviews]
  4. “SIEM Resourcing or How Much the Friggin’ Thing Would REALLY Cost Me?” is a quick framework for assessing the SIEM project (well, a program, really) costs at an organization (much more details on this here in this paper). [108 pageviews]
  5. My classic PCI DSS Log Review series is always popular! The series of 18 posts covers a comprehensive log review approach (OK for PCI DSS 3.1 as well), useful for building log review processes and procedures, whether regulatory or not. It is also described in more detail in our Log Management book and mentioned in our PCI book (just out in its 4th edition!) [94+ pageviews to the main tag of total 4584 pageviews to all blog pages]
In addition, I’d like to draw your attention to a few recent posts from my Gartner blog
Current research on SIEM:
Recent research on VA tools and VM practices:
Recent maverick research on AI/smart machines risks:
Miscellaneous fun posts:

(see all my published Gartner research here)
Also see my past monthly and annual “Top Popular Blog Posts” – 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014.
Disclaimer: most content at SecurityWarrior blog was written before I joined Gartner on Aug 1, 2011 and is solely my personal view at the time of writing. For my current security blogging, go here.

Previous post in this endless series:

by Anton Chuvakin at November 03, 2015 05:59 AM

October 30, 2015

League of Professional System Administrators

LOPSA-East '16 Call for Participation

The organizers of the LOPSA-East Professional IT Community Conference invite you to submit proposals for presentations at LOPSA-EAST ’16.

LOPSA-EAST ’16 aims to bring together IT professionals from all walks of life in the Mid-Atlantic region and beyond to share war stories, learn from each other’s experiences, and network with industry peers. The conference includes acclaimed speakers and keynotes, expert-led training designed to build the skillsets and confidence of attendees, lightning talks, and a “Birds of a Feather” track where attendees propose and host their own topics during the event.

Attendance for this year’s conference is expected to be between 200 and 250 IT professionals from companies large and small, local government, and academia. Our attendees are primarily from the Mid-Atlantic region including New Jersey, New York, Delaware, Pennsylvania, Massachusetts, Maryland, Virginia, and Washington DC. As IT professionals we go by many titles but everyone is invited: system administrators, network administrators, network engineers, Windows, Linux, Unix, DBAs, security professionals, technical managers, and beyond.


by lopsawebstaff at October 30, 2015 12:50 PM

October 29, 2015

Cryptography Engineering

Attack of the Week: Unpicking PLAID

A few years ago I came across an amusing Slashdot story: 'Australian Gov't offers $560k Cryptographic Protocol for Free'. The story concerned a protocol developed by Australia's Centrelink, the equivalent of our Health and Human Services department, that was wonderfully named the Protocol for Lightweight Authentication of ID, or (I kid you not), 'PLAID'.

Now to me this headline did not inspire a lot of confidence. I'm a great believer in TANSTAAFL -- in cryptographic protocol design and in life. I figure if someone has to use 'free' to lure you in the door, there's a good chance they're waiting on the other side with a hammer and a bottle of chloroform, or whatever the cryptographic equivalent might be.

A quick look at PLAID didn't disappoint. The designers used ECB like it was going out of style, did inadvisable things with RSA encryption, and that was only the beginning.

What PLAID was not, I thought at the time, was bad to the point of being exploitable. Moreover, I got the idea that nobody would use the thing. It appears I was badly wrong on both counts.

This is apropos of a new paper authored by Degabriele, Fehr, Fischlin, Gagliardoni, Günther, Marson, Mittelbach and Paterson entitled 'Unpicking Plaid: A Cryptographic Analysis of an ISO-standards-track Authentication Protocol'. Not to be cute about it, but this paper shows that PLAID is, well, bad.

As is typical for this kind of post, I'm going to tackle the rest of the content in a 'fun' question and answer format.
What is PLAID?
In researching this blog post I was somewhat amazed to find that Australians not only have universal healthcare, but that they even occasionally require healthcare. That this surprised me is likely due to the fact that my knowledge of Australia mainly comes from the first two Crocodile Dundee movies.

It seems that not only do Australians have healthcare, but they also have access to a 'smart' ID card that allows them to authenticate themselves. These contactless smartcards run the proprietary PLAID protocol, which handles all of the ugly steps in authenticating the bearer, while also providing some very complicated protections to prevent user tracking.

This protocol has been standardized as Australian standard AS-5185-2010 and is currently "in the fast track standardization process for ISO/IEC 25185-1.2". You can obtain your own copy of the standard for a mere 118 Swiss Francs, which my currency converter tells me is entirely too much money. So I'll do my best to provide the essential details in this post -- and many others are in the research paper.
How does PLAID work?
Accompanying tinfoil hat not pictured.
PLAID is primarily an identification and authentication protocol, but it also attempts to offer its users a strong form of privacy. Specifically, it attempts to ensure that only authorized readers can scan your card and determine who you are.

This is a laudable goal, since contactless smartcards are 'promiscuous', meaning that they can be scanned by anyone with the right equipment. Countless research papers have been written on tracking RFID devices, so the PLAID designers had to think hard about how they were going to prevent this sort of issue.

Before we get to what steps the PLAID designers took, let's talk about what one shouldn't do in building such systems.

Let's imagine you have a smartcard talking to a reader where anyone can query the card. The primary goal of an authentication protocol is to perform some sort of mutual authentication handshake and derive a session key that the card can use to exchange sensitive data. The naive protocol might look like this:

Naive approach to an authentication protocol. The card identifies itself before the key agreement protocol is run, so the reader can look up a card-specific key.
The obvious problem with this protocol is that it reveals the card ID number to anyone who asks. Yet such identification appears necessary, since each card will have its own secret key material -- and the reader must know the card's key in order to run an authenticated key agreement protocol.
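
To make the figure concrete, here is a toy sketch of that naive flow in Python; it is not PLAID and not any real smartcard API, and the card ID, challenge, and key-derivation details are invented purely for illustration. The point is simply that the card has to name itself in the clear before the reader can even look up its key.

import hmac, hashlib, os

# Reader-side table mapping card IDs to card-specific secret keys (toy data).
CARD_KEYS = {b"card-0001": os.urandom(32)}

def card_announce():
    # Step 1: the card identifies itself -- in the clear, to anyone who asks.
    return b"card-0001"

def reader_key_agreement(card_id):
    # Step 2: only now can the reader look up the card's key and derive a session key.
    card_key = CARD_KEYS[card_id]
    challenge = os.urandom(16)
    session_key = hmac.new(card_key, challenge, hashlib.sha256).digest()
    return challenge, session_key

card_id = card_announce()        # an eavesdropper already knows which card this is
challenge, session_key = reader_key_agreement(card_id)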

PLAID solves this problem by storing a set of RSA public keys corresponding to various authorized applications. When the reader says "Hi, I'm a hospital", a PLAID card determines which public key it uses to talk to hospitals, then encrypts data under the key and sends it over. Only a legitimate hospital should know the corresponding RSA secret key to decrypt this value.
So what's the problem here?
PLAID's approach would be peachy if there were only one public key to deal with. However PLAID cards can be provisioned to support many applications (called 'capabilities'). For example, a citizen who routinely finds himself wrestling crocodiles might possess a capability unique to the Reptile Wound Treatment unit of a local hospital.* If the card responds to this capability, it potentially leaks information about the bearer.

To solve this problem, PLAID cards do some truly funny stuff.

Specifically, when a reader queries the card, the reader initially transmits a set of capabilities that it will support (e.g., 'hospital', 'bank', 'social security center'). If the PLAID card has been provisioned with a matching public key, it goes ahead and uses it. If no matching key is found, however, the card does not send an error -- since this would reveal user-specific information. Instead, it fakes a response by encrypting junk under a special 'dummy' RSA public key (called a 'shill key') that's stored within the card. And herein lies the problem.

You see, the 'shill key' is unique to each card, which presents a completely new avenue for tracking individual cards. If an attacker can induce an error and subsequently fingerprint the resulting RSA ciphertext -- that is, figure out which shill key was used to encipher it -- they can potentially identify your card the next time they encounter you.

A portion of the PLAID protocol (source). The reader (IFD) is on the left, the card (ICC) is on the right.
Thus the problem of tracking PLAID cards devolves to one of matching RSA ciphertexts to unknown public keys. The PLAID designers assumed this would not be possible. What Degabriele et al. show is that they were wrong.
What do German Tanks have to do with RSA encryption?

To distinguish the RSA moduli of two different cards, the researchers employed an old solution to a problem called the German Tank Problem. As the name implies, this is a real statistical problem that the Allies ran up against during WWII.

The problem can be described as follows:

Imagine that a factory is producing tanks, where each tank is printed with a sequential serial number in the ordered sequence 1, 2, ..., N. Through battlefield captures you then obtain a small and (presumably) random subset of k tanks. From the recovered serial numbers, your job is to estimate N, the total number of tanks produced by the factory.

Happily, this is the rare statistical problem with a beautifully simple solution.** Let m be the maximum serial number in the set of k tanks you've observed. To obtain a rough guess Ñ for the total number of tanks produced, you can simply compute the following formula: Ñ = m + m/k - 1.

So what's this got to do with guessing an unknown RSA key?

Well, this turns out to be another instance of exactly the same problem. Imagine that I can repeatedly query your card with bogus 'capabilities' and thus cause it to enter its error state. To foil tracking, the card will send me a random RSA ciphertext encrypted with its (card-specific) "shill key". I don't know the public modulus corresponding to your key, but I do know that the ciphertext was computed using the standard RSA encryption formula m^e mod N.

My goal, as it was with the tanks, is to make a guess for N.

It's worth pointing out that RSA is a permutation, which means that, provided the plaintext messages are randomly distributed, the ciphertexts will be randomly distributed as well. So all I need to do is collect a number of ciphertexts and apply the equation above. The resulting guess Ñ should then serve as a (fuzzy) identifier for any given card.
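
Here is that estimator in a few lines of Python. This is only a sketch of the idea: the 'shill modulus' and sample counts are toy values standing in for real 2048-bit RSA ciphertexts collected from a card.

import random

def estimate_modulus(ciphertexts):
    # Frequentist German-tank estimate: N ~ m + m/k - 1, where m is the largest
    # ciphertext observed and k is the number of samples collected.
    m, k = max(ciphertexts), len(ciphertexts)
    return m + m // k - 1

shill_modulus = 104729                 # the card's (unknown) shill-key modulus, toy-sized
samples = [random.randrange(shill_modulus) for _ in range(1000)]
print(estimate_modulus(samples))       # lands close to 104729

In the attack, one such estimate per batch of scans becomes the fuzzy fingerprint for a card.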

Results for identifying a given card in a batch of (up to) 100 cards. Each card was initially 'fingerprinted' by collecting k1=1000 RSA ciphertexts. Arbitrary cards were later identified by collecting a varying number of ciphertexts (k2).
Now obviously this isn't the whole story -- the value Ñ isn't exact, and you'll get different levels of error depending on how many ciphertexts you get, and how many cards you have to distinguish amongst. But as the chart above shows, it's possible to identify a specific card within a real system (containing 100 cards) with reasonable probability.
But that's not realistic at all. What if there are other cards in play?
It's important to note that real life is nothing like a laboratory experiment. The experiment above considered a 'universe' consisting of only 100 cards, required an initial 'training period' of 1000 scans for a given card, and subsequent recognition demanded anywhere from 50 to 1000 scans of the card.

Since the authentication time for the cards is about 300 milliseconds per scan, this means that even the minimal number (50) of scans still requires about 15 seconds, and only produces the correct result with about 12% probability. This probability goes up dramatically the more scans you get to make.

Nonetheless, even the 50-scan result is significantly better than guessing, and with more concerted scans can be greatly improved. What this attack primarily serves to prove is that homebrew solutions are your enemy. Cooking up a clever approach to foiling tracking might seem like the right way to tackle a problem, but sometimes it can make you even more vulnerable than you were before.
How should one do this right?
The basic property that the PLAID designers were trying to achieve with this protocol is something called key privacy. Roughly speaking, a key private cryptosystem ensures that an attacker cannot link a given ciphertext (or collection of ciphertexts) to the public key that created them -- even if the attacker knows the public key itself.

There are many excellent cryptosystems that provide strong key privacy. Ironically, most are more efficient than RSA; one solution to the PLAID problem is simply to switch to one of these. For example, many elliptic-curve (Diffie-Hellman or Elgamal) solutions will generally provide strong key privacy, provided that all public keys in the system are set in the same group.

A smartcard encryption scheme based on, say, Curve25519 would likely avoid most of the problems presented above, and might also be significantly faster to boot.
What else should I know about PLAID?
There are a few other flaws in the PLAID protocol that make the paper worthy of a careful read -- if only to scare you away from designing your own protocols.

In addition to the shill key fingerprinting attacks, the authors also show how to identify the set of capabilities that a card supports, using a relatively efficient binary search approach. Beyond this, there are also many significantly more mundane flaws due to improper use of cryptography. 

By far the ugliest of these is the protocol's lack of proper RSA-OAEP padding, which may present potential (but implementation-specific) vulnerabilities to Bleichenbacher's attack. There's also some almost de rigueur misuse of CBC mode encryption, and a handful of other flaws that should serve as audit flags if you ever run into them in the wild.

At the end of the day, bugs in protocols like PLAID mainly help to illustrate the difficulty of designing secure protocols from scratch. They also keep cryptographers busy with publications, so perhaps I shouldn't complain too much.
You should probably read up a bit on Australia before you post again.
I would note that this really isn't a question. But it's absolutely true. And to this end: I would sincerely like to apologize to any Australian citizens who may have been offended by this post.


* This capability is not explicitly described in the PLAID specification.

** The solution presented here -- and used in the paper -- is the frequentist approach to the problem. There is also a Bayesian approach, but it isn't required for this application.

by Matthew Green at October 29, 2015 02:27 AM

October 28, 2015


A Different Spin on the Air War Against IS

Sunday evening 60 Minutes aired a segment titled Inside the Air War. The correspondent was David Martin, whose biography includes the fact that he served as a naval officer during the Vietnam War. The piece concluded with the following exchange and commentary:

On the day we watched the B-1 strike, that same bomber was sent to check out a report of a single ISIS sniper firing from the top of a building.

Weapons officer: The weapon will time out directly in between the two buildings.

This captain was one of the weapons officers in the cockpit.

David Martin: B-1 bomber.

Weapons officer: Yes sir.

David Martin: All that technology.

Weapons officer: Yes sir.

David Martin: All that fire power. One sniper down on the ground.

I thought the captain's next words were right on target:

Weapons officer: Sir, I think if it was you or me on the ground getting shot at by that sniper we would take any asset available to make sure we were no longer getting, you know, engaged by that sniper. So, if I get a call and they say they're getting shot at, and there's potential loss of friendly life, I am absolutely gonna drop a weapon on that sniper.

It's clear that Mr Martin was channeling the Vietnam experience of heavily trained pilots flying multi-million dollar airplanes, dropping millions of dollars of bombs on the Ho Chi Minh trail, trying to stop porters carrying supplies on their backs and on bicycles. I understand that concern and I share that theme. However, I'd like to offer another interpretation.

The ability to dynamically retask in-air assets is a strength of American airpower. This episode involved retasking a B-1 that had already completed its primary mission. Putting that asset to use again alleviated the need to launch another aircraft.


By the time the B-1 arrived overhead the sniper was gone.

Weapons officer: What we did, however, find though was a tunnel system. So, in this case we dropped weapons on all the entry points that were associated with that tunnel.

Six 500-pound bombs.

Weapons officer: It was actually a perfect shack on the target.

This could be interpreted as a failure, because the sniper wasn't killed. However, in another example of retasking and dynamic intelligence, the B-1 was able to destroy a tunnel system. This again prevented the launch of another aircraft to accomplish that new mission.

These are features of the 60 Minutes story that were not conveyed by the on-air narrative, but which I observed based on my Air Force experience. It doesn't change the strategic questions concerning the role of airpower in theatre, but it is important to recognize the flexibility and dynamism offered by these incidents.

by Richard Bejtlich at October 28, 2015 03:56 PM

October 26, 2015

Intel Skylake with an Asus Z170-AR and Ubuntu Linux 14.04

I bought an Asus Z170-AR (the -A is almost identical except for the DVI and VGA ports). This motherboard (mobo) runs the latest Intel Skylake processors and has some of the newest chipsets on board. This means that you will likely need one of the newest Linux kernels out there to support these chipsets. But if you're running Ubuntu 14.04 LTS and want to run the latest hardware, you're going to have a problem.

Be advised this was a mobo/RAM/processor upgrade and the OS was an original 14.04 install. According to Canonical, 14.04.2 and newer point releases ship with a much later, more up-to-date kernel.

Older kernels and newer hardware

Very new hardware usually needs the latest drivers to work correctly. The closer you are to having the latest Linux kernel, the better the chance that your newer hardware will work: the newer the kernel, the newer the driver. This is something that does not work well with Linux distributions that have long-term releases in which the major kernel version does not get updated frequently. That is not to say you can't go get more up-to-date kernel modules and build them yourself, but when talking about built-in support, such kernels usually only get security updates and other fixes. Usually there are no features or support added. This does not bode well for using Ubuntu LTS releases on new hardware.

Ubuntu LTS

Ubuntu made LTS releases so people could stay on them for up to 5 years and not worry about upgrading every 6 months or not getting updates anymore. It is a fantastic idea. The only problem is that if you stayed on an LTS release, you were always stuck with the kernel that the release came with. Not any more.

Enter the LTS Enablement Stack

Canonical (the makers of Ubuntu) decided that they wanted LTS releases to be usable on newer hardware. They wanted everyone who wanted the stability of an LTS release to be able to use a newer Linux kernel. This would enable people to buy newer hardware and have better support for said hardware.

To make this a reality, Canonical decided to create the LTS Enablement Stack. This set of packages provides a much more up-to-date Linux kernel and X stack for existing LTS releases.

Asus z170 mobo and chipsets

As stated, I upgraded a machine to an Asus Z170-AR motherboard with Ubuntu 14.04 already installed. Everything worked well except for the network adapter and the onboard audio. These are some pretty important things, so I just had to get them working. The following is what I did to get them working in Ubuntu 14.04.

Fixing the Ethernet adapter

With the stock 14.04 kernel (3.13.x) the Intel i219-V chip on the motherboard does not work. It does not work with kernel version 3.15.x either. I had to go all the way to the latest 3.19.x kernel version to get the e1000e driver working with this adapter. To do this I installed the LTS enablement stack for 14.04. Follow that link and run the apt-get line. After a reboot, your Ethernet adapter should be working.

Fixing the audio

Getting the Realtek ALC892 chip on the motherboard working was much harder than just updating the kernel. The 3.19.x kernel did not have a working sound kernel module for this chip. I could see the chip listed using lspci so I knew it was being seen by the kernel.

# sudo lspci -vv

00:1f.3 Audio device: Intel Corporation Device a170 (rev 31)
        Subsystem: ASUSTeK Computer Inc. Device 8698
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
        Stepping- SERR- FastB2B- D
        Kernel driver in use: snd_hda_intel

It even said there was a kernel driver in use for it, which was a good sign, but there was still no sound. ALSA did not show any cards listed either. If you run aplay and your audio chip is recognized by the kernel, it will show up in ALSA.

# aplay -l

Not until I grepped dmesg did I find out why the kernel module was loaded but still not working. I saw the following in the dmesg output when grepping for the PCI address listed in the lspci output.

# dmesg | grep '00:1f.3'

[   10.538405] snd_hda_intel 0000:00:1f.3: failed to add i915_bpo component
master (-19)

Now I had something to search for to see if anyone else was seeing the same error. Google turned up this post from someone with the exact same problem. The answer was to update to a very cutting-edge ALSA kernel module. So I followed the instructions here to update 14.04's ALSA sound system with the latest kernel module. Be sure to choose the correct kernel module .deb package for your kernel on the "ALSA daily build snapshots" page. If you're using 14.04.x, choose "oem-audio-hda-daily-lts-vivid-dkms - ... ubuntu14.04.1". After installing the .deb package and rebooting, "aplay -l" saw the audio chip. Hooray!

The last thing to do to get the audio fully working is to choose the correct output in your desktop's sound system. I use the XFCE desktop, so what I had to do was start "PulseAudio Volume Control", go to the "Configuration" tab, and turn "HDA Nvidia" to Off for my Nvidia video card. Then in "Built-in Audio" I had to select "Digital Stereo IEC958 Output" for my output. There are lots of outputs; my suggestion is to play an audio file on repeat, then go through each audio output to find the one that works.

by at October 26, 2015 11:14 PM

Carl Chenet

db2twitter: get data in database, build and send tweets

db2twitter automatically extracts fields from your database, uses them to fill a tweet template, and sends the tweet.

db2twitter is developed by and run for, the job board of the French-speaking Free Software and Open Source community.


Imagine you want to send tweets like this one: MyTux hires a #Django developer

Let’s say you have a MySQL database « mytuxjobs », a table « jobs », and fields « title » and « url ».

db2twitter will need the following information:


Let’s define a template for our tweet:

tweet=MyTux hires a {} {}

db2twitter will get the data from the database, build a tweet by filling the wildcards of the tweet template with that data, and send the tweet. By default the last row of the given table is used. Cool, isn't it?
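
For the curious, the flow db2twitter automates looks roughly like this in plain Python. This is a sketch only, not db2twitter's actual code; the connection string, credentials, and the assumption of an id column to order by are all placeholders.

from sqlalchemy import create_engine, text
import tweepy

# Fetch the most recent row from the jobs table (db2twitter defaults to the last row).
engine = create_engine("mysql://user:password@localhost/mytuxjobs")
with engine.connect() as conn:
    row = conn.execute(
        text("SELECT title, url FROM jobs ORDER BY id DESC LIMIT 1")
    ).fetchone()

# Fill the tweet template's wildcards with the database fields.
tweet = "MyTux hires a {} {}".format(row.title, row.url)

# Send it with Tweepy.
auth = tweepy.OAuthHandler("consumer_key", "consumer_secret")
auth.set_access_token("access_token", "access_token_secret")
tweepy.API(auth).update_status(status=tweet)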

db2twitter is coded in Python 3.4, and uses SQLAlchemy (see supported database types) and Tweepy. The official documentation is available on readthedocs.

by Carl Chenet at October 26, 2015 11:00 PM

Racker Hacker

systemd-networkd and macvlan interfaces

I spent some time working with macvlan interfaces on KVM hypervisors last weekend. They're interesting because they're not really a bridge: a macvlan lets you assign multiple MAC addresses to a single interface and then have the kernel filter traffic into tap interfaces based on the MAC address in the packet. If you're looking for a highly detailed explanation, head on over to waldner's blog for a deep dive into the technology and the changes that come along with it.

Why macvlan?

Bridging can become a pain to work with, especially when you’re forced to add in creative filtering rules and keep them updated. The macvlan interfaces can help with that (read up on VEPA mode). There are some interesting email threads showing that macvlan interfaces can improve network performance for various workloads. Low latency workloads can benefit from the simplicity and low overhead of macvlan interfaces.

systemd-networkd and macvlan interfaces

Fortunately for us, systemd-networkd makes configuring a macvlan interface really easy. I’ve written about configuring bridges with systemd-networkd and the process for macvlan interfaces is similar.

In my scenario, I have a 1U server with an ethernet interface called enp4s0 (read up on interface naming with systemd-udevd). I want to make a macvlan interface for virtual machines, and I'll be attaching VMs to that interface via macvtap interfaces. It's similar to bridging, where you make a bridge and then give everyone a port on it.

Start by creating a network device for our macvlan interface:

# /etc/systemd/network/vmbridge.netdev

I’ve told systemd-networkd that I want a macvlan interface set up in bridge mode. This will allow hosts and virtual machines to talk to one another on the interface. You could choose vepa for the mode if you want additional security. However, this will force traffic out to your upstream switch/router and makes it challenging for hosts and guests to communicate with each other.

Now that we have a device configured, let’s configure the IP address for the macvlan interface (similar to configuring a bridge):

# /etc/systemd/network/

Let’s tell systemd-networkd that our physical network interface, enp4s0, is part of this interface:

# /etc/systemd/network/

This is very similar to a configuration for a standard Linux bridge. Once you’ve reached this step, you’ll most likely want to reboot to ensure all of your network devices come up properly.

Attaching a virtual machine

Attaching a KVM virtual machine to the macvlan interface is quite easy. When you’re creating a new VM using virt-manager, look for this setting in the wizard:

virt-manager macvlan

If you’re installing via virt-install just use the following argument for your network configuration:

--network type=direct,source=vmbridge,source_mode=bridge

You’ll end up with interfaces like these after creating multiple virtual machines:

13: macvtap0@enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500
    link/ether 52:54:00:83:53:f2 brd ff:ff:ff:ff:ff:ff promiscuity 0 
    macvtap  mode bridge addrgenmode eui64 
15: macvtap2@enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500
    link/ether 52:54:00:f1:76:0b brd ff:ff:ff:ff:ff:ff promiscuity 0 
    macvtap  mode bridge addrgenmode eui64 
17: macvtap3@enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500
    link/ether 52:54:00:cd:53:34 brd ff:ff:ff:ff:ff:ff promiscuity 0 
    macvtap  mode bridge addrgenmode eui64 
20: macvtap1@enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500
    link/ether 52:54:00:18:79:d3 brd ff:ff:ff:ff:ff:ff promiscuity 0 
    macvtap  mode bridge addrgenmode eui64


by Major Hayden at October 26, 2015 01:50 PM

League of Professional System Administrators

LOPSA Annual Meeting and Raffle (Update 2)

If you are attending LISA, please save Weds Night at 8pm for the LOPSA Annual Meeting and Raffle.  We will be talking about LOPSA's performance over the last year and our goals for the next year.  In addition, when you attend you will get a Raffle ticket for one of these great prizes:


by lopsawebstaff at October 26, 2015 03:28 AM

October 24, 2015

The Geekess

How to approach a new system: Linux graphics and Mesa

By now, most of the Linux world knows I’ve stopped working on the Linux kernel and I’m starting work on the Linux graphics stack, specifically on the userspace graphics project called Mesa.

What is Mesa?

Like the Linux kernel, Mesa falls into the “operating system plumbing” category. You don’t notice it until it breaks. Or clogs, slowing down the rest of the system. Or you need plumbing for your shiny new hot tub, only you’re missing some new essential feature, like hot water or tessellation shaders.

The most challenging and exciting part of working on systems “plumbing” projects is optimization, which requires a deep understanding of both the hardware limitations below you in the stack, and the needs of the software layers above you. So where does Mesa fit into the Linux graphics stack?

Mesa is the most basic part of the userspace graphics stack, sitting above the Linux kernel graphics driver that handles command submission to the hardware, and below 3D libraries like qt, kde, or game engines like Unity or Unreal. A game engine creates a specialized 3D program, called a shader, that conforms to a particular rev of a graphics spec (such as EGL 3.0 or GLES 3.1). Mesa takes that shader program and compiles it into graphics hardware commands specific to the system Mesa is running on.

What’s cool about Mesa?

The exciting thing for me is the potential for optimizing graphics around system hardware limitations. For example, you can optimize the compiler to generate fewer graphics commands. In theory, fewer commands means less for the hardware to execute, and thus better graphics performance. However, if each of those commands is really expensive to run on your graphics hardware, that optimization can actually end up hurting your performance.

This kind of work is fun for me, because I get to touch and learn about hardware, without actually writing Verilog or VHDL. I get to make hardware do interesting things, like add a new features that makes the latest Steam game actually render or add an optimization that improves the performance of a new open source Indie game.

Understand how much you don’t know

Without knowledge of both the hardware and the software layers above, it’s impossible to evaluate which optimizations are useful to pursue. For example, the first time I heard modern Intel GPUs have a completely separate cache from the CPU, I asked two questions: “What uses that cache?” and “What happens when the cache is full?” The hardware Execution Units (EUs) that execute graphics commands use the cache to store metadata called URBs. My next question, on discovering that URB size could vary, and that newer hardware had an ever-increasing maximum URB size, was to ask, “How does the graphics stack pick which URB size to use?” Obviously, picking a smaller URB size means that more URBs can fit in the cache, which means it’s less likely that an EU will have to stall until there’s room in the cache for the URB it needs for the set of graphics commands it’s executing. Picking a URB size that was too small would mean programs that use a lot of metadata wouldn’t be able to run.

Mesa is used by a lot of different programs with different needs, and each may require different URB sizes. The hardware has to be programmed with the maximum URB size, and changing that requires stopping any executing commands in the graphics pipeline. So you can’t go changing the URB size on the fly every time a new 3D graphics program starts running, or you would risk stalling the whole graphics stack. Imagine your entire desktop system freezing every time you launched a new Steam game.

Asking questions helps you ramp faster

It’s my experience with other operating system components that led me to ask questions of my more experienced graphics developer co-workers. I really appreciate working in a community where I know I can ask these kinds of basic questions without fear of backlash at a newcomer. I love working on a team that chats about tech (and non-tech!) over lunch. I love the easy access of the Intel graphics and DRI IRC channels, where people are willing to answer simple questions like “How do I build the documentation?” (Which turns out to be complex when you run into a bug in doxygen that causes an infinite loop and increasing memory consumption until a less-than-capable build box starts to freeze under memory pressure. Imagine what would have happened if the experienced devs had assumed I was a n00b and ignored me?)

My experience with other complex systems makes me understand that the deep, interesting problems can’t be approached without a long ramp up period and lots of smaller patches to gain understanding of the system. I do chafe a bit as I write those patches, knowing the interesting problems are out there, but I know I have to walk before I start running. In the meantime, I find joy in making blog posts about what I’m learning about the graphics pipeline, and I hope we can walk together.

by sarah at October 24, 2015 01:49 PM

October 22, 2015

Carl Chenet

Interesting links from the Journal du hacker, week #43


For this 43rd week of 2015, here are 5 interesting links you may have missed, relayed over the past week by the Journal du hacker, your news source for the French-speaking Free Software community!


Proposed Debian Logo


A quick reminder: the new address of the Journal du hacker is

To make sure you never miss another article from the French-speaking community, here are:


The Journal du hacker website is also responsive, so feel free to browse it from your smartphone or tablet!

The Journal du hacker works collaboratively, thanks to the participation of its members. Join us to submit your content to share with the French-speaking Free Software community and to publicize your projects.

And you? What did you think of these articles? Feel free to react directly in the comments of the article on the Journal du hacker, or in the comments of this post :)

by Carl Chenet at October 22, 2015 08:30 PM

Cryptography Engineering

A riddle wrapped in a curve

If you’re looking for a nice dose of crypto conspiracy theorizing and want to read a paper by some very knowledgeable cryptographers, I have just the paper for you. Titled “A Riddle Wrapped in an Enigma” by Neal Koblitz and Alfred J. Menezes, it tackles one of the great mysteries of the year 2015. Namely: why did the NSA just freak out and throw its Suite B program down the toilet?

Koblitz and Menezes are noted cryptographers and mathematicians, together responsible for scores of papers, including a recent series of position papers such as “Another Look at Provable Security” (mentioned previously on this blog) and “The Brave New World of Bodacious Assumptions in Cryptography”. (Yes, that is a real paper title.)

Their latest paper is highly accessible and contains almost no mathematics, so it’s worth your time to go read it now. If you don’t, or can’t handle reading things typeset in LaTeX, I’m going to sum up the gist of their arguments as best I can, then talk about why this all matters.

Let’s start with some background.

What is Suite B, and what do you mean by NSA ‘freaking out’?

For most of the past century, the NSA (and its predecessors) have jealously guarded a monopoly on advanced cryptography. This approach worked reasonably well through the 1980s, then rapidly became unsustainable. There were a couple of reasons for this. First, the PC revolution made it impossible for the NSA to keep encryption out of the hands of ordinary users and industry -- as much as the NSA tried to make this so. Second, Congress got fed up with paying for expensive bespoke computer systems, and decided the DoD should buy its computing equipment off the shelf like everyone else.

The result was the Cryptographic Modernization Program, and Suite B — the first public cryptography standard to include non-classified algorithms certified for encrypting Secret and Top Secret data. Suite B was standardized for industry in the mid-2000s, and was notable for its exclusive reliance on the (then new) field of elliptic curve cryptography (ECC) for public key encryption and key agreement.

At the time of Suite B’s adoption, ECC was relatively new to the non-classified world. Many industry and academic cryptographers didn’t feel it had been reasonably studied (Koblitz and Menezes’ anecdotes on this are priceless). ECC was noteworthy for using dramatically shorter keys than alternative public-key algorithms such as RSA and “classical” Diffie-Hellman, largely because new sub-exponential attacks that worked in those settings did not seem to have any analogue in the ECC world. 

The NSA pushed hard for adoption. Since they had the best mathematicians, and moreover, clearly had the most to lose if things went south, the standards bodies were persuaded to go along. The algorithms of Suite B were standardized and — aside from a few intellectual property concerns — have been gradually adopted even by the non-classified community.

Then, in August of this year, NSA freaked out.

It was a quiet freakout, as you would expect from the world’s largest spy agency. But it made big waves with cryptographers. Citing concerns about future developments in quantum computing, the Agency updated the Suite B website to announce a rapid transition away from Suite B, and to a new set of quantum-resistant algorithms in the coming years.

Why all the sudden and immediate concern about quantum computers? Well, the NSA obviously isn’t willing to talk about that. But as of 2013, leaked slides from the Snowden cache show no real evidence of any massive quantum breakthroughs on the horizon that would necessitate a rapid transition from ECC.

Which quantum-resistant algorithms are we supposed to move to? Good question. Cryptographers haven’t agreed on any yet. Which makes the timing of this announcement so particularly mysterious.

So let me spell this out: despite the fact that quantum computers seem to be a long ways off and reasonable quantum-resistant replacement algorithms are nowhere to be seen, NSA decided to make this announcement publicly and not quietly behind the scenes. Weirder still, if you haven’t yet upgraded to Suite B, you are now being urged not to. In practice, that means some firms will stay with algorithms like RSA rather than transitioning to ECC at all. And RSA is also vulnerable to quantum attacks.

When reporters finally reached the NSA for comment, the only sound on the line was breaking glass and the sound of digging. (No, I’m kidding about that — but NSA did update their site again just to make it clear they weren’t playing some kind of deeply misguided April Fool’s prank.)

So that’s the background of the Koblitz/Menezes paper. The riddle is: what does the NSA know that we don’t?

No quantum advance?

Koblitz and Menezes exhaustively detail the possible reasons for the NSA’s announcement, ranging from the straightforward — that NSA actually has made a huge advance in developing quantum computers — to the downright bizarre. They even moot the possibility that the NSA is so embarrassed about getting caught tampering with national standards that they’ve just decided to flip the table and act like crazy people.

I’m going to skip most of the details, not because they aren’t interesting, but rather because I’d like to focus on what I think is the most intriguing hypothesis in the paper. Namely, that the NSA isn’t worried about quantum computers at all, but rather, that they’ve made a major advance in classical cryptanalysis of the elliptic curve discrete logarithm problem — and panic is the result.

Let me lay the groundwork. The security of most EC cryptosystems rests on the presumed intractability of a few basic mathematical problems. The most important of these is called the Elliptic Curve Discrete Logarithm Problem (ECDLP). This problem must be supremely hard to solve, or virtually every cryptosystem built on ECC unravels very quickly.

The definition of “hard” is important here. What we mean in the context of ECC is that the best known algorithm for solving the ECDLP requires a number of operations that is fully exponential in the security parameter. In practice, this means we can achieve 128-bit equivalent security using an elliptic curve set in a 256 bit field — which implies similarly-small keys and ciphertexts. To achieve equivalent security with RSA, where sub-exponential algorithms apply, your keys need to be at least 3072 bits (!) long.

But while the ability to use (relatively) tiny elliptic curve points is wonderful for implementers, it leaves no room for error. If NSA’s mathematicians began to make even modest but sustained advances in the state of the art for solving the ECDLP, it would put the entire field at risk, beginning with the smallest of the standard curves, P-256, which would then provide less than the required 128-bit security.

Did I mention that as part of the recent announcement, NSA also deprecated P-256?

Of course, there’s no reason to believe that classical cryptanalysis is at fault here. It’s possible that NSA really has reason to believe that quantum computing is moving faster than it was in 2013. It’s also possible that they’re just sloppy, and didn’t realize that having a very public freakout about cryptography you just promoted for years might not be such a good idea. But that seems doubtful.

You can read the Koblitz and Menezes paper for their take on all of the alternative hypotheses. I want to take a last few paragraphs to cover a slightly tangential issue, one that’s been the subject of a great deal of panic on the part of our community.

Did the NSA really backdoor the NIST elliptic curves????1??1

One of the nicer points of the Koblitz and Menezes paper is that it tackles the argument that the NSA deliberately backdoored the NIST elliptic curves. 

This argument can be found in various places around the Internet. It originates with accomplished cryptographers who are quite rightly applying skepticism to an oddball and ad hoc process. It then heads off into crazytown. Whether people admit it or not, this concern has been hugely influential in the recent adoption of non-NIST alternative curves by the CFRG and IETF.

The accusation being leveled at NSA can be described simply. It’s known that the NIST pseudorandom curves were generated by the NSA hashing “seeds” through SHA1 to generate the curve parameters. This process was supposed to reassure the public, by eliminating the NSA’s ability to tamper with the parameters at will. However, nobody thought to restrict the NSA’s choice of input seeds — leading to a potential attack where they tried many different random seeds until they found a curve that contained a weakness the NSA could later exploit.

Koblitz and Menezes tackle this argument on grounds of basic common sense, history, and mathematics. The common sense argument, of course, is that you don’t sh*t where you eat. In other words, the NSA wouldn’t deliberately choose weak elliptic curves given that they planned to use them for encrypting Secret and Top Secret data for the next 20 years. 

But common sense convinces nobody. The mathematical argument is a bit more subtle, and may work where self-interest didn’t. Roughly speaking, Koblitz and Menezes point out that the NSA, despite their vast computing power, can’t try every possible seed. This is especially true given that they were generating these curves using 1990s equipment, and the process of evaluating (counting points on) an elliptic curve is computationally intensive. Thus, they assume that the NSA could only test 2^{48} or so different curves for their hypothetical weakness.

By calculating the number of possible curve families, Koblitz and Menezes show that a vast proportion of curves (for P-256, around 2^{209} out of 2^{257}) would have to be weak in order for the NSA to succeed in this attack. The implications of such a large class of vulnerable curves would be very bad for the field of ECC. It would dwarf every previously known class of weak curves and call into question the decision to use ECC at all.
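The arithmetic behind that proportion is straightforward (spelled out here for convenience, using the numbers already quoted above): if the NSA could test only about 2^{48} seeds, then for the search to have a decent chance of succeeding, the weak curves must make up at least a 2^{-48} fraction of all candidates, so

2^{257} \cdot 2^{-48} = 2^{209} \text{ curves would have to be weak.}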

In other words, Koblitz and Menezes are saying that if you accept the weak curve hypothesis into your heart, the solution is not to replace the NIST elliptic curves with anything at all, but rather, to leave the building as rapidly as possible and perhaps not shut the door on the way out. No joke.

On the gripping hand, this sounds very much like the plan NSA is currently implementing. Perhaps we should be worried.

by Matthew Green ( at October 22, 2015 11:45 AM

October 19, 2015


South Korea Signs Up to Cyber Theft Pledge

On Friday the Obama administration secured its second win toward establishing a new norm in cyberspace. The Joint Fact Sheet published by the White House includes the following language:

"no country should conduct or knowingly support cyber-enabled theft of intellectual property, trade secrets, or other confidential business information with the intent of providing competitive advantages to its companies or commercial sectors;" (emphasis added)

This excerpt, as well as other elements of the agreement, mirror words which I covered in my Brookings piece To Hack, Or Not to Hack? I recommend reading that article to get my full take on the importance of this language, including the bold elements.

It's likely many readers don't think of South Korea as an economic threat to the US. While South Korean operations are conducted at a fraction of the scale of their Chinese neighbors, ROK spies still remain busy. In January Shane Harris wrote a great story titled Our South Korean Allies Also Hack the U.S.—and We Don’t Seem to Care. It contains gems like the following:

From 2007 to 2012, the Justice Department brought charges in at least five major cases involving South Korean corporate espionage against American companies. Among the accused was a leading South Korean manufacturer that engaged in what prosecutors described as a “multi-year campaign” to steal the secret to DuPont’s Kevlar, which is used to make bulletproof vests...

All of the cases involved corporate employees, not government officials, but the technologies that were stolen had obvious military applications. South Korean corporate spies have targeted thermal imaging devices and prisms used for guidance systems on drones...

But South Korea has gone after commercial tech, as well. A 2005 report published by Cambridge University Press identified South Korea as one of five countries, along with China and Russia, that had devoted “the most resources to stealing Silicon Valley technology.”

I commend the administration for securing a "cyber theft pledge" from another country. Whether it will hold is another issue. Just today there is reporting claiming that China is still targeting US companies in order to benefit Chinese companies. I believe it is too soon to make a judgment.

I'm also watching to see which countries besides the US approach China, asking for similar "cyber theft pledges." With President Xi visiting the UK soon, will we see Prime Minister Cameron ask that China stop stealing commercial secrets from UK companies?

On a related note, I've encountered several people recently who were not aware of the excellent annual Targeting US Technologies report series by the US Defense Security Service. They are posted here. The most recent was published in August 2015.

by Richard Bejtlich ( at October 19, 2015 03:37 PM

October 16, 2015

Racker Hacker

GRE tunnels with systemd-networkd

Switching to systemd-networkd for managing your networking interfaces makes things quite a bit simpler than standard networking scripts or NetworkManager. Aside from being easier to configure, it uses fewer resources on your system, which can be handy for smaller virtual machines or containers.

Managing tunnels between interfaces is also easier with systemd-networkd. This post will show you how to set up a GRE tunnel between two hosts running systemd-networkd.

Getting started

You’ll need two hosts running a recent version of systemd-networkd. I’d recommend Fedora 22 since it provides very recent versions of systemd which include enhancements to systemd-networkd.

For this example, I’ve built one Rackspace Cloud Server in the DFW datacenter and another in IAD. I’ll connect them both together with a simple GRE tunnel.

Switch to systemd-networkd

I’ve detailed out this process before but I’ll do it again here. First off, we will need a directory made on both servers to hold systemd-networkd configuration files:

mkdir /etc/systemd/network

Let’s add a very simple network configuration for our eth0 interface on both hosts:

# cat /etc/systemd/network/
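(The filename and contents of this listing were lost from the feed. Based on the surrounding text, the file would have looked roughly like the following; the eth0.network name and the addresses are placeholders, not the original values.)

# assumed filename: /etc/systemd/network/eth0.network
[Match]
Name=eth0

[Network]
Address=203.0.113.10/24
Gateway=203.0.113.1
DNS=8.8.8.8
DNS=8.8.4.4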

Do this on both servers and be sure to fill in the Address and Gateway lines with the correct data for your servers. Also, feel free to use something other than Google’s DNS servers if needed.

It’s time to get our services in order so that systemd-networkd will handle our networking after a reboot:

systemctl disable network
systemctl disable NetworkManager
systemctl enable systemd-networkd
systemctl enable systemd-resolved
systemctl start systemd-resolved
rm -f /etc/resolv.conf
ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf

Don’t start systemd-networkd yet. Having systemd-networkd and NetworkManager fight over your interfaces can lead to a bad day.

Reboot both hosts and wait for them to come back online.

Configure the GRE tunnel

A GRE tunnel is a simple way to encapsulate packets between two hosts and send almost any protocol across the tunnel you build. However, it’s not encrypted. (If you’re planning to use this long-term, consider only using encrypted streams across the link or add IPSec on top of the GRE tunnel.)

If we want to route traffic over the GRE tunnel, we will need IP addresses on both sides. I’ll use for this example, but you’re free to use any subnet of any size (except /32!) for this network.

We need to tell systemd-networkd about a new network device that it doesn’t know about. We do this with .netdev files. Create this file on both hosts:

# cat /etc/systemd/network/gre-example.netdev
Remote=[public ip of remote server]
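(Only the Remote= line survived in the feed; the rest of the .netdev file would be along these lines, with the Name and Kind inferred from how gre-example is referred to later.)

[NetDev]
Name=gre-example
Kind=gre

[Tunnel]
Remote=[public ip of remote server]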

We’re making a new network device called gre-example here and we’re telling systemd-networkd about the servers participating in the link. Add this configuration file to both hosts but be sure that your Remote= line is correct. If you’re writing the configuration file for the first host, then the Remote= line should have the IP address of your second host. Do the same thing on the second host, but use the IP address of your first host there.

Now that we have a network device, we need to tell systemd-networkd how to configure the IP address on these new GRE tunnels. Let’s make a .network file for our GRE tunnel.

On the first host:

# cat /etc/systemd/network/

On the second host:

# cat /etc/systemd/network/
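(Both listings lost their filenames and contents. They would be near-identical .network files that assign one tunnel address per host; the gre-example.network filename and the 192.168.254.0/24 addresses below are my placeholders, since the article's original subnet didn't survive.)

On the first host, something like:

# assumed filename: /etc/systemd/network/gre-example.network
[Match]
Name=gre-example

[Network]
Address=192.168.254.1/24

On the second host, the same file but with:

Address=192.168.254.2/24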

Bringing up the tunnel

Although systemd-networkd knows we have a tunnel configured now, it’s not sure which interface should manage the tunnel. In our case, our public interface (eth0) is required to be up for this tunnel to function. Go back to your original files and add one line under the [Network] section for our tunnel:
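(The line itself was dropped from the feed. With systemd-networkd the directive is Tunnel=, added under [Network] in the eth0 .network file sketched earlier, alongside the existing Address/Gateway/DNS lines.)

[Network]
Tunnel=gre-example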


Restart systemd-networkd on both hosts and check the network interfaces:

# systemctl restart systemd-networkd
# networkctl
IDX LINK             TYPE               OPERATIONAL SETUP     
  1 lo               loopback           carrier     unmanaged 
  2 eth0             ether              routable    configured
  3 eth1             ether              off         unmanaged 
  4 gre0             ipgre              off         unmanaged 
  5 gretap0          ether              off         unmanaged 
  6 gre-example      ipgre              routable    configured
6 links listed.

Hooray! Our GRE tunnel is up! However, we have a firewall in the way.

Fixing the firewall

We need to tell the firewall two things: trust the GRE interface and trust the public IP of the other server. Trusting the GRE interface is easy with firewalld — just add this on both hosts:

firewall-cmd --add-interface=gre-example --zone=trusted

Now, we need a rich rule to tell firewalld to trust the public IP of each host. I talked about this last year on the blog. Run this command on both hosts:

firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="[IP ADDRESS]" accept'

If you run this on your first host, use the public IP address of your second host in the firewall-cmd command. Use the first host’s public IP address when you run the command on the second host.

Save your configuration permanently on both hosts:

firewall-cmd --runtime-to-permanent

Try to ping between your servers using the IP addresses we configured on the GRE tunnel and you should get some replies!
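With the placeholder tunnel addresses from the sketch above (192.168.254.1 and 192.168.254.2; substitute whatever you actually configured), the test looks like this:

# from the first host
ping -c 3 192.168.254.2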

Final words

Remember that GRE tunnels are not encrypted. You can add IPSec over the tunnel or you can ensure that you use encrypted streams across the tunnel at all times (SSL, ssh, etc).

The post GRE tunnels with systemd-networkd appeared first on

by Major Hayden at October 16, 2015 11:54 PM

October 14, 2015

Standalone Sysadmin

Debian Jessie and Puppet

Please correct me if this blog entry is wrong, because I really, truthfully hope it is.

It seems that Debian Jessie is not going to be receiving backports to Puppet 3 (client, anyway).  The way forward is through Puppet 4 (which most of you have probably known about for a while). Like me, you probably hoped to not have to go there so soon. Well, maybe not, but I had hoped to not go there so soon, anyway.

So the first thing I started to do was build out a Puppet 4 testing Vagrant box, which was surprisingly hard. Basically, puppet 3 supported the “--manifestdir” argument, but puppet 4 doesn’t, so if you install the puppet-agent package (which is how you install puppet 4 rather than puppet 3), it dies with “Could not parse application options: invalid option: --manifestdir”. Not cool.

The solution is to redesign your Vagrant puppet directory. Instead of “puppet/init.pp” and “puppet/modules/”, you need to make “puppet/environments/environmentname/manifests/site.pp” and “puppet/environments/environmentname/modules/”, respectively. The puppet config for my box looks like this:

config.vm.provision :puppet do |puppet|
  puppet.hiera_config_path = "hiera.yaml"
  puppet.environment_path = "puppet/environments"
  puppet.environment = "vagrant"
end

The directory structure looks like this:

├── hiera
│   ├── common.yaml
│   └── nodes
├── hiera.yaml
├── puppet
│   ├── environments
│   │   └── vagrant
│   │       ├── manifests
│   │       │   └── site.pp
│   │       ├── modules
│   │       │   ├── collectd
│   │       │   ├── concat
│   │       │   ├── redis
│   │       │   ├── stdlib
│   │       │   └── sysctl
│   │       └── Puppetfile
│   └── Puppetfile.lock
└── Vagrantfile

Now, you should know… a lot of the puppet code will work without changing, but there were some not-small changes, including a few real head scratchers. For instance:

Empty Strings in Boolean Context are true

In previous versions of Puppet, an empty string was evaluated as a false boolean value. You would see this in variable and parameter default values where conditional checks would be used to determine if someone passed in a value or left it blank.

class empty_string_defaults (
  $parameter_to_check = ''
) {
  if $parameter_to_check {
    $parameter_to_check_real = $parameter_to_check
  } else {
    $parameter_to_check_real = 'default value'
  }
}

Puppet’s old behavior of evaluating the empty string as false would allow you to set the default based on a simple if-statement. In Puppet 4.x, this behavior is flipped and $parameter_to_check_real will be set to an empty string.

You can check your existing codebase for this behavior with a puppet-lint plugin.

See the language page on boolean values for more info.

I… um… I don’t even really know what to say to that. I mean, they completely flipped the entire logic of the test around. That’s pretty unnecessary, in my book. I understand what they’re trying to say, that even an empty value isn’t False, but it’s hard to think of a more common shortcut when first writing code. I’m not going to argue that checking against an empty string is a good way to determine true or false, but wow, to change the behavior of the language entirely like that? That’s intense.
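If you rely on the old idiom, the least invasive fix I know of is to compare against the empty string explicitly (this is my sketch, not something from the original post):

class empty_string_defaults (
  $parameter_to_check = ''
) {
  # explicit comparison behaves the same way in Puppet 3 and Puppet 4
  if $parameter_to_check != '' {
    $parameter_to_check_real = $parameter_to_check
  } else {
    $parameter_to_check_real = 'default value'
  }
}

Alternatively, default the parameter to undef and test for that, which makes the intent clearer and still evaluates as false.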

Another one I found that will probably kill code:

Regular Expressions Against Non-Strings

Matching a value that is not a string with a regular expression now raises an error. In 3.x, other data types were converted to string form before matching (often with surprising and undefined results).

$securitylevel = 2

case $securitylevel {
  /[1-3]/: { notify { 'security low': } }
  /[4-7]/: { notify { 'security medium': } }
  default: { notify { 'security high': } }
}

Prior to Puppet 4.0, the first regex would match, and the notify { ‘security low’: } resource would be put into the catalog.

Now, in Puppet 4.0, neither of the regexes would match because the value of $securitylevel is an integer, not a string, and so the default condition would match, resulting in the inclusion of notify { 'security high': } in the catalog.

So, I can kind of see that. A 2 is clearly a number. But on the other hand, the puppet code will happily cast the string ’30’ as a number because that’s valid. But the number 30 can easily be turned into a string, too: “30”. See?

I actually tested to see if you explicitly declared the “2” as a string, would the regex pick it up. Nope, not at all:

$securityLevel = "2"

case $securitylevel {
  /[1-3]/: { notify { 'security low': } }
  /[4-5]/: { notify { 'security medium': } }
  default: { notify { 'security high': } }
}

vagrant@test-monitoring-client:/tmp/vagrant-puppet/environments/vagrant/manifests$ sudo puppet apply --modulepath=../modules/ ./site.pp
Warning: Facter: timeout option is not supported for custom facts and will be ignored.
Notice: Compiled catalog for test-monitoring-client.spacex.corp in environment production in 4.16 seconds
Notice: security high

Because of course not.

Anyway, I’m currently dealing with this, so I figured I’d write my first blog entry in a while so you could share in the fun. Good luck! And if you have a good technique for testing existing code against new puppet builds, let me know! I’m considering my CI options, but I’ve got a lot of new tests to write, it seems like.

by Matt Simmons at October 14, 2015 01:33 AM

October 13, 2015

The Tech Teapot

Tools to lessen the WPF pain

I’m back writing GUI code at the moment. I don’t think it would be an exaggeration to say that I hate writing GUIs. But, there’s nobody else to do it, so it has to be me.

It’s not just that I am doing GUI code though that makes my current project tough, it is that I am learning WPF at the same time. The last time I had to write a reasonable sized GUI I was using MFC and C++, circa 2002.

Why do I hate GUI programming?

It was probably incubated many years ago in the early 1990s when GUI programming was really quite painful. And fiddly, god it was fiddly. Make a change in the resource editor or C++ file, jump over to a dos box, recompile (could take anything up to 20-30 minutes), start the program, find that your change wasn’t quite right, rinse and repeat for days or weeks on end. Things haven’t changed that much, GUI programming is and always will be fiddly, but the tools have improved.

Tools will set you free

Fortunately, there are a number of tools that can help reduce feedback time.

  1. Kaxaml – a WYSIWYG tool for XAML. Has a number of limitations, which are pretty easy to get around. A very useful tool for building XAML based user interfaces outside of Visual Studio and the change, compile, debug cycle. Enter your XAML and see it rendered in real time. Changes can then be made very quickly. When you’re happy, copy the code over to your Visual Studio project; Free
  2. XAML Spy – a XAML version of the old Windows Spy program. Available in two versions: standalone application and Visual Studio add-on; Paid app + Free
  3. Microsoft Blend – would be the perfect tool if it wasn’t restricted to creating Windows 8.1 / Windows 10 app store applications. Not much use in my case, but if you are creating app store applications then you are in luck; Free, bundled with Visual Studio Community
  4. LINQPad – won’t help you with XAML, but will help if you prefer writing your WPF code in C#. Type your C# code into LINQPad and execute it straight away. Even shows you the Roslyn parse tree; Paid app
  5. Snoop – similar tool to XAML Spy, except that this is completely free and open source; Free
  6. Visual Studio 2015 – one of the nicer new features of the new Visual Studio covers similar ground to XAML Spy, but all nicely integrated into the IDE itself. Nice. Free


A lot of programming is by its very nature an iterative activity; GUI programming is just an extreme form. The shorter you can make the change, compile, debug cycle the better. The above tools will help you reduce the cycle, but they don’t eliminate it. GUI programming has always been, is and will always be profoundly fiddly.


by Jack Hughes at October 13, 2015 02:18 PM

Alen Krmelj

Migration from MongoDB to TokuMX – Big performance gain

Benchmark

About a month ago I did some benchmarking. I compared TokuMX, TokuMXse, MongoDB 2.6, MongoDB 3.0 and WiredTiger. My use case is mainly based on constant selects and updates, all at the same time. Inserts were made before the actual select/update test. Here are my results: As you can see, TokuMX performed best. This benchmark […]

by Alen Krmelj at October 13, 2015 01:38 PM

October 10, 2015

Sarah Allen

software isn’t real

Images and sound, which respond to our touch, create the feeling that we are interacting with something that is actually there. We talk about going to a web site. We tap buttons, open folders, and select choices from menus, and aren’t bothered by the inconsistency with real world analogs as we drag windows and surf the web.


I don’t know the story behind this image sent to me via Twitter by “Brett” — is it an original drawing or a native digital creation? or were 140 characters too short for attribution?

There is a strength in the Tim Berners-Lee vision of the web as “always a little bit broken” allowing for us to post these disconnected fragments which cause images and video to act like language, spread virally and owned by no one, in direct contradiction to our copyright laws. I also feel a loss that there was never widespread adoption of the Ted Nelson model that he envisioned when he coined the word “hypertext” — every link could let you dive into its source and references were a “transclusion” — instead of copying, everything would be by reference.

We need to intentionally draw real world connections in this strange, fictional world of software we create. Even the so-called users of software create this world of code and data that we inhabit.

The post software isn’t real appeared first on the evolving ultrasaurus.

by sarah at October 10, 2015 01:04 PM

what does it take to create an epic win?

An epic win is an extraordinary outcome that feels impossible, until it happens. What does it take to imagine that epic win? What needs to happen so that we can believe that it is possible, and in creating that belief, we create the space for it to actually happen.


Many steps are hard, and many have significant risk of failure. It seems so unlikely that enough things will go right for you to achieve that goal. You try things and sometimes you fail, and sometimes you win, and every win moves you closer to that goal.

When we believe something is possible and desirable, yet uncertain, we want it more. It explains why we like gambling, and (mostly) don’t like to work even though we know we’ll get paid for it. There’s quite a bit of fascinating research that dives into the details. I recently read a really nice explanation:

…making the unknown known — i.e., figuring out what is in a wrapped package or finding out which reward one has earned — is a positive experience. Because people are excited to find out what they can actually get, working for an uncertain reward makes the whole situation more like a game and less like work.

ChicagoBooth report on The Motivating-Uncertainty Effect

It is the uncertainty which makes the epic win so appealing. Persisting through that experience of uncertainty seems to be its own reward. Our human brains release dopamine at these moments — this chemical does not only bring us lovely happy feelings, it’s also correlated with increasing our ability to learn. Of course, we have to win around half of the time, on average, for it to continue to be fun.


In retrospect, we can see a complex feat that has been reduced to a thousand micro-skills, where a human being has learned those skills and applied them in such a focused way that the impossible has not only become possible, it has become certain.

Witnessing an epic win expands our ideas about our own limits. We can see that another human did this thing that we never imagined was possible.

There is a joy in mastery. We reach a certain level of skill where we can be incredibly good, yet in practicing that skill we become better. It is interesting to look at the definition of epic win, as evidenced by epic win video compilations (there are over 50 of these on YouTube) and parkour. Almost all of the videos show solo wins — incredible acts by individuals. I believe the more epic wins are those where we work with other humans to do more than we each could make possible, whether it be an organized team where you know all of the people, or a crowd aligned to a purpose mediated by software, or seemingly unconnected individuals who find themselves part of a karass.

The post what does it take to create an epic win? appeared first on the evolving ultrasaurus.

by sarah at October 10, 2015 01:26 AM

October 08, 2015

Sarah Allen

best corporate policy ever

One of my best decisions as CEO of Blazing Cloud was to set a policy that any employee was authorized to spend up to $25 on the purchase of something they needed for their work.

This was intended for things like office supplies and notebooks. Also people were encouraged to buy things for the whole office, not just for themselves, when they saw a need. This meant that it wasn’t just the office manager’s job to make sure we had whatever anyone decided they needed. We were a small team and people would look out for each other.

I think someone may have actually asked me about the purchase of a giant inflatable dinosaur, but it was a pretty independent action from the team from my perspective. I arrived back from a ski trip to a new member of the Blazing Cloud family. She was named Tyra, and she would hang out near the entrance or above a conference room. She celebrated Pride with us with her own little rainbow flag and even earned a mention in an NPR story.

She was a part of the Blazing Cloud culture. I used to say that “Blazing Cloud was optimized for fun.” I was often unsuccessful in achieving that goal, but I’m proud of a few things I did that I think helped foster a spirit of joy and delight amongst the creative and talented people who joined me in making that small company happen.

Culture is What You Do

Most people don’t like making rules, but rules can be liberating. It was liberating to have a corporate policy about how money is spent and what does not need approval, followed by examples of people collaborating on shared purchases and individuals buying a favorite (expensive) notebook or pen. I believe that craftspeople need to have the best tools. I don’t need to mediate what kind of pen someone uses, and having them feel a sense of pleasure in their work as their pen slides across paper is priceless.

I don’t have any research to back it up, but I believe that there is a psychology of abundance — this feeling of abundance sparks creativity. I believe that large monitors make us smarter. We need space to think and an endless supply of post-it notes. We need to be surrounded by smart people who value difference — different ideas, different customs, different skills. I don’t care who you are at home, if you show up and are kind to your co-workers, yet don’t back down from speaking truth when you disagree. There’s a whole set of values that I hold and a belief system about what makes a great team, and the seemingly little decisions create an environment to make that happen.

All of the Blazing Cloud culture might not be directly related to that giant inflatable dinosaur, but having rules and precedent that favor independent action and individual choice can foster creative inspiration.


The post best corporate policy ever appeared first on the evolving ultrasaurus.

by sarah at October 08, 2015 01:01 PM


Engine Building Lesson 1: A Minimum Useless Engine

In this lesson, we’re going to explore minimalism, in this case in the form of the most minimal engine possible (without obfuscating it).

The least boilerplate code for an engine looks like this:

A not so complete example
#include <openssl/engine.h>


This example isn’t complete, it will not compile. However, it contains the absolute minimum required for this module to even be recognised as an OpenSSL engine.

When an engine is being loaded, the OpenSSL library will perform certain checks to see that the engine is compatible with this OpenSSL version, and finish up by calling the function whose name is given to IMPLEMENT_DYNAMIC_BIND_FN (bind in our case).
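(The body of the “not so complete example” above appears to have been stripped from the feed. Given the description of IMPLEMENT_DYNAMIC_BIND_FN, the minimal skeleton was presumably something along these lines — my reconstruction, and as the text says it won’t compile on its own because bind isn’t defined yet. The two macro lines are also assumed to sit at the bottom of the later, complete examples.)

#include <openssl/engine.h>

/* boilerplate that makes the shared object loadable as a dynamic engine;
   bind is the function OpenSSL calls when the engine is loaded */
IMPLEMENT_DYNAMIC_CHECK_FN()
IMPLEMENT_DYNAMIC_BIND_FN(bind)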

So, to complete the engine a bit more, we need to add the function bind, which should do all the final steps to tie this engine’s functionality with the calling OpenSSL.

A buildable example
#include <openssl/engine.h>

static int bind(ENGINE *e, const char *id)
{
  return 1;
}


This engine will simply load. Period.

At this point, we can build it. Let’s pretend we call it silly-engine.c and we’re running on Linux with some version of OpenSSL installed. Then, building is as simple as this:

$ gcc -fPIC -o silly-engine.o -c silly-engine.c
$ gcc -shared -o -lcrypto silly-engine.o

Let’s try it out! Something to keep in mind is that if OpenSSL is just given an engine name, such as silly-engine, it will use platform specific library naming conventions to find the actual shareable name. So if given silly-engine as an engine name on Linux, it will try to find, and that’s not at all what we’ve produced.

It is, however, also possible to give OpenSSL an absolute path for the engine; that’s what we’ll try out now.

$ openssl engine -t -c `pwd`/
Segmentation fault


This happens to be a bug in OpenSSL; it should really report that the engine hasn’t self-identified with an identity string and a longer name (which can be used as a one-line description).
[fix coming up]

So, let’s revisit the code one last time, and add some code in bind()

A working example
#include <stdio.h>

#include <openssl/engine.h>

static const char *engine_id = "silly";
static const char *engine_name = "A silly engine for demonstration purposes";

static int bind(ENGINE *e, const char *id)
{
  int ret = 0;

  if (!ENGINE_set_id(e, engine_id)) {
    fprintf(stderr, "ENGINE_set_id failed\n");
    goto end;
  }
  if (!ENGINE_set_name(e, engine_name)) {
    printf("ENGINE_set_name failed\n");
    goto end;
  }

  ret = 1;
 end:
  return ret;
}


Rebuilding and running gives us some nice output

$ gcc -fPIC -o silly-engine.o -c silly-engine.c
$ gcc -shared -o -lcrypto silly-engine.o
$ openssl engine -t -c `pwd`/
(/home/levitte/tmp/silly-engine/ A silly engine for demonstration purposes
Loaded: (silly) A silly engine for demonstration purposes
     [ available ]

And voilà! A first engine, workable but entirely silly and useless.

If you’re interested, there’s a slightly more elaborate variant of this engine, with a configuration script and Makefile suitable for GNU auto* tools, libtool and automake, for those who want to study that kind of setup more closely.

Next lesson will be about adding a digest implementation.

October 08, 2015 03:41 AM

October 07, 2015


Engine School, a Path to Writing Standalone Engines

For the longest time, it seems that people have wanted to have their diverse engines bundled with the OpenSSL source, as if there was no other way to build it or distribute it.

Nothing could be further from the truth. Also, having the engine for some hardware bundled with the OpenSSL source presents a maintenance problem, and the better solution is for those who have an engine to maintain it themselves.

So, how is it done? That’s something that we will discover together in a series of articles, about one or two weeks apart.

First lesson up, warming up with a minimal, silly and utterly useless engine!

At all times, feel free to comment, to make suggestions, ask for specific future lessons, and so on and so forth.

October 07, 2015 09:32 PM


For the PLA, Cyber War is the Battle of Triangle Hill

In June 2011 I wrote a blog post with the ever polite title China's View Is More Important Than Yours. I was frustrated with the Western-centric, inward-focused view of many commentators, which put themselves at the center of debates over digital conflict, neglecting the possibility that other parties could perceive the situation differently. I remain concerned that while Western thinkers debate war using Western, especially Clausewitzian, models, Eastern adversaries, including hybrid Eastern-Western cultures, perceive war in their own terms.

I wrote in June 2011:

The Chinese military sees Western culture, particularly American culture, as an assault on China, saying "the West uses a system of values (democracy, freedom, human rights, etc.) in a long-term attack on socialist countries...

Marxist theory opposes peaceful evolution, which... is the basic Western tactic for subverting socialist countries" (pp 102-3). They believe the US is conducting psychological warfare operations against socialism and consider culture as a "frontier" that has extended beyond American shores into the Chinese mainland.

The Chinese therefore consider control of information to be paramount, since they do not trust their population to "correctly" interpret American messaging (hence the "Great Firewall of China"). In this sense, China may consider the US as the aggressor in an ongoing cyberwar.

Today thanks to a Tweet by Jennifer McArdle I noticed a May 2015 story featuring a translation of a People's Daily article. The English translation is posted as Cybersovereignty Symbolizes National Sovereignty.

I recommend reading the whole article, but the following captures the spirit of the message:

Western hostile forces and a small number of “ideological traitors” in our country use the network, and relying on computers, mobile phones and other such information terminals, maliciously attack our Party, blacken the leaders who founded the New China, vilify our heroes, and arouse mistaken thinking trends of historical nihilism, with the ultimate goal of using “universal values” to mislead us, using “constitutional democracy” to throw us into turmoil, use “colour revolutions” to overthrow us, use negative public opinion and rumours to oppose us, and use “de-partification and depoliticization of the military” to upset us.

This article demonstrates that, four years after my first post, there are still elements, at least in the PLA, who believe that China is fighting a cyber war, and that the US started it.

I thought the last line from the PLA Daily article was especially revealing:

Only if we act as we did at the time of the Battle of Triangle Hill, are riveted to the most forward position of the battlefield and the fight in this ideological struggle, are online “seed machines and propaganda teams”, and arouse hundreds and thousands in the “Red Army”, will we be able to be good shock troops and fresh troops in the construction of the “Online Great Wall”, and will we be able to endure and vanquish in this protracted, smokeless war.

The Battle of Triangle Hill was an engagement during the Korean War, with Chinese forces fighting American, South Korean, Ethiopian, and Colombian forces. Both sides suffered heavy losses over a protracted engagement, although the Chinese appear to have lost more and viewed their attrition strategy as worthwhile. It's ominous this PLA editorial writer decided to cite a battle between US and Chinese forces to communicate his point about online conflict, but it should make it easier for American readers to grasp the seriousness of the issue in Chinese minds.

by Richard Bejtlich ( at October 07, 2015 10:24 AM

October 06, 2015

The Geekess

What makes a good community?

*Pokes head in, sees comments are generally positive*

There’s been a lot of discussion in my comment sections (and on LWN) about what makes a good community, along with suggestions of welcoming open source communities to check out. Your hearts are in the right place, but I’ve never found an open source community that doesn’t need improvement. I’m quite happy to give the Xorg community a chance, mostly because I believe they’re starting from the right place for cultural change.

The thing is, reaching the goal of a diverse community is a step-by-step process. There are no shortcuts. Each step has to be complete before the next level of cultural change is effective. It’s also worth noting that each step along the way benefits all community members, not just diverse contributors.

Level 0: basic human decency

In order to attract diverse candidates, you need to be known as a welcoming community, with a clear set of agreed-upon social norms. It’s not good enough to have a code of conduct. Your leaders need to be actively behind it, and it needs to be enforced.

A level 0 welcoming community exhibits the following characteristics:

Level 1: on-boarding

The next phase in improving diversity is figuring out how to on-board newcomers. If diverse candidates are only 1-10% of newcomers, but you have a 90% fail rate for people who try to make their first contribution, well, you can’t expect many diverse newcomers to stick around, can you? It’s also essential to explain your unwritten tribal knowledge, so that diverse candidates (who are more likely to be afraid of upsetting the status quo) know what they’re getting into.

Signs of a level 1 welcoming community:

  • Documentation on where to interact with the community (irc, mailing list, bug tracker, etc)
  • In-person conferences to encourage networking with new members
  • Video or in-person chats to put a face to a name and encourage empathy and camaraderie
  • Documented first steps for compiling, running, testing, and polishing contributions
  • Easy, no-setup web harness for testing new contributions
  • Step-by-step tutorials, which are kept up-to-date
  • Coding style (what’s required and what’s optional, and who to listen to when developers disagree)
  • Release schedule and feature cut-off dates
  • How to give back non-code contributions (bug reports, docs, tutorials, testing, event planning, graphical design)

Level 2: meaningful contributions

The next step is figuring out what to do with these eager new diverse candidates. If they’ve made it this far through the gauntlet of toxic tech culture, they’re likely to be persistent, smart, and seeking a challenge. If you don’t have meaningful bigger projects for them to contribute to, they’ll move on to the next shiny thing.

Signs of a level 2 welcoming community:

  • Newbie todo lists
  • Larger, self-contained projects
  • Welcoming, available mentors
  • Programs to pay newbies (internships, summer of code, etc)
  • Contributors are thanked with heartfelt sincerity and an explicit acknowledgment of what was good and what could be improved
  • Community creates a casual feedback channel for generating ideas with newcomers (irc, mailing list, slack, whatever works)
  • Code of conduct encourages developers to assume good intent

Level 3: succession planning

The next step for a community is to figure out how to retain those diverse candidates. How do you promote these new, diverse voices in order to ensure they impact your community at a leadership level? If your leadership is stale, comprised of the same “usual faces”, people will leave when they start wanting to have more of a say in decisions. If your community sees bright diverse people quietly leave, you may need to focus on retention.

Signs of a level 3 welcoming community:

  • Reviewers are rewarded and questions from newcomers on unclear contributions are encouraged
  • Leaders and/or maintainers are rotated on a set time schedule
  • Vacations and leaves of absence are encouraged, so backup maintainers have a chance to learn new skills
  • Community members write tutorials on the art of patch review, release management, and the social side of software development
  • Mentorship for new presenters at conferences
  • Code of conduct encourages avoiding burnout, and encourages respect when people leave

Level 4: empathy and awareness

Once your focus on retention and avoiding developer burnout is in place, it’s time to tackle the task most geeks avoid: general social issues. Your leaders will have different opinions, as all healthy communities should! However, you need to take steps to ensure the loudest voice doesn’t always win by tiring people out, and that less prominent and minority voices are heard.

Signs of a level 4 welcoming community:

  • Equally values developers, bug reporters, and non-code contributors
  • Focuses on non-technical issues, including in-person discussions of cultural or political issues with a clear follow-up from leaders
  • Constantly improves documentation
  • Leadership shows the ability to recognize their mistakes and change when called out
  • Community manager actively enforces the code of conduct when appropriate
  • Code of conduct emphasizes listening to different perspectives

Level 5: diversity

Once you’ve finally got all that cultural change in place, you can work on actively seeking out more diverse voices and have a hope of retaining them.

Signs of a level 5 welcoming community:

  • Leadership gatherings include at least 30% new voices, and familiar voices are rotated in and out
  • People actively reach outside their network and the “usual faces” when searching for new leaders
  • Community participates in diversity programs
  • Diversity is not just a PR campaign – developers truly seek out different perspectives and try to understand their own privilege
  • Gender presentation is treated as a non-issue at conferences
  • Conferences include child care, clearly labeled veggie and non-veggie foods, and a clear event policy
  • Alcoholic drinks policy encourages participants to have fun, rather than get smashed
  • Code of conduct explicitly protects diverse developers, acknowledging the spectrum of privilege
  • Committee handling enforcement of the code of conduct includes diverse leaders from the community

The thing that frustrates me the most is when communities skip steps. “Hey, we have a code of conduct and child care, but known harassers are allowed at our conferences!” “We want to participate in a diversity program, but we don’t have any mentors and we have no idea what the contributor would work on long term!” So, get your basic cultural changes done first, please.

*pops back off the internet*

Edit: Please stop suggesting BSDs or Canonical/Ubuntu as “better” communities.

by sarah at October 06, 2015 01:14 PM

October 04, 2015

No, you are not doing DevOps (…and nor am I)

A word of caution

This post is addressed to all those people who think they know what DevOps means, but they don’t. If you don’t recognize yourself in this post, then maybe it’s not for you. If you recognize yourself, then beware: you’re going to be insulted: read at your own risk and don’t bother asking for apologies.

DevOps is everywhere. So bad it’s not.

DevOps is in everyone’s mouth. It’s the theme for entire conferences or the topic for at least one talk in many conferences. It fills the blogs of people and companies. It fills the mouths of clueless executives.

Unfortunately, the more widespread the word, the less understood the concept. It’s easy to see how, as the word “DevOps” goes up and up the management chain, its meaning gets oversimplified to the point that when it reaches the Board it has a completely different meaning than when it started the journey. That’s understandable. The depressing part is when this meaning goes back down the chain and takes hold in the heads of mediocre techies, giving birth to horrible monsters that, alas, get tagged as DevOps. Examples of such monsters are:

  • DevOps is identified with the usage of a certain set of tools: you use Puppet, CFEngine, Chef? Or maybe Ansible or SaltStack? You make nifty graphs with your Kibana? You have a hybrid cloud with OpenStack and AWS? You do microservices, pack them into Docker containers, deploy them “in the cloud”? Then you DevOps.
  • You fired all your operators and your developers must now take care of everything? Operators are a stinky legacy of the past? Then you DevOps.
  • You replaced your job title (“Web developer”, “System Architect”, “Crow counter engineer”…) with “DevOps” or “DevOps engineer”. Because yeah, you DevOps.


The first, for example, is bullshit. Using a certain tool/set of tools isn’t DevOps, the same way that owning a Ferrari doesn’t make you a professional racing driver.

The second is also bullshit. Let me whisper a thing in your ear: it’s called “cloud”, but it’s not thin air. Guess who’s making sure that your shitty containers are running on something? Hint: it’s not your grandma.

The third is, guess what?, bullshit. Let’s be clear: it’s a good move in order to get caught in the nets of keyword-based CV scanners at recruitment agencies. You have a point there. But it’s no more meaningful than someone running a one-person company and tagging himself as “CEO at John Smith Consulting LTD”.

But these misconceptions are now widespread. While it’s understandable, though very sad, that non-technical executives don’t get it right, fire all the operators and make a mess of their own business, it made me furious and frustrated at the same time when a developer, now tagging himself as DevOps because “hey, I’m deploying Docker containers in OpenShift”, addressed me and my professional category as a useless legacy of the past in front of an audience. My dear fellow, it was just out of respect for that audience and not for you that you didn’t get a loud “fuck you” in return.

DevOps is the best evolutionary step that has happened in our industry in ages, awakening people to the realisation that keeping well-isolated, hyper-specialized people prisoner in watertight silos is a poor way to run a crucial part of your business: IT. And it’s a culture, not a tool!

Back to basics: DevOps is a culture…

…and as such is not suitable to idiots, but that’s another story.

The first time I read an organic definition of DevOps it was in Mike Loukides’ booklet “What is DevOps”. There are a few sentences I’d like to quote here.

Referring to a discussion between Adrian Cockcroft and John Allspaw over Adrian’s article NoOps at Netflix [full title: Ops, DevOps and PaaS (NoOps) at Netflix], Mike says:

What Adrian described as “NoOps” isn’t really. Operations doesn’t go away […] If NoOps is a movement for replacing operations with something that looks suspiciously like operations, there’s clearly confusion.

Mike goes on describing how at the dawn of the modern computers era there was no separation at all between developers and operations, how that separation came in the 60’s, evolved into the IT Operations of the 80’s/90’s, was put under pressure by the need for more automation, and started to blur again when “configuration management” had to be accepted as a mandatory necessity. Infrastructure as a code was born.

The end of operators? Not quite. Mike says:

Rather than envision some sort of uber developer, who understands big data, web performance optimization, application middleware, and fault tolerance in a massively distributed environment, we need operations specialists on the development teams.

Keep this in mind for a while: no uber-developers, but operations specialists in development teams. Got it? OK, keep reading:

The infrastructure doesn’t go away — it moves into the code; and the people responsible for the infrastructure, the system administrators and corporate IT groups, evolve so that they can write the code that maintains the infrastructure. Rather than being isolated, they need to cooperate and collaborate with the developers who create the applications. This is the movement informally known as “DevOps.”

We have an idea of what DevOps really is now. It’s a culture of cooperation. Jez Humble goes even further and asserts that there is no such thing as a “DevOps team”, stressing again how “For developers to take responsibility for the systems they create, they need support from operations to understand how to build reliable software that can be continuous deployed to an unreliable platform that scales horizontally“. Operations is still there.

What’s not there is the silo: the wall rigidly dividing the developers and the operations, or the hyper-specialization that requires developers to lobotomise and ignore anything system or network, and conversely for systems or network administrators to ignore whatever has to do with code: that separation is blurred. Developers need to know more about how the infrastructure supporting their software actually works, so that they can make it better and more reliable. Systems and network administrators need to understand what the code is supposed to do to be able to provide the best solutions.

That’s what high-performing IT organisations are doing.

Are you?

I’m not. And that upsets me.

Resistance is futile. But fierce.

My resistance is futile. People who claim they are doing DevOps and aren’t won’t repent: they’ll go straight on down their road, head down. After all, in a democracy the majority decides what’s right and what isn’t, and they will be the majority sooner or later. If they aren’t already.

The resistance of those not doing DevOps is futile. The game is a cultural shift at all levels, starting from the lowest level of operators and their managers and up to the top. It’s a change in how some people have worked for one or more decades, they now have to rethink everything. Never easy to do, as the most common reaction for people in such cases is to push on the brakes with both feet.

Unfortunately, we are talking about a vehicle that is more like a bus than a city car. Whoever drives the bus is not only jeopardizing their own journey, they are jeopardizing the journey of everyone on the bus with them. Even if you are the one driving, you are surrounded by people doing all sorts of sabotage: they shift your gear into neutral, they pull the handbrake, they cover your eyes…

Nobody goes anywhere.

Maybe that’s why so many of us aren’t doing DevOps, even if we’d love to.

Tagged: DevOps, Sysadmin

by bronto at October 04, 2015 05:58 PM

Interesting links: October 4th 2015

Here's a bunch of links I found interesting in the last few weeks:

by admin at October 04, 2015 09:27 AM

Batch create new users on Linux

A while ago I had to create many new users on a Linux machine. Since I'm lazy, I opted to automate this process. The newusers command combined with pwgen (to generate new passwords) was the solution.

First I installed pwgen, a utility to automatically generate passwords:

$ sudo apt-get install pwgen

I created a file with the new user names to create.

$ cat newusers.txt
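(The example contents didn’t survive the feed; the file is simply one username per line, for instance with made-up names like these.)

alice
bob
carol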

A simple shell one-liner generates a new file from this in the right format for the newusers tool:

$ for USER in $(cat newusers.txt); do 
  echo "$USER:$(pwgen 12 -n1)::::/home/$USER:/bin/bash" >> newusers.created.txt;
done
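Each generated line follows the newusers(8) input format, username:password:uid:gid:gecos:homedir:shell; leaving uid, gid and gecos empty lets newusers pick sensible defaults. A generated line would look roughly like this (made-up username and password):

alice:Eiqu7ooL3ahP::::/home/alice:/bin/bash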

Finally, we create the new users:

$ sudo newusers newusers.created.txt

The newusers.created.txt file was handed over to the person in charge of notifying the users about their new account.

by admin at October 04, 2015 09:02 AM

October 03, 2015

League of Professional System Administrators

Welcome to our newest sponsor: Edgestream Partners

The LOPSA board welcomes our newest sponsor Edgestream Partners to the LOPSA community. Edgestream Partners is a small group of scientists and engineers with a unique approach to trading in the financial markets. They design, build and run a global trading software platform. They take pride in their software craftsmanship and use Python, Cython and C on Linux to run their global trading operations. They also use open-source tools as much as possible such as Python, PostgreSQL, numpy, git, Cobbler, Puppet and Ansible. Tools that LOPSA members use and have contributed to over the years.

by lopsawebstaff at October 03, 2015 03:50 AM