Planet SysAdmin

March 29, 2015

Chris Siebenmann

SSH connection sharing and erratic networks

In the past I've written about SSH connection sharing and how I use it at work. At home, though, I've experimented with it but wound up abandoning it completely, because it turns out that SSH connection sharing has a big drawback in the sort of networking environment I have at home.

Put simply, with connection sharing, if one SSH session to a host stalls or dies, they all do. All of your separate-looking sessions are running over one underlying SSH transport stream and one underlying TCP connection. In a reliable networking environment this is no problem. In a networking environment where you sometimes experience network stalls or problems, it means that the moment you have one, all of your connections stall and you can't make new ones. You get to wait out TCP retransmission timeout delays and so on, for everything. And if an interruption is long enough to cause an active session to close itself, you lose everything, including the inactive sessions that would otherwise have survived.
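For concreteness, OpenSSH connection sharing of this sort is normally set up with a stanza like the following in ~/.ssh/config (the ControlPath pattern shown is just one common choice, not necessarily what the author used):

```
Host *
    # Reuse one TCP connection for all sessions to the same host
    ControlMaster auto
    # Where the control socket lives; %r/%h/%p are user/host/port
    ControlPath ~/.ssh/cm-%r@%h:%p
    # Keep the master connection open 5 minutes after the last session exits
    ControlPersist 5m
```

Removing or commenting out the ControlMaster line (or setting it to "no" for specific hosts) is what abandoning connection sharing amounts to in practice.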

It turns out that this is really annoying and frustrating when (or if) it happens. On the university network at work, it basically never comes up (especially since most of the machines I do this to are on the same subnet as my office workstation), but from home the network stalls, dropped ACKs, and outright brief connection losses happened too often for me to keep my sanity in the face of this. The minor savings in latency for new connections weren't worth the heartburn when something went wrong.

(The most irritating things were and are generally retransmit timeouts. When the network comes back you can easily get into a situation where your new SSH session is perfectly active but the old session that had some output and thus a transmit timeout during the network interruption is just sitting there silently waiting for the TCP timeout to expire and a retransmit to kick off. This can happen on either end, although often it will happen on your end because you kept typing even as the network path was failing.)

by cks at March 29, 2015 07:05 AM

March 28, 2015

Disabling GNU Screen lock screen

Lock screen

For the impatient (TL;DR): put export LOCKPRG='/bin/true' in your .bashrc, kill all screen sessions, and start a new one. Ctrl-a x should now do nothing.

I have been using GNU Screen for a while now. I usually create multiple new windows (Ctrl-a c) every day, then flip back and forth between them (Ctrl-a n) or just toggle between the last two windows used (Ctrl-a a). Sometimes my finger slips and I hit Ctrl-a x, which presents a password prompt. This is GNU Screen's lock screen function.

Normal and not so normal

Normally you just enter your user password and the screen will unlock. Screen uses /local/bin/lck or /usr/bin/lock by default. This is all fine and dandy if you have a user account password to enter. But if you have servers that only use SSH keys and don't use passwords, you have no valid password to enter; in other words, you are stuck at the lock screen. One way around it is to log in to the same machine and re-attach to the same screen session (normally "screen -r" if you have only one session open), then kill the session with the lock screen. Having to do this is annoying.

Disabling lock screen

I wanted to be able to just disable the lock screen and be done with it. The man page shows there is no option to turn this off, so you have to work around it. The way I found to do this on machines without user passwords is to disable the terminal lock program by setting the environment variable LOCKPRG to "/bin/true". For bash, put export LOCKPRG='/bin/true' in your .bashrc. Kill all screen sessions and start a new one. Ctrl-a x should now do nothing.
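The whole procedure can be sketched as a few shell commands (this assumes bash and a default ~/.bashrc; the session name in the commented-out line is a placeholder):

```shell
# LOCKPRG names the program screen execs to lock the terminal;
# /bin/true exits immediately with success, making locking a no-op.
echo "export LOCKPRG='/bin/true'" >> ~/.bashrc
export LOCKPRG='/bin/true'

# Screen sessions inherit their environment at startup, so existing
# sessions still use the old lock program. List them and quit each one:
screen -ls || true               # list running sessions, if any
# screen -S mysession -X quit    # example: quit a session named "mysession"
```

A new screen session started after this picks up the LOCKPRG setting, and Ctrl-a x does nothing.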

Warning about disabling LOCKPRG

For security's sake, be careful about disabling LOCKPRG. /bin/true disables all terminal screen locking, so any other program that invokes whatever is in the LOCKPRG variable is effectively disabled as well.

March 28, 2015 08:05 PM

Chris Siebenmann

All browsers need a (good) way to flush memorized HTTP redirects

As far as I know, basically all browsers cache HTTP redirects by default, especially permanent ones. If you send your redirects without cache-control headers (and why would you for a permanent redirect?), browsers may well cache them for a very long time. In Firefox at least, these memorized redirects are extremely persistent and seem basically impossible to get rid of in any easy way (having Firefox clear your local (disk) cache certainly doesn't do it).

This is a bad mistake. The theory is that a permanent redirect is, well, permanent. The reality is that websites periodically send permanent redirects that are not in fact permanent (cf) and that disappear after a while. Except, of course, that when your browser more or less permanently memorizes this temporary permanent redirect, it's nigh-permanent for you. So browsers should have a way to flush such an HTTP redirect, just as they have shift-reload to force a real cache-bypassing page refresh.
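If you run the server side, you can at least bound how long browsers memorize a redirect by attaching an explicit lifetime to it. A hypothetical nginx example (the paths are made up):

```nginx
# Send a permanent redirect, but cap how long clients
# may cache (memorize) it -- here, one day.
location /old-page {
    add_header Cache-Control "max-age=86400";
    return 301 /new-page;
}
```

This doesn't help with redirects a browser has already memorized, but it keeps a mistaken "permanent" redirect from haunting your visitors indefinitely.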

(A normal shift-reload won't do it because HTTP redirections are attached to following links, not to pages (okay, technically they're attached to looking up URLs). You could make it so that a shift-reload on a page flushes any memorized HTTP redirections for any link on the page, but that would be both kind of weird and not sufficient in a world where JavaScript can materialize its own links as it feels like.)

Sidebar: Some notes about this and Firefox

I've read some suggestions that Firefox will do this if you either tell Firefox to remove the site's browsing history entirely or delete your entire cache directory by hand. Neither is what I consider an adequate solution; one has drastic side effects and the other requires quite obscure by-hand action. I want something within the browser that is no more effort and impact than 'Preferences / Advanced / Network / Clear cached web content'. It actually irritates me that telling Firefox to clear cached content does not also discard memorized HTTP redirections, but it clearly doesn't.

If you have some degree of control over the target website, you can force Firefox to drop the memorized HTTP redirection by redirecting back to the right version. This is generally only going to be useful in some situations, eg if you have the same site available under multiple names.

by cks at March 28, 2015 04:29 AM

March 27, 2015


The Attack on GitHub Must Stop

For many years, private organizations in the West have endured attacks by the Chinese government, its proxies, and other parties. These intruders infiltrated private organizations to steal data. Those not associated with the targeted organizations were generally not directly affected.

Today an action by the Chinese government is affecting millions of users around the world. This is unacceptable.

You may be aware that an American technology company, GitHub, is suffering a massive distributed denial of service attack at the time of writing.

According to Insight Labs, Internet traffic within China is being manipulated such that users are essentially attacking GitHub: they are unwittingly requesting two sites hosted by GitHub. The first is a mirror of the Chinese edition of the New York Times (blocked for several years). The other is a mirror of a Web site devoted to discovering and exposing Internet filtering by China's "Great Firewall."

As noted in this Motherboard story, it's unlikely a party other than the Chinese government could sustain this attack, given the nature of the traffic injection within the country's routing infrastructure. Even if somehow this is not a state-executed or state-ordered attack, according to the spectrum of state responsibility, the Chinese government is clearly responsible in one form or another.

It is reprehensible that the censorship policies and actions of a nation-state are affecting "over 3.4 million users and with 16.7 million repositories... the largest code host in the world." (Source)

The Chinese government is forcing GitHub to expend its private resources in order to continue serving its customers. I call on the US government, and like-minded governments and their associates, to tell the Chinese to immediately stop this activity. I also believe companies like IBM, who are signing massive IT deals with "Chinese partners," should reconsider these associations.

by Richard Bejtlich at March 27, 2015 07:40 PM

Everything Sysadmin

Usenix LISA 2015 Call For Participation (3 weeks left!)

Only 3 weeks left to submit talk and paper proposals for LISA 2015. This year's conference is in Washington D.C. on November 8-13.

This might be a good weekend to spend time writing your first draft!

Don't be afraid to submit proposals early. Unsure of your topic? Contact the chairs and bounce ideas off of them.

March 27, 2015 05:37 PM

Chris Siebenmann

Looking more deeply into some SMTP authentication probes

Back when I wrote about how fast spammers showed up to probe our new authenticated SMTP service, I said:

[...] I can see that in fact the only people who have gotten far enough to actually try to authenticate are a few of our own users. Since our authenticated SMTP service is still in testing and hasn't been advertised, I suspect that some people are using MUAs (or other software) that simply try authenticated SMTP against their IMAP server just to see if it works.

Ha ha. Silly me. Nothing as innocent as this is what's actually going on. When I looked into this in more detail, I noticed that the IP addresses making these SMTP authentication attempts were completely implausible for the users involved (unless, for example, some of them had abruptly relocated to Eastern Europe). Further, some of the time the authentication attempts were against usernames that were their full email addresses ('<user>@ourdomain') instead of just their login name. But they were all real usernames, which is what I noticed first.

What I think has to be going on is that attackers are mining actual email addresses from domains in order to determine (or guess) the logins to try for SMTP authentication probes, rather than following the SSH pattern of brute force attacks against a long laundry list of login names. This makes a lot of sense for attackers; why waste time and potentially set off alerts when you can do a more targeted attack using resources that are already widely available (namely, lists of email addresses at target domains)?

(Not all of the logins that have been tried so far are currently valid ones, but all but one of them have been valid in the past or at least appear in web searches.)

So far there's only been a small volume of these probes. It's quite possible that in the long run these will turn out to be in the minority and the only reason I'm noticing them right now is that other attackers have yet to discover us and start more noisy bulk attempts.

(I suspect that at least some of the attackers are doing this because of the presence of the DNS name ''.)

by cks at March 27, 2015 05:05 AM

March 26, 2015

Racker Hacker

Creating a bridge for virtual machines using systemd-networkd

There are plenty of guides out there for making ethernet bridges in Linux to support virtual machines using built-in network scripts or NetworkManager. I decided to try my hand at creating a bridge using only systemd-networkd, and it was surprisingly easy.

First off, you’ll need a version of systemd with networkd support. Fedora 20 and 21 will work just fine. RHEL/CentOS 7 and Arch Linux should also work. Much of the networkd support has been in systemd for quite a while, but if you’re looking for fancier network settings, like bonding, you’ll want at least systemd 216.

Getting our daemons in order

Before we get started, ensure that systemd-networkd will run on reboot and that NetworkManager is disabled. We also need to make a config file directory for systemd-networkd if it doesn't exist already. In addition, let's enable the caching resolver and make a symlink to systemd's resolv.conf:

systemctl enable systemd-networkd
systemctl disable NetworkManager
systemctl enable systemd-resolved
ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
mkdir /etc/systemd/network

Configure the physical network adapter

In my case, the network adapter connected to my external network is enp4s0 but yours will vary. Run ip addr to get a list of your network cards. Let’s create /etc/systemd/network/ and put the following in it:


I’m telling systemd to look for a device called enp4s0 and then add it to a bridge called br0 that we haven’t configured yet. Be sure to change enp4s0 to match your ethernet card.

Make the bridge

We need to tell systemd about our new bridge network device and we also need to specify the IP configuration for it. We start by creating /etc/systemd/network/br0.netdev to specify the device:


This file is fairly self-explanatory. We’re telling systemd that we want a device called br0 that functions as an ethernet bridge. Now create /etc/systemd/network/ to specify the IP configuration for the br0 interface:


This file tells systemd that we want to apply a simple static network configuration to br0 with a single IPv4 address. If you want to add additional DNS servers or IPv4/IPv6 addresses, just add more DNS= and Address= lines right below the ones you see above. Yes, it's just that easy.
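Putting this together, minimal versions of the three files consistent with the descriptions above would look something like this (the address, gateway, and DNS values are placeholders to adapt to your network, and enp4s0 should match your ethernet card):

```ini
# /etc/systemd/network/ -- put the physical NIC into the bridge
[Match]


# /etc/systemd/network/br0.netdev -- define the bridge device itself
[NetDev]

# /etc/systemd/network/ -- static IP configuration for br0
[Match]
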

Let’s do this

Some folks are brave enough to stop NetworkManager and start all of the systemd services here but I prefer to reboot so that everything comes up cleanly. That will also allow you to verify that future reboots will cause the server to come back online with the right configuration. After the reboot, run networkctl and you’ll get something like this (with color):

networkctl screenshot

Here’s what’s in the screenshot:

IDX LINK             TYPE               OPERATIONAL SETUP     
  1 lo               loopback           carrier     unmanaged 
  2 enp2s0           ether              off         unmanaged 
  3 enp3s0           ether              off         unmanaged 
  4 enp4s0           ether              degraded    configured
  5 enp5s0           ether              off         unmanaged 
  6 br0              ether              routable    configured
  7 virbr0           ether              no-carrier  unmanaged 
7 links listed.

My ethernet card has four ports and only enp4s0 is in use. It has a degraded status because there is no IP address assigned to enp4s0 itself. You can ignore that for now, but it would be nice to see this made clearer in a future systemd release.

Look at br0 and you’ll notice that it’s configured and routable. That’s the best status you can get for an interface. You’ll also see that my other ethernet devices are in the unmanaged state. I could easily add more .network files to /etc/systemd/network to configure those interfaces later.

Further reading

As usual, the Arch Linux wiki page on systemd-networkd is a phenomenal resource. There’s a detailed overview of all of the available systemd-networkd configuration file options over at systemd’s documentation site.


by Major Hayden at March 26, 2015 01:17 PM

Chris Siebenmann

Why systemd should have ignored SysV init script LSB dependencies

In his (first) comment on my recent entry on program behavior and bugs, Ben Cotton asked:

Is it better [for systemd] to ignore the additional [LSB dependency] information for SysV init scripts even if that means scripts that have complete information can't take advantage of it?

My answer is that yes, systemd should have ignored the LSB dependency information for System V init scripts. By doing so it would have had (or maintained) the full System V init compatibility that it doesn't currently have.

Systemd has System V init compatibility at all because it was absolutely necessary for systemd to be adopted. Systemd very much wants you to do everything with native systemd unit files, but the systemd authors understood that if systemd only supported its own files, there would be a massive problem; any distribution and any person that wanted to switch to systemd would have had to rewrite every SysV init script they had all at once. To take over from System V init at all, it was necessary for systemd to give people a gradual transition instead of a massive flag day exercise. However, the important thing is that this was always intended as a transition; the long run goal of systemd is to see all System V init scripts replaced by unit files. This is the expected path for distributions and systems that move to systemd (and it has generally come to pass).

It was entirely foreseeable that some System V init scripts would have inaccurate LSB dependency information, especially in distributions that had previously made no use of it. Supporting LSB dependencies in existing SysV init scripts is not particularly important to systemd's long term goals, because all of those scripts are supposed to turn into unit files (with real and always-used dependency information). In the short term, this support allows systemd to boot a system that uses a lot of correctly written LSB init scripts somewhat faster than it otherwise would, at the cost of adding a certain amount of extra code to systemd (to parse the LSB comments et al) and foreseeably causing a certain number of existing init scripts (and services) with inaccurate LSB comments to malfunction in various ways.

(Worse, the init scripts that are likely to stick around the longest are exactly the least well maintained, least attended, most crufty, and least likely to be correct init scripts. Well maintained packages will migrate to native systemd units relatively rapidly; it's the neglected ones or third-party ones that won't get updated.)
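To make the contrast concrete, the native replacement for an init script and its LSB header is a unit file in which the dependencies are first-class and always used. A generic sketch (the service name and binary here are made up, not from any real package):

```ini
# /etc/systemd/system/something.service -- hypothetical native unit
Description=Something daemon
# Ordering/dependency information lives here, not in comments
After=network.target

ExecStart=/usr/sbin/somethingd

WantedBy=multi-user.target
```

Unlike LSB comments under SysV init, systemd cannot ignore these directives; if they are wrong, the service visibly misbehaves and gets fixed.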

So, in short: by using LSB dependencies in SysV init script comments, systemd got no long term benefit and slightly faster booting in the short term on some systems, at the cost of extra code and breaking some systems. It's my view that this was (and is) a bad tradeoff. Had systemd ignored LSB dependencies, it would have less code and fewer broken setups at what I strongly believe is a small or trivial cost.

by cks at March 26, 2015 04:24 AM

RISKS Digest

March 25, 2015

Everything Sysadmin

The State of DevOps Report

Where does it come from?

Have you read the 2014 State of DevOps report? The analysis is done by some of the world's best IT researchers and statisticians.

Be included in the 2015 edition!

A lot of the data used to create the report comes from the annual survey done by Puppet Labs. I encourage everyone to take 15 minutes to complete this survey. It is important that your voice and experience are represented in next year's report. Take the survey

But I'm not important enough!

Yes you are. If you think "I'm not DevOps enough" or "I'm not important enough" then it is even more important that you fill out the survey. The survey needs data from sites that are not "DevOps" (whatever that means!) to create the basis of comparison.

Well, ok, I'll do it then!

Great! Click the link:

Thank you,

March 25, 2015 03:00 PM

Alen Krmelj

Sysadmin Open Source Landscape

Idea: I decided to take up a small side project, which is the reason why I didn't write new blog posts in the last month or two. (No, I didn't run out of topics to talk about.) The idea for the project came out of all those questions on the web, where people ask what […]

by Alen Krmelj at March 25, 2015 09:18 AM

Chris Siebenmann

A significant amount of programming is done by superstition

Ben Cotton wrote in a comment here:

[...] Failure to adhere to a standard while on the surface making use of it is a bug. It's not a SySV init bug, but a bug in the particular init script. Why write the information at all if it's not going to be used, and especially if it could cause unexpected behavior? [...]

The uncomfortable answer to why this happens is that a significant amount of programming in the real world is done partly through what I'll call superstition and mythology.

In practice, very few people study the primary sources (or even authoritative secondary sources) when they're programming and then work forward from first principles; instead they find convenient references, copy and adapt code that they find lying around in various places (including the Internet), and repeat things that they've done before with whatever variations are necessary this time around. If it works, ship it. If it doesn't work, fiddle things until it does. What this creates is a body of superstition and imitation. You don't necessarily write things because they're what's necessary and minimal, or because you fully understand them; instead you write things because they're what people before you have done (including your past self) and the result works when you try it.

(Even if you learned your programming language from primary or high quality secondary sources, this deep knowledge fades over time in most people. It's easy for bits of it to get overwritten by things that are basically folk wisdom, especially because there can be little nuggets of important truth in programming folk wisdom.)

All of this is of course magnified when you're working on secondary artifacts for your program like Makefiles, install scripts, and yes, init scripts. These aren't the important focus of your work (that's the program code itself), they're just a necessary overhead to get everything to go, something you usually bang out more or less at the end of the project and probably without spending the time to do deep research on how to do them exactly right. You grab a starting point from somewhere, cut out the bits that you know don't apply to you, modify the bits you need, test it to see if it works, and then you ship it.

(If you say that you don't take this relatively fast road for Linux init scripts, I'll raise my eyebrows a lot. You've really read the LSB specification for init scripts and your distribution's distro-specific documentation? If so, you're almost certainly a Debian Developer or the equivalent specialist for other distributions.)

So in this case the answer to Ben Cotton's question is that people didn't deliberately write incorrect LSB dependency information. Instead they either copied an existing init script or thought (through superstition aka folk wisdom) that init scripts needed LSB headers that looked like this. When the results worked on a System V init system, people shipped them.

This isn't something that we like to think about as programmers, because we'd really rather believe that we're always working from scratch and only writing the completely correct stuff that really has to be there; 'cut and paste programming' is a pejorative most of the time. But the reality is that almost no one has the time to check authoritative sources every time; inevitably we wind up depending on our memory, and it's all too easy for our fallible memories to get 'contaminated' with code we've seen, folk wisdom we've heard, and so on.

(And that's the best case, without any looking around for examples that we can crib from when we're dealing with a somewhat complex area that we don't have the time to learn in depth. I don't always take code itself from examples, but I've certainly taken lots of 'this is how to do <X> with this package' structural advice from them. After all, that's what they're there for; good examples are explicitly there so you can see how things are supposed to be done. But that means bad examples or imperfectly understood ones add things that don't actually have to be there or that are subtly wrong (consider, for example, omitted error checks).)

by cks at March 25, 2015 06:17 AM

systemd: Don't fear change

This article talks about systemd in Red Hat Enterprise Linux / CentOS 7. It gives some usage examples and talks about the differences between systemd, upstart and sysvinit.

March 25, 2015 12:00 AM

March 24, 2015


Can Interrogators Teach Digital Security Pros?

Recently Bloomberg published an article titled The Dark Science of Interrogation. I was fascinated by this article because I graduated from the SERE program at the US Air Force Academy in the summer of 1991, after my freshman year there. SERE teaches how to resist the interrogation methods used against prisoners of war. When I attended the school, the content was based on techniques used by Korea and Vietnam against American POWs in the 1950s-1970s.

As I read the article, I realized the subject matter reminded me of another aspect of my professional life.

In intelligence, as in the most mundane office setting, some of the most valuable information still comes from face-to-face conversations across a table. In police work, a successful interrogation can be the difference between a closed case and a cold one. Yet officers today are taught techniques that have never been tested in a scientific setting. For the most part, interrogators rely on nothing more than intuition, experience, and a grab bag of passed-down methods.

“Most police officers can tell you how many feet per second a bullet travels. They know about ballistics and cavity expansion with a hollow-point round,” says Mark Fallon, a former Naval Criminal Investigative Service special agent who led the investigation into the USS Cole attack and was assistant director of the federal government’s main law enforcement training facility. “What as a community we have not yet embraced as effectively is the behavioral sciences...”

Christian Meissner, a psychologist at Iowa State University, coordinates much of HIG’s research. “The goal,” he says, “is to go from theory and science, what we know about human communication and memory, what we know about social influence and developing cooperation and rapport, and to translate that into methods that can be scientifically validated.” Then it’s up to Kleinman, Fallon, and other interested investigators to test the findings in the real world and see what works, what doesn’t, and what might actually backfire.

Does this sound familiar? Security people know how many flags to check in a TCP header, or how many bytes to offset when writing shell code, but we don't seem to "know" (in a "scientific" sense) how to "secure" data, networks, and so on.

One bright spot is the Security Metrics community. The mailing list is always interesting for those trying to bring counting and "science" to the digital security profession. Another great project is the Index of Cyber Security run by Dan Geer and Mukul Pareek.

I'm not saying there is a "science" of digital security; others will disagree. I also don't have any specific recommendations based on what I read in the interrogation article. However, the article's message that "street wisdom" needs to be checked to see if it actually works resonated with me. Scientific methods can help.

I am taking small steps in that direction with my PhD in the war studies department at King's College London.

by Richard Bejtlich at March 24, 2015 04:38 PM

Everything Sysadmin

2015 DevOps Survey

Have you taken the 2015 DevOps survey? The data from this survey influences many industry executives and helps push them towards better IT processes (and removing the insanity we find in IT today). You definitely want your voice represented. It takes only 15 minutes.

Take the 2015 DevOps Survey Now

March 24, 2015 02:30 PM

Racker Hacker

Test Fedora 22 at Rackspace with Ansible

Fedora 22 will be arriving soon and it’s easy to test on Rackspace’s cloud with my Ansible playbook:

As with the previous playbook I created for Fedora 21, this playbook will ensure your Fedora 21 instance is fully up to date and then perform the upgrade to Fedora 22.

WARNING: It’s best to use this playbook against a non-production system. Fedora 22 is an alpha release at the time of this post’s writing.

This playbook should work well against servers and virtual machines from other providers, but it performs a few Rackspace-specific cleanups that might not apply to other servers.


by Major Hayden at March 24, 2015 01:55 PM

Chris Siebenmann

What is and isn't a bug in software

In response to my entry on how systemd is not fully SysV init compatible because it pays attention to LSB dependency comments when SysV init does not, Ben Cotton wrote in a comment:

I'd argue that "But I was depending on that bug!" is generally a poor justification for not fixing a bug.

I strongly disagree with this view at two levels.

The first level is simple: this is not a bug in the first place. Specifically, it's not an omission or a bug that System V init doesn't pay attention to LSB comments; it's how SysV init behaves and has behaved from the start. SysV init runs things in the order they are in the rcN.d directory and that is it. In a SysV init world you are perfectly entitled to put whatever you want to into your script comments, make symlinks by hand, and expect SysV init to run them in the order of your symlinks. Anything that does not do this is not fully SysV init compatible. As a direct consequence of this, people who put incorrect information into the comments of their init scripts were not 'relying on a bug' (and their init scripts did not have a bug; at most they had a mistake in the form of an inaccurate comment).

(People make lots of mistakes and inaccuracies in comments, because the comments do not matter in SysV init (very little matters in SysV init).)

The second level is both more philosophical and more pragmatic and is about backwards compatibility. In practice, what is and is not a bug is defined by what your program accepts. The more that people do something and your program accepts it, the more that thing is not a bug. It is instead 'how your program works'. This is the imperative of actually using a program, because to use a program people must conform to what the program does and does not do. It does not matter whether or not you ever intended your program to behave that way; that it behaves the way it does creates a hard reality on the ground. That you left it alone over time increases the strength of that reality.

If you go back later and say 'well, this is a bug so I'm fixing it', you must live up to a fundamental fact: you are changing the behavior of your program in a way that will hurt people. It does not matter to people why you are doing this; you can say that you are doing it because the old behavior was a mistake, because the old behavior was a bug, because the new behavior is better, because the new behavior is needed for future improvements, or whatever. People do not care. You have broken backwards compatibility and you are making people do work, possibly pointless work (for them).

To say 'well, the old behavior was a bug and you should not have counted on it and it serves you right' is robot logic, not human logic.

This robot logic is of course extremely attractive to programmers, because we like fixing what are to us bugs. But regardless of how we feel about them, these are not necessarily bugs to the people who use our programs; they are instead how the program works today. When we change that, well, we change how our programs work. We should own up to that and we should make sure that the gain from that change is worth the pain it will cause people, not hide behind the excuse of 'well, we're fixing a bug here'.

(This shows up all over. See, for example, the increasingly aggressive optimizations of C compilers that periodically break code, sometimes in very dangerous ways, and how users of those compilers react to this. 'The standard allows us to do this, your code is a bug' is an excuse loved by compiler writers and basically no one else.)

by cks at March 24, 2015 05:47 AM

March 23, 2015

Carl Chenet

Unverified backups are useless. Automate the controls!

Unverified backups are useless; every sysadmin knows that. But manually verifying a backup wastes time and resources. Moreover, it's boring. You should automate it! Backup Checker is a command line tool developed in Python 3.4 on GitHub (stars appreciated :) ) allowing users to verify the integrity of…

by Carl Chenet at March 23, 2015 11:00 PM

Chris Siebenmann

Systemd is not fully backwards compatible with System V init scripts

One of systemd's selling points is that it's backwards compatible with your existing System V init scripts, so you can do a gradual transition instead of having to immediately convert all of your existing SysV init scripts to systemd .service files. For the most part this works as advertised. However, there are areas where systemd has chosen to be deliberately incompatible with SysV init scripts.

If you look at some System V init scripts, you will find comment blocks at the start that look something like this:

# Provides:        something
# Required-Start:  $syslog otherthing
# Required-Stop:   $syslog

These comment blocks are an LSB standard for declaring various things about your init scripts, including start and stop dependencies; you can read about them in the LSB init script specification, among other places.

Real System V init ignores all of these because all it does is run init scripts in strictly sequential ordering based on their numbering (and names, if you have two scripts at the same numerical ordering). By contrast, systemd explicitly uses this declared dependency information to run some SysV init scripts in parallel instead of in sequential order. If your init script has this LSB comment block and declares dependencies at all, at least some versions of systemd will start it immediately once those dependencies are met even if it has not yet come up in numerical order.

(CentOS 7 has such a version of systemd, which it labels as 'systemd 208' (undoubtedly plus patches).)

Based on one of my sysadmin aphorisms, you can probably guess what happened next: some System V init scripts have this LSB comment block but declare incomplete dependencies. Under real System V init this does nothing and thus is easily missed; in fact these scripts may have worked perfectly for a decade or more. On a systemd system such as CentOS 7, systemd will start these init scripts out of order and they will start failing, even if what they depend on is other System V init scripts instead of things now provided directly by systemd .service files.
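As a concrete (and entirely hypothetical; the daemon names are made up) illustration, here is the sort of header that triggers the reordering. Systemd's SysV generator reads the dependencies declared between the '### BEGIN INIT INFO' and '### END INIT INFO' markers:

```shell
#!/bin/sh
### BEGIN INIT INFO
# Provides:          mydaemon
# Required-Start:    $local_fs $syslog
# Required-Stop:     $local_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
### END INIT INFO
#
# Installed as S80mydaemon, this script in practice also needs
# S60otherdaemon, which always started first under sequential SysV
# ordering. Because otherdaemon is absent from Required-Start, systemd
# considers the dependencies satisfied once $local_fs and $syslog are
# up and may start this script before (or in parallel with)
# otherdaemon.
```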

This is a deliberate and annoying choice on systemd's part, and I maintain that it is the wrong choice. Yes, in an ideal world the LSB dependencies would be completely correct and could be used to parallelize System V init scripts. But this is the real world, and given that the LSB dependencies have been essentially irrelevant for something like a decade, it was completely guaranteed that there would be init scripts out there that mis-declared things and thus would malfunction under systemd's dependency-based reordering.

(I'd say that the systemd people should have known better, but I rather suspect that they considered the issue and decided that it was perfectly okay with them if such 'incorrect' scripts broke. 'We don't support that' is a time-honored systemd tradition; see, for example, its handling of separate /var filesystems.)

by cks at March 23, 2015 05:05 AM

March 22, 2015

Chris Siebenmann

I now feel that Red Hat Enterprise 6 is okay (although not great)

Somewhat over a year ago I wrote about why I wasn't enthused about RHEL 6. Well, it's a year later and I've now installed and run a CentOS 6 machine for an important service that requires it, and as a result of that I have to take back some of my bad opinions from that entry. My new view is that overall RHEL 6 makes an okay Linux.

I haven't changed the details of my views from the first entry. The installer is still somewhat awkward and it remains an old-fashioned transitional system (although that has its benefits). But the whole thing is perfectly usable; both installing the machine and running it haven't run into any particular roadblocks and there's a decent amount to like.

I think that part of my shift is that all of our work on our CentOS 7 machines has left me a lot more familiar with both NetworkManager and how to get rid of it (and why you want to do that). These days I know to do things like tick the 'connect automatically' button when configuring the system's network connections during install, for example (even though it should be the default).

Apart from that, well, I don't have much to say. I do think that we made the right decision for our new fileserver backends when we delayed them in order to use CentOS 7, even if this was part of a substantial delay. CentOS 6 is merely okay; CentOS 7 is decently nice. And yes, I prefer systemd to upstart.

(I could write a medium sized rant about all of the annoyances in the installer, but there's no point given that CentOS 7 is out and the CentOS 7 one is much better. The state of the art in Linux installers is moving forward, even if it's moving slowly. And anyways I'm spoiled by our customized Ubuntu install images, which preseed all of the unimportant or constant answers. Probably there is some way to do this with CentOS 6/7, but we don't install enough CentOS machines for me to spend the time to work out the answers and build customized install images and so on.)

by cks at March 22, 2015 06:35 AM

OpenSSL: Manually verify a certificate against a CRL

This article shows you how to manually verify a certificate against a CRL. CRL stands for Certificate Revocation List and is one way to check a certificate's revocation status. It is an alternative to OCSP, the Online Certificate Status Protocol.
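The basic recipe with the openssl CLI looks like the sketch below. Everything here (the throwaway CA, the file names, the minimal 'openssl ca' config) is illustrative rather than taken from the article; the key points are that verification only consults the CRL when -crl_check is given, and that the CRL must be in the trust file alongside the CA certificate.

```shell
# Sketch: verify a certificate against a CRL with the openssl CLI.
# The throwaway CA and all file names here are illustrative.

# 1. Create a demo CA and a leaf certificate signed by it.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Demo CA" \
    -keyout ca.key -out ca.pem -days 1 2>/dev/null
openssl req -newkey rsa:2048 -nodes -subj "/CN=demo-leaf" \
    -keyout leaf.key -out leaf.csr 2>/dev/null
openssl x509 -req -in leaf.csr -CA ca.pem -CAkey ca.key \
    -CAcreateserial -out leaf.pem -days 1 2>/dev/null

# 2. Issue an (empty) CRL; 'openssl ca -gencrl' needs a minimal config.
printf '[ca]\ndefault_ca = demo\n[demo]\ndatabase = index.txt\ncrlnumber = crlnumber\ndefault_md = sha256\ndefault_crl_days = 1\n' > ca.cnf
touch index.txt
echo 1000 > crlnumber
openssl ca -config ca.cnf -gencrl -keyfile ca.key -cert ca.pem \
    -out crl.pem 2>/dev/null

# 3. Verify. The CRL goes into the trust file next to the CA cert,
#    and -crl_check turns on revocation checking.
cat ca.pem crl.pem > ca_and_crl.pem
openssl verify -crl_check -CAfile ca_and_crl.pem leaf.pem
```

A certificate that appears on the CRL would fail the last step instead of reporting OK. For a real certificate you would download the CRL from the URL in the certificate's CRL Distribution Points extension (shown by 'openssl x509 -in cert.pem -noout -text') rather than generating one yourself.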

March 22, 2015 12:00 AM

Keep messages secure with GPG

This article shows you how to get started with GPG and Mailvelope. It discusses public/private key crypto and shows you how to use the Mailvelope software to encrypt and decrypt GPG messages on any webmail provider.

March 22, 2015 12:00 AM

March 21, 2015

Chris Siebenmann

Spammers show up fast when you open up port 25 (at least sometimes)

As part of adding authenticated SMTP to our environment, we recently opened up outside access to port 25 (and port 587) to a machine that hadn't had them exposed before. You can probably guess what happened next: it took less than five hours before spammers were trying to rattle the doorknobs to see if they could get in.

(Literally. I changed our firewall to allow outside access around 11:40 am and the first outside attack attempt showed up at 3:35 pm.)

While I don't have SMTP command logs, Exim does log enough information that I'm pretty sure that we got two sorts of spammers visiting. The first sort definitely tried to do either an outright spam run or a relay check, sending MAIL FROMs with various addresses (including things like 'postmaster@<our domain>'); all of these failed since they hadn't authenticated first. The other sort of spammer is a collection of machines that all EHLO as 'ylmf-pc', which is apparently a mass scanning system that attempts to brute force your SMTP authentication. So far there is no sign that they've succeeded on ours (or are even trying), and I don't know if they even manage to start up a TLS session (a necessary prerequisite to even being offered the chance to do SMTP authentication). These people showed up second, but not by much; their first attempt was at 4:04 pm.

(I have some indications that in fact they don't. On a machine that I do have SMTP command logs on, I see ylmf-pc people connect, EHLO, and then immediately disconnect without trying STARTTLS.)

It turns out that Exim has some logging for this (the magic log string is '... authenticator failed for ...') and using that I can see that in fact the only people who have gotten far enough to actually try to authenticate are a few of our own users. Since our authenticated SMTP service is still in testing and hasn't been advertised, I suspect that some people are using MUAs (or other software) that simply try authenticated SMTP against their IMAP server just to see if it works.
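A quick way to pull these out of a log and see who is trying is a grep pipeline like the sketch below. The sample log lines are fabricated for the demonstration; only the 'authenticator failed for' marker comes from Exim itself, and the real log path varies by system (for example /var/log/exim4/mainlog on Debian-family machines).

```shell
# Sketch: count failed SMTP AUTH attempts per client IP in an Exim
# main log. The sample lines below are made up; only the
# 'authenticator failed for' marker is Exim's own.
cat > mainlog.sample <<'EOF'
2015-03-21 16:04:12 fixed_plain authenticator failed for (ylmf-pc) [198.51.100.7]: 535 Incorrect authentication data
2015-03-21 16:05:02 fixed_plain authenticator failed for (ylmf-pc) [198.51.100.7]: 535 Incorrect authentication data
2015-03-21 17:11:45 SMTP connection from [203.0.113.9] lost
EOF

# Keep only the failed-auth lines, extract the bracketed client IP,
# and tally per address.
grep 'authenticator failed for' mainlog.sample |
    grep -oE '\[[0-9.]+\]' | sort | uniq -c | sort -rn
```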

There are two factors here that may mean this isn't what you'll see if you stand up just any server on a new IP: this server has existed for some time with IMAP exposed, and it lives under a well known DNS name, one that people would naturally try if they were looking for people's IMAP servers. It's possible that existing IMAP servers get poked far more frequently and intently than other random IPs.

(Certainly I don't see anything like this level of activity on other machines where I have exposed SMTP ports.)

by cks at March 21, 2015 05:04 AM

How I got a valid SSL certificate for my ISP's main domain,

I got a valid SSL certificate for a domain that is not mine by creating an email alias. In this article I'll explain what happened, why that was possible and how we all can prevent this.

March 21, 2015 12:00 AM

Olimex OlinuXino A20 LIME2 mainline 4.0.0 kernel, u-boot and debian rootfs image building tutorial

The Olimex OlinuXino A20 LIME2 is an amazing, powerful and cheap open source ARM development board. It costs EUR 45 and has 160 GPIO pins. This is a guide to building a Linux image with Debian and the mainline 4.0.0 kernel for the Olimex A20-Lime2 board, from scratch. By default it comes with a 3.4 kernel with binary blobs and patches from Allwinner. Recently the mainline kernel gained support for these boards, so you can now run the mainline kernel without these awful non-free binary blobs.

March 21, 2015 12:00 AM

March 20, 2015

The Tech Teapot

A cloudy solar eclipse

Partial Eclipse Otley

The partial solar eclipse taken from Otley, West Yorkshire.

by Jack Hughes at March 20, 2015 01:29 PM

Chris Siebenmann

Unix's mistake with rm and directories

Welcome to Unix, land of:

; rm thing
rm: cannot remove 'thing': Is a directory
; rmdir thing

(And also rm may only be telling you half the story, because you can have your rmdir fail with 'rmdir: failed to remove 'thing': Directory not empty'. Gee thanks both of you.)

Let me be blunt here: this is Unix exercising robot logic. Unix knows perfectly well what you want to do, it's perfectly safe to do so, and yet Unix refuses to do it (or tell you the full problem) because you didn't use the right command. Rm will even remove directories if you just tell it 'rm -r thing', although this is more dangerous than rmdir.
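For what it's worth, modern rm implementations have partly conceded the point: GNU coreutils (since 8.19) and the BSDs accept 'rm -d' to remove an empty directory directly. A transcript-style sketch of the asymmetry and the escape hatches:

```shell
# Plain rm refuses directories; -d and -r are the workarounds.
mkdir thing
rm thing 2>/dev/null || echo "plain rm refuses a directory"
rm -d thing && echo "rm -d removes an empty directory"

mkdir -p thing/sub
rmdir thing 2>/dev/null || echo "rmdir refuses a non-empty directory"
rm -r thing && echo "rm -r removes it, contents and all"
```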

Once upon a time rm had almost no choice but to do this because removing directories took special magic and special permissions (as '.' and '..' and the directory tree were maintained in user space). Those days are long over, and with them all of the logic that would have justified keeping this rm (mis)feature. It lingers on only as another piece of Unix fossilization.

(This restriction is not even truly Unixy; per Norman Wilson, Research Unix's 8th edition removed the restriction, so the very heart of Unix fixed this. Sadly very little from Research Unix V8, V9, and V10 ever made it out into the world.)

PS: Some people will now say that the Single Unix Specification (and POSIX) does not permit rm to behave this way. My view is 'nuts to the SUS on this'. Many parts of real Unixes are already not strictly POSIX compliant, so if you really have to have this you can add code to rm to behave in a strictly POSIX compliant mode if some environment variable is set. (This leads into another rant.)

(I will reluctantly concede that having unlink(2) still fail on directories instead of turning into rmdir(2) is probably safest, even if I don't entirely like it either. Some program is probably counting on the behavior and there's not too much reason to change it. Rm is different in part because it is used by people; unlink(2) is not directly. Yes, I'm waving my hands a bit.)

by cks at March 20, 2015 04:42 AM

RISKS Digest