Planet SysAdmin


September 21, 2017

Chris Siebenmann

My potential qualms about using Python 3 in projects

I recently wrote about why I didn't use the attrs module; the short version is that it would have forced my co-workers to learn about it in order to work on my code. Talking about this brings up a potentially awkward issue, namely Python 3. Just like the attrs module, working with Python 3 code involves learning some new things and dealing with some additional concerns. In light of this, is using Python 3 in code for work something that's justified?

This issue is relevant to me because I actually have Python 3 code these days. For one program, I had a concrete and useful reason to use Python 3 and doing so has probably had real benefits for our handling of incoming email. But for other code I've simply written it in Python 3 because I'm still kind of enthused about it and everyone (still) does say it's the right thing to do. And there's no chance that we'll be able to forget about Python 2, since almost all of our existing Python code uses Python 2 and isn't going to change.

However, my tentative view is that using Python 3 is a very different situation than the attrs module. To put it one way, it's quite possible to work with Python 3 without noticing. At a superficial level and for straightforward code, about the only difference between Python 3 and Python 2 is print("foo") versus print "foo". Although I've said nasty things about Python 3's automatic string conversions in the past, they do have the useful property that things basically just work in a properly formed UTF-8 environment, and most of the time that's what we have for sysadmin tools.

(Yes, this isn't robust against nasty input, and some tools are exposed to that. But many of our tools only process configuration files that we've created ourselves, which means that any problems are our own fault.)

Given that you can do a great deal of work on an existing piece of Python code without caring whether it's Python 2 or Python 3, the cost of using Python 3 instead of Python 2 is much lower than, for example, the cost of using the attrs module. Code that uses attrs is basically magic if you don't know attrs; code in Python 3 is just a tiny bit odd looking and it may blow up somewhat mysteriously if you do one of two innocent-seeming things.

(The two things are adding a print statement and using tabs in the indentation of a new or changed line. In theory the latter might not happen; in practice, most Python 3 code will be indented with spaces.)

In situations where using Python 3 allows some clear benefit, such as using a better version of an existing module, I think using Python 3 is pretty easily defensible; the cost is very likely to be low and there is a real gain. In situations where I've just used Python 3 because I thought it was neat and it's the future, well, at least the costs are very low (and I can argue that this code is ready for a hypothetical future where Python 2 isn't supported any more and we want to migrate away from it).

Sidebar: Sometimes the same code works in both Pythons

I wrote my latest Python code as a Python 3 program from the start. Somewhat to my surprise, it runs unmodified under Python 2.7.12 even though I made no attempt to make it do so. Some of this is simply luck, because it turns out that I was only ever invoking print() with a single argument. In Python 2, print("fred") is seen as 'print ("fred")', which is just 'print "fred"', which works fine. Had I tried to print() multiple arguments, things would have exploded.

(I have only single-argument print()s because I habitually format my output with % if I'm printing out multiple things. There are times when I'll deviate from this, but it's not common.)
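As a concrete illustration (a sketch of my own, not from the post), these three lines show why single-argument print() happens to be portable while the multi-argument form does something quite different under Python 2:

# print_compat.py: run under both python2 and python3 to see the difference
print("fred")                    # same output in both: the parentheses are harmless in Python 2
print("%s and %s" % ("a", "b"))  # still one argument, formatted with %, so also portable
print("a", "b")                  # Python 3 prints: a b   -- Python 2 prints the tuple: ('a', 'b')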

by cks at September 21, 2017 05:36 AM

The Lone Sysadmin

Calibrate Your Monitor

When I build a new computer one of the things I do as part of the setup is calibrate the color of the monitors. It’s actually pretty amazing how much better things look after just a few minutes of adjustments. It’s also nice to have the monitors synchronized, so if I move a window between […]

The post Calibrate Your Monitor appeared first on The Lone Sysadmin. Head over to the source to read the full post!

by Bob Plankers at September 21, 2017 03:38 AM

September 20, 2017

Steve Kemp's Blog

Retiring the Debian-Administration.org site

So previously I've documented the setup of the Debian-Administration website, and now that I'm going to retire it, I'm planning how that will work.

There are currently 12 servers powering the site:

  • web1
  • web2
  • web3
  • web4
    • These perform the obvious role, serving content over HTTPS.
  • public
    • This is a HAProxy host which routes traffic to one of the four back-ends.
  • database
    • This stores the site-content.
  • events
    • There was a simple UDP-based protocol which sent notices here, from various parts of the code.
    • e.g. "Failed login for bob from 1.2.3.4".
  • mailer
    • Sends out emails. ("You have a new reply", "You forgot your password..", etc)
  • redis
    • This stored session-data, and short-term cached content.
  • backup
    • This contains backups of each host, via Obnam.
  • beta
    • A test-install of the codebase
  • planet
    • The blog-aggregation site

I've made a bunch of commits recently to drop the event-sending, since no more dynamic actions will be possible. So events can be retired immediately. redis will go when I turn off logins, as there will be no need for sessions/cookies. beta is only used for development, so I'll kill that too. Once logins are gone and anonymous content is disabled, there will be no need to send out emails, so mailer can be shut down.

That leaves a bunch of hosts left:

  • database
    • I'll export the database and kill this host.
    • I will install mariadb on each web-node, and each host will be configured to talk to localhost only
    • I don't need to worry about the four databases receiving diverging content, as updates will be disabled.
  • backup
  • planet
    • This will become orphaned, so I think I'll just move the content to the web-nodes.

All in all I think we'll just have five hosts left:

  • public to do the routing
  • web1-web4 to do the serving.

I think that's sane for the moment. I'm still pondering whether to export the code to static HTML; there's a lot of appeal, as the load would drop a lot, but equally I have a hell of a lot of mod_rewrite redirections in place, and reworking all of them would be a pain. I suspect this is something that will be done in the future, maybe next year.

September 20, 2017 09:00 PM

ma.ttias.be

DNS Research: using SPF to query internal DNS resolvers

Using the SPF records to trigger a response from an internal DNS server. Clever way to extract otherwise closed data!

In response to the spread of cache poisoning attacks, many DNS resolvers have gone from being open to closed resolvers, meaning that they will only perform queries on behalf of hosts within a single organization or Internet Service Provider.

As a result, measuring the security of the DNS infrastructure has been made more difficult. Closed resolvers will not respond to researcher queries to determine if they utilize security measures like port randomization or transaction id randomization.

However, we can effectively turn a closed resolver into an open one by sending an email to a mail server (MTA) in the organization. This causes the MTA to make a query on the external researchers' behalf, and we can log the security features of the DNS resolver using information gained by a nameserver and email server under our control.

Source: DNS Research

The post DNS Research: using SPF to query internal DNS resolvers appeared first on ma.ttias.be.

by Mattias Geniar at September 20, 2017 01:28 PM

Chris Siebenmann

Wireless is now critical (network) infrastructure

When I moved over to here a decade or so ago, we (the department) had a wireless network that was more or less glued together out of spare parts. One reason for this, beyond simply money, is that wireless networking was seen as a nice extra for us to offer to our users and thus not something we could justify spending a lot on. If we had to prioritize (and we did), wired networking was much higher up the heap than wireless. Wired networking was essential; the wireless was merely nice to have and offer.

I knew that wireless usage had grown and grown since then, of course; anyone who vaguely pays attention knows that, and the campus wireless people periodically share eye-opening statistics on how many active devices there are. You see tablets and smartphones all over (and I even have one of my own these days, giving me a direct exposure), and people certainly like using their laptops with wifi (even in odd places, although our machine room no longer has wireless access). But I hadn't really thought about the modern state of wireless until I got a Dell XPS 13 laptop recently and then the campus wireless networking infrastructure had some issues.

You see, the Dell XPS 13 has no onboard Ethernet, and it's not at all alone in that; most modern ultrabooks don't, for example. Tablets are obviously Ethernet-free, and any number of people around here use one as a major part of their working environment. People are even actively working through their phones. If the wireless network stops working, all of these people are up a creek and their work grinds to a halt. All of this has quietly turned wireless networking into relatively critical infrastructure. Fortunately our departmental wireless network is in much better shape now than it used to be, partly because we outsourced almost all of it to the university IT people who run the campus wireless network.

(We got USB Ethernet dongles for our recent laptops, but we're sysadmins with unusual needs, including plugging into random networks in order to diagnose problems. Not everyone with a modern laptop is going to bother, and not everyone who gets one is going to carry it around or remember where they put it or so on.)

This isn't a novel observation but it's something that's snuck up on me and before now has only been kind of an intellectual awareness. It wasn't really visceral until I took the XPS 13 out of the box and got to see the absence of an Ethernet port in person.

(The USB Ethernet dongle works perfectly well but it doesn't feel the same, partly because it's not a permanently attached part of the machine that is always there, the way the onboard wifi is.)

by cks at September 20, 2017 05:22 AM

September 19, 2017

Electricmonk.nl

Enable dnsmasq DNS query caching on Ubuntu 16.04

It appears that by default, DNS query caching is disabled in dnsmasq on Ubuntu 16.04. At least it is for my Xubuntu desktop. You can check if it's disabled for you with the dig command:

$ dig @127.0.1.1 ubuntu.com
$ dig @127.0.1.1 ubuntu.com

Yes, run it twice. Once to add the entry to the cache, the second time to verify it's cached.

Now check the "Query time" line. If it says anything higher than about 0 to 2 msec for the query time, caching is disabled.

;; Query time: 39 msec

To enable it, create a new file /etc/NetworkManager/dnsmasq.d/cache.conf and put the following in it:

cache-size=1000

Next, restart the network manager:

systemctl restart network-manager

Now try the dig again twice and check the Query time. It should say zero or close to zero. DNS queries are now cached, which should make browsing a bit faster, and in some cases a lot faster. An additional benefit is that many ISPs' modems, routers and DNS servers are horrible, which local DNS caching somewhat mitigates.
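If you'd rather script this check than eyeball the dig output, a rough Python sketch along these lines should do. It simply shells out to dig twice and compares the reported query times; the 127.0.1.1 resolver address comes from the example above, and the 2 msec threshold mirrors the rule of thumb mentioned earlier, so treat both as assumptions rather than hard rules.

#!/usr/bin/env python3
# Rough caching check: query the same name twice and compare dig's "Query time".
import re
import subprocess

def query_time(name, server="127.0.1.1"):
    out = subprocess.check_output(["dig", "@" + server, name]).decode()
    match = re.search(r";; Query time: (\d+) msec", out)
    return int(match.group(1)) if match else None

first = query_time("ubuntu.com")   # first query warms the cache (if there is one)
second = query_time("ubuntu.com")  # second query should be answered from the cache
print("first: %s msec, second: %s msec" % (first, second))
# 2 msec threshold is a rule-of-thumb guess, not a hard rule
print("caching looks %s" % ("enabled" if second is not None and second <= 2 else "disabled"))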

by admin at September 19, 2017 11:16 AM

R.I.Pienaar

Load testing Choria

Overview


Many of you probably know I am working on a project called Choria that modernizes MCollective and will eventually supersede it (more on this later).

Given that Choria is heading down the path of being a rewrite in Go, I am also taking the opportunity to look into much larger scale problems to meet some client needs.

In this and the following posts I’ll write about work I am doing to load test and validate Choria to 100s of thousands of nodes and what tooling I created to do that.

Middleware


Choria builds around the NATS middleware which is a Go based middleware server that forgoes a lot of the persistence and other expensive features – instead it focusses on being a fire and forget middleware network. It has an additional project should you need those features so you can mix and match quite easily.

Turns out that’s exactly what typical MCollective needs as it never really used the persistence features and those just made the associated middleware quite heavy.

To give you an idea, in the old days the community would suggest every ~ 1000 nodes managed by MCollective required a single ActiveMQ instance. Want 5 500 MCollective nodes? That’ll be 6 machines – physical recommended – and 24 to 30 GB RAM in a cluster just to run the middleware. We’ve had reports of much larger RabbitMQ networks on 4 or 5 servers – 50 000 managed nodes or more, but those would be big machines and they had quite a lot of performance issues.

There was a time when 5 500 nodes was A LOT, but now it’s becoming a bit everyday, so I need to focus upward.

With NATS+Choria I am happily running 5 500 nodes on a single 2 CPU VM with 4GB RAM. In fact on a slightly bigger VM I am happily running 50 000 nodes on a single VM and NATS uses around 1GB to 1.5GB of RAM at peak.

Doing 100s of RPC requests in a row against 50 000 nodes, the response time is pretty solid at around 16 seconds for an RPC call to every node; it’s stable, never drops a message, and the performance stays level in the absence of Java GC issues. This is fast but also quite slow – the Ruby client manages about 300 replies every 0.10 seconds due to the amount of protocol decoding etc. that is needed.

This brings with it a whole new level of problem: just how far can we take the client code, how do you determine when it’s too big, and how do I know that the client, broker and federation I am working on significantly improve things?

I’ve also significantly reworked the network protocol to support Federation, but the shipped code optimizes for code and config simplicity over, let’s say, support for 20 000 Federation Collectives. When we are talking about truly gigantic Choria networks, I need to be able to test scenarios involving 10s of thousands of Federated Networks, all with 10s of thousands of nodes in them. So I need tooling that lets me do this.

Getting to running 50 000 nodes


Not everyone just happens to have a 50 000 node network lying about that they can play with, so I had to improvise a bit.

As part of the rewrite I am doing I am building a Go framework with the Choria protocol, config parsing and network handling all built in Go. Unlike the Ruby code I can instantiate multiple of these in memory and run them in Go routines.

This means I could write an emulator that can start a number of faked Choria daemons all in one process. They each have their own middleware connection, run a varying number of agents with a varying number of sub-collectives and generally behave like a normal MCollective machine. On my MacBook I can run 1 500 Choria instances quite easily.

So with fewer than 60 machines I can emulate 50 000 MCollective nodes on a 3 node NATS cluster and have plenty of spare capacity. This is well within budget to run on AWS and not uncommon these days to have that many dev machines around.

In the following posts I’ll cover bits about the emulator, what I look for when determining optimal network sizes and how to use the emulator to test and validate performance of different network topologies.

by R.I. Pienaar at September 19, 2017 09:55 AM

The Lone Sysadmin

Let’s Prosecute Unlicensed Engineering in IT

Have you been watching this whole dustup with the Equifax CISO, and how people are saying that she is unqualified because, instead of a Computer Science degree, she had an MFA in music composition? Not surprisingly, there’s a massive backlash from the IT community, much of which doesn’t have a computer science degree, either. That’s […]

The post Let’s Prosecute Unlicensed Engineering in IT appeared first on The Lone Sysadmin. Head over to the source to read the full post!

by Bob Plankers at September 19, 2017 05:01 AM

Chris Siebenmann

Looking back at my mixed and complicated feelings about Solaris

So Oracle killed Solaris (and SPARC) a couple of weeks ago. I can't say this is surprising, although it's certainly sudden and underhanded in the standard Oracle way. Back when Oracle killed Sun, I was sad for the death of a dream, despite having had ups and downs with Sun over the years. My views about the death of Solaris are more mixed and complicated, but I will summarize them by saying that I don't feel very sad about Solaris itself (although there are things to be sad about).

To start with, Solaris has been dead for me for a while, basically ever since Oracle bought Sun and certainly since Oracle closed the Solaris source. The Solaris that the CS department used for years in a succession of fileservers was very much a product of Sun the corporation, and I could never see Oracle's Solaris as the same thing or as a successor to it. Hearing that Oracle was doing things with Solaris was distant news; it had no relevance for us and pretty much everyone else.

(Every move Oracle made after absorbing Sun came across to me as a 'go away, we don't want your business or to expand Solaris usage' thing.)

But that's the smaller piece, because I have some personal baggage and biases around Solaris itself due to my history. I started using Sun hardware in the days of SunOS, where SunOS 3 was strikingly revolutionary and worked pretty well for the time. It was followed by SunOS 4, which was also quietly revolutionary even if the initial versions had some unfortunate performance issues on our servers (we ran SunOS 4.1 on a 4/490, complete with an unfortunate choice of disk interconnect). Then came Solaris 2, which I've described as a high speed collision between SunOS 4 and System V R4.

To people reading this today, more than a quarter century removed, this probably sounds like a mostly neutral thing or perhaps just messy (since I did call it a collision). But at the time it was a lot more. In the old days, Unix was split into two sides, the BSD side and the AT&T System III/V side, and I was firmly on the BSD side along with many other people at universities; SunOS 3 and SunOS 4 and the version of Sun that produced them were basically our standard bearers, not only for BSD's superiority at the time but also their big technical advances like NFS and unified virtual memory. When Sun turned around and produced Solaris 2, it was viewed as being tilted towards being a System V system, not a BSD system. Culturally, there was a lot of feeling that this was a betrayal and Sun had debased the nice BSD system they'd had by getting a lot of System V all over it. It didn't help that Sun was unbundling the compilers around this time, in an echo of the damage AT&T's Unix unbundling did.

(Solaris 2 was Sun's specific version of System V Release 4, which itself was the product of Sun and AT&T getting together to slam System V and BSD together into a unified hybrid. The BSD side saw System V R4 as 'System V with some BSD things slathered over top', as opposed to 'BSD with some System V things added'. This is probably an unfair characterization at a technical level, especially since SVR4 picked up a whole bunch of important BSD features.)

Had I actually used Solaris 2, I might have gotten over this cultural message and come to like and feel affection for Solaris. But I never did; our 4/490 remained on SunOS 4 and we narrowly chose SGI over Sun, sending me on a course to use Irix until we started switching to Linux in 1999 (at which point Sun wasn't competitive and Solaris felt irrelevant as a result). By the time I dealt with Solaris again in 2005, open source Unixes had clearly surpassed it for sysadmin usability; they had better installers, far better package management and patching, and so on. My feelings about Solaris never really improved from there, despite increasing involvement and use, although there were aspects I liked and of course I am very happy that Sun created ZFS, put it into Solaris 10, and then released it to the world as open source so that it could survive the death of Sun and Solaris.

The summary of all of that is that I'm glad that Sun created a number of technologies that wound up in successive versions of Solaris and I'm glad that Sun survived long enough to release them into the world, but I don't have fond feelings about Solaris itself the way that many people who were more involved with it do. I cannot mourn the death of Solaris itself the way I could for Sun, because for me Solaris was never a part of any dream.

(One part of that is that my dream of Unix was the dream of workstations, not the dream of servers. By the time Sun was doing interesting things with Solaris 10, it was clearly not the operating system of the Unix desktop any more.)

(On Solaris's death in general, see this and this.)

by cks at September 19, 2017 03:35 AM

September 18, 2017

ma.ttias.be

A proposal for cryptocurrency addresses in DNS

By now it's pretty clear that the idea of a cryptocurrency probably isn't going away. It might not be Bitcoin or Litecoin, it might not have the same value as it does today, but the concept of cryptocurrency is here to stay: digital money.

Just as with IP addresses in the early days, using them raw was fine at first. But with crypto, you get long hexadecimal strings that truly no one can remember by heart. It's far from user friendly.

It's like trying to remember that 2a03:a800:a1:1952::ff is the address for this site. Doesn't work very well, does it? It's far easier to say ma.ttias.be than the cryptic representation of IPv6.

I think we need something similar for cryptocurrencies. Something independent and -- relatively -- secure. So here's my proposal I came up with in the car on the way home.

Example: cryptocurrency in DNS

Here's the simplest example I can give.

$ dig ma.ttias.be TXT | sort
ma.ttias.be.	3600	IN    TXT   "ico:10 btc:1AeCyEczAFPVKkvausLSQWP1jcqkccga9m"
ma.ttias.be.	3600	IN    TXT   "ico:10 ltc:Lh1TUmh2WP4LkCeDTm3kMX1E7NQYSKyMhW"
ma.ttias.be.	3600	IN    TXT   "ico:20 eth:0xE526E2Aecb8B7C77293D0aDf3156262e361BfF0e"
ma.ttias.be.	3600	IN    TXT   "ico:30 xrp:rDsbeomae4FXwgQTJp9Rs64Qg9vDiTCdBv"

Cryptocurrency addresses get published as TXT records on a domain of your choosing. Want to receive a payment? Simply say "send it to ma.ttias.be"; the client will resolve the TXT records and the accompanying addresses and use the priority field as a guideline for choosing which address to pick first.

Think MX records, but implemented as TXT. The lower the priority, the more preferred it is.

The TXT format explained

A TXT format can contain pretty much anything, so it needs some standardization in order for this to work. Here's my proposal.

ico:[priority] space [currency]:[address]

Let's pick the first result as an example and tear it down.

$ dig ma.ttias.be TXT | sort | head -n 1
ma.ttias.be.	3600	IN    TXT   "ico:10 btc:1AeCyEczAFPVKkvausLSQWP1jcqkccga9m"

Translates to;

  • ico:: a prefix in the TXT record to denote that this is a currency record, much like SPF knows the "v=spf1" prefix. This is a fixed value.
  • 10: the priority. The lower, the bigger its preference.
  • btc: preferred currency is btc, or Bitcoin.
  • 1AeCyEczAFPVKkvausLSQWP1jcqkccga9m: the btc address to accept payments.

Simple, versatile format.

The priority allows for round robin implementations, if you wish to diversify your portfolio. Adding multiple cryptocurrencies allows the sender the freedom to choose which currency he/she prefers, while still honoring your priority.

Technically, I published 2 records with a priority of 10. It's up to the sender to determine which currency he/she prefers, if it's available to them. If it isn't, they can move down the chain & try other addresses published.

It means only addresses on which you want to receive currency should ever be posted as DNS records.

DNSSEC required

To avoid DNS cache poisoning or other man-in-the-middle attacks, DNSSEC would have to be a hard requirement in order to guarantee integrity.

This should not be optional.

If smart people ever end up implementing something like this, an additional GPG/PKI-like solution might be added for increased security, by signing the addresses once more.

Currency agnostic

This isn't a solution for Bitcoin, Litecoin or Ripple. It's a solution for all of them. And all the new currencies to come.

It's entirely currency agnostic and can be used for any virtual currency.

Multi tenancy

If you want multiple users on the same domain, you could solve this via subdomains, i.e. "john.domain.tld", "mary.domain.tld", ...

It makes the format of the TXT record plain and simple and uses basic DNS records for delegation of accounts.

Why not a dedicated resource record?

For the same reason the SPF resource record went away and was replaced by a TXT alternative: availability.

Every DNS server and client already understands TXT records. If we have to wait for servers, clients and providers to implement something like an ICO resource record, it'll take ages. Just look at the current state of CAA records: only a handful of providers offer them, even though checking CAA is already mandatory for CAs.

There are already simpler naming schemes for cryptocurrency!

Technically, yes, but they all have a deep flaw: you have to trust someone else's system.

There's BitAlias, onename, ens for ethereum, okTurtles, ... and they all build on top of their own, custom system.

But it turns out we already have a name-translation system called DNS; we'd be far better off implementing a readable cryptocurrency variant in DNS than in someone else's closed system.

The validator regex

With the example given above, it can easily be validated with the following regex.

ico:([0-9]+) ([a-z]{3}):([a-zA-Z0-9]+)

And it translates to;

  • group #1: the priority
  • group #2: the currency
  • group #3: the address

A complete validator with the dig DNS client would translate to;

$ dig ma.ttias.be TXT | sort | grep -P 'ico:([0-9]+) ([a-z]{3}):([a-zA-Z0-9]+)'
ma.ttias.be.	3600	IN    TXT   "ico:10 btc:1AeCyEczAFPVKkvausLSQWP1jcqkccga9m"
ma.ttias.be.	3600	IN    TXT   "ico:10 ltc:Lh1TUmh2WP4LkCeDTm3kMX1E7NQYSKyMhW"
ma.ttias.be.	3600	IN    TXT   "ico:20 eth:0xE526E2Aecb8B7C77293D0aDf3156262e361BfF0e"
ma.ttias.be.	3600	IN    TXT   "ico:30 xrp:rDsbeomae4FXwgQTJp9Rs64Qg9vDiTCdBv"
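For illustration only, here is a small Python sketch of how a client could consume these records: it shells out to dig for the TXT lookup, applies the regex above, sorts by priority and picks the first currency it supports. The list of accepted currencies is just an assumption for the example, not part of the proposal.

#!/usr/bin/env python3
# Sketch of a client resolving "ico:" TXT records and choosing a payment address.
import re
import subprocess

RECORD = re.compile(r'ico:([0-9]+) ([a-z]{3}):([a-zA-Z0-9]+)')

def ico_records(domain):
    out = subprocess.check_output(["dig", "+short", domain, "TXT"]).decode()
    found = []
    for line in out.splitlines():
        m = RECORD.search(line)
        if m:
            found.append((int(m.group(1)), m.group(2), m.group(3)))  # (priority, currency, address)
    return sorted(found)  # lowest priority value first, as with MX records

def pick_address(domain, accepted=("btc", "ltc")):  # accepted currencies are just an example
    for priority, currency, address in ico_records(domain):
        if currency in accepted:
            return currency, address
    return None

print(pick_address("ma.ttias.be"))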

Now, who's going to make this an RFC? I certainly won't, I've got too many things to do already.

The post A proposal for cryptocurrency addresses in DNS appeared first on ma.ttias.be.

by Mattias Geniar at September 18, 2017 07:43 PM

Raymii.org

atop is broken on Ubuntu 16.04 (version 1.26): trap divide error

Recently a few of my Ubuntu 16.04 machines had issues and I was troubleshooting them, noticing `atop` logs missing. atop is a very handy tool which can be setup to record system state every X minutes, and we set it up to run every 5 minutes. You can then at a later moment see what the server was doing, even sorting by disk, memory, cpu or network usage. This post discusses the error and a quick fix.

September 18, 2017 12:00 AM

September 17, 2017

ma.ttias.be

Chrome to force .dev domains to HTTPS via preloaded HSTS

tl;dr: one of the next versions of Chrome is going to force all domains ending on .dev (and .foo) to be redirected to HTTPS via a preloaded HTTP Strict Transport Security (HSTS) header.


This very interesting commit just landed in Chromium:

Preload HSTS for the .dev gTLD.

This adds the following line to Chromium's preload lists;

{ "name": "dev", "include_subdomains": true, "mode": "force-https" },
{ "name": "foo", "include_subdomains": true, "mode": "force-https" },

It forces any domain on the .dev gTLD to be HTTPS.
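As a toy illustration (my own sketch, not Chromium's actual code), this is roughly how such a preload entry gets applied to a hostname:

# Toy interpretation of the two preload entries quoted above.
PRELOAD = [
    {"name": "dev", "include_subdomains": True, "mode": "force-https"},
    {"name": "foo", "include_subdomains": True, "mode": "force-https"},
]

def forced_to_https(hostname):
    labels = hostname.lower().rstrip(".").split(".")
    for entry in PRELOAD:
        if labels[-1] == entry["name"]:
            # the bare TLD itself, or any subdomain when include_subdomains is set
            if len(labels) == 1 or entry["include_subdomains"]:
                return entry["mode"] == "force-https"
    return False

print(forced_to_https("site.dev"))      # True: http://site.dev gets upgraded to https://site.dev
print(forced_to_https("example.com"))   # False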

Wait, there's a legit .dev gTLD?

Yes, unfortunately.

It's been bought by Google as one of their 100+ new gTLDs. What do they use it for? No clue. But it's going to cause a fair bit of confusion and pain for web developers.

The .dev gTLD has nameservers and is basically like any other TLD out there, we as developers just happen to have chosen that name as a good placeholder for local development, too, overwriting the public DNS.

$ dig +trace dev. NS
dev.			172800	IN	NS	ns-tld4.charlestonroadregistry.com.
dev.			172800	IN	NS	ns-tld5.charlestonroadregistry.com.
dev.			172800	IN	NS	ns-tld3.charlestonroadregistry.com.
dev.			172800	IN	NS	ns-tld2.charlestonroadregistry.com.
dev.			172800	IN	NS	ns-tld1.charlestonroadregistry.com.

Google publishes some of their domains on there, too;

$ dig +trace google.dev A
google.dev.		3600	IN	A	127.0.53.53

So yes, it's a legit TLD.

Consequences of redirecting .dev to HTTPS

A lot of (web) developers use a local .dev TLD for their own development, either by adding records to their /etc/hosts file or by using a system like Laravel Valet, which runs a dnsmasq service on your system to translate *.dev to 127.0.0.1.

In those cases, if you browse to http://site.dev, you'll be redirected to https://site.dev, the HTTPS variant.

That means your local development machine needs to;

  • Be able to serve HTTPS
  • Have self-signed certificates in place to handle that
  • Have that self-signed certificate added to your local trust store (you can't dismiss self-signed certificates with HSTS, they need to be 'trusted' by your computer)

Such fun.

What should we do?

With .dev being an official gTLD, we're most likely better off changing our preferred local development suffix from .dev to something else.

There's an excellent proposal to add the .localhost domain as a new standard, which would be more appropriate here. It would mean we no longer have site.dev, but site.localhost. And everything at *.localhost would automatically translate to 127.0.0.1, without /etc/hosts or dnsmasq workarounds.

Alternatively, if you're looking for a quick "search and replace" alternative for existing setups, consider the .test gTLD, which is a reserved name by IETF for testing (or development) purposes.

I do hope the Chromium team reconsiders the preloaded HSTS, as it's going to have rather big implications for local web development.

The post Chrome to force .dev domains to HTTPS via preloaded HSTS appeared first on ma.ttias.be.

by Mattias Geniar at September 17, 2017 08:04 AM

September 16, 2017

Errata Security

People can't read (Equifax edition)

One of these days I'm going to write a guide for journalists reporting on the cyber. One of the items I'd stress is that they often fail to read the text of what is being said, but instead read some sort of subtext that wasn't explicitly said. This is valid sometimes -- as the subtext is what the writer intended all along, even if they didn't explicitly write it. Other times, though, the imagined subtext is not what the writer intended at all.


A good example is the recent Equifax breach. The original statement says:
Equifax Inc. (NYSE: EFX) today announced a cybersecurity incident potentially impacting approximately 143 million U.S. consumers.
The word consumers was widely translated to customers, as in this Bloomberg story:
Equifax Inc. said its systems were struck by a cyberattack that may have affected about 143 million U.S. customers of the credit reporting agency
But these aren't the same thing. Equifax is a credit rating agency, keeping data on people who are not its own customers. It's an important difference.


Another good example is yesterday's quote "confirming" that the "Apache Struts" vulnerability was to blame:
Equifax has been intensely investigating the scope of the intrusion with the assistance of a leading, independent cybersecurity firm to determine what information was accessed and who has been impacted. We know that criminals exploited a U.S. website application vulnerability. The vulnerability was Apache Struts CVE-2017-5638.
But it doesn't confirm Struts was responsible. Blaming Struts is certainly the subtext of this paragraph, but it's not the text. It mentions that criminals had exploited the Struts vulnerability, but it doesn't actually connect the dots to the breach we are all talking about.

There's probably reasons for this. While it's easy for forensics to find evidence of Struts exploitation in logfiles, it's much harder to connect this to the breach. While they suspect Struts, they may not actually be able to confirm it. Or, maybe they are trying to cover things up, where they feel failing to patch is a lesser crime than what they really did.

It's at this point journalists should earn their pay. Instead of rewriting what they read on the Internet, they could do legwork and call up Equifax PR and ask.


The purpose of this post isn't to discuss Equifax, but the tendency of people to "read between the lines", to read some subtext that wasn't actually expressed in the text. Sometimes the subtext is legitimately there, such as how Equifax clearly intends people to blame Struts though they don't say it outright. Sometimes the subtext isn't there, such as how Equifax doesn't mean its own customers, only "U.S. consumers". Journalists need to be careful about making assumptions about the subtext.





Update: The Equifax CSO has a degree in music. Some people have criticized this. Most people have defended this, pointing out that almost nobody has an "infosec" degree in our industry, and many of the top people have no degree at all. Among others, @thegrugq has pointed out that infosec degrees are only a few years old -- they weren't around 20 years ago when today's corporate officers were getting their degrees.

Again, we have the text/subtext problem, where people interpret infosec degrees as being the same as computer-science degrees, the latter of which have existed for decades. Some, as in this case, consider them to be wildly different. Others consider them to be nearly the same.






by Robert Graham (noreply@blogger.com) at September 16, 2017 10:39 PM

September 15, 2017

Cryptography Engineering

Patching is hard; so what?

It’s now been about a week since Equifax announced the record-breaking breach that affected 143 million Americans. We still don’t know enough — but a few details have begun to come out about the causes of the attack. It’s now being reported that Equifax’s woes stem from an unpatched vulnerability in Apache Struts that dates from March 2017, nearly two months before the breach began. This flaw, which allows remote command execution on affected servers, somehow allowed an attacker to gain access to a whopping amount of Equifax’s customer data.

While many people have criticized Equifax for its failure, I’ve noticed a number of tweets from information security professionals making the opposite case. Specifically, these folks point out that patching is hard. The gist of these points is that you can’t expect a major corporation to rapidly deploy something as complex as a major framework patch across their production systems. The stronger version of this point is that the people who expect fast patch turnaround have obviously never patched a production server.

I don’t dispute this point. It’s absolutely valid. My very simple point in this post is that it doesn’t matter. Excusing Equifax for their slow patching is both irrelevant and wrong. Worse: whatever the context, statements like this will almost certainly be used by Equifax to excuse their actions. This actively makes the world a worse place.

I don’t operate production systems, but I have helped to design a couple of them. So I understand something about the assumptions you make when building them.

If you’re designing a critical security system you have choices to make. You can build a system that provides defense-in-depth — i.e., that makes the assumption that individual components will fail and occasionally become insecure. Alternatively, you can choose to build systems that are fragile — that depend fundamentally on the correct operation of all components at all times. Both options are available to system designers, and making the decision is up to those designers; or just as accurately, the managers that approve their design.

The key point is that once you’ve baked this cake, you’d better be willing to eat it. If your system design assumes that application servers will not contain critical vulnerabilities — and you don’t have resilient systems in place to handle the possibility that they do — then you’ve implicitly made the decision that you’re never ever going to allow those vulnerabilities to fester. Once an in-the-wild vulnerability is detected in your system, you’d damn well better have a plan to patch, and patch quickly. That may involve automated testing. It may involve taking your systems down, or devoting enormous resources to monitoring activity. If you can’t do that, you’d better have an alternative. Running insecure is not an option.

So what would those systems look like? Among more advanced system designs I’ve begun to see a move towards encrypting back-end data. By itself this doesn’t do squat to protect systems like Equifax’s, because those systems are essentially “hot” databases that have to provide cleartext data to application servers — precisely the systems that Equifax’s attackers breached.

The common approach to dealing with this problem is twofold. First, you harden the cryptographic access control components that handle decryption and key management for the data — so that a breach in an application server doesn’t lead to the compromise of the access control gates. Second, you monitor, monitor, monitor. The sole advantage that encryption gives you here is that your gates for access control are now reduced to only the systems that manage encryption. Not your database. Not your web framework. Just a — hopefully — small and well-designed subsystem that monitors and grants access to each record. Everything else is monitoring.

Equifax claims to have resilient systems in place. Only time will tell if they looked like this. What seems certain is that whatever those systems are, they didn’t work. And given both the scope and scale of this breach, that’s a cake I’d prefer not to have to eat.


by Matthew Green at September 15, 2017 10:21 PM

Everything Sysadmin

September NYCDevOps: Habitat in Production (Seth Thomas)

This month's NYCDevOps meetup speaker will be Seth Thomas talking about "Habitat in Production".

  • Date: Tuesday, September 19, 2017
  • Time: 6:30 PM
  • Location: Stack Overflow HQ, 110 William St, 28th floor, NY, NY

Space is limited! RSVP soon!

https://www.meetup.com/nycdevops/events/240064742/

by Tom Limoncelli at September 15, 2017 03:00 PM

September 13, 2017

Vincent Bernat

Route-based IPsec VPN on Linux with strongSwan

A common way to establish an IPsec tunnel on Linux is to use an IKE daemon, like the one from the strongSwan project, with a minimal configuration1:

conn V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64
  authby      = psk
  auto        = route

The same configuration can be used on both sides. Each side will figure out if it is “left” or “right”. The IPsec site-to-site tunnel endpoints are 2001:db8:1::1 and 2001:db8:2::1. The protected subnets are 2001:db8:a1::/64 and 2001:db8:a2::/64. As a result, strongSwan configures the following policies in the kernel:

$ ip xfrm policy
src 2001:db8:a1::/64 dst 2001:db8:a2::/64
        dir out priority 399999 ptype main
        tmpl src 2001:db8:1::1 dst 2001:db8:2::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir fwd priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir in priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
[…]

This kind of IPsec tunnel is a policy-based VPN: encapsulation and decapsulation are governed by these policies. Each of them contains the following elements:

  • a direction (out, in or fwd2),
  • a selector (source subnet, destination subnet, protocol, ports),
  • a mode (transport or tunnel),
  • an encapsulation protocol (esp or ah), and
  • the endpoint source and destination addresses.

When a matching policy is found, the kernel will look for a corresponding security association (using reqid and the endpoint source and destination addresses):

$ ip xfrm state
src 2001:db8:1::1 dst 2001:db8:2::1
        proto esp spi 0xc1890b6e reqid 4 mode tunnel
        replay-window 0 flag af-unspec
        auth-trunc hmac(sha256) 0x5b68[…]8ba2904 128
        enc cbc(aes) 0x8e0e377ad8fd91e8553648340ff0fa06
        anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
[…]

If no security association is found, the packet is put on hold and the IKE daemon is asked to negotiate an appropriate one. Otherwise, the packet is encapsulated. The receiving end identifies the appropriate security association using the SPI in the header. Two security associations are needed to establish a bidirectional tunnel:

$ tcpdump -pni eth0 -c2 -s0 esp
13:07:30.871150 IP6 2001:db8:1::1 > 2001:db8:2::1: ESP(spi=0xc1890b6e,seq=0x222)
13:07:30.872297 IP6 2001:db8:2::1 > 2001:db8:1::1: ESP(spi=0xcf2426b6,seq=0x204)

All IPsec implementations are compatible with policy-based VPNs. However, some configurations are difficult to implement. For example, consider the following proposition for redundant site-to-site VPNs:

Redundant VPNs between 3 sites

A possible configuration between V1-1 and V2-1 could be:

conn V1-1-to-V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64,2001:db8:a6::cc:1/128,2001:db8:a6::cc:5/128
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64,2001:db8:a6::/64,2001:db8:a8::/64
  authby      = psk
  keyexchange = ikev2
  auto        = route

Each time a subnet is modified on one site, the configurations need to be updated on all sites. Moreover, overlapping subnets (2001:db8:a6::/64 on one side and 2001:db8:a6::cc:1/128 at the other) can also be problematic.

The alternative is to use route-based VPNs: any packet traversing a pseudo-interface will be encapsulated using a security policy bound to the interface. This brings two features:

  1. Routing daemons can be used to distribute routes to be protected by the VPN. This decreases the administrative burden when many subnets are present on each side.
  2. Encapsulation and decapsulation can be executed in a different routing instance or namespace. This enables a clean separation between a private routing instance (where VPN users are) and a public routing instance (where VPN endpoints are).

Route-based VPN on Juniper

Before looking at how to achieve that on Linux, let’s have a look at the way it works with a JunOS-based platform (like a Juniper vSRX). This platform has a long-standing history of supporting route-based VPNs (a feature already present in the Netscreen ISG platform).

Let’s assume we want to configure the IPsec VPN from V3-2 to V1-1. First, we need to configure the tunnel interface and bind it to the “private” routing instance containing only internal routes (with IPv4, they would have been RFC 1918 routes):

interfaces {
    st0 {
        unit 1 {
            family inet6 {
                address 2001:db8:ff::7/127;
            }
        }
    }
}
routing-instances {
    private {
        instance-type virtual-router;
        interface st0.1;
    }
}

The second step is to configure the VPN:

security {
    /* Phase 1 configuration */
    ike {
        proposal IKE-P1 {
            authentication-method pre-shared-keys;
            dh-group group20;
            encryption-algorithm aes-256-gcm;
        }
        policy IKE-V1-1 {
            mode main;
            proposals IKE-P1;
            pre-shared-key ascii-text "d8bdRxaY22oH1j89Z2nATeYyrXfP9ga6xC5mi0RG1uc";
        }
        gateway GW-V1-1 {
            ike-policy IKE-V1-1;
            address 2001:db8:1::1;
            external-interface lo0.1;
            general-ikeid;
            version v2-only;
        }
    }
    /* Phase 2 configuration */
    ipsec {
        proposal ESP-P2 {
            protocol esp;
            encryption-algorithm aes-256-gcm;
        }
        policy IPSEC-V1-1 {
            perfect-forward-secrecy keys group20;
            proposals ESP-P2;
        }
        vpn VPN-V1-1 {
            bind-interface st0.1;
            df-bit copy;
            ike {
                gateway GW-V1-1;
                ipsec-policy IPSEC-V1-1;
            }
            establish-tunnels on-traffic;
        }
    }
}

We get a route-based VPN because we bind the st0.1 interface to the VPN-V1-1 VPN. Once the VPN is up, any packet entering st0.1 will be encapsulated and sent to the 2001:db8:1::1 endpoint.

The last step is to configure BGP in the “private” routing instance to exchange routes with the remote site:

routing-instances {
    private {
        routing-options {
            router-id 1.0.3.2;
            maximum-paths 16;
        }
        protocols {
            bgp {
                preference 140;
                log-updown;
                group v4-VPN {
                    type external;
                    local-as 65003;
                    hold-time 6;
                    neighbor 2001:db8:ff::6 peer-as 65001;
                    multipath;
                    export [ NEXT-HOP-SELF OUR-ROUTES NOTHING ];
                }
            }
        }
    }
}

The export filter OUR-ROUTES needs to select the routes to be advertised to the other peers. For example:

policy-options {
    policy-statement OUR-ROUTES {
        term 10 {
            from {
                protocol ospf3;
                route-type internal;
            }
            then {
                metric 0;
                accept;
            }
        }
    }
}

The configuration needs to be repeated for the other peers. The complete version is available on GitHub. Once the BGP sessions are up, we start learning routes from the other sites. For example, here is the route for 2001:db8:a1::/64:

> show route 2001:db8:a1::/64 protocol bgp table private.inet6.0 best-path

private.inet6.0: 15 destinations, 19 routes (15 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2001:db8:a1::/64   *[BGP/140] 01:12:32, localpref 100, from 2001:db8:ff::6
                      AS path: 65001 I, validation-state: unverified
                      to 2001:db8:ff::6 via st0.1
                    > to 2001:db8:ff::14 via st0.2

It was learnt both from V1-1 (through st0.1) and V1-2 (through st0.2). The route is part of the private routing instance but encapsulated packets are sent/received in the public routing instance. No route-leaking is needed for this configuration. The VPN cannot be used as a gateway from internal hosts to external hosts (or vice-versa). This could also have been done with JunOS’ security policies (stateful firewall rules), but doing the separation with routing instances also ensures routes from different domains are not mixed and a simple policy misconfiguration won’t lead to a disaster.

Route-based VPN on Linux

Starting from Linux 3.15, a similar configuration is possible with the help of a virtual tunnel interface3. First, we create the “private” namespace:

# ip netns add private
# ip netns exec private sysctl -qw net.ipv6.conf.all.forwarding=1

Any “private” interface needs to be moved to this namespace (no IP is configured as we can use IPv6 link-local addresses):

# ip link set netns private dev eth1
# ip link set netns private dev eth2
# ip netns exec private ip link set up dev eth1
# ip netns exec private ip link set up dev eth2

Then, we create vti6, a tunnel interface (similar to st0.1 in the JunOS example):

# ip tunnel add vti6 \
   mode vti6 \
   local 2001:db8:1::1 \
   remote 2001:db8:3::2 \
   key 6
# ip link set netns private dev vti6
# ip netns exec private ip addr add 2001:db8:ff::6/127 dev vti6
# ip netns exec private sysctl -qw net.ipv4.conf.vti6.disable_policy=1
# ip netns exec private sysctl -qw net.ipv4.conf.vti6.disable_xfrm=1
# ip netns exec private ip link set vti6 mtu 1500
# ip netns exec private ip link set vti6 up

The tunnel interface is created in the initial namespace and moved to the “private” one. It will remember its original namespace where it will process encapsulated packets. Any packet entering the interface will temporarily get a firewall mark of 6 that will be used only to match the appropriate IPsec policy4 below. The kernel sets a low MTU on the interface to handle any possible combination of ciphers and protocols. We set it to 1500 and let PMTUD do its work.

We can then configure strongSwan5:

conn V3-2
  left        = 2001:db8:1::1
  leftsubnet  = ::/0
  right       = 2001:db8:3::2
  rightsubnet = ::/0
  authby      = psk
  mark        = 6
  auto        = route
  keyexchange = ikev2
  keyingtries = %forever
  ike         = aes256gcm16-prfsha384-ecp384!
  esp         = aes256gcm16-prfsha384-ecp384!
  mobike      = no

The IKE daemon configures the following policies in the kernel:

$ ip xfrm policy
src ::/0 dst ::/0
        dir out priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:1::1 dst 2001:db8:3::2
                proto esp reqid 1 mode tunnel
src ::/0 dst ::/0
        dir fwd priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:3::2 dst 2001:db8:1::1
                proto esp reqid 1 mode tunnel
src ::/0 dst ::/0
        dir in priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:3::2 dst 2001:db8:1::1
                proto esp reqid 1 mode tunnel
[…]

Those policies are used for any source or destination as long as the firewall mark is equal to 6, which matches the mark configured for the tunnel interface.

The last step is to configure BGP to exchange routes. We can use BIRD for this:

router id 1.0.1.1;
protocol device {
   scan time 10;
}
protocol kernel {
   persist;
   learn;
   import all;
   export all;
   merge paths yes;
}
protocol bgp IBGP_V3_2 {
   local 2001:db8:ff::6 as 65001;
   neighbor 2001:db8:ff::7 as 65003;
   import all;
   export where ifname ~ "eth*";
   preference 160;
   hold time 6;
}

Once BIRD is started in the “private” namespace, we can check routes are learned correctly:

$ ip netns exec private ip -6 route show 2001:db8:a3::/64
2001:db8:a3::/64 proto bird metric 1024
        nexthop via 2001:db8:ff::5  dev vti5 weight 1
        nexthop via 2001:db8:ff::7  dev vti6 weight 1

The above route was learnt from both V3-1 (through vti5) and V3-2 (through vti6). As with the JunOS version, there is no route-leaking between the “private” namespace and the initial one. The VPN cannot be used as a gateway between the two namespaces, only for encapsulation. This also prevents a misconfiguration (for example, the IKE daemon not running) from allowing packets to leave the private network.

As a bonus, unencrypted traffic can be observed with tcpdump on the tunnel interface:

$ ip netns exec private tcpdump -pni vti6 icmp6
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vti6, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
20:51:15.258708 IP6 2001:db8:a1::1 > 2001:db8:a3::1: ICMP6, echo request, seq 69
20:51:15.260874 IP6 2001:db8:a3::1 > 2001:db8:a1::1: ICMP6, echo reply, seq 69

You can find all the configuration files for this example on GitHub. The documentation of strongSwan also features a page about route-based VPNs.


  1. Everything in this post should work with Libreswan

  2. fwd is for incoming packets on non-local addresses. It only makes sense in transport mode and is a Linux-only particularity. 

  3. Virtual tunnel interfaces (VTI) were introduced in Linux 3.6 (for IPv4) and Linux 3.12 (for IPv6). Appropriate namespace support was added in 3.15. KLIPS, an alternative out-of-tree stack available since Linux 2.2, also features tunnel interfaces. 

  4. The mark is set right before doing a policy lookup and restored after that. Consequently, it doesn’t affect other possible uses (filtering, routing). However, as Netfilter can also set a mark, one should be careful for conflicts. 

  5. The ciphers used here are the strongest ones currently possible while keeping compatibility with JunOS. The documentation for strongSwan contains a complete list of supported algorithms as well as security recommendations to choose them. 

by Vincent Bernat at September 13, 2017 08:20 AM

September 12, 2017

Sean's IT Blog

vMotion Support for NVIDIA vGPU is Coming…And It’ll Be Bigger than you Think

One of the cooler tech announcements at VMworld 2017 was on display at the NVIDIA booth.  It wasn’t really an announcement, per se, but more of a demonstration of a long awaited solution to a very difficult challenge in the virtualization space.

NVIDIA displayed a tech demo of vMotion support for VMs with GRID vGPU running on ESXi.  Along with this demo was news that they had also solved the problem of suspend and resume on vGPU enabled machines, and these solutions would be included in future product releases.  NVIDIA announced live migration support for XenServer earlier this year.

Rob Beekmans (Twitter: @robbeekmans) also wrote about this recently, and his blog has video showing the tech demos in action.

I want to clarify that these are tech demos, not tech previews.  Tech Previews, in VMware EUC terms, usually means a feature that is in beta or pre-release to get real-world feedback.  These demos likely occurred on a development version of a future ESXi release, and there is no projected timeline as to when they will be released as part of a product.

Challenges to Enabling vMotion Support for vGPU

So you’re probably thinking “What’s the big deal? vMotion is old hat now.”  But when vGPU is enabled on a virtual machine, it requires that VM to have direct, but shared, access to physical hardware on the system – in this case, a GPU.  And vMotion never worked if a VM had direct access to hardware – be it a PCI device that was passed through or something plugged into a USB port.

If we look at how vGPU works, each VM has a shared PCI device added to it.  This shared PCI device provides shared access to a physical card.  To facilitate this access, each VM gets a portion of the GPU’s Base Address Register (BAR), or the hardware level interface between the machine and the PCI card.  In order to make this portable, there has to be some method of virtualizing the BAR.  A VM that migrates may not get the same address space on the BAR when it moves to a new host, and any changes to that would likely cause issues to Windows or any jobs that the VM has placed on the GPU.

There is another challenge to enabling vMotion support for vGPU.  Think about what a GPU is – it’s a (massively parallel) processor with dedicated RAM.  When you add a GPU into a VM, you’re essentially attaching a 2nd system to the VM, and the data that is in the GPU framebuffer and processor queues needs to be migrated along with the CPU, system RAM, and system state.  So this requires extra coordination to ensure that the GPU releases things so they can be migrated to the new host, and it has to be done in a way that doesn’t impact performance for other users or applications that may be sharing the GPU.

Suspend and Resume is another challenge that is very similar to vMotion support.  Suspending a VM basically hibernates the VM.  All current state information about the VM is saved to disk, and the hardware resources are released.  Instead of sending data to another machine, it needs to be written to a state file on disk.  This includes the GPU state.  When the VM is resumed, it may not get placed on the same host and/or GPU, but all the saved state needs to be restored.

Hardware Preemption and CUDA Support on Pascal

The August 2016 GRID release included support for the Pascal-series cards.  Pascal series cards include hardware support for preemption.  This is important for GRID because it uses time-slicing to share access to the GPU across multiple VMs.  When a time-slice expires, it moves onto the next VM.

This can cause issues when using GRID to run CUDA jobs.  CUDA jobs can be very long running, and the job is stopped when the time-slice is expired.  Hardware preemption enables long-running CUDA tasks to be interrupted and paused when the time-slice expires, and those jobs are resumed when that VM gets a new time-slice.

So why is this important?  In previous versions of GRID, CUDA was only available and supported on the largest profiles.  So to support the applications that required CUDA in a virtual environment, an entire GPU would need to be dedicated to the VM. This could be a significant overallocation of resources, and it significantly reduced the density on a host.  If a customer was using M60s, which have two GPUs per card, then they may be limited to 4 machines with GPU access if they needed CUDA support.

With Pascal cards and the latest GRID software, CUDA support is enabled on all vDWS profiles (the ones that end with a Q).  Now customers can provide CUDA-enabled vGPU profiles to virtual machines without having to dedicate an entire GPU to one machine.

This has two benefits.  First, it enables more features in the high-end 3D applications that run on virtual workstations.  Not only can these machines be used for design, they can now utilize the GPU to run models or simulations.

The second benefit has nothing to do with virtual desktops or applications.  It allows GPU-enabled server applications to be fully virtualized.  This potentially means things like render farms or, looking further ahead, virtualized AI inference engines for business applications or infrastructure support services.  One potentially interesting use case for this is running MapD, a database that runs entirely in the GPU, on a virtual machine.
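
As an aside, if you want to see which vGPU profiles are actually active on a host today, the NVIDIA vGPU manager exposes a monitoring view through nvidia-smi.  A rough sketch, run from the ESXi shell, with the caveat that subcommand and option spellings vary by vGPU software release:

    nvidia-smi vgpu        # list running vGPU instances and their profile names
    nvidia-smi vgpu -q     # per-vGPU detail (availability depends on the release)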

Analysis

GPUs have the ability to revolutionize enterprise applications in the data center.  They can potentially bring artificial intelligence, deep learning, and massively parallel computing to business apps.

vMotion support is critical in enabling enterprise applications in virtual environments.  The ability to move applications and servers around is important to keeping services available.

By enabling hardware preemption and vMotion support, it becomes possible to virtualize the next generation of business applications.  These applications will require a GPU and CUDA support to improve performance or to utilize deep learning algorithms, and they can now be moved around the datacenter without impacting running workloads, maintaining availability and keeping active jobs running so they do not have to be restarted.

This also opens up new opportunities to better utilize data center resources.  If I have a large VDI footprint that utilizes GRID, I can’t vMotion any running desktops today to consolidate them on particular hosts.  If I can use vMotion to consolidate those desktops, I can repurpose the remaining GPU-equipped hosts for other work, such as render farms or after-hours data processing.

This may not seem important now.  But I believe that deep learning/artificial intelligence will become a critical feature in business applications, and the ability to turn my VDI hosts into something else after-hours will help enable these next generation applications.

 


by seanpmassey at September 12, 2017 08:44 PM

September 11, 2017

HolisticInfoSec.org

Toolsmith Tidbit: Windows Auditing with WINspect

WINSpect recently hit the toolsmith radar screen via Twitter, and the author, Amine Mehdaoui, just posted an update a couple of days ago, so no time like the present to give you a walk-through. WINSpect is a Powershell-based Windows Security Auditing Toolbox. According to Amine's GitHub README, WINSpect "is part of a larger project for auditing different areas of Windows environments. It focuses on enumerating different parts of a Windows machine aiming to identify security weaknesses and point to components that need further hardening. The main targets for the current version are domain-joined windows machines. However, some of the functions still apply for standalone workstations."
The current script feature set includes audit checks and enumeration for:

  • Installed security products
  • World-exposed local filesystem shares
  • Domain users and groups with local group membership
  • Registry autoruns
  • Local services that are configurable by Authenticated Users group members
  • Local services for which corresponding binary is writable by Authenticated Users group members
  • Non-system32 Windows Hosted Services and their associated DLLs
  • Local services with unquoted path vulnerability
  • Non-system scheduled tasks
  • DLL hijackability
  • User Account Control settings
  • Unattended installs leftovers
I can see this useful PowerShell script coming in quite handy for assessment using the CIS Top 20 Security Controls. I ran it on my domain-joined Windows 10 Surface Book via a privileged PowerShell and liked the results.
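
If you want to give it a spin the same way, here is a hedged sketch of a first run from an elevated PowerShell prompt (the repository URL and script name below are placeholders, so treat Amine's GitHub README as the authoritative source):

    git clone https://github.com/<author>/WINspect.git   # placeholder URL; see the README
    cd .\WINspect
    Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass   # allow the script for this session only
    .\WINspect.ps1                                               # script name assumed from the project name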


The script confirms that it's running with admin rights, checks the PowerShell version, then inspects Windows Firewall settings. Looking good on the firewall, and WINSpect tees right off on my Windows Defender instance and its configuration as well.
Not sharing a screenshot of my shares or admin users, sorry, but you'll find them enumerated when you run WINSpect.


WINSpect then confirmed that UAC was enabled and that it should notify me only when apps try to make changes, then checked my registry for autoruns; no worries on either front, all confirmed as expected.


WINSpect wrapped up with a quick check of configurable services. SMSvcHost is normal as part of .NET, even if I don't like it, but the flowExportService doesn't need to be there at all; I removed that a while ago after being really annoyed with it during testing. No user hosted services, and DLL Safe Search is enabled...bonus. Finally, no unattended install leftovers, and all the scheduled tasks are normal for my system. Sweet, pretty good overall, thanks WINSpect. :-)

Give it a try for yourself, and keep an eye out for updates. Amine indicates that Local Security Policy controls, administrative shares configs, loaded DLLs, established/listening connections, and exposed GPO scripts are on the to-do list.
Cheers...until next time.

by Russ McRee (noreply@blogger.com) at September 11, 2017 12:29 AM

September 10, 2017

Steve Kemp's Blog

Debian-Administration.org is closing down

After 13 years the Debian-Administration website will be closing down towards the end of the year.

The site will go read-only at the end of the month, and will slowly be stripped back from that point towards the end of the year - leaving only a static copy of the articles and content.

This is largely happening due to lack of content. There were only two articles posted last year, and every time I consider writing more content I lose my enthusiasm.

There was a time when people contributed articles, but these days they tend to post such things on their own blogs, on Medium, on Reddit, etc. So it seems like a good time to retire things.

An official notice has been posted on the site-proper.

September 10, 2017 09:00 PM

Debian Administration

This site is going to go read-only

This site was born in late September 2004, and has now reached 13 years of age and that seems to be a fitting time to stop.

by Steve at September 10, 2017 07:02 AM

September 08, 2017

The Lone Sysadmin

Help Me Name a Team, Win $50

I know all you nerds have a few minutes to help me crowd source the solution to a problem. I’m also willing to offer $50 in the form of an Amazon gift card to someone with the right solution. We want to rename our group at work. We are currently “Systems Engineering” (abbreviated “SE”). There […]

The post Help Me Name a Team, Win $50 appeared first on The Lone Sysadmin. Head over to the source to read the full post!

by Bob Plankers at September 08, 2017 05:56 PM

Michael Biven

Reducing Human Error in Software Based Services

As I’m writing this Texas and Louisiana continue to deal with the impact of Hurricane Harvey. Hurricane Irma is heading towards Florida. Los Angeles just experienced the biggest fire to burn in its history (La Tuna Fire). And in the last three months there have been two different collisions between US Navy vessels and civilian ships that resulted in 17 fatalities and multiple injuries.

The interaction and relationships between people, actions, and events in high risk endeavors are awe-inspiring. Put aside the horrific loss of life and think of the amount of stress and chaos involved. Imagine knowing that your actions can have irreversible consequences. Though they can't be changed, I'm fascinated with efforts to prevent them from repeating.

Think of those interactions between people, actions and events as change. There are examples of software systems failing with critical or fatal consequences when they don't handle that change. For most of us the impact might be setbacks that delay work or, at most, a financial consequence to our employer or ourselves. While the impact may differ, there is a benefit in learning from professions other than our own that deal with change on a daily basis.

Our job as systems or ops engineers should be to build, maintain, troubleshoot, and retire the systems we're responsible for. But there's been a shift building that has us focusing more on becoming experts at evaluating new technology.

Advances in our tooling have allowed us to rebuild, replace, or re-provision our way out of failures. This starts to introduce complacency, because the tools begin to hold more context about the issues than we do. It shifts our focus away from building a better understanding of what's happening.

As the complexity and the number of systems involved increase, our ability to understand what is happening and how they interact hasn't kept up. If you have any third-party dependencies, what are the chances they're going through a similar experience? How much of an impact does this have on your understanding of what's happening in your own systems?

Atrophy of Basics Skills

The increased efficiency of our tooling creates a Jevons paradox. This is an economic idea that as the efficiency of something increases it will lead to more consumption instead of reducing it. It is named after William Jevons, who in the 19th century noticed that the consumption of coal increased after the release of a new steam engine design. The improvements in this new design increased the efficiency of the coal-fired steam engine over its predecessors, which fueled a wider adoption of the steam engine. It became cheaper for more people to use a new technology, and this led to the increased consumption of coal.

For us the engineer’s time is the coal and the tools are the new coal-fired engine. As the efficiency of the tooling increases we tend to use more of the engineer’s time. The adoption of the tooling increases while the number of engineers tends to remain flat. Instead of bringing in more people we tend to try to do more with the people we have.

This contributes to an atrophying of the basic skills needed to do the job. Things like troubleshooting, situational awareness, and being able to hold a mental model of what’s happening. It’s a journeyman’s process to build them. Actual production experience is the best teacher and the best feedback is from your peers. Tools are starting to replace the opportunities for people to have those experiences to learn from.

Children of the Magenta and the Dangers of Automation

For most of us improving the efficiency of an engineer's time will look like some sort of automation. And while there are obvious benefits there are some not so obvious negatives. First, automation can hide from us the context of what is happening, what has happened, and what will happen. How many times have you heard or asked yourself "What's it doing now?"

“A lot of what’s happening is hidden from view from the pilots. It’s buried. When the airplane starts doing something that is unexpected and the pilot says ‘hey, what’s it doing now?’ — that’s a very very standard comment in cockpits today.’”
– William Langewiesche, journalist and former American Airlines captain.

On May 31, 2009, 228 people died when Air France 447 lost altitude from 35,000 feet and pancaked into the Atlantic Ocean. A pressure probe had iced over, preventing the aircraft from determining its airspeed. This caused the autopilot to disengage, and the "fly-by-wire" system switched into a different mode.

“We appear to be locked into a cycle in which automation begets the erosion of skills or the lack of skills in the first place and this then begets more automation.” – William Langewiesche

Four years later Asiana Airlines flight 214 crashed on their final approach into SFO. It came in short of the runway, striking the seawall. The NTSB report shows the flight crew mismanaged the initial approach and the aircraft was above the desired glide path. The captain responded by selecting the wrong autopilot mode, which caused the auto throttle to disengage. He had a faulty mental model of the aircraft's automation logic. This over-reliance on automation and lack of understanding of the systems were cited as major factors leading to the accident.

This has been described as "Children of the Magenta" due to the information presented in the cockpit from the autopilot being magenta in color. The term was coined by Capt. Warren "Van" Vanderburgh at the American Airlines Flight Academy. There are different levels of automation in an aircraft, and he argues that by reducing the level of automation you can reduce the workload in some situations; the amount of automation should match the current conditions of the environment. It's a 25-minute video that's worth watching, but it boils down to this: pilots have become too dependent on automation in general and are losing the skills needed to safely control their aircraft.

This led a Federal Aviation Administration task force on cockpit technology to urge airlines to have their pilots spend more time flying by hand. This focus on returning to basic skills is similar to the report released by the Government Accountability Office (GAO) regarding the impact of maintenance and training on the readiness of the US Navy.

Based on updated data, GAO found that, as of June 2017, 37 percent of the warfare certifications for cruiser and destroyer crews based in Japan—including certifications for seamanship—had expired. This represents more than a fivefold increase in the percentage of expired warfare certifications for these ships since GAO’s May 2015 report.

What if Automation is part of the Product?

Knight Capital was a global financial services firm engaging in market making. They used high-frequency trading algorithms to manage a little over 17% market share on the NYSE and almost 17% on NASDAQ. In 2012 a new program was about to be released at the NYSE called the Retail Liquidity Program (RLP). Knight Capital made a number of changes to its systems and software to handle the change. This included adding new code to an automated, high speed, algorithmic router called SMARS that would send orders into the market.

This was intended to replace some deprecated code in SMARS called Power Peg. Except it wasn't deprecated and it was still being used. The new code for RLP even used the same feature flag as Power Peg. During the deploy one server was skipped and no one noticed the deployment was incomplete. When the feature flag for SMARS was activated, it triggered the Power Peg code on the one server that was missing the update. After 45 minutes of routing millions of orders into the market (4 million executions on 154 stocks), Knight Capital lost around $460 million. In this case automation could have helped.

Automation is not a bad thing. You need to be thoughtful and clear on how it's being used and how it functions. In Ten Challenges for Making Automation a "Team Player" in Joint Human-Agent Activity the authors provide a guideline for this. They show that the interaction between people and automation can be improved by meeting four basic requirements (here I'm thinking that robots only had three laws). They then go on to describe the ten biggest challenges to satisfying them.

Four Basic Requirements

  1. Enter into an agreement, which we’ll call a Basic Compact, that the participants intend to work together
  2. Be mutually predictable in their actions
  3. Be mutually directable
  4. Maintain common ground

Complexity and Chaos

Complexity has a tendency to bring about chaos: people have difficulty understanding the system(s), visibility into what is happening is incomplete, and the number of possibilities within them is large, covering normal events, the mutability of the data, and the out of the ordinary. That last category encompasses failures, people using the system(s) in unexpected ways, large spikes in requests, and bad actors.

If this is the environment that we find ourselves working in we can only control the things we bring into it. That includes making sure we have a grasp on the basics of our profession and maintain the best possible understanding of what’s happening with our systems. This should allow us to work around issues as they happen and decrease our dependencies on our tools. There’s usually more than one way to get information or make a change. Knowing the basics and staying aware of what is happening can get us through the chaos.

There are three people whose works can help navigate these kinds of conditions. John Boyd, Richard I. Cook, and Edward Tufte. Boyd gives us a framework for working within chaos and using it to our own advantage. Cook shows how complexity can fail and suggests ways to find the causes of those failures. And Tufte explains how we can reduce the complexity of the information we're working with.

Team Resource Management

This leads us to a proven approach that has been used in aviation, fire fighting, and emergency medicine that we can adapt for our own use. In 1973 NASA started research into human factors in aviation safety. Several years later two Boeing 747s collided on the runway and killed 583 people. This prompted a workshop titled “Resource Management on the Flight Deck” that included the NASA researchers and the senior officers responsible for aircrew training from the major airlines. The result of this workshop was a focus on training to reduce the primary causes of aviation accidents called Crew Resource Management (CRM).

They saw the primary causes as human error and communication problems. The training would reinforce the importance of communication and orientating to what is actually happening. It changes the cultural muscle memory so when we start to see things go bad, we speak up and examine it. And it stresses a culture where authority may be questioned.

We’ve already adapted the Incident Command System for our own use… why not do the same with CRM?

What’s the Trend of Causes for Failures?

We don’t have a group like the NTSB or NASA focused on reducing failures in what we do. Yes, there are groups like USENIX, Apache Software Foundation, Linux Foundation, and the Cloud Native Computing Foundation. But I’m not aware of any of them tracking and researching the common causes of failures.

After a few searches for any reference to a postmortem, retro, or outage I came up with this list. These were pulled from searches on Lobsters, Hacker News, Slashdot, TechCrunch, Techmeme, High Scalability, and ArsTechnica. In this very small, nonscientific sample almost half are due to what I would call human error, and there are also four power-related causes. Don't take anything out of this list other than the following: we would benefit from having more transparency into the failures in our industry and a better understanding of their causes.

Date       | Description and Link                                | Cause
9/7/2017   | Softlayer GLBS Outage                               | Unclear
8/26/2017  | BGP Leak caused internet outage in Japan            | Unknown
8/21/2017  | Honeycomb outage                                    | In Progress
8/4/2017   | Visual Studio Team Services outage                  | Human Error
8/2/2017   | Issues with Visual Studio Team Services             | Failed Dependency
5/18/2017  | Let’s Encrypt OCSP outage                           | Human Error
3/16/2017  | Square                                              | Deploy Triggered Load
2/28/2017  | AWS S3 Outage                                       | Human Error
2/9/2017   | Instapaper Outage Cause & Recovery                  | Human Error
1/31/2017  | Gitlab Outage                                       | Human Error
1/22/2017  | United Airlines grounded two hours, computer outage | Unknown
10/21/2016 | Dyn DNS DDoS                                        | DDoS
10/18/2016 | Google Compute Engine                               | Human Error
5/10/2016  | SalesForce                                          | Failure after mitigating a power outage
1/28/2016  | Github Service outage                               | Cascading failure after power outage
1/19/2016  | Twitter                                             | Human Error
7/27/2015  | Joyent Manta Outage                                 | Locks Blocks The Data
1/10/2014  | Dropbox Outage post-mortem                          | Human Error
1/8/2014   | GitHub Outage                                       | Human Error
3/3/2013   | Cloudflare outage                                   | Unintended Consequences
8/1/2012   | Knight Capital                                      | Human Error
6/30/2012  | AWS power failure                                   | Double Failure of Generators during Power Outage
10/11/2011 | Blackberry World Wide 3 Day Outage                  | Hardware Failure and Failed Backup Process
11/15/2010 | GitHub Outage Config Error                          | Human Error
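
If you save the table above to a file, tallying the causes is a one-liner. A rough sketch, with outages.txt standing in for whatever you call that file:

    # count the "Cause" column (third field), skipping the header row
    awk -F'[|]' 'NR > 1 { gsub(/^ +| +$/, "", $3); count[$3]++ }
                 END { for (c in count) printf "%4d  %s\n", count[c], c }' outages.txt | sort -rn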

“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.” – R. P. Feynman, Rogers Commission Report

September 08, 2017 05:10 PM

September 07, 2017

Michael Biven

Valid Security Announcements

A checklist item for the next time you need to create a landing page for a security announcement.

Make sure the certificate and the whois on the domain being used actually reference the name of your company.

My wife sends me a link to this.

I then find the page for the actual announcement from Equifax.

Go to the dedicated website www.equifaxsecurity2017.com and find it’s using a Cloudflare SSL certificate.

The certificate chain doesn’t mention Equifax other than the DNS names used in the cert (*.equifaxsecurity2017.com and equifaxsecurity2017.com).
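
Checking what the certificate actually says about its owner takes one pipeline; the Subject and Issuer fields are where you would expect the company name to show up:

    echo | openssl s_client -connect www.equifaxsecurity2017.com:443 \
        -servername www.equifaxsecurity2017.com 2>/dev/null |
      openssl x509 -noout -subject -issuer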

What happens if I do a whois?

$ whois equifaxsecurity2017.com
   Domain Name: EQUIFAXSECURITY2017.COM
   Registry Domain ID: 2156034374_DOMAIN_COM-VRSN
   Registrar WHOIS Server: whois.markmonitor.com
   Registrar URL: http://www.markmonitor.com
   Updated Date: 2017-08-25T15:08:31Z
   Creation Date: 2017-08-22T22:07:28Z
   Registry Expiry Date: 2019-08-22T22:07:28Z
   Registrar: MarkMonitor Inc.
   Registrar IANA ID: 292
   Registrar Abuse Contact Email: abusecomplaints@markmonitor.com
   Registrar Abuse Contact Phone: +1.2083895740
   Domain Status: clientDeleteProhibited https://icann.org/epp#clientDeleteProhibited
   Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
   Domain Status: clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited
   Name Server: BART.NS.CLOUDFLARE.COM
   Name Server: ETTA.NS.CLOUDFLARE.COM
   DNSSEC: unsigned

Now I want to see if I’m impacted. Click on the “Check Potential Impact” and I’m taken to a new site (trustedidpremier.com/eligibility/eligibility.html).

And we get another certificate and a whois lacking any reference back to Equifax.

$ whois trustedidpremier.com
   Domain Name: TRUSTEDIDPREMIER.COM
   Registry Domain ID: 2157515886_DOMAIN_COM-VRSN
   Registrar WHOIS Server: whois.registrar.amazon.com
   Registrar URL: http://registrar.amazon.com
   Updated Date: 2017-08-29T04:59:16Z
   Creation Date: 2017-08-28T17:25:35Z
   Registry Expiry Date: 2018-08-28T17:25:35Z
   Registrar: Amazon Registrar, Inc.
   Registrar IANA ID: 468
   Registrar Abuse Contact Email: registrar-abuse@amazon.com
   Registrar Abuse Contact Phone: +1.2062661000
   Domain Status: ok https://icann.org/epp#ok
   Name Server: NS-1426.AWSDNS-50.ORG
   Name Server: NS-1667.AWSDNS-16.CO.UK
   Name Server: NS-402.AWSDNS-50.COM
   Name Server: NS-934.AWSDNS-52.NET
   DNSSEC: unsigned

I’m not suggesting that the site equifaxsecurity2017 is malicious, but if you’re going to the trouble of setting up a page like this, make sure your certificate and whois actually reference the company making the announcement. If you look at the creation dates for the domains and the Not Valid Before dates on the certs, they had plenty of time to get domains and certificates created that reference themselves.

September 07, 2017 02:31 PM

September 05, 2017

Errata Security

State of MAC address randomization

tldr: I went to DragonCon, a conference of 85,000 people, to sniff WiFi packets and test how many phones now use MAC address randomization. Almost all iPhones nowadays do, but it seems only a third of Android phones do.


Ten years ago at BlackHat, we presented the "data seepage" problem, how the broadcasts from your devices allow you to be tracked. Among the things we highlighted was how WiFi probes looking to connect to access-points expose the unique hardware address burned into the phone, the MAC address. This hardware address is unique to your phone, shared by no other device in the world. Evildoers, such as the NSA or GRU, could install passive listening devices in airports and train-stations around the world in order to track your movements. This could be done with $25 devices sprinkled around a few thousand places -- within the budget of not only a police state, but also the average hacker.

In 2014, with the release of iOS 8, Apple addressed this problem by randomizing the MAC address. Every time you restart your phone, it picks a new, random, hardware address for connecting to WiFi. This causes a few problems: every time you restart your iOS devices, your home network sees a completely new device, which can fill up your router's connection table. Since that table usually has at least 100 entries, this shouldn't be a problem for your home, but corporations and other owners of big networks saw their connection tables suddenly get big with iOS 8.

In 2015, Google added the feature to Android as well. However, even though most Android phones today support this feature in theory, it's usually not enabled.


Recently, I went to DragonCon in order to test out how well this works. DragonCon is a huge sci-fi/fantasy conference in Atlanta in August, second to San Diego's ComicCon in popularity. It's spread across several neighboring hotels in the downtown area. A lot of the traffic funnels through the Marriott Marquis hotel, which has a large open area where, from above, you can see thousands of people at a time.

And, with a laptop, see their broadcast packets.

So I went up on a higher floor and set up my laptop to capture "probe" broadcasts coming from phones, in order to record the hardware MAC addresses. I've done this in years past, before address randomization, in order to record the popularity of iPhones. The first three bytes of an old-style, non-randomized address identify the manufacturer. This time, I should see a lot fewer manufacturer IDs, and mostly just random addresses instead.
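
For the record, a rough sketch of what that capture and classification can look like, assuming a Linux laptop with an interface already in monitor mode (wlan0mon is a made-up interface name, and the test only looks at the locally administered bit of the first octet, so treat it as illustrative):

    # stop the capture with Ctrl-C; the classification prints once tcpdump exits
    tcpdump -l -e -i wlan0mon type mgt subtype probe-req 2>/dev/null |
      grep -oE 'SA:([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}' |
      cut -d: -f2- | sort -u |
      awk -F: '{ d = tolower(substr($1, 2, 1));
                 t = (d == "2" || d == "6" || d == "a" || d == "e") ? "randomized" : "burned-in";
                 print t, $0 }'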

I recorded 9,095 unique probes over a couple of hours. I'm not sure exactly how long -- my laptop would go to sleep occasionally because of lack of activity on the keyboard. I should probably set up a Raspberry Pi somewhere next year to get a more consistent result.

A quick summary of the results:
The roughly 9,000 devices were split almost evenly between Apple and Android. Almost all of the Apple devices randomized their addresses, but only about a third of the Android devices did. (This assumes Android only randomizes the final 3 bytes of the address, and that Apple randomizes all 6 bytes -- my assumption may be wrong.)
A table of the major results is below. A little explanation:

  • The first item in the table is the number of phones that randomized the full 6 bytes of the MAC address. I'm guessing these are either mostly or all Apple iOS devices. They are nearly half of the total, or 4498 out of 9095 unique probes.
  • The second number is those that randomized the final 3 bytes of the MAC address, but left the first three bytes identifying themselves as Android devices. I'm guessing this represents all the Android devices that randomize. My guesses may be wrong, maybe some Androids randomize the full 6 bytes, which would get them counted in the first number.
  • The following numbers are phones from major Android manufacturers like Motorola, LG, HTC, Huawei, OnePlus, ZTE. Remember: the first 3 bytes of an un-randomized address identifies who made it. There are roughly 2500 of these devices.
  • There is a count for 309 Apple devices. These are either older iOS devices pre iOS 8, or which have turned off the feature (some corporations demand this), or which are actually MacBooks instead of phones.
  • The vendor of the access-points that Marriott uses is "Ruckus". They have a lot of access-points in the hotel.
  • The "TCT mobile" entry is actually BlackBerry. Apparently, BlackBerry stopped making phones and instead just licenses the software/brand to other hardware makers. If you buy a BlackBerry from the phone store, it's likely going to be a TCT phone instead.
  • I'm assuming the "Amazon" devices are Kindle e-readers.
  • Lastly, I'd like to point out the two records for "Ford". I was capturing while walking out of the building, so I think I got a few cars driving by.



(random)  4498
(Android)  1562
Samsung  646
Motorola  579
Murata  505
LG  412
Apple  309
HTC-phone  226
Huawei  66
Ruckus  60
OnePlus Tec  40
ZTE  23
TCT mobile  20
Amazon Tech  19
Nintendo  17
Intel  14
Microsoft  9
-hp-  8
BLU Product  8
Kyocera  8
AsusTek  6
Yulong Comp  6
Lite-On  4
Sony Mobile  4
Z-COM, INC.  4
ARRIS Group  2
AzureWave  2
Barnes&Nobl  2
Canon  2
Ford Motor  2
Foxconn  2
Google, Inc  2
Motorola (W  2
Sonos, Inc.  2
SparkLAN Co  2
Wi2Wi, Inc  2
Xiaomi Comm  2
Alps Electr  1
Askey  1
BlackBerry  1
Chi Mei Com  1
Clover Netw  1
CNet Techno  1
eSSys Co.,L  1
GoPro  1
InPro Comm  1
JJPlus Corp  1
Private  1
Quanta  1
Raspberry P  1
Roku, Inc.  1
Sonim Techn  1
Texas Instr  1
TP-LINK TEC  1
Vizio, Inc  1






by Robert Graham (noreply@blogger.com) at September 05, 2017 03:06 AM

September 01, 2017

Anton Chuvakin - Security Warrior

Monthly Blog Round-Up – August 2017

Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month:
  1. “Why No Open Source SIEM, EVER?” contains some of my SIEM thinking from 2009. Is it relevant now? You be the judge.  Succeeding with SIEM requires a lot of work, whether you paid for the software, or not. BTW, this post has an amazing “staying power” that is hard to explain – I suspect it has to do with people wanting “free stuff” and googling for “open source SIEM” …
  2. “Simple Log Review Checklist Released!” is often at the top of this list – this aging checklist is still a very useful tool for many people. “On Free Log Management Tools” (also aged a bit by now) is a companion to the checklist (updated version)
  3. “New SIEM Whitepaper on Use Cases In-Depth OUT!” (dated 2010) presents a whitepaper on select SIEM use cases described in depth with rules and reports [using now-defunct SIEM product]; also see this SIEM use case in depth and this for a more current list of popular SIEM use cases. Finally, see our 2016 research on developing security monitoring use cases here!
  4. Again, my classic PCI DSS Log Review series is extra popular! The series of 18 posts covers a comprehensive log review approach (OK for PCI DSS 3+ even though it predates it), useful for building log review processes and procedures, whether regulatory or not. It is also described in more detail in our Log Management book and mentioned in our PCI book (now in its 4th edition!) – note that this series is mentioned in some PCI Council materials.
  5. “SIEM Resourcing or How Much the Friggin’ Thing Would REALLY Cost Me?” is a quick framework for assessing the SIEM project (well, a program, really) costs at an organization (a lot more details on this here in this paper).
    In addition, I’d like to draw your attention to a few recent posts from my Gartner blog [which, BTW, now has more than 5X of the traffic of this blog]: 

    Current research on SIEM:
    Miscellaneous fun posts:

    (see all my published Gartner research here)
    Also see my past monthly and annual “Top Popular Blog Posts” – 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016.

    Disclaimer: most content at SecurityWarrior blog was written before I joined Gartner on August 1, 2011 and is solely my personal view at the time of writing. For my current security blogging, go here.

    Previous post in this endless series:

    by Anton Chuvakin (anton@chuvakin.org) at September 01, 2017 03:01 PM

    August 28, 2017

    Sean's IT Blog

    #VMworld EUC Showcase Keynote – #EDW7002KU Live Blog

    Good afternoon from Las Vegas.  The EUC Showcase keynote is about to start.  During this session, VMware EUC CTO Shawn Bass and GM of End User Computing Sumit Dhawan will showcase the latest advancements in VMware’s EUC technology portfolio.  So sit tight as we bring you the latest news here shortly.

    3:30 PM – Session is about to begin.  They’re reading off the disclaimers.

    3:30 PM – Follow hashtag #EUCShowcase on Twitter for real-time updates from EUC Champions like @vhojan and @youngtech

    3:31 PM – The intro video covers some of the next generation technologies like AI and Machine Learning, and how people are the power behind this power.  EUC is fundamentally about people and using technology to improve how people get work done.

    3:34 PM – “Most of you are here to learn how to redefine work.” Sumit Dhawan

    3:38 PM – Marginal costs of endpoint management will continue to increase due to the proliferation of devices and applications.  IoT will only make this worse.

    3:39 PM – VMware is leveraging public APIs to build a platform to manage devices and applications.  The APIs provide a context of the device along with the identity that allow the device to receive the proper level of security and management.  Workspace One combines identity and context seamlessly to deliver this experience to mobile devices.

    3:42 PM – There is a tug of war between the needs of the business, such as security and application management, and the needs of the end user, such as privacy and personal data management.  VMware is using the Workspace One platform to deliver a balance between the needs of the employer and the needs of the end user without increasing marginal costs of management.

    3:45 PM – Shawn Bass is now onstage.  He’s going to be showing a lot of demos.  Demos will include endpoint management of Windows 10, MacOS, and ChromeBook, BYO, and delivering Windows as a Service.

    3:47 PM – Legacy Windows management is complex.  Imaging has a number of challenges, and delivering legacy applications presents even more complex challenges.  Workspace One can provide the same experience for delivering applications to Windows 10 as users get with mobile devices.  The process allows users to self-enroll their devices by just entering their email and joining it to an Airwatch-integrated Azure AD.

    Application delivery is simplified and performance is improved by using Adaptiva.  This removes the need for local distribution points.  Integration with Workspace One also allows users to self-service enroll in applications without having to open a ticket with IT or manually install software.

    3:54 PM – MacOS support is enabled in Workspace One.  The user experience is similar to what users experience on Windows 10 devices and mobile devices – both for enrollment and application experience.  A new Workspace One app experience is being delivered for MacOS.

    3:57 PM – Chromebook integration can be configured out of the box and have devices joined to the Workspace One environment.  It also supports the Android Google Play store integration and allows users to get a curated app-store experience.

    3:59 PM – The core message of Workspace One is that one solution can manage mobile devices, tablets, and desktop machines, removing the need for point solutions and management silos.

    4:01 PM – Capital One and DXC are on stage to talk about their experience around digital workspace.  The key message is that the workplace is changing from one where everyone is an employee to a gig economy where employees are temporary and come and go.  Bring-Your-Own helps solve this challenge, but it raises new challenges around security and access.

    Capital One sees major benefits of using Workspace One to manage Windows 10.  Key features include the ability to apply an MDM framework to manage devices and removing the need for application deployment and imaging.

    4:10 PM – The discussion has now moved into BYO and privacy.

    4:11 PM – And that’s it for me folks.  I need to jet.


    by seanpmassey at August 28, 2017 10:27 PM

    HolisticInfoSec.org

    Toolsmith Release Advisory: Magic Unicorn v2.8

    David Kennedy and the TrustedSec crew have released Magic Unicorn v2.8.
    Magic Unicorn is "a simple tool for using a PowerShell downgrade attack and inject shellcode straight into memory, based on Matthew Graeber's PowerShell attacks and the PowerShell bypass technique presented by Dave and Josh Kelly at Defcon 18."

    Version 2.8:
    • shortens length and obfuscation of unicorn command
    • removes direct -ec from PowerShell command
    Usage:
    "Usage is simple, just run Magic Unicorn (ensure Metasploit is installed and in the right path) and Magic Unicorn will automatically generate a PowerShell command that you need to simply cut and paste the PowerShell code into a command line window or through a payload delivery system."


    by Russ McRee (noreply@blogger.com) at August 28, 2017 05:59 PM

    OpenSSL

    OpenSSL Goes to China

    Over the past few years we’ve come to the realisation that there is a surprising (to us) amount of interest in OpenSSL in China. That shouldn’t have been a surprise as China is a huge technologically advanced country, but now we know better thanks to correspondence with many new Chinese contacts and the receipt of significant support from multiple Chinese donors (most notably from Smartisan).

    We have accepted an invitation from BaishanCloud to visit China in person and meet with interested OpenSSL users and stakeholders in September. We’d like to thank BaishanCloud for hosting us and Paul Yang and his colleagues there for the substantial amount of work that went into arranging this trip.

    Five of us (Matt Caswell, Tim Hudson, Richard Levitte, Steve Marquess and Rich Salz) will be in China from 18 September through 24 September, visiting Shanghai, Shenzhen, and Beijing. With this trip we hope to learn more about this significant portion of the open source and OpenSSL user communities, and hope to make OpenSSL more visible and accessible to that audience. Note that while not quite constituting a OpenSSL team meeting, this will be only the third time any significant number of the OpenSSL team have met in person.

    We will be presenting on various aspects of OpenSSL on 23 September 2017 in Beijing. An introduction to the event and a registration link are available in Chinese.

    We will also be visiting Shanghai and Shenzhen earlier that week to meet with members of the open source community and OpenSSL users and stakeholders. If you can’t make it to the presentation above it may be possible to arrange to meet up with you in one of the above locations. Please drop us a line if you are interested in meeting with us.

    August 28, 2017 01:00 PM

    August 27, 2017

    That grumpy BSD guy

    Twenty-plus years on, SMTP callbacks are still pointless and need to die

    A rarely used legacy misfeature of the main Internet email protocol creeps back from irrelevance as a minor annoyance. You should ask your mail and antispam provider about their approach to 'SMTP callbacks'. Be wary of any assertion that is not backed by evidence.

    Even if you are an IT professional and run an email system, you could be forgiven for not being immediately aware that there is such a thing as SMTP callbacks, also referred to as callback verification. As you will see from the Wikipedia article, the feature was never widely adopted, and for all too understandable reasons.

    If you do run a mail system, you have probably heard about that feature's predecessor, the still-required but rarely used SMTP VRFY and EXPN commands. Those commands offer a way to verify whether an address is valid and to show the component addresses that a mailing list resolves to, respectively.

    Back when all things inter-networking were considered experimental and it was generally thought that information should flow freely in and between those experimental networks, it was quite common for mail servers to offer VRFY and EXPN service to all comers.

    I'm old enough to remember using VRFY by hand, telnet-ing to port 25 on a mail server and running VRFY $user@$domain.$tld commands to check whether an email address was indeed valid. I've forgotten which domains and persons were involved, but I imagine the reason why was that I wanted to contact somebody who had said something interesting in a post to a USENET news group.

    But networkers trying to make contact with each other were not the only ones who discovered the VRFY and EXPN commands.  Soon spammers were using those commands to actively harvest actually! valid! deliverable! addresses, and by 1999 the RFC2505 best practices document recommended disabling the features altogether. After all, there would usually be some other way available to find somebody's email address (there was even a FAQ, a longish Frequently Asked Questions document with apparent USENET origins written and maintained on the subject, a copy of which can be found here).

    In roughly the same time frame, somebody came up with the idea of SMTP callbacks. The idea was that all domains on the Internet need to publish the address of their mail exchangers via DNS MX (mail exchanger) records. The logical next step is then that when a piece of mail arrives over SMTP, the receiving end should be able to contact the sender domain's known mail exchanger to check that the sender address is indeed valid. If you by now hear the echoes of VRFY and EXPN, you're right. There are indications that some early implementations did in fact use VRFY for that purpose.

    But then the world changed, and you could not rely on VRFY being available in the post-RFC2505 world.

    In the post-RFC2505 world, the other side would most likely not offer up any useful information in response to VRFY commands, and you would most likely be limited to the short interchange that the Wikipedia entry quotes,
    HELO <verifier host name>
    MAIL FROM:<>
    RCPT TO:<the address to be tested>
    QUIT
    which a perceptive reader would identify as only verifying in a very limited sense that the domain's mail exchanger was indeed equipped with a functional SMTP service.
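
    If you want to see just how little that dialog proves, you can replay it by hand. A rough sketch against a placeholder mail exchanger (piping everything at once ignores the server's replies, so a strict server may object, but it illustrates the point):

      printf 'HELO verifier.example.org\r\nMAIL FROM:<>\r\nRCPT TO:<someone@example.com>\r\nQUIT\r\n' |
        nc mx.example.net 25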

    It is worth noting, as many have over the years, that the MX records only specify where a domain expects to receive mail, not where valid mail from the domain is supposed to originate. Several mechanisms to help identify valid mail senders for a domain have been devised in the intervening years, but none existed at the time SMTP callbacks were considered even remotely useful. 

    For reasons that are not entirely clear, some developers kept working on SMTP callback code and several mail server implementations available today (2017) still contain code that looks like it was intended to support information-rich callbacks, if the system was configured to enable the feature at all. The default configurations in general do not enable the SMTP callback feature, and mail admins rarely bother to even learn much about the largely disused and (in my opinion at least) not too well thought out feature.

    This all happened back in the 1990s, but quite recently an incident occurred that indicates that in some pockets of the Internet, SMTP callbacks are still in use, and in at least some cases data from the callbacks are used for generating blacklists and block mail delivery. The last part should raise a few eyebrows at least.

    Jumping forward from the distant 1990s to the present day, regular readers of this column will be aware that bsdly.net and cooperating domains run SMTP service with OpenBSD spamd(8) doing greylisting service, and that the spamd(8) setup produces a greytrapping-based blacklist which is available for download, dumped to a file (available here and here) once per hour.

    Maintaining the mail system and the blacklist also involves keeping an eye on mail-related activities, and invalid addresses in our domains that turn up in the greylist are usually added to the list of spamtrap addresses within not too many hours after they first appear. The process of actually adding spamtrap addresses is a manual one, but based on the output of pathetically simple shell scripts that run as cron jobs.
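
    For the curious, a minimal sketch of the kind of check such a job performs (illustrative only, not the actual scripts in use here): spamdb(8) is OpenBSD's greylist database tool and prints pipe-delimited GREY/WHITE/TRAPPED records, so listing envelope addresses in our domains that have shown up in the greylist can be as simple as

      spamdb | grep '^GREY' | tr '|' '\n' |
        grep -E '@(bsdly\.com|bsdly\.net)>?$' | sort -u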

    The list of spamtraps has grown over the years to more than 38 000 entries. Most of the entries have local parts that are pure generated gibberish, some entries are probably degraded versions of earlier spamtrap addresses and some again seem to conform with specific patterns, including but not limited to SMTP or NNTP message IDs.

    On August 19th and 20th 2017 I noticed a different, but yet familiar pattern in some of the new entries.

    The entry that caught my eye had the MAIL FROM: part as

    mx42.antispamcloud.com-1503146097-testing@bsdly.com

    The local part pattern was somewhat familiar, and breaks down to

        $localhostname-$epochtime-testing

    with @targetdomain.$tld (in our case, bsdly.com) appended. I had at this point totally forgotten about SMTP callbacks, but I decided to check the logs for any traces of activity involving that host. The only trace I could find in the logs was at the spamd-serving firewall in front of the bsdly.com domain's secondary mail exchanger:

    Aug 19 14:35:27 delilah spamd[26915]: 207.244.64.181: connected (25/24)
    Aug 19 14:35:38 delilah spamd[26915]: (GREY) 207.244.64.181: <> -> <mx42.antispamcloud.com-1503146097-testing@bsdly.com>
    Aug 19 14:35:38 delilah spamd[15291]: new entry 207.244.64.181 from <> to <mx42.antispamcloud.com-1503146097-testing@bsdly.com>, helo mx18-12.smtp.antispamcloud.com
    Aug 19 14:35:38 delilah spamd[26915]: 207.244.64.181: disconnected after 11 seconds.

    Essentially a normal first contact: spamd at our end answers slowly, one byte per second, but the greylist entry is created in the expectation that any caller with a valid message to deliver will try again within a reasonable time. The spamd synchronization between the hosts in our group of greylisting hosts would see to it that an entry matching this sequence appeared in the greylist on all participating hosts.

    But the retry never happened, and even if it had, that particular local-part would anyway have produced an "Unknown user" bounce. But at that point I decided to do a bit of investigation and dug out what seemed to be a reasonable point of contact for the antispamcloud.com domain and sent an email with a question about the activity.

    That message bounced, with the following explanation in the bounce message body:

      DOMAINS@ANTISPAMCLOUD.COM
        host filter10.antispamcloud.com [31.204.155.103]
        SMTP error from remote mail server after end of data:
        550 The sending IP (213.187.179.198) is listed on https://spamrl.com as a source of dictionary attacks.

    As you have probably guessed, 213.187.179.198 is the IPv4 address of the primary mail exchanger for bsdly.net, bsdly.com and a few other domains under my care.

    If you go to the URL quoted in the bounce, you will notice that the only point of contact is via an email address in an unrelated domain.

    I did fire off a message to that address from an alternate site, but before the answer to that one arrived, I had managed to contact another of their customers and got confirmation that they were indeed running with an exim setup that used SMTP callbacks.

    The spamrl.com web site states clearly that they will not supply any evidence in support of their decision to blacklist. Somebody claiming to represent spamrl.com did respond to my message, but as could be expected from their published policy was not willing to supply any evidence to support the claim stated in the bounce.

    In my last message to spamrl.com before starting to write this piece, I advised

    I remain unconvinced that the description of that problem is accurate, but investigation at this end can not proceed without at least some supporting evidence such as times of incidents, addresses or even networks affected.
    If there is a problem at this end, it will be fixed. But that will not happen as a result of handwaving and insults. Actual evidence to support further investigation is needed.
    Until verifiable evidence of some sort materializes, I will assume that your end is misinterpreting normal greylisting behavior or acting on unfounded or low-quality reports from less than competent sources.

    The domain bsdly.com was one I registered some years back mainly to fend off somebody who offered to help the owner of the bsdly.net domain acquire the very similar bsdly.com domain at the price of a mere few hundred dollars.

    My response was to spend something like ten dollars (or was it twenty?) to register the domain via my regular registrar. I may even have sent back a reply about trying to sell me what I already owned, but I have not bothered to dig that far back into my archives.

    The domain does receive mail, but is otherwise not actively used. However, as the list of spamtraps can attest (the full list does not display in a regular browser, since some of the traps are interpreted as html tags, if you want to see it all, fetch the text file instead), others have at times tried to pass off something or other with from addresses in that domain.

    But with the knowledge that this outfit's customers are believers in SMTP callbacks as a way to identify spam, here is my hypothesis on what actually happened:

    On August 19th 2017, my greylist scanner identified the following new entries referencing the bsdly.com domain:
    anecowuutp@bsdly.com
    pkgreewaa@bsdly.com
    eemioiyv@bsdly.com
    keerheior@bsdly.com
    mx42.antispamcloud.com-1503146097-testing@bsdly.com
    vbehmonmin@bsdly.com
    euiosvob@bsdly.com
    otjllo@bsdly.com
    akuolsymwt@bsdly.com

    I'll go out on a limb and guess that mx42.antispamcloud.com was contacted by any of the roughly 5000 hosts blacklisted at bsdly.net at the time, with an attempt to deliver a message with a MAIL FROM: of either anecowuutp@bsdly.com, pkgreewaa@bsdly.com, eemioiyv@bsdly.com or perhaps most likely keerheior@bsdly.com, which appears as a bounce-to address in the same hourly greylist dump where mx42.antispamcloud.com-1503146097-testing@bsdly.com first appears as a To: address.

    The first seen time in epoch notation for keerheior@bsdly.com is
    1503143365
    which translates via date -r to
    Sat Aug 19 13:49:25 CEST 2017
    while mx42.antispamcloud.com-1503146097-testing@bsdly.com is first seen here at epoch 1503146138, which translates to Sat Aug 19 14:35:38 CEST 2017.
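
    For reference, pulling the epoch out of such an address and converting it can be done like this (date -r is the BSD spelling; GNU date would want -d @"$epoch"):

      addr='mx42.antispamcloud.com-1503146097-testing@bsdly.com'
      epoch=$(printf '%s\n' "$addr" | sed -E 's/.*-([0-9]+)-testing@.*/\1/')
      date -r "$epoch"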

    The data indicate that this initial (and only) attempt to contact was aimed at the bsdly.com domain's secondary mail exchanger, and was intercepted by the greylisting spamd that sits in the incoming signal path to there. The other epoch-tagged callbacks follow the same pattern, as can be seen from the data preserved here.

    Whatever action or address triggered the callback, the callback appears to have followed the familiar script:
    1. register attempt to deliver mail
    2. look up the domain stated in the MAIL FROM: or perhaps even the HELO or EHLO
    3. contact the domain's mail exchangers with the rump SMTP dialog quoted earlier
    4. with no confirmation or otherwise of anything other than the fact that the domain's mail exchangers do listen on the expected port, proceed to whatever the next step is.
    The known facts at this point are:
    1. a mail system that is set up for SMTP callbacks received a request to deliver mail from keerheior@bsdly.com
    2. the primary mail exchanger for bsdly.com has the IPv4 address 213.187.179.198
    Both of these are likely true. The second we know for sure, and the first is quite likely. What is missing here is any consideration of where the request to deliver came from.

    From the data we have here, we do not have any indication of what host contacted the system that initiated the callback. In a modern configuration, it is reasonable to expect that a receiving system checks for sender validity via any SPF, DKIM or DMARC records available, or for that matter, greylist and wait for the next attempt (in fact, greylisting before performing any other checks - as an OpenBSD spamd(8) setup would do by default - is likely to be the least resource intensive approach).

    We have no indication that the system performing the SMTP callout used any such mechanism to find an indication as to whether the communication partner was in fact in any way connected to the domain it was trying to deliver mail for.

    My hypothesis is that whatever code is running on the SMTP callback adherents' systems does not check the actual sending IP address, but assumes that any message claiming to be from a domain must in fact involve the primary mail exchanger of that domain and since the code likely predates the SPF, DKIM and DMARC specifications by at least a decade, it will not even try to check those types of information. Given the context it is a little odd but perhaps within reason that in all cases we see here, the callback is attempted not to the domain's primary mail exchanger, but the secondary. 

    With somebody or perhaps even several somebodies generating nonsense addresses in the bsdly.com domain at an appreciable rate (see the record of new spamtraps starting May 20th, 2017, alternate location here) and trying to deliver using those fake From: addresses to somewhere doing SMTP callback, it's not much of a stretch to assume that the code was naive enough to conclude that the purported sender domain's primary mail exchanger was indeed performing a dictionary attack.

    The most useful lesson to take home from this sorry affair is likely to be that you need to kill SMTP callback setups in any system where you may find them. In today's environment, SMTP callbacks do not in fact provide useful information that is not available from other public sources, and naive use of results from those lookups is likely to harm unsuspecting third parties.

    So,
    • If you are involved in selling or operating a system that behaves like the one described here and are in fact generating blacklists based on those very naive assumptions, you need to stop doing so right away.

      Your mistaken assumptions help produce bad data which could lead to hard to debug problems for innocent third parties.

      Or as we say in the trade, you are part of the problem.
    • If you are operating a system that does SMTP callbacks but doesn't do much else, you are part of a small problem and likely only inconveniencing yourself and your users.

      The fossil record (aka the accumulated collection of spamtraps at bsdly.net) indicates that the callback variant that includes epoch times is rare enough (approximately 100 unique hosts registered over a decade) that callback activity in total volume probably does not rise above the level of random background noise.

      There may of course be callback variants that have other characteristics, and if you see a way to identify those from the data we have, I would very much like to hear from you.
    • If you are a customer of somebody selling antispam products, you have reason to demand an answer to just how, if at all, your antispam supplier utilizes SMTP callbacks. If they think it's a fine and current feature, you have probably been buying snake oil for years.
    • If you are the developer or maintainer of mail server code that contains the SMTP callbacks feature, please remove the code. Leaving it disabled by default is not sufficient. Removing the code is the only way to make sure the misfeature will never again be a source of confusing problems.
    For some hints on what I consider a reasonable and necessary level of transparency in blacklist maintenance, please see my April 2013 piece Maintaining A Publicly Available Blacklist - Mechanisms And Principles.

    The data this article is based on still exists and will be available for further study to requestors with reasonable justification for their request. I welcome comments in the comment field or via email (do factor in any possible greylist delay, though).

    Any corrections or updates that I find necessary based on your responses will be appended to the article.



    Update 2017-09-05: Since the article was originally published, we've seen a handful of further SMTP callback incidents. The last few we've handled by sending the following to the addresses that could be gleaned from whois on the domain name and source IP address (with mx.nxdomain.nx and 192.0.2.74 inserted as placeholders here to protect the ignorant):

    Hi,

    I see from my greylist dumps that the host identifying as 

    mx.nxdomain.nx, IP address 192.0.2.74

    is performing what looks like SMTP callbacks, with the (non-existent of course) address

    mx.nxdomain.nx-1504629949-testing@bsdly.com

    as the RCPT TO: address.

    It is likely that this activity has been triggered by spam campaigns using made up addresses in one of our little-used domains as from: addresses.

    A series of recent incidents here following the same pattern are summarized in the article

    http://bsdly.blogspot.com/2017/08/twenty-plus-years-on-smtp-callbacks-are.html

    Briefly, the callbacks do not work as you expect. Please read the article and then disable that misfeature. Otherwise you will be complicit in generating false positives for your SMTP blacklist.

    If you have any questions or concerns, please let me know.

    Yours sincerely,
    Peter N. M. Hansteen

    If you've received a similarly-worded notice recently, you know why and may be closer to having a sanely run mail service.


    by Peter N. M. Hansteen (noreply@blogger.com) at August 27, 2017 12:44 PM

    August 25, 2017

    Steve Kemp's Blog

    Interesting times debugging puppet

    I recently upgraded a bunch of systems from Jessie to Stretch, and as a result of that one of my hosts has started showing me a lot of noise in an hourly cron-email:

    Command line is not complete. Try option "help"
    

    I've been ignoring these emails for the past while, but today I sat down to track down the source. It was obviously coming from facter, the system that puppet uses to gather information about hosts.

    Running facter --debug made that apparent:

     root@smaug ~ # facter --debug
     Found no suitable resolves of 1 for ec2_metadata
     value for ec2_metadata is still nil
     value for netmask_git is still nil
     value for ipaddress6_lo is still nil
     value for macaddress_lo is still nil
     value for ipaddress_master is still nil
     value for ipaddress6_master is still nil
     Command line is not complete. Try option "help"
     value for netmask_master is still nil
     value for ipaddress_skx_mail is still nil
     ..
    

    There we see the issue, and it is obviously relating to our master interface.

    To cut a long story short, /usr/lib/ruby/vendor_ruby/facter/util/ip.rb contains some code which eventually runs this:

     ip link show $interface
    

    That works on all other interfaces I have:

      $ ip link show git
      6: git: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 1000
    

    But not on master:

      $ ip link show master
      Command line is not complete. Try option "help"
    

    I ninja-edited the code from this:

      ethbond = regex.match(%x{/sbin/ip link show '#{interface}'})
    

    to:

      ethbond = regex.match(%x{/sbin/ip link show dev '#{interface}'})
    

    And suddenly puppet runs without any errors. I'm not 100% sure if this is a bug bug, but it is something of a surprise anyway.

    This host runs KVM guests; one of the guests is a puppet-master with the local name master, hence the name of the interface. Similarly, the interface git is associated with the KVM guest behind git.steve.org.uk. The catch is that master is also a keyword in the ip command's syntax (it filters interfaces by their master device), so ip link show master is parsed as an incomplete filter expression rather than an interface name; the explicit dev keyword removes the ambiguity.

    August 25, 2017 09:00 PM

    August 23, 2017

    Errata Security

    ROI is not a cybersecurity concept

    In the cybersecurity community, much time is spent trying to speak the language of business, in order to communicate to business leaders our problems. One way we do this is trying to adapt the concept of "return on investment" or "ROI" to explain why they need to spend more money. Stop doing this. It's nonsense. ROI is a concept pushed by vendors in order to justify why you should pay money for their snake oil security products. Don't play the vendor's game.

    The correct concept is simply "risk analysis". Here's how it works.

    List out all the risks. For each risk, calculate:

    • How often it occurs.
    • How much damage it does.
    • How to mitigate it.
    • How effective the mitigation is (reduces chance and/or cost).
    • How much the mitigation costs.

    If you have risk of something that'll happen once-per-day on average, costing $1000 each time, then a mitigation costing $500/day that reduces likelihood to once-per-week is a clear win for investment.
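
    As a back-of-the-envelope illustration of that arithmetic, using only the hypothetical numbers above:

    # Expected daily cost = frequency per day * damage, plus what the mitigation costs.
    def expected_daily_cost(frequency_per_day, damage, mitigation_cost=0.0):
        return frequency_per_day * damage + mitigation_cost

    unmitigated = expected_daily_cost(1.0, 1000)            # $1000/day expected loss
    mitigated   = expected_daily_cost(1 / 7.0, 1000, 500)   # about $642.86/day all-in

    print("daily saving: $%.2f" % (unmitigated - mitigated))  # about $357/day: a clear win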

    Now, ROI should in theory fit directly into this model. If you are paying $500/day to reduce that risk, I could use ROI to show you hypothetical products that will ...

    • ...reduce the remaining risk to once-per-month for an additional $10/day.
    • ...replace that $500/day mitigation with a $400/day mitigation.

    But this is never done. Companies don't have a sophisticated enough risk matrix in order to plug in some ROI numbers to reduce cost/risk. Instead, ROI is a calculation done standalone by a vendor pimping product, or by a security engineer building empires within the company.

    If you haven't done risk analysis to begin with (and almost none of you have), then ROI calculations are pointless.


    But there are further problems. This is risk analysis as done in industries like oil and gas, which have inanimate risk. Almost all their risks are due to accidental failures, like in the Deepwater Horizon incident. In our industry, cybersecurity, risks are animate -- driven by hackers. Our risk models are based on trying to guess what hackers might do.

    An example of this problem is when our drug company jacks up the price of an HIV drug, Anonymous hackers break in and dump all our financial data, and our CFO goes to jail. A lot of our risks come not from the technical side, but from the whims and fads of the hacker community.

    Another example is when some Google researcher finds a vuln in WordPress, and our website gets hacked with it three months from now. We have to forecast not only what hackers can do now, but what they might be able to do in the future.

    Finally, there is this problem with cybersecurity that we really can't distinguish between pesky and existential threats. Take ransomware. A lot of large organizations have just gotten accustomed to just wiping a few worker's machines every day and restoring from backups. It's a small, pesky problem of little consequence. Then one day a ransomware gets domain admin privileges and takes down the entire business for several weeks, as happened after #nPetya. Inevitably our risk models always come down on the high side of estimates, with us claiming that all threats are existential, when in fact, most companies continue to survive major breaches.

    These difficulties with risk analysis lead us to punt on the problem altogether, but that's not the right answer. No matter how faulty our risk analysis is, we still have to go through the exercise.

    One model of how to do this calculation is architecture. We know we need a certain number of toilets per building, even without doing ROI on the value of such toilets. The same is true for a lot of security engineering. We know we need firewalls, encryption, and OWASP hardening, even without specifically doing a calculation. Passwords and session cookies need to go across SSL. That's the starting point from which we start to analyze risks and mitigations -- what we need beyond SSL, for example.

    So stop using "ROI", or worse, the abomination "ROSI". Start doing risk analysis.

    by Robert Graham (noreply@blogger.com) at August 23, 2017 02:48 AM

    August 22, 2017

    Michael Biven

    How I Minimize Distractions

    Being clear on what’s important. Being honest with myself about my own habits. Protecting how and where my attention is being pulled. Managing my own calendar.

    I know I’m going to want to work on personal stuff, surf a bit, and do some reading. Knowing that I’m a morning person I start my day off before work on something for myself. Usually it falls under reading, writing, or coding.

    After that I usually scan through my feed reader and read anything that catches my eye, surf a few regular sites and check my personal email.

    When I do start work the first thing I do is about one hour of reactive work. All of those notifications and requests needing my attention get worked on. Right now this usually looks like catching up on email, Slack, Github, and Trello.

    I then try to have a few chunks of time blocked off in my calendar throughout the week as “Focused Work”. This creates space to focus on what I need to work on while leaving time to be available for anyone else.

    The key has been managing my own calendar to allow time for my attention to be directed on the things I want, the things I need, and the needs of others.

    I do keep a running text file where I write out the important or urgent things that need to be done for the day. I’ll also add notes when something interrupts me. When I used to write this out on paper I used this sheet from Dave Seah called The Emergent Task Timer. I found it helped to write out what needs to be done each day and to track what things are pulling my attention away from more important things.

    Because the type of work that I do can be interrupt driven I orientate my team in a similar fashion. By creating that same uninterrupted time for everyone it allows us to minimize the impact of distractions. During the week there are blocks of time where one person is the interruptible person while everyone else snoozes notifications in Slack, ignores email and gets some work done.

    This also means focusing on minimizing the impact of alert / notification fatigue. Have zero tolerance for things that page you repeatedly or add unnecessary notifications to your day.

    The key really is just those four things I listed at the beginning.

    You have to be clear in what is important both for yourself and for those you’re accountable to.

    You have to be honest with yourself about your own habits. If you keep telling yourself you’re going to do something, but you never do… well, there’s probably something else at play. Maybe you’re more in love with the idea than with actually doing it.

    You need to protect your attention from being pulled away for things that are not important or urgent.

    You can work towards those three points by managing your own calendar. Besides tracking the passage of days, a calendar schedules where your attention will be focused. If you don’t manage it, others will manage it for you.

    I view these as ideals that I try to aim for. The circumstances I’m in might not always allow me to do so, but they do give me direction on how I work each day.

    And yeah when I’m in the office I use headphones.

    August 22, 2017 07:54 AM

    August 20, 2017

    Vincent Bernat

    IPv6 route lookup on Linux

    TL;DR: With its implementation of IPv6 routing tables using radix trees, Linux offers subpar performance (450 ns for a full view — 40,000 routes) compared to IPv4 (50 ns for a full view — 500,000 routes) but fair memory usage (20 MiB for a full view).


    In a previous article, we had a look at IPv4 route lookup on Linux. Let’s see how different IPv6 is.

    Lookup trie implementation

    Looking up a prefix in a routing table comes down to finding the most specific entry matching the requested destination. A common structure for this task is the trie, a tree structure where each node has its parent as prefix.

    With IPv4, Linux uses a level-compressed trie (or LPC-trie), providing good performance with low memory usage. For IPv6, Linux uses a more classic radix tree (or Patricia trie). There are three reasons for not sharing:

    • The IPv6 implementation (introduced in Linux 2.1.8, 1996) predates the IPv4 implementation based on LPC-tries (in Linux 2.6.13, commit 19baf839ff4a).
    • The feature set is different. Notably, IPv6 supports source-specific routing1 (since Linux 2.1.120, 1998).
    • The IPv4 address space is denser than the IPv6 address space. Level-compression is therefore quite efficient with IPv4. This may not be the case with IPv6.

    The trie in the below illustration encodes 6 prefixes:

    Radix tree

    For a more in-depth explanation of the different ways to encode a routing table into a trie, and for a better understanding of radix trees, see the explanations for IPv4.

    The following figure shows the in-memory representation of the previous radix tree. Each node corresponds to a struct fib6_node. When a node has the RTN_RTINFO flag set, it embeds a pointer to a struct rt6_info containing information about the next-hop.

    Memory representation of a routing table

    The fib6_lookup_1() function walks the radix tree in two steps:

    1. walking down the tree to locate the potential candidate, and
    2. checking the candidate and, if needed, backtracking until a match.

    Here is a slightly simplified version without source-specific routing:

    static struct fib6_node *fib6_lookup_1(struct fib6_node *root,
                                           struct in6_addr  *addr)
    {
        struct fib6_node *fn;
        __be32 dir;
    
        /* Step 1: locate potential candidate */
        fn = root;
        for (;;) {
            struct fib6_node *next;
            dir = addr_bit_set(addr, fn->fn_bit);
            next = dir ? fn->right : fn->left;
            if (next) {
                fn = next;
                continue;
            }
            break;
        }
    
        /* Step 2: check prefix and backtrack if needed */
        while (fn) {
            if (fn->fn_flags & RTN_RTINFO) {
                struct rt6key *key;
                key = &fn->leaf->rt6i_dst;
                if (ipv6_prefix_equal(&key->addr, addr, key->plen)) {
                    if (fn->fn_flags & RTN_RTINFO)
                        return fn;
                }
            }
    
            if (fn->fn_flags & RTN_ROOT)
                break;
            fn = fn->parent;
        }
    
        return NULL;
    }
    

    Caching

    While IPv4 lost its route cache in Linux 3.6 (commit 5e9965c15ba8), IPv6 still has a caching mechanism. However, cache entries are put directly in the radix tree instead of in a distinct structure.

    Since Linux 2.1.30 (1997) and until Linux 4.2 (commit 45e4fd26683c), almost any successful route lookup inserts a cache entry in the radix tree. For example, a router forwarding a ping between 2001:db8:1::1 and 2001:db8:3::1 would get those two cache entries:

    $ ip -6 route show cache
    2001:db8:1::1 dev r2-r1  metric 0
        cache
    2001:db8:3::1 via 2001:db8:2::2 dev r2-r3  metric 0
        cache
    

    These entries are cleaned up by the ip6_dst_gc() function controlled by the following parameters:

    $ sysctl -a | grep -F net.ipv6.route
    net.ipv6.route.gc_elasticity = 9
    net.ipv6.route.gc_interval = 30
    net.ipv6.route.gc_min_interval = 0
    net.ipv6.route.gc_min_interval_ms = 500
    net.ipv6.route.gc_thresh = 1024
    net.ipv6.route.gc_timeout = 60
    net.ipv6.route.max_size = 4096
    net.ipv6.route.mtu_expires = 600
    

    The garbage collector is triggered at most every 500 ms when there are more than 1024 entries or at least every 30 seconds. The garbage collection won’t run for more than 60 seconds, except if there are more than 4096 routes. When running, it will first delete entries older than 30 seconds. If the number of cache entries is still greater than 4096, it will continue to delete more recent entries (but no more recent than 512 jiffies, i.e. 2 raised to the gc_elasticity value) after a 500 ms pause.

    Starting from Linux 4.2 (commit 45e4fd26683c), only a PMTU exception would create a cache entry. A router doesn’t have to handle those exceptions, so only hosts would get cache entries. And they should be pretty rare. Martin KaFai Lau explains:

    Out of all IPv6 RTF_CACHE routes that are created, the percentage that has a different MTU is very small. In one of our end-user facing proxy server, only 1k out of 80k RTF_CACHE routes have a smaller MTU. For our DC traffic, there is no MTU exception.

    Here is what a cache entry with a PMTU exception looks like:

    $ ip -6 route show cache
    2001:db8:1::50 via 2001:db8:1::13 dev out6  metric 0
        cache  expires 573sec mtu 1400 pref medium
    

    Performance

    We consider three distinct scenarios:

    Excerpt of an Internet full view
    In this scenario, Linux acts as an edge router attached to the default-free zone. Currently, the size of such a routing table is a little bit above 40,000 routes.
    /48 prefixes spread linearly with different densities
    Linux acts as a core router inside a datacenter. Each customer or rack gets one or several /48 networks, which need to be routed around. With a density of 1, /48 subnets are contiguous.
    /128 prefixes spread randomly in a fixed /108 subnet
    Linux acts as a leaf router for a /64 subnet with hosts getting their IP using autoconfiguration. It is assumed all hosts share the same OUI and therefore the first 40 bits are fixed. In this scenario, neighbor reachability information for the /64 subnet is converted into routes by some external process and redistributed among other routers sharing the same subnet2.

    Route lookup performance

    With the help of a small kernel module, we can accurately benchmark3 the ip6_route_output_flags() function and correlate the results with the radix tree size:

    Maximum depth and lookup time

    Getting meaningful results is challenging due to the size of the address space. None of the scenarios have a fallback route and we only measure time for successful hits4. For the full view scenario, only the range from 2400::/16 to 2a06::/16 is scanned (it contains more than half of the routes). For the /128 scenario, the whole /108 subnet is scanned. For the /48 scenario, the range from the first /48 to the last one is scanned. For each range, 5000 addresses are picked semi-randomly. This operation is repeated until we get 5000 hits or until 1 million tests have been executed.
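
    For illustration, here is a rough user-space sketch of that sampling strategy (my own, not the benchmark code itself, which drives a kernel module); the range below is the full-view window and has_route stands in for the actual lookup:

    import random
    import ipaddress

    LOW = int(ipaddress.IPv6Address("2400::"))
    HIGH = int(ipaddress.IPv6Address("2a07::")) - 1       # end of 2a06::/16

    def sample_addresses(has_route, wanted=5000, max_tests=1000000):
        """Pick addresses semi-randomly inside the scanned range until we
        collect `wanted` hits or have run `max_tests` lookups."""
        hits, tests = [], 0
        while len(hits) < wanted and tests < max_tests:
            addr = ipaddress.IPv6Address(random.randint(LOW, HIGH))
            tests += 1
            if has_route(addr):        # placeholder for the real route lookup
                hits.append(addr)
        return hits, tests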

    The relation between the maximum depth and the lookup time is incomplete, and I can’t explain the difference in performance between the different densities of the /48 scenario.

    We can extract two important performance points:

    • With a full view, the lookup time is 450 ns. This is almost ten times the budget for forwarding at 10 Gbps — which is about 50 ns.
    • With an almost empty routing table, the lookup time is 150 ns. This is still over the time budget for forwarding at 10 Gbps.

    With IPv4, the lookup time for an almost empty table was 20 ns while the lookup time for a full view (500,000 routes) was a bit above 50 ns. How to explain such a difference? First, the maximum depth of the IPv4 LPC-trie with 500,000 routes was 6, while the maximum depth of the IPv6 radix tree for 40,000 routes is 40.

    Second, while both IPv4’s fib_lookup() and IPv6’s ip6_route_output_flags() functions have a fixed cost implied by the evaluation of routing rules, IPv4 has several optimizations when the rules are left unmodified5. Those optimizations are removed on the first modification. If we cancel those optimizations, the lookup time for IPv4 is impacted by about 30 ns. This still leaves a 100 ns difference with IPv6 to be explained.

    Let’s compare how time is spent in each lookup function. Here is a CPU flamegraph for IPv4’s fib_lookup():

    IPv4 route lookup flamegraph

    Only 50% of the time is spent in the actual route lookup. The remaining time is spent evaluating the routing rules (about 30 ns). This ratio is dependent on the number of routes we inserted (only 1000 in this example). It should be noted the fib_table_lookup() function is executed twice: once with the local routing table and once with the main routing table.

    The equivalent flamegraph for IPv6’s ip6_route_output_flags() is depicted below:

    IPv6 route lookup flamegraph

    Here is an approximate breakdown of the time spent:

    • 50% is spent in the route lookup in the main table,
    • 15% is spent in handling locking (IPv4 is using the more efficient RCU mechanism),
    • 5% is spent in the route lookup of the local table,
    • most of the remaining is spent in routing rule evaluation (about 100 ns)6.

    Why is the evaluation of routing rules less efficient with IPv6? Again, I don’t have a definitive answer.

    History

    The following graph shows the performance progression of route lookups through Linux history:

    IPv6 route lookup performance progression

    All kernels are compiled with GCC 4.9 (from Debian Jessie). This version is able to compile older kernels as well as current ones. The kernel configuration is the default one with CONFIG_SMP, CONFIG_IPV6, CONFIG_IPV6_MULTIPLE_TABLES and CONFIG_IPV6_SUBTREES options enabled. Some other unrelated options are enabled to be able to boot them in a virtual machine and run the benchmark.

    There are three notable performance changes:

    • In Linux 3.1, Eric Dumazet slightly delays the copy of route metrics to fix the undesirable sharing of route-specific metrics by all cache entries (commit 21efcfa0ff27). Each cache entry now gets its own metrics, which explains the performance hit for the non-/128 scenarios.
    • In Linux 3.9, Yoshifuji Hideaki removes the reference to the neighbor entry in struct rt6_info (commit 887c95cc1da5). This should have led to a performance increase. The small regression may be due to cache-related issues.
    • In Linux 4.2, Martin KaFai Lau prevents the creation of cache entries for most route lookups. The most noticeable performance improvement comes with commit 4b32b5ad31a6. The second one is from commit 45e4fd26683c, which effectively removes creation of cache entries, except for PMTU exceptions.

    Insertion performance

    Another interesting performance-related metric is the insertion time. Linux is able to insert a full view in less than two seconds. For some reason, the insertion time is not linear above 50,000 routes and climbs very fast to 60 seconds for 500,000 routes.

    Insertion time

    Despite its more complex insertion logic, the IPv4 subsystem is able to insert 2 million routes in less than 10 seconds.

    Memory usage

    Radix tree nodes (struct fib6_node) and routing information (struct rt6_info) are allocated with the slab allocator7. It is therefore possible to extract the information from /proc/slabinfo when the kernel is booted with the slab_nomerge flag:

    # sed -ne 2p -e '/^ip6_dst/p' -e '/^fib6_nodes/p' /proc/slabinfo | cut -f1 -d:
    ♯  name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>
    fib6_nodes         76101  76104     64   63    1
    ip6_dst_cache      40090  40090    384   10    1
    

    In the above example, the used memory is 76104×64+40090×384 bytes (about 20 MiB). The number of struct rt6_info matches the number of routes while the number of nodes is roughly twice the number of routes:

    Nodes
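
    For illustration, here is a small sketch (mine, not from the article) that redoes this arithmetic directly from /proc/slabinfo, assuming the slab_nomerge boot flag mentioned above so that both caches appear under their own names, and the field layout shown in the header line:

    # Memory used by the IPv6 routing structures: num_objs * objsize for
    # the fib6_nodes and ip6_dst_cache slab caches.
    def ipv6_route_memory(path="/proc/slabinfo"):
        total = 0
        with open(path) as slabinfo:
            for line in slabinfo:
                fields = line.split()
                if fields and fields[0] in ("fib6_nodes", "ip6_dst_cache"):
                    num_objs, objsize = int(fields[2]), int(fields[3])
                    total += num_objs * objsize
        return total

    print("%.1f MiB" % (ipv6_route_memory() / 2.0**20))  # about 20 MiB in the example above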

    The memory usage is therefore quite predictable and reasonable, as even a small single-board computer can support several full views (20 MiB for each):

    Memory usage

    The LPC-trie used for IPv4 is more efficient: where 512 MiB of memory is needed for IPv6 to store 1 million routes, only 128 MiB are needed for IPv4. The difference is mainly due to the size of struct rt6_info (336 bytes) compared to the size of IPv4’s struct fib_alias (48 bytes): IPv4 puts most information about next-hops in struct fib_info structures that are shared between many entries.

    Conclusion

    The takeaways from this article are:

    • upgrade to Linux 4.2 or more recent to avoid excessive caching,
    • route lookups are noticeably slower compared to IPv4 (by an order of magnitude),
    • the CONFIG_IPV6_MULTIPLE_TABLES option incurs a fixed penalty of 100 ns per lookup,
    • memory usage is fair (20 MiB for 40,000 routes).

    Compared to IPv4, IPv6 in Linux doesn’t foster the same interest, notably in terms of optimizations. Hopefully, things are changing as its adoption and use “at scale” are increasing.


    1. For a given destination prefix, it’s possible to attach source-specific prefixes:

      ip -6 route add 2001:db8:1::/64 \
        from 2001:db8:3::/64 \
        via fe80::1 \
        dev eth0
      

      Lookup is first done on the destination address, then on the source address. 

    2. This is quite different from the classic scenario where Linux acts as a gateway for a /64 subnet. In this case, the neighbor subsystem stores the reachability information for each host and the routing table only contains a single /64 prefix. 

    3. The measurements are done in a virtual machine with one vCPU and no neighbors. The host is an Intel Core i5-4670K running at 3.7 GHz during the experiment (CPU governor set to performance). The benchmark is single-threaded. Many lookups are performed and the result reported is the median value. Timings of individual runs are computed from the TSC. 

    4. Most of the packets in the network are expected to be routed to a destination. However, this also means the backtracking code path is not used in the /128 and /48 scenarios. Having a fallback route gives far different results and makes it difficult to ensure we explore the address space correctly. 

    5. The exact same optimizations could be applied for IPv6. Nobody has done it yet. 

    6. Compiling out table support effectively removes those last 100 ns. 

    7. There are also per-CPU pointers allocated directly (4 bytes per entry per CPU on a 64-bit architecture). We ignore this detail. 

    by Vincent Bernat at August 20, 2017 04:53 PM

    August 17, 2017

    Sean's IT Blog

    GRID 5.0–Pascal Support and More

    This morning, NVIDIA announced the latest version of the graphics virtualization stack – NVIDIA GRID 5.0.  This latest release continues the trend that NVIDIA started two years ago when they separated the GRID software stack from the Tesla data center GPUs in the GRID 2.0 release.

    GRID 5.0 adds several new key features to the GRID product line.  Along with these new features, NVIDIA is also adding a new Tesla card and rebranding the Virtual Workstation license SKU.

    Quadro Virtual Data Center Workstation

    Previous versions of GRID contained profiles designed for workstations and high-end professional applications.  These profiles, which ended in a Q, provided Quadro level features for the most demanding applications.  They also required the GRID Virtual Workstation license.

    NVIDIA has decided to rebrand the professional series capabilities of GRID to better align with their professional visualization series of products.  The GRID Virtual Workstation license will now be called the Quadro Virtual Data Center Workstation license.  This change helps differentiate the Virtual PC and Apps features, which are geared towards knowledge users, from the professional series capabilities.

    Tesla P6

    The Tesla P6 is the Pascal-generation successor to the Maxwell-generation M6 GPU.  It provides a GPU purpose-built for blade servers.  In addition to using a Pascal-generation GPU, the P6 also increases the amount of Framebuffer to 16GB.   The P6 can now support up to 16 users per blade, which provides more value to customers who want to adopt GRID for VDI on their blade platform.

    Pascal Support for GRID

    The next generation GRID software adds support for the Pascal-generation Tesla cards.  The new cards that are supported in GRID 5.0 are the Tesla P4, P6, P40, and P100.

    The P40 is the designated successor to the M60 card.  It is a 1U board with a single GPU and 24GB of Framebuffer.  The increased framebuffer also allows for a 50% increase in density, and the P40 can handle up to 24 users per board compared to the 16 users per M60.

    Edit for Clarification – The comparison between the M60 and the P40 was done using the 1GB GRID profiles.  The M60 can support up to 32 users per board when assigning each VM 512MB of framebuffer, but this option is not available in GRID 5.0.  

    On the other end of the scale is the P4. This is a 1U small form factor Pascal GPU with 8GB of Framebuffer.  Unlike other larger Tesla boards, this board can run on 75W, so it doesn’t need any additional power.  This makes it suitable for cloud and rack-dense computing environments.

    In addition to better performance, the Pascal cards have a few key advantages over the previous generation Maxwell cards.  First, there is no need to use the GPU-Mode-Switch utility to convert the Pascal board from compute mode to graphics mode.  There is, however, a manual step that is required to disable ECC memory on the Pascal boards, but this is built into the NVIDIA-SMI utility.  This change streamlines the GRID deployment process for Pascal boards.

    The second advantage involves hardware-level preemption support.  In previous generations of GRID, CUDA support was only available when using the 8Q profile.  This dedicated an entire GPU to a single VM.  Hardware preemption support enables Pascal cards to support CUDA on all profiles.

    To understand why hardware preemption is required, we have to look at how GRID shares GPU resources.  GRID uses round-robin time slicing to share GPU resources amongst multiple VMs, and each VM gets a set amount of time on the GPU.  When the time slice expires, the GPU moves onto the next VM.  When the GPU is rendering graphics to be displayed on the screen, the round-robin method works well because the GPU can typically complete all the work in the allotted time slice.  CUDA jobs, however, pose a challenge because jobs can take hours to complete.  Without the ability to preempt the running jobs, the CUDA jobs could fail when the time slice expired.

    Preemption support on Pascal cards allows VMs with any virtual workstation profile to have access to CUDA features.  This enables high-end applications to use smaller Quadro vDWS profiles instead of having to have an entire GPU dedicated to that specific user.

    Fixed Share Round Robin Scheduling

    As mentioned above, GRID uses round robin time slicing to share the GPU across multiple VMs.  One disadvantage of this method is that if a VM doesn’t have anything for the GPU to do, it is skipped and the time slice is given to the next VM in line.  This prevents the GPU from being idle if there are VMs that can utilize it.  It also means that some VMs may get more access to the GPU than others.

    NVIDIA is adding a new scheduler option in GRID 5.0.  This option is called the Fixed Share Scheduler.  The Fixed Share scheduler grants each VM that is placed on the GPU an equal share of resources.  Time slices are still used in the fixed share scheduler, and if a VM does not have any jobs for the GPU to execute, the GPU will be idled during that time slice.

    As VMs are placed onto, or removed from, a GPU, the share of resources available to each VM is recalculated, and shares are redistributed to ensure that all VMs get equal access.
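
    A toy model (my own illustration, not NVIDIA's scheduler code) shows the practical difference between the two policies: with round robin an always-busy VM inherits the slices of an idle neighbor, while with fixed share each VM keeps its equal slot and the GPU simply idles when that VM has nothing to do.

    import random
    from itertools import cycle

    def simulate(policy, has_work, vms, total_slices):
        """Count the GPU time slices actually used by each VM. Assumes at
        least one VM always has work (true below), otherwise the
        round-robin loop would spin forever."""
        gpu_time = {vm: 0 for vm in vms}
        order = cycle(vms)
        used = 0
        while used < total_slices:
            vm = next(order)
            busy = has_work(vm)
            if policy == "round-robin" and not busy:
                continue              # idle VM is skipped; its slice goes to the next VM
            used += 1                 # fixed share always burns the slice...
            if busy:
                gpu_time[vm] += 1     # ...and the GPU idles if the VM has no work
        return gpu_time

    # vm0 always has work, vm1 only half the time (hypothetical workloads).
    workload = {"vm0": lambda: True, "vm1": lambda: random.random() < 0.5}
    for policy in ("round-robin", "fixed-share"):
        print(policy, simulate(policy, lambda vm: workload[vm](), ["vm0", "vm1"], 1000))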

    Enhanced Monitoring

    GRID 5.0 adds new monitoring capabilities to the GRID platform.  One of the new features is per-application monitoring.  Administrators can now view GPU utilization on a per-application basis using the NVIDIA-SMI tool.  This new feature allows administrators to see exactly how much of the GPU resources each application is using.

    License Enforcement

    In previous versions of GRID, the licensing server basically acted as an auditing tool.  A license was required for GRID, but the GRID features would continue to function even if the licensed quantity was exceeded.  GRID 5.0 changes that.  Licensing is now enforced with GRID, and if a license is not available, the GRID drivers will not function.  Users will get reduced quality when they sign into their desktops.

    Because licensing is now enforced, the license server has built-in HA functionality.  A secondary licensing server can be specified in the config of both the license server and the driver, and if the primary is not available, it will fall back to the secondary.

    Other Announced Features

    Two GRID 5.0 features were announced at Citrix Synergy back in May.  The first was Citrix Director support for monitoring GRID.  The second feature is beta Live Migration support for XenServer.


    by seanpmassey at August 17, 2017 05:35 PM

    OpenSSL

    FIPS 140-2: Thanks and Farewell to SafeLogic

    We’ve had a change in the stakeholder aspect of this new FIPS 140 validation effort. The original sponsor, SafeLogic, with whom we jump-started this effort a year ago and who has worked with us since then, is taking a well-deserved bow due to a change in circumstances. Supporting this effort has been quite a strain for a relatively small company, but SafeLogic has left us in a fairly good position. Without SafeLogic we wouldn’t have made it this far, and while I don’t anticipate any future SafeLogic involvement with this effort from this point on, I remain enormously grateful to SafeLogic and CEO Ray Potter for taking on such a bold and ambitious venture.

    As announced here recently Oracle remains a sponsor but will hopefully not be the only sponsor for long. We will continue to partner with Acumen and we have been working extensively with Ashit Vora and Tony Busciglio there to sort out some new ideas.

    No code has been written yet as we’re still developing a technical strategy and design. We’ve considered some new approaches to structuring the module, perhaps even as a related set of “bound” modules instead of one monolithic module as for past validations. Carefully sorting through the implications of design decisions for FIPS 140 requirements is a tedious but necessary process, and I think we’ll make faster progress overall by not rushing to the coding stage.

    As always we’re interested in hearing from stakeholders (and especially prospective sponsors!), please contact me at marquess@openssl.com or Jim Wright at Oracle at jim.wright@oracle.com.

    August 17, 2017 04:00 PM

    Raymii.org

    Backup OpenStack object store or S3 with rclone

    This is a guide that shows you how to make backups of an object storage service like OpenStack swift or S3. Most object store services save data on multiple servers, but deleting a file also deletes it from all servers. Tools like rsync or scp are usually not compatible with these services, unless there is a proxy that translates the object store protocol to something like SFTP. rclone is an rsync-like command line tool that syncs files and directories from cloud storage services like OpenStack swift, Amazon S3, Google cloud/drive, dropbox and more. By having a local backup of the contents of your cloud object store you can restore from accidental deletion or easily migrate between cloud providers. Syncing between cloud providers is also possible. It can also help to lower the RTO (recovery time objective), and backups are just always a good thing to have and test.
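
    As a small illustration (not taken from the linked guide), a nightly pull of one container to local disk could be wrapped like this; the remote name swift-backup and the container name are placeholders for whatever you set up with rclone config, and rclone copy is used here because it never deletes files at the destination:

    import subprocess
    from datetime import date

    # Hypothetical names: "swift-backup" is a remote configured with "rclone config",
    # "my-container" is the object store container to back up.
    SOURCE = "swift-backup:my-container"
    DEST = "/backups/my-container"

    result = subprocess.run(["rclone", "copy", SOURCE, DEST, "--verbose"],
                            capture_output=True, text=True)
    if result.returncode != 0:
        print("%s: backup failed:\n%s" % (date.today(), result.stderr))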

    August 17, 2017 12:00 AM

    August 16, 2017

    HolisticInfoSec.org

    Toolsmith #127: OSINT with Datasploit

    I was reading an interesting Motherboard article, Legal Hacking Tools Can Be Useful for Journalists, Too, that includes reference to one of my all time OSINT favorites, Maltego. Joseph Cox's article also mentions Datasploit, a 2016 favorite for fellow tools aficionado, Toolswatch.org, see 2016 Top Security Tools as Voted by ToolsWatch.org Readers. Having not yet explored Datasploit myself, this proved to be a grand case of "no time like the present."
    Datasploit is "an #OSINT Framework to perform various recon techniques, aggregate all the raw data, and give data in multiple formats." More specifically, as stated on Datasploit documentation page under Why Datasploit, it utilizes various Open Source Intelligence (OSINT) tools and techniques found to be effective, and brings them together to correlate the raw data captured, providing the user relevant information about domains, email address, phone numbers, person data, etc. Datasploit is useful to collect relevant information about target in order to expand your attack and defense surface very quickly.
    The feature list includes:
    • Automated OSINT on domain / email / username / phone for relevant information from different sources
    • Useful for penetration testers, cyber investigators, defensive security professionals, etc.
    • Correlates and collates results, showing them in a consolidated manner
    • Tries to find out credentials, API keys, tokens, sub-domains, domain history, legacy portals, and more related to the target
    • Available as single consolidating tool as well as standalone scripts
    • Performs Active Scans on collected data
    • Generates HTML, JSON reports along with text files
    Resources
    Github: https://github.com/datasploit/datasploit
    Documentation: http://datasploit.readthedocs.io/en/latest/
    YouTube: Quick guide to installation and use

    Pointers
    Next, a few pointers to keep you from losing your mind. This project is very much a work in progress, with lots of very frustrated users filing bugs and wondering where the support is. The team is doing their best, be patient with them, but read through the Github issues to be sure any bugs you run into haven't already been addressed.
    1) Datasploit does not error gracefully, it just crashes. This can be the result of unmet dependencies or even a missing API key. Do not despair, take note, I'll talk you through it.
    2) I suggest, for ease, and best match to documentation, run Datasploit from an Ubuntu variant. Your best bet is to grab Kali, VM or dedicated and load it up there, as I did.
    3) My installation guidance and recommendations should hopefully get you running trouble free, follow it explicitly.
    4) Acquire as many API keys as possible, see further detail below.

    Installation and preparation
    From Kali bash prompt, in this order:

    1. git clone https://github.com/datasploit/datasploit /etc/datasploit
    2. apt-get install libxml2-dev libxslt-dev python-dev lib32z1-dev zlib1g-dev
    3. cd /etc/datasploit
    4. pip install -r requirements.txt
    5. mv config_sample.py config.py
      6. With your preferred editor, open config.py and add API keys for the following at a minimum; they are, for all intents and purposes, required, and detailed instructions to acquire each are here:
      1. Shodan API
      2. Censysio ID and Secret
      3. Clearbit API
      4. Emailhunter API
      5. Fullcontact API
      6. Google Custom Search Engine API key and CX ID
      7. Zoomeye Username and Password
    If, and only if, you've done all of this correctly, you might end up with a running instance of Datasploit. :-) Seriously, this is some of the glitchiest software I've tussled with in quite a while, but the results paid handsomely. Run python datasploit.py domain.com, where domain.com is your target. Obviously, I ran python datasploit.py holisticinfosec.org to acquire results pertinent to your author. 
    Datasploit rapidly pulled results as follows:
    211 domain references from Github:
    Github results
    Luckily, no results from Shodan. :-)
    Four results from Paste(s): 
    Pastebin and Pastie results
    Datasploit pulled russ at holisticinfosec dot org as expected, per email harvesting.
    Accurate HolisticInfoSec host location data from Zoomeye:

    Details regarding HolisticInfoSec sub-domains and page links:
    Sub-domains and page links
    Finally, a good return on DNS records for holisticinfosec.org and, thankfully, no vulns found via PunkSpider

    DataSploit can also be integrated into other code and called as individual scripts for unique functions. I did a quick run with python emailOsint.py russ@holisticinfosec.org and the results were impressive:
    Email OSINT
    I love that the first query is of Troy Hunt's Have I Been Pwned. Not sure if you have been? Better check it out. Reminder here, you'll really want to be sure to have as many API keys as possible or you may find these buggy scripts crashing. You'll definitely find yourself compromising between frustration and the rapid, detailed results. I put this offering squarely in the "shows much promise category" if the devs keep focus on it, assess for quality, and handle errors better.
    Give Datasploit a try for sure.
    Cheers, until next time...

    by Russ McRee (noreply@blogger.com) at August 16, 2017 02:54 PM

    August 15, 2017

    Evaggelos Balaskas

    Nagios with PNP4Nagios on CentOS 6.x

    nagios_logo.png

    In many companies, nagios is the de-facto monitoring tool. Even with modern alternative solutions available, this opensource project still has a large number of installations in place. This guide is based on a “clean/fresh” CentOS 6.9 virtual machine.

    Epel

    An official nagios repository exists at this address: https://repo.nagios.com/
    I prefer to install nagios via the EPEL repository:

    # yum -y install http://fedora-mirror01.rbc.ru/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

    # yum info nagios | grep Version
    Version     : 4.3.2

    # yum -y install nagios

    Selinux

    Every online manual suggests disabling selinux with nagios. There is a reason for that! But I will try my best to provide info on how to keep selinux enforced. To write our own nagios selinux policies the easy way, we need one more package:

    # yum -y install policycoreutils-python

    Starting nagios:

    # /etc/init.d/nagios restart

    will show us some initial errors in the selinux log file /var/log/audit/audit.log

    Filtering the results:

    # egrep denied /var/log/audit/audit.log | audit2allow

    will display something like this:

    #============= nagios_t ==============
    allow nagios_t initrc_tmp_t:file write;
    allow nagios_t self:capability chown;
    

    To create a policy file based on your errors:

    # egrep denied /var/log/audit/audit.log | audit2allow -a -M nagios_t

    and to enable it:

    # semodule -i nagios_t.pp

    BE AWARE this is not the only problem with selinux, but I will provide more details in a few moments.

    Nagios

    Now we are ready to start the nagios daemon:

    # /etc/init.d/nagios restart

    filtering the processes of our system:

    # ps -e fuwww | egrep na[g]ios

    nagios    2149  0.0  0.1  18528  1720 ?        Ss   19:37   0:00 /usr/sbin/nagios -d /etc/nagios/nagios.cfg
    nagios    2151  0.0  0.0      0     0 ?        Z    19:37   0:00  _ [nagios] <defunct>
    nagios    2152  0.0  0.0      0     0 ?        Z    19:37   0:00  _ [nagios] <defunct>
    nagios    2153  0.0  0.0      0     0 ?        Z    19:37   0:00  _ [nagios] <defunct>
    nagios    2154  0.0  0.0      0     0 ?        Z    19:37   0:00  _ [nagios] <defunct>
    nagios    2155  0.0  0.0  18076   712 ?        S    19:37   0:00  _ /usr/sbin/nagios -d /etc/nagios/nagios.cfg
    

    super!

    Apache

    Now it is time to start our web server apache:

    # /etc/init.d/httpd restart

    Starting httpd: httpd: apr_sockaddr_info_get() failed
    httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName

    This is a common error, and means that we need to define a ServerName in our apache configuration.

    First, we give a name to our host in the hosts file:

    # vim /etc/hosts

    For this guide, I’ll go with centos69, but you can edit that according to your needs:

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 centos69

    then we need to edit the default apache configuration file:

    # vim /etc/httpd/conf/httpd.conf

    #ServerName www.example.com:80
    ServerName centos69

    and restart the process:

    # /etc/init.d/httpd restart

    Stopping httpd:      [  OK  ]
    Starting httpd:      [  OK  ]

    We can see from the netstat command that it is running:

    # netstat -ntlp | grep 80

    tcp        0      0 :::80                       :::*                        LISTEN      2729/httpd      

    Firewall

    It is time to fix our firewall and open the default http port, so that we can view nagios from our browser.
    That means we need to fix our iptables!

    # iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

    This is what we need. For a more permanent solution, we need to edit the default iptables configuration file:

    # vim /etc/sysconfig/iptables

    and add the below entry on INPUT chain section:

    -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

    Web Browser

    We are ready to fire up our web browser and type the address of our nagios server.
    Mine is on a local machine with the IP 192.168.122.96, so

    http://192.168.122.96/nagios/

    User Authentication

    The default user authentication credentials are:

    nagiosadmin // nagiosadmin

    but we can change them!

    From our command line, we type something similar:

    # htpasswd -sb /etc/nagios/passwd nagiosadmin e4j9gDkk6LXncCdDg

    so that htpasswd will update the default nagios password entry in /etc/nagios/passwd with something else, preferably a random and difficult password.

    ATTENTION: e4j9gDkk6LXncCdDg is just that, a random password that I created for this document only. Create your own and don’t tell anyone!

    Selinux, Part Two

    At this point, if you are tailing the selinux audit file, you will see some more error messages.

    Below, you will see my nagios_t selinux policy file with all the things that are needed for nagios to run properly - at least at the moment.

    module nagios_t 1.0;
    
    require {
            type nagios_t;
            type initrc_tmp_t;
            type nagios_spool_t;
            type nagios_system_plugin_t;
            type nagios_exec_t;
            type httpd_nagios_script_t;
            class capability chown;
            class file { write read open execute_no_trans getattr };
    }
    
    #============= httpd_nagios_script_t ==============
    allow httpd_nagios_script_t nagios_spool_t:file { open read getattr };
    
    #============= nagios_t ==============
    allow nagios_t initrc_tmp_t:file write;
    allow nagios_t nagios_exec_t:file execute_no_trans;
    allow nagios_t self:capability chown;
    
    #============= nagios_system_plugin_t ==============
    allow nagios_system_plugin_t nagios_exec_t:file getattr;

    Edit your nagios_t.te file accordingly and then build your selinux policy:

    # make -f /usr/share/selinux/devel/Makefile

    You are ready to update the previous nagios selinux policy :

    # semodule -i nagios_t.pp

    Selinux - Nagios package

    So … there is an rpm package with the name nagios-selinux at Version 4.3.2.
    You can install it, but it does not resolve all the selinux errors in the audit file, so …
    I think my way is better, because you can understand some things better and have more flexibility in defining your selinux policy.

    Nagios Plugins

    Nagios is the core process, the daemon. We need the nagios plugins - the checks!
    You can do something like this:

    # yum install nagios-plugins-all.x86_64

    but I don’t recommend it.

    These are the defaults :

    nagios-plugins-load-2.2.1-4git.el6.x86_64
    nagios-plugins-ping-2.2.1-4git.el6.x86_64
    nagios-plugins-disk-2.2.1-4git.el6.x86_64
    nagios-plugins-procs-2.2.1-4git.el6.x86_64
    nagios-plugins-users-2.2.1-4git.el6.x86_64
    nagios-plugins-http-2.2.1-4git.el6.x86_64
    nagios-plugins-swap-2.2.1-4git.el6.x86_64
    nagios-plugins-ssh-2.2.1-4git.el6.x86_64

    # yum -y install nagios-plugins-load nagios-plugins-ping nagios-plugins-disk nagios-plugins-procs nagios-plugins-users nagios-plugins-http nagios-plugins-swap nagios-plugins-ssh

    and if everything is going as planned, you will see something like this:

    nagios_checks.jpg

    PNP4Nagios

    It is time to add pnp4nagios, a simple graphing tool that reads the nagios performance data and represents it as graphs.

    # yum info pnp4nagios | grep Version
    Version     : 0.6.22

    # yum -y install pnp4nagios

    We must not forget to restart our web server:

    # /etc/init.d/httpd restart

    Bulk Mode with NPCD

    I’ve spent far too much time trying to understand why the default Synchronous mode does not work properly with nagios v4.x and pnp4nagios v0.6.x.
    In the end, this is what works - so try not to re-invent the wheel, as I tried to do and lost so many hours.

    Performance Data

    We need to tell nagios to gather performance data from its checks:

    # vim +/process_performance_data /etc/nagios/nagios.cfg

    process_performance_data=1
    

    We also need to tell nagios what to do with this data:

    nagios.cfg

    # *** the template definition differs from the one in the original nagios.cfg
    #
    service_perfdata_file=/var/log/pnp4nagios/service-perfdata
    service_perfdata_file_template=DATATYPE::SERVICEPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tSERVICEDESC::$SERVICEDESC$\tSERVICEPERFDATA::$SERVICEPERFDATA$\tSERVICECHECKCOMMAND::$SERVICECHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$\tSERVICESTATE::$SERVICESTATE$\tSERVICESTATETYPE::$SERVICESTATETYPE$
    service_perfdata_file_mode=a
    service_perfdata_file_processing_interval=15
    service_perfdata_file_processing_command=process-service-perfdata-file
    
    # *** the template definition differs from the one in the original nagios.cfg
    #
    host_perfdata_file=/var/log/pnp4nagios/host-perfdata
    host_perfdata_file_template=DATATYPE::HOSTPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tHOSTPERFDATA::$HOSTPERFDATA$\tHOSTCHECKCOMMAND::$HOSTCHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$
    host_perfdata_file_mode=a
    host_perfdata_file_processing_interval=15
    host_perfdata_file_processing_command=process-host-perfdata-file

    Commands

    In the above configuration, we introduced two new commands

    service_perfdata_file_processing_command  &
    host_perfdata_file_processing_command

    We need to define them in the /etc/nagios/objects/commands.cfg file :

    #
    # Bulk with NPCD mode
    #
    define command {
           command_name    process-service-perfdata-file
           command_line    /bin/mv /var/log/pnp4nagios/service-perfdata /var/spool/pnp4nagios/service-perfdata.$TIMET$
    }
    
    define command {
           command_name    process-host-perfdata-file
           command_line    /bin/mv /var/log/pnp4nagios/host-perfdata /var/spool/pnp4nagios/host-perfdata.$TIMET$
    }

    If everything has gone right, then you will be able to see something like this on a nagios check:

    nagios_perf.png

    Verify

    Verify your pnp4nagios setup:

    # wget -c http://verify.pnp4nagios.org/verify_pnp_config
    
    # perl verify_pnp_config -m bulk+npcd -c /etc/nagios/nagios.cfg -p /etc/pnp4nagios/ 

    NPCD

    The NPCD daemon (Nagios Performance C Daemon) is the daemon/process that will translate the gathered performance data to graphs, so let’s start it:

    # /etc/init.d/npcd restart
    Stopping npcd:                                             [FAILED]
    Starting npcd:                                             [  OK  ]

    You should see some warnings but not any critical errors.

    Templates

    Two new template definitions should be created, one for the host and one for the service:

    /etc/nagios/objects/templates.cfg

    define host {
       name       host-pnp
       action_url /pnp4nagios/index.php/graph?host=$HOSTNAME$&srv=_HOST_' class='tips' rel='/pnp4nagios/index.php/popup?host=$HOSTNAME$&srv=_HOST_
       register   0
    }
    
    define service {
       name       srv-pnp
       action_url /pnp4nagios/graph?host=$HOSTNAME$&srv=$SERVICEDESC$' class='tips' rel='/pnp4nagios/popup?host=$HOSTNAME$&srv=$SERVICEDESC$
       register   0
    }

    Host Definition

    Now we need to apply the host-pnp template to our system:

    so this configuration: /etc/nagios/objects/localhost.cfg

    define host{
            use                     linux-server            ; Name of host template to use
                                                            ; This host definition will inherit all variables that are defined
                                                            ; in (or inherited by) the linux-server host template definition.
            host_name               localhost
            alias                   localhost
            address                 127.0.0.1
            }
    

    becomes:

    define host{
            use                     linux-server,host-pnp            ; Name of host template to use
                                                            ; This host definition will inherit all variables that are defined
                                                            ; in (or inherited by) the linux-server host template definition.
            host_name               localhost
            alias                   localhost
            address                 127.0.0.1
            }

    Service Definition

    Finally, we must append the pnp4nagios service template to our services:

    srv-pnp

    define service{
            use                             local-service,srv-pnp         ; Name of service template to use
            host_name                       localhost

    Graphs

    We should be able to see graphs like these:

    nagios_ping.png

    Happy Measurements!

    appendix

    These are some extra notes on the above article that you need to keep in mind:

    Services

    # chkconfig httpd on
    # chkconfig iptables on
    # chkconfig nagios on
    # chkconfig npcd on 

    PHP

    If you are not running the default php version on your system, it is possible to get this error message:

    Non-static method nagios_Core::SummaryLink()

    There is a simple solution for that: you need to modify the index file to exclude the deprecated php error messages:

    # vim +/^error_reporting /usr/share/nagios/html/pnp4nagios/index.php   
    
    // error_reporting(E_ALL & ~E_STRICT);
    error_reporting(E_ALL & ~E_STRICT & ~E_DEPRECATED);
    

    August 15, 2017 06:18 PM

    August 13, 2017

    Sarah Allen

    call it what it is

    The news from Charlottesville is not an isolated incident. The US President responds that there is violence “on many sides.” This is false.

    In Charlottesville, the police were present in riot gear, but the rioting white people were treated with kid gloves, a stark difference from the Ferguson police reaction reported by @JayDowTV, who was there.

    These people feel comfortable declaring it a war: “And to everyone, know this: we are now at war. And we are not going to back down. There will be more events. Soon. We are going to start doing this nonstop. Across the country. I’m going to arrange them myself. Others will too, I’m sure, but I’m telling you now: I am going to start arranging my own events. We are going to go bigger than Charlottesville. We are going to go huge.”

    General McMaster, US National Security Advisor tells us “I think what terrorism is, is the use of violence to incite terror and fear. And, of course, it was terrorism” (NBC via Vox)

    A Christian minister writes this plainly on his post, Yes, This is Racism. We each need to declare what we stand for in this moment and always.

    We are not with you, torch-bearers, in Charlottesville or anywhere.
    We do not consent to this.
    In fact we stand against you, alongside the very beautiful diversity that you fear.
    We stand with people of every color and of all faiths, people of every orientation, nationality, and native tongue.

    We are not going to have this. This is not the country we’ve built together and it will not become what you intend it to become.

    So you can kiss our diverse, unified, multi-colored behinds because your racism and your terrorism will not win the day.

    Believe it.

    — John Pavlovitz

    by sarah at August 13, 2017 10:05 PM

    August 12, 2017

    OpenSSL

    Random Thoughts

    The next release will include a completely overhauled version of the random number facility, the RAND API. The default RAND method is now based on a Deterministic Random Bit Generator (DRBG) implemented according to the NIST recommendation 800-90A. We have also edited the documentation, allowed finer-grained configuration of how to seed the generator, and updated the default seeding mechanisms.

    There will probably be more changes before the release is made, but they should be comparatively minor.

    Read on for more details.

    Background; NIST

    The National Institute of Standards and Technology (NIST) is part of the US Federal Government. Part of their mission is to advance standards. NIST has a long history of involvement with cryptography, including DES, the entire family of SHA digests, AES, various elliptic curves, and so on. They produce a series of Special Publications. One set, SP 800, is the main way they publish guidelines, recommendations, and reference materials in the computer security area.

    NIST SP 800-90A rev1 is titled Recommendation for Random Number Generation Using Deterministic Random Bit Generators. Yes, that’s a mouthful! But note that if you generate enough random bits, you get a random byte, and if you generate enough bytes you can treat it as a random number, often a BN in OpenSSL terminology. So when you see “RBG” think “RNG.” :)

    A non-deterministic RBG (NRBG) has to be based on hardware where every bit output is based on an unpredictable physical process. A fun example of this is the old Silicon Graphics LavaRand, now recreated and running at CloudFlare, which takes pictures of a lava lamp and digitizes them.

    A deterministic RBG (DRBG) uses an algorithm to generate a sequence of bits from an initial seed, and that seed must be based on a true randomness source. This is a divide and conquer approach: if the algorithm has the right properties, the application only needs a small input of randomness (16 bytes for our algorithm) to generate many random bits. It will also need to be periodically reseeded, but when that needs to happen can also be calculated.
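
    To make the divide-and-conquer idea concrete, here is a toy Python sketch of a hash-based generator: a small amount of true randomness (the seed) drives many outputs, and reseeding happens after a fixed number of calls. This is only an illustration of the concept; the names and constants are made up, and the real OpenSSL implementation is an AES-CTR DRBG per SP 800-90A, not this.

    import hashlib
    import os

    class ToyDRBG:
        """Toy hash-based DRBG, for illustration only."""

        RESEED_INTERVAL = 1 << 16   # reseed after this many generate() calls

        def __init__(self):
            self.reseed()

        def reseed(self):
            # A small amount of true randomness seeds many future outputs.
            self.state = os.urandom(32)
            self.counter = 0

        def generate(self, n):
            if self.counter >= self.RESEED_INTERVAL:
                self.reseed()
            out = b""
            while len(out) < n:
                self.counter += 1
                out += hashlib.sha256(self.state + self.counter.to_bytes(8, "big")).digest()
            return out[:n]

    drbg = ToyDRBG()
    print(drbg.generate(16).hex())   # 16 pseudo-random bytes from one 32-byte seed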

    The NIST document provides a framework for defining a DRBG, including requirements on the operating environment and a lifecycle model that is slightly different from OpenSSL’s XXX_new()/XXX_free() model. NIST treats creation as relatively heavyweight, and allows a single DRBG to be instantiated or not during its lifetime.

    NIST SP 800-90 defined four DRBG algorithms. One of these was “Dual Elliptic Curve” which was later shown to be deliberately vulnerable. For a really good explanation of this, see Steve Checkoway’s talk at the recent IETF meeting. An update to the document was made, the above-linked 90A revision 1, and Dual-EC DRBG was removed.

    OpenSSL currently implements the AES-counter version, which is also what Google’s BoringSSL and Amazon’s s2n use. Our tests include the NIST known answer tests (KATs), so we are confident that the algorithm is pretty correct. It also uses AES in a common way, which increases our confidence in its correctness.

    What we did

    First, we cleaned up our terminology in the documentation and code comments. The term entropy is both highly technical and confusing. It is used in 800-90A in very precise ways, but in our docs it was usually misleading, so we modified the documents to use the more vague but accurate term randomness. We also tightened up some of the implementation and made some semantics more precise; e.g., RAND_load_file now only reads regular files.

    Next, we imported the AES-based DRBG from the OpenSSL FIPS project, and made it the default RAND method. The old method, which tried an ad hoc set of methods to get seed data, has been removed. We added a new configuration parameter, --with-rand-seed, which takes a comma-separated list of values for seed sources. Each method is tried in turn, stopping when enough bits of randomness have been collected. The types supported are:

    • os which is the default and only one supported on Windows, VMS and some older legacy platforms. Most of the time this is the right value to use, and it can therefore be omitted.
    • getrandom which uses the getrandom() system call
    • devrandom which tries special devices like /dev/urandom
    • egd which uses the Entropy Gathering Daemon protocol
    • none for no seeding (don’t use this)
    • rdcpu for X86 will try to use RDSEED and RDRAND instructions
    • librandom currently not implemented, but could use things like arc4random() when available.

    If running on a Linux kernel, the default of os will turn on devrandom. If you know you have an old kernel and cannot upgrade, you should think about using rdcpu. Implementation details can be seen by looking at the files in the crypto/rand directory, if you’re curious. They’re relatively well-commented, but the implementation could change in the future. Also note that seeding happens automatically, so unless you have a very special environment, you should not ever need to call RAND_add(), RAND_seed() or other seeding routines. See the manpages for details.

    It’s possible to “chain” DRBGs, so that the output of one becomes the seed input for another. Each SSL object now has its own DRBG, chained from the global one. All random bytes, like the pre-master secret and session IDs, are generated from that one. This can reduce lock contention, and might result in needing to seed less often.
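
    As a rough sketch of what chaining means in practice (again a toy Python illustration, not the OpenSSL code; all names are made up): one generator’s output simply becomes another generator’s seed.

    import hashlib
    import os

    def make_drbg(seed):
        # Toy generator: yields pseudo-random 32-byte blocks derived from the seed.
        counter = 0
        while True:
            counter += 1
            yield hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()

    global_drbg = make_drbg(os.urandom(32))        # seeded from the operating system
    per_ssl_drbg = make_drbg(next(global_drbg))    # "chained": seeded from the parent's output

    session_id = next(per_ssl_drbg)[:16]           # e.g. 16 random bytes for a session ID
    print(session_id.hex())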

    We also added a separate global DRBG for private key generation and added APIs to use it. This object isn’t reachable directly, but it is used by the new BN_priv_rand and BN_priv_rand_range APIs. Those APIs, in turn, are used by all private-key generating functions. We like the idea of keeping the private-key DRBG “away from” the general public one; this is common practice. We’re not sure how this idea, and the per-SSL idea, interact and we’ll be evaluating that for a future release. One possibility is to have two DRBGs per thread, and remove the per-SSL one. We’d appreciate any suggestions on how to evaluate and decide what to do.

    What we didn’t do

    The RAND_DRBG API isn’t public, but it probably will be in a future release. We want more time to play around with the API and see what’s most useful and needed.

    It’s not currently possible to change either global DRBG; for example, to use an AES-256 CTR which is also implemented. This is also something under consideration for the future. The proper way to do both of these might be to make the existing RAND_METHOD datatype opaque; we missed doing so in the 1.1.0 release.

    Acknowledgements

    This recent round of work started with some posts on the cryptography mailing list. There was also discussion on the openssl-dev mailing list and in a handful of PRs. Many people participated, but in particular we’d like to thank (in no particular order): Colm MacCarthaigh, Gilles Van Assche, John Denker, Jon Callas, Mark Steward, Chris Wood, Matthias St. Pierre, Peter Waltenberg, and Ted Ts’o.

    August 12, 2017 08:00 PM

    Electricmonk.nl

    Why the EasyList DMCA takedown notice is perfectly valid (and fair)

    There's a story going around about a DMCA takedown notice filed against the adblocking list EasyList, causing them to remove a domain from one of their filters. As usual, lots of indignation and pitchfork-mobbing followed. It seems though, that few people understand what exactly is going on here.

    Here's a brief history of ad blocking on the web:

    1. Publishers start putting ads on their websites.
    2. Publishers start putting annoying and dangerous ads on their websites.
    3. Adblockers appear that filter out ads.
    4. Cat-and-mouse game where publishers try to circumvent adblockers and adblockers block their attempts at circumvention.
    5. Publishers start putting up paywalls and completely blocking browsers that have adblocking.
    6. Adblockers start circumventing paywalls and measures put in place by publishers to ban people that are blocking ads.

    This whole EasyList DMCA thing is related to points 5 and 6.

    Admiral is a company that provides services to implement point 5: block people completely from websites if they block ads.

    Let's get something clear: I'm extremely pro-adblocking. Publishers have time and again shown that they can't handle the responsibility of putting decent ads on websites. It always devolves into shitty practices such as auto-playing audio, flashing epilepsy bullshit, popups, malware infested crap, etc. Anything to get those few extra ad impressions and ad quality be damned! This has happened again and again and again in every possible medium from television to newspapers. So you can pry my adblocker from my cold dead hands and publishers can go suck it.

    BUT I'm also of the firm belief that publishers have every right to completely ban me from their contents if I'm not willing to look at ads. It's their contents and it's their rules. If they want to block me, I'm just fine with that. I probably don't care about their content all that much anyway.

    Now, for the DMCA notice.

    A commit to EasyList added functionalclam.com to EasyList. According to Admiral, functionalclam.com is a domain used to implement point 5: blocking users from websites if they're using adblockers. According to this lawyer:

    The Digital Millennium Copyright Act ("DMCA") makes it illegal to circumvent technical measures (e.g., encryption, copy protection) that prevent access to copyrighted materials, such as computer software or media content. The DMCA also bans the distribution of products or services that are designed to carry out circumvention of technical measures that prevent either access to or copying of copyrighted materials.

    Since the functionalclam.com domain is a technical means to prevent access to copyrighted materials, the DMCA takedown notice is completely valid. And actually, it's a pretty honorable one. From the DMCA takedown notice itself (scroll down):

    >>> What would be the best solution for the alleged infringement? Are there specific changes the other person can make other than removal?

    Full repository takedown [of EasyList] should not be necessary. Instead, the repository owner can remove functionalclam[.]com from the file in question and not replace with alternative circumvention attempts.

    EasyList themselves agree that their list should not be used to circumvent such access control:

    If it is a Circumvention/Adblock-Warning adhost, it should be removed from Easylist even without the need for a DMCA request.

    So put down your pitchforks because:

    • This has nothing to do with ad blocking; it has to do with circumventing access control.
    • Adblocking is not under attack.
    • The DMCA notice was not abuse of the DMCA.
    • Publishers have every right to ban adblock users from their content.
    • Y'all need to start growing some skepticism. When you read clickbaity titles such as "Ad blocking is under attack", you should immediately be suspicious. In fact, always be suspicious.

    by admin at August 12, 2017 06:25 AM

    August 11, 2017

    SysAdmin1138

    Reputation services, a brief history

    Or,

    The Problem of Twitter, Hatemobs, and Denial of Service

    The topic of shared blocklists is hot right now. For those who avoid the blue bird, a shared blocklist is much like a shared killfile from Ye Olde Usenet, or an RBL for spam. Subscribe to one, and get a curated feed of people you never want to hear from again. It's an idea that's been around for decades, applied to a new platform.

    However, internet-scale has caught up with the technique.

    Usenet

    A Usenet killfile was a feature of NNTP clients where posts meeting a regex would not even get displayed. If you've ever wondered what the vengeful:

    *Plonk!*

    ...was about? This is what it was referring to. It was a public way of saying:

    I have put you into my killfile, and I am now telling you I have done so, you asshole.

    This worked because in the Usenet days, the internet was a much smaller place. Once in a while you'd get waves of griefers swarming a newsgroup, but that was pretty rare. You legitimately could remove most content you didn't want to see from your notice. The *Plonk!* usage still exists today, and I'm seeing some twitter users use that to indicate a block is being deployed. I presume these are veterans of many a Usenet flame-war.

    RBLs

    The Realtime Blackhole Lists (RBL) were pioneered as an anti-spam technique. Mail administrators could subscribe to these, and all incoming mail connections could be checked against the list. If a connection was listed, the SMTP connection could be outright rejected. The assumption here was that spam comes from unsecured or outright evil hosts, and that blocking them outright is better for everyone.
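
    Mechanically, an RBL check is just a DNS lookup: reverse the octets of the connecting IP address, append the list's zone, and see whether the resulting name resolves. A minimal Python sketch of the idea (zen.spamhaus.org is simply a well-known example zone):

    import socket

    def is_listed(ip, zone="zen.spamhaus.org"):
        # The address is listed if the reversed-octet query under the zone resolves.
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True       # an A record came back: the IP is on the list
        except socket.gaierror:
            return False      # NXDOMAIN (or lookup failure): not listed

    print(is_listed("127.0.0.2"))   # DNSBLs conventionally list this test address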

    This was a true democratic solution in the spirit of free software: Anyone could run one.

    That same spirit means that each RBL had different criteria for listing. Some were zero tolerance, and even one Unsolicited Commercial Email was enough to get listed. Others simply listed whole netblocks, so you could block all Cable ISPs, or entire countries.

    Unlike killfiles, RBLs were designed to be a distributed system from the outset.

    Like killfiles, RBLs are in effect a Book of Grudges. Subscribing to one means subscribing to someone else's grudges. If you shared grudge-worthy viewpoints, this was incredibly labor saving. If you didn't, sometimes things got blocked that shouldn't have.

    As a solution to the problem of spam, RBLs were not the silver bullet. That came with the advent of commercial providers deploying surveillance networks and offering IP reputation services as part of their paid service. The commercial providers were typically able to deploy far wider surveillance than the all-volunteer RBLs did, and as such saw a wider sample of overall email traffic. A wider sample means that they were less likely to ban a legitimate site for a single offense.

    This is still the case today, though email-as-a-service providers like Google and Microsoft are now hosting the entire email stack themselves. Since Google handles a majority of all email on the planet, their surveillance is pretty good.

    Compounding the problem for the volunteer-led RBL efforts is IPv6. IPv4 was small enough that you could legitimately tag the entire internet as spam/not-spam without undue resources. IPv6 is vastly larger and very hard to cover comprehensively without resorting to netblock blocking. And even then, there are enough possible netblocks that scale is a real issue.

    Twitter Blocklists

    Which brings us to today, and twitter. Shared blocklists are in this tradition of killfiles and RBLs. However, there are a few structural barriers to this being the solution it was with Usenet:

    • No netblocks. Each user has to be blocked individually; you can't block all of a network or a country of origin.
    • The number of accounts. Active-users is not the same as total-users. In 2013, the estimated registered-account number was around 810 million. Four years later, this is likely over a billion. It's rapidly approaching the size of the IPv4 address space.
    • Ease of setting up a new account. Unlike changing your IP address, changing your username is very, very easy.

    The lack of a summarization technique, the size of the problem space, and the ease of bypassing a block by changing your identifier mean that a shared-blocklist is a poor tool to fight off a determined hatemob. It's still good for casual griefing, where the parties aren't invested enough to break a blocklist.

    The idea of the commercialized RBL, where a large company sells account-reputation services, is less feasible here. First of all, such an offering would likely be against the Twitter terms of service. Second, the target market is not mail-server admins, but individual account-holders. A far harder market to monetize.

    The true solution will have to come from Twitter itself. Either by liberalizing their ToS to allow those commercial services to develop, or developing their own reputation markets and content tools.

    by SysAdmin1138 at August 11, 2017 02:38 PM

    August 10, 2017

    Everything Sysadmin

    Evidence that Go makes it easy to get up to speed

    Some recent PRs to the DNSControl Project casually mentioned that this was their first time writing Go code. That's amazing!

    When was the last time you saw someone say, "here's a major contribution to your open source project... oh and I just learned this language." (and the PR was really darn good!) I think it is pretty rare and one of the special things about Go.

    Part of Go's original vision was to make it easy for new people to join a project and make contributions. This was important internally at Google, since engineers hop projects frequently. This also benefits open source projects by making it easy to dive in and participate.

    Here are the three PRs:

    • Add Digitalocean provider #171. DNSControl has a plug-in architecture to support new DNS Service Providers. This Go first-timer wrote an entire plugin to support Digital Ocean. "I haven't used Go before, but the diff looks sane so hopefully I managed to handle the dependencies correctly."
    • Implement SRV support for CloudFlare provider #174. Plug-ins can indicate whether or not they support new-fangled DNS records like SRV. This PR extends the CloudFlare provider to add support for the SRV record. "This is my first time writing anything in Go".
    • CAA support #132. CAA is a new DNS record type. This PR changed DNSControl to support this new record, and implements it for a few of the providers (plug-ins). "I almost never wrote Go before (and this is my first Go PR)".

    One of the joys of maintaining an open source project is watching contributors build new skills. Github.com's PR system makes it a joy to give constructive criticism and help people iterate on the code until it meets our high standards. (ProTip: Criticize the code, not the person; i.e. write "this should" instead of "you should")

    Go isn't 100 percent of why it is easy to contribute to DNSControl. We've made it easy to contribute in other ways too:

    • Extensive unit tests and integration tests. Your first contribution can be scary. If your new provider passes all the integration tests, you can be confident that it is time to make the PR. This reduces fear of embarrassment.
    • Detailed documentation on the most common and difficult tasks such as how to write a new provider and add a new DNS record type. People are encouraged to add new tests of their own (TDD-style). We also encourage people to update the "how to" docs as they use them, to make the process easier for the next person.
    • Extra-friendly code reviews. A special shout-out to Craig who is super friendly and helpful. He's happy to coach people whether they're learning DNSControl itself or Go.

    If you would like to learn more about DNSControl, and why "DNS as Code" is a better way to maintain your DNS zones, then watch our video from Usenix SRECon or check out our Github page.

    Thanks to our contributors, and to my employer StackOverflow.com, for supporting this project. And, of course, thanks to the Go community for making such an awesome language!

    by Tom Limoncelli at August 10, 2017 10:00 PM

    August 07, 2017

    LZone - Sysadmin

    Automatically Download Oracle JDK

    When downloading Oracle JDK via scripts you might run into the login page.

    While in the past it was sufficient to do something like
    wget --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz
    you now also need to skip certificate checking, as you will get redirected to an https:// URL. So when using wget you might want to run
    wget -c --no-cookies --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz
    Note the added "-c" and "--no-check-certificate" options.

    August 07, 2017 02:26 PM

    Sarah Allen

    impact matters more than intent

    Disclaimer: I work at Google. This article is based on 18 years of observing the company as an outsider, along with over a year of experience as a senior technical leader at the company.

    An internal Google document was published this weekend, where an individual articulated poorly reasoned arguments that demonstrated conscious bias, both about gender and about the skills necessary for software engineering. Demeaning generalizations and stereotypes were presented as unbiased fact. The author may not have intended to be disrespectful, yet caused great harm. Many women have spoken publicly about their frustration in reading this harmful rhetoric in their workplace.

    For many years before I joined Google, I heard stories of sexism and racism that happened inside the company. Frankly, I know of no company which is immune to this. Companies believe they have well-established social norms around respectful behavior, good management training, effective escalation paths, etc., yet these aren’t evenly distributed. In 2006, I declared that I would not work at Google because of their hiring practices. In 2016, I decided that both Google and I had changed a bit since then. I interviewed Google even more than they interviewed me. Not including the actual day of interviews, I had 21 conversations with 17 different people before deciding to work here.

    Unexpectedly, Google was the first tech company I have worked for where I didn’t routinely meet people who expressed surprise at my technical ability. There seems to be a positive bias that if you work for Google you must be technically awesome. I can’t tell whether this is universal. (I assume it isn’t. Google is a big place.) However, as evidenced by this latest rant, Google has a long way to go in creating effective social norms around discussing diversity and the efforts to make hiring and promotion fair.

    We need to be careful as we address inequities. As a woman who attended private high school and studied Computer Science at a prestigious university, I have an advantage in getting a job at any tech company over a white man who joined the military to pay for college and taught himself to code. Google could, if it wanted to, hire the very best women and people of color such that it statistically matched the demographics of the United States, and not “lower the bar” yet still be homogeneous, yielding limited benefit from its diversity efforts.

    Diversity and inclusion is not just about demographics. The lack of minorities and women at Google and most other tech companies is a sign that things aren’t quite right. We should look at diversity numbers as a signal, and then seek to address underlying problems around inclusion and respect. We need to include effective communication skills as part of selection criteria for new engineers and for promotion.

    Ex-Google engineer Yonatan Zunger wrote a thoughtful response about how communication and collaboration are critical to the work we do. I have also written more generally about how communication is a technical skill: “We usually think of software as being made of code, but it is really made of people. We imagine what the software should do, and through our code, we turn ideas into reality…. I find it confusing that anyone could even suggest that communication is a different skill that is optional for programmers. This is what we do. We write our code in programming languages. We define things and then they exist. We make real things using language.”

    I’ve worked with amazing engineers who have little else in common. The best engineers I’ve worked with solve problems differently from each other — creativity and insight combined with technical skill are the mark of a great engineer.

    The Google hiring process tests for the candidate’s ability to write code and solve problems (and to some degree talk about code). This is not enough. Google and the rest of the industry needs to set a higher bar for hiring talent. It is much harder to collaborate effectively and communicate your ideas than it is to write a basic algorithm on the whiteboard.

    In my experience, the ability to write great software is not tied to any outward trait, and discussion of biological or societal differences is a distraction from the core issue. We need people with a diversity of experience and perspectives to bring creative solutions to their work, and we need engineers who can work with people who are different from them with respect and enthusiasm.

    I know there are good people working inside of Google to make change. I applaud the publication of research on effective teamwork. This is not enough. This work of creating change for humans is much harder than the work of writing software. Smaller companies have a greater opportunity to make change faster. We each need to step up and take a leadership role at every level of our organizations.

    Impact matters more than intent.

    by sarah at August 07, 2017 05:42 AM

    August 06, 2017

    Electricmonk.nl

    Understanding Python's logging module

    I'm slightly embarrassed to say that after almost two decades of programming Python, I still didn't understand its logging module. Sure, I could get it to work, and reasonably well, but I'd often end up with unexplained situations such as double log lines or logging that I didn't want.

    >>> requests.get('https://www.electricmonk.nl')
    DEBUG:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): www.electricmonk.nl
    DEBUG:requests.packages.urllib3.connectionpool:https://www.electricmonk.nl:443 "GET / HTTP/1.1" 301 178

    What? I never asked for that debugging information?

    So I decided to finally, you know, read the documentation and experiment a bit to figure out how the logging module really works. In this article I want to bring attention to some of the misconceptions I had about the logging module. I'm going to assume you have a basic understanding of how it works and know about loggers, log levels and handlers.

    Logger hierarchy

    Loggers have a hierarchy. That is, you can create individual loggers and each logger has a parent. At the top of the hierarchy is the root logger. For instance, we could have the following loggers:

    myapp
    myapp.ui
    myapp.ui.edit

    These can be created by asking a parent logger for a new child logger:

    >>> log_myapp = logging.getLogger("myapp")
    >>> log_myapp_ui = log_myapp.getChild("ui")
    >>> log_myapp_ui.name
    'myapp.ui'
    >>> log_myapp_ui.parent.name
    'myapp'

    Or you can use dot notation:

    >>> log_myapp_ui = logging.getLogger("myapp.ui")
    >>> log_myapp_ui.parent.name
    'myapp'

    You should use the dot notation generally.

    One thing that's not immediately clear is that the logger names don't include the root logger. In actuality, the logger hierarchy looks like this:

    root.myapp
    root.myapp.ui
    root.myapp.ui.edit

    More on the root logger in a bit.

    Log levels and message propagation

    Each logger can have a log level. When you send a message to a logger, you specify the log level of the message. If the level matches, the message is then propagated up the hierarchy of loggers. One of the biggest misconceptions I had was that I thought each logger checked the level of the message and, if the level of the message was lower or equal, the logger's handler would be invoked. This is not true!

    What happens instead is that the level of the message is only checked by the logger you give the message to. If the message's level is lower or equal to the logger's, the message is propagated up the hierarchy, but none of the other loggers will check the level! They'll simply invoke their handlers.

    >>> log_myapp.setLevel(logging.ERROR)
    >>> log_myapp_ui.setLevel(logging.DEBUG)
    >>> log_myapp_ui.debug('test')
    DEBUG:myapp.ui:test

    In the example above, the root logger has a handler that prints the message. Even though the "log_myapp" logger has a level of ERROR, the DEBUG message is still propagated to the root logger. This image (found on this page) shows why:

    As you can see, when giving a message to a logger, the logger checks the level. After that, the level on the loggers is no longer checked and all handlers in the entire chain are invoked, regardless of level. Note that you can set levels on handlers as well. This is useful if you want to, for example, create a debugging log file but only show warnings and errors on the console.
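
    For example, here is a minimal sketch of that file-plus-console setup (the logger and file names are just placeholders): the logger lets everything through, and each handler applies its own level.

    import logging

    log = logging.getLogger("myapp")
    log.setLevel(logging.DEBUG)                 # let everything through at the logger

    file_handler = logging.FileHandler("debug.log")
    file_handler.setLevel(logging.DEBUG)        # full detail goes to the file

    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.WARNING)   # only warnings and errors on the console

    log.addHandler(file_handler)
    log.addHandler(console_handler)

    log.debug("only in debug.log")
    log.warning("in debug.log and on the console")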

    It's also worth noting that by default, loggers have a level of 0. This means they use the log level of the first parent logger that has an actual level set. This is determined at message-time, not when the logger is created.

    The root logger

    The logging tutorial for Python explains that to configure logging, you can use basicConfig():

    logging.basicConfig(filename='example.log',level=logging.DEBUG)
    

    It's not immediately obvious, but what this does is configure the root logger. Doing this may cause some counter-intuitive behaviour, because it causes debugging output for all loggers in your program, including every library that uses logging. This is why the requests module suddenly starts outputting debug information when you configure the root logger.

    In general, your program or library shouldn't log directly against the root logger. Instead configure a specific "main" logger for your program and put all the other loggers under that logger. This way, you can toggle logging for your specific program on and off by setting the level of the main logger. If you're still interested in debugging information for all the libraries you're using, feel free to configure the root logger. There is no convenient method such as basicConfig() to configure a main logger, so you'll have to do it manually:

    ch = logging.StreamHandler()
    formatter = logging.Formatter('%(asctime)s %(levelname)8s %(name)s | %(message)s')
    ch.setFormatter(formatter)
    
    logger = logging.getLogger('myapp')
    logger.addHandler(ch)
    logger.setLevel(logging.WARNING)  # This toggles all the logging in your app

    There are more pitfalls when it comes to the root logger. If you call any of the module-level logging methods, the root logger is automatically configured in the background for you. This goes completely against Python's "explicit is better than implicit" rule:

    #!/usr/bin/env python
    import logging
    logging.warn("uhoh")
    # Output: WARNING:root:uhoh

    In the example above, I never configured a handler. It was done automatically. And on the root handler no less. This will cause all kinds of logging output from libraries you might not want. So don't use the logging.warn(), logging.error() and other module-level methods. Always log against a specific logger instance you got with logging.getLogger().

    This has tripped me up many times, because I'll often do some simple logging in the main part of my program with these. It becomes especially confusing when you do something like this:

    #!/usr/bin/python
    import logging
    for x in foo:
        try:
            something()
        except ValueError as err:
            logging.exception(err)
            pass  # we don't care

    Now there will be no logging until an error occurs, and then suddenly the root logger is configured and subsequent iterations of the loop may start logging messages.

    The Python documentation also mentions the following caveat about using module-level loggers:

    The above module-level convenience functions, which delegate to the root logger, call basicConfig() to ensure that at least one handler is available. Because of this, they should not be used in threads, in versions of Python earlier than 2.7.1 and 3.2, unless at least one handler has been added to the root logger before the threads are started. In earlier versions of Python, due to a thread safety shortcoming in basicConfig(), this can (under rare circumstances) lead to handlers being added multiple times to the root logger, which can in turn lead to multiple messages for the same event.

    Debugging logging problems

    When I run into weird logging problems such as no output, or double lines, I generally put the following debugging code at the point where I'm logging the message.

    log_to_debug = logging.getLogger("myapp.ui.edit")
    while log_to_debug is not None:
        print "level: %s, name: %s, handlers: %s" % (log_to_debug.level,
                                                     log_to_debug.name,
                                                     log_to_debug.handlers)
        log_to_debug = log_to_debug.parent

    which outputs:

    level: 0, name: myapp.ui.edit, handlers: []
    level: 0, name: myapp.ui, handlers: []
    level: 0, name: myapp, handlers: []
    level: 30, name: root, handlers: []
    

    From this output it becomes obvious that all loggers effectively use a level of 30, since their own levels are 0, which means they look up the hierarchy for the first logger with a non-zero level. I've also not configured any handlers. If I was seeing double output, it's probably because there is more than one handler configured.

    Summary

    • When you log a message, the level is only checked at the logger you logged the message against. If it passes, every handler on every logger up the hierarchy is called, regardless of that logger's level.
    • By default, loggers have a level of 0. This means they use the log level of the first parent logger that has an actual level set. This is determined at message-time, not when the logger is created.
    • Don't log directly against the root logger. That means: no logging.basicConfig() and no usage of module-level functions such as logging.warning(), as they have unintended side-effects.
    • Create a uniquely named top-level logger for your application / library and put all child loggers under that logger. Configure a handler for output on the top-level logger for your application. Don't configure a level on your loggers, so that you can set a level at any point in the hierarchy and get logging output at that level for all underlying loggers. Note that this is an appropriate strategy for how I usually structure my programs. It might not be for you.
    • The easiest way that's usually correct is to use __name__ as the logger name: log = logging.getLogger(__name__). This uses the module hierarchy as the name, which is generally what you want. 
    • Read the entire logging HOWTO and specifically the Advanced Logging Tutorial, because it really should be called "logging basics".

    by admin at August 06, 2017 08:40 AM

    August 01, 2017

    Anton Chuvakin - Security Warrior

    Monthly Blog Round-Up – July 2017

    Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month:
    1. “Why No Open Source SIEM, EVER?” contains some of my SIEM thinking from 2009. Is it relevant now? You be the judge.  Succeeding with SIEM requires a lot of work, whether you paid for the software, or not. BTW, this post has an amazing “staying power” that is hard to explain – I suspect it has to do with people wanting “free stuff” and googling for “open source SIEM” … 
    2. “New SIEM Whitepaper on Use Cases In-Depth OUT!” (dated 2010) presents a whitepaper on select SIEM use cases described in depth with rules and reports [using now-defunct SIEM product]; also see this SIEM use case in depth and this for a more current list of popular SIEM use cases. Finally, see our 2016 research on developing security monitoring use cases here!
    3. Again, my classic PCI DSS Log Review series is extra popular! The series of 18 posts cover a comprehensive log review approach (OK for PCI DSS 3+ even though it predates it), useful for building log review processes and procedures, whether regulatory or not. It is also described in more detail in our Log Management book and mentioned in our PCI book (now in its 4th edition!) – note that this series is mentioned in some PCI Council materials. 
    4. “Simple Log Review Checklist Released!” is often at the top of this list – this aging checklist is still a very useful tool for many people. “On Free Log Management Tools” (also aged a bit by now) is a companion to the checklist (updated version)
    5. “SIEM Resourcing or How Much the Friggin’ Thing Would REALLY Cost Me?” is a quick framework for assessing the SIEM project (well, a program, really) costs at an organization (a lot more details on this here in this paper).
    In addition, I’d like to draw your attention to a few recent posts from my Gartner blog [which, BTW, now has more than 5X of the traffic of this blog]: 

    Current research on SIEM:
    Recent research on vulnerability management:
    Recent research on cloud security monitoring:
    Miscellaneous fun posts:

    (see all my published Gartner research here)
    Also see my past monthly and annual “Top Popular Blog Posts” – 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016.

    Disclaimer: most content at SecurityWarrior blog was written before I joined Gartner on August 1, 2011 and is solely my personal view at the time of writing. For my current security blogging, go here.

    Previous post in this endless series:

    by Anton Chuvakin (anton@chuvakin.org) at August 01, 2017 07:41 PM

    July 31, 2017

    pagetable

    62 Reverse-Engineered C64 Assembly Listings

    Between 1992 and 1995, I reverse engineered Commodore 64 applications by printing their disassemblies on paper and adding handwritten comments (in German). These are the PDF scans of the 62 applications, which are 552 pages total.

    File Author Date Description
    Adv. Squeezer 1995-08-19 Decompression code
    Amica Paint Schnellader Fast loader Amica Paint extension
    Bef.-Erw. Diethelm Berens BASIC extension
    Bitmap Manager Hannes Sommer 1994-10-26 AGSP map editor
    Bitstreamer 1994-11-06 Decompression code
    Blanking Screen blanker
    Creatures 2 Autostart John Rowlands 1993-05-03
    Creatures 2 Loader John Rowlands 1993-05-04
    Delta Coder Nikolaus Heusler 1994-08-25
    Drive Composer Chester B. Kollschen 1994-05-25 1541 drive head sample player, Magic Disk 07/1993
    Drive Digi Player Michael Steil 1994-05-25 1541 drive head sample player
    Drive ROMs 1995-04-19 Differences of the 1541/-II/-C/1571/1581 ROMs
    EmBa Nikolaus Heusler 1992-08-08 Emergency BASIC
    Errorline-Lister 1992-08-18 BASIC extension
    Fast Disk Set
    Fast Load Frank Deizner 1994-09-29
    Fast Save Explorer/Agony 1995-08-24
    File Copier 1993-07-22
    Final Cartridge III Freezer Loader 1994-05-01 Fast load/decompression code of the FC3 unfreeze binary
    Fred’s Back II Hannes Sommer 1994-12 Parts of the AGSP game
    GEO-RAM-Test Mario Büchle 1996-05-21 GeoRAM detection code
    GEOS Select Printer 1993-01-25 GEOS tool
    GoDot A. Dettke, W. Kling 1994-05-17 Core code, ldr.koala, svr.doodle
    Graphtool 1992
    Heureka-Sprint 1994-01 Fast loader
    IRQ Loader 2 Bit Explorer/Agony 1995-08-24 Fast loader
    Kunst aus China Autostart 1993-03-31
    MIST Entpacker Michael Steil 1995-08-21 Decompression code
    MIST Load V2 Michael Steil Fast load
    MSE V1.1 1992-08-08 Program type-in helper from 64′er Magazin
    MSE V2.1 1993-03-28 Program type-in helper from 64′er Magazin
    MTOOL Michail Popov 1992-08-14
    Mad Code VSP 1994-10-21 VSP + DYSP code from the demo “Mad Code”
    Mad Format 1995-04-20 Fast format
    Magic Basic BASIC extension
    Magic Disk Autostart 1992-07
    Master Copy 1995-09-21 Disk copy
    Master Cruncher 1994-05-25 Decompression code
    Mayhem in Monsterland VSP John Rowlands 1994-12 VSP code from the game “Mayhem in Monsterland”
    Mini-Erweiterung Marc Freese 1992-08-09
    Mini-Scan The Sir 1993-07-22 Disk error scanner
    Mini-Tris Tetris
    Movie-Scroller Ivo Herzeg 1995-01
    RAM-Check T. Pohl 1995-09-08 REU detection
    SUCK Copy 1995-04-30 File copy
    SUPRA 64 Loader
    Shapes 64 R. Löwenstein 1994-09-05 Text mode windowing
    Small Ass S. Berghofer 1993-08-07 Assembler
    Spriterob Andreas Breuer 1992
    Startprg.obj M. Hecht 1992-06
    SuperinfoIRQ Nikolaus Heusler 1992-06
    Swappers Best Copy Disk copy
    TPP’s Screensplitter Armin Beck 1995-03
    The Sampler 1994-04-30 $D418 sample player (high volume on 8580!)
    Tiny Compiler Nikolaus Heusler 1994-09-06 BASIC compiler
    Turrican 1 Autostart Manfred Trenz 1994-11-02
    Turrican 1 Loader Manfred Trenz 1994
    Turrican 2 Autostart Manfred Trenz 1994-10-03
    Vectorset M. Strelecki 1995-02-05
    Vis-Ass Mazim Szenessy 1992-08 Assembler and screen editor
    Vocabulary Schindowski 1993-01-05 FORTH library
    Warp-Load 1994-10-02 Fast loader

    by Michael Steil at July 31, 2017 03:28 PM

    July 24, 2017

    Evaggelos Balaskas

    Let's Encrypt - Auto Renewal

    Let’s Encrypt

    I’ve written some posts on Let’s Encrypt, but the most frequently asked question is how to auto-renew a certificate every 90 days.

    Disclaimer

    This is my mini how-to, on CentOS 6 with a custom-compiled Python 2.7.13 that I like to run in a virtualenv, using certbot from the latest git checkout. Not a copy/paste solution for everyone!

    Cron

    Cron does not seem to have anything useful for comparing against 90 days:

    crond.png

    Modification Time

    The most obvious answer is to look at the modification time of the Let’s Encrypt directory:

    eg. domain: balaskas.gr

    # find /etc/letsencrypt/live/balaskas.gr -type d -mtime +90 -exec ls -ld {} \;
    
    # find /etc/letsencrypt/live/balaskas.gr -type d -mtime +80 -exec ls -ld {} \;
    
    # find /etc/letsencrypt/live/balaskas.gr -type d -mtime +70 -exec ls -ld {} \;

    # find /etc/letsencrypt/live/balaskas.gr -type d -mtime +60 -exec ls -ld {} \;

    drwxr-xr-x. 2 root root 4096 May 15 20:45 /etc/letsencrypt/live/balaskas.gr
    

    OpenSSL

    # openssl x509 -in <(openssl s_client -connect balaskas.gr:443 2>/dev/null) -noout -enddate
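
    The same check can be scripted; here is a rough Python sketch that pulls the expiry date from the live certificate (the hostname is just the author's example domain):

    import socket
    import ssl
    from datetime import datetime, timezone

    def days_until_expiry(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # notAfter looks like "Aug 18 12:00:00 2017 GMT"
        not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        return (not_after.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

    print(days_until_expiry("balaskas.gr"))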

    Email

    If you have registered your email with Let’s Encrypt then you get your first email in 60 days!

    Renewal

    Here are my own custom steps:

    #  cd /root/certbot.git
    #  git pull origin 
    
    #  source venv/bin/activate
    
    #  monit stop httpd 
    
    #  ./venv/bin/certbot renew --cert-name balaskas.gr --standalone 
    
    #  monit start httpd 
    
    #  deactivate
    

    Script

    I use monit; you can edit the script according to your needs:

    #!/bin/sh
    
    DOMAIN=$1
    
    ## Update certbot
    cd /root/certbot.git
    git pull origin 
    
    # Enable Virtual Environment for python
    source venv/bin/activate
    
    ## Stop Apache
    monit stop httpd 
    
    sleep 5
    
    ## Renewal
    ./venv/bin/certbot renew  --cert-name ${DOMAIN} --standalone 
    
    ## Exit virtualenv
    deactivate 
    
    ## Start Apache
    monit start httpd
    

    All Together

    # find /etc/letsencrypt/live/balaskas.gr -type d -mtime +80 -exec /usr/local/bin/certbot.autorenewal.sh balaskas.gr \;

    Systemd Timers

    or put it on cron

    whatever :P

    Tag(s): letsencrypt

    July 24, 2017 10:03 PM

    July 22, 2017

    Everything Sysadmin

    Google's BBR fixes TCP's dirty little secret

    Networking geeks: Google made a big announcement about BBR this week. Here's a technical deep-dive: http://queue.acm.org/detail.cfm?id=3022184 (Hint: if you would read ACM Queue like I keep telling you to, you'd have known about this before all your friends.)

    Someone on Facebook asked me for a "explain it like I'm 5 years old" explanation. Here's my reply:

    Short version: Google changed the TCP implementation (their network stack) and now your youtube videos, Google websites, Google Cloud applications, etc. download a lot faster and smoother. Oh, and it doesn't get in the way of other websites that haven't made the switch. (Subtext: another feature of Google Cloud that doesn't exist at AWS or Azure. Nothing to turn on, no extra charge.)

    ELI5 version: TCP tries to balance the need to be fast and fair. Fast... transmitting data quickly. Fair... don't hog the internet, share the pipe. Being fair is important. In fact, it is so important that most TCP implementations use a "back off" algorithm that results in you getting about 1/2 the bandwidth of the pipe... even if you are the only person on it. That's TCP's dirty little secret: it under-utilizes your network connection by as much as 50%.

    Backoff schemes that use more than 1/2 the pipe tend to crowd out other people, thus are unfair. So, in summary, the current TCP implementations prioritize fairness over good utilization. We're wasting bandwidth.

    Could we do better? Yes. There are better backoff algorithms but they are so much work that they are impractical. For years researchers have tried to make better schemes that are easy to compute. (As far back as the 1980s researchers built better and better simulations so they could experiment with different backoff schemes.)

    Google is proposing a new backoff algorithm called BBR. It has reached the holy grail: It is more fair than existing schemes. If a network pipe only has one user, they basically use the whole thing. If many users are sharing a pipe, it shares it fairly. You get more download speed over the same network. Not only that, it doesn't require changes to the internet, just the sender.

    And here's the really amazing part: it works if you implement BBR on both the client and the server, but it works pretty darn well if you only change the sender's software (i.e. Google updated their web frontends and you don't have to upgrade your PC). Wait! Even more amazing is that it doesn't ruin the internet if some people use it and some people use the old methods.

    They've been talking about it for nearly a year at conferences and stuff. Now they've implemented it at www.google.com, youtube.com, and so on. You get less "buffering.... buffering..." even on mobile connections. BBR is enabled "for free" for all Google Cloud users.

    With that explanation, you can probably read the ACM article a bit easier. Here's the link again: http://queue.acm.org/detail.cfm?id=3022184

    Disclaimer: I don't own stock in Google, Amazon or Microsoft. I don't work for any of them. I'm an ex-employee of Google. I use GCP, AWS and Azure about equally (nearly zero).

    by Tom Limoncelli at July 22, 2017 07:07 PM

    July 14, 2017

    Evaggelos Balaskas

    Install Slack Desktop to Archlinux

    How to install the Slack desktop client on Arch Linux

    Download Slack Desktop

    eg. latest version

    https://downloads.slack-edge.com/linux_releases/slack-2.6.3-0.1.fc21.x86_64.rpm

    Extract under root filesystem

    # cd /

    # rpmextract.sh slack-2.6.3-0.1.fc21.x86_64.rpm

    Done

    Actually, that’s it!

    Run

    Run slack-desktop as a regular user:

    $ /usr/lib/slack/slack

    Slack Desktop

    slackdesktop.jpg

    Proxy

    Define your proxy settings on your environment:

    declare -x ftp_proxy="proxy.example.org:8080"
    declare -x http_proxy="proxy.example.org:8080"
    declare -x https_proxy="proxy.example.org:8080"

    Slack

    slackdesktop2.jpg

    Tag(s): slack

    July 14, 2017 10:28 AM

    July 12, 2017

    Debian Administration

    Implementing two factor authentication with perl

    Two factor authentication is a system which is designed to improve security, by requiring a second factor, in addition to a username/password pair, to access a particular resource. Here we'll look briefly at how you add two factor support to your applications with Perl.

    by Steve at July 12, 2017 05:23 PM

    July 10, 2017

    pagetable

    80 Columns Text on the Commodore 64

    The text screen of the Commodore 64 has a resolution of 40 by 25 characters, based on the hardware text mode of the VIC-II video chip. This is a step up from the VIC-20′s 22 characters per line, but since computers in the professional segment (Commodore PET 8000 series, CP/M, MS-DOS) usually had 80 columns, several solutions – both hardware and software – exist to allow 80 columns on a C64 as well. Let’s look at how this is done in software! At the end of this article, I present a fast and full-featured open source implementation with several different character sets.

    Regular 40×25 Mode

    First, we need to understand how 40 columns are done. The VIC-II video chip has dedicated support for a text mode. There is a 1000 (= 40 * 25) byte “Screen RAM”, each byte of which contains the character to be displayed at that location in the form of an index into the 2 KB character set, which contains 8 bytes (for 8×8 pixels) for each of the 256 characters. In addition, there is a “Color RAM”, which contains 1000 (again 40 * 25) 4-bit values, one representing the color of each character on the screen.

    Putting a character onto the screen is quite trivial: Just write the index of it to offset column + 40 * line into Screen RAM, and its color to the same offset in Color RAM. An application can load its own character set, but the Commodore 64 already comes with two character sets in ROM: One with uppercase characters and lots of graphical symbols (“GRAPHICS”), and one with upper and lower case (“TEXT”). You can switch these by pressing the Commodore and the Shift key at the same time.

    Bitmap Mode

    There is no hardware support for an 80 column mode, but such a mode can be implemented in software by using bitmap mode. In bitmap mode, all 320 by 200 pixels on the screen can be freely addressed through an 8000 byte bitmap, which contains one bit for every pixel on the screen. Luckily for us, the layout of the bitmap in memory is not linear, but is reminiscent of the text mode encoding: The first 8 bytes in Bitmap RAM don’t describe, as you would expect, the leftmost 64 pixels on the first line. Instead, they describe the top left 8×8 block. The next 8 bytes describe the 8×8 block to the right of it, and so on.

    This is the same layout as the character set’s: The first 8 bytes correspond to the first character, the next 8 bytes to the second character and so on. Drawing an 8×8 character onto a bitmap (aligned to the 8×8 grid) is as easy as copying 8 consecutive bytes.

    This is what an 8×8 font looks like in memory:

    0000 ··████··  0008 ···██···  0010 ·█████··
    0001 ·██··██·  0009 ··████··  0011 ·██··██·
    0002 ·██·███·  000a ·██··██·  0012 ·██··██·
    0003 ·██·███·  000b ·██████·  0013 ·█████··
    0004 ·██·····  000c ·██··██·  0014 ·██··██·
    0005 ·██···█·  000d ·██··██·  0015 ·██··██·
    0006 ··████··  000e ·██··██·  0016 ·█████··
    0007 ········  000f ········  0017 ········
    

    For an 80 column screen, every character is 4×8 pixels. So we could describe the character set like this:

    0000 ········  0008 ········  0010 ········
    0001 ··█·····  0009 ··█·····  0011 ·██·····
    0002 ·█·█····  000a ·█·█····  0012 ·█·█····
    0003 ·███····  000b ·███····  0013 ·██·····
    0004 ·███····  000c ·█·█····  0014 ·█·█····
    0005 ·█······  000d ·█·█····  0015 ·█·█····
    0006 ··██····  000e ·█·█····  0016 ·██·····
    0007 ········  000f ········  0017 ········
    

    Every 4×8 character on the screen is either in the left half or the right half of an 8×8 block, so drawing a 4×8 character is as easy as copying the bit pattern into the 8×8 block – and shifting it 4 bits to the right for characters at odd positions.
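
    As a rough model of the addressing and masking just described (Python used only to illustrate the arithmetic; the glyph holds its 4 pixels in the high nibble, like the 4×8 charset shown above):

    def draw_char_80col(bitmap, glyph, col, row):
        # bitmap: the 8000-byte C64 bitmap; glyph: 8 bytes, pixels in the high nibble.
        block = row * 40 + col // 2              # which 8x8 block on the 40x25 grid
        offset = block * 8                       # 8 consecutive bytes per block
        right_half = col & 1                     # odd columns land in the low nibble
        for i in range(8):
            bits = glyph[i] >> 4 if right_half else glyph[i] & 0xF0
            keep = 0xF0 if right_half else 0x0F  # preserve the neighbouring character
            bitmap[offset + i] = (bitmap[offset + i] & keep) | bits

    bitmap = bytearray(8000)
    glyph = bytes([0x00, 0x20, 0x50, 0x70, 0x70, 0x40, 0x30, 0x00])  # first glyph of the 4x8 set above
    draw_char_80col(bitmap, glyph, 79, 24)       # bottom-right character cell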

    Color

    In bitmap mode, it is only possible to use two out of the 16 colors per 8×8 block, because there are only 1000 (40 * 25) entries for the color matrix. This is a problem, since we need three colors per 8×8: Two for the two characters and one for the background. We will have to compromise: Whenever a character gets drawn into an 8×8 block and the other character in the block has a different color, that other character will be changed to the same color as the new character.

    Scrolling

    Scrolling on a text screen is easy: 960 bytes of memory have to be copied to move the character indexes to their new location. In bitmap mode, 7680 bytes have to be copied – 8 times more. Even with the most optimized implementation (73ms, about 3.5 frames), scrolling will be slower, and tearing artifacts are unavoidable.

    Character Set

    Creating a 4×8 character set that is both readable and looks good is not easy. There has to be a one-pixel gap between characters, so characters can effectively only be 3 pixels wide. For characters like “M” and “N”, this is a challenge.

    These are the character sets of four different software solutions for 80 columns:

    COLOR 80 by Richvale Telecommunications
      

    80COLUMNS
      

    SCREEN-80 by Compute’s Gazette
      

    Highspeed80 by CKtwo
      

    Some observations:

    • Highspeed80 and SCREEN-80 have capitals of a height of 7 pixels (more detail, but very tight vertical spacing, making it hard to read), while COLOR 80 uses only 5 pixels (more square like the original 8×8 font, but less detail). 6 pixels, as used by 80COLUMNS, seems like a good compromise.
    • 80COLUMNS has the empty column at the left, which makes single characters in reverse mode more readable, since most characters have their stems at the left.
    • Except for Highspeed80, the graphical symbols are very similar between the different character sets.
    • All four character sets use the same strategy to distinguish between “M” and “N”.

    The Editor

    The “EDITOR” is the part of C64′s ROM operating system (“KERNAL”) that handles printing characters to the screen (and interpreting control characters), as well as converting on-screen contents back into a PETSCII string – yes, text input on CBM computers is done by feeding the keyboard input directly into character output, and reading back the screen contents when the user presses the return key. This way, the user can use the cursor keys to navigate to existing text anywhere on the screen (even to the output of previous commands), edit it, and have the system interpret it as input when pressing return.

    The C64′s EDITOR can only deal with 40 columns (but has a very nice feature that allows using two 40 character lines as one virtual 80 character input line), and has no idea how to draw into a bitmap, so a software 80 characters solution basically has to provide a complete reimplementation of this KERNAL component.

    The KERNAL provides user vectors for lots of its functionality, so both character output, and reading back characters from the screen can be hooked (vectors IBSOUT at $0326 and IBASIN at $0324). In addition to drawing characters into the bitmap, the character codes have to be cached in a 80×25 virtual Screen RAM, so the input routine can read them back.

    The PETSCII code contains control codes for changing the current color, moving the cursor, clearing the screen, turning reverse on and off, and switching between the “GRAPHICS” and “TEXT” character sets. The new editor has to provide code to interpret these. There are two special cases though: When in quote mode (the user is typing text between quotes) or insert mode (the user has typed shift+delete), most special characters show up as inverted graphical characters instead of being interpreted. This way, control characters can be included e.g. in strings in BASIC programs.

    There are two functions though that cannot be intercepted through vectors: Applications (and BASIC programs) change the screen background color by writing the color’s value to $d020, since there is no KERNAL function or BASIC command for it, and the KERNAL itself switches between the two character sets (when the user presses the Commodore and the Shift key at the same time) by directly writing to $d018. The only way to intercept these is by hooking the timer interrupt vector and detecting a change in these VIC-II registers. If the background color has changed, the whole 1000 byte color matrix for bitmap mode has to be updated, and if the character set has changed, the whole screen has to be redrawn.

    The Implementation

    I looked at all existing software implementations I could find and concluded that “80COLUMNS” (by an unknown author) had the best design and was the only one to implement the full feature set of the original EDITOR. I reverse-engineered it into structured, easy to read code, added Ilker Ficicilar’s fast scrolling patch as well as my own minor cleanups, fixes and optimizations.

    https://www.github.com/mist64/80columns

    The project requires cc65 and exomizer to build. Running make will produce 80columns-compressed.prg, which is about 2.2 KB in size and can be started using LOAD/RUN.

    The source contains several character sets (charset.s, charset2.s etc.) from different 80 column software solutions, which can be selected by changing the reference to the filename in the Makefile.

    The object code resides at $c800-$cfff. The two character sets are located at $d000-$d7ff. The virtual 80×25 Screen RAM (in order to read back screen contents) is at $c000-$c7ff. The bitmap is at $e000-$ff40, and the color matrix for bitmap mode is at $d800-$dbe8. All this lies beyond the top of BASIC RAM, so BASIC continues to have 38911 bytes free.
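
    A quick sanity check of that last number: everything listed above sits at $c000 or higher, so BASIC's program area from $0801 up to $a000 is untouched, which is exactly the stock figure:

    # BASIC program text lives from $0801 up to (but not including) $a000;
    # all 80columns data sits at $c000 and above, so none of this area is lost.
    basic_start, basic_end = 0x0801, 0xA000
    print(basic_end - basic_start)   # 38911, the familiar "BASIC BYTES FREE" figure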

    In order to speed up drawing, the character set contains all characters duplicated like this:

    0000 ········  0008 ········  0010 ········
    0001 ··█···█·  0009 ··█···█·  0011 ·██··██·
    0002 ·█·█·█·█  000a ·█·█·█·█  0012 ·█·█·█·█
    0003 ·███·███  000b ·███·███  0013 ·██··██·
    0004 ·███·███  000c ·█·█·█·█  0014 ·█·█·█·█
    0005 ·█···█··  000d ·█·█·█·█  0015 ·█·█·█·█
    0006 ··██··██  000e ·█·█·█·█  0016 ·██··██·
    0007 ········  000f ········  0017 ········
    

    This way, the drawing code only has to mask the value instead of shifting it. In addition, parts of character drawing and all of screen scrolling are using unrolled loops for performance.
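
    Here is a minimal sketch of the masking trick in Python (the real routine is unrolled 6502 assembly; the data layout follows the dump above):

    # Each stored glyph row carries the same 4-pixel pattern in both nibbles,
    # so selecting the left or right half of a bitmap byte needs only an AND.
    LEFT_MASK, RIGHT_MASK = 0xF0, 0x0F

    def draw_row(bitmap_byte, glyph_row, left_half):
        if left_half:
            return (bitmap_byte & RIGHT_MASK) | (glyph_row & LEFT_MASK)
        return (bitmap_byte & LEFT_MASK) | (glyph_row & RIGHT_MASK)

    # Example: put the row pattern 0b00100010 into the right half of a byte
    # whose left half already holds another character.
    print(bin(draw_row(0b11110000, 0b00100010, left_half=False)))  # 0b11110010
    # Without the duplicated nibbles, the same operation would also need a
    # shift by four, which is comparatively expensive on a 6502.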

    Contributions to the project are very welcome. It would be especially interesting to add new character sets, both existing 4×8 fonts from other projects (including hinted TrueType fonts!), and new ones that combine the respective strengths of the existing ones.

    80×33 Mode?

    Reducing the size of characters to 4×6 would allow a text mode resolution of 80×33 characters. Novaterm 10 has an implementation. At this resolution, logical characters don’t end at vertical 8×8 boundaries any more, making color impossible, and the drawing routine a little slower. It would be interesting to add an 80×33 mode as a compile time option to “80columns”.
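
    For reference, the arithmetic behind those numbers, assuming the standard 320×200 hi-res bitmap:

    print(320 // 4)   # 80 columns with 4-pixel-wide glyphs
    print(200 // 6)   # 33 rows with 6-pixel-tall glyphs (2 pixel rows left over)
    # Color attributes apply per 8x8 bitmap cell; 6-pixel-tall characters no
    # longer line up with those cells, which is why color becomes impossible.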

    by Michael Steil at July 10, 2017 09:45 PM

    That grumpy BSD guy

    OpenBSD and the modern laptop

    Did you think that OpenBSD is suitable only for firewalls and high-security servers? Think again. Here are my steps to transform a modern mid to high range laptop into a useful Unix workstation with OpenBSD.

    One thing that never ceases to amaze me is that whenever I'm out and about with my primary laptop at conferences and elsewhere geeks gather, a significant subset of the people I meet have a hard time believing that my laptop runs OpenBSD, and that it's the only system installed.

    A typical exchange runs something like,
    "So what system do you run on that laptop there?"
    "It's OpenBSD. xfce is the window manager, and on this primary workstation I tend to just upgrade from snapshot to snapshot."
    "Really? But ..."
    and then it takes a bit of demonstrating that yes, the graphics runs with the best available resolution the hardware can offer, the wireless network is functional, suspend and resume does work, and so forth. And of course, yes, I do use that system when writing books and articles too. Apparently heavy users of other free operating systems do not always run them on their primary workstations.

    I'm not sure at what time I permanently converted my then-primary workstation to run OpenBSD exclusively, but I do remember that when I took delivery of the ThinkPad R60 (mentioned in this piece) in 2006, the only way forward was to install the most recent OpenBSD snapshot. By mid-2014 the ThinkPad SL500 started falling to pieces, and its replacement was a Multicom Ultrabook W840, manufactured by Clevo. The Clevo Ultrabook has weathered my daily abuse and being dragged to various corners of the world for conferences well, but on the trek to BSDCan 2017 cracks started appearing in the glass on the display and the situation worsened on the return trip.

    So the time came to shop around for a replacement. After a bit of shopping around I came back to Multicom, a small computers and parts supplier outfit in rural Åmli in southern Norway, the same place I had sourced the previous one.

    One of the things that attracted me to that particular shop and their own-branded offerings is that they will let you buy those computers with no operating system installed. That is of course what you want to do when you source your operating system separately, as we OpenBSD users tend to do.

    The last time around I had gone for a "Thin and lightweight" 14 inch model (Thickness 20mm, weight 2.0kg) with 16GB RAM, 240GB SSD for system disk and 1TB HD for /home (since swapped out for a same-size SSD, as the dmesg will show).

    Three years later, the rough equivalent with some added oomph for me to stay comfortable for some years to come landed me with a 13.3 inch model, 18mm thick and advertised as 1.3kg (but actually weighing in at 1.5kg, possibly due to extra components), 32GB RAM, 512GB SSD and 2TB harddisk. For now the specification can be viewed online here (the site language is Norwegian, but product names and units of measure are not in fact different).

    That system arrived today, in a slender box:



    Here are the two machines, the old (2014-vintage) and the new side by side:



    The OpenBSD installer is a wonder of straightforward, no-nonsense simplicity that simply gets the job done. Even so, if you are not yet familiar with OpenBSD, it is worth spending some time reading the OpenBSD FAQ's installation guidelines and the INSTALL.$platform file (in our case, INSTALL.amd64) to familiarize yourself with the procedure. If you're following this article to the letter and will be installing a snapshot, it is worth reading the notes on following -current too.

    The main hurdle back when I was installing the 2014-vintage 14" model was getting the system to consider the SSD, which showed up as sd1, the automatic choice for booting (I solved that by removing the MBR, setting the size of the MBR on the hard drive that showed up as sd0 to 0 and enlarging the OpenBSD part to fill the entire drive).

    Let's see how the new one is configured, then. I tried running with the default UEFI "Secure boot" option enabled, and it worked.

    Here we see the last part of the messages that scroll across the screen when the new laptop boots from the USB thumbdrive that has had the most recent OpenBSD/amd64 install61.fs dd'ed onto it:



    And as the kernel messages showed us during boot (yes, that scrolled off the top before I got around to taking the picture), the SSD came up as sd1 while the hard drive registered as sd0. Keep that in mind for later.



    After the initial greeting from the installer, the first menu asks what we want to do. This is a new system, so only (A)utoinstall and (I)nstall would have any chance of working. I had not set up for automatic install this time around, so choosing (I)nstall was the obvious thing to do.

    The next item the installer wants to know is which keyboard layout to use and to set as the default on the installed system. I'm used to using Norwegian keyboards, so no is the obvious choice for me here. If you want to see the list of available options, you press ? and then choose the one you find the most suitable.

    Once you've chosen the keyboard layout, the installer prompts you for the system's host name. This is only the host part; the domain part comes later. I'm sure your site or organization has some kind of policy in place for choice of host names. Make sure you stay inside any local norms; the one illustrated here conforms with what we have at our site.

    Next up the installer asks which network interfaces to configure. A modern laptop such as this one comes with at least two network interfaces: a wireless interface, in this case an Intel part that is supported in OpenBSD with the iwm(4) driver, and a wired gigabit ethernet interface which the installer kernel recognized as re0.

    Quite a few pieces of the hardware in a typical modern laptop require the operating system to load firmware onto the device before it can start interacting meaningfully with the kernel. The Intel wireless network parts supported by the iwm(4) driver and the earlier iwn(4) all have that requirement. However, for some reason the OpenBSD project has not been granted permission to distribute the Intel firmware files, so with only the OpenBSD installer it is not possible to use iwm(4) devices during an initial install. So in this initial round I only configure the re0 interface. During the initial post-install boot, the rc.firsttime script will run the fw_update(1) command, which will identify devices that require firmware files and download them from the most convenient OpenBSD firmware mirror site.

    My network here has a DHCP server in place, so I simply choose the default dhcp for IPv4 address assignment and autoconf for IPv6.

    With the IPv4 and IPv6 addresses set, the installer prompts for the domain name. Once again, the choice was not terribly hard in my case.



    On OpenBSD, root is a real user, and you need to set that user's password even if you will rarely if ever log in directly as root. You will need to type that password twice, and as the install documentation states, the installer will only check that the passwords match. It's up to you to set a usefully strong password, and this too is one of the things organizations are likely to have specific guidelines for.

    Once root's password is set, the installer asks whether you want to start sshd(8) by default. The default is the sensible yes, but if you answer no here, the installed system will not have any services listening on the system's reachable interfaces.

    The next question is whether the machine will run the X Window System. This is a laptop with a "Full HD" display and well supported hardware to drive it, so the obvious choice here is yes.

    I've gotten used to running with xenodm(1) display manager and xfce as the windowing environment, so the question about xenodm is a clear yes too, in my case.

    The next question is whether to create at least one regular user during the install. Creating a user for your systems administrator during install has one important advantage: the user you create at this point will be a member of the wheel group, which makes it slightly easier to move to other privilege levels via doas(1) or similar.

    Here I create a user for myself, and it is added, behind the scenes, to the wheel group.

    With a user in place, it is time to decide whether root will be able to log in via ssh. The sensible default is no, which means you too should just press enter here.

    The installer guessed correctly for my time zone, so it's another Enter to move forward.

    Next up is the part that people have traditionally found the most scary in OpenBSD installing: Disk setup.

    If the machine had come with only one storage device, this would have been a no-brainer. But I have a fast SSD that I want to use as the system disk, and a slightly slower and roomier rotating rust device aka hard disk that I want primarily as the /home partition.

    I noted during the bsd.rd boot that the SSD came up as sd1 and the hard drive came up as sd0, so we turn to the SSD (sd1) first.

    Since the system successfully booted with the "Secure boot" options in place, I go for the Whole disk GPT option and move on to setting partition sizes.

    The default suggestion for disk layout makes a lot of sense and will set sensible mount options, but I will be storing /home on a separate device, so I choose the (E)dit auto layout option and use the R for Resize option to redistribute the space left over to the other partitions.

    Here is also where you decide the size of the swap space, traditionally on the boot device's b partition. Both crashdumps and suspend to disk use swap space for their storage needs, so if you care about any of these, you will need to allocate at least as much space as the amount of physical RAM installed in the system. Because I could, I allocated the double of that, or 64GB.

    For sd0, I once again choose the Whole disk GPT option and make one honking big /home partition for myself.

    The installer then goes on to create the file systems, and returns with the prompt to specify where to find install sets.

    The USB drive that I dd'ed the install61.fs image to is the system's third sd device (sd2), so choosing disk and specifying sd2 with the subdirectory 6.1/amd64 makes sense here. On the other hand, if your network and the path to the nearest mirror is fast enough, you may actually save time choosing a http network install over installing from a relatively slow USB drive.

    Anyway, the sets install proceeds and trundles through what is likely the longest period of forced inactivity that you will have during an OpenBSD install.

    The installer verifies the signed sets and installs them.



    Once the sets install is done, you get the offer of specifying more sets -- your site could have site-specific items in an install set -- but I don't have any of those handy, so I just press enter to accept the default done.

    If you get the option to correct system time here, accept it and have ntpd(8) set your system clock to a sane setting gleaned from well known NTP servers.

    With everything else in place, the installer links the kernel with a unique layout, in what is right now a -current-only feature, but one that will most likely be one of the more talked-about items in the OpenBSD 6.2 release some time in the not too distant future.

    With all items on the installer's agenda done, the installer exits and leaves you at a root shell prompt where the only useful action is to type reboot and press enter. Unless, of course, you have specific items you know will need to be edited into the configuration before the reboot.

    After completing the reboot, the system unfortunately did not immediately present the xenodm login screen as expected, but rather the text login prompt.

    Looking at the /var/log/Xorg.0.log file pointed to driver problems, but after a little web searching on the obvious keywords, I found this gist note from notable OpenBSD developer Reyk Flöter that gave me the things to paste into my /etc/xorg.conf to yield a usable graphics display for now.

    My task for this evening is to move my working environment to new hardware, so after install there are really only two items remaining, in no particular order:
    • move my (too large) accumulation of /home/ data to the new system, and
    • install the same selection of packages on the old machine to the new system.
    The first item will take longer, so I shut down all the stuff I normally have running on the laptop such as web browsers, editors and various other client programs, and use pkg_info(1) to create the list of installed packages on the 'from' system:

    $ pkg_info -mz >installed_packages

    then I transfer the installed_packages file to the fresh system, but not before recording the df -h status of the pristine fresh install:

    $ df -h
    Filesystem     Size    Used   Avail Capacity  Mounted on
    /dev/sd1a     1005M   76.4M    878M     8%    /
    /dev/sd0d      1.8T    552K    1.7T     0%    /home
    /dev/sd1d     31.5G   12.0K   29.9G     0%    /tmp
    /dev/sd1f     98.4G    629M   92.9G     1%    /usr
    /dev/sd1g      9.8G    177M    9.2G     2%    /usr/X11R6
    /dev/sd1h      108G    218K    103G     0%    /usr/local
    /dev/sd1k      9.8G    2.0K    9.3G     0%    /usr/obj
    /dev/sd1j     49.2G    2.0K   46.7G     0%    /usr/src
    /dev/sd1e     98.4G    5.6M   93.5G     0%    /var

    Not directly visible here is the amount of swap configured in the sd1b partition. As I mentioned earlier, crashdumps and suspend to disk both use swap space for their storage needs, so if you care about any of these, you will need to allocate at least as much space as the amount of physical RAM installed in the system. Because I could, I allocated the double of that, or 64GB.

    I also take a peek at the old system's /etc/doas.conf and enter the same content on the new system to get the same path to higher privilege that I'm used to having. With those in hand, recreating the set of installed packages on the fresh system is then a matter of a single command:

    $ doas pkg_add -l installed_packages

    and pkg_add(1) proceeds to fetch and install the same packages I had on the old system.

    Then there is the matter of transferring the I-refuse-to-admit-the-actual-number-of gigabytes that make up the content of my home directory. In many environments it would make sense to just restore from the most recent backup, but in my case where the source and destination sit side by side, I chose to go with a simple rsync transfer:

    $ rsync  -rcpPCavu 192.168.103.69:/home/peter . | tee -a 20170710-transferlog.txt

    (Yes, I'm aware that I could have done something similar with nc and tar, which are both in the base system. But rsync wins by being more easily resumable.)

    While the data transfers, there is ample time to check for parts of the old system's configuration that should be transferred to the new one. Setting up the hostname.iwm0 file to hold the config for the wireless networks (see the hostname.if man page) by essentially copying across the previous one is an obvious thing, and this is the time when you discover tweaks you love that were not part of that package's default configuration.

    Some time went by while the content transferred, and I can now announce that I'm typing away on the machine that is at the same time both the most lightweight and the most powerful machine I have ever owned.

    I am slowly checking and finding that the stuff I care about just works, though I haven't bothered to check whether the webcam works yet. I know you've been dying to see the dmesg, which can be found here. I'm sure I'll get to the bottom of the 'not configured' items (yes, there are some) fairly soon. Look for updates that will be added to the end of this column.

    And after all these years, I finally have a machine that matches my beard color:



    If you have any questions on running OpenBSD as a primary working environment, I'm generally happy to answer but in almost all cases I would prefer that you use the mailing lists such as misc@openbsd.org or the OpenBSD Facebook group so the question and hopefully useful answers become available to the general public. Browsing the slides for my recent OpenBSD and you user group talk might be beneficial if you're not yet familiar with the system. And of course, comments on this article are welcome.


    Update 2017-07-18: One useful thing to do once you have your system up and running is to submit your dmesg to the NYCBUG dmesg database. The one for the system described here is up as http://dmesgd.nycbug.org/index.cgi?do=view&id=3227.

    Update 2017-08-18: Ars Technica reviews the same model, in a skinnier configuration, with a focus on Linux, in Review: System76’s Galago Pro solves “just works” Linux’s Goldilocks problem.

    Update 2017-08-24: After questions about how the OpenBSD installer handles UEFI and the 'Secure boot' options, I recorded the output of fdisk -v in this file, which I hope clears up any ambiguity left by the original article.

    by Peter N. M. Hansteen (noreply@blogger.com) at July 10, 2017 07:53 PM

    July 07, 2017

    Anton Chuvakin - Security Warrior

    Monthly Blog Round-Up – June 2017

    Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month:
    1. “Why No Open Source SIEM, EVER?” contains some of my SIEM thinking from 2009. Is it relevant now? You be the judge. Succeeding with SIEM requires a lot of work, whether you paid for the software, or not. BTW, this post has an amazing “staying power” that is hard to explain – I suspect it has to do with people wanting “free stuff” and googling for “open source SIEM” … 
    2. “Simple Log Review Checklist Released!” is often at the top of this list – this aging checklist is still a very useful tool for many people. “On Free Log Management Tools” (also aged a bit by now) is a companion to the checklist (updated version)
    3. “New SIEM Whitepaper on Use Cases In-Depth OUT!” (dated 2010) presents a whitepaper on select SIEM use cases described in depth with rules and reports [using now-defunct SIEM product]; also see this SIEM use case in depth and this for a more current list of popular SIEM use cases. Finally, see our 2016 research on developing security monitoring use cases here!
    4. This month, my classic PCI DSS Log Review series is extra popular! The series of 18 posts cover a comprehensive log review approach (OK for PCI DSS 3+ even though it predates it), useful for building log review processes and procedures, whether regulatory or not. It is also described in more detail in our Log Management book and mentioned in our PCI book (now in its 4th edition!) – note that this series is mentioned in some PCI Council materials. 
    5. “SIEM Resourcing or How Much the Friggin’ Thing Would REALLY Cost Me?” is a quick framework for assessing the SIEM project (well, a program, really) costs at an organization (a lot more details on this here in this paper).
    In addition, I’d like to draw your attention to a few recent posts from my Gartner blog [which, BTW, now has more than 5X of the traffic of this blog]: 

    Current research on vulnerability management:
    Current research on cloud security monitoring:
    Recent research on security analytics and UBA / UEBA:
    Miscellaneous fun posts:

    (see all my published Gartner research here)
    Also see my past monthly and annual “Top Popular Blog Posts” – 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016.

    Disclaimer: most content at SecurityWarrior blog was written before I joined Gartner on August 1, 2011 and is solely my personal view at the time of writing. For my current security blogging, go here.

    Previous post in this endless series:

    by Anton Chuvakin (anton@chuvakin.org) at July 07, 2017 06:35 PM

    July 03, 2017

    Vincent Bernat

    Performance progression of IPv4 route lookup on Linux

    TL;DR: Each of Linux 2.6.39, 3.6 and 4.0 brings notable performance improvements for the IPv4 route lookup process.


    In a previous article, I explained how Linux implements an IPv4 routing table with compressed tries to offer excellent lookup times. The following graph shows the performance progression of Linux through history:

    IPv4 route lookup performance

    Two scenarios are tested:

    • 500,000 routes extracted from an Internet router (half of them are /24), and
    • 500,000 host routes (/32) tightly packed in 4 distinct subnets.

    All kernels are compiled with GCC 4.9 (from Debian Jessie). This version is able to compile older kernels¹ as well as current ones. The kernel configuration used is the default one with CONFIG_SMP and CONFIG_IP_MULTIPLE_TABLES options enabled (however, no IP rules are used). Some other unrelated options are enabled to be able to boot them in a virtual machine and run the benchmark.

    The measurements are done in a virtual machine with one vCPU². The host is an Intel Core i5-4670K and the CPU governor was set to “performance”. The benchmark is single-threaded. Implemented as a kernel module, it calls fib_lookup() with various destinations in 100,000 timed iterations and keeps the median. Timings of individual runs are computed from the TSC (and converted to nanoseconds by assuming a constant clock).
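
    The measurement strategy itself is easy to sketch; the following Python fragment mirrors the approach (timed iterations, keep the median) but is only an illustration, since the real benchmark is a kernel module calling fib_lookup() and reading the TSC:

    import statistics, time

    def benchmark(lookup, destinations, iterations=100_000):
        timings = []
        for i in range(iterations):
            dst = destinations[i % len(destinations)]
            start = time.perf_counter_ns()
            lookup(dst)                     # the routine under test
            timings.append(time.perf_counter_ns() - start)
        return statistics.median(timings)   # median filters out scheduling noise

    # Example with a trivial stand-in "lookup" and two IPv4 addresses as integers:
    print(benchmark(lambda dst: dst & 0xFFFFFF00, [0x0A000001, 0xC0A80101]))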

    The following kernel versions bring a notable performance improvement:

    • In Linux 2.6.39, commit 3630b7c050d9, David Miller removes the hash-based routing table implementation to switch to the LPC-trie implementation (available since Linux 2.6.13 as a compile-time option). This brings a small regression for the scenario with many host routes but boosts the performance for the general case.

    • In Linux 3.0, commit 281dc5c5ec0f, the improvement is not related to the network subsystem. Linus Torvalds disables the compiler size-optimization from the default configuration. It was believed that optimizing for size would help keeping the instruction cache efficient. However, compilers generated under-performing code on x86 when this option was enabled.

    • In Linux 3.6, commit f4530fa574df, David Miller adds an optimization to not evaluate IP rules when they are left unconfigured. From this point, the use of the CONFIG_IP_MULTIPLE_TABLES option doesn’t impact the performances unless some IP rules are configured. This version also removes the route cache (commit 5e9965c15ba8). However, this has no effect on the benchmark as it directly calls fib_lookup() which doesn’t involve the cache.

    • In Linux 4.0, notably commit 9f9e636d4f89, Alexander Duyck adds several optimizations to the trie lookup algorithm. It really pays off!

    • In Linux 4.1, commit 0ddcf43d5d4a, Alexander Duyck collapses the local and main tables when no specific IP rules are configured. For non-local traffic, both those tables were looked up.


    1. Compiling old kernels with an updated userland may still require some small patches

    2. The kernels are compiled with the CONFIG_SMP option to use the hierarchical RCU and activate more of the same code paths as actual routers. However, progress on parallelism is left unnoticed. 

    by Vincent Bernat at July 03, 2017 01:25 PM

    July 02, 2017

    Cryptography Engineering

    Beyond public key encryption

    One of the saddest and most fascinating things about applied cryptography is how little cryptography we actually use. This is not to say that cryptography isn’t widely used in industry — it is. Rather, what I mean is that cryptographic researchers have developed so many useful technologies, and yet industry on a day to day basis barely uses any of them. In fact, with a few minor exceptions, the vast majority of the cryptography we use was settled by the early-2000s.*

    Most people don’t sweat this, but as a cryptographer who works on the boundary of research and deployed cryptography it makes me unhappy. So while I can’t solve the problem entirely, what I can do is talk about some of these newer technologies. And over the course of this summer that’s what I intend to do: talk. Specifically, in the next few weeks I’m going to write a series of posts that describe some of the advanced cryptography that we don’t generally see used.

    Today I’m going to start with a very simple question: what lies beyond public key cryptography? Specifically, I’m going to talk about a handful of technologies that were developed in the past 20 years, each of which allows us to go beyond the traditional notion of public keys.

    This is a wonky post, but it won’t be mathematically-intense. For actual definitions of the schemes, I’ll provide links to the original papers, and references to cover some of the background. The point here is to explain what these new schemes do — and how they can be useful in practice.

    Identity Based Cryptography

    In the mid-1980s, a cryptographer named Adi Shamir proposed a radical new idea. The idea, put simply, was to get rid of public keys.

    To understand where Shamir was coming from, it helps to understand a bit about public key encryption. You see, prior to the invention of public key crypto, all cryptography involved secret keys. Dealing with such keys was a huge drag. Before you could communicate securely, you needed to exchange a secret with your partner. This process was fraught with difficulty and didn’t scale well.

    Public key encryption (beginning with Diffie-Hellman and Shamir’s RSA cryptosystem) hugely revolutionized cryptography by dramatically simplifying this key distribution process. Rather than sharing secret keys, users could now transmit their public key to other parties. This public key allowed the recipient to encrypt to you (or verify your signature) but it could not be used to perform the corresponding decryption (or signature generation) operations. That part would be done with a secret key you kept to yourself.

    While the use of public keys improved many aspects of using cryptography, it also gave rise to a set of new challenges. In practice, it turns out that having public keys is only half the battle — people still need a way to distribute them securely.

    For example, imagine that I want to send you a PGP-encrypted email. Before I can do this, I need to obtain a copy of your public key. How do I get this? Obviously we could meet in person and exchange that key on physical media — but nobody wants to do this. It would be much more desirable to obtain your public key electronically. In practice this means either (1) we have to exchange public keys by email, or (2) I have to obtain your key from a third piece of infrastructure, such as a website or key server. And now we come to the problem: if that email or key server is untrustworthy (or simply allows anyone to upload a key in your name), I might end up downloading a malicious party’s key by accident. When I send a message to “you”, I’d actually be encrypting it to Mallory.

    Mallory.

    Solving this problem — of exchanging public keys and verifying their provenance — has motivated a huge amount of practical cryptographic engineering, including the entire web PKI. In most cases, these systems work well. But Shamir wasn’t satisfied. What if, he asked, we could do it better? More specifically, he asked: could we replace those pesky public keys with something better?

    Shamir’s idea was exciting. What he proposed was a new form of public key cryptography in which the user’s “public key” could simply be their identity. This identity could be a name (e.g., “Matt Green”) or something more precise like an email address. Actually, it didn’t really matter. What did matter was that the public key would be some arbitrary string — and not a big meaningless jumble of characters like “7cN5K4pspQy3ExZV43F6pQ6nEKiQVg6sBkYPg1FG56Not”.

    Of course, using an arbitrary string as a public key raises a big problem. Meaningful identities sound great — but I don’t own them. If my public key is “Matt Green”, how do I get the corresponding private key? And if I can get out that private key, what stops some other Matt Green from doing the same, and thus reading my messages? And ok, now that I think about this, what stops some random person who isn’t named Matt Green from obtaining it? Yikes. We’re headed straight into Zooko’s triangle.

    Shamir’s idea thus requires a bit more finesse. Rather than expecting identities to be global, he proposed a special server called a “key generation authority” that would be responsible for generating the private keys. At setup time, this authority would generate a single master public key (MPK), which it would publish to the world. If you wanted to encrypt a message to “Matt Green” (or verify my signature), then you could do so using my identity and the single MPK of an authority we’d both agree to use. To decrypt that message (or sign one), I would have to visit the same key authority and ask for a copy of my secret key. The key authority would compute my key based on a master secret key (MSK), which it would keep very secret.

    With all algorithms and players specified, the whole system looks like this:

    Overview of an Identity-Based Encryption (IBE) system. The Setup algorithm of the Key Generation Authority generates the master public key (MPK) and master secret key (MSK). The authority can use the Extract algorithm to derive the secret key corresponding to a specific ID. The encryptor (left) encrypts using only the identity and MPK. The recipient requests the secret key for her identity, and then uses it to decrypt. (Icons by Eugen Belyakoff)
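
    To make the data flow concrete, here is a mock of the four algorithms in Python. The cryptographic operations are replaced by trivial placeholders (a hash, an HMAC and a plain dictionary), so this only illustrates who holds what and when; it is not a real IBE scheme such as Boneh-Franklin:

    import os, hmac, hashlib

    class KeyGenerationAuthority:
        """Runs Setup once, then Extract on demand for each identity."""
        def setup(self):
            self.msk = os.urandom(32)                    # master secret key, kept private
            return hashlib.sha256(self.msk).hexdigest()  # stand-in master public key (MPK)

        def extract(self, identity):
            # Per-identity secret key derived from the MSK and the identity string.
            return hmac.new(self.msk, identity.encode(), hashlib.sha256).hexdigest()

    def encrypt(mpk, identity, message):
        # The sender needs only the MPK and the recipient's identity; no key lookup.
        return {"to": identity, "body": message}         # placeholder "ciphertext"

    def decrypt(identity_key, ciphertext):
        return ciphertext["body"]                        # placeholder "decryption"

    authority = KeyGenerationAuthority()
    mpk = authority.setup()
    ct = encrypt(mpk, "matt@example.org", "hi")   # works before the recipient has any key
    sk = authority.extract("matt@example.org")    # recipient contacts the authority later
    print(decrypt(sk, ct))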

    This design has some important advantages — and more than a few obvious drawbacks. On the plus side, it removes the need for any key exchange at all with the person you’re sending the message to. Once you’ve chosen a master key authority (and downloaded its MPK), you can encrypt to anyone in the entire world. Even cooler: at the time you encrypt, your recipient doesn’t even need to have contacted the key authority yet. She can obtain her secret key after I send her a message.

    Of course, this “feature” is also a bug. Because the key authority generates all the secret keys, it has an awful lot of power. A dishonest authority could easily generate your secret key and decrypt your messages. The polite way to say this is that standard IBE systems effectively “bake in” key escrow.**

    Putting the “E” in IBE

    All of these ideas and more were laid out by Shamir in his 1984 paper. There was just one small problem: Shamir could only figure out half the problem.

    Specifically, Shamir proposed a scheme for identity-based signature (IBS) — a signature scheme where the public verification key is an identity, but the signing key is generated by the key authority. Try as he might, he could not find a solution to the problem of building identity-based encryption (IBE). This was left as an open problem.***

    It would take more than 16 years before someone answered Shamir’s challenge. Surprisingly, when the answer came it came not once but three times.

    The first, and probably most famous realization of IBE was developed by Dan Boneh and Matthew Franklin much later — in 2001. The timing of Boneh and Franklin’s discovery makes a great deal of sense. The Boneh-Franklin scheme relies fundamentally on elliptic curves that support an efficient “bilinear map” (or “pairing”).**** The algorithms needed to compute such pairings were not known when Shamir wrote his paper, and weren’t employed constructively — that is, as a useful thing rather than an attack — until about 2000. The same can be said about a second scheme called Sakai-Kasahara, which would be independently discovered around the same time.

    (For a brief tutorial on the Boneh-Franklin IBE scheme, see this page.)

    The third realization of IBE was not as efficient as the others, but was much more surprising. This scheme was developed by Clifford Cocks, a senior cryptologist at Britain’s GCHQ. It’s noteworthy for two reasons. First, Cocks’ IBE scheme does not require bilinear pairings at all — it is based in the much older RSA setting, which means in principle it spent all those years just waiting to be found. Second, Cocks himself had recently become known for something even more surprising: discovering the RSA cryptosystem, nearly five years before RSA did. To bookend that accomplishment with a second major advance in public key cryptography was a pretty impressive accomplishment.

    In the years since 2001, a number of additional IBE constructions have been developed, using all sorts of cryptographic settings. Nonetheless, Boneh and Franklin’s early construction remains among the simplest and most efficient.

    Even if you’re not interested in IBE for its own sake, it turns out that this primitive is really useful to cryptographers for many things beyond simple encryption. In fact, it’s more helpful to think of IBE as a way of “pluralizing” a single public/secret master keypair into billions of related keypairs. This makes it useful for applications as diverse as blocking chosen ciphertext attacks, forward-secure public key encryption, and short signature schemes.

    Attribute Based Encryption

    Of course, if you leave cryptographers alone with a tool like IBE, the first thing they’re going to do is find a way to make things more complicated, er, improve on it.

    One of the biggest such improvements is due to Sahai and Waters. It’s called Attribute-Based Encryption, or ABE.

    The origin of this idea was not actually to encrypt with attributes. Instead Sahai and Waters were attempting to develop an Identity-Based encryption scheme that could encrypt using biometrics. To understand the problem, imagine I decide to use a biometric like your iris scan as the “identity” to encrypt you a ciphertext. Later on you’ll ask the authority for a decryption key that corresponds to your own iris scan — and if everything matches up, you’ll be able to decrypt.

    The problem is that this will almost never work.

    The issue here is that biometric readings (like iris scans or fingerprint templates) are inherently error-prone. This means every scan will typically be very close, but often there will be a few bits that disagree. With standard IBE this is fatal: if the encryption identity differs from your key identity by even a single bit, decryption will not work. You’re out of luck.

    Tell me this isn’t giving you nightmares.

    Sahai and Waters decided that the solution to this problem was to develop a form of IBE with a “threshold gate”. In this setting, each bit of the identity is represented as a different “attribute”. Think of each of these as components you’d encrypt under — something like “bit 5 of your iris scan is a 1” and “bit 23 of your iris scan is a 0”. The encrypting party lists all of these bits and encrypts under each one. The decryption key generated by the authority embeds a similar list of bit values. The scheme is defined so that decryption will work if and only if the number of matching attributes (between your key and the ciphertext) exceeds some pre-defined threshold: e.g., any 2024 out of 2048 bits must be identical in order to decrypt.

    The beautiful thing about this idea is not fuzzy IBE. It’s that once you have a threshold gate and a concept of “attributes”, you can do more interesting things. The main observation is that a threshold gate can be used to implement the boolean AND and OR gates, like so:
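
    A toy rendering of that idea, treating decryption as a predicate over attribute sets; this only illustrates the gate logic, it is not an ABE construction:

    def threshold(key_attrs, ct_attrs, k):
        # Decryption "works" when at least k ciphertext attributes appear in the key.
        return len(set(key_attrs) & set(ct_attrs)) >= k

    def AND(key_attrs, ct_attrs):
        return threshold(key_attrs, ct_attrs, k=len(set(ct_attrs)))  # all must match

    def OR(key_attrs, ct_attrs):
        return threshold(key_attrs, ct_attrs, k=1)                   # any one suffices

    key = {"pediatrician", "hospital-staff"}
    print(AND(key, {"pediatrician", "hospital-staff"}))      # True: both attributes held
    print(AND(key, {"pediatrician", "insurance-adjuster"}))  # False: one attribute missing
    print(OR(key, {"insurance-adjuster", "pediatrician"}))   # True: one match is enough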

    Even better, you can stack these gates on top of one another to express a fairly complex boolean formula — which will itself determine what conditions your ciphertext can be decrypted under. For example, switching to a more realistic set of attributes, you could encrypt a medical record so that either a pediatrician in a hospital could read it, or an insurance adjuster could. All you’d need is to make sure people received keys that correctly described their attributes (which are just arbitrary strings, like identities).

    A simple “ciphertext policy”, in which the ciphertext can be decrypted if and only if a key matches an appropriate set of attributes. In this case, the key satisfies the formula and thus the ciphertext decrypts. The remaining key attributes are ignored.

    The other direction can be implemented as well. It’s possible to encrypt a ciphertext under a long list of attributes, such as creation time, file name, and even GPS coordinates indicating where the file was created. You can then have the authority hand out keys that correspond to a very precise slice of your dataset — for example, “this key decrypts any radiology file encrypted in Chicago between November 3rd and December 12th that is tagged with ‘pediatrics’ or ‘oncology'”.

    Functional Encryption

    Once you have a set of related primitives like IBE and ABE, the researchers’ instinct is to both extend and generalize. Why stop at simple boolean formulae? Can we make keys (or ciphertexts) that embed arbitrary computer programs? The answer, it turns out, is yes — though not terribly efficiently. A set of recent works show that it is possible to build ABE that works over arbitrary polynomial-size circuits, using various lattice-based assumptions. So there is certainly a lot of potential here.

    This potential has inspired researchers to generalize all of the above ideas into a single class of encryption called “functional encryption”. Functional encryption is more conceptual than concrete — it’s just a way to look at all of these systems as instances of a specific class. The basic idea is to represent the decryption procedure as an algorithm that computes an arbitrary function F over (1) the plaintext inside of a ciphertext, and (2) the data embedded in the key. This function has the following profile:

    output = F(key data, plaintext data)

    In this model IBE can be expressed by having the encryption algorithm encrypt (identity, plaintext) and defining the function F such that, if “key input == identity”, it outputs the plaintext, and outputs an empty string otherwise. Similarly, ABE can be expressed by a slightly more complex function. Following this paradigm, one can envision all sorts of interesting functions that might be computed by different functions and realized by future schemes.
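
    As a concrete toy instance of that profile, here is IBE written as such a function F, following the description above:

    def F_ibe(key_data, plaintext_data):
        key_identity = key_data
        ct_identity, plaintext = plaintext_data   # the encryptor encrypted (identity, plaintext)
        return plaintext if key_identity == ct_identity else ""

    print(F_ibe("matt@example.org", ("matt@example.org", "secret")))     # "secret"
    print(F_ibe("mallory@example.org", ("matt@example.org", "secret")))  # ""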

    But those will have to wait for another time. We’ve gone far enough for today.

    So what’s the point of all this?

    For me, the point is just to show that cryptography can do some pretty amazing things. We rarely see this on a day-to-day basis when it comes to industry and “applied” cryptography, but it’s all there waiting to be used.

    Perhaps the perfect application is out there. Maybe you’ll find it.

    Notes:

    * An earlier version of this post said “mid-1990s”. In comments below, Tom Ristenpart takes issue with that and makes the excellent point that many important developments post-date that. So I’ve moved the date forward about five years, and I’m thinking about how to say this in a nicer way.

    ** There is also an intermediate form of encryption known as “certificateless encryption”. Proposed by Al-Riyami and Paterson, this idea uses a combination of standard public key encryption and IBE. The basic idea is to encrypt each message using both a traditional public key (generated by the recipient) and an IBE identity. The recipient must then obtain a copy of the secret key from the IBE authority to decrypt. The advantages here are twofold: (1) the IBE key authority can’t decrypt the message by itself, since it does not have the corresponding secret key, which solves the “escrow” problem. And (2) the sender does not need to verify that the public key really belongs to the recipient (e.g., by checking a certificate), since the IBE portion prevents imposters from decrypting the resulting message. Unfortunately this system is more like traditional public key cryptography than IBE, and does not have the neat usability features of IBE.

    *** A part of the challenge of developing IBE is the need to make a system that is secure against “collusion” between different key holders. For example, imagine a very simple system that has only 2-bit identities. This gives four possible identities: “00”, “01”, “10”, “11”. If I give you a key for the identity “01” and I give Bob a key for “10”, we need to ensure that you two cannot collude to produce a key for identities “00” and “11”. Many earlier proposed solutions have tried to solve this problem by gluing together standard public encryption keys in various ways (e.g., by having a separate public key for each bit of the identity and giving out the secret keys as a single “key”). However these systems tend to fail catastrophically when just a few users collude (or their keys are stolen). Solving the collusion problem is fundamentally what separates real IBE from its faux cousins.
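
    To make the failure concrete, here is a toy rendering of the naive per-bit approach the footnote describes, showing how two key holders can mix and match their pieces (the key strings are purely illustrative):

    # One independent secret key per (position, bit-value) pair; an identity's
    # "key" is just the collection of per-bit keys for its own bits.
    alice = {0: "sk[pos0=0]", 1: "sk[pos1=1]"}   # key for identity "01"
    bob   = {0: "sk[pos0=1]", 1: "sk[pos1=0]"}   # key for identity "10"

    forged_00 = {0: alice[0], 1: bob[1]}   # a working key for identity "00"
    forged_11 = {0: bob[0], 1: alice[1]}   # a working key for identity "11"
    print(forged_00, forged_11)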

    **** A full description of Boneh and Franklin’s scheme can be found here, or in the original paper. Some code is here and here and here. I won’t spend more time on it, except to note that the scheme is very efficient. It was patented and implemented by Voltage Security, now part of HPE.


    by Matthew Green at July 02, 2017 03:28 PM

    June 19, 2017

    Sarah Allen

    customer-driven development

    Sometimes when I talk about customer use cases for software that I’m building, it confuses the people I work with. What does that have to do with engineering? Is my product manager out on leave and I’m standing in?

    I’ve found that customer stories connect engineers to the problems we’re solving. When we’re designing a system, we ask ourselves “what if” questions all the time. We need to explore the bounds of the solutions we’re creating, understanding the edge cases. We consider the performance characteristics, scalability requirements and all sorts of other important technical details. Through all this we imagine how the system will interact. It is easier when the software has a human interface, when it interacts with regular people. It is harder, but just as important, when we write software that only interacts with other software systems.

    Sometimes the software that we are building can seem quite unrelated from the human world, but it isn’t at all. We then need to understand the bigger system we’re building. At some point, the software has a real-world impact, and we need to understand that, or we can miss creating the positive effects we intend.

    On many teams over many years, I’ve had the opportunity to work with other engineers who get this. When it works there’s a rhythm to it, a heartbeat that I stop hearing consciously because it is always there. We talk to customers early and often. We learn about their problems, the other technologies they use, and what is the stuff they understand that we don’t. We start describing our ideas for solutions in their own words. This is not marketing. This influences what we invent. Understanding how to communicate about the thing we’re building changes what we build. We imagine this code we will write, that calls some other code, which causes other software to do a thing, and through all of the systems and layers there is some macro effect, which is important and time critical. Our software may have the capability of doing a thousand things, but we choose the scenarios for performance testing, we decide what is most normal, most routine, and that thing needs to be tied directly to those real effects.

    Sometimes we refer to “domain knowledge” if our customers have special expertise, and we need to know that stuff, at least a little bit, so we can relate to our customers (and it’s usually pretty fun to explore someone else’s world a bit). However, the most important thing our customers know, that we need to discover, is what will solve their problems. They don’t know it in their minds — what they describe that they think will solve their problems often doesn’t actually. They know it when they feel it, when they somehow interact with our software and it gives them agency and amplifies their effect in the world.

    Our software works when it empowers people to solve their problems and create impact. We can measure that. We can watch that happen. For those of us who have experienced that as software engineers, there’s no going back. We can’t write software any other way.

    Customer stories, first hand knowledge from the people whose problems I’m solving spark my imagination, but I’m careful to use those stories with context from quantitative data. I love the product managers who tell me about rapidly expanding markets and how they relate to the use cases embedded in those stories, and research teams who validate whether the story I heard the other day is common across our customers or an edge case. We build software on a set of assumptions. Those assumptions need to be based on reality, and we need to validate early and often whether the thing we’re making is actually going to have a positive effect on reality.

    Customer stories are like oxygen to a development team that works like this. Research and design teams who work closely with product managers are like water. When other engineers can’t explain the customer use cases for their software, when they argue about what the solution should be based only on the technical requirements, sometimes I can’t breathe. Then I find the people who are like me, who work like this, and I can hear a heartbeat, I can breathe again, and it feels like this thing we are making just might be awesome.

    by sarah at June 19, 2017 01:08 PM

    June 18, 2017

    Debian Administration

    Debian Stretch Released

    Today the Debian project is pleased to announce the release of the next stable release of Debian GNU/Linux, code-named Stretch.

    by Steve at June 18, 2017 03:44 AM

    June 14, 2017

    Colin Percival

    Oil changes, safety recalls, and software patches

    Every few months I get an email from my local mechanic reminding me that it's time to get my car's oil changed. I generally ignore these emails; it costs time and money to get this done (I'm sure I could do it myself, but the time it would cost is worth more than the money it would save) and I drive little enough — about 2000 km/year — that I'm not too worried about the consequences of going for a bit longer than nominally advised between oil changes. I do get oil changes done... but typically once every 8-12 months, rather than the recommended 4-6 months. From what I've seen, I don't think I'm alone in taking a somewhat lackadaisical approach to routine oil changes.

    June 14, 2017 04:40 AM