Planet SysAdmin


May 25, 2016

Racker Hacker

Test Fedora 24 Beta in an OpenStack cloud

Although there are a few weeks remaining before Fedora 24 is released, you can test out the Fedora 24 Beta release today! This is a great way to get a sneak peek at new features and help find bugs that still need a fix.
The Fedora Cloud image is available for download from your favorite local mirror or directly from Fedora’s servers. In this post, I’ll show you how to import this image into an OpenStack environment and begin testing Fedora 24 Beta.

One last thing: this is beta software. It has been reliable for me so far, but your experience may vary. I would recommend waiting for the final release before deploying any mission critical applications on it.

Importing the image

The older glance client (version 1) allows you to import an image from a URL that is reachable from your OpenStack environment. This is helpful since my OpenStack cloud has a much faster connection to the internet (1 Gbps) than my home does (~20 Mbps upload speed). However, the functionality to import from a URL was removed in version 2 of the glance client. The OpenStackClient doesn’t offer the feature either.
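
For reference, the v1-era workflow looked roughly like this (a hedged sketch: it assumes the old glance client is installed and talking to the v1 API, and the URL is just the mirror link used later in this post):

$ glance --os-image-api-version 1 image-create --name "Fedora 24 Cloud Beta" --disk-format qcow2 --container-format bare --copy-from http://mirrors.kernel.org/fedora/releases/test/24_Beta/CloudImages/x86_64/images/Fedora-Cloud-Base-24_Beta-1.6.x86_64.qcow2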

There are two options here:

  • Install an older version of the glance client
  • Use Horizon (the web dashboard)

Getting an older version of the glance client installed is challenging. The OpenStack requirements file for the Liberty release leaves the glance client version without a maximum cap, and it’s difficult to get all of the dependencies in order to make the older client work.

Let’s use Horizon instead so we can get back to the reason for the post.

Adding an image in Horizon

Log into the Horizon panel and click Compute > Images. Click + Create Image at the top right of the page and a new window should appear. Add this information in the window:

  • Name: Fedora 24 Cloud Beta
  • Image Source: Image Location
  • Image Location: http://mirrors.kernel.org/fedora/releases/test/24_Beta/CloudImages/x86_64/images/Fedora-Cloud-Base-24_Beta-1.6.x86_64.qcow2
  • Format: QCOW2 – QEMU Emulator
  • Copy Data: ensure the box is checked

When you’re finished, the window should look like this:

Adding Fedora 24 Beta image in Horizon

Click Create Image and the images listing should show Saving for a short period of time. Once it switches to Active, you’re ready to build an instance.
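
If you’d rather confirm from the command line, something like this should work with a recent python-openstackclient (the image name simply matches what we typed into Horizon above):

$ openstack image show "Fedora 24 Cloud Beta" -c status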

Building the instance

Since we’re already in Horizon, we can finish out the build process there.

On the image listing page, find the row with the image we just uploaded and click Launch Instance on the right side. A new window will appear. The Image Name drop down should already have the Fedora 24 Beta image selected. From here, just choose an instance name, select a security group and keypair (on the Access & Security tab), and a network (on the Networking tab). Be sure to choose a flavor that has some available storage as well (m1.tiny is not enough).

Click Launch and wait for the instance to boot.
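
If you prefer to skip Horizon for this step, the CLI equivalent looks roughly like this (the flavor, keypair name, network UUID, and instance name below are placeholders; check openstack help server create for the exact options in your client version):

$ openstack server create --image "Fedora 24 Cloud Beta" --flavor m1.small --key-name mykey --security-group default --nic net-id=NETWORK_UUID fedora24-beta-test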

Once the instance build has finished, you can connect to the instance over ssh as the fedora user. If your security group allows the connection and your keypair was configured correctly, you should be inside your new Fedora 24 Beta instance!

Not sure what to do next? Here are some suggestions:

  • Update all packages and reboot (to ensure that you are testing the latest updates)
  • Install some familiar applications and verify that they work properly
  • Test out your existing automation or configuration management tools
  • Open bug tickets!

The post Test Fedora 24 Beta in an OpenStack cloud appeared first on major.io.

by Major Hayden at May 25, 2016 03:17 AM

May 24, 2016

Carl Chenet

Tweet your database with db2twitter

Follow me also on Diaspora* or Twitter 

You have a database (MySQL, PostgreSQL, see supported database types) and a tweet pattern, and you want to automatically tweet on a regular basis? No need for RSS, fancy tricks, a 3rd-party website to translate RSS to Twitter, or whatever. Just use db2twitter.

A quick example of a tweet generated by db2twitter:

(screenshot: an example tweet generated by db2twitter)

The new version 0.6 adds support for tweets with an image. How cool is that?

 

db2twitter is developed by and run for LinuxJobs.fr, the job board of the French-speaking Free Software and Open Source community.


db2twitter also has cool options like:

  • only tweet during user-specified times (e.g. 9AM-6PM)
  • use a user-specified SQL filter to get data from the database (e.g. only fetch rows where status == "edited")

db2twitter is coded in Python 3.4 and uses SQLAlchemy (see supported database types) and Tweepy. The official documentation is available on readthedocs.

 


by Carl Chenet at May 24, 2016 10:00 PM

Chris Siebenmann

How fast fileserver failover could matter to us

Our current generation fileservers don't have any kind of failover system, just like our original generation. A few years ago I wrote that we don't really miss failover, although I allowed that I might be overlooking situations where we'd have used failover if we had it. So, yeah, about that: on reflection, I think there is a relatively important situation where we could really use fast, reliable cooperative failover (when both the old and new hosts of a virtual fileserver are working properly).

Put simply, the advantage of fast cooperative failover is that it makes a number of things a lot less scary, because you can effectively experiment (assuming that the failover is basically user transparent). For instance, trying a new version of OmniOS in production, where it's very unlikely that it will crash outright but possible that we'll experience performance problems or other anomalies. With fast failover, we could roll a virtual fileserver on to a server running the new OmniOS, watch it, and have an immediate and low impact way out if something comes up.

(At one point this would have made our backups explode because the backups were tied to the real hosts involved. However we've changed that these days and backups are relatively easy to shift around.)

It's possible that we should take another look at failover in our current environment, since a lot of water has gone under the bridge since we last gave up on it. This also sparks a more radical thought; if we're going to use failover mostly as a way to do experiments, perhaps we should reorganize things so that some of our virtual fileservers are smaller than they are now so moving one over affects fewer people. Or at least we could have a 'sysadmin virtual fileserver' so we can test it with ourselves only at first.

(Our current overall architecture is sort of designed with the idea that a host has only one virtual fileserver and virtual fileservers don't really share disks with other ones, but we might be able to do some tweaks.)

All of this is a bit blue sky, but at the very least we should do a bit of testing to see how much time a cooperative fileserver failover might take in our current environment. I should also keep an eye out for future OmniOS changes that might improve it.

(As usual, re-checking one's core assumptions periodically is probably a good idea. Ideally we would have done some checking of this when we were initially testing each OmniOS version, but well. Hopefully next time, if there is one.)

by cks at May 24, 2016 02:25 AM

May 23, 2016

LZone - Sysadmin

Scan Linux for Vulnerable Packages

How do you know whether your Linux server (which has no desktop update notifier or unattended security updates running) needs to be updated? Of course an
apt-get update && apt-get --dry-run upgrade
might give an indication. But which of the package upgrades represent security risks, and which are only simple bugfixes you do not care about?

Check using APT

One useful possibility is apticron, which will tell you which packages should be upgraded and why. It presents you with the package changelog so you can decide whether you want to upgrade a package or not. Similar, but less detailed, is cron-apt, which also informs you of new package updates.
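
Both are a quick install away on Debian and Ubuntu:
apt-get install apticron cron-apt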

Analyze Security Advisories

Now with all those CERT newsletters, security mailing lists and even security news feeds out there: why can't we check the other way around? Why not find out:
  1. Which security advisories do affect my system?
  2. Which ones I have already complied with?
  3. And which vulnerabilities are still there?
My mad idea was to take those security news feeds (as a start I tried with the ones from Ubuntu and CentOS) and parse out the package versions and compare them to the installed packages. The result was a script producing the following output:

screenshot of lpvs-scan.pl

In the output you see lines starting with "CEBA-2012-xxxx", which is the CentOS security advisory naming scheme (while Ubuntu has USN-xxxx-x). Yellow means the security advisory doesn't apply because the relevant packages are not installed. Green means the most recent package version is installed and the advisory shouldn't affect the system anymore. Finally, red of course means that the machine is vulnerable.
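
The underlying version check is nothing magic. Stripped down to a single package on a Debian/Ubuntu system it is roughly this (the package name and fixed version are made-up examples):
fixed="1.0.1e-2+deb7u9"
installed=$(dpkg-query -W -f='${Version}' openssl 2>/dev/null)
if [ -n "$installed" ] && dpkg --compare-versions "$installed" lt "$fixed"; then
    echo "vulnerable (red)"
else
    echo "patched or not installed"
fi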

Does it Work Reliably?

The script producing this output can be found here. I'm not yet satisfied with how it works and I'm not sure if it can be maintained at all given the brittle nature of the arbitrarily formatted/rich news feeds provided by the distros. But I like how it gives a clear indication of current advisories and their effect on the system.

Maybe persuading the Linux distributions to use a common feed format with easy-to-parse metadata would be a good idea...

How do you check your systems? What do you think of a package scanner using XML security advisory feeds?

May 23, 2016 03:52 PM

Chris Siebenmann

Our problem with OmniOS upgrades: we'll probably never do any more

Our current fileserver infrastructure is running OmniOS r151014, and I have recently crystallized the realization that we will probably not upgrade it to a newer version of OmniOS over the remaining lifetime of this generation of server hardware (which I optimistically project to be another two to three years). This is kind of a problem for a number of reasons (and yes, beyond the obvious), but my pessimistic view right now is that it's an essentially intractable one for us.

The core issue with upgrades for us is that in practice they are extremely risky. Our fileservers are a core and highly visible service in our environment; downtime or problems on even a single production fileserver directly impacts the ability of people here to get their work done. And we can't even come close to completely testing a new fileserver outside of production. Over and over, we have only found problems (sometimes serious ones) under our real and highly unpredictable production load.

(We can do plenty of fileserver testing outside of production and we do, but testing can't show that production fileservers will be problem free, it can only find (some) problems before production.)

Since upgrades are risky, we need fairly strong reasons to do them. When our existing fileservers are working reasonably well, it's not clear where such strong reasons would come from (barring a few freak events, like a major ixgbe improvement, or the discovery of catastrophic bugs in ZFS or NFS service or the like). On the one hand this is a testimony to OmniOS's current usefulness, but on the other hand, well.

I don't have any answers to this. There probably really aren't any, and I'm wishing for a magic solution to my problems. Sometimes that's just how it goes.

(I'm assuming for the moment that we could do OmniOS version upgrades through new boot environments. We might not be able to, for various reasons (we couldn't last time), in which case the upgrade problem gets worse. Actual system reinstalls, hardware swaps, or other long-downtime operations crank the difficulty of selling upgrades up even more. Our round of upgrades to OmniOS r151014 took about six months from the first server to the last server, for a whole collection of reasons including not wanting to do all servers at once in case of problems.)

by cks at May 23, 2016 03:56 AM

May 22, 2016

Chris Siebenmann

My view of Barracuda's public DNSBL

In a comment on this entry, David asked, in part:

Have you tried the Barracuda and Hostkarma DNSBLs? [...]

I hadn't heard of Hostkarma before, so I don't have anything to say about it. But I am somewhat familiar with Barracuda's public DNSBL and based on my experiences I'm not likely to use it any time soon. As for why, well, David goes on to mention:

[...] Barracuda in particular lists more aggressively and is willing to punish lower volume relays that fail to mitigate spammer exploitations. [...]

That's one way to describe what Barracuda does. Another way to put it is that in my experience, Barracuda is pretty quick to list any IP address that has even a relatively brief burst of outgoing spam, regardless of the long term spam-to-ham ratio of that IP address. Or to put it another way, whenever we have one of our rare outgoing spam incidents, we can count on the outgoing IP involved to get listed and for some amount of our entirely legitimate email to start bouncing as a result.

As a result I expect that any attempt to use it in our anti-spam system would have far too high a false positive rate to be acceptable to our users. Given this I haven't attempted any sort of actual analysis of comparing sender IPs of accepted and rejected email against the Barracuda list; it's too much work for too little return.

My suspicion is that this is likely to be strongly influenced by your overall rate of ham to spam, for standard mathematical reasons. If most of your incoming email is spam anyways and you don't often receive email from places that are likely to be compromised from time to time by spammers, its misfires are not likely to matter to you. This does not describe our mail environment, however, either in ham/spam levels or in the type of sources we see.

(To put it one way, universities are reasonably likely to get one of their email systems compromised from time to time and we certainly get plenty of legitimate email from universities.)

On my personal sinkhole spamtrap, I could probably use the Barracuda list (and the psky RBL) as a decent way of getting rid of known and thus probably uninteresting sources of spam in favour of only having to deal with (more) interesting ones. But obviously this spamtrap gets only spam, so false positives are not exactly a concern. Certainly a significant number of recently trapped messages there are from IPs that are on one or the other list (and sometimes both), although obviously I'm taking a post-facto look at the hit rate.

by cks at May 22, 2016 04:56 AM

May 21, 2016

Geek and Artist - Tech

Catching Up

I haven’t posted anything for quite some time (which I feel a little bad about), so this is something of a randomly-themed catch-up post. According to my LinkedIn profile I’ve been doing this engineering management thing for about two years, which at least provides some explanation for a relative lack of technical-oriented blog posts. Of course in that time I have certainly not revoked my Github access, deleted all compilers/runtimes/IDEs/etc and entirely halted technical work, but the nature of the work of course has changed. In short, I don’t find myself doing so much “interesting” technical work that leads to blog-worthy posts and discoveries.

So what does the work look like at the moment? I’ll spare you the deep philosophical analysis – there are many, many (MANY) posts and indeed books on making the transition from a technical contributor to a team manager or lead of some sort. Right back at the beginning I did struggle with the temptation to continue coding while also delivering on my management tasks – it is difficult to do both adequately at the same time. More recently (and perhaps in my moments of less self-control) I do allow myself to do some technical contributions. These usually look like the following:

  • Cleaning up some long-standing technical debt that is getting in the way of the rest of the team being productive, but is not necessarily vital to their learning/growth or knowledge of our technology landscape.
  • Data analysis – usually ElasticSearch, Pig/Hive/Redshift/MapReduce jobs to find the answer to a non-critical but still important question.
  • Occasionally something far down the backlog that is a personal irritation for me, but is not in the critical path.
  • Something that enables the rest of the team in some way, or removes a piece of our technology stack that was previously only known by myself (i.e. removes the need for knowledge transfer)
  • Troubleshooting infrastructure (usually also coupled with data analysis).

I’d like to say I’ve been faithful to that list but I haven’t always. The most recent case was probably around a year ago I guess, when I decided I’d implement a minimum-speed data transfer monitor to our HLS server. This ended up taking several weeks and was a far more gnarly problem than I realised. The resulting code was also not of the highest standard – when you are not coding day-in and day-out, I find that my overall code quality and ability to perceive abstractions and the right model for a solution is impaired.

Otherwise, the tasks that I perhaps should be filling my day with (and this is not an exhaustive list, nor ordered, just whatever comes to mind right now) looks more like this:

  • Assessing the capacity and skills makeup of the team on a regular basis, against our backlog and potential features we’d like to deliver. Do we have the right skills and are we managing the “bus factor”? If the answer is “no” (and it almost always is), I should be hiring.
  • Is the team able to deliver? Are there any blockers?
  • Is the team happy? Why or why not? What can I do to improve things for them?
  • How are the team-members going on their career paths? How can I facilitate their personal growth and help them become better engineers?
  • What is the overall health of our services and client applications? Did we have any downtime last night? What do I need to jump on immediately to get these problems resolved? I would usually consider this the first item to check in my daily routine – if something has been down we need to get it back up and fix the problems as a matter of urgency.
  • What is the current state of our technical debt; are there any tactical or strategic processes we need to start in order to address it?
  • How are we matching up in terms of fitting in with technology standards in the rest of the organisation? Are we falling behind or leading the way in some areas? Are there any new approaches that have worked well for us that could be socialised amongst the rest of the organisation?
  • Are there any organisational pain-points that I can identify and attempt to gather consensus from my peer engineering managers? What could we change on a wider scale that would help the overall organisation deliver user value faster, or with higher quality?
  • Could we improve our testing processes?
  • How are we measuring up against our KPIs? Have we delivered something new recently that needs to be assessed for impact, and if so has it been a success or not matched up to expectations? Do we need to rethink our approach or iterate on that feature?
  • Somewhat related: have there been any OS or platform updates on any of our client platforms that might have introduced bugs that we need to address? Ideally we would be ahead of the curve and anticipate problems before they happen, but if you have a product that targets web browsers or Android phones, there are simply too many to adequately test ahead of general public releases before potential problems are discovered by the public.
  • Is there any free-range experimentation the team could be doing? Let’s have a one-day offsite to explore something new! (I usually schedule at least one offsite a month for this kind of thing, with a very loose agenda.)
  • How am I progressing on my career path? What aspects of engineering management am I perhaps not focussing enough on? What is the next thing I need to be learning?

I could probably go on and on about this for a whole day. After almost two years (and at several points before that) it is natural to question whether the engineering management track is the one I should be on. Much earlier (perhaps 6 months in) I was still quite unsure – if you are still contributing a lot of code as part of your day to day work, the answer to the question is that much harder to arrive at since you have blurred the lines of what your job description should look like. It is much easier to escape the reality of settling permanently on one side or the other.

Recently I had some conversations with people which involved talking in depth about either software development or engineering management. On the one hand, exploring the software development topics with someone, I definitely got the feeling that there was a lot I am getting progressively more and more rusty on. To get up to speed again I feel would take some reasonable effort on my part. In fact, one of the small technical debt “itches” I scratched at the end of last year was implementing a small application to consume from AWS Kinesis, do some minor massaging of the events and then inject them into ElasticSearch. I initially thought I’d write it in Scala, but the cognitive burden of learning the language at that point was too daunting. I ended up writing it in Java 8 (which I have to say is actually quite nice to use, compared to much older versions of Java) but this is not a struggle a competent engineer coding on a daily basis would typically have.

On the other hand, the conversations around engineering management felt like they could stretch on for ever. I could literally spend an entire day talking about some particular aspect of growing an organisation, or a team, or on technical decision-making (and frequently do). Some of this has been learned through trial and error, some by blind luck and I would say a decent amount through reading good books and the wonderful leadership/management training course at SoundCloud (otherwise known as LUMAS). I and many other first-time managers took this course (in several phases) starting not long after I started managing the team, and I think I gained a lot from it. Unfortunately it’s not something anyone can simply take, but at least I’d like to recommend some of the books we were given during the course – I felt I got a lot out of them as well.

  • Conscious Business by Fred Kofman. It might start out a bit hand-wavy, and feel like it is the zen master approach to leadership but if you persist you’ll find a very honest, ethical approach to business and leadership. I found it very compelling.
  • Five Dysfunctions of a Team by Patrick Lencioni. A great book, and very easy read with many compelling stories as examples – for building healthy teams. Applying the lessons is a whole different story, and I would not say it is easy by any measure. But avoiding it is also a path to failure.
  • Leadership Presence by Kathy Lubar and Belle Linda Halpern. Being honest and genuine, knowing yourself, establishing a genuine connection and empathy to others around you and many other gems within this book are essential to being a competent leader. I think this is a book I’ll keep coming back to for quite some time.

In addition I read a couple of books on Toyota and their lean approach to business (they are continually referenced in software development best practices). I have to admit that making a solid connection between all of TPS can be a challenge, and I hope to further learn about them in future and figure out which parts are actually relevant and which parts are not. There were a few other books around negotiation and other aspects of leadership which coloured my thinking but were not significant enough to list. That said, I still have something like 63 books on my wish list still to be ordered and read!

In order to remain “relevant” and in touch with technical topics I don’t want to stop programming, of course, but this will have to remain in the personal domain. To that end I’m currently taking a game programming course in Unity (so, C#) and another around 3D modelling using Blender. Eventually I’ll get back to the machine learning courses I was taking a long time ago but still need to re-take some beginner linear algebra in order to understand the ML concepts properly. Then there are a tonne of other personal projects in various languages and to various ends. I’ll just keep fooling myself that I’ll have free time for all of these things 🙂

by oliver at May 21, 2016 08:14 PM

May 18, 2016

Errata Security

Technology betrays everyone

Kelly Jackson Higgins has a story about how I hacked her 10 years ago, by sniffing her email password via WiFi and displaying it on screen. It wasn't her fault -- it was technology's fault. Sooner or later, it will betray you.

The same thing happened to me at CanSecWest around the year 2001, for pretty much exactly the same reasons. I think it was HD Moore who sniffed my email password. The thing is, I'm an expert, who writes tools that sniff these passwords, so it wasn't like I was an innocent party here. Instead, simply opening my laptop with Outlook running in the background was enough for it to automatically connect to WiFi, then connect to a POP3 server across the Internet. I thought I was in control of the evil technology -- but this incident proved I wasn't.

By 2006, though, major email services were now supporting email wholly across SSL, so that this would no longer happen -- in theory. In practice, they still left the old non-encrypted ports open. Users could secure themselves, if they tried hard, but they usually weren't secured.

Today, in 2016, the situation is much better. If you use Yahoo! Mail or GMail, you don't even have the option to not encrypt -- even if you wanted to. Unfortunately, many people are using old services that haven't upgraded, like EarthLink or Network Solutions. These were popular services during the dot-com era 20 years ago, but haven't evolved since. They spend enough to keep the lights on, but not enough to keep up with the times. Indeed, EarthLink doesn't allow encryption even if you want it, as an earlier story this year showed.

My presentation in 2006 wasn't about email passwords, but about all the other junk that leaks private information. Specifically, I discussed WiFi MAC addresses, and how they can be used to track mobile devices. Only in the last couple years have mobile phone vendors done something to change this. The latest version of iOS 9 will now randomize the MAC address, so that "they" can no longer easily track you by it.

The point of this post is this. If you are thinking "surely my tech won't harm me in stupid ways", you are wrong. It will. Even if it says on the box "100% secure", it's not secure. Indeed, those who promise the most often deliver the least. Those at the forefront of innovation (Apple, Google, and Facebook) do better than most, but even they must be treated with a healthy dose of skepticism.

So what's the answer? Paranoia and knowledge. First, never put too much faith in the tech. It's not enough, for example, for encryption to be an option -- you want encryption enforced so that unencrypted is not an option. Second, learn how things work. Learn why SSL works the way it does, why it's POP3S and not POP3, and why "certificate warnings" are a thing. The more important security is to you, the more conservative your paranoia and the more extensive your knowledge should become.


Appendix: Early this year, the EFF (Electronic Frontier Foundation) had a chart of various chat applications and their features (end-to-end, open-source, etc.). On one hand, it was good information. On the other hand, it was bad, in that it still wasn't enough for the non-knowledgeable to make a good decision. They could easily pick a chat app that had all the right features (green check-marks across the board), but still be one that could easily be defeated.

Likewise, recommendations to use "Tor" are perilous. Tor is indeed a good thing, but if you blindly trust it without understanding it, it will quickly betray you. If your adversary is the secret police of your country cracking down on dissidents, then you need to learn a lot more about how Tor works before you can trust it.


Notes: I'm sorta required to write this post, since I can't let it go unremarked that the same thing happened to me -- even though I know better.





by Robert Graham (noreply@blogger.com) at May 18, 2016 09:49 PM

May 17, 2016

[bc-log]

Creating a swap file for tiny cloud servers

A few months ago, while setting up a few cloud servers to host one of my applications, I started running into an interesting issue while building Docker containers. During the docker build execution my servers ran out of memory, causing the Docker build to fail.

The servers in question only have about 512MB of RAM and the Docker execution was using the majority of the available memory. My solution to this problem was simple, add a swap file.

A swap file is a file that lives on a file system and is used like a swap device.

While adding the swap file I thought this would make for an interesting article. Today's post is going to cover how to add a swap file to a Linux system. However, before we jump into creating a swap file, let's explore what exactly swap is and how it works.

What is swap

When Linux writes data to physical memory, or RAM, it does so in chunks called pages. Pages can vary in size from system to system, but in general a page is a small amount of data. The Linux kernel will keep these pages in physical memory for as long as it has available memory to spare and the pages are relevant.

When the kernel needs to make room for new memory pages it will move some memory pages out of physical memory, usually to a swap device. Swap devices are hard-drive partitions or files within a file system used to expand the memory capacity of a system.

A simple way to think of swap is as an overflow area for system memory. With that thought, the general rule of thumb is that memory pages being written to swap is a bad thing. In my case, however, since the only time I require swap is during the initial build of my Docker containers, swapping isn't necessarily a bad thing.

In fact, my use case is exactly the reason why swap exists. I have a process that requires more memory than what is available on my system, and I would prefer to use some disk space to perform the task rather than add memory to the system. The key is that my system does not swap during normal operations, only during the Docker build execution.

Determining if a system is Swapping

In summary, swapping is only a bad thing if it happens at unexpected times. A simple way to determine if a system is actively swapping is to run the vmstat command.

$ vmstat -n 5 5
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0    100 207964  22056 129964    0    0    14     1   16   32  0  0 100  0  0
 0  0    100 207956  22056 129956    0    0     0     0   16   26  0  0 100  0  0
 0  0    100 207956  22064 129956    0    0     0     2   16   27  0  0 100  0  0
 0  0    100 207956  22064 129956    0    0     0     0   14   23  0  0 100  0  0
 0  0    100 207956  22064 129956    0    0     0     0   14   24  0  0 100  0  0

The vmstat command can be used to show memory statistics from the system in question. In the above there are two columns, si (Swapped In) and so (Swapped Out) which are used to show the number of pages swapped in and swapped out.

Since the output above shows that the example system is neither swapping in nor swapping out memory pages, we can assume that swapping on this server is probably not an issue. The fact that at one point the system has swapped 100 KB of data does not necessarily mean there is an issue.

When does a system swap

A common misconception about swap and Linux is that the system will swap only when it is completely out of memory. While this was mostly true a long time ago, recent versions of the Linux kernel include a tunable parameter called vm.swappiness. The vm.swappiness parameter is essentially an aggressiveness setting. The higher the vm.swappiness value (0 - 100), the more aggressive the kernel will swap. When set to 0, the kernel will only swap once the system memory is below the vm.min_free_kbytes value.

What this means is that unless vm.swappiness is set to 0, the system may swap at any time, even without memory being completely used. Since the vm.swappiness value defaults to 60, which is somewhat on the aggressive side, a typical Linux system may swap without ever hitting the vm.min_free_kbytes threshold.

This aggressiveness does not often impact performance (though it can in some cases) because when the kernel decides to start swapping it will pick memory pages based on the last accessed time of those pages. What this means is that data that is frequently accessed is kept in physical memory longer, and data that is not frequently accessed is usually the primary candidate for moving to swap.
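
If you would like to see or change this behavior on your own system, the sysctl command can be used; the value of 10 below is simply an example of a less aggressive setting.

$ sysctl vm.swappiness
vm.swappiness = 60
$ sudo sysctl -w vm.swappiness=10

To make the change persist across reboots, the same setting can be added to /etc/sysctl.conf.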

Now that we have a better understanding of swap, let's add a swap device to the cloud servers in question.

Adding a swap device

Adding a swap device is fairly easy and can be done with only a few commands. Let's take a look at the memory available on the system today. We will do this with the free command, using the -m (Megabytes) flag.

$ free -m
             total       used       free     shared    buffers     cached
Mem:           489        281        208          0         17        126
-/+ buffers/cache:        136        353
Swap:            0          0          0

From the free command's output we can see that this server currently has no swap space at all. We will change that by creating a 2GB swap file.

Creating a swap file

The first step in creating a swap file is to create a file with the dd command.

$ sudo dd if=/dev/zero of=/swap bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 5.43507 s, 395 MB/s

The reason we used dd to create the file is that we can create more than a simple empty file. With the dd command above we created a 2GB file (writing 1MB chunks 2048 times) with the contents being populated from /dev/zero. This gives us the data space that will make up the swap device. Whatever size we make the file in this step is the size the swap device will be.
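
One additional step worth considering (not strictly required, but a good habit, since swap can end up holding sensitive memory contents) is to restrict the file's permissions so that only root can read it.

$ sudo chmod 600 /swap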

With the file created, we can now use the mkswap command to make the file a swap device.

$ sudo mkswap /swap
Setting up swapspace version 1, size = 2097148 KiB
no label, UUID=0d64a557-b7f9-407e-a0ef-d32ccd081f4c

The mkswap command is used to setup a file or device as a swap device. This is performed by formatting the device or file as a swap partition.

Mounting the swap device

With our file now formatted as a swap device we can start the steps to mount it. Like any other filesystem device, in order to mount our swap device we will need to specify it within the /etc/fstab file. Let's start by appending the below contents into the /etc/fstab file.

/swap   swap  swap  defaults  0 0

With the above step complete, we can now mount the swap file. To do this we will use the swapon command.

$ sudo swapon -a

The -a flag for swapon tells the command to mount and activate all defined swap devices within the /etc/fstab file. To check if our swap file was successfully mounted we can execute the swapon command again with the -s option.

$ sudo swapon -s
Filename                Type        Size    Used    Priority
/swap                                   file        2097148    0    -1

From the above we can see that /swap, the file we created is mounted and currently in use as a swap device. If we were to re-run the free command from above we should also see the swap space available.

$ free -m
             total       used       free     shared    buffers     cached
Mem:           489        442         47          0         11        291
-/+ buffers/cache:        139        350
Swap:         2047          0       2047

In the previous execution of the free command the swap size was 0 and now it is 2047 MB. This means we have roughly 2GB of swap to use with this system.

A useful thing to know about Linux is that you can have multiple swap devices. The priority value of each swap device is used to determine in what order data should be written to them. When two or more swap devices have the same priority, swap usage is balanced across them. This means you can gain some performance during times of swap usage by creating multiple swap devices on different hard-drives.
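
For example, if a second swap file named /swap2 had been created and formatted the same way, giving both files the same priority in the /etc/fstab file would balance swap usage across them (pri= is the swap priority mount option).

/swap    swap  swap  pri=10  0 0
/swap2   swap  swap  pri=10  0 0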

Got any other swap tips? Add them below in the comments.


Posted by Benjamin Cane

May 17, 2016 08:30 PM

LZone - Sysadmin

Do Not List Iptables NAT Rules Without Care

What I do not want to do ever again is running
iptables -L -t nat
on a core production server with many many connections.

And why?

Well, because running "iptables -L -t nat" auto-loads the table-specific iptables kernel module, which for the "nat" table is "iptable_nat", and that in turn has a dependency on "nf_conntrack".

While "iptable_nat" doesn't do anything when there are no configured iptables rules, "nf_conntrack" immediately starts to drop connections because it cannot handle the many, many connections the server has.

Probably the only safe way to check for NAT rules is:
grep -q ^nf_conntrack /proc/modules && iptables -L -t nat

May 17, 2016 04:05 PM

Racker Hacker

Troubleshooting OpenStack network connectivity

NOTE: This post is a work in progress. If you find something that I missed, feel free to leave a comment. I’ve made plenty of silly mistakes, but I’m sure I’ll make a few more. :)


Completing a deployment of an OpenStack cloud is an amazing feeling. There is so much automation and power at your fingertips as soon as you’re finished. However, the mood quickly turns sour when you create that first instance and it never responds to pings.

It’s the same feeling I get when I hang Christmas lights every year only to find that a whole section didn’t light up. If you’ve ever seen National Lampoon’s Christmas Vacation, you know what I’m talking about:

(image: Chevy Chase in National Lampoon's Christmas Vacation)

I’ve stumbled into plenty of problems (and solutions) along the way and I’ll detail them here in the hopes that it can help someone avoid throwing a keyboard across the room.

Security groups

Security groups get their own section because I forget about them constantly. Security groups are a great feature that lets you limit inbound and outbound access to a particular network port.

However, OpenStack’s default settings are fairly locked down. That’s great from a security perspective, but it can derail your first instance build if you’re not thinking about it.

You have two options to allow traffic:

  • Add more permissive rules to the default security group
  • Create a new security group and add appropriate rules into it

I usually ensure that ICMP traffic is allowed into any port with the default security group applied, and then I create another security group specific to the class of server I’m building (like webservers). Changing a security group rule or adding a new security group to a port takes effect in a few seconds.
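
As an example, allowing ICMP into the default group looks something like this with a reasonably recent python-openstackclient (older deployments used the equivalent neutron security-group-rule-create command):

$ openstack security group rule create --protocol icmp --ingress default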

Something is broken in the instance

Try to get console access to the instance through Horizon or via the command line tools. I generally find an issue in one of these areas:

  • The IP address, netmask, or default gateway are incorrect
  • Additional routes should have been applied, but were not applied
  • Cloud-init didn’t run, or it had a problem when it ran
  • The default iptables policy in the instance is overly restrictive
  • The instance isn’t configured to bring up an interface by default
  • Something is preventing the instance from getting a DHCP address

If the network configuration looks incorrect, cloud-init may have had a problem during startup. Look in /var/log/ or in journald for any explanation of why cloud-init failed.

There’s also the chance that the network configuration is correct, but the instance can’t get a DHCP address. Verify that there are no iptables rules in place on the instance that might block DHCP requests and replies.

Some Linux distributions don’t send gratuitous ARP packets when they bring an interface online. Tools like arping can help with these problems.
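
If you suspect that's the problem, sending a gratuitous ARP from inside the instance is a quick test (this assumes the iputils arping and an interface named eth0):

$ sudo arping -c 3 -U -I eth0 INSTANCE_IP_ADDRESS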

If you find that you can connect to almost anything from within the instance, but you can’t connect to the instance from the outside, verify your security groups (see the previous section). In my experience, a lopsided ingress/egress filter almost always points to a security group problem.

Something is broken in OpenStack’s networking layer

Within the OpenStack control plane, the nova service talks to neutron to create network ports and manage addresses on those ports. One of the requests or responses may have been lost along the way or you may have stumbled into a bug.

If your instance couldn’t get an IP address via DHCP, make sure the DHCP agent is running on the server that has your neutron agents. Restarting the agent should bring the DHCP server back online if it isn’t running.

You can also hop into the network namespace that the neutron agent uses for your network. Start by running:

# ip netns list

Look for a namespace that starts with qdhcp- and ends with your network’s UUID. You can run commands inside that namespace to verify that networking is functioning:

# ip netns exec qdhcp-NETWORK_UUID ip addr
# ip netns exec qdhcp-NETWORK_UUID ping INSTANCE_IP_ADDRESS

If your agent can ping the instance’s address, but you can’t ping the instance’s address, there could be a problem on the underlying network — either within the virtual networking layer (bridges and virtual switches) or on the hardware layer (between the server and upstream network devices).

Try to use tcpdump to dump traffic on the neutron agent and on the instance’s network port. Do you see any traffic at all? You may find a problem with incorrect VLAN IDs here, or you may see activity that gives you more clues (like one half of an ARP or DHCP exchange).
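
A capture filter that only watches ARP and DHCP keeps the noise down. Something like this is usually enough to spot a missing half of an exchange (the interface name is just an example):

# tcpdump -nne -i eth1 'arp or (udp port 67 or udp port 68)'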

Something is broken outside of OpenStack

Diagnosing these problems can become a bit challenging since it involves logging into other systems.

If you are using VLAN networks, be sure that the proper VLAN ID is being set for the network. Run openstack network show and look for provider:segmentation_id. If that’s correct, be sure that all of your servers can transmit packets with that VLAN tag applied. I often remember to allow tagged traffic on all of the hypervisors and then I forget to do the same in the control plane.
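
For example (the network name is a placeholder):

$ openstack network show my-vlan-network -c provider:segmentation_id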

Be sure that your router has the VLAN configured and has the correct IP address configuration applied. It’s possible that you’ve configured all of the VLAN tags correctly in all places, but then fat-fingered an IP address in OpenStack or on the router.

While you’re in the router, test some pings to your instance. If you can ping from the router to the instance, but not from your desk to the instance, your router might not be configured correctly.

For instances on private networks, ensure that you created a router on the network. This is something I tend to forget. Also, be sure that you have the right routes configured between you and your OpenStack environment so that you can route traffic to your private networks through the router. If this isn’t feasible for you, another option could be OpenStack’s VPN-as-a-service feature.

Another issue could be the cabling between servers and the nearest switch. If a cable is crossed, it could mean that a valid VLAN is being blocked at the switch because it’s coming in on the wrong port.

When it’s something else

There are some situations that aren’t covered here. If you think of any, please leave a comment below.

As with any other troubleshooting, I go back to this quote from Dr. Theodore Woodward about diagnosing illness in the medical field:

When you hear hoofbeats, think of horses not zebras.

Look for the simplest solutions and work from the smallest domain (the instance) to the widest (the wider network). Make small changes and go back to the instance each time to verify that something changed. Once you find the solution, document it! Someone will surely appreciate it later.

The post Troubleshooting OpenStack network connectivity appeared first on major.io.

by Major Hayden at May 17, 2016 02:43 AM

May 15, 2016

Raymii.org

Ansible - Add an apt-repository on Debian and Ubuntu

This is a guide that shows you how to add an apt repository to Debian and Ubuntu using Ansible. It includes both the old way, when the apt modules only worked on Ubuntu, and the new way, now that the apt-modules also support Debian, plus some other tricks.

May 15, 2016 12:00 AM

May 12, 2016

Everything Sysadmin

Augeas: Updating of config files without ruining comments and formatting.

Wouldn't it be nice if you could write a program that could reach into an Apache config file (or an AptConf file, or an /etc/aliases file, Postfix master.cf, sshd/ssh config, sudoers, Xen conf, yum or other) make a change, and not ruin the comments and other formatting that exists?

That's what Augeas permits you to do. If a config file's format has been defined in the Augeas "lens" language, you can then use Augeas to parse the file, pull out the data you want, and add, change or delete elements too. When Augeas saves the file it retains all comments and formatting. Any changes you made retain the formatting you'd expect.

Augeas can be driven from a command-line tool (augtool) or via the library. You can use the library from Ruby, Puppet, and other systems. There is a project to rewrite Puppet modules so that they use Augeas (augeasproviders.com/providers)
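
A quick augtool session gives a feel for it. A minimal sketch, assuming sshd_config actually contains a PermitRootLogin line and the stock Sshd lens is in place:

$ sudo augtool
augtool> print /files/etc/ssh/sshd_config/PermitRootLogin
/files/etc/ssh/sshd_config/PermitRootLogin = "yes"
augtool> set /files/etc/ssh/sshd_config/PermitRootLogin no
augtool> save
Saved 1 file(s)

The rest of sshd_config, comments included, is left exactly as it was.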

Version 1.5 of Augeas was released this week. The number of formats it understands (lenses) has increased (see the complete list here). You can also define your own lens, but that requires an understanding of parsing concepts (get out your old CS textbook, you'll need a refresher). That said, I've found friendly support via their mailing list when I've gotten stuck writing my own lens.

The project homepage is http://augeas.net/ and the new release was just announced on the mailing list (link).

by Tom Limoncelli at May 12, 2016 06:00 PM

May 11, 2016

Racker Hacker

Getting started with gertty

When you’re ready to commit code in an OpenStack project, your patch will eventually land in a Gerrit queue for review. The web interface works well for most users, but it can be challenging to use when you have a large number of projects to monitor. I recently became a core developer on the OpenStack-Ansible project and I searched for a better solution to handle lots of active reviews.

This is where gertty can help. It’s a console-based application that helps you navigate reviews efficiently. I’ll walk you through the installation and configuration process in the remainder of this post.

Installing gertty

The gertty package is available via pip, GitHub, and various package managers for certain Linux distributions. If you’re on Fedora, just install python-gertty via dnf.

In this example, we will use pip:

pip install gertty

Configuration

You will need a .gertty.yaml file in your home directory for gertty to run. I have an example on GitHub that gives you a good start:

Be sure to change the username and password parts to match your Gerrit username and password. For OpenStack’s gerrit server, you can get these credentials in the user settings area.

Getting synchronized

Now that gertty is configured, start it up on the console:

$ gertty

Type a capital L (SHIFT + L) and wait for the list of projects to appear on the screen. You can choose projects to subscribe to (note that these are different than Gerrit’s watched projects) by pressing your ‘s’ key.

However, if you need to follow quite a few projects that match a certain pattern, there’s an easier way. Quit gertty (CTRL – q) and adjust the sqlite database that gertty uses:

$ sqlite3 .gertty.db
SQLite version 3.8.6 2014-08-15 11:46:33
Enter ".help" for usage hints.
sqlite> SELECT count(*) FROM project WHERE name LIKE '%openstack-ansible%';
39
sqlite> UPDATE project SET subscribed=1 WHERE name LIKE '%openstack-ansible%';
sqlite>

In this example, I’ve subscribed to all projects that contain the string openstack-ansible.

I can start gertty once more and wait for it to sync my new projects down to my local workstation. Keep an eye on the Sync: status at the top right of the screen. It will count up as it enumerates reviews to retrieve and then count down as those reviews are downloaded.

You can also create custom dashboards for gertty based on custom queries. In my example configuration file above, I have a special dashboard that contains all OpenStack-Ansible reviews. That dashboard appears whenever I press F5. You can customize these dashboards to include any custom queries that you need for your projects.

Photo credit: Frank Taillandier

The post Getting started with gertty appeared first on major.io.

by Major Hayden at May 11, 2016 01:45 PM

May 10, 2016

A Year in the Life of a BSD Guru

EuroBSDCon 2010

I'll be heading to the airport later this afternoon, en route to Karlsruhe for this year's EuroBSDCon. Here are my activities for the conference:

May 10, 2016 12:05 PM

Everything Sysadmin

Future directions for Blackbox

I maintain an open source project called Blackbox which makes it easy to store GPG-encrypted secrets in Git, Mercurial, Subversion, and others.

I've written up my ideas for where the project should go in the future, including renaming the commands, changing where the keys are stored, adding a "repo-less" mode, and possibly rewriting it in a different language:

https://github.com/StackExchange/blackbox/blob/master/Version2-Ideas.md

Feedback welcome!

Tom

by Tom Limoncelli at May 10, 2016 01:07 AM

May 09, 2016

Standalone Sysadmin

Debian Jessie Preseed – Yes, please

What could possibly draw me out of blogging dormancy? %!@^ing Debian, that’s what.

I’m working on getting a Packer job going so I can make an updated Vagrant box on a regular basis, but in order to do that, I need to get a preseed file arranged. Well, everything goes swimmingly using examples I’ve found on the internet (because the Debian-provided docs are awful) until this screen:

(screenshot of the "Partition disks" screen: "If you continue, the changes listed below will be written to the disks. Write the changes to disks?" Yes / No)

That’s weird. I’ve got the right answers in the preseed:

d-i partman-auto/method                 string lvm
d-i partman-auto-lvm/guided_size        string max
d-i partman-lvm/device_remove_lvm       boolean true
d-i partman-lvm/confirm                 boolean true
d-i partman-auto-lvm/confirm            boolean true
d-i partman-partitioning/confirm_write_new_label   boolean true
d-i partman-partitioning/confirm        boolean true
d-i partman-lvm/confirm_nooverwrite     boolean true

and I’ve even got this, as a backup:

d-i partman/confirm                     boolean true
d-i partman/confirm_nooverwrite         boolean true

The "partman-lvm" settings are what should provide the answer, because of the "partman-auto/method" line. Well, it doesn't work. That's weird. So I start Googling. And nothing. And nothing. Hell, I can't even figure out if preseed files are order-specific. The last time someone apparently asked that question on the internet was 2008.

Anyway. My coworker found a link to a Debian 6 question on ServerFault and suggested that I try something crazy: set the preseed answer for the partman RAID setting:

d-i partman-md/confirm_nooverwrite      boolean true

which makes no freaking sense. So, of course, it worked.

Why you have to do that to make the partman-lvm installer work, I have no idea. Please comment if you have any inkling. Or just want to rant against preseeds too. It’s 8:45am and I’m ready to go home.

by Matt Simmons at May 09, 2016 03:49 PM

Everything Sysadmin

Preview a chapter of our new book in acmqueue magazine!

The new issue of acmqueue magazine contains a preview of a chapter from our next book, the 3rd edition of TPOSANA. The chapter is called "The Small Batches Principle". We are very excited to be able to bring you this preview and hope you find the chapter fun and educational. The book won't be out until Oct 7, 2016, so don't miss this opportunity to read it early!

ACM members can access it online for free, or a small fee gets you access to it online or via an app. Get it now!

by Tom Limoncelli at May 09, 2016 03:00 PM

May 08, 2016

HolisticInfoSec.org

toolsmith #116: vFeed & vFeed Viewer

Overview

In case you haven't guessed by now, I am an unadulterated tools nerd. Hopefully, ten years of toolsmith have helped you come to that conclusion on your own. I rejoice when I find like-minded souls, and I found one in Nabil (NJ) Ouchn (@toolswatch), he of Black Hat Arsenal and toolswatch.org fame. In addition to those valued and well-executed community services, NJ also spends a good deal of time developing and maintaining vFeed. vFeed includes a Python API and the vFeed SQLite database, now with support for Mongo. It is, for all intents and purposes, a correlated community vulnerability and threat database. I've been using vFeed for quite a while now, having learned about it when writing about FruityWifi a couple of years ago.
NJ fed me some great updates on this constantly maturing product.
Having achieved compatibility certifications (CVE, CWE and OVAL) from MITRE, the vFeed Framework (API and Database) has started to gain more than a little gratitude from the information security community and users, CERTs and penetration testers. NJ draws strength from this to add more features now and in the future. The actual vFeed roadmap is huge. It varies from adding new sources such as security advisories from industrial control system (ICS) vendors, to supporting other standards such as STIX, to importing/enriching scan results from 3rd party vulnerability and threat scanners such as Nessus, Qualys, and OpenVAS.
There have been a number of articles highlighting impressive vFeed use cases, such as:
Needless to say, some fellow security hackers and developers have included vFeed in their toolkit, including Faraday (March 2015 toolsmith), Kali Linux, and more (FruityWifi as mentioned above).

The upcoming version of vFeed will introduce support for CPE 2.3, CVSS 3, and new reference sources. A proof of concept to access the vFeed database via a RESTful API is in testing as well. NJ is fine-tuning his Flask skills before releasing it. :) NJ does not consider himself a Python programmer and considers himself unskilled (humble, but unwarranted). Luckily Python is the ideal programming language for someone like him to express his creativity.
I'll show you all about woeful programming here in a bit when we discuss the vFeed Viewer I've written in R.

First, a bit more about vFeed, from its Github page:
The vFeed Framework is CVE, CWE and OVAL compatible and provides structured, detailed third-party references and technical details for CVE entries via an extensible XML/JSON schema. It also improves the reliability of CVEs by providing a flexible and comprehensive vocabulary for describing the relationship with other standards and security references.
vFeed utilizes XML- and JSON-formatted output to describe vulnerabilities in detail. This output can be leveraged as input by security researchers, practitioners and tools as part of their vulnerability analysis practice in a standard syntax easily interpreted by both human and machine.
The associated vFeed.db (The Correlated Vulnerability and Threat Database) is a detective and preventive security information repository useful for gathering vulnerability and mitigation data from scattered internet sources into a unified database.
vFeed's documentation is now well populated in its Github wiki, and should be read in its entirety:
  1. vFeed Framework (API & Correlated Vulnerability Database)
  2. Usage (API and Command Line)
  3. Methods list
  4. vFeed Database Update Calendar
vFeed features include:
  • Easy integration within security labs and other pentesting frameworks 
  • Easily invoked via API calls from your software, scripts or from command-line. A proof of concept python api_calls.py is provided for this purpose
  • Simplify the extraction of related CVE attributes
  • Enable researchers to conduct vulnerability surveys (tracking vulnerability trends regarding a specific CPE)
  • Help penetration testers analyze CVEs and gather extra metadata to help shape attack vectors to exploit vulnerabilities
  • Assist security auditors in reporting accurate information about findings during assignments. vFeed is useful in describing a vulnerability with attributes based on standards and third-party references (vendors or companies involved in the standardization effort)
vFeed installation and usage

Installing vFeed is easy: just download the ZIP archive from Github and unpack it in your preferred directory, or, assuming you've got Git installed, run git clone https://github.com/toolswatch/vFeed.git
You'll need a Python interpreter installed; the latest 2.7 release is preferred. From the directory in which you installed vFeed, just run python vfeedcli.py -h followed by python vfeedcli.py -u to confirm all is updated and in good working order; then you're ready to roll.

You've now read section 2 (Usage) on the wiki, so you don't need a total usage rehash here. We'll instead walk through a few options with one of my favorite CVEs: CVE-2008-0793.

If we invoke python vfeedcli.py -m get_cve CVE-2008-0793, we immediately learn that it refers to a Tendenci CMS cross-site scripting vulnerability. The -m parameter lets you define the preferred method, in this case, get_cve.


Groovy, is there an associated CWE for CVE-2008-0793? But of course. Using the get_cwe method we learn that CWE-79 or "Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')" is our match.
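For reference, the invocation follows the same pattern as the get_cve call above; only the method name changes:

python vfeedcli.py -m get_cwe CVE-2008-0793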


If you want to quickly learn all the available methods, just run python vfeedcli.py --list.
Perhaps you'd like to determine what the CVSS score is, or what references are available, via the vFeed API? Easy, if you run...

from lib.core.methods.risk import CveRisk
cve = "CVE-2014-0160"
cvss = CveRisk(cve).get_cvss()
print cvss

You'll retrieve...


For reference material...

from lib.core.methods.ref import CveRef
cve = "CVE-2008-0793"
ref = CveRef(cve).get_refs()
print ref

Yields...

And now you know...the rest of the story. CVE-2008-0793 is one of my favorites because a) I discovered it, and b) the vendor was one of the best of many hundreds I've worked with to fix vulnerabilities.

vFeed Viewer

If NJ thinks his Python skills are rough, wait until he sees this. :-)
I thought I'd get started on a user interface for vFeed using R and Shiny, appropriately named vFeed Viewer and found on Github here. This first version does not allow direct queries of the vFeed database as I'm working on SQL injection prevention, but it does allow very granular filtering of key vFeed tables. Once I work out safe queries and sanitization, I'll build the same full correlation features you enjoy from NJ's Python vFeed client.
You'll need a bit of familiarity with R to make use of this viewer.
First install R and RStudio. From the RStudio console, run install.packages(c("shinydashboard","RSQLite","ggplot2","reshape2")) to ensure all dependencies are met.
Download and install the vFeed Viewer in the root vFeed directory such that app.R and the www directory are side by side with vfeedcli.py, etc. This ensures that it can read vfeed.db as the viewer calls it directly with dbConnect and dbReadTable, part of the RSQLite package.
Open app.R with RStudio, then click the Run App button. Alternatively, from the command-line, assuming R is in your path, you can run R -e "shiny::runApp('~/shinyapp')" where ~/shinyapp is the path to where app.R resides. In my case, on Windows, I ran R -e "shiny::runApp('c:\\tools\\vfeed\\app.R')". Then browse to the localhost address Shiny is listening on. You'll probably find the RStudio process easier and faster.
One important note about R: it's not known for performance, and this app takes about thirty seconds to load. If you use Microsoft (Revolution) R with the MKL library, you can take advantage of multiple cores, but be patient; it all runs in memory. It's fast as heck once it's loaded, though.
The UI is simple, including an overview.


At present, I've incorporated NVD and CWE search mechanisms that allow very granular filtering.


 As an example, using our favorite CVE-2008-0793, we can zoom in via the search field or the CVE ID drop down menu. Results are returned instantly from 76,123 total NVD entries at present.


From the CWE search you can opt to filter by keywords, such as XSS for this scenario, to find related entries. If you drop cross-site scripting in the search field, you can then filter further via the cwetitle filter field at the bottom of the UI. This is universal to this use of Shiny, and allows really granular results.


You can also get an idea of the number of vFeed entries per vulnerability category. I did drop CPEs, as their number throws the chart off terribly and results in a bad visualization.


I'll keep working on the vFeed Viewer so it becomes more useful and helps serve the vFeed community. It's definitely a work in progress but I feel as if there's some potential here.

Conclusion

Thanks to NJ for vFeed and all his work with the infosec tools community. If you're going to Black Hat, be certain to stop by Arsenal. Make use of vFeed as part of your vulnerability management practice and remember to check for updates regularly. It's a great tool, and it's getting better all the time.
Ping me via email or Twitter if you have questions (russ at holisticinfosec dot org or @holisticinfosec).
Cheers…until next month.

Acknowledgements

Nabil (NJ) Ouchn (@toolswatch)













by Russ McRee (noreply@blogger.com) at May 08, 2016 07:23 PM

May 06, 2016

Errata Security

Freaking out over the DBIR

Many in the community are upset over the recent "Verizon DBIR" because it claims widespread exploitation of the "FREAK" vulnerability. They know this is impossible, because of the vulnerability details. But really, the problem lies in misconceptions about how "intrusion detection" (IDS) works. As a sort of expert in intrusion detection (by which, I mean the expert), I thought I'd describe what really went wrong.

First let's talk FREAK. It's a man-in-the-middle attack. In other words, you can't attack a web server remotely by sending bad data at it. Instead, you have to break into a network somewhere and install a man-in-the-middle computer. This fact alone means it cannot be the most widely exploited attack.

Second, let's talk FREAK. It works by downgrading RSA to 512-bit keys, which can be cracked by supercomputers. This fact alone means it cannot be the most widely exploited attack -- even the NSA does not have sufficient compute power to crack as many keys as the Verizon DBIR claims were cracked.

Now let's talk about how Verizon calculates when a vulnerability is responsible for an attack. They use this methodology:
  1. look at a compromised system (identified by AV scanning, IoCs, etc.)
  2. look at which unpatched vulnerabilities the system has (vuln scans)
  3. see if the system was attacked via those vulnerabilities (IDS)
In other words, if you are vulnerable to FREAK, and the IDS tells you people attacked you with FREAK, and indeed you were compromised, then it seems only logical that they compromised you through FREAK.
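To make the critique concrete, here's a minimal sketch of what that correlation amounts to (a toy illustration of the methodology as described above, not Verizon's actual pipeline):

def naive_attribution(compromised_hosts, vuln_scan, ids_events):
    # vuln_scan and ids_events map each host to the set of vulnerability
    # names it was flagged for (e.g. {"FREAK", "Heartbleed"}).
    findings = {}
    for host in compromised_hosts:
        # "breached via X" whenever the host is both vulnerable to X and
        # has IDS events naming X -- a correlation, not a proven cause.
        candidates = vuln_scan.get(host, set()) & ids_events.get(host, set())
        if candidates:
            findings[host] = candidates
    return findings

Every weakness described below -- policy signatures, scanner noise, semi-false-positives -- lands in ids_events and flows straight through that join.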

This sounds like a really good methodology -- but only to stupids. (Sorry for being harsh; I've been pointing out that this methodology sucks for 15 years, and am getting frustrated that people still believe in it.)

Here's the problem with all data breach investigations. Systems get hacked, and we don't know why. Yet, there is enormous pressure to figure out why. Therefore, we seize on any plausible explanation. We then go through the gauntlet of logical fallacies, such as "confirmation bias", to support our conclusion. We torture the data until it produces the right results.

In the majority of breach reports I've seen, the identified source of the compromise is bogus. That's why I never believed North Korea was behind the Sony attack -- I've read too many data breach reports fingering the wrong cause. Political pressure to come up with a cause, any cause, is immense.

This specific logic, "vulnerable to X and attacked with X == breached with X" has been around with us for a long time. 15 years ago, IDS vendors integrated with vulnerability scanners to produce exactly these sorts of events. It's nonsense that never produced actionable data.

In other words, in the Verizon report, things went in this direction. FIRST, they investigated a system and found IoCs (indicators that the system had been compromised). SECOND, they did the correlation between vuln/IDS. They didn't do it the other way around, because such a system produces too much false data. But false data is false data: if the vuln/IDS correlation isn't reliable enough to start from, there is no reason to believe it becomes reliable just because the IoCs were found first.

One of the reasons the data isn't robust is that IDS events do not mean what you think they mean. Most people in our industry treat them as "magic": if an IDS triggers on a "FREAK" attack, then that's what happened.

But that's not what happened. First of all, there is the issue of false-positives, whereby the system claims a "FREAK" attack happened, when nothing related to the issue happened. Looking at various IDSs, this should be rare for FREAK, but happens for other kinds of attacks.

Then there is the issue of another level of false-positives. It's plausible, for example, that older browsers, email clients, and other systems may accidentally be downgrading to "export" ciphers simply because these are the only ciphers old and new computers have in common. Thus, you'll see a lot of "FREAK" events, where this downgrade did indeed occur, but not for malicious reasons.

In other words, this is not truly a false-positive, because the bad thing really did occur, but it is a semi-false-positive, because it was not malicious.

Then there is the problem of misunderstood events. For FREAK, both client and server must be vulnerable -- and clients reveal their vulnerability in every SSL request. Therefore, some IDSs trigger on that, telling you about vulnerable clients. The EmergingThreats rules have one called "ET POLICY FREAK Weak Export Suite From Client (CVE-2015-0204)". The key word here is "POLICY" -- it's not an attack signature but a policy signature.

But a lot of people are confused and think it's an attack. For example, this website lists it as an attack.


If somebody has these POLICY events enabled, then it will appear that their servers are under constant attack with the FREAK vulnerability, as random people around the Internet with old browsers/clients connect to their servers, regardless of whether the server itself is vulnerable.

Another source of semi-false-positives are vulnerability scanners, which simply scan for the vulnerability without fully exploiting/attacking the target. Again, this is a semi-false-positive, where it is correctly identified as FREAK, but incorrectly identified as an attack rather than a scan. As other critics of the Verizon report have pointed out, people have been doing Internet-wide scans for this bug. If you have a server exposed to the Internet, then it's been scanned for "FREAK". If you have internal servers, but run vulnerability scanners, they have been scanned for "FREAK". But none of these are malicious "attacks" that can be correlated according to the Verizon DBIR methodology.

Lastly, there are "real" attacks. There are no real FREAK attacks, except maybe twice in Syria when the NSA needed to compromise some SSL communications. And the NSA never does something if they can get caught. Therefore, no IDS event identifying "FREAK" has ever been a true attack.

So here's the thing. Knowing all this, we can reduce the factors in the Verizon DBIR methodology. The factor "has the system been attacked with FREAK?" can be reduced to "does the system support SSL?", because all SSL supporting systems have been attacked with FREAK, according to IDS. Furthermore, since people just apply all or none of the Microsoft patches, we don't ask "is the system vulnerable to FREAK?" so much as "has it been patched recently?".

Thus, the Verizon DBIR methodology becomes:

1. has the system been compromised?
2. has the system been patched recently?
3. does the system support SSL?

If all three answers are "yes", then it claims the system was compromised with FREAK. As you can plainly see, this is idiotic methodology.

In the case of FREAK, we already knew the right answer, and worked backward to find the flaw. But in truth, all the other vulnerabilities have the same flaw, for related reasons. The root of the problem is that people just don't understand IDS information. They, like Verizon, treat the IDS as some sort of magic black box or oracle, and never question the data.

Conclusion

An IDS is a wonderfully useful tool if you pay attention to how it works and why it triggers on the things it does. It's not, however, an "intrusion detection" tool, whereby every event it produces should be acted upon as if it were an intrusion. It's not a magical system -- you really need to pay attention to the details.

Verizon didn't pay attention to the details. They simply dumped the output of an IDS inappropriately into some sort of analysis. Since the input data was garbage, no amount of manipulation and analysis would ever produce a valid result.




False-positives: Notice I list a range of "false-positives", from things that might trigger but have nothing to do with FREAK, to a range of things that are FREAK, but aren't attacks, and which cannot be treated as "intrusions". Such subtleties are why we can't have nice things in infosec. Everyone studies "false-positives" when preparing for their CISSP exam, but most still don't truly understand them.

That's why when vendors claim "no false positives" they are blowing smoke. The issue is much more subtle than that.

by Robert Graham (noreply@blogger.com) at May 06, 2016 08:44 PM

May 05, 2016

Errata Security

Satoshi: how Craig Wright's deception worked

My previous post shows how anybody can verify Satoshi using a GUI. In this post, I'll do the same, with command-line tools (openssl). It's just a simple application of crypto (hashes, public-keys) to the problem.

I go through this step-by-step discussion in order to demonstrate Craig Wright's scam. Dan Kaminsky's post and the redditors come to the same point through a different sequence, but I think my way is clearer.

Step #1: the Bitcoin address


We know certain Bitcoin addresses correspond to Satoshi Nakamoto him/her self. For the sake of discussion, we'll use the address 15fszyyM95UANiEeVa4H5L6va7Z7UFZCYP. It's actually my address, but we'll pretend it's Satoshi's. In this post, I'm going to prove that this address belongs to me.

The address isn't the public-key, as you'd expect, but the hash of the public-key. Hashes are a lot shorter, and easier to pass around. We only pull out the public-key when we need to do a transaction. The hashing algorithm is explained on this website [http://gobittest.appspot.com/Address]. It's basically base58(ripemd(sha256(public-key))).

Step #2: You get the public-key


Hashes are one-way, so given a Bitcoin address, we can't immediately convert it into a public-key. Instead, we have to look it up in the blockchain, the vast public ledger that is at the heart of Bitcoin. The blockchain records every transaction, and is approaching 70-gigabytes in size.

To find an address's matching public-key, we have to search for a transaction where the bitcoin is spent. If an address has only received Bitcoins, then its matching public-key won't appear in the Blockchain. In that case, a person trying to prove their identity will have to tell you the public-key, which is fine, of course, since the keys are designed to be public.

Luckily, there are lots of websites that store the blockchain in a database and make it easy for us to browse. I use Blockchain.info. The URL to my address is:

https://blockchain.info/address/15fszyyM95UANiEeVa4H5L6va7Z7UFZCYP

There is a list of transactions here where I spend coin. Let's pick the top one, at this URL:

https://blockchain.info/tx/8c4263d864d4f36e4eb4065a877e3e9a68cbe1de63a7b1fda70096e1e209cbbb

Toward the bottom are the "scripts". Bitcoin has a small scripting language, allowing complex transactions to be created, but most transactions are simple. There are two common formats for these scripts, an old format and a new format. In the old format, you'll find the public-key in the Output Script. In the new format, you'll find the public-key in the Input Scripts. It'll be a long, long number starting with "04".

In this case, my public-key is:

04b19ffb77b602e4ad3294f770130c7677374b84a7a164fe6a80c81f13833a673dbcdb15c29857ce1a23fca1c808b9c29404b84b986924e6ff08fb3517f38bc099

You can verify this hashes to my Bitcoin address by the website I mention above.
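If you'd rather check that with code than with the website, here is a minimal Python 3 sketch of the derivation from step #1 (note that the full encoding also prepends a version byte of 0x00 and appends a 4-byte double-SHA256 checksum before the base58 step; it assumes your hashlib build exposes ripemd160):

import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58(data):
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    # leading zero bytes are encoded as leading '1' characters
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def pubkey_to_address(pubkey_hex):
    pubkey = bytes.fromhex(pubkey_hex)
    h = hashlib.new("ripemd160", hashlib.sha256(pubkey).digest()).digest()
    versioned = b"\x00" + h                       # 0x00 = mainnet address version
    check = hashlib.sha256(hashlib.sha256(versioned).digest()).digest()[:4]
    return base58(versioned + check)

print(pubkey_to_address("04b19ffb77b602e4ad3294f770130c7677374b84a7a164fe6a80c81f13833a673dbcdb15c29857ce1a23fca1c808b9c29404b84b986924e6ff08fb3517f38bc099"))
# should print 15fszyyM95UANiEeVa4H5L6va7Z7UFZCYP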

Step #3: You format the key according to OpenSSL


OpenSSL wants the public-key in its own format (wrapped in ASN.1 DER, then encoded in BASE64). I should just insert the JavaScript form to do it directly in this post, but I'm lazy. Instead, use the following code in the file "foo.js":

KeyEncoder = require('key-encoder');
sec = new KeyEncoder('secp256k1');
args = process.argv.slice(2);
pemKey = sec.encodePublic(args[0], 'raw', 'pem');
console.log(pemKey);

Then run:

npm install key-encoder

node foo.js 04b19ffb77b602e4ad3294f770130c7677374b84a7a164fe6a80c81f13833a673dbcdb15c29857ce1a23fca1c808b9c29404b84b986924e6ff08fb3517f38bc099

This will print the public-key in PEM format; save the output as the file pub.pem:

-----BEGIN PUBLIC KEY-----
MFYwEAYHKoZIzj0CAQYFK4EEAAoDQgAEsZ/7d7YC5K0ylPdwEwx2dzdLhKehZP5q
gMgfE4M6Zz282xXCmFfOGiP8ocgIucKUBLhLmGkk5v8I+zUX84vAmQ==
-----END PUBLIC KEY-----
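If you'd rather not install node.js, the same wrapping can be done in a few lines of Python 3, since for an uncompressed secp256k1 key the ASN.1 DER header is a fixed 23-byte prefix (a sketch; its output should match the PEM block above):

import base64, textwrap

# Fixed DER header for an uncompressed secp256k1 SubjectPublicKeyInfo:
# SEQUENCE { SEQUENCE { id-ecPublicKey, secp256k1 }, BIT STRING ... }
DER_PREFIX = bytes.fromhex("3056301006072a8648ce3d020106052b8104000a034200")

def pubkey_to_pem(pubkey_hex):
    der = DER_PREFIX + bytes.fromhex(pubkey_hex)   # the 65-byte "04..." key
    b64 = base64.b64encode(der).decode()
    return ("-----BEGIN PUBLIC KEY-----\n"
            + "\n".join(textwrap.wrap(b64, 64))
            + "\n-----END PUBLIC KEY-----")

print(pubkey_to_pem("04b19ffb77b602e4ad3294f770130c7677374b84a7a164fe6a80c81f13833a673dbcdb15c29857ce1a23fca1c808b9c29404b84b986924e6ff08fb3517f38bc099"))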

To verify that we have a correctly formatted OpenSSL public-key, we do the following command. As you can see, the hex of the OpenSSL public-key agrees with the original hex above 04b19ffb... that I got from the Blockchain: 

$ openssl ec -in pub.pem -pubin -text -noout
read EC key
Private-Key: (256 bit)
pub:
    04:b1:9f:fb:77:b6:02:e4:ad:32:94:f7:70:13:0c:
    76:77:37:4b:84:a7:a1:64:fe:6a:80:c8:1f:13:83:
    3a:67:3d:bc:db:15:c2:98:57:ce:1a:23:fc:a1:c8:
    08:b9:c2:94:04:b8:4b:98:69:24:e6:ff:08:fb:35:
    17:f3:8b:c0:99
ASN1 OID: secp256k1

Step #4: I create a message file


What we are going to do is sign a message. That could be a message you create as a challenge for me to sign, or I can simply create my own message file.

In this example, I'm going to use the file message.txt:

Robert Graham is Satoshi Nakamoto

Obviously, if I can sign this file with Satoshi's key, then I'm the real Satoshi.

There's a problem here, though. The message I choose can be too long (such as when choosing a large work of Sartre). Or, in this case, depending on how you copy/paste the text into a file, it may end with varying "line-feeds" and "carriage-returns". 

Therefore, at this stage, I may instead just choose to hash the message file into something smaller and more consistent. I'm not going to in my example, but that's what Craig Wright does in his fraudulent example. And it's important.

BTW, if you just echo from the command-line, or use 'vi' to create a file, it'll automatically append a single line-feed. That's what I assume for my message. In hex you should get:

$ xxd -i message.txt
unsigned char message_txt[] = {
  0x52, 0x6f, 0x62, 0x65, 0x72, 0x74, 0x20, 0x47, 0x72, 0x61, 0x68, 0x61,
  0x6d, 0x20, 0x69, 0x73, 0x20, 0x53, 0x61, 0x74, 0x6f, 0x73, 0x68, 0x69,
  0x20, 0x4e, 0x61, 0x6b, 0x61, 0x6d, 0x6f, 0x74, 0x6f, 0x0a
};
unsigned int message_txt_len = 34;


Step #5: I grab my private-key from my wallet


To prove my identity, I extract my private-key from my wallet file and convert it into an OpenSSL file using a method similar to the one above, creating the file priv.pem (the sister of the pub.pem that you create). I'm skipping the steps, because I'm not actually going to show you my private key, but they are roughly the same as above. Bitcoin-qt has a little "dumpprivkey" command that'll dump the private key, which I then wrap in OpenSSL ASN.1. If you want to do this, I used the following node.js code, with the "base-58" and "key-encoder" dependencies.

Base58 = require("base-58");
KeyEncoder = require('key-encoder');
sec = new KeyEncoder('secp256k1');
var args = process.argv.slice(2);
var x = Base58.decode(args[0]);
x = x.slice(1);
if (x.length == 36)
    x = x.slice(0, 32);
pemPrivateKey = sec.encodePrivate(x, 'raw', 'pem');
console.log(pemPrivateKey)

Step #6: I sign the message.txt with priv.pem


I then sign the file message.txt with my private-key priv.pem, and save the base64 encoded results in sig.b64.

openssl dgst -sign priv.pem message.txt | base64 >sig.b64

This produces the file sig.b64, which has the following contents:

MEUCIQDoy6K0xQ1cAPg7fXbQcmfbtK4VJ5wlMTzG4DaUV3zF9gIgLNbJw0oqj3lQf7lhe7TtPzse
PXf8GB3q4IhCiWVxTJ8=

How signing works is that it first creates a SHA256 hash of the file message.txt, then signs that hash with the private-key using the secp256k1 algorithm (ECDSA). It wraps the result in an ASN.1 DER binary file. Sadly, there's no native BASE64 output, so I have to encode it in BASE64 myself in order to post it on this page, and you'll have to BASE64 decode it before you use it.

Step #7: You verify the signature


Okay, at this point you have three files. You have my public-key pub.pem, my message message.txt, and the signature sig.b64.

First, you need to convert the signature back into binary:

base64 -d sig.b64 > sig.der

Now you run the verify command:

openssl dgst -verify pub.pem -signature sig.der message.txt

If I'm really who I say I am, then you'll see the result:

Verified OK

If something has gone wrong, you'll get the error:

Verification Failure


How we know the Craig Wright post was a scam


This post is similarly structured to Craig Wright's post, and in the differences we'll figure out how he did his scam.

As I point out in Step #4 above, a large file (like a work from Sartre) would be difficult to work with, so I could just hash it, and put the binary hash into a file. It's really all the same, because I'm creating some arbitrary un-signed bytes, then signing them.

But here's the clever bit. If you've been paying attention, you'll notice that the Sartre file has been hashed twice by SHA256, before the hash has been encrypted. In other words, it looks like the function:

secp256k1(sha256(sha256(message)))

Now let's go back to Bitcoin transactions. Transactions are signed by first hashing twice:

secp256k1(sha256(sha256(transaction)))

Notice that the algorithms are the same. That's how Craig Wright tried to fool us. Unknown to us, he grabbed a transaction from the real Satoshi, and grabbed the initial hash (see Update below for its contents). He then claimed that his "Sartre" file had that same hash:

479f9dff0155c045da78402177855fdb4f0f396dc0d2c24f7376dd56e2e68b05

Which signed (hashed again, then encrypted), becomes:

3045022100c12a7d54972f26d14cb311339b5122f8c187417dde1e8efb6841f55c34220ae0022066632c5cd4161efa3a2837764eee9eb84975dd54c2de2865e9752585c53e7cce

That's a lie. How are we supposed to know? After all, we aren't going to type in a bunch of hex digits then go search the blockchain for those bytes. We didn't have a copy of the Sartre file to calculate the hash ourselves.

Now, when hashed and signed, the results from openssl exactly match the results from that old Bitcoin transaction. Craig Wright magically appears to have proven he knows Satoshi's private-key, when in fact he's copied the inputs/outputs and made us think we calculated them.

It would've worked, too, but there are too many damn experts in the blockchain community who immediately pick up on the subtle details. There are too many people willing to type in all those characters. Once typed in, it's a simple matter of googling them to find them in the blockchain.

Also, it looks as suspicious as all hell. He explains the trivial bits, like "what is hashing", with odd references to old publications, but then leaves out important bits. I had to write code to extract my own private-key from my wallet and turn it into something that OpenSSL would accept -- a step he didn't actually have to go through, and thus didn't have to document.


Conclusion


Both Bitcoin and OpenSSL are just straightforward applications of basic crypto. It's because they share those basics that this crossover works. And it's by applying our basic crypto knowledge to the problem that we catch him in the lie.

I write this post not really to catch Craig Wright in a scam, but to help teach basic crypto. Working backwards from this blogpost, learning the bits you didn't understand, will teach you the important basics of crypto.


Appendix


To verify that I have that Bitcoin address, you'll need the three files:

pub.pem

-----BEGIN PUBLIC KEY-----
MFYwEAYHKoZIzj0CAQYFK4EEAAoDQgAEsZ/7d7YC5K0ylPdwEwx2dzdLhKehZP5q
gMgfE4M6Zz282xXCmFfOGiP8ocgIucKUBLhLmGkk5v8I+zUX84vAmQ==
-----END PUBLIC KEY-----


message.txt
Robert Graham is Satoshi Nakamoto

sig.b64
MEUCIQDoy6K0xQ1cAPg7fXbQcmfbtK4VJ5wlMTzG4DaUV3zF9gIgLNbJw0oqj3lQf7lhe7TtPzsePXf8GB3q4IhCiWVxTJ8=

Now run the following command, and verify it matches the hex value for the public-key that you found in the transaction in the blockchain:

openssl ec -in pub.pem -pubin -text -noout

Now verify the message:

base64 -d sig.b64 > sig.der
openssl dgst -verify pub.pem -signature sig.der message.txt




Update:


The lie can be condensed into two images. The first is excerpts from his post, where he claims the file "Sartre" has the specific sha256sum and contains the shown text:


But we know that this checksum instead matches an intermediate step in the 2009 Bitcoin transaction, which, if put in a file, would have the following contents:


The sha256sum result is the same in both cases, so either I'm lying or Craig Wright is. You can verify for yourself which one is lying by creating your own Sartre file from this base64 encoded data (copy/paste it into a file, then run base64 -d < file > Sartre to create the binary file).

AQAAAAG6kcHV5VqeL6tOQfVbhipzskcZqtE6Un0WnB+tO2O1EgEAAABDQQQR25Ph3NuKAWtJhA+MU7wetoo4LpexSC7K17FIppCaXLLg6t37hMz5dERk+C4WC/qbi2T51MA/mZuGQ/ZWtBKjrP////8CAMqaOwAAAABDQQS+2CfTdHS+/7N+/lM3AawffGAJV6RIe+izcTRvAWgm7m9XujDYikcqDk7NLwdZmnlfHwHeeNeRs4LmXuHFi0UIrADSSWsAAAAAQ0EEEduT4dzbigFrSYQPjFO8HraKOC6XsUguytexSKaQmlyy4Ord+4TM+XREZPguFgv6m4tk+dTAP5mbhkP2VrQSo6wAAAAAAQAAAA==

I got this file from https://rya.nc/sartre.html, after spending an hour looking for the right tool. Transactions are verified using a script within the transaction itself. At some intermediate step, it transmogrifies the transaction into something else, then verifies it. It's this transmogrified form of the transaction that we need to grab for the contents of the "Sartre" file.
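If you don't want to take either claim on faith, a few lines of Python 3 will check it (a sketch, assuming you've created the binary Sartre file from the base64 data above):

import hashlib

# "Sartre" is the binary file decoded from the base64 data above,
# i.e. the transmogrified form of the 2009 transaction.
data = open("Sartre", "rb").read()

single = hashlib.sha256(data).hexdigest()
double = hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()

# The single hash should match the "sha256sum of Sartre" that Wright
# published (479f9dff0155c045da78402177855fdb4f0f396dc0d2c24f7376dd56e2e68b05);
# the double hash is the value that Bitcoin actually signs.
print(single)
print(double)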

by Robert Graham (noreply@blogger.com) at May 05, 2016 10:03 PM

Raymii.org

Migrating personal webapps and services

Recently I've migrated some of my personal servers and services to new machines and newer operating system versions. I prefer to migrate instead of upgrading the OS for a number of reasons. I'll also talk about the migration process and some stuff to remember when migrating web applications and services.

May 05, 2016 12:00 AM

May 03, 2016

Anton Chuvakin - Security Warrior

Monthly Blog Round-Up – April 2016

Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month:
  1. “Why No Open Source SIEM, EVER?” contains some of my SIEM thinking from 2009. Is it relevant now? Well, you be the judge.  Succeeding with SIEM requires a lot of work, whether you paid for the software, or not. BTW, this post has an amazing “staying power” that is hard to explain – I suspect it has to do with people wanting “free stuff” and googling for “open source SIEM” …  [223 pageviews]
  2. “Top 10 Criteria for a SIEM?” came from one of my last projects I did when running my SIEM consulting firm in 2009-2011 (for my recent work on evaluating SIEM, see this document [2015 update]) [116 pageviews]
  3. “New SIEM Whitepaper on Use Cases In-Depth OUT!” (dated 2010) presents a whitepaper on select SIEM use cases described in depth with rules and reports [using now-defunct SIEM product]; also see this SIEM use case in depth and this for a more current list of popular SIEM use cases. Finally, see our new 2016 research on security monitoring use cases here! [84 pageviews]
  4. “A Myth of An Expert Generalist” is a fun rant on what I think it means to be “a security expert” today; it argues that you must specialize within security to really be called an expert [80 pageviews]
  5. “Simple Log Review Checklist Released!” is often at the top of this list – this aging checklist is still a very useful tool for many people. “On Free Log Management Tools” is a companion to the checklist (updated version) [78 pageviews of total 3857 pageviews to all blog pages]
In addition, I’d like to draw your attention to a few recent posts from my Gartner blog [which, BTW, now has about 3X the traffic of this blog]: 
 
Current research on IR:
Current research on EDR:
Miscellaneous fun posts:

(see all my published Gartner research here)
Also see my past monthly and annual “Top Popular Blog Posts” – 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015.
Disclaimer: most content at SecurityWarrior blog was written before I joined Gartner on Aug 1, 2011 and is solely my personal view at the time of writing. For my current security blogging, go here.

Previous post in this endless series:

by Anton Chuvakin (anton@chuvakin.org) at May 03, 2016 08:00 PM

Carl Chenet

Feed2tweet, your RSS feed to Twitter Python self-hosted app

Feed2tweet is a self-hosted Python app to send your RSS feed to Twitter.

Feed2tweet is in production for Le Journal du hacker, a French Hacker News-style FOSS website and LinuxJobs.fr, the job board of the French-speaking FOSS community.

linuxjobs-horizontale

Feed2tweet 0.3 now only runs with Python 3. It also fixes a nasty bug with RSS feeds modifying the order of RSS entries. Have a look at the Feed2tweet 0.3 changelog:

Using Feed2tweet? Send us bug reports/feature requests/pull requests/comments about it!


by Carl Chenet at May 03, 2016 04:15 PM

May 02, 2016

Vincent Bernat

Pragmatic Debian packaging

While the creation of Debian packages is abundantly documented, most tutorials are targeted to packages implementing the Debian policy. Moreover, Debian packaging has a reputation of being unnecessarily difficult1 and many people prefer to use less constrained tools2 like fpm or CheckInstall.

However, I would like to show how building Debian packages with the official tools can become straightforward if you bend some rules:

  1. No source package will be generated. Packages will be built directly from a checkout of a VCS repository.

  2. Additional dependencies can be downloaded during build. Packaging individually each dependency is a painstaking work, notably when you have to deal with some fast-paced ecosystems like Java, Javascript and Go.

  3. The produced packages may bundle dependencies. This is likely to raise some concerns about security and long-term maintenance, but this is a common trade-off in many ecosystems, notably Java, Javascript and Go.

Pragmatic packages 101§

In the Debian archive, you have two kinds of packages: the source packages and the binary packages. Each binary package is built from a source package. You need a name for each package.

As stated in the introduction, we won’t generate a source package but we will work with its unpacked form which is any source tree containing a debian/ directory. In our examples, we will start with a source tree containing only a debian/ directory but you are free to include this debian/ directory into an existing project.

As an example, we will package memcached, a distributed memory cache. There are four files to create:

  • debian/compat,
  • debian/changelog,
  • debian/control, and
  • debian/rules.

The first one is easy. Just put 9 in it:

echo 9 > debian/compat

The second one has the following content:

memcached (0-0) UNRELEASED; urgency=medium

  * Fake entry

 -- Happy Packager <happy@example.com>  Tue, 19 Apr 2016 22:27:05 +0200

The only important information is the name of the source package, memcached, on the first line. Everything else can be left as is as it won’t influence the generated binary packages.

The control file§

debian/control describes the metadata of both the source package and the generated binary packages. We have to write a block for each of them.

Source: memcached
Maintainer: Vincent Bernat <bernat@debian.org>

Package: memcached
Architecture: any
Description: high-performance memory object caching system

The source package is called memcached. We have to use the same name as in debian/changelog.

We generate only one binary package: memcached. In the remainder of the example, when you see memcached, this is the name of a binary package. The Architecture field should be set to either any or all. Use all exclusively if the package contains only arch-independent files. If in doubt, just stick to any.

The Description field contains a short description of the binary package.

The build recipe§

The last mandatory file is debian/rules. It’s the recipe of the package. We need to retrieve memcached, build it and install its file tree in debian/memcached/. It looks like this:

#!/usr/bin/make -f

DISTRIBUTION = $(shell lsb_release -sr)
VERSION = 1.4.25
PACKAGEVERSION = $(VERSION)-0~$(DISTRIBUTION)0
TARBALL = memcached-$(VERSION).tar.gz
URL = http://www.memcached.org/files/$(TARBALL)

%:
    dh $@

override_dh_auto_clean:
override_dh_auto_test:
override_dh_auto_build:
override_dh_auto_install:
    wget -N --progress=dot:mega $(URL)
    tar --strip-components=1 -xf $(TARBALL)
    ./configure --prefix=/usr
    make
    make install DESTDIR=debian/memcached

override_dh_gencontrol:
    dh_gencontrol -- -v$(PACKAGEVERSION)

The empty targets override_dh_auto_clean, override_dh_auto_test and override_dh_auto_build keep debhelper from being too smart. The override_dh_gencontrol target sets the package version3 without updating debian/changelog. If you ignore the slight boilerplate, the recipe is quite similar to what you would have done with fpm:

DISTRIBUTION=$(lsb_release -sr)
VERSION=1.4.25
PACKAGEVERSION=${VERSION}-0~${DISTRIBUTION}0
TARBALL=memcached-${VERSION}.tar.gz
URL=http://www.memcached.org/files/${TARBALL}

wget -N --progress=dot:mega ${URL}
tar --strip-components=1 -xf ${TARBALL}
./configure --prefix=/usr
make
make install DESTDIR=/tmp/installdir

# Build the final package
fpm -s dir -t deb \
    -n memcached \
    -v ${PACKAGEVERSION} \
    -C /tmp/installdir \
    --description "high-performance memory object caching system"

You can review the whole package tree on GitHub and build it with dpkg-buildpackage -us -uc -b.

Pragmatic packages 102§

At this point, we can iterate and add several improvements to our memcached package. None of those are mandatory but they are usually worth the additional effort.

Build dependencies§

Our initial build recipe only works when several packages are installed, like wget and libevent-dev. They are not present on all Debian systems. You can easily express that you need them by adding a Build-Depends section for the source package in debian/control:

Source: memcached
Build-Depends: debhelper (>= 9),
               wget, ca-certificates, lsb-release,
               libevent-dev

Always specify the debhelper (>= 9) dependency as we heavily rely on it. We don’t require make or a C compiler because it is assumed that the build-essential meta-package is installed and it pulls those. dpkg-buildpackage will complain if the dependencies are not met. If you want to install those packages from your CI system, you can use the following command4:

mk-build-deps \
    -t 'apt-get -o Debug::pkgProblemResolver=yes --no-install-recommends -qqy' \
    -i -r debian/control

You may also want to investigate pbuilder or sbuild, two tools to build Debian packages in a clean isolated environment.

Runtime dependencies§

If the resulting package is installed on a freshly installed machine, it won’t work because it will be missing libevent, a required library for memcached. You can express the dependencies needed by each binary package by adding a Depends field. Moreover, for dynamic libraries, you can automatically get the right dependencies by using some substitution variables:

Package: memcached
Depends: ${misc:Depends}, ${shlibs:Depends}

The resulting package will contain the following information:

$ dpkg -I ../memcached_1.4.25-0\~unstable0_amd64.deb | grep Depends
 Depends: libc6 (>= 2.17), libevent-2.0-5 (>= 2.0.10-stable)

Integration with init system§

Most packaged daemons come with some integration with the init system. This integration ensures the daemon will be started on boot and restarted on upgrade. For Debian-based distributions, there are several init systems available. The most prominent ones are:

  • System-V init is the historical init system. More modern inits are able to reuse scripts written for this init, so this is a safe common denominator for packaged daemons.
  • Upstart is the less-historical init system for Ubuntu (used in Ubuntu 14.10 and previous releases).
  • systemd is the default init system for Debian since Jessie and for Ubuntu since 15.04.

Writing a correct script for the System-V init is error-prone. Therefore, I usually prefer to provide a native configuration file for the default init system of the targeted distribution (Upstart and systemd).

System-V§

If you want to provide a System-V init script, have a look at /etc/init.d/skeleton on the most ancient distribution you want to target and adapt it5. Put the result in debian/memcached.init. It will be installed at the right place, invoked on install, upgrade and removal. On Debian-based systems, many init scripts allow user customizations by providing a /etc/default/memcached file. You can ship one by putting its content in debian/memcached.default.
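As an illustration, debian/memcached.default can be as simple as a few shell variable assignments matching what the Upstart and systemd examples below expect (the values here are only plausible defaults, adjust to taste):

# Shipped as /etc/default/memcached
USER=_memcached
PORT=11211
CACHESIZE=64
MAXCONN=1024
OPTIONS=""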

Upstart§

Providing an Upstart job is similar: put it in debian/memcached.upstart. For example:

description "memcached daemon"

start on runlevel [2345]
stop on runlevel [!2345]
respawn
respawn limit 5 60
expect daemon

script
  . /etc/default/memcached
  exec memcached -d -u $USER -p $PORT -m $CACHESIZE -c $MAXCONN $OPTIONS
end script

When writing an Upstart job, the most important directive is expect. Be sure to get it right. Here, we use expect daemon and memcached is started with the -d flag.

systemd§

Providing a systemd unit is a bit more complex. The content of the file should go in debian/memcached.service. For example:

[Unit]
Description=memcached daemon
After=network.target

[Service]
Type=forking
EnvironmentFile=/etc/default/memcached
ExecStart=/usr/bin/memcached -d -u $USER -p $PORT -m $CACHESIZE -c $MAXCONN $OPTIONS
Restart=on-failure

[Install]
WantedBy=multi-user.target

We reuse /etc/default/memcached even if it is not considered a good practice with systemd6. Like for Upstart, the directive Type is quite important. We used forking as memcached is started with the -d flag.

You also need to add a build-dependency to dh-systemd in debian/control:

Source: memcached
Build-Depends: debhelper (>= 9),
               wget, ca-certificates, lsb-release,
               libevent-dev,
               dh-systemd

And you need to modify the default rule in debian/rules:

%:
    dh $@ --with systemd

The extra complexity is a bit unfortunate but systemd integration is not part of debhelper7. Without those additional modifications, the unit will get installed but you won’t get a proper integration and the service won’t be enabled on install or boot.

Dedicated user§

Many daemons don’t need to run as root and it is a good practice to ship a dedicated user. In the case of memcached, we can provide a _memcached user8.

Add a debian/memcached.postinst file with the following content:

#!/bin/sh

set -e

case "$1" in
    configure)
        adduser --system --disabled-password --disabled-login --home /var/empty \
                --no-create-home --quiet --force-badname --group _memcached
        ;;
esac

#DEBHELPER#

exit 0

There is no cleanup of the user when the package is removed for two reasons:

  1. Less stuff to write.
  2. The user could still own some files.

The adduser utility will do the right thing whether the requested user already exists or not. You need to add it as a dependency in debian/control:

Package: memcached
Depends: ${misc:Depends}, ${shlibs:Depends}, adduser

The #DEBHELPER# marker is important as it will be replaced by some code to handle the service configuration files (or some other stuff).

You can review the whole package tree on GitHub and build it with dpkg-buildpackage -us -uc -b.

Pragmatic packages 103§

It is possible to leverage debhelper to reduce the recipe size and to make it more declarative. This section is quite optional and it requires understanding a bit more about how a Debian package is built. Feel free to skip it.

The big picture§

There are four steps to build a regular Debian package:

  1. debian/rules clean should clean the source tree to make it pristine.

  2. debian/rules build should trigger the build. For an autoconf-based software, like memcached, this step should execute something like ./configure && make.

  3. debian/rules install should install the file tree of each binary package. For an autoconf-based software, this step should execute make install DESTDIR=debian/memcached.

  4. debian/rules binary will pack the different file trees into binary packages.

You don’t directly write each of those targets. Instead, you let dh, a component of debhelper, do most of the work. The following debian/rules file should do almost everything correctly with many source packages:

#!/usr/bin/make -f
%:
    dh $@

For each of the four targets described above, you can run dh with --no-act to see what it would do. For example:

$ dh build --no-act
   dh_testdir
   dh_update_autotools_config
   dh_auto_configure
   dh_auto_build
   dh_auto_test

Each of those helpers has a manual page. Helpers starting with dh_auto_ are a bit “magic”. For example, dh_auto_configure will try to automatically configure a package prior to building: it will detect the build system and invoke ./configure, cmake or Makefile.PL.

If one of the helpers does not do the “right” thing, you can replace it by using an override target:

override_dh_auto_configure:
    ./configure --with-some-grog

Those helpers are also configurable, so you can just alter their behaviour a bit by invoking them with additional options:

override_dh_auto_configure:
    dh_auto_configure -- --with-some-grog

This way, ./configure will be called with your custom flag but also with a lot of default flags like --prefix=/usr for better integration.

In the initial memcached example, we overrode all those “magic” targets. dh_auto_clean, dh_auto_configure and dh_auto_build are converted to no-ops to avoid any unexpected behaviour. dh_auto_install is hijacked to do all the build process. Additionally, we modified the behavior of the dh_gencontrol helper by forcing the version number instead of using the one from debian/changelog.

Automatic builds§

As memcached is an autoconf-enabled package, dh knows how to build it: ./configure && make && make install. Therefore, we can let it handle most of the work with this debian/rules file:

#!/usr/bin/make -f

DISTRIBUTION = $(shell lsb_release -sr)
VERSION = 1.4.25
PACKAGEVERSION = $(VERSION)-0~$(DISTRIBUTION)0
TARBALL = memcached-$(VERSION).tar.gz
URL = http://www.memcached.org/files/$(TARBALL)

%:
    dh $@ --with systemd

override_dh_auto_clean:
    wget -N --progress=dot:mega $(URL)
    tar --strip-components=1 -xf $(TARBALL)

override_dh_auto_test:
    # Don't run the whitespace test
    rm t/whitespace.t
    dh_auto_test

override_dh_gencontrol:
    dh_gencontrol -- -v$(PACKAGEVERSION)

The dh_auto_clean target is hijacked to download and set up the source tree9. We don’t override the dh_auto_configure step, so dh will execute the ./configure script with the appropriate options. We don’t override the dh_auto_build step either: dh will execute make. dh_auto_test is invoked after the build and it will run the memcached test suite. We need to override it because one of the tests complains about odd whitespace in the debian/ directory. We suppress this rogue test and let dh_auto_test execute the rest of the suite. dh_auto_install is not overridden either, so dh will execute some variant of make install.

To get a better sense of the difference, here is a diff:

--- memcached-intermediate/debian/rules 2016-04-30 14:02:37.425593362 +0200
+++ memcached/debian/rules  2016-05-01 14:55:15.815063835 +0200
@@ -12,10 +12,9 @@
 override_dh_auto_clean:
-override_dh_auto_test:
-override_dh_auto_build:
-override_dh_auto_install:
    wget -N --progress=dot:mega $(URL)
    tar --strip-components=1 -xf $(TARBALL)
-   ./configure --prefix=/usr
-   make
-   make install DESTDIR=debian/memcached
+
+override_dh_auto_test:
+   # Don't run the whitespace test
+   rm t/whitespace.t
+   dh_auto_test

It is up to you to decide if dh can do some work for you, but you could try to start from a minimal debian/rules and only override some targets.

Install additional files§

While make install installed the essential files for memcached, you may want to put additional files in the binary package. You could use cp in your build recipe, but you can also declare them:

  • files listed in debian/memcached.docs will be copied to /usr/share/doc/memcached by dh_installdocs,
  • files listed in debian/memcached.examples will be copied to /usr/share/doc/memcached/examples by dh_installexamples,
  • files listed in debian/memcached.manpages will be copied to the appropriate subdirectory of /usr/share/man by dh_installman,

Here is an example using wildcards for debian/memcached.docs:

doc/*.txt

If you need to copy some files to an arbitrary location, you can list them along with their destination directories in debian/memcached.install and dh_install will take care of the copy. Here is an example:

scripts/memcached-tool usr/bin

Using those files makes the build process more declarative. It is a matter of taste and you are free to use cp in debian/rules instead. You can review the whole package tree on GitHub.

Other examples§

The GitHub repository contains some additional examples. They all follow the same scheme:

  • dh_auto_clean is hijacked to download and setup the source tree
  • dh_gencontrol is modified to use a computed version

Notably, you’ll find daemons in Java, Go, Python and Node.js. The goal of those examples is to demonstrate that using Debian tools to build Debian packages can be straightforward. Hope this helps.


  1. People may remember the time before debhelper 7.0.50 (circa 2009) when debian/rules was a daunting beast. However, nowadays, the boilerplate is quite reduced. 

  2. The complexity is not the only reason. Those alternative tools enable the creation of RPM packages, something that Debian tools obviously don’t. 

  3. There are many ways to version a package. Again, if you want to be pragmatic, the proposed solution should be good enough for Ubuntu. On Debian, it doesn’t cover upgrade from one distribution version to another, but we assume that nowadays, systems get reinstalled instead of being upgraded. 

  4. You also need to install devscripts and equivs package. 

  5. It’s also possible to use a script provided by upstream. However, there is no such thing as an init script that works on all distributions. Compare the proposed script with the skeleton, and check if it uses start-stop-daemon and sources /lib/lsb/init-functions before considering it. If it seems to fit, you can install it yourself in debian/memcached/etc/init.d/. debhelper will ensure its proper integration. 

  6. Instead, a user wanting to customize the options is expected to edit the unit with systemctl edit. 

  7. See #822670 

  8. The Debian Policy doesn’t provide any hint for the naming convention of those system users. A common usage is to prefix the daemon name with an underscore (like _memcached). Another common usage is to use Debian- as a prefix. The main drawback of the latter solution is that the name is likely to be replaced by the UID in ps and top because of its length. 

  9. We could call dh_auto_clean at the end of the target to let it invoke make clean. However, it is assumed that a fresh checkout is used before each build. 

by Vincent Bernat at May 02, 2016 07:25 PM

April 30, 2016

SysAdmin1138

Resumè of failure

There has been a Thing going through twitter lately, about a Princeton Prof who posted a resume of failures.

About that...

This is not a bad idea, especially for those of us in Ops or bucking for 'Senior' positions. Why? Because in my last job hunt, a very large percentage of interviewers asked a question like this:

Please describe your biggest personal failure, and what you learned from it?

That's a large scope. How to pick which one?

What was your biggest interpersonal failure, and how did you recover from it?

In a 15+ year career, figuring out which is 'biggest' is a challenge. But first, I need to remember what they are. For this one, I've been using events that happened around 2001; far enough back that they don't really apply to the person I am now. This is going to be a problem soon.

What was your biggest production-impacting failure, and how was the post-mortem process handled?

Oh, do I go for 'most embarrassing,' or, 'most educational'? I'd snark about 'so many choices', but my memory tends to wallpaper over the more embarrassing fails in ways that make remembering them during an interview next to impossible. And in this case, the 'post-mortem process' bit at the end actually rules out my biggest production-impacting problem... there wasn't a post-mortem, other than knowing looks of, you're not going to do that again, right?

Please describe your biggest failure of working with management on something.

Working in service-organizations as long as I have, I have a lot of 'failure' here. Again, picking the right one to use in an interview is a problem.

You begin to see what I'm talking about here. If I had realized that my failures would be something I needed to both keep track of, and keep adequate notes on to refer back to them 3, 5, 9, 14 years down the line, I would have been much better prepared for these interviews. The interviewers are probing how I behave when Things Are Not Going Right, since that sheds far more light on a person than Things Are Going Perfectly projects.

A Resumè of Failure would have been an incredibly useful thing to have. Definitely do not post it online, since hiring managers are looking for rule-outs to thin the pile of applications. But keep it next to your copy of your resume, next to your References from Past Managers list.

by SysAdmin1138 at April 30, 2016 01:53 PM

Anton Chuvakin - Security Warrior

April 27, 2016

Raymii.org

Build a FreeBSD 10.3-release Openstack Image with bsd-cloudinit

We are going to prepare a FreeBSD image for Openstack deployment. We do this by creating a FreeBSD 10.3-RELEASE instance, installing it and converting it using bsd-cloudinit. We'll use the CloudVPS public Openstack cloud for this. We'll be using the Openstack command line tools, like nova, cinder and glance.

April 27, 2016 12:00 AM

April 26, 2016

Electricmonk.nl

Ansible-cmdb v1.14: Generate a host overview of Ansible facts.

I've just released ansible-cmdb v1.14. Ansible-cmdb takes the output of Ansible's fact gathering and converts it into a static HTML overview page containing system configuration information. It supports multiple templates and extending information gathered by Ansible with custom data.

This release includes the following bugfixes and feature improvements:

  • Look for ansible.cfg and use hostfile setting.
  • html_fancy: Properly sort vcpu and ram columns.
  • html_fancy: Remember which columns the user has toggled.
  • html_fancy: display groups and hostvars even if no host information was collected.
  • html_fancy: support for facter and custom facts.
  • html_fancy: Hide sections if there are no items for it.
  • html_fancy: Improvements in the rendering of custom variables.
  • Apply Dynamic Inventory vars to all hostnames in the group.
  • Many minor bugfixes.

As always, packages are available for Debian, Ubuntu, Redhat, Centos and other systems. Get the new release from the Github releases page.

by admin at April 26, 2016 01:30 PM

April 25, 2016

The Geekess

White Corporate Feminism

When I first went to the Grace Hopper Celebration of Women in Computing conference, it was magical. Being a woman in tech means I’m often the only woman in a team of male engineers, and if there’s more than one woman on a team, it’s usually because we have a project manager or marketing person who is a woman.

Going to the Grace Hopper conference, and being surrounded by women engineers and women computer science students, allowed me to relax in a way that I could never do in a male-centric space. I could talk with other women who just understood things like the glass ceiling and having to be constantly on guard in order to “fit in” with male colleagues. I had crafted a persona, an armor of collared shirts and jeans, trained myself to interrupt in order to make my male colleagues listen, and lied to myself and others that I wasn’t interested in “girly” hobbies like sewing or knitting. At Grace Hopper, surrounded by women, I could stop pretending, and try to figure out how to just be myself. To take a breath, stop interrupting, and cherish the fact that I was listened to and given space to listen to others.

However, after a day or so, I began to feel uneasy about two particular aspects of the Grace Hopper conference. I felt uneasy watching how aggressively the corporate representatives at the booths tried to persuade the students to join their companies. You couldn’t walk into the ballroom for keynotes without going through a gauntlet of recruiters. When I looked around the ballroom at the faces of the women surrounding me, I realized the second thing that made me uneasy. Even though Grace Hopper was hosted in Atlanta that year, a city that is 56% African American, there weren’t that many women of color attending. We’ve also seen the Grace Hopper conference feature more male keynote speakers, which is problematic when the goal of the conference is to allow women to connect to role models that look like them.

When I did a bit of research for this blog post, I looked at the board member list for Anita Borg Institute, who organizes the Grace Hopper Conference. I was unsurprised to see major corporate executives hold the majority of Anita Borg Institute board seats. However, I was curious why the board member page had no pictures on it. I used Google Image search in combination with the board member’s name and company to create this image:
anita-borg-board

My unease was recently echoed by Cate Huston, who also noticed the trend towards corporations trying to co-opt women’s only spaces to feed women into their toxic hiring pipeline. Last week, I also found this excellent article on white feminism, and how white women need to allow people of color to speak up about the problematic aspects of women-only spaces. There was also an interesting article last week about how “women’s only spaces” can be problematic for trans women to navigate if they don’t “pass” the white-centric standard of female beauty. The article also discusses that by promoting women-only spaces as “safe”, we are unintentionally promoting the assumption that women can’t be predators, unconsciously sending the message to victims of violent or abusive women that they should remain silent about their abuse.

So how do we expand women-only spaces to be more inclusive, and move beyond white corporate feminism? It starts with recognizing that the problem often lies with the white women who start initiatives, and fail to bring in partners who are people of color. We also need to find ways to fund inclusive spaces and diversity efforts without big corporate backers.

We also need to take a critical look at how well-meaning diversity efforts often center around improving tech for white women. When you hear a white male say, “We need more women in X community,” take a moment to question them on why they’re so focused on women and not also bringing in more people of color, people who are differently abled, or LGBTQ people. We need to figure out how to expand the conversation beyond white women in tech, both in external conversations, and in our own projects.

One of the projects I volunteer for is Outreachy, a three-month paid internship program to increase diversity in open source. In 2011, the coordinators were told the language around encouraging “only women” to apply wasn’t trans-inclusive, so they changed the application requirements to clarify the program was open to both cis and trans women. In 2013, they clarified that Outreachy was also open to trans men and gender queer people. Last year, we wanted to open the program to men who were traditionally underrepresented in tech. After taking a long hard look at the statistics, we expanded the program to include all people in the U.S. who are Black/African American, Hispanic/Latin@, American Indian, Alaska Native, Native Hawaiian, or Pacific Islander. We want to expand the program to additional people who are underrepresented in tech in other countries, so please contact us if you have good sources of diversity data for your country.

But most importantly, white people need to learn to listen to people of color instead of being a “white savior”. We need to believe people of color’s lived experience, amplify their voices when people of color tell us they feel isolated in tech, and stop insisting “not all white women” when people of color critique a problematic aspect of the feminist community.

Trying to move into more intersectional feminism is one of my goals, which is why I’m really excited to speak at the Richard Tapia Celebration of Diversity in Computing. I hadn’t heard of it until about a year ago (probably because they have less corporate sponsorship and less marketing), but it’s been described to me as “Grace Hopper for people of color”. I’m excited to talk to people about open source and Outreachy, but most importantly, I want to go and listen to people who have lived experiences that are different from mine, so I can promote their voices.

If you can kick in a couple dollars a month to help me cover costs for the conference, please donate on my Patreon. I’ll be writing about the people I meet at Tapia on my blog, so look for a follow-up post in late September!

by sarah at April 25, 2016 04:25 PM

April 23, 2016

That grumpy BSD guy

Does Your Email Provider Know What A "Joejob" Is?

Anecdotal evidence seems to indicate that Google and possibly other mail service providers are either quite ignorant of history when it comes to email and spam, or are applying unsavoury tactics to capture market dominance.

The first inklings that Google had reservations about delivering mail coming from my bsdly.net domain came earlier this year, when I was contacted by friends who have left their email service in the hands of Google, and it turned out that my replies to their messages did not reach their recipients, even when my logs showed that the Google mail servers had accepted the messages for delivery.

Contacting Google about matters like these means you first need to navigate some web forums. In this particular case (I won't give a direct reference, but a search on the obvious keywords will likely turn up the relevant exchange), the denizens of that web forum appeared to be more interested in demonstrating their BOFHishness than actually providing input on debugging and resolving an apparent misconfiguration that was making valid mail disappear without a trace after it had entered Google's systems.

The forum is primarily intended as a support channel for people who host their mail at Google (this becomes very clear when you try out some of the web accessible tools to check domains not hosted by Google), so the only practical result was that I finally set up DKIM signing for outgoing mail from the domain, in addition to the SPF records that were already in place. I'm in fact less than fond of either of these SMTP addons, but there were other channels for contact with my friends anyway, and I let the matter rest there for a while.
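If you want to see what the rest of the world sees, both records are easy to pull up with dig; the selector x below is the one visible in the DKIM-Signature header quoted later in this piece, so adjust domain and selector to your own setup:

dig +short txt bsdly.net
dig +short txt x._domainkey.bsdly.net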

If you've read earlier instalments in this column, you will know that I've operated bsdly.net with an email service since 2004, along with a handful of other domains from some years before the bsdly.net domain was set up, sharing to varying extents the same infrastructure. One feature of the bsdly.net and associated domains setup is that in 2006 we started publishing a list of known bad addresses in our domains, which we use as spamtrap addresses, along with the blacklist that the greytrapping generates.

Over the years the list of spamtrap addresses -- harvested almost exclusively from records in our logs and greylists of apparent bounces of messages sent with forged From: addresses in our domains -- has grown to a total of 29757 spamtraps, a full 7387 in the bsdly.net domain alone. At the time I'm writing this, 31162 hosts have attempted to deliver mail to one of those spamtrap addresses. The exact numbers will likely change by the time you read this -- blacklisted addresses expire 24 hours after last contact, and a few new spamtrap addresses generally turn up each week. With some simple scriptery, we pick them out of logs and greylists as they appear, and sometimes entire days pass without new candidates appearing. For a more general overview of how I run the blacklist, see this post from 2013.
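To give an idea of the scriptery involved, on a spamd host the harvesting can be as simple as the sketch below; treat it as an illustration only, since the spamdb field positions and the domain pattern will need adjusting to your own setup:

# pull the envelope recipient out of greylist entries and keep only
# addresses in our own domains as candidate spamtraps
spamdb | grep '^GREY' | awk -F'|' '{ print $5 }' | tr -d '<>' | \
    grep -i '@bsdly\.net' | sort -u >> candidate-spamtraps.txt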

In addition to the spamtrap addresses, the bsdly.net domain has some valid addresses, including my own, and I've set up a few addresses (actually aliases) for specific purposes, mainly so I can filter them into relevant mailboxes at the receiving end. Despite all our efforts to stop spam, occasionally spam is delivered to those aliases too (see e.g. the ethics of running the traplist page for some amusing examples).

Then this morning a piece of possibly well-intended but quite clearly unwanted commercial email turned up, addressed to one of those aliases. For no particularly good reason, I decided to send an answer to the message, telling them that whoever sold them the address list they were using was ripping them off.

That message bounced, and it turns out that the domain was hosted at Google.

Reading that bounce message is quite interesting, because if you read the page they link to, it looks very much like whoever runs Google Mail doesn't know what a joejob is.

The page, which again is intended mainly for Google's own customers, specifies that you should set up SPF and DKIM for domains. But looking at the headers, the message they reject passes both those criteria:

Received-SPF: pass (google.com: domain of peter@bsdly.net designates 2001:16d8:ff00:1a9::2 as permitted sender) client-ip=2001:16d8:ff00:1a9::2;
Authentication-Results: mx.google.com;
dkim=pass (test mode) header.i=@bsdly.net;
spf=pass (google.com: domain of peter@bsdly.net designates 2001:16d8:ff00:1a9::2 as permitted sender) smtp.mailfrom=peter@bsdly.net
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=bsdly.net; s=x;
h=Content-Transfer-Encoding:Content-Type:In-Reply-To:MIME-Version:Date:Message-ID:From:References:To:Subject; bh=OonsF8beQz17wcKmu+EJl34N5bW6uUouWw4JVE5FJV8=;
b=hGgolFeqxOOD/UdGXbsrbwf8WuMoe1vCnYJSTo5M9W2k2yy7wtpkMZOmwkEqZR0XQyj6qoCSriC6Hjh0WxWuMWv5BDZPkOEE3Wuag9+KuNGd7RL51BFcltcfyepBVLxY8aeJrjRXLjXS11TIyWenpMbtAf1yiNPKT1weIX3IYSw=;

Then for reasons known only to themselves, or most likely due to the weight they assign to some unknown data source, they reject the message anyway.

We do not know what that data source is. But with the more than seven thousand bogus bsdly.net addresses that have generated bounces we've registered, it's likely that the number of invalid bsdly.net From: addresses Google's systems have seen is far larger than the number of valid ones. The actual number of bogus addresses is likely higher, though: in the early days the collection process had enough manual steps that we're bound to have missed some. Valid bsdly.net addresses that do not eventually resolve to a mailbox I read are rare if not entirely non-existent. But the 'bulk mail' classification is bizarre if you even consider checking Received: headers.

The reason Google's systems have most likely seen more bogus bsdly.net From: addresses than valid ones is that, by historical accident, faking sender email addresses in SMTP dialogues is trivial.

Anecdotal evidence indicates that if a domain exists it will sooner or later be used in the from: field of some spam campaign where the messages originate somewhere else completely, and for that very reason the SPF and DKIM mechanisms were specified. I find both mechanisms slightly painful and inelegant, but used in their proper context, they do have their uses.

For the domains I've administered, we started seeing log entries -- and, in the cases where the addresses were actually deliverable, actual bounce messages -- for messages that definitely did not originate at our site and never went through our infrastructure, long before bsdly.net was created. We didn't even think about recording those addresses until a practical use for them suddenly appeared with the greytrapping feature in OpenBSD 3.3 in 2003.

A little while after upgrading the relevant systems to OpenBSD 3.3, we had a functional greytrapping system going, and at some point before the 2007 blog post I started publishing the generated blacklist. The rest is, well, what got us to where we are today.

From the data we see here, mail sent with faked sender addresses happens continuously and most likely to all domains, sooner or later. Joejobs that actually hit deliverable addresses happen too. Raw data from a campaign in late 2014 that used my main address as the purported sender is preserved here, collected with a mind to writing an article about the incident and comparing to a similar batch from 2008. That article could still be written at some point, and in the meantime the messages and specifically their headers are worth looking into if you're a bit like me. (That is, if you get some enjoyment out of such things as discovering the mindbogglingly bizarre and plain wrong mail configurations some people have apparently chosen to live with. And unless your base64 sightreading skills are far better than mine, some of the messages may require a bit of massaging with MIME tools to extract text you can actually read.)

Anyone who runs a mail service and bothers even occasionally to read mail server logs will know that joejobs and spam campaigns with fake and undeliverable return addresses happen all the time. If Google's mail admins are not aware of that fact, well, I'll stop right there and refuse to believe that they can be that incompetent.

The question then becomes: why are they doing this? Are they giving other independent operators the same treatment? If this is part of some kind of intimidation campaign (think "sign up for our service and we'll get your mail delivered, but if you don't, delivering to domains that we host becomes your problem"), I would think it a less than useful strategy when there are already antitrust probes underway; these things can change direction as discoveries dictate.

Normally I would put oddities like the ones I saw in this case down to a silly misconfiguration, some combination of incompetence and arrogance and, quite possibly, some out of control automation thrown in. But here we are seeing clearly wrong behavior from a company that prides itself on hiring only the smartest people they can find. That doesn't totally rule out incompetence or plain bad luck, but it makes for a very strange episode. (And lest we forget, here is some data on a previous episode involving a large US corporation, spam and general silliness. (Now with the external link fixed via the wayback machine.))

One other interesting question is whether other operators, big and small, behave in similar ways. If you have seen phenomena like this involving Google or other operators, I would like to hear from you by email (easily found in this article) or in comments.

Update 2016-05-03: With Google silent on all aspects of this story, it is not possible to pinpoint whether it was the public whining or something else that made a difference, but today a message aimed at one of those troublesome domains made it through to its intended recipient.

Much like the mysteriously disappeared messages, my logs show a "Message accepted for delivery" response from the Google servers, but unlike the earlier messages, this one actually appeared in the intended inbox. In the meantime, one of the useful bits of feedback I got on the article was that my IPv6 reverse DNS was in fact lacking. Today I fixed that, or at least made a plausible attempt to (you're very welcome to check with the tools available to you). And for possibly unrelated reasons, my test message made it through all the way to the intended recipient's mailbox.
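For the record, checking the reverse record for the address quoted in the headers earlier is a one-liner:

dig +short -x 2001:16d8:ff00:1a9::2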

[Opinions offered here are my own and may or may not reflect the views of my employer.]


I will be giving a PF tutorial at BSDCan 2016, and I welcome your questions now that I'm revising the material for that session. See this blog post for some ideas (note that I'm only giving the PF tutorial this time around).

by Peter N. M. Hansteen (noreply@blogger.com) at April 23, 2016 05:56 PM

April 21, 2016

Michael Biven

Our Resistance to Change has Sent Us the Long Way Round to Where We’re Going

Eight years after Mark Mayo wrote about how cloud computing would change our profession pretty much everything still applies and people are still trying to adapt to the changes he highlighted. The post is primarily about the changes that individual systems administrators would be facing, but it also describes how the way we work would need to adapt.

The problem is that on the operations and systems side of web development our overall reaction to this was slow, and I think we confused what the actual goal should be with something that was only temporary.

Configuration Management is the configure; make; make install of infrastructure.

The big change that happened during that time was the public cloud became a reality and we were able to programmatically interact with the infrastructure. That change opened up the possibility of adopting software development best practices. The reaction to this was better configuration management with tools like Puppet and Chef.

Five years after Mark's post (2013), people started working on containers again, along with a new read-only OS. Just as the public cloud was important for bringing an API to interact with and manage an environment, Docker and CoreOS gave early adopters the opportunity to start thinking about delivering artifacts instead of updating existing things with config management. Immutable infrastructure finally became practical. Years ago we gave up on manually building software and started deploying packages: configure; make; make install was replaced with rpm, apt-get, or yum.

We stopped deploying a process that would be repeated, either manually or via scripts, and replaced it with the result. That's what we've been missing. We set our sights lower and settled for just more manual work. We would never pull down a diff and do a make install; we would just update to the latest package. Puppet, Chef, Ansible and Salt have held us back.

Wasn’t a benefit of infrastructure as code the idea that we could move the environment into a CI/CD pipeline as a first class citizen?

The build pipelines that have existed in software development are now really a thing for infrastructure. Testing and a way (Packer and Docker) to create artifacts have been available for a while, but now we have tools (Terraform, Mesos, and Kubernetes) to manage and orchestrate how those artifacts are used.
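To make the artifact idea concrete with a purely hypothetical sketch (the image name and registry below are made up), the pipeline builds and pushes an immutable, uniquely tagged artifact, and the deploy step only ever references that tag:

# build and test produce the artifact exactly once
docker build -t registry.example.com/myapp:$(git rev-parse --short HEAD) .
docker push registry.example.com/myapp:$(git rev-parse --short HEAD)
# deploys reference that tag; nothing on the running hosts is mutated in place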

You better learn to code, because we only need so many people to write documentation.

There are only two things that we all really work on. Communication and dealing with change. The communication can be writing text or writing code. Sometimes the change is planned and sometimes it’s unplanned. But at the end of the day these two things are really the only things we do. We communicate, perform and react to changes.

If you cannot write code then you can’t communicate, perform and react as effectively as you could. I think it’s safe to say this is why Mark highlighted those three points eight years ago.

  1. We have to accept and plan for a future where getting access to bare metal will be a luxury. Clients won’t want to pay for it, and frankly we won’t have the patience or time to deal with waiting more than 5 minutes for any infrastructure soon anyways.
  2. We need to learn to program better, with richer languages.
  3. Perhaps the biggest challenge we’re going to have is one we share with developers: How to deal with a world where everything is distributed, latency is wildly variable, where we have to scale for throughput more than anything else. I believe we’re only just starting to see the need for “scale” on the web and in systems work.

– Mark Mayo, Cloud computing is a sea change - How sysadmins can prepare

April 21, 2016 02:47 PM

April 19, 2016

R.I.Pienaar

Hiera Node Classifier 0.7

A while ago I released a Puppet 4 Hiera based node classifier to see what is next for hiera_include(). This had the major drawback that you couldn’t set an environment with it like with a real ENC since Puppet just doesn’t have that feature.

I’ve released an update to the classifier that now includes a small real ENC, which takes care of setting the environment based on certname and then boots up the classifier on the node.

Usage


ENCs tend to know only about the certname. You could imagine getting the most recently seen facts from PuppetDB etc., but I do not really want to assume things about people's infrastructure. So for now this sticks to supporting classification based on certname only.

It’s really pretty simple. Let’s assume you want to classify node1.example.net: you just need to have a node1.example.net.yaml (or JSON) file somewhere in a path. Typically this is going to be in a directory environment somewhere, but it could of course also be a site-wide hiera directory.

In it you put:

classifier::environment: development

And this node will form part of that environment. Past that, everything in the previous post just applies, so you make rules or assign classes as normal, and while doing so you have full access to node facts.

The classifier now exposes some extra information to help you determine whether the ENC is in use and which file it used to classify the node:

  • $classifier::enc_used – boolean that indicates if the ENC is in use
  • $classifier::enc_source – path to the data file that set the environment. undef when not found
  • $classifier::enc_environment – the environment the ENC is setting

It supports a default environment, which you set when configuring Puppet to use an ENC as below.

Configuring Puppet


Configuring Puppet is pretty simple for this:

[main]
node_terminus = exec
external_nodes = /usr/local/bin/classifier_enc.rb --data-dir /etc/puppetlabs/code/hieradata --node-pattern nodes/%%.yaml

Apart from these you can pass --default development to default to that environment rather than production, and you can add --debug /tmp/enc.log to get a bunch of debug output.

The data-dir above is for your classic Hiera single data dir setup, but you can also use globs to support environment data like --data-dir /etc/puppetlabs/code/environments/*/hieradata. It will now search the entire glob until it finds a match for the certname.
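Putting the options above together, a per-environment setup might look something like this (the paths are only an example):

[main]
node_terminus = exec
external_nodes = /usr/local/bin/classifier_enc.rb --data-dir /etc/puppetlabs/code/environments/*/hieradata --node-pattern nodes/%%.yaml --default development --debug /tmp/enc.log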

That’s really all there is to it; it produces a classification like this:

---
environment: production
classes:
  classifier:
    enc_used: true
    enc_source: /etc/puppetlabs/code/hieradata/node.example.yaml
    enc_environment: production

Conclusion


That’s really all there is to it. I think this might hit a high percentage of use cases and bring a key ability to the hiera classifiers. It’s a tad annoying that there is really no way to do better granularity than per node here; I might come up with something else, but I don’t really want to go too deep down that hole.

In future I’ll look at adding a class to install the classifier into some path and configure Puppet; for now that’s up to the user. It’s shipped in the bin dir of the module.

by R.I. Pienaar at April 19, 2016 03:14 PM

April 18, 2016

Slaptijack

Two Column for Loop in bash

I had this interesting question the other day. Someone had a file with two columns of data in it. They wanted to assign each column to a different variable and then take action using those two variables. Here's the example I wrote for them:

IFS=$'\n'
for LINE in $(cat data_file); do
    VARA=$(echo ${LINE} | awk '{ print $1 }')
    VARB=$(echo ${LINE} | awk '{ print $2 }')
    echo "VARA is ${VARA}"
    echo "VARB is ${VARB}"
done

The key here is to set the internal field separator ($IFS) to $'\n' so that the for loop iterates on lines rather than words. Then it's simply a matter of splitting each line into individual variables. In this case, I chose to use awk since it's a simple procedure and speed is not really an issue. Long-term, I would probably re-write this using arrays.
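For what it's worth, a while/read loop avoids touching $IFS entirely and does the splitting for you; a minimal sketch against the same data_file:

while read -r VARA VARB; do
    echo "VARA is ${VARA}"
    echo "VARB is ${VARB}"
done < data_file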

by Scott Hebert at April 18, 2016 01:00 PM

April 11, 2016

Colin Percival

Write opinionated workarounds

A few years ago, I decided that I should aim for my code to be as portable as possible. This generally meant targeting POSIX; in some cases I required slightly more, e.g., "POSIX with OpenSSL installed and cryptographic entropy available from /dev/urandom". This dedication made me rather unusual among software developers; grepping the source code for the software I have installed on my laptop, I cannot find any other examples of code with strictly POSIX compliant Makefiles, for example. (I did find one other Makefile which claimed to be POSIX-compatible; but in actual fact it used a GNU extension.) As far as I was concerned, strict POSIX compliance meant never having to say you're sorry for portability problems; if someone ran into problems with my standard-compliant code, well, they could fix their broken operating system.

And some people did. Unfortunately, despite the promise of open source, many users were unable to make such fixes themselves, and for a rather large number of operating systems the principle of standards compliance seems to be more aspirational than actual. Given the limits which would otherwise be imposed on the user base of my software, I eventually decided that it was necessary to add workarounds for some of the more common bugs. That said, I decided upon two policies:

  1. Workarounds should be disabled by default, and only enabled upon detecting an afflicted system.
  2. Users should be warned that a workaround is being applied.
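As a sketch of what those two policies can look like in a configure script (the platform test and the FOO_WORKAROUND flag here are purely hypothetical stand-ins):

# The workaround is off by default; enable it only when the afflicted
# system is positively detected, and tell the user that it is being applied.
FOO_WORKAROUND_CFLAGS=""
if [ "$(uname -s)" = "BrokenOS" ]; then
    echo "WARNING: BrokenOS detected; enabling workaround for its broken foo()" >&2
    FOO_WORKAROUND_CFLAGS="-DFOO_WORKAROUND"
fi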

April 11, 2016 10:00 AM

April 10, 2016

Vincent Bernat

Testing network software with pytest and Linux namespaces

Started in 2008, lldpd is an implementation of IEEE 802.1AB-2005 (aka LLDP) written in C. While it contains some unit tests, as with much network-related software of the time, the coverage of those is pretty poor: they are hard to write because the code is written in an imperative style and tightly coupled with the system. It would require extensive mocking1. While a rewrite (complete or iterative) would help to make the code more test-friendly, it would be quite an effort and would likely introduce operational bugs along the way.

To get better test coverage, the major features of lldpd are now verified through integration tests. Those tests leverage Linux network namespaces to setup a lightweight and isolated environment for each test. They run through pytest, a powerful testing tool.

pytest in a nutshell

pytest is a Python testing tool whose primary use is to write tests for Python applications but is versatile enough for other creative usages. It is bundled with three killer features:

  • you can directly use the assert keyword,
  • you can inject fixtures in any test function, and
  • you can parametrize tests.

Assertions

With unittest, the unit testing framework included with Python, and many similar frameworks, unit tests have to be encapsulated into a class and use the provided assertion methods. For example:

class testArithmetics(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 3, 4)

The equivalent with pytest is simpler and more readable:

def test_addition():
    assert 1 + 3 == 4

pytest will analyze the AST and display useful error messages in case of failure. For further information, see Benjamin Peterson’s article.

Fixtures

A fixture is the set of actions performed in order to prepare the system to run some tests. With classic frameworks, you can only define one fixture for a set of tests:

class testInVM(unittest.TestCase):

    def setUp(self):
        self.vm = VM('Test-VM')
        self.vm.start()
        self.ssh = SSHClient()
        self.ssh.connect(self.vm.public_ip)

    def tearDown(self):
        self.ssh.close()
        self.vm.destroy()

    def test_hello(self):
        stdin, stdout, stderr = self.ssh.exec_command("echo hello")
        stdin.close()
        self.assertEqual(stderr.read(), b"")
        self.assertEqual(stdout.read(), b"hello\n")

In the example above, we want to test various commands on a remote VM. The fixture launches a new VM and configures an SSH connection. However, if the SSH connection cannot be established, the fixture will fail and the tearDown() method won’t be invoked. The VM will be left running.

Instead, with pytest, we could do this:

@pytest.yield_fixture
def vm():
    r = VM('Test-VM')
    r.start()
    yield r
    r.destroy()

@pytest.yield_fixture
def ssh(vm):
    ssh = SSHClient()
    ssh.connect(vm.public_ip)
    yield ssh
    ssh.close()

def test_hello(ssh):
    stdin, stdout, stderr = ssh.exec_command("echo hello")
    stdin.close()
    assert stderr.read() == b""
    assert stdout.read() == b"hello\n"

The first fixture will provide a freshly booted VM. The second one will set up an SSH connection to the VM provided as an argument. Fixtures are used through dependency injection: just give their names in the signature of the test functions and fixtures that need them. Each fixture only handles the lifetime of one entity. Whether a dependent test function or fixture succeeds or fails, the VM will always be destroyed in the end.

Parameters

If you want to run the same test several times with a varying parameter, you can dynamically create test functions or use one test function with a loop. With pytest, you can parametrize test functions and fixtures:

@pytest.mark.parametrize("n1, n2, expected", [
    (1, 3, 4),
    (8, 20, 28),
    (-4, 0, -4)])
def test_addition(n1, n2, expected):
    assert n1 + n2 == expected

Testing lldpd

The general plan to test a feature in lldpd is the following:

  1. Setup two namespaces.
  2. Create a virtual link between them.
  3. Spawn a lldpd process in each namespace.
  4. Test the feature in one namespace.
  5. Check with lldpcli that we get the expected result in the other.

Here is a typical test using the most interesting features of pytest:

@pytest.mark.skipif('LLDP-MED' not in pytest.config.lldpd.features,
                    reason="LLDP-MED not supported")
@pytest.mark.parametrize("classe, expected", [
    (1, "Generic Endpoint (Class I)"),
    (2, "Media Endpoint (Class II)"),
    (3, "Communication Device Endpoint (Class III)"),
    (4, "Network Connectivity Device")])
def test_med_devicetype(lldpd, lldpcli, namespaces, links,
                        classe, expected):
    links(namespaces(1), namespaces(2))
    with namespaces(1):
        lldpd("-r")
    with namespaces(2):
        lldpd("-M", str(classe))
    with namespaces(1):
        out = lldpcli("-f", "keyvalue", "show", "neighbors", "details")
        assert out['lldp.eth0.lldp-med.device-type'] == expected

First, the test will be executed only if lldpd was compiled with LLDP-MED support. Second, the test is parametrized. We will execute four distinct tests, one for each role that lldpd should be able to take as an LLDP-MED-enabled endpoint.

The signature of the test has four parameters that are not covered by the parametrize() decorator: lldpd, lldpcli, namespaces and links. They are fixtures. A lot of magic happens in those to keep the actual tests short:

  • lldpd is a factory to spawn an instance of lldpd. When called, it will setup the current namespace (setting up the chroot, creating the user and group for privilege separation, replacing some files to be distribution-agnostic, …), then call lldpd with the additional parameters provided. The output is recorded and added to the test report in case of failure. The module also contains the creation of the pytest.config.lldpd object that is used to record the features supported by lldpd and skip non-matching tests. You can read fixtures/programs.py for more details.

  • lldpcli is also a factory, but it spawns instances of lldpcli, the client to query lldpd. Moreover, it will parse the output in a dictionary to reduce boilerplate.

  • namespaces is one of the most interesting pieces. It is a factory for Linux namespaces. It will spawn a new namespace or refer to an existing one. It is possible to switch from one namespace to another (with with) as they are contexts. Behind the scenes, the factory maintains the appropriate file descriptors for each namespace and switches to them with setns(). Once the test is done, everything is wiped out as the file descriptors are garbage collected. You can read fixtures/namespaces.py for more details. It is quite reusable in other projects2.

  • links contains helpers to handle network interfaces: creation of virtual ethernet links between namespaces, creation of bridges, bonds and VLANs, etc. It relies on the pyroute2 module. You can read fixtures/network.py for more details.

You can see an example of a test run on the Travis build for 0.9.2. Since each test is correctly isolated, it’s possible to run parallel tests with pytest -n 10 --boxed. To catch even more bugs, both the address sanitizer (ASAN) and the undefined behavior sanitizer (UBSAN) are enabled. In case of a problem, notably a memory leak, the faulty program will exit with a non-zero exit code and the associated test will fail.
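If you want to reproduce that kind of run locally, one plausible recipe (the exact flags and the test directory depend on your compiler and checkout) is to build with the sanitizers enabled and then run the suite in parallel as root:

./configure CFLAGS="-fsanitize=address,undefined" LDFLAGS="-fsanitize=address,undefined"
make
# then, from the directory holding the integration tests:
sudo pytest -n 10 --boxed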


  1. A project like cwrap would definitely help. However, it lacks support for Netlink and raw sockets that are essential in lldpd operations. 

  2. There are three main limitations in the use of namespaces with this fixture. First, when creating a user namespace, only root is mapped to the current user. With lldpd, we have two users (root and _lldpd). Therefore, the tests have to run as root. The second limitation is with the PID namespace. It’s not possible for a process to switch from one PID namespace to another. When you call setns() on a PID namespace, only children of the current process will be in the new PID namespace. The PID namespace is convenient to ensure everyone gets killed once the tests are terminated but you must keep in mind that /proc must be mounted in children only. The third limitation is that, for some namespaces (PID and user), all threads of a process must be part of the same namespace. Therefore, don’t use threads in tests. Use multiprocessing module instead. 

by Vincent Bernat at April 10, 2016 04:22 PM

April 09, 2016

HolisticInfoSec.org

toolsmith #115: Volatility Acuity with VolUtility

Yes, we've definitely spent our share of toolsmith time on memory analysis tools such as Volatility and Rekall, but for good reason. I contend that memory analysis is fundamentally one of the most important skills you'll develop and utilize throughout your DFIR career.
By now you should have read The Art of Memory Forensics; if you haven't, it's money well spent, so consider it an investment.
If there is one complaint, albeit a minor one, that analysts might raise specific to memory forensics tools, it's that they're very command-line oriented. While I appreciate this for speed and scripting, there are those among us who prefer a GUI. Who are we to judge? :-)
Kevin Breen's (@kevthehermit) VolUtility is a full-function web UI for Volatility, which fills a gap that's been on user wishlists for some time now.
When I reached out to Kevin regarding the current state of the project, he offered up a few good tidbits for user awareness.

1. Pull often. The project is still in its early stages and its early life is going to see a lot of tweaks, fixes, and enhancements as he finishes each of them.
2. If there is something that doesn’t work, could be better, or should be removed, open an issue. Kevin works best when people tell him what they want to see.
3. He's working with SANS to see VolUtility included in the SIFT distribution, and release a Debian package to make it easier to install. Vagrant and Docker instances are coming soon as well. 

The next two major VolUtility additions are:
1. Pre-Select plugins to run on image import.
2. Image Threat Score.

Notifications recently moved from notification bars to the toolbar, and there is now a right click context menu on the plugin output, which adds new features.

Installation

VolUtility installation is well documented on its GitHub site, but for the TLDR readers amongst you, here's the abbreviated version, step by step. This installation guidance assumes Ubuntu 14.04 LTS where Volatility has not yet been installed, nor have tools such as Git or Pip.
Follow this command set verbatim and you should be up and running in no time:
  1. sudo apt-get install git python-dev python-pip
  2. git clone https://github.com/volatilityfoundation/volatility
  3. cd volatility/
  4. sudo python setup.py install
  5. sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
  6. echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list
  7. sudo apt-get update
  8. sudo apt-get install -y mongodb-org
  9. sudo pip install pymongo pycrypto django virustotal-api distorm3
  10. git clone https://github.com/kevthehermit/VolUtility
  11. cd VolUtility/
  12. ./manage.py runserver 0.0.0.0:8000
Point your browser to http://localhost:8000 and there you have it.

VolUtility and an Old Friend

I pulled out an old memory image (hiomalvm02.raw) from the September 2011 toolsmith, where we first really explored Volatility; it was version 2.0 back then. :-) This memory image will give us the ability to do a quick comparison of our results from 2011 against a fresh run with VolUtility and Volatility 2.5.

VolUtility will ask you for the path to Volatility plugins and the path to the memory image you'd like to analyze. I introduced my plugins path as /home/malman/Downloads/volatility/volatility/plugins.


The image I'd stashed in Downloads as well, the full path being /home/malman/Downloads/HIOMALVM02.raw.


Upon clicking Submit, cats began loading stuffs. If you enjoy this as much as I do, the Help menu allows you to watch the loading page as often as you'd like.


If you notice any issues such as the image load hanging, check your console, it will have captured any errors encountered.
On my first run, I had not yet installed distorm3; the console view allowed me to troubleshoot the issue quickly.

Now down to business. In our 2011 post using this image, I ran imageinfo, connscan, pslist, pstree, and malfind. I also ran cmdline for good measure via VolUtility. Running plugins in VolUtility is as easy as clicking the associated green arrow for each plugin. The results will accumulate on the toolbar and at the top of the plugin selection pane, while the raw output for each plugin will appear beneath the plugin selection pane when you select View Output under Actions.


Results were indeed consistent with those from 2011 but enhanced by a few features. Imageinfo yielded WinXPSP3x86 as expected, connscan returned 188.40.138.148:80 as our evil IP and the associated suspect process ID of 1512. Pslist and pstree then confirmed parent processes and the evil emanated from an ill-conceived click via explorer.exe. If you'd like to export your results, it's as easy as selecting Export Output from the Action menu. I did so for pstree, as it is that plugin from whom all blessings flow, the results were written to pstree.csv.


We're reminded that explorer.exe (PID 1512) is the parent for cleansweep.exe (PID 3328) and that cleansweep.exe currently owns no threads but is likely the original source of pwn. We're thus reminded to explore (hah!) PID 1512 for information. VolUtility allows you to run commands directly from the Tools Bar, and I did so with vol.py -f /home/malman/Downloads/HIOMALVM02.raw malfind -p 1512.


Rather than regurgitate malfind results as noted from 2011 (you can review those for yourself), I instead used the VolUtility Tools Bar feature Yara Scan Memory. Be sure to follow Kevin's Yara installation guidance if you want to use this feature. Also remember to git pull! Kevin updated the Yara capabilities between the time I started this post and when I ran yarascan. Like he said, pull often. There is now a yararules folder in the VolUtility file hierarchy; I added spyeye.yar, created from Jean-Philippe Teissier's rule set. Remember, from the September 2011 post, we know that hiomalvm02.raw was taken from a system infected with SpyEye. I then selected Yara Scan Memory from the Tools Bar, and pointed to the just-added spyeye.yar file.


The results were immediate, and many, as expected.


You can also use String Search / yara rule from the Tools Bar Search Type field to accomplish similar goals, and you can get very granular with your string searches to narrow down results.
Remember that your sessions will persist thanks to VolUtility's use of MongoDB, so if you pull, then restart VolUtility, you'll be quickly right back where you left off.

In Closing

VolUtility is a great effort, getting better all the time, and I find its convenience irresistible. Kevin's doing fine work here, pull down the project, use it, and offer feedback or contributions. It's a no-brainer for VolUtility to belong in SIFT by default, but as you've seen, building it for yourself is straightforward and quick. Add it to your DFIR utility belt today.
As always, ping me via email or Twitter if you have questions: russ at holisticinfosec dot org or @holisticinfosec.

ACK

Thanks to Kevin (@kevthehermit) Breen for VolUtility and his feedback for this post.

by Russ McRee (noreply@blogger.com) at April 09, 2016 08:45 PM

Sean's IT Blog

What’s New in NVIDIA GRID Licensing

When GRID 2.0 was announced at VMworld 2015, it included a licensing component for the driver and software portion of the product.  NVIDIA has recently revised the licensing and simplified the licensing model.  They have also added a subscription-based model for customers that don’t want to buy a perpetual license and pay for support on a yearly basis.

Major Changes

There are a few major changes to the licensing model.  The first is that the Virtual Workstation Extended licensing tier has been deprecated, and the features from this level have been added into the Virtual Workstation licensing tier.  This means that high-end features, such as dedicating an entire GPU to a VM and CUDA support, are now available in the Virtual Workstation licensing tier.

The second major change is a licensing SKU for XenApp and Published Applications.  In the early version of GRID 2.0, licensing support for XenApp and Horizon Published Applications was complicated.  The new model provides for per-user licensing for server-based computing.

The third major change is a change to how the license is enforced.  In the original incarnation of GRID 2.0, a license server was required for utilizing the GRID 2.0 features.  That server handled license enforcement, and if it wasn’t available, or there were no licenses available, the desktops were not usable.  In the latest license revision, the license has shifted to EULA enforcement.  The license server is still required, but it is now used for reporting and capacity planning.

The final major change is the addition of a subscription-based licensing model.  This new model allows organizations to purchase licenses as they need them without having to do a large capital outlay.  The subscription model includes software support baked into the price.  Subscriptions can also be purchased in multi-year blocks, so I can pay for three years at one time.

One major difference between the perpetual and subscription models is what happens when support expires.  In the perpetual model, you own the license: if you allow support to expire, you can still use the licensed features, but you will not be able to get software updates.  In a subscription model, the licensed features are no longer available as soon as the subscription expires.

The new pricing for GRID 2.0 is:

Name                  Perpetual Licensing   Subscription Licensing (yearly)
Virtual Apps          $20 + $5 SUMS         $10
Virtual PC            $100 + $25 SUMS       $50
Virtual Workstation   $450 + $100 SUMS      $250

Software support for the 1st year is not included in the price when you purchase a perpetual license; purchasing the 1st year of support is required when buying perpetual licenses.  A license is also required if you plan to use direct pass-thru with a GRID card.


by seanpmassey at April 09, 2016 06:54 PM

April 04, 2016

Cryptography Engineering

What's the matter with PGP?

Image source: @bcrypt.
Last Thursday, Yahoo announced their plans to support end-to-end encryption using a fork of Google's end-to-end email extension. This is a Big Deal. With providers like Google and Yahoo onboard, email encryption is bound to get a big kick in the ass. This is something email badly needs.

So great work by Google and Yahoo! Which is why the following complaint is going to seem awfully ungrateful. I realize this and I couldn't feel worse about it.

As transparent and user-friendly as the new email extensions are, they're fundamentally just re-implementations of OpenPGP -- and non-legacy-compatible ones, too. The problem with this is that, for all the good PGP has done in the past, it's a model of email encryption that's fundamentally broken.

It's time for PGP to die. 

In the remainder of this post I'm going to explain why this is so, what it means for the future of email encryption, and some of the things we should do about it. Nothing I'm going to say here will surprise anyone who's familiar with the technology -- in fact, this will barely be a technical post. That's because, fundamentally, most of the problems with email encryption aren't hyper-technical problems. They're still baked into the cake.

Background: PGP

Back in the late 1980s a few visionaries realized that this new 'e-mail' thing was awfully convenient and would likely be the future -- but that Internet mail protocols made virtually no effort to protect the content of transmitted messages. In those days (and still in these days) email transited the Internet in cleartext, often coming to rest in poorly-secured mailspools.

This inspired folks like Phil Zimmermann to create tools to deal with the problem. Zimmermann's PGP was a revolution. It gave users access to efficient public-key cryptography and fast symmetric ciphers in a package you could install on a standard PC. Even better, PGP was compatible with legacy email systems: it would convert your ciphertext into a convenient ASCII armored format that could be easily pasted into the sophisticated email clients of the day -- things like "mail", "pine" or "the Compuserve e-mail client".

It's hard to explain what a big deal PGP was. Sure, it sucked badly to use. But in those days, everything sucked badly to use. Possession of a PGP key was a badge of technical merit. Folks held key signing parties. If you were a geek and wanted to discreetly share this fact with other geeks, there was no better time to be alive.

We've come a long way since the 1990s, but PGP mostly hasn't. While the protocol has evolved technically -- IDEA replaced BassOMatic, and was in turn replaced by better ciphers -- the fundamental concepts of PGP remain depressingly similar to what Zimmermann offered us in 1991. This has become a problem, and sadly one that's difficult to change.

Let's get specific.

PGP keys suck

Before we can communicate via PGP, we first need to exchange keys. PGP makes this downright unpleasant. In some cases, dangerously so.

Part of the problem lies in the nature of PGP public keys themselves. For historical reasons they tend to be large and contain lots of extraneous information, which makes it difficult to print them on a business card or compare them manually. You can write this off to a quirk of older technology, but even modern elliptic curve implementations still produce surprisingly large keys.
Three public keys offering roughly the same security level. From top-left: (1) Base58-encoded Curve25519 public key used in miniLock. (2) OpenPGP 256-bit elliptic curve public key format. (3a) GnuPG 3,072 bit RSA key and (3b) key fingerprint.
Since PGP keys aren't designed for humans, you need to move them electronically. But of course humans still need to verify the authenticity of received keys, as accepting an attacker-provided public key can be catastrophic.

PGP addresses this with a hodgepodge of key servers and public key fingerprints. These components respectively provide (untrustworthy) data transfer and a short token that human beings can manually verify. While in theory this is sound, in practice it adds complexity, which is always the enemy of security.

Now you may think this is purely academic. It's not. It can bite you in the ass.

Imagine, for example, you're a source looking to send secure email to a reporter at the Washington Post. This reporter publishes his fingerprint via Twitter, which means the most obvious (and recommended) approach is to ask your PGP client to retrieve the key by fingerprint from a PGP key server. On the GnuPG command line this can be done as follows:

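For illustration only -- the keyserver and the 40-character fingerprint below are made-up placeholders, not the reporter's real details:

gpg --keyserver pgp.mit.edu --recv-keys 1234567890ABCDEF1234567890ABCDEF12345678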

Now let's ignore the fact that you've just leaked your key request to an untrusted server via HTTP. At the end of this process you should have the right key with high reliability. Right?

Except maybe not: if you happen to do this with GnuPG 2.0.18 -- one version off from the very latest GnuPG -- the client won't actually bother to check the fingerprint of the received key. A malicious server (or HTTP attacker) can ship you back the wrong key and you'll get no warning. This is fixed in the very latest versions of GPG but... Oy Vey.

PGP Key IDs are also pretty terrible, due to the short length and continued support for the broken V3 key format.
You can say that it's unfair to pick on all of PGP over an implementation flaw in GnuPG, but I would argue it speaks to a fundamental issue with the PGP design. PGP assumes keys are too big and complicated to be managed by mortals, but then in practice it practically begs users to handle them anyway. This means we manage them through a layer of machinery, and it happens that our machinery is far from infallible.

Which raises the question: why are we bothering with all this crap infrastructure in the first place? If we must exchange things via Twitter, why not simply exchange keys? Modern EC public keys are tiny. You could easily fit three or four of them in the space of this paragraph. If we must use an infrastructure layer, let's just use it to shunt all the key metadata around.

PGP key management sucks

If you can't trust Phil, who can you trust?
Manual key management is a mug's game. Transparent (or at least translucent) key management is the hallmark of every successful end-to-end secure encryption system. Now often this does involve some tradeoffs -- e.g., the need to trust a central authority to distribute keys -- but even this level of security would be lightyears better than the current situation with webmail.

To their credit, both Google and Yahoo have the opportunity to build their own key management solutions (at least, for those who trust Google and Yahoo), and they may still do so in the future. But today's solutions don't offer any of this, and it's not clear when they will. Key management, not pretty web interfaces, is the real weakness holding back widespread secure email.

ZRTP authentication string as used in Signal.
For the record, classic PGP does have a solution to the problem. It's called the "web of trust", and it involves individuals signing each others' keys. I refuse to go into the problems with WoT because, frankly, life is too short. The TL;DR is that 'trust' means different things to you than it does to me. Most OpenPGP implementations do a lousy job of presenting any of this data to their users anyway.

The lack of transparent key management in PGP isn't unfixable. For those who don't trust Google or Yahoo, there are experimental systems like Keybase.io that attempt to tie keys to user identities. In theory we could even exchange our offline encryption keys through voice-authenticated channels using apps like OpenWhisperSystems' Signal. So far, nobody's bothered to do this -- all of these modern encryption tools are islands with no connection to the mainland. Connecting them together represents one of the real challenges facing widespread encrypted communications.

No forward secrecy

Try something: go delete some mail from your Gmail account. You've hit the archive button. Presumably you've also permanently wiped your Deleted Items folder. Now make sure you wipe your browser cache and the mailbox files for any IMAP clients you might be running (e.g., on your phone). Do any of your devices use SSD drives? Probably a safe bet to securely wipe those devices entirely. And at the end of this Google may still have a copy which could be vulnerable to law enforcement request or civil subpoena.

(Let's not get into the NSA's collect-it-all policy for encrypted messages. If the NSA is your adversary just forget about PGP.)

Forward secrecy (usually misnamed "perfect forward secrecy") ensures that if you can't destroy the ciphertexts, you can at least dispose of keys when you're done with them. Many online messaging systems like off-the-record messaging use PFS by default, essentially deriving a new key with each message volley sent. Newer 'ratcheting' systems like Trevor Perrin's Axolotl (used by TextSecure) have also begun to address the offline case.

Adding forward secrecy to asynchronous offline email is a much bigger challenge, but fundamentally it's at least possible to some degree. While securing the initial 'introduction' message between two participants may be challenging*, each subsequent reply can carry a new ephemeral key to be used in future communications. However this requires breaking changes to the PGP protocol and to clients -- changes that aren't likely to happen in a world where webmail providers have doubled down on the PGP model.

The OpenPGP format and defaults suck

If Will Smith looked like this when your cryptography was current, you need better cryptography.
Poking through a modern OpenPGP implementation is like visiting a museum of 1990s crypto. For legacy compatibility reasons, many clients use old ciphers like CAST5 (a cipher that predates the AES competition). RSA encryption uses padding that looks disturbingly like PKCS#1v1.5 -- a format that's been relentlessly exploited in the past. Key size defaults don't reach the 128-bit security level. MACs are optional. Compression is often on by default. Elliptic curve crypto is (still!) barely supported.

Most of these issues are not exploitable unless you use PGP in a non-standard way, e.g., for instant messaging or online applications. And some people do use PGP this way.

But even if you're using PGP just to send one-off emails to your grandmother, these bad defaults are pointless and unnecessary. It's one thing to provide optional backwards compatibility for that one friend who runs PGP on his Amiga. But few of my contacts do -- and moreover, client versions are clearly indicated in public keys.** Even if these archaic ciphers and formats aren't exploitable today, the current trajectory guarantees we'll still be using them a decade from now. Then all bets are off.

On the bright side, both Google and Yahoo seem to be pushing towards modern implementations that break compatibility with the old. Which raises a different question. If you're going to break compatibility with most PGP implementations, why bother with PGP at all?

Terrible mail client implementations

This is by far the worst aspect of the PGP ecosystem, and also the one I'd like to spend the least time on. In part this is because UX isn't technically PGP's problem; in part because the experience is inconsistent between implementations, and in part because it's inconsistent between users: one person's 'usable' is another person's technical nightmare.

But for what it's worth, many PGP-enabled mail clients make it ridiculously easy to send confidential messages with encryption turned off, to send unimportant messages with encryption turned on, to accidentally send to the wrong person's key (or the wrong subkey within a given person's key). They demand you encrypt your key with a passphrase, but routinely bug you to enter that passphrase in order to sign outgoing mail -- exposing your decryption keys in memory even when you're not reading secure email.

Most of these problems stem from the fact that PGP was designed to retain compatibility with standard (non-encrypted) email. If there's one lesson from the past ten years, it's that people are comfortable moving past email. We now use purpose-built messaging systems on a day-to-day basis. The startup cost of a secure-by-default environment is, at this point, basically an app store download.

Incidentally, the new Google/Yahoo web-based end-to-end clients dodge this problem by providing essentially no user interface at all. You enter your message into a separate box, and then plop the resulting encrypted data into the Compose box. This avoids many of the nastier interface problems, but only by making encryption non-transparent. This may change; it's too soon to know how.

So what should we be doing?

Quite a lot actually. The path to a proper encrypted email system isn't that far off. At minimum, any real solution needs:
  • A proper approach to key management. This could be anything from centralized key management as in Apple's iMessage -- which would still be better than nothing -- to a decentralized (but still usable) approach like the one offered by Signal or OTR. Whatever the solution, in order to achieve mass deployment, keys need to be made much more manageable or else submerged from the user altogether.
  • Forward secrecy baked into the protocol. This should be a pre-condition to any secure messaging system. 
  • Cryptography that post-dates the Fresh Prince. Enough said.
  • Screw backwards compatibility. Securing both encrypted and unencrypted email is too hard. We need dedicated networks that handle this from the start.
A number of projects are already going in this direction. Aside from the above-mentioned projects like Axolotl and TextSecure -- which pretend to be text messaging systems, but are really email in disguise -- projects like Mailpile are trying to re-architect the client interface (though they're sticking with the PGP paradigm). Projects like SMIMP are trying to attack this at the protocol level.*** At least in theory projects like DarkMail are also trying to adapt text messaging protocols to the email case, though details remain few and far between.

It also bears noting that many of the issues above could, in principle at least, be addressed within the confines of the OpenPGP format. Indeed, if you view 'PGP' to mean nothing more than the OpenPGP transport, a lot of the above seems easy to fix -- with the exception of forward secrecy, which really does seem hard to add without some serious hacks. But in practice, this is rarely all that people mean when they implement 'PGP'. 

Conclusion

I realize I sound a bit cranky about this stuff. But as they say: a PGP critic is just a PGP user who's actually used the software for a while. At this point there's so much potential in this area and so many opportunities to do better. It's time for us to adopt those ideas and stop looking backwards.

Notes:

* Forward security even for introduction messages can be implemented, though it either requires additional offline key distribution (e.g., TextSecure's 'pre-keys') or else the use of advanced primitives. For the purposes of a better PGP, just handling the second message in a conversation would be sufficient.

** Most PGP keys indicate the precise version of the client that generated them (which seems like a dumb thing to do). However if you want to add metadata to your key that indicates which ciphers you prefer, you have to use an optional command.

*** Thanks to Taylor Hornby for reminding me of this.

by Matthew Green (noreply@blogger.com) at April 04, 2016 08:18 PM

April 01, 2016

Anton Chuvakin - Security Warrior

Monthly Blog Round-Up – March 2016

Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month:
  1. “Why No Open Source SIEM, EVER?” contains some of my SIEM thinking from 2009. Is it relevant now? Well, you be the judge. Succeeding with SIEM requires a lot of work, whether you paid for the software, or not. BTW, this post has an amazing “staying power” that is hard to explain – I suspect it has to do with people wanting “free stuff” and googling for “open source SIEM” … [230 pageviews]
  2. “Simple Log Review Checklist Released!” is often at the top of this list – this aging checklist is still a very useful tool for many people. “On Free Log Management Tools” is a companion to the checklist (updated version) [112 pageviews]
  3. “New SIEM Whitepaper on Use Cases In-Depth OUT!” (dated 2010) presents a whitepaper on select SIEM use cases described in depth with rules and reports [using now-defunct SIEM product]; also see this SIEM use case in depth and this for a more current list of popular SIEM use cases. Finally, see our new 2016 research on security monitoring use cases here! [92 pageviews]
  4. My classic PCI DSS Log Review series is always popular! The series of 18 posts covers a comprehensive log review approach (OK for PCI DSS 3+ as well), useful for building log review processes and procedures, whether regulatory or not. It is also described in more detail in our Log Management book and mentioned in our PCI book (out in its 4th edition!) [84+ pageviews to the main tag]
  5. “Top 10 Criteria for a SIEM?” came from one of the last projects I did while running my SIEM consulting firm in 2009-2011 (for my recent work on evaluating SIEM, see this document [2015 update]) [78 pageviews of total 4113 pageviews to all blog pages]
In addition, I’d like to draw your attention to a few recent posts from my Gartner blog [which, BTW, now has about 3X the traffic of this blog]: 
 
Current research on IR:
Current research on EDR:
Miscellaneous fun posts:

(see all my published Gartner research here)
Also see my past monthly and annual “Top Popular Blog Posts” – 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015.
Disclaimer: most content at SecurityWarrior blog was written before I joined Gartner on Aug 1, 2011 and is solely my personal view at the time of writing. For my current security blogging, go here.

Previous post in this endless series:

by Anton Chuvakin (anton@chuvakin.org) at April 01, 2016 04:02 PM

March 31, 2016

LZone - Sysadmin

Gerrit Howto Remove Changes

Sometimes you want to delete a Gerrit change to make it invisible to everyone (for example when you committed an unencrypted secret...). AFAIK this is only possible via the SQL interface, which you can enter with
ssh -p 29418 <gerrit host> gerrit gsql
and mark the change as deleted with:
update changes set status='d' where change_id='<change id>';
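Depending on the Gerrit version, the web UI may keep serving the change from its caches for a while; flushing them afterwards could help. This extra step is my own addition and not part of the original note:
ssh -p 29418 <gerrit host> gerrit flush-caches --all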
For more Gerrit hints check out the Gerrit Cheat Sheet

March 31, 2016 10:15 PM

syslog.me

An init system in a Docker container

Here’s another quick post about docker, sorry again if it comes out a bit raw.

In my previous post I talked about my first experiments with docker. There were a number of unanswered questions at first, which got answers through updates to the blog post during the following days. All but one. When talking about a containerized process that needs to log through syslog to an external server, the post concluded:

if the dockerized process itself needs to communicate with a syslog service “on board”, this may not be enough…

Let’s restate the problem to make it clear:

  • the logs from a docker container are available either through the docker logs command or through logging drivers;
  • the log of a docker container consists of the output of the containerized process to the usual filehandles; if the process has no output, no logs are exported; if the process logs to file, those files must be collected either copying them out of the container or through an external volume/filesystem, otherwise they will be lost;
  • if we want a containerized process to send its logs through syslog and the process expects to have a syslog daemon on board the same host (in this case, the same container), you are in trouble.

Well, OK, maybe “in trouble” is a bit too much. Then let’s say that running more than one process in the same container requires a bit more work than the “one container, one process” case. The case of putting more than one process in a container is actually so common that it’s even included in the official documentation. That’s probably why supervisor seems to be so common among docker users.

Since the solution is explained in the documentation I could well have used it, but I was more intrigued by understanding what a real init system looks like in a container. So rather than just studying and slavishly applying the supervisor approach, I decided to research how to run systemd inside a docker container. It turned out it may not be easy or super-safe, but it’s definitely possible. This is what I did:
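
The exact recipe is in the Dockerfiles linked further down; as a rough sketch of the idea (the package list, the ENV setting and the CMD below are my assumptions, not the author's exact files), a systemd-capable Debian jessie image can be built along these lines:

# Sketch of a systemd-in-a-container image; adjust packages to taste
FROM debian:jessie
ENV container docker
RUN apt-get update && \
    apt-get install -y systemd systemd-sysv syslog-ng && \
    apt-get clean
# systemd wants the host's cgroup hierarchy, mounted read-only at run time
VOLUME [ "/sys/fs/cgroup" ]
# run systemd itself as PID 1
CMD [ "/sbin/init" ]

The container is then started with the extra capability and the cgroup bind mount shown in the docker run command below.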

And that worked! According to the logs I actually have cf-serverd and syslog-ng running in the container. For example:

root@tyrrell:/home/bronto# docker run -ti --rm -P --cap-add=SYS_ADMIN -v /sys/fs/cgroup:/sys/fs/cgroup:ro --log-driver=syslog --log-opt tag="poc-systemd" bronto/poc-systemd
systemd 215 running in system mode. (+PAM +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ -SECCOMP -APPARMOR)
Detected virtualization 'other'.
Detected architecture 'x86-64'.

Welcome to Debian GNU/Linux 8 (jessie)!

Set hostname to <ffb5129613a3>.
[ OK ] Reached target Paths.
[ OK ] Created slice Root Slice.
[ OK ] Listening on Delayed Shutdown Socket.
[ OK ] Listening on Journal Socket (/dev/log).
[ OK ] Listening on Journal Socket.
[ OK ] Created slice System Slice.
[ OK ] Listening on Syslog Socket.
[ OK ] Reached target Sockets.
[ OK ] Reached target Slices.
[ OK ] Reached target Swap.
[ OK ] Reached target Local File Systems.
 Starting Create Volatile Files and Directories...
 Starting Journal Service...
[ OK ] Started Journal Service.
[ OK ] Started Create Volatile Files and Directories.
[ OK ] Reached target System Initialization.
[ OK ] Reached target Timers.
[ OK ] Reached target Basic System.
 Starting CFEngine 3 deamons...
 Starting Regular background program processing daemon...
[ OK ] Started Regular background program processing daemon.
 Starting /etc/rc.local Compatibility...
 Starting System Logger Daemon...
 Starting Cleanup of Temporary Directories...
[ OK ] Started /etc/rc.local Compatibility.
[ OK ] Started Cleanup of Temporary Directories.
[ OK ] Started System Logger Daemon.
[ OK ] Started CFEngine 3 deamons.
[ OK ] Reached target Multi-User System.

“I want to try that, too! Can you share your dockerfiles?”

Sure thing, just head out to GitHub and enjoy!

Things to be sorted out

The systemd container doesn’t stop properly. For some reason, when you run docker stop, docker sends a signal which the container defiantly ignores; docker then waits 10 seconds and kills it. Dunno why, but hey! I’ve only been using this contraption for a few days!!!
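
My guess (an assumption of mine, not something verified in the post): systemd running as PID 1 ignores SIGTERM, which is what docker stop sends, and only shuts down cleanly on SIGRTMIN+3. If that is what's happening, sending that signal explicitly might stop the container gracefully:

docker kill -s 37 <container>   # 37 is SIGRTMIN+3 on a typical Linux/glibc system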

References


Tagged: cfengine, Debian, DevOps, docker, Github, Sysadmin, systemd

by bronto at March 31, 2016 03:24 PM

March 30, 2016

Sarah Allen

lessons learned since 1908

I was the only grand-daughter of Frances Clara Kiefer Bragg, who passed away on March 11, just 15 days before her 108th birthday. She was a loving, yet formidable force in the world. As children, we would often visit her in her home in Cambridge when we lived near Boston, and when we lived in other countries, she would visit us for a few weeks or months. Small lessons were embedded in our interactions, and there was always a proper way to do things. We needed to know how to conduct ourselves in society: when you set the table, the knife points toward the plate, always hang paintings at eye-level. There was a right way to make borscht and chicken soup. I learned sewing, cooking, and appreciation of fine art from my grandmother. She told funny stories of her travels and family history, sprinkled with anecdotes of poets, writers and presidents.

As a teenager, I rebelled and stopped doing all of the things I was told to do. I figured maybe one didn’t always need to write a thank you letter – sometimes a phone call would be even better. And also, as a teenager sometimes I just didn’t feel like doing what was right and proper.

Now, reflecting on those early lessons learned from my grandmother, I wonder if the impression stuck with me that whatever my grandmother did was the appropriate and proper thing to do in society. I followed her lead in other ways. It felt appropriate and proper for a young woman to travel alone in Europe, and that one must speak one’s mind, as long as it is done politely.

Somehow it all seemed normal that my Grandma visited my cousin in India and rode on the back of his scooter. I couldn’t find that photo, but I found this one, I think from that same trip…

[Photo: Fran in India]

Fran spoke French, German and Russian quite fluently at times, along with quite a bit of Spanish and Italian, and random phrases in half a dozen other languages. She lived in the real world of typewriters and tea. She could remember when the ice-box was kept cold with real blocks of ice. She told stories of when her family got a telephone, and when her father proudly drove their first car.

In the 90s, I tried to get her connected to the internet, telling her that she could send email in an instant for $20 per month. She asked, “why would I want to do that, when sending a letter costs just 32 cents? I don’t send that many letters every month.”

While we lived on this earth in some of the same decades, during these years of my adult life, we inhabited quite different worlds. From my world of software and silicon, I recall that people used to say of Steve Jobs that he created a reality distortion field, where people would believe his vision of the future — a future which would otherwise be unrealistic, yet by believing in it, they helped him create it.

I believe Fran had that power to create her own reality distortion field where people are witty, and heroes emerge from mundane events. Memories are versed in rhyme, silliness is interwoven in adult conversation, and children are ambassadors of wisdom. Memories of my grandmother are splashed in vivid watercolor or whimsical strokes of ink. Postcards can share a moment. Words have layers of meaning, which evaporate upon inspection.

[Photo: Sarah and Fran]

The post lessons learned since 1908 appeared first on the evolving ultrasaurus.

by sarah at March 30, 2016 01:21 AM

March 28, 2016

Carl Chenet

On the danger of a non-community actor in your Free Software production chain

Also follow me on Diaspora* or Twitter

The recent affair now known as the npmgate (see below if this event doesn't ring a bell) is, in my view, yet another illustration of the intrinsic danger of using a product owned and/or controlled by a non-community actor in your Free Software production chain. I have already tried several times to warn my fellow Free Software colleagues about this danger, in particular regarding Github in my blog post Le danger Github.

The npmgate affair

As a reminder about this particular npmgate affair: the American company Kik.com asked npm Inc., also an American company, which runs the npmjs.com site and the infrastructure behind npm, the automated installer of Node.js modules, to rename a module named kik written by Azer Koçulu, a prolific author in that community. He also happens to be the author of an 11-line left-pad function massively used as a dependency in a very large number of Node.js programs. He had refused kik.com's request.


That company, which for obscure copyright reasons seemed ready to do anything to enforce its stupid demand (it uses Free Software itself, by the way, namely at least jquery, Python, Packer and Jenkins according to its blog, thereby showing its deep gratitude towards the community), therefore turned to npm Inc. which, frightened by a possible copyright violation, complied, surely to avoid any legal conflict.

Azer Koçulu, deeply annoyed at seeing his own copyright and his wishes trampled on, then decided to withdraw all of his modules from publication on npmjs.com, including the widely used left-pad. Although npmjs.com apparently tried to publish a module with the same name, the interplay of dependencies and versions triggered a chain reaction that broke the builds of a great many projects.

Sourceforge, Github, npmjs.com and… who's next?

Sourceforge, facing an increased need for profitability and the desertion of its users in favor, one imagines, of Github, started to disregard the essential prerequisites of a software forge that respects its users.


I presented the dangers of using Github in a software production chain fairly exhaustively in my blog post Le danger Github. In short: centralization. With many automated installers now fetching the various components directly from Github, if Github ever removes a dependency, a chain reaction equivalent to the npmgate will follow. And it is only a matter of time before that happens.

Other risks include the use of proprietary software such as Github's bug tracker, Github Issues, the closed nature and limited portability of the data they host, and the more insidious but increasingly real effect of a kind of community laziness, unprecedented in our community, when it comes to looking for free alternatives to existing, hegemonic proprietary products, a feeling also shared by others.


The danger of the non-community actor

Given the massive growth of interdependencies between projects and the significant development of Free Software, now joined by yesterday's main proprietary software players such as Microsoft or Apple, who today adopt our practices and try to co-opt them for their own benefit, it is more dangerous than ever to rely on a non-community actor, generally a company, in the chain that produces our software.


It is an attractive model in many respects. Easy, appealing, usually not very participatory (the goal is for you to be a happy user), often with a level of service well above what a fledgling community project can provide: a company investing in a new project, often backed by heavy marketing, will at first look like a good solution.

Unfortunately, things turn out very differently in the medium or long term, as the examples above show. Investing in the Free Software community as both a user and a creator of code ensures a solution that satisfies all parties and, above all, one that lasts over the long term. That is what each of us is looking for, since nobody can really predict the success or lifespan that a new software project will have.


by Carl Chenet at March 28, 2016 09:00 PM

March 26, 2016

Slaptijack

OpenSSH: Using a Bastion Host

Quick and dirty OpenSSH configlet here. If you have a set of hosts or devices that require you to first jump through a bastion host, the following will allow you to run a single ssh command:

Host *
    ProxyCommand ssh -A <bastion_host> nc %h %p

...
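
To make the excerpt above a bit more concrete, here is a hedged sketch of a complete ~/.ssh/config (the host names, the user and the Host pattern are placeholders of mine, not from the original post):

# The bastion itself is reached directly
Host bastion.example.com
    User alice

# Everything in the internal domain is proxied through the bastion
Host *.internal.example.com
    ProxyCommand ssh -A bastion.example.com nc %h %p

With that in place, a plain ssh db01.internal.example.com transparently hops through the bastion.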

by Scott Hebert at March 26, 2016 01:00 PM

March 25, 2016

syslog.me

A dockerized policy hub

This is a quick post, apologies in advance if it comes out a bit raw.

I’ve been reading about docker for a while and even attended the day of docker in Oslo. I decided it was about time to try something myself to get a better understanding of the technology and if it could be something useful for my use cases.

As always, I despise the “hello world” style examples so I leaned immediately towards something closer to a real case: how hard would it be to make CFEngine’s policy hub a docker service? After all it’s just one process (cf-serverd) with all its data (the files in /var/cfengine/masterfiles) which looks like a perfect fit, at least for a realistic test. I went through the relevant parts of the documentation (see “References” below) and I’d say that it pretty much worked and, where it didn’t, I got an understanding of why and how that should be fixed.

Oh, by the way, a run of docker search cfengine will tell you that I’m not the only one to have played with this😉

Building the image

To build the image I created the following directory structure:

cf-serverd
|-- Dockerfile
`-- etc
    `-- apt
        |-- sources.list.d
        |   `-- cfengine-community.list
        `-- trusted.gpg.d
            `-- cfengine.gpg

The cfengine-community.list and the cfengine.gpg files were created according to the instructions on the page Configuring CFEngine Linux Package Repositories. The Dockerfile is as follows:

FROM debian:jessie
MAINTAINER Marco Marongiu
LABEL os.distributor="Debian" os.codename="jessie" os.release="8.3" cfengine.release="3.7.2"
RUN apt-get update ; apt-get install -y apt-transport-https
ADD etc/apt/trusted.gpg.d/cfengine.gpg /tmp/cfengine.gpg
RUN apt-key add /tmp/cfengine.gpg
ADD etc/apt/sources.list.d/cfengine-community.list /etc/apt/sources.list.d/cfengine-community.list
RUN apt-get update && apt-get install -y cfengine-community=3.7.2-1
ENTRYPOINT [ "/var/cfengine/bin/cf-serverd" ]
CMD [ "-F" ]
EXPOSE 5308

With this Dockerfile, running docker build -t bronto/cf-serverd:3.7.2 . in the build directory will create an image bronto/cf-serverd based on the official Debian Jessie image:

  • we update the APT sources and install apt-transport-https;
  • we copy the APT key into the container and run apt-key add on it (I tried copying it in /etc/apt/trusted.gpg.d directly but it didn’t work for some reason);
  • we add the APT source for CFEngine
  • we update the APT sources once again and install CFEngine version 3.7.2 (if you don’t specify the version, it will install the highest version available, 3.8.1 at the time of writing);
  • we run cf-serverd and we expose the TCP port 5308, the port used by cf-serverd.

Thus, running docker run -d -P bronto/cf-serverd will create a container that exposes cf-serverd from the host on a random port.
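
To find out which host port was picked you can ask docker, where <container> is the name or ID reported by docker run (the placeholder is mine):

docker port <container> 5308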

Update: having defined an entry point in the image, you may have trouble running a shell in the container to take a look around. There is a solution, and it is simple indeed:

docker run -ti --entrypoint /bin/bash bronto/cf-serverd:3.7.2

What you should add

[Update: in the first edition of this post this section was a bit different; some of the things I had doubts about are now solved and the section has been mostly rewritten]

The cf-serverd image (and generated containers that go with it) is functional but doesn’t do anything useful, for two reasons:

  1. it doesn’t have a file /var/cfengine/inputs/promises.cf, so cf-serverd doesn’t have a proper configuration
  2. it doesn’t have anything but the default masterfiles in /var/cfengine/masterfiles.

Both of those (a configuration for cf-serverd and a full set of masterfiles) should be added in images built upon this one, so that the new image is fully configured for your specific needs and environment. Besides, you should ensure that /var/cfengine/ppkeys is persistent, e.g. by mounting it in the container from the docker host, so that neither the “hub’s” nor clients’ public keys are lost when you replace the container with a new one. For example, you could have the keys on the docker host at /var/local/cf-serverd/ppkeys, and mount that on the container’s volume via docker run’s -v option:

docker run -p 5308:5308 -d -v/var/local/cf-serverd/ppkeys:/var/cfengine/ppkeys bronto/cf-serverd:3.7.2

A possible deployment workflow

What’s normally done in a standard CFEngine set-up is that policy updates are deployed on the policy hub in /var/cfengine/masterfiles and the hub distributes the updates to the clients. In a dockerized scenario, one would probably build a new image based on the cf-serverd image, adding an updated set of masterfiles and a configuration for cf-serverd, and then instantiate a container from the new image. You would do that every time the policy is updated.
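
As a hedged sketch of what such a derived image could look like (the directory layout and the image tag are assumptions of mine, not taken from the post):

# Hypothetical child image that bakes site policy and hub configuration into the hub
FROM bronto/cf-serverd:3.7.2
# the site's policy set, kept under version control next to this Dockerfile
ADD masterfiles /var/cfengine/masterfiles
# promises.cf and friends, so cf-serverd has a proper configuration
ADD inputs /var/cfengine/inputs

Building it with something like docker build -t bronto/cf-serverd-site:20160325 . and replacing the running container would then roll out the new policy.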

Things to be sorted out

[Update: This section, too, went through a major rewriting after I’ve learnt more about the solutions.]

Is the source IP of a client rewritten?

Problem statement: I don’t know enough about Docker’s networking so I don’t know if the source IP of the clients is rewritten. In that case, cf-serverd’s ACLs would be completely ineffective and a proper firewall should be established on the host.

Solution: the address is rewritten if the connection is made through localhost (the source address appears as the host address on the docker bridge); if the connection is made through a “standard” interface, the address is not rewritten — which is good from the perspective of cf-serverd as that allows it to apply the standard ACLs

Can logs be sent to syslog?

Problem statement: How can one send the logs of a containerized process to a syslog server?

Solution: a docker container can, by itself, send logs to a syslog server, as well as to other logging platforms and destinations, through logging drivers. In the case of our cf-serverd container, the following  command line sends the logs to a standard syslog service on the docker host:
docker run -P -d --log-driver=syslog --log-opt tag="cf-serverd" -v /var/cfengine/ppkeys:/var/cfengine/ppkeys bronto/cf-serverd:3.7.2
This produces log lines like, for example:

Mar 29 15:10:57 tyrrell docker/cf-serverd[728]: notice: Server is starting...

Notice how the tag is reflected in the process name: without that tag, you’d get the short ID of the container, that is something like this:

Mar 29 11:15:27 tyrrell docker/a4f332cfb5e9[728]: notice: Server is starting...

You can also send logs directly to journald on systemd-based systems. You gain some flexibility in that you can filter logs via metadata. However, to my knowledge, it’s not possible to get a clear log message like the first one above that clearly states you have cf-serverd running in docker: with journald, all you get when running journalctl --follow _SYSTEMD_UNIT=docker.service is something like:

Mar 29 15:22:02 tyrrell docker[728]: notice: Server is starting...

However, if you started the service in a named container (e.g. with the --name=cf-serverd option) you can still enjoy some cool stuff, e.g. you can see the logs from just that container by running

journalctl --follow CONTAINER_NAME=cf-serverd

Actually, if you ran several containers with the same name at different times, journalctl will return the logs from all of them, which is a nice feature to me. But if all you want is the logs from, say, the currently running container named cf-serverd, you can still filter by CONTAINER_ID. Not bad.
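
For example, reusing the short container ID shown in the syslog example earlier (this assumes the journald logging driver is in use):

journalctl --follow CONTAINER_ID=a4f332cfb5e9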

However, if the dockerized process itself needs to communicate with a syslog service “on board”, this may not be enough…

References

 


Tagged: cfengine, Configuration management, Debian, DevOps, docker, Sysadmin

by bronto at March 25, 2016 03:49 PM

March 22, 2016

R.I.Pienaar

A Puppet 4 Hiera Based Node Classifier

When I first wrote Hiera I included a simple little hack called hiera_include() that would do an Array lookup and include everything it found. I only included it because include at the time did not take Array arguments. In time this has become quite widely used and many people do their node classification using just this and the built-in hierarchical nature of Hiera.

I’ve always wanted to do better though, like maybe write an actual ENC that uses Hiera data keys on the provided certname? It seemed like the only real win would be being able to set the node environment from Hiera; I guess this might be valuable enough on its own.

Anyway, I think the ENC interface is really pretty bad and should be replaced by something better. So I’ve had the idea of a Hiera based classifier in my mind for years.

Some time ago Ben Ford made an interesting little hack project that used a set of rules to classify nodes, and this stuck in my mind as quite an interesting approach. I guess it’s a bit like the new PE node classifier.

Anyway, so I took this as a starting point and started working on a Hiera based classifier for Puppet 4 – and by that I mean the very very latest Puppet 4: it uses a bunch of the things I blogged about recently, and the end result is that the module is almost entirely built using the native Puppet 4 DSL.

Simple list-of-classes based Classification


So first lets take a look at how this replaces/improves on the old hiera_include().

Not really much to be done I am afraid, it’s an array with some entries in it. It now uses the Knockout Prefix features of Puppet Lookup that I blogged about before to allow you to exclude classes from nodes:

So we want to include the sysadmins and sensu classes on all nodes, stick this in your common tier:

# common.yaml
classifier::extra_classes:
 - sysadmins
 - sensu

Then you have some nodes that need some more classes:

# clients/acme.yaml
classifier::extra_classes:
 - acme_sysadmins

At this point it’s basically same old same old, but lets see if we had some node that needed Nagios and not Sensu:

# nodes/example.net.yaml
classifier::extra_classes:
 - --sensu
 - nagios

Here we use the "--" knockout prefix to remove the sensu class and add the nagios one instead. That’s already a big win over the old hiera_include(), but to be fair this is just a result of the new Lookup features.

It really gets interesting later when you throw in some rules.

Rule Based Classification


The classifier is built around a set of Classifications, and these are made up of one or many rules per Classification which, if they match on a host, mean the classification applies to the node. The classifications can include classes and create data.

Here’s a sample rule where I want to do some extra special handling of RedHat like machines. But I want to handle VMs different from Physical machines.

# common.yaml
classifier::rules:
  RedHat VMs:
    match: all
    rules:
      - fact: "%{facts.os.family}"
        operator: ==
        value: RedHat
      - fact: "%{facts.is_virtual}"
        operator: ==
        value: "true"
    data:
      redhat_vm: true
    classes:
      - centos::vm
 
  RedHat:
    rules:
      - fact: "%{facts.os.family}"
        operator: ==
        value: RedHat
    data:
      redhat_os: true
    classes:
      - centos::common

This shows 2 Classifications, one called “RedHat VMs” and one just “RedHat”. You can see the VMs one contains 2 rules and sets match: all, so both have to match.

End result here is that all RedHat machines get centos::common and RedHat VMs also get centos::vm. Additionally 2 pieces of data will be created, a bit redundant in this example but you get the idea.

Using the Classifier


So using the classifier in the basic sense is just like hiera_include():

node default {
  include classifier
}

This will process all the rules and include the resulting classes. It will also expose a bunch of information via this class; the most interesting is $classifier::data, which is a Hash of all the data that the rules emit. But you can also access the included classes via $classifier::classes and even the whole post-processed classification structure in $classifier::classification. Some others are mentioned in the README.
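
As a rough illustration of how that data can be consumed (the redhat_vm key comes from the earlier RedHat VMs example; the notify resource is just mine, not something the module ships):

node default {
  include classifier

  # $classifier::data is a Hash built from the data sections of matching rules;
  # a key that no rule emitted simply evaluates to undef, so the if is safe
  if $classifier::data['redhat_vm'] {
    notify { 'this node was classified as a RedHat VM': }
  }
}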

You can do very impressive Hiera based overrides, here’s an example of adjusting a rule for a particular node:

# clients/acme.yaml
classifier::rules:
  RedHat VMs:
    classes:
      - some::other
    data:
      extra_data: true

This has the result that for this particular client additional data will be produced and additional classes will be included – but only on their RedHat VMs. You can even use the knockout feature here to really adjust the data and classes.

The classes get included automatically for you and if you set classifier::debug you’ll get a bunch of insight into how classification happens.

Hiera Inception


So at this point things are pretty neat, but I wanted to both see how the new Data Provider API look and also see if I can expose my classifier back to Hiera.

Imagine I am making all these classifications but with what I shown above it’s quite limited because it’s just creating data for the $classifier::data hash. What you really want is to create Hiera data and be able to influence Automatic Parameter Lookup.

So a rule like:

# clients/acme.yaml
classifier::rules:
  RedHat:
    data:
      centos::common::selinux: permissive

Here I am taking the earlier RedHat rule and setting centos::common::selinux: permissive, now you want this to be Data that will be used by the Automatic Parameter Lookup system to set the selinux parameter of the centos::common class.

You can configure your Environment with this hiera.yaml

# environments/production/hiera.yaml
---
version: 4
datadir: "hieradata"
hierarchy:
  - name: "%{trusted.certname}"
    backend: "yaml"
 
  - name: "classification data"
    backend: "classifier"
 
  # ... and the rest

Here I allow node-specific YAML files to override the classifier, and then have a new Data Provider called classifier that exposes the classification back to Hiera. Doing it this way is super important: the priority the classifier has on a site is not a single one-size-fits-all choice, and doing it this way means the site admins can decide where in their hierarchy classification sits so it best fits their workflows.

So this is where the inception reference comes in: you extract data from Hiera, process it using the Puppet DSL and expose it back to Hiera. At first thought this is a bit insane, but it works and it’s really nice. Basically this lets you completely redesign Hiera from something that is hierarchical in nature and turn it into a rule based system – or a hybrid.

And you can even test it from the CLI:

% puppet lookup --compile --explain centos::common::selinux
Merge strategy first
  Data Binding "hiera"
    No such key: "centos::common::selinux"
  Data Provider "Hiera Data Provider, version 4"
    ConfigurationPath "environments/production/hiera.yaml"
    Merge strategy first
      Data Provider "%{trusted.certname}"
        Path "environments/production/hieradata/dev2.devco.net.yaml"
          Original path: "%{trusted.certname}"
          No such key: "centos::common::selinux"
      Data Provider "classification data"
        Found key: "centos::common::selinux" value: "permissive"
      Merged result: "permissive"
  Merged result: "permissive"

I hope to expose here which rule provided this data like the other lookup explanations do.

Clearly this feature is a bit crazy, so consider this an exploration of what’s possible rather than a strong endorsement of this kind of thing :)

Implementation


Implementing this has been pretty interesting, I got to use a lot of the new Puppet 4 features. Like I mentioned all the data processing, iteration and deriving of classes and data is done using the native Puppet DSL, take a look at the functions directory for example.

It also makes use of the new Type system and Type Aliases all over the place to create a strong schema for the incoming data that gets validated at all levels of the process. See the types directory.

The new Modules in Data is used to set lookup strategies so that there are no manual calling of lookup(), see the module data.

Writing a Data Provider, i.e. a Hiera Backend for the new lookup system, is pretty nice. I think the APIs there are still maturing, so this is definitely bleeding edge stuff. You can see the bindings and data provider in the lib directory.

As such this module only really has a hope of working on Puppet 4.4.0 or newer, and I expect to use new features as they come along.

Conclusion


There’s a bunch more going on, check the module README. It’s been quite interesting to be able to really completely rethink how Hiera data is created and what a modern take on classification can achieve.

With this approach if you’re really not too keen on the hierarchy you can totally just use this as a rules based Hiera instead, that’s pretty interesting! I wonder what other strategies for creating data could be prototyped like this?

I realise this is very similar to the PE node classifier, but with some additional benefits: being exposed to Hiera via the Data Provider, being something you can commit to git, and being adjustable and overridable using the new Hiera features, I think it will appeal to a different kind of user. But yeah, it’s quite similar. Credit to Ben Ford for his original Ruby based implementation of this idea, which I took and iterated on. Regardless, the ‘like an iTunes smart list’ node classifier isn’t exactly a new idea and has been discussed for literally years :)

You can get the module on the forge as ripienaar/classifier and I’d greatly welcome feedback and ideas.

by R.I. Pienaar at March 22, 2016 04:38 PM

Evaggelos Balaskas

Let's Encrypt on Prosody & enable Forward secrecy

Below is my setup to enable Forward secrecy

Generate DH parameters:

# openssl dhparam -out /etc/pki/tls/dh-2048.pem 2048

and then configure your prosody with Let’s Encrypt certificates

VirtualHost "balaskas.gr"

  ssl = {
      key = "/etc/letsencrypt/live/balaskas.gr/privkey.pem";
      certificate = "/etc/letsencrypt/live/balaskas.gr/fullchain.pem";
      cafile = "/etc/pki/tls/certs/ca-bundle.crt";

      # enable strong encryption
      ciphers="EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4";
      dhparam = "/etc/pki/tls/dh-2048.pem";
    }

if you only want to accept TLS connection from clients and servers, change your settings to these:

c2s_require_encryption = true
s2s_secure_auth = true

Check your setup

XMPP Observatory

or check your certificates with openssl:

Server: # openssl s_client -connect balaskas.gr:5269  -starttls xmpp < /dev/null
Client: # openssl s_client -connect balaskas.gr:5222  -starttls xmpp < /dev/null
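
To confirm that forward secrecy is actually negotiated, look at the Protocol and Cipher lines of the s_client output; an ECDHE or DHE key exchange in the cipher name is what you want. For instance:

# openssl s_client -connect balaskas.gr:5222 -starttls xmpp < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'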

March 22, 2016 11:14 AM

syslog.me

The future of configuration management – mini talks

On March 7th I was at the DevOps Norway meetup where both Jan Ivar Beddari and I presented an extended version of the ignite talks we gave at Config Management Camp in February. The talks were streamed and recorded through Hangouts and the recording is available on YouTube.

The meeting gave me the opportunity to explain in a bit more detail my viewpoint about why we need a completely new design for our configuration management tools. I had already tried in a blog post that caused some amount of controversy, and it was good to get a second chance.

I’d recommend you watch both Jan Ivar’s talk and mine, but if you’re interested only in my part then you can check it out here:

And don’t forget to check out the slides, both Jan Ivar’s and mine.


Tagged: Configuration management, DevOps, Sysadmin

by bronto at March 22, 2016 09:00 AM

[bc-log]

Create self-managing servers with Masterless Saltstack Minions

Over the past two articles I've described building a Continuous Delivery pipeline for my blog (the one you are currently reading). The first article covered packaging the blog into a Docker container and the second covered using Travis CI to build the Docker image and perform automated testing against it.

While the first two articles covered quite a bit of the CD pipeline, there is one piece missing: automating deployment. While there are many infrastructure and application tools for automated deployments, I've chosen to use Saltstack. I've chosen Saltstack for many reasons, but the main one is that it can be used to manage both my host system's configuration and the Docker container for my blog application. Before I can start using Saltstack however, I first need to set it up.

I've covered setting up Saltstack before, but for this article I am planning on setting up Saltstack in a Masterless architecture, a setup that is quite different from the traditional Saltstack configuration.

Masterless Saltstack

A traditional Saltstack architecture is based on a Master and Minion design. With this architecture the Salt Master will push desired states to the Salt Minion. This means that in order for a Salt Minion to apply the desired states it needs to be able to connect to the master, download the desired states and then apply them.

A masterless configuration on the other hand involves only the Salt Minion. With a masterless architecture the Salt state files are stored locally on the Minion, bypassing the need to connect to and download states from a Master. This architecture provides a few benefits over the traditional Master/Minion architecture. The first is removing the need to have a Salt Master server, which helps reduce infrastructure costs; an important consideration, as the environment in question is dedicated to hosting a simple personal blog.

The second benefit is that in a masterless configuration each Salt Minion is independent which makes it very easy to provision new Minions and scale out. The ability to scale out is useful for a blog, as there are times when an article is reposted and traffic suddenly increases. By making my servers self-managing I am able to meet that demand very quickly.

A third benefit is that Masterless Minions have no reliance on a Master server. In a traditional architecture if the Master server is down for any reason the Minions are unable to fetch and apply the Salt states. With a Masterless architecture, the availability of a Master server is not even a question.

Setting up a Masterless Minion

In this article I will walk through how to install and configure Salt in a masterless configuration.

Installing salt-minion

The first step to creating a Masterless Minion is to install the salt-minion package. To do this we will follow the official steps for Ubuntu systems outlined at docs.saltstack.com, which primarily use the Apt package manager to perform the installation.

Importing Saltstack's GPG Key

Before installing the salt-minion package we will first need to import Saltstack's Apt repository key. We can do this with a simple bash one-liner.

# wget -O - https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -
OK

This GPG key will allow Apt to validate packages downloaded from Saltstack's Apt repository.

Adding Saltstack's Apt Repository

With the key imported we can now add Saltstack's Apt repository to our /etc/apt/sources.list file. This file is used by Apt to determine which repositories to check for available packages.

# vi /etc/apt/sources.list

Once you are editing the file, simply append the following line to the bottom.

deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty main

With the repository defined we can now update Apt's repository inventory, a step that is required before we can start installing packages from the new repository.

Updating Apt's cache

To update Apt's repository inventory, we will execute the command apt-get update.

# apt-get update
Ign http://archive.ubuntu.com trusty InRelease                                 
Get:1 http://security.ubuntu.com trusty-security InRelease [65.9 kB]           
Get:2 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]             
Get:3 http://repo.saltstack.com trusty InRelease [2,813 B]                     
Get:4 http://repo.saltstack.com trusty/main amd64 Packages [8,046 B]           
Get:5 http://security.ubuntu.com trusty-security/main Sources [105 kB]         
Hit http://archive.ubuntu.com trusty Release.gpg                              
Ign http://repo.saltstack.com trusty/main Translation-en_US                    
Ign http://repo.saltstack.com trusty/main Translation-en                       
Hit http://archive.ubuntu.com trusty Release                                   
Hit http://archive.ubuntu.com trusty/main Sources                              
Hit http://archive.ubuntu.com trusty/universe Sources                          
Hit http://archive.ubuntu.com trusty/main amd64 Packages                       
Hit http://archive.ubuntu.com trusty/universe amd64 Packages                   
Hit http://archive.ubuntu.com trusty/main Translation-en                       
Hit http://archive.ubuntu.com trusty/universe Translation-en                   
Ign http://archive.ubuntu.com trusty/main Translation-en_US                    
Ign http://archive.ubuntu.com trusty/universe Translation-en_US                
Fetched 3,136 kB in 8s (358 kB/s)                                              
Reading package lists... Done

With the above complete we can now access the packages available within Saltstack's repository.
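
As an optional sanity check (not part of the original walkthrough), we can ask Apt where it would pull the package from; the candidate version should be listed as coming from repo.saltstack.com.

# apt-cache policy salt-minion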

Installing with apt-get

Specifically we can now install the salt-minion package, to do this we will execute the command apt-get install salt-minion.

# apt-get install salt-minion
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
  dctrl-tools libmysqlclient18 libpgm-5.1-0 libzmq3 mysql-common
  python-dateutil python-jinja2 python-mako python-markupsafe python-msgpack
  python-mysqldb python-tornado python-zmq salt-common
Suggested packages:
  debtags python-jinja2-doc python-beaker python-mako-doc
  python-egenix-mxdatetime mysql-server-5.1 mysql-server python-mysqldb-dbg
  python-augeas
The following NEW packages will be installed:
  dctrl-tools libmysqlclient18 libpgm-5.1-0 libzmq3 mysql-common
  python-dateutil python-jinja2 python-mako python-markupsafe python-msgpack
  python-mysqldb python-tornado python-zmq salt-common salt-minion
0 upgraded, 15 newly installed, 0 to remove and 155 not upgraded.
Need to get 4,959 kB of archives.
After this operation, 24.1 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu/ trusty-updates/main mysql-common all 5.5.47-0ubuntu0.14.04.1 [13.5 kB]
Get:2 http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/ trusty/main python-tornado amd64 4.2.1-1 [274 kB]
Get:3 http://archive.ubuntu.com/ubuntu/ trusty-updates/main libmysqlclient18 amd64 5.5.47-0ubuntu0.14.04.1 [597 kB]
Get:4 http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/ trusty/main salt-common all 2015.8.7+ds-1 [3,108 kB]
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
Processing triggers for ureadahead (0.100.0-16) ...

After a successful installation of the salt-minion package we now have a salt-minion instance running with the default configuration.

Configuring the Minion

With a traditional Master/Minion setup, this is the point where we would configure the Minion to connect to the Master server and restart the running service.

For this setup however, we will be skipping the Master server definition. Instead we will need to tell the salt-minion service to look for Salt state files locally. To alter the salt-minion's configuration we can either edit /etc/salt/minion, which is the default configuration file, or add a new file into /etc/salt/minion.d/; this .d directory is used to override default configurations defined in /etc/salt/minion.

My personal preference is to create a new file within the minion.d/ directory, as this keeps the configuration easy to manage. However, there is no right or wrong method, as this is a personal and environmental preference.

For this article we will go ahead and create the following file /etc/salt/minion.d/masterless.conf.

# vi /etc/salt/minion.d/masterless.conf

Within this file we will add two configurations.

file_client: local
file_roots:
  base:
    - /srv/salt/base
  bencane:
    - /srv/salt/bencane

The first configuration item above is file_client. By setting this configuration to local we are telling the salt-minion service to search locally for desired state configurations rather than connecting to a Master.

The second configuration is the file_roots dictionary. This defines the location of Salt state files. In the above example we are defining both /srv/salt/base and /srv/salt/bencane. These two directories will be where we store our Salt state files for this Minion to apply.
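
Each of these file_roots environments is expected to contain a top.sls that maps minions to the states they should apply; the real top files come from the repositories cloned later in this article, but a minimal illustrative example for the base environment could look like this:

# /srv/salt/base/top.sls -- illustrative only, not the file from the repository
base:
  '*':
    - timezone
    - ssh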

Stopping the salt-minion service

While in most cases we would need to restart the salt-minion service to apply the configuration changes, in this case, we actually need to do the opposite; we need to stop the salt-minion service.

# service salt-minion stop
salt-minion stop/waiting

The salt-minion service does not need to be running when set up as a Masterless Minion. This is because the salt-minion service only runs to listen for events from the Master. Since we have no Master, there is no reason to keep this service running. If left running, the salt-minion service will repeatedly try to connect to the defined Master server, which by default is a host that resolves to salt. To remove unnecessary overhead it is best to simply stop this service in a Masterless Minion configuration.
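
If you also want to keep the service from coming back after a reboot (the Salt states later in this article handle that with enable: False), a manual equivalent on an upstart-based Ubuntu 14.04 host would be an override file; this is my own addition, not a step from the original walkthrough:

# echo manual > /etc/init/salt-minion.override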

Populating the desired states

At this point we have a Salt Minion that has been configured to run masterless; however, it has no Salt states to apply yet. In this section we will provide the salt-minion agent two sets of Salt states to apply. The first will be placed into the /srv/salt/base directory. This file_roots directory will contain a base set of Salt states that I have created to manage a basic Docker host.

Deploying the base Salt states

The states in question are available via a public GitHub repository. To deploy these Salt states we can simply clone the repository into the /srv/salt/base directory. Before doing so however, we will need to first create the /srv/salt directory.

# mkdir -p /srv/salt

The /srv/salt directory is Salt's default state directory; it is also the parent directory for both the base and bencane directories we defined within the file_roots configuration. Now that the parent directory exists, we will clone the base repository into this directory using git.

# cd /srv/salt/
# git clone https://github.com/madflojo/salt-base.git base
Cloning into 'base'...
remote: Counting objects: 50, done.
remote: Total 50 (delta 0), reused 0 (delta 0), pack-reused 50
Unpacking objects: 100% (50/50), done.
Checking connectivity... done.

As the salt-base repository is copied into the base directory the Salt states within that repository are now available to the salt-minion agent.

# ls -la /srv/salt/base/
total 84
drwxr-xr-x 18 root root 4096 Feb 28 21:00 .
drwxr-xr-x  3 root root 4096 Feb 28 21:00 ..
drwxr-xr-x  2 root root 4096 Feb 28 21:00 dockerio
drwxr-xr-x  2 root root 4096 Feb 28 21:00 fail2ban
drwxr-xr-x  2 root root 4096 Feb 28 21:00 git
drwxr-xr-x  8 root root 4096 Feb 28 21:00 .git
drwxr-xr-x  3 root root 4096 Feb 28 21:00 groups
drwxr-xr-x  2 root root 4096 Feb 28 21:00 iotop
drwxr-xr-x  2 root root 4096 Feb 28 21:00 iptables
-rw-r--r--  1 root root 1081 Feb 28 21:00 LICENSE
drwxr-xr-x  2 root root 4096 Feb 28 21:00 ntpd
drwxr-xr-x  2 root root 4096 Feb 28 21:00 python-pip
-rw-r--r--  1 root root  106 Feb 28 21:00 README.md
drwxr-xr-x  2 root root 4096 Feb 28 21:00 screen
drwxr-xr-x  2 root root 4096 Feb 28 21:00 ssh
drwxr-xr-x  2 root root 4096 Feb 28 21:00 swap
drwxr-xr-x  2 root root 4096 Feb 28 21:00 sysdig
drwxr-xr-x  3 root root 4096 Feb 28 21:00 sysstat
drwxr-xr-x  2 root root 4096 Feb 28 21:00 timezone
-rw-r--r--  1 root root  208 Feb 28 21:00 top.sls
drwxr-xr-x  2 root root 4096 Feb 28 21:00 wget

From the above directory listing we can see that the base directory has quite a few Salt states. These states are very useful for managing a basic Ubuntu system, as they perform steps ranging from installing Docker (dockerio) to setting the system timezone (timezone). Everything needed to run a basic Docker host is available and defined within these base states.

Applying the base Salt states

Even though the salt-minion agent can now use these Salt states, there is nothing running to tell the salt-minion agent it should do so. Therefore the desired states are not being applied.

To apply our new base states we can use the salt-call command to tell the salt-minion agent to read the Salt states and apply the desired states within them.

# salt-call --local state.highstate

The salt-call command is used to interact with the salt-minion agent from command line. In the above the salt-call command was executed with the state.highstate option.

This tells the agent to look for all defined states and apply them. The salt-call command also included the --local option; this option is specifically used when running a Masterless Minion. This flag tells the salt-minion agent to look through its local state files rather than attempting to pull from a Salt Master.
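
If you'd rather preview what would change before applying anything, state.highstate also accepts test=True for a dry run; this is a general Salt feature rather than a step from the original article, and nothing is modified in test mode:

# salt-call --local state.highstate test=True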

The output below shows the results of the real state.highstate execution; within it we can see the various states being applied successfully.

----------
          ID: GMT
    Function: timezone.system
      Result: True
     Comment: Set timezone GMT
     Started: 21:09:31.515117
    Duration: 126.465 ms
     Changes:   
              ----------
              timezone:
                  GMT
----------
          ID: wget
    Function: pkg.latest
      Result: True
     Comment: Package wget is already up-to-date
     Started: 21:09:31.657403
    Duration: 29.133 ms
     Changes:   

Summary for local
-------------
Succeeded: 26 (changed=17)
Failed:     0
-------------
Total states run:     26

In the above output we can see that all of the defined states were executed successfully. We can validate this further by checking the status of the docker service, which we can see below is now running; before executing salt-call, Docker was not even installed on this system.

# service docker status
docker start/running, process 11994

With a successful salt-call execution our Salt Minion is now officially a Masterless Minion. However, even though our server has Salt installed, and is configured as a Masterless Minion, there are still a few steps we need to take to make this Minion "Self Managing".

Self-Managing Minions

In order for our Minion to be Self-Managed, the Minion server should not only apply the base states above, but also keep the salt-minion service and configuration up to date. To do this, we will be cloning yet another git repository.

Deploying the blog specific Salt states

This repository, however, contains Salt states specific to managing the salt-minion agent, not only for this Minion but for any other Masterless Minion used to host this blog.

# cd /srv/salt
# git clone https://github.com/madflojo/blog-salt bencane
Cloning into 'bencane'...
remote: Counting objects: 25, done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 25 (delta 4), reused 20 (delta 2), pack-reused 0
Unpacking objects: 100% (25/25), done.
Checking connectivity... done.

In the above command we cloned the blog-salt repository into the /srv/salt/bencane directory. Like the /srv/salt/base directory the /srv/salt/bencane directory is also defined within the file_roots that we setup earlier.

Applying the blog specific Salt states

With these new states copied to the /srv/salt/bencane directory, we can once again run the salt-call command to trigger the salt-minion agent to apply these states.

# salt-call --local state.highstate
[INFO    ] Loading fresh modules for state activity
[INFO    ] Fetching file from saltenv 'base', ** skipped ** latest already in cache u'salt://top.sls'
[INFO    ] Fetching file from saltenv 'bencane', ** skipped ** latest already in cache u'salt://top.sls'
----------
          ID: /etc/salt/minion.d/masterless.conf
    Function: file.managed
      Result: True
     Comment: File /etc/salt/minion.d/masterless.conf is in the correct state
     Started: 21:39:00.800568
    Duration: 4.814 ms
     Changes:   
----------
          ID: /etc/cron.d/salt-standalone
    Function: file.managed
      Result: True
     Comment: File /etc/cron.d/salt-standalone updated
     Started: 21:39:00.806065
    Duration: 7.584 ms
     Changes:   
              ----------
              diff:
                  New file
              mode:
                  0644

Summary for local
-------------
Succeeded: 37 (changed=7)
Failed:     0
-------------
Total states run:     37

Based on the output of the salt-call execution we can see that 37 Salt states succeeded, 7 of which resulted in changes. This means that the new Salt states within the bencane directory were applied. But what exactly did these states do?

Understanding the "Self-Managing" Salt states

This second repository has a handful of states that perform various tasks specific to this environment. The "Self-Managing" states are all located within the /srv/salt/bencane/salt directory.

$ ls -la /srv/salt/bencane/salt/
total 20
drwxr-xr-x 5 root root 4096 Mar 20 05:28 .
drwxr-xr-x 5 root root 4096 Mar 20 05:28 ..
drwxr-xr-x 3 root root 4096 Mar 20 05:28 config
drwxr-xr-x 2 root root 4096 Mar 20 05:28 minion
drwxr-xr-x 2 root root 4096 Mar 20 05:28 states

Within the salt directory there are several more directories that have defined Salt states. To get started let's look at the minion directory. Specifically, let's take a look at the salt/minion/init.sls file.

# cat salt/minion/init.sls
salt-minion:
  pkgrepo:
    - managed
    - humanname: SaltStack Repo
    - name: deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest {{ grains['lsb_distrib_codename'] }} main
    - dist: {{ grains['lsb_distrib_codename'] }}
    - key_url: https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub
  pkg:
    - latest
  service:
    - dead
    - enable: False

/etc/salt/minion.d/masterless.conf:
  file.managed:
    - source: salt://salt/config/etc/salt/minion.d/masterless.conf

/etc/cron.d/salt-standalone:
  file.managed:
    - source: salt://salt/config/etc/cron.d/salt-standalone

Within the minion/init.sls file there are 5 Salt states defined.

Breaking down the minion/init.sls states

Let's break down some of these states to better understand what actions they are performing.

  pkgrepo:
    - managed
    - humanname: SaltStack Repo
    - name: deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest {{ grains['lsb_distrib_codename'] }} main
    - dist: {{ grains['lsb_distrib_codename'] }}
    - key_url: https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub

The first state defined is a pkgrepo state. We can see based on the options that this state is used to manage the Apt repository that we defined earlier. We can also see from the key_url option that even the GPG key we imported earlier is managed by this state.

  pkg:
    - latest

The second state defined is a pkg state. This is used to manage a specific package, in this case the salt-minion package. Since the latest option is present, the salt-minion agent will not only install the salt-minion package but also keep it up to date with the latest version if it is already installed.

  service:
    - dead
    - enable: False

The third state is a service state. This state is used to manage the salt-minion service. With the dead and enable: False settings specified the salt-minion agent will stop and disable the salt-minion service.

So far these states are performing the same steps we performed manually above. Let's keep breaking down the minion/init.sls file to understand what other steps we have told Salt to perform.

/etc/salt/minion.d/masterless.conf:
  file.managed:
    - source: salt://salt/config/etc/salt/minion.d/masterless.conf

The fourth state is a file state; it deploys the /etc/salt/minion.d/masterless.conf file. This just happens to be the same file we created earlier. Let's take a quick look at the file being deployed to understand what Salt is doing.

$ cat salt/config/etc/salt/minion.d/masterless.conf
file_client: local
file_roots:
  base:
    - /srv/salt/base
  bencane:
    - /srv/salt/bencane

The contents of this file are exactly the same as the masterless.conf file we created in the earlier steps. Right now, the file being deployed matches what is already in place; in the future, if any changes are made to masterless.conf within this git repository, those changes will be deployed on the next state.highstate execution.

/etc/cron.d/salt-standalone:
  file.managed:
    - source: salt://salt/config/etc/cron.d/salt-standalone

The fifth state is also a file state, but the file it deploys is very different. Let's take a look at this file to understand what it is used for.

$ cat salt/config/etc/cron.d/salt-standalone
*/2 * * * * root su -c "/usr/bin/salt-call state.highstate --local 2>&1 > /dev/null"

The salt-standalone file is an /etc/cron.d based cron job that runs the same salt-call command we ran earlier to apply the local Salt states. In a masterless configuration there is no scheduled task telling the salt-minion agent to apply all of the Salt states; the above cron job takes care of this by simply running state.highstate locally every 2 minutes.

Summary of minion/init.sls

Based on the contents of minion/init.sls we can see how this salt-minion agent is configured to be "Self-Managing". The salt-minion agent is configured to perform the following steps.

  1. Configure the Saltstack Apt repository and GPG keys
  2. Install the salt-minion package or update to the newest version if already installed
  3. Deploy the masterless.conf configuration file into /etc/salt/minion.d/
  4. Deploy the /etc/cron.d/salt-standalone file which deploys a cron job to initiate state.highstate executions

These steps ensure that the salt-minion agent is both configured correctly and applying desired states every 2 minutes.

While the above steps are useful for applying the current states, the whole point of continuous delivery is to deploy changes quickly. To do this we need to also keep the Salt states up-to-date.

Keeping Salt states up-to-date with Salt

One way to keep our Salt states up to date is to tell the salt-minion agent to update them for us.

Within the /srv/salt/bencane/salt directory there is a states directory that contains two files, base.sls and bencane.sls. These two files contain similar Salt states. Let's break down the contents of the base.sls file to understand what actions it's telling the salt-minion agent to perform.

$ cat salt/states/base.sls
/srv/salt/base:
  file.directory:
    - user: root
    - group: root
    - mode: 700
    - makedirs: True

base_states:
  git.latest:
    - name: https://github.com/madflojo/salt-base.git
    - target: /srv/salt/base
    - force: True

In the above we can see that the base.sls file contains two Salt states. The first is a file state that is set to ensure the /srv/salt/base directory exists with the defined permissions.

The second state is a bit more interesting as it is a git state which is set to pull the latest copy of the salt-base repository and clone it into /srv/salt/base.

With this state defined, every time the salt-minion agent runs (which is every 2 minutes via the cron.d job), it will check for new updates to the repository and deploy them to /srv/salt/base.

The bencane.sls file contains similar states, with the difference being the repository cloned and the location to deploy the state files to.

$ cat salt/states/bencane.sls
/srv/salt/bencane:
  file.directory:
    - user: root
    - group: root
    - mode: 700
    - makedirs: True

bencane_states:
  git.latest:
    - name: https://github.com/madflojo/blog-salt.git
    - target: /srv/salt/bencane
    - force: True

At this point, we now have a Masterless Salt Minion that is not only configured to "self-manage" its own packages, but also the Salt state files that drive it.

As the state files within the git repositories are updated, those updates are pulled down by each Minion every 2 minutes. Whether the change is adding the screen package or deploying a new Docker container, it is deployed across many Masterless Minions all at once.

What's next

With the above steps complete, we now have a method for taking a new server and turning it into a Self-Managed Masterless Minion. What we didn't cover, however, is how to automate the initial installation and configuration.

In next month's article, we will talk about using salt-ssh to automate the first-time installation and configuration of the salt-minion agent using the same Salt states we used today.


Posted by Benjamin Cane

March 22, 2016 07:30 AM

March 21, 2016

Cryptography Engineering

Attack of the Week: Apple iMessage

Today's Washington Post has a story entitled "Johns Hopkins researchers poke a hole in Apple’s encryption", which describes the results of some research my students and I have been working on over the past few months.

As you might have guessed from the headline, the work concerns Apple, and specifically Apple's iMessage text messaging protocol. Over the past months my students Christina Garman, Ian Miers, Gabe Kaptchuk and Mike Rushanan and I have been looking closely at the encryption used by iMessage, in order to determine how the system fares against sophisticated attackers. The results of this analysis include some very neat new attacks that allow us to -- under very specific circumstances -- decrypt the contents of iMessage attachments, such as photos and videos.

The research team. From left:
Gabe Kaptchuk, Mike Rushanan, Ian Miers, Christina Garman
Now before I go further, it's worth noting that the security of a text messaging protocol may not seem like the most important problem in computer security. And under normal circumstances I might agree with you. But today the circumstances are anything but normal: encryption systems like iMessage are at the center of a critical national debate over the role of technology companies in assisting law enforcement.

A particularly unfortunate aspect of this controversy has been the repeated call for U.S. technology companies to add "backdoors" to end-to-end encryption systems such as iMessage. I've always felt that one of the most compelling arguments against this approach -- an argument I've made along with other colleagues -- is that we just don't know how to construct such backdoors securely. But lately I've come to believe that this position doesn't go far enough -- in the sense that it is woefully optimistic. The fact of the matter is that forget backdoors: we barely know how to make encryption work at all. If anything, this work makes me much gloomier about the subject.

But enough with the generalities. The TL;DR of our work is this:
Apple iMessage, as implemented in versions of iOS prior to 9.3 and Mac OS X prior to 10.11.4, contains serious flaws in the encryption mechanism that could allow an attacker -- who obtains iMessage ciphertexts -- to decrypt the payload of certain attachment messages via a slow but remote and silent attack, provided that one sender or recipient device is online. While capturing encrypted messages is difficult in practice on recent iOS devices, thanks to certificate pinning, it could still be conducted by a nation state attacker or a hacker with access to Apple's servers. You should probably patch now.
For those who want the gory details, I'll proceed with the rest of this post using the "fun" question and answer format I save for this sort of post.
What is Apple iMessage and why should I care?
Those of you who read this blog will know that I have a particular obsession with Apple iMessage. This isn’t because I’m weirdly obsessed with Apple — although it is a little bit because of that. Mostly it's because I think iMessage is an important protocol. The text messaging service, which was introduced in 2011, has the distinction of being the first widely-used end-to-end encrypted text messaging system in the world. 

To understand the significance of this, it's worth giving some background. Before iMessage, the vast majority of text messages were sent via SMS or MMS, meaning that they were handled by your cellular provider. Although these messages are technically encrypted, this encryption exists only on the link between your phone and the nearest cellular tower. Once an SMS reaches the tower, it’s decrypted, then stored and delivered without further protection. This means that your most personal messages are vulnerable to theft by telecom employees or sophisticated hackers. Worse, many U.S. carriers still use laughably weak encryption and protocols that are vulnerable to active interception.

So from a security point of view, iMessage was a pretty big deal. In a single stroke, Apple deployed encrypted messaging to millions of users, ensuring (in principle) that even Apple itself couldn't decrypt their communications. The even greater accomplishment was that most people didn’t even notice this happened — the encryption was handled so transparently that few users are aware of it. And Apple did this at very large scale: today, iMessage handles peak throughput of more than 200,000 encrypted messages per second, with a supported base of nearly one billion devices. 
So iMessage is important. But is it any good?
Answering this question has been kind of a hobby of mine for the past couple of years. In the past I've written about Apple's failure to publish the iMessage protocol, and on iMessage's dependence on a vulnerable centralized key server. Indeed, the use of a centralized key server is still one of iMessage's biggest weaknesses, since an attacker who controls the keyserver can use it to inject keys and conduct man in the middle attacks on iMessage users.

But while key servers are a risk, attacks on a key server seem fundamentally challenging to implement -- since they require the ability to actively manipulate Apple infrastructure without getting caught. Moreover, such attacks are only useful for prospective surveillance. If you fail to substitute a user's key before they have an interesting conversation, you can't recover their communications after the fact.

A more interesting question is whether iMessage's encryption is secure enough to stand up against retrospective decryption attacks -- that is, attempts to decrypt messages after they have been sent. Conducting such attacks is much more interesting than the naive attacks on iMessage's key server, since any such attack would require the existence of a fundamental vulnerability in iMessage's encryption itself. And in 2016 encryption seems like one of those things that we've basically figured out how to get right.

Which means, of course, that we probably haven't.
How does iMessage encryption work?
What we know about the iMessage encryption protocol comes from a previous reverse-engineering effort by a group from Quarkslab, as well as from Apple's iOS Security Guide. Based on these sources, we arrive at the following (simplified) picture of the basic iMessage encryption scheme:


To encrypt an iMessage, your phone first obtains the RSA public key of the person you're sending to. It then generates a random AES key k and encrypts the message with that key using CTR mode. Then it encrypts k using the recipient's RSA key. Finally, it signs the whole mess using the sender's ECDSA signing key. This prevents tampering along the way.
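
To make the shape of that construction concrete, here is a toy Python sketch of a hybrid scheme along those lines, written with the pyca/cryptography package. It is only a simplified model of the description above, not Apple's actual implementation -- key sizes, padding choices and message formatting all differ, and every name in it is mine.

# Toy model of an iMessage-style hybrid scheme: AES-CTR for the payload,
# RSA for the per-message AES key, ECDSA over the result. Simplified sketch,
# not Apple's code.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, ec, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

recipient_rsa = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_ecdsa = ec.generate_private_key(ec.SECP256R1())

def encrypt(plaintext: bytes):
    k = os.urandom(16)                      # random per-message AES key
    nonce = os.urandom(16)                  # CTR counter block
    enc = Cipher(algorithms.AES(k), modes.CTR(nonce)).encryptor()
    body = enc.update(plaintext) + enc.finalize()
    wrapped_key = recipient_rsa.public_key().encrypt(
        k,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA1()),
                     algorithm=hashes.SHA1(), label=None))
    # The only integrity protection is the sender's signature over the blob;
    # nothing inside the AES ciphertext binds it to this particular sender.
    signature = sender_ecdsa.sign(nonce + wrapped_key + body,
                                  ec.ECDSA(hashes.SHA256()))
    return nonce, wrapped_key, body, signature

The structural point to keep in mind is that the only thing protecting the AES ciphertext is that detachable signature, which is exactly what the next few paragraphs exploit.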

So what's missing here?

Well, the most obviously missing element is that iMessage does not use a Message Authentication Code (MAC) or authenticated encryption scheme to prevent tampering with the message. To simulate this functionality, iMessage simply uses an ECDSA signature formulated by the sender. Naively, this would appear to be good enough. Critically, it's not.

The attack works as follows. Imagine that a clever attacker intercepts the message above and is able to register her own iMessage account. First, the attacker strips off the original ECDSA signature made by the legitimate sender, and replaces it with a signature of her own. Next, she sends the newly signed message to the original recipient using her own account:


The outcome is that the user receives and decrypts a copy of the message, which has now apparently originated from the attacker rather than from the original sender. Ordinarily this would be a pretty mild attack -- but there's a useful wrinkle. In replacing the sender's signature with one of her own, the attacker has gained a powerful capability. Now she can tamper with the AES ciphertext at will.

Specifically, since in iMessage the AES ciphertext is not protected by a MAC, it is therefore malleable. As long as the attacker signs the resulting message with her key, she can flip any bits in the AES ciphertext she wants -- and this will produce a corresponding set of changes when the recipient ultimately decrypts the message. This means that, for example, if the attacker guesses that the message contains the word "cat" at some position, she can flip bits in the ciphertext to change that part of the message to read "dog" -- and she can make this change even though she can't actually read the encrypted message.
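
If you want to convince yourself of that malleability property, here is a small self-contained Python demonstration. It fakes the CTR keystream with a hash-based counter purely for illustration -- the point is the XOR structure of CTR mode, not the cipher itself -- and the plaintext, offset and key are of course made up.

# CTR mode is plaintext XOR keystream, so flipping ciphertext bits flips
# exactly the same plaintext bits. Toy keystream, for illustration only.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    out, counter = bytearray(), 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"a key the attacker never sees"
plaintext = b"my cat is on the mat"
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# Attacker guesses "cat" sits at offset 3 and wants it to read "dog".
offset, guess, target = 3, b"cat", b"dog"
mauled = bytearray(ciphertext)
for i, d in enumerate(xor(guess, target)):
    mauled[offset + i] ^= d                 # bit flips, no key required

print(xor(bytes(mauled), keystream(key, len(plaintext))))
# b'my dog is on the mat'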

Only one more big step to go.

Now further imagine that the recipient's phone will decrypt the message correctly provided that the underlying plaintext that appears following decryption is correctly formatted. If the plaintext is improperly formatted -- for a silly example, our tampering made it say "*7!" instead of "pig" -- then on receiving the message, the recipient's phone might return an error that the attacker can see.

It's well known that such a capability allows our attacker to learn information about the original message, provided that she can send many "mauled" variants to be decrypted. By mauling the underlying message in specific ways -- e.g., attempting to turn "dog" into "pig" and observing whether decryption succeeds -- the attacker can gradually learn the contents of the original message. The technique is known as a format oracle, and it's similar to the padding oracle attack discovered by Vaudenay.
So how exactly does this format oracle work?
The format oracle in iMessage is not a padding oracle. Instead it has to do with the compression that iMessage uses on every message it sends.

You see, prior to encrypting each message payload, iMessage applies a complex formatting that happens to conclude with gzip compression. Gzip is a modestly complex compression scheme that internally identifies repeated strings, applies Huffman coding, then tacks a CRC checksum computed over the original data at the end of the compressed message. It's this gzip-compressed payload that's encrypted within the AES portion of an iMessage ciphertext.

It turns out that given the ability to maul a gzip-compressed, encrypted ciphertext, there exists a fairly complicated attack that allows us to gradually recover the contents of the message by mauling the original message thousands of times and sending the modified versions to be decrypted by the target device. The attack turns on our ability to maul the compressed data by flipping bits, then "fix up" the CRC checksum correspondingly so that it reflects the change we hope to see in the uncompressed data. Depending on whether that test succeeds, we can gradually recover the contents of a message -- one byte at a time.
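
The "fix up the CRC" step works because CRC-32 is linear over XOR, up to a constant that depends only on the length: for equal-length strings, crc(a xor d) = crc(a) xor crc(d) xor crc(zeros). So the attacker never needs to know the hidden data; she XORs crc(delta) xor crc(zeros) into the encrypted CRC field along with her data change. A quick Python check of that identity, using random stand-in data and nothing iMessage-specific:

# CRC-32 is linear over XOR up to a length-dependent constant, which lets an
# attacker patch the checksum without knowing the data it covers.
import os
import zlib

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

hidden = os.urandom(64)                     # stands in for the plaintext
delta = bytes(10) + b"\x04" + bytes(53)     # the bit flips we introduce
zeros = bytes(64)

crc_patch = zlib.crc32(delta) ^ zlib.crc32(zeros)
assert zlib.crc32(xor(hidden, delta)) == zlib.crc32(hidden) ^ crc_patch
print("CRC patched blind:", hex(crc_patch))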

While I'm making this sound sort of simple, the truth is it's not. The message is encoded using Huffman coding, with a dynamic Huffman table we can't see -- since it's encrypted. This means we need to make laser-specific changes to the ciphertext such that we can predict the effect of those changes on the decrypted message, and we need to do this blind. Worse, iMessage has various countermeasures that make the attack more complex.

The complete details of the attack appear in the paper, and they're pretty eye-glazing, so I won't repeat them here. In a nutshell, we are able to decrypt a message under the following conditions:
  1. We can obtain a copy of the encrypted message
  2. We can send approximately 2^18 (invisible) encrypted messages to the target device
  3. We can determine whether or not those messages decrypted successfully
The first condition can be satisfied by obtaining ciphertexts from a compromise of Apple's Push Notification Service servers (which are responsible for routing encrypted iMessages) or by intercepting TLS connections using a stolen certificate -- something made more difficult due to the addition of certificate pinning in iOS 9. The third element is the one that initially seems the most challenging. After all, when I send an iMessage to your device, there's no particular reason that your device should send me any sort of response when the message decrypts. And yet this information is fundamental to conducting the attack!

It turns out that there's a big exception to this rule: attachment messages.
How do attachment messages differ from normal iMessages?
When I include a photo in an iMessage, I don't actually send you the photograph through the normal iMessage channel. Instead, I first encrypt that photo using a random 256-bit AES key, then I compute a SHA1 hash and upload the encrypted photo to iCloud. What I send you via iMessage is actually just an iCloud.com URL to the encrypted photo, the SHA1 hash, and the decryption key.

Contents of an "attachment" message.
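
As a rough sketch of the envelope described above -- the field names, URL and encoding here are illustrative guesses, not Apple's wire format:

# Illustrative only: roughly the information an attachment message carries.
import hashlib
import os

photo = os.urandom(200 * 1024)              # pretend this is the JPEG
attachment_key = os.urandom(32)             # random 256-bit AES key

# In the real flow the photo is AES-encrypted under attachment_key and the
# resulting blob is uploaded to iCloud; here we just hash a placeholder blob.
encrypted_blob = photo

envelope = {
    "url": "https://content.example.invalid/attachment",   # hypothetical URL
    "sha1": hashlib.sha1(encrypted_blob).hexdigest(),
    "key": attachment_key.hex(),
}

It is that URL, sitting inside the malleable AES ciphertext, that the mauling attack goes after.
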
When you successfully receive and decrypt an iMessage from some recipient, your Messages client will automatically reach out and attempt to download that photo. It's this download attempt, which happens only when the phone successfully decrypts an attachment message, that makes it possible for an attacker to know whether or not the decryption has succeeded.

One approach for the attacker to detect this download attempt is to gain access to and control your local network connections. But this seems impractical. A more sophisticated approach is to actually maul the URL within the ciphertext so that rather than pointing to iCloud.com, it points to a related URL such as i8loud.com. Then the attacker can simply register that domain, place a server there and allow the client to reach out to it. This requires no access to the victim's local network.

By capturing an attachment message, repeatedly mauling it, and monitoring the download attempts made by the victim device, we can gradually recover all of the digits of the encryption key stored within the attachment. Then we simply reach out to iCloud and download the attachment ourselves. And that's game over. The attack is currently quite slow -- it takes more than 70 hours to run -- but mostly because our code is slow and not optimized. We believe with more engineering it could be made to run in a fraction of a day.

Result of decrypting the AES key for an attachment. Note that the ? represents digits we could not recover for various reasons, typically due to string repetitions. We can brute-force the remaining digits.
The need for an online response is why our attack currently works against attachment messages only: those are simply the messages that make the phone do visible things. However, this does not mean the flaw in iMessage encryption is somehow limited to attachments -- it could very likely be used against other iMessages, given an appropriate side-channel.
How is Apple fixing this?
Apple's fixes are twofold. First, starting in iOS 9.0 (and before our work), Apple began deploying aggressive certificate pinning across iOS applications. This doesn't fix the attack on iMessage crypto, but it does make it much harder for attackers to recover iMessage ciphertexts to decrypt in the first place.

Unfortunately even if this works perfectly, Apple still has access to iMessage ciphertexts. Worse, Apple's servers will retain these messages for up to 30 days if they are not delivered to one of your devices. A vulnerability in Apple Push Network authentication, or a compromise of these servers, could expose them all. This means that pinning is only a mitigation, not a true fix.

As of iOS 9.3, Apple has implemented a short-term mitigation that my student Ian Miers proposed. This relies on the fact that while the AES ciphertext is malleable, the RSA-OAEP portion of the ciphertext is not. The fix maintains a "cache" of recently received RSA ciphertexts and rejects any repeated ciphertexts. In practice, this shuts down our attack -- provided the cache is large enough. We believe it probably is.
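
A minimal sketch of that kind of duplicate-ciphertext cache follows -- the capacity and eviction policy here are my own guesses, not Apple's:

# Bounded duplicate-detection cache: drop any message whose RSA-wrapped key
# ciphertext has been seen recently. Sizes are illustrative.
from collections import OrderedDict

class RsaCiphertextCache:
    def __init__(self, capacity: int = 100_000):
        self.capacity = capacity
        self.seen = OrderedDict()

    def accept(self, rsa_ciphertext: bytes) -> bool:
        if rsa_ciphertext in self.seen:
            return False                    # replayed wrapped key: reject
        self.seen[rsa_ciphertext] = True
        if len(self.seen) > self.capacity:
            self.seen.popitem(last=False)   # evict the oldest entry
        return True

Every mauled variant in the attack reuses the same RSA-wrapped key, so a cache like this rejects them all after the first delivery -- which is why the mitigation only holds as long as the cache is big enough that the attacker can't flush it.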

In the long term, Apple should drop iMessage like a hot rock and move to Signal/Axolotl.
So what does it all mean?
As much as I wish I had more to say, fundamentally, security is just plain hard. Over time we get better at this, but for the foreseeable future we'll never be ahead. The only outcome I can hope for is that people realize how hard this process is -- and stop asking technologists to add unacceptable complexity to systems that already have too much of it.

by Matthew Green (noreply@blogger.com) at March 21, 2016 09:44 PM

March 18, 2016

Slaptijack

Interleave Two Lists of Variable Length (Python)

I was recently asked to take two lists and interleave them. Although I can not think of a scenario off the top of my head where this might be useful, it doesn't strike me as a completely silly thing to do. StackOverflow's top answer for this problem us...
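
For reference, one common way to do this in Python -- this is my own restatement of the usual itertools recipe, not necessarily the StackOverflow answer referred to above:

# Interleave two (or more) lists of possibly different lengths.
from itertools import chain, zip_longest

def interleave(*lists):
    """interleave([1, 2, 3], ['a', 'b']) -> [1, 'a', 2, 'b', 3]"""
    sentinel = object()
    zipped = zip_longest(*lists, fillvalue=sentinel)
    return [x for x in chain.from_iterable(zipped) if x is not sentinel]

print(interleave([1, 2, 3], ['a', 'b']))      # [1, 'a', 2, 'b', 3]
print(interleave([1], ['a', 'b', 'c', 'd']))  # [1, 'a', 'b', 'c', 'd']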

by Scott Hebert at March 18, 2016 01:00 PM

R.I.Pienaar

Puppet 4 Type Aliases

Back when I first took a look at Puppet 4 features I explored the new Data Types and said:

Additionally I cannot see myself using a Struct like above in the argument list – to which Henrik says they are looking to add a typedef thing to the language so you can give complex Structs a more convenient name and use that. This will help that a lot.

And since Puppet 4.4.0 this has now become a reality. So a quick post to look at that.

The Problem


I’ve been writing a Hiera based node classifier both to scratch and itch and to have something fairly complex to explore the new features in Puppet 4.

The classifier takes a set of classification rules and produces classifications – classes to include and parameters – from there. Here's a sample classification:

classifier::rules:
  RedHat VMs:
    match: all
    rules:
      - fact: "%{facts.os.family}"
        operator: ==
        value: RedHat
      - fact: "%{facts.is_virtual}"
        operator: ==
        value: "true"
    data:
      redhat_vm: true
      centos::vm::someprop: someval
    classes:
      - centos::vm

This is a classification with 2 rules that match machines running RedHat-like operating systems and that are virtual. If both rules are true it will:

  • Include the class centos::vm
  • Create some data redhat_vm => true and centos::vm::someprop => someval

You can have an arbitrary number of classifications made up of an arbitrary number of rules. This data lives in Hiera so you can have all sorts of merging, overriding and knock-out fun with it.

The amazing thing is that since Puppet 4.4.0 there is now no Ruby code involved in doing what I said above; all the parsing, looping, evaluating of rules and building of data structures is done using functions written in the pure Puppet DSL.

There’s some Ruby there in the form of a custom backend for the new lookup based hiera system – but this is experimental, optional and a bit crazy.

Anyway, so here’s the problem, before Puppet 4.4.0 my main class had this in:

class classifier (
  Hash[String,
    Struct[{
      match    => Enum["all", "any"],
      rules    => Array[
        Struct[{
          fact     => String,
          operator => Enum["==", "=~", ">", ">=", "<", "<="],
          value    => Data,
          invert   => Optional[Boolean]
        }]
      ],
      data     => Optional[Hash[Pattern[/\A[a-z0-9_][a-zA-Z0-9_]*\Z/], Data]],
      classes  => Array[Pattern[/\A([a-z][a-z0-9_]*)?(::[a-z][a-z0-9_]*)*\Z/]]
    }]
  ] $rules = {}
) {
....
}

This describes the full valid rule as a Puppet Type. It's pretty horrible. Worse, I have a number of functions and classes that all receive the full classification or parts of it, and I'd have to duplicate all this all over.

The Solution


So as of yesterday I can now make this a lot better:

class classifier (
  Classifier::Classifications  $rules = {},
) {
....
}

To do this I made a few files in the module:

# classifier/types/matches.pp
type Classifier::Matches = Enum["all", "any"]
# classifier/types/classname.pp
type Classifier::Classname = Pattern[/\A([a-z][a-z0-9_]*)?(::[a-z][a-z0-9_]*)*\Z/]

and a few more, eventually ending up in:

# classifier/types/classification.pp
type Classifier::Classification = Struct[{
  match    => Classifier::Matches,
  rules    => Array[Classifier::Rule],
  data     => Classifier::Data,
  classes  => Array[Classifier::Classname]
}]

Which, as you can see, solves the problem quite nicely. Now in classes and functions where I need, let's say, just a Rule, all I do is use Classifier::Rule instead of all the crazy.

This makes the native Puppet Data Types perfectly usable for me; they are well worth adopting.

by R.I. Pienaar at March 18, 2016 07:40 AM

March 14, 2016

Electricmonk.nl

Terrible Virtualbox disk performance

For a while I've noticed that Virtualbox has terrible performance when installing Debian / Ubuntu packages. A little top, iotop and atop research later, and it turns out the culprit is the disk I/O, which is just ludicrously slow. The cause is the fact that Virtualbox doesn't have "Use host I/O cache" turned on by default for SATA controllers. Turning that option on gives a massive improvement to speed.

To turn "Host IO  cache":

  • Open the Virtual Machine's Settings dialog
  • Go to "Storage"
  • Click on the "Controller: SATA" controller
  • Enable the "Host I/O Cache" setting.
  • You may also want to check any other controllers and / or disks to see if this option is on there.
  • Save the settings and start the VM and you'll see a big improvement.

hostiocache

Update:

Craig Gill was kind enough to mail me with the reasoning behind why the "Host I/O cache" setting is off by default:

Hello, thanks for the blog post, I just wanted to comment on it below:

In your blog post "Terrible Virtualbox disk performance", you mention
that the 'use host i/o cache' is off by default.
In the VirtualBox help it explains why it is off by default, and it
basically boils down to safety over performance.

Here are the disadvantages of having that setting on (which these
points may not apply to you):

1. Delayed writing through the host OS cache is less secure. When the
guest OS writes data, it considers the data written even though it has
not yet arrived on a physical disk. If for some reason the write does
not happen (power failure, host crash), the likelihood of data loss
increases.

2. Disk image files tend to be very large. Caching them can therefore
quickly use up the entire host OS cache. Depending on the efficiency
of the host OS caching, this may slow down the host immensely,
especially if several VMs run at the same time. For example, on Linux
hosts, host caching may result in Linux delaying all writes until the
host cache is nearly full and then writing out all these changes at
once, possibly stalling VM execution for minutes. This can result in
I/O errors in the guest as I/O requests time out there.

3. Physical memory is often wasted as guest operating systems
typically have their own I/O caches, which may result in the data
being cached twice (in both the guest and the host caches) for little
effect.

"If you decide to disable host I/O caching for the above reasons,
VirtualBox uses its own small cache to buffer writes, but no read
caching since this is typically already performed by the guest OS. In
addition, VirtualBox fully supports asynchronous I/O for its virtual
SATA, SCSI and SAS controllers through multiple I/O threads."

Thanks,
Craig

That's important information to keep in mind. Thanks Craig!

My reply:

Thanks for the heads up!

For what it's worth, I already wanted to write the article a few weeks ago, and have been running all my VirtualBox hosts with Host I/O cache set to "on", and I haven't noticed any problems. I haven't lost any data in databases running on Virtualbox (even though they've crashed a few times; or actually I killed them with ctrl-alt-backspace), and the host is actually more responsive if VirtualBoxes are under heavy disk load. I do have 16 GB of memory, which may make the difference.

The speedups in the guests are more than worth any other drawbacks though. The gitlab omnibus package now installs in 4 minutes rather than 5+ hours (yes, hours).

I'll keep this blog post updated in case I run into any weird problems.

by admin at March 14, 2016 08:34 PM

Evaggelos Balaskas

Top Ten Linux Distributions and https

A/A |  Distro    |          URL               | Verified by       | Begin      | End        | Key
01. | ArchLinux  | https://www.archlinux.org/ | Let's Encrypt     | 02/24/2016 | 05/24/2016 | 2048
02. | Linux Mint | https://linuxmint.com/     | COMODO CA Limited | 02/24/2016 | 02/24/2017 | 2048
03. | Debian     | https://www.debian.org/    | Gandi             | 12/11/2015 | 01/21/2017 | 3072
04. | Ubuntu     | http://www.ubuntu.com      | -                 | -          | -          | -
05. | openSUSE   | https://www.opensuse.org/  | DigiCert Inc      | 02/17/2015 | 04/23/2018 | 2048
06. | Fedora     | https://getfedora.org/     | DigiCert Inc      | 11/24/2014 | 11/28/2017 | 4096
07. | CentOS     | https://www.centos.org/    | DigiCert Inc      | 07/29/2014 | 08/02/2017 | 2048
08. | Manjaro    | https://manjaro.github.io/ | DigiCert Inc      | 01/20/2016 | 04/06/2017 | 2048
09. | Mageia     | https://www.mageia.org/    | Gandi             | 03/01/2016 | 02/07/2018 | 2048
10. | Kali       | https://www.kali.org/      | GeoTrust Inc      | 11/09/2014 | 11/12/2018 | 2048
Tag(s): https

March 14, 2016 04:27 PM

March 12, 2016

Evaggelos Balaskas

Baïkal - CalDAV & CardDAV server

Baïkal is a CalDAV and CardDAV server, based on sabre/dav,

Self-hosting your own CalDAV & CardDAV server is one of the first steps to better control your data and keep your data, actually, yours! So here comes Baikal, which is really easy to set up. Just as easily you can configure any device (mobile/tablet/laptop/desktop) to use your baikal instance and synchronize your calendar & contacts everywhere.

 

This blog post contains some personal notes on installing or upgrading baikal on your web server.

[ The latest version as of this writing is 0.4.1 ]

Change to your web directory (usually something like /var/www/html/) and download baikal:

Clean Install - Latest release 0.4.1
based on sabre/dav 3.1.2
You need at least PHP 5.5, but preferably use 5.6.

# wget -c https://github.com/fruux/Baikal/releases/download/0.4.1/baikal-0.4.1.zip
# yes | unzip baikal-0.4.1.zip

# chown -R apache:apache baikal/

That’s it !

 

Be aware that there is a big difference between 0.2.7 and versions greater than 0.3.x:
the URL has an extra path component, html

from: https://baikal.example.com/admin
to : https://baikal.example.com/html/admin

If you already have baikal-0.2.7 installed and you want to upgrade to 0.4.x or later, then you have to follow the steps below:

# wget -c http://baikal-server.com/get/baikal-flat-0.2.7.zip
# unzip baikal-flat-0.2.7.zip
# mv baikal-flat baikal

# wget -c https://github.com/fruux/Baikal/releases/download/0.4.1/baikal-0.4.1.zip
# yes | unzip baikal-0.4.1.zip

# touch baikal/Specific/ENABLE_INSTALL
# chown -R apache:apache baikal/

 

I prefer to create a new virtualhost every time I need to add new functionality to my domain.

Be smart & use encryption!
Below is my virtualhost as an example:

<VirtualHost *:443>

    ServerName	baikal.example.com

    # SSL Support
    SSLEngine on

    SSLProtocol ALL -SSLv2 -SSLv3
    SSLHonorCipherOrder on
    SSLCipherSuite HIGH:!aNULL:!MD5

    SSLCertificateFile /etc/letsencrypt/live/baikal.example.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/baikal.example.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/baikal.example.com/chain.pem

    # Logs
    CustomLog logs/baikal.access.log combined
    ErrorLog  logs/baikal.error.log

    DocumentRoot /var/www/html/baikal/

    <Directory /var/www/html/baikal/>
            Order allow,deny
            Allow from all
    </Directory>

</VirtualHost>

 

The next step is to open your browser and go to your baikal location,

eg. https://baikal.example.com/html/

admin interface:

https://baikal.example.com/html/admin/

or

if you have an older version (0.2.7) on your system

eg. https://baikal.example.com

 

I use SQLite for personal use (it makes backups easy) but you can always choose MySQL.

Dashboard on 0.4.1:

baikal_041d.jpg

Useful URIs are:

Principals:

baikal_041c.jpg

Plugins:

baikal_041b.jpg

Nodes:

baikal_041a.jpg

Here is a screen guide of the latest versions:

baikal_01.jpg

baikal_02.jpg

baikal_03.jpg

baikal_04.jpg

Log in to the admin dashboard and create your user through the Users and resources tab, and you are done with the baikal installation & configuration process.

Principals

Applications (CalDAV/CardDAV and task clients) can now access the server via the principals URI:

https://baikal.example.com/html/card.php/principals

or via dav.php

https://baikal.example.com/html/dav.php

but if your client does not support the above all-in-one URI, then try the ones below for calendar & contacts:

CalDAV

https://baikal.example.com/html/cal.php/calendars/test/default

CardDAV

https://baikal.example.com/html/card.php/addressbooks/test/default

On Android devices, I use DAVdroid.

If you have a problem with your self-signed certificate,
try adding it to your device through the security settings.

davdroid_01.jpg

davdroid_03.jpg

March 12, 2016 06:54 PM

March 10, 2016

HolisticInfoSec.org

toolsmith #114: WireEdit and Deep Packet Modification




PCAPs or it didn't happen, right? 



Introduction
Packet heads, this toolsmith is for you. Social media to the rescue: Packet Watcher (jinq102030) Tweeted using the #toolsmith hashtag to say that WireEdit would make a great toolsmith topic. Right you are, sir! Thank you. Many consider Wireshark the quintessential tool for packet analysis; it was only my second toolsmith topic almost ten years ago in November 2006. I wouldn't dream of conducting network forensic analysis without NetworkMiner (August 2008) or CapLoader (October 2015). Then there's Xplico, Security Onion, NST, Hex, the list goes on and on...
Time to add a new one. Ever want to more easily edit those packets? Me too. Enter WireEdit, a comparatively new player in the space. Michael Sukhar (@wirefloss) wrote and maintains WireEdit, the first universal WYSIWYG (what you see is what you get) packet editor. Michael identifies WireEdit as a huge productivity booster for anybody working with network packets, in a manner similar to other groundbreaking industry WYSIWYG tools.

In Michael's own words: "Network packets are complex data structures built and manipulated by applying complex but, in most cases, well known rules. Manipulating packets with C/C++, or even Python, requires programming skills not everyone possesses and often lots of time, even if one has to change a single bit value. The other existing packet editors support editing of low stack layers like IPv4, TCP/UDP, etc, because the offsets of specific fields from the beginning of the packet are either fixed or easily calculated. The application stack layers supported by those pre-WireEdit tools are usually the text based ones, like SIP, HTTP, etc. This is because no magic is required to edit text. WireEdit's main innovation is that it allows editing binary encoded application layers in a WYSIWYG mode."

I've typically needed to edit packets to anonymize or reduce captures, but why else would one want to edit packets?
1) Sanitization: Often, PCAPs contain sensitive data. To date, there has been no easy mechanism to “sanitize” a PCAP, which, in turn, makes traces hard to share.
2) Security testing: Engineers often want to vary or manipulate packets to see how the network stack reacts to it. To date, that task is often accomplished via programmatic means.
WireEdit allows you to do so in just a few clicks.

Michael describes a demo video he published in April 2015, where he edits the application layer of the SS7 stack (GSM MAP packets). GSM MAP is the protocol responsible for much of the application logic in “classic” mobile networks, and is still widely deployed. The packet he edits carries an SMS text message, and the layer he edits is way up the stack and binary encoded. Michael describes the message displayed as a text, but notes that if looking at the binary of the packet, you wouldn’t find it there due to complex encoding. If you choose to decode in order to edit the text, your only option is to look up the offset of the appropriate bytes in Wireshark or a similar tool, and try to edit the bytes directly.
This often completely breaks the packet, and Michael proudly points out that he's not aware of any other tool that allows such editing in WYSIWYG mode. Nor am I, and I enjoyed putting WireEdit through a quick validation of my own.

Test Plan

I conceived a test plan to modify a PCAP of normal web browsing traffic with web application attacks written in to the capture with WireEdit. Before editing the capture, I'd run it through a test harness to validate that no rules were triggered resulting in any alerts, thus indicating that the capture was clean. The test harness was a Snort 2.9.8.0 instance I'd implemented on a new Ubuntu 14.04 LTS, configured with Snort VRT and Emerging Threats emerging-web_server and emerging-web_specific_apps rules enabled. To keep our analysis all in the family I took a capture while properly browsing the OpenBSD entry for tcpdump.
A known good request for such a query would, as a URL, look like this:
http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/tcpdump.8?query=tcpdump&sec=8
Conversely, if I were to pass a cross-site scripting attack (I did not, and you should not) via this same URL and one of the available parameters, it might look something like this:
http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/tcpdump.8?query=tcpdump&sec=8%22onmouseover%3Dalert(1337)%2F%2F
Again though, my test plan was one where I wasn't conducting any actual attacks against any website; instead I used WireEdit to "maliciously" modify the packet capture from a normal browsing session. I would then parse it with Snort to validate that the related web application security rules fired correctly.
This in turn would validate WireEdit's capabilities as a WYSIWYG PCAP editor, as you'll see in the walk-through. Such a testing scenario is a very real method for testing the efficacy of your IDS for web application attack detection, assuming it utilizes a Snort-based rule set.
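
For contrast with the WYSIWYG workflow below, here is roughly what the programmatic route looks like with scapy. It is a hedged sketch of the general idea rather than part of the walkthrough: the output filename is mine, and because the payload grows, the sequence and ack numbers of later packets in that TCP stream are left inconsistent -- one of the bookkeeping chores a same-length WireEdit-style edit sidesteps.

# Rough programmatic equivalent of the WireEdit step: append an attack
# string to the captured Request-URI with scapy. Sketch only.
from scapy.all import IP, TCP, Raw, rdpcap, wrpcap

XSS = b"%22onmouseover%3Dalert(1337)%2F%2F"    # payload from this article

packets = rdpcap("toolsmith.pcap")
for pkt in packets:
    if pkt.haslayer(Raw) and b"GET /cgi-bin/man.cgi" in pkt[Raw].load:
        pkt[Raw].load = pkt[Raw].load.replace(
            b"sec=8 HTTP/1.1", b"sec=8" + XSS + b" HTTP/1.1", 1)
        # Let scapy recompute lengths and checksums when the file is written.
        del pkt[IP].len
        del pkt[IP].chksum
        del pkt[TCP].chksum
wrpcap("toolsmith_edited.pcap", packets)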

Testing

On my Ubuntu Snort server VM I ran sudo tcpdump -i eth0 -w toolsmith.pcap while browsing http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/tcpdump.8?query=tcpdump&sec=8.
Next, I ran sudo snort -c /etc/snort/snort.conf -r toolsmith.pcap against the clean, unmodified PCAP to validate that no alerts were received, results noted in Figure 1.

Figure 1: No alerts triggered via initial OpenBSD browsing PCAP
I then dragged the capture (407 packets) over to my Windows host running WireEdit.
Now wait for it, because this is a lot to grasp in a short period of time regarding using WireEdit.
In the WireEdit UI, click the Open icon, then select the PCAP you wish to edit, and...that's it, you're ready to edit. :-) WireEdit tagged packet #9 with the prerequisite GET request marker I was interested in, so I expanded that packet and drilled down to the HTTP: GET descriptor and the Request-URI under Request-Line. More massive complexity, here take notes because it's gonna be rough. I right-clicked the Request-URI, selected Edit PDU, and edited the PDU with a cross-site scripting (JavaScript) payload (URL encoded) as part of the GET request. I told you, really difficult right? Figure 2 shows just how easy it really is.

Figure 2: Using WireEdit to modify Request-URI with XSS payload
I then saved the edited PCAP as toolsmithXSS.pcap and dragged it back over to my Snort server and re-ran it through Snort. The once clean, pristine PCAP elicited an entirely different response from Snort this time. Figure 3 tells no lies.

Figure 3: XSS ET Snort alert fires
Perfect: in what was literally a 30-second edit with WireEdit, I validated that my ten-minute Snort setup catches cross-site scripting attempts with at least one rule. And no websites were actually harmed in the making of this test scenario, just a quick tweak with WireEdit.
That was fun, let's do it again, this time with a SQL injection payload. Continuing with toolsmithXSS.pcap, I jumped to the GET request in frame 203, as it included a request for a different query, and again edited the Request-URI with an attack specific to MySQL, as seen in Figure 4.



I saved this PCAP modification as toolsmithXSS_SQLi.pcap and returned to the Snort server for yet another happy trip down Snort Rule Lane. As Figure 5 shows, we had an even better result this time.


Figure 5: WireEdited PCAP triggers multiple SQL injection alerts
In addition to the initial XSS alert firing again, this time we collected four alerts for:

  • ET WEB_SERVER MYSQL SELECT CONCAT SQL Injection Attempt
  • ET WEB_SERVER SELECT USER SQL Injection Attempt in URI
  • ET WEB_SERVER Possible SQL Injection Attempt UNION SELECT
  • ET WEB_SERVER Possible SQL Injection Attempt SELECT FROM

That's a big fat "hell, yes" for WireEdit.
Still with me that I never actually executed these attacks? I just edited the PCAP with WireEdit and fed it back to the Snort beast. Imagine a PCAP like this, optimized for the OWASP Top 10, being added to your security test library, and you didn't need to conduct any actual web application attacks. Thanks WireEdit!

Conclusion

WireEdit is beautifully documented, with a great Quickstart. Peruse the WireEdit website and FAQ, and watch the available videos. The next time you need to edit packets, you'll be well armed and ready to do so with WireEdit, and you won't be pulling your hair out trying to accomplish it quickly, effectively, and correctly. WireEdit made a huge leap from unknown to me to the top five on my favorite tools list. WireEdit is everything it is claimed to be. Outstanding.
Ping me via email or Twitter if you have questions: russ at holisticinfosec dot org or @holisticinfosec.

ACK

Thanks to Michael Sukhar for WireEdit and Packet Watcher for the great suggestion.

by Russ McRee (noreply@blogger.com) at March 10, 2016 06:46 AM

March 09, 2016

That grumpy BSD guy

Domain Name Scams Are Alive And Well, Thank You

Is somebody actually trying to register your company name as a .cn or .asia domain? Not likely. And don't pay them.

It has been a while since anybody tried to talk me into registering a domain name I wasn't sure I wanted in the first place, but it has happened before. Scams more or less like the Swedish one are as common as they are transparent, but apparently enough people take the bait that the scammers keep trying.

After a few quiet years in my backwater of the Internet, in March of 2016, we saw a new sales push that came from China. The initial contact on March 4th, from somebody calling himself Jim Bing read (preserved here with headers for reference, you may need MIME tools to actually extract text due to character set handling),

Subject: Notice for "bsdly"

Dear CEO,

(If you are not the person who is in charge of this, please forward this to your CEO, because this is urgent, Thanks)

We are a Network Service Company which is the domain name registration center in China.
We received an application from Huabao Ltd on March 2, 2016. They want to register " bsdly " as their Internet Keyword and " bsdly.cn "、" bsdly.com.cn " 、" bsdly.net.cn "、" bsdly.org.cn " 、" bsdly.asia " domain names, they are in China and Asia domain names. But after checking it, we find " bsdly " conflicts with your company. In order to deal with this matter better, so we send you email and confirm whether this company is your distributor or business partner in China or not?

Best Regards,

Jim
General Manager
Shanghai Office (Head Office)
8006, Xinlong Building, No. 415 WuBao Road,
Shanghai 201105, China
Tel: +86 216191 8696
Mobile: +86 1870199 4951
Fax: +86 216191 8697
Web: www.cnweb-registry.com


The message was phrased a bit oddly in parts (as in, why would anybody register an "internet keyword"?), but not entirely unintelligible as English-language messages from Asians sometimes are.

I had a slight feeling of deja vu -- I remembered a very similar message turning up in 2008 while we were in the process of selling the company we'd started a number of years earlier. In the spirit of due diligence (after asking the buyer) we replied then that the company did not have any plans for expanding into China, and if my colleagues ever heard back, it likely happened after I'd left the company.

This time around I was only taking a break between several semi-urgent tasks, so I quickly wrote a reply, phrased in a way that I thought would likely make them just go away (also preserved here):

Subject: Re: Notice for "bsdly"
 
Dear Jim Bing,

We do not have any Chinese partners at this time, and we are not
currently working to establish a presence in Chinese territory. As to
Huabao Ltd's intentions for registering those domains, I have no idea
why they should want to.

Even if we do not currently plan to operate in China and see no need
to register those domains ourselves at this time, there is a risk of
some (possibly minor) confusion if those names are to be registered
and maintained by a third party. If you have the legal and practical
authority to deny these registrations that would be my preference.

Yours,
Peter N. M. Hansteen


Then on March 7th, a message from "Jiang zhihai" turned up (preserved here, again note the character set issues):

Subject: " bsdly "
Dear Sirs,

Our company based in chinese office, our company has submitted the " bsdly " as CN/ASIA(.asia/.cn/.com.cn/.net.cn/.org.cn) domain name and Internet Keyword, we are waiting for Mr. Jim's approval. We think these names are very important for our business in Chinese and Asia market. Even though Mr. Jim advises us to change another name, we will persist in this name.

Best regards

Jiang zhihai

Now, if they're in a formal process of getting approval for that domain name, why would they want to screw things up by contacting me directly? I was beginning to smell a rat, but I sent them an answer anyway (preserved here):

Subject: Re: " bsdly "

Dear Jiang zhihai,

You've managed to make me a tad curious as to why the "bsdly" name
would be important in these markets.

While there is a very specific reason why I chose that name for my
domains back in 2004, I don't see any reason why you wouldn't be
perfectly well served by picking some other random sequence of characters.

So out of pure curiosity, care to explain why you're doing this?

Sincerely,
Peter N. M. Hansteen

Yes, that domain name has been around for a while. I didn't immediately remember exactly when I'd registered the domain, but a quick look at the whois info (preserved here) confirmed what I thought. I've had it since 2004.

Anyone who is vaguely familiar with the stuff I write about will have sufficient wits about them to recognize the weak pun the domain name is. If "bsdly" has any other significance whatsoever in other languages including the several Chinese ones, I'd genuinely like to know.

But by now I was pretty sure this was a scam. Registrars may or may not do trademark searches before registering domains, but in most cases the registrar would not care either way. Domain registration is for the most part a purely technical service that extends to making sure whether any requested domains are in fact available, while any legal disputes such as trademark issues could very easily be sent off to the courts for the end users at both ends to resolve. The supposed Chinese customer contacting me directly just does not make sense.

Then of course a few hours after I'd sent that reply, our man Jim fired off a new message (preserved here, MIME and all):

Subject: CN/ASIA domain names & Internet Keyword

Dear Peter N. M. Hansteen,

Based on your company having no relationship with them, we have suggested they should choose another name to avoid this conflict but they insist on this name as CN/ASIA domain names (asia/ cn/ com.cn/ net.cn/ org.cn) and internet keyword on the internet. In our opinion, maybe they do the similar business as your company and register it to promote his company.
According to the domain name registration principle: The domain names and internet keyword which applied based on the international principle are opened to companies as well as individuals. Any companies or individuals have rights to register any domain name and internet keyword which are unregistered. Because your company haven't registered this name as CN/ASIA domains and internet keyword on the internet, anyone can obtain them by registration. However, in order to avoid this conflict, the trademark or original name owner has priority to make this registration in our audit period. If your company is the original owner of this name and want to register these CN/ASIA domain names (asia/ cn/ com.cn/ net.cn/ org.cn) and internet keyword to prevent anybody from using them, please inform us. We can send an application form and the price list to you and help you register these within dispute period.

Kind regards

Jim
General Manager
Shanghai Office (Head Office)
8006, Xinlong Building, No. 415 WuBao Road,
Shanghai 201105, China
Tel: +86 216191 8696
Mobile: +86 1870199 4951
Fax: +86 216191 8697
Web: www.cnwebregistry.com

So basically he's fishing for me to pony up some cash and register those domains myself through their outfit. Quelle surprise.

I'd already checked whether my regular registrar offers .cn registrations (they don't), and checking for what looked like legitimate .cn domain registrars turned up that registering a .cn domain would likely cost to the tune of USD 35. Not a lot of money, but more than I care to spend (and keep spending on a regular basis) on something I emphatically do not need.

So I decided to do my homework. It turns out that this is a scam that's been going on for years. A search on the names of persons and companies turned up Matt Lowe's 2012 blog post Chinese Domain Name Registration Scams with a narrative identical to my experience, with only minor variations in names and addresses.

Checking whois while writing this it turns out that apparently bsdly.cn has been registered:

[Wed Mar 09 20:34:34] peter@skapet:~$ whois bsdly.cn
Domain Name: bsdly.cn
ROID: 20160229s10001s82486914-cn
Domain Status: ok
Registrant ID: 22cn120821rm22yr
Registrant: 徐新荣
Registrant Contact Email: 1725093@qq.com
Sponsoring Registrar: 浙江贰贰网络有限公司
Name Server: ns1.22.cn
Name Server: ns2.22.cn
Registration Time: 2016-02-29 20:55:09
Expiration Time: 2017-02-28 20:55:09
DNSSEC: unsigned

But it doesn't resolve more than a week after registration:

[Wed Mar 09 20:34:47] peter@skapet:~$ host bsdly.cn
Host bsdly.cn not found: 2(SERVFAIL)


That likely means they thought me a prospect and registered with an intent to sell, and they've already spent some amount of cash they're not getting back from me. I think we can consider them LARTed, if only on a very small scale.

What's more, none of the name servers specified in the whois info seem to answer DNS queries:

[Wed Mar 09 20:35:36] peter@skapet:~$ dig @ns1.22.cn bsdly.cn any

; <<>> DiG 9.4.2-P2 <<>> @ns1.22.cn bsdly.cn any
; (2 servers found)
;; global options:  printcmd
;; connection timed out; no servers could be reached
[Wed Mar 09 20:36:14] peter@skapet:~$ dig @ns2.22.cn bsdly.cn any

; <<>> DiG 9.4.2-P2 <<>> @ns2.22.cn bsdly.cn any
; (2 servers found)
;; global options:  printcmd
;; connection timed out; no servers could be reached



So summing up,
  • This is a scam that appears to have been running for years.
  • If something similar to those messages start turning up in your inbox, the one thing you do not want to do is to actually pay for the domains they're offering.

    Most likely you do not need those domains, and it's easy to check how far along they are in the registration process. If you have other contacts that will cheaply and easily let you register those domains yourself, there's an element of entertainment to consider. But keep in mind that automatic renewals for domains you don't actually need can turn irritating once you've had a few laughs over the LARTing.
  • If you are actually considering setting up shop in the markets they're offering domains for and you receive those messages before you've come around to registering domains matching your trademarks, you are the one who's screwed up.
If this makes you worried about Asian cyber-criminals or the Cyber Command of the People's Liberation Army out to get your cyber-whatever, please calm down.

Sending near-identical email messages to people listed in various domains' whois info does not require a lot of resources, and as Matt says in his article, there are indications that this could very well be the work (for some values of) of a single individual. As cybercrime goes, this is the rough equivalent of some petty, if unpleasant, street crime.

I'm all ears for suggestions for further LARTing (at least those that do not require a lot of effort on my part), and if you've had similar experiences, I'd like to hear from you (in comments or email). Do visit Matt Lowe's site too, and add to his collection if you want to help him keep track.

And of course, if "Jim Bing" or "Jiang zhihai" actually answer any of my questions, I'll let you know with an update to this article.

Update 2016-03-15: As you can imagine I've been checking whether bsdly.cn resolves and the registration status of the domain via whois at semi-random intervals of at least a few hours since I started the blog post. I was a bit surprised to find that the .cn whois server does not answer requests at the moment:

[Tue Mar 15 10:23:31] peter@portal:~$ whois bsdly.cn
whois: cn.whois-servers.net: connect: Connection timed out


It could of course be a coincidence and an unrelated technical issue. I'd appreciate independent verification.

by Peter N. M. Hansteen (noreply@blogger.com) at March 09, 2016 08:17 PM