Planet SysAdmin


April 29, 2016

Chris Siebenmann

You should plan for your anti-spam scanner malfunctioning someday

Yesterday I mentioned that the commercial anti-spam and anti-virus system we use ran into a bug where it hung up on some incoming emails. One reaction to this is to point and laugh; silly us for using a commercial anti-spam system, we probably got what we deserved here. I think that this attitude is a mistake.

The reality is that all modern anti-spam and anti-virus systems are going to have bugs. It's basically inherent in the nature of the beast. These systems are trying to do a bunch of relatively sophisticated analysis on relatively complicated binary formats, like ZIP files, PDFs, and various sorts of executables; it would be truly surprising if all of the code involved in doing this was completely bug free, and every so often the bugs are going to have sufficiently bad consequences to cause explosions.

(It doesn't even need to be a bug as such. For example, many regular expression engines have pathological behavior when exposed to a combination of certain inputs and certain regular expressions. This is not a code bug since the RE engine is working as designed, but the consequences are similar.)
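
To make that concrete, here is a minimal Python sketch of the classic catastrophic-backtracking case; the pattern and input are illustrative, not anything a real scanner necessarily uses:

import re

# Nested quantifiers force a backtracking engine to try an exponential
# number of ways to split up the 'a's before concluding there is no match.
pattern = re.compile(r'^(a+)+$')

# 29 'a's followed by a 'b' can never match, but proving that can take
# a very long time; each extra 'a' roughly doubles the running time.
print(pattern.match('a' * 29 + 'b'))  # eventually prints None

Feed something shaped like this to a scanner that applies such a pattern to message bodies and it will sit there burning CPU, which from the outside looks just like a hang.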

What this means is that you probably want to think ahead about what you'll do if your scanner system starts malfunctioning at the level of either hanging or crashing when it processes a particular email message. The first step is to think about what might happen with your overall system and what it would look like to your monitoring. What are danger signs that mean something isn't going right in your mail scanning?

Once you've considered the symptoms, you can think about pre-building some mail system features to let you deal with the problem. Two obvious things to consider are documented ways of completely disabling your mail scanner and forcing specific problem messages to bypass the mail scanner. Having somewhat gone through this exercise myself (more than once by now), I can assure you that developing mailer configuration changes on the fly as your mail system is locking up is what they call 'not entirely fun'. It's much better to have this sort of stuff ready to go in advance even if you never turn out to need it.

(Building stuff on the fly to solve your urgent problem can be exciting even as it's nerve-wracking, but heroism is not the right answer.)
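
As an illustration of the bypass idea, here is a hedged sketch assuming a Postfix setup where the scanner is hooked in via content_filter; the 'smtp-scan' transport name is hypothetical, and your mailer and hook point will differ:

# /etc/postfix/main.cf -- normal operation: hand all mail to the scanner
# ('smtp-scan' would be a transport defined in master.cf)
content_filter = smtp-scan:[127.0.0.1]:10025

# Emergency kill switch: an empty value disables the filter entirely.
# Swap the comments and run 'postfix reload'.
# content_filter =

Keeping something like this commented out and documented, and having actually tested the switchover once, is the difference between a one-minute change and an hour of frantic reading while the mail queue backs up.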

At this point you may also want to think about policy issues. If the mail scanner is breaking, do you have permission to get much more aggressive with things like IP blocks in order to prevent dangerous messages from getting in, or is broadly accepting email important enough to your organization to live with the added risks of less or no mail scanning? There's no single right answer here and maybe the final decisions will only be made on the spot, but you and your organization can at least start to consider this now.

by cks at April 29, 2016 04:22 AM

April 28, 2016

Everything Sysadmin

Have you downloaded the March/April issue of acmqueue yet?

The March/April issue of acmqueue - the magazine written for and by software engineers that leaves no corner of the development world unturned - is now available for download.

This issue contains a preview of a chapter from our next book, the 3rd edition of TPOSANA. The chapter is called "The Small Batches Principle". We are very excited to be able to bring you this preview and hope you find the chapter fun and educational. The book won't be out until Oct 7, 2016, so don't miss this opportunity to read it early!

The bimonthly issues of acmqueue are free to ACM Professional members. (One-year subscription cost is $19.99 for non-ACM members.) You can also buy a single issue; see the ACM website for more information.

by Tom Limoncelli at April 28, 2016 03:00 PM

Chris Siebenmann

You should probably track what types of files your users get in email

Most of the time our commercial anti-spam system works fine and we don't have to think about it or maintain it (which is one of the great attractions of using a commercial system for this). Today was not one of those times. This morning, we discovered that some incoming email messages we were receiving made its filtering processes hang, spinning at 100% CPU; after a while, this caused all inbound email to stop. More specifically, the dangerous incoming messages appeared to be a burst of viruses or malware in zipped .EXEs.

This is clearly a bug and hopefully it will get fixed, but in the mean time we needed to do something about it. Things like, say, blocking all ZIP files, or all ZIP files with .EXEs in them. As we were talking about this, we realized something important: we had no idea how many ZIP files our users normally get, especially how many (probably) legitimate ones. If we temporarily stopped accepting all ZIP file attachments, how many people would we be affecting? No one, or a lot? Nor did we know what sort of file types are common or uncommon in the ZIP files that our users get (legitimate or otherwise), or what sort of file types users get other than ZIP files. Are people getting mailed .EXEs or the like directly? Are they getting mailed anything other than ZIP files as attachments?

(Well, the answer to that one will be 'yes', as a certain amount of HTML email comes with attached images. But you get the idea.)

Knowing this sort of information is important for the same reason as knowing what TLS ciphers your users are using. Someday you may be in our situation and really want to know if it's safe to temporarily (or permanently) block something, or whether it'll badly affect users. And if something potentially dangerous has low levels of legitimate usage, well, you have a stronger case for preemptively doing something about it. All of this requires knowing what your existing traffic is, rather than having to guess or assume, and for that you need to gather the data.
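
As a sketch of what gathering that data might look like, assuming you can feed it raw messages from a quarantine or an archiving tap (all names here are illustrative), something like this tallies attachment types, including what's inside ZIPs:

import collections, email, io, sys, zipfile

counts = collections.Counter()
for path in sys.argv[1:]:
    with open(path, 'rb') as f:
        msg = email.message_from_binary_file(f)
    for part in msg.walk():
        name = part.get_filename()
        if not name:
            continue
        ext = name.rsplit('.', 1)[-1].lower()
        counts[ext] += 1
        if ext == 'zip':
            data = part.get_payload(decode=True) or b''
            try:
                for inner in zipfile.ZipFile(io.BytesIO(data)).namelist():
                    counts['zip:' + inner.rsplit('.', 1)[-1].lower()] += 1
            except zipfile.BadZipFile:
                counts['zip:unreadable'] += 1

for ext, n in counts.most_common():
    print(n, ext)

Run regularly over whatever message store you have and kept as a time series, even crude numbers like these answer the 'who would we affect?' question before you need the answer in a hurry.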

Getting this sort of data for email does have complications, of course. One of them is that you'd really like to be able to distinguish between legitimate email and known spam in tracking this sort of stuff, because blocking known spam is a lot different than blocking legitimate email. This may require logging things in a way that either directly ties them to spam level information and so on or at least lets you cross-correlate later between different logs. This can affect where you want to do the logging; for example, you might want to do logging downstream of your spam detection system instead of upstream of it.

(This is particularly relevant for us because obviously we now need to do our file type blocking and interception upstream of said anti-spam system. I had been dreaming of ways to make it log information about what it saw going by even if it didn't block things, but now maybe not; it'd be relatively hard to correlate its logs against our anti-spam logs.)

by cks at April 28, 2016 05:37 AM

April 27, 2016

Errata Security

Who's your lawyer? Insights & Wisdom via HBO's Silicon Valley (S.3, E.1)

The company's attorney may be your friend, but they're not your lawyer.  In this guest post, friend of Errata Elizabeth Wharton (@lawyerliz) looks at the common misconception highlighted in this week's Silicon Valley episode.

 
by Elizabeth Wharton


Amidst the usual startup shenanigans and inside-valley-jokes, HBO's Silicon Valley Season 3, Episode 1 contained a sharp reminder: lawyer loyalty runs with the "client," know whether you are the client.   A lawyer hired by a company has an entity as its client, not the individuals or officers of that company.  If you want an attorney then hire your own. 

Silicon Valley Season 3, Episode 1- Setting the Scene (without too many spoilers, I promise)
Upon learning that the board has ousted him from the CEO role and offered him the CTO role instead, startup founder Richard storms into the meeting with two board "friends" in tow (one, Ron, is the company's counsel).  As Richard burns the bridges of the offered CTO position and prepares for his dramatic exit, he turns to Ron and asks if he's ready to leave the meeting.  To Richard's surprise, Ron calmly reminds Richard that he hired Ron to serve as corporate counsel on behalf of the company.  Ron goes further, explaining that their (the company's and Richard's) interests became adverse as soon as Richard included threats of litigation and payback against the board and the company in his epic "From CEO to CTO? I Quit!" rant only a few moments before.  After letting this news sink in, Richard storms off to continue his rant elsewhere.

Insights from Richard's Gaffe (Don't be Richard)
While harsh, Ron's response highlights a commonly overlooked issue in startup companies: a company and each of its founders and officers are treated as separate and distinct from each other.  This distinction, that a company is a separate entity from its individual co-founders, provides the basis for the liability protections and tax benefits of forming the company in the first place.  The layer of insulation between the company and the individuals extends both ways. Once a founder's idea or concept becomes a company, the idea's creator (the founder) and the company no longer share the same interests.  What benefits the individual founder may not be in the best interest of the company and vice-versa. As Richard discovered during the board room exchange, a board of directors must put the interests of the company above personal friendships with one of the company's founders.

Similarly, an attorney hired to form a company or provide corporate counsel for the company represents the company's interests and not those of the individuals who make up the company.  If the parties intend for the attorney to represent an individual co-founder instead of the company then the documents and engagement letter must reflect this distinction.  Understanding the attorney-client relationship and duties of loyalty is key when reviewing a term sheet, operating agreement, or any other agreement between individuals and/or a company.  Perspective and point of view in preparing and interpreting agreed upon terms revolve around the bias of the drafter.  The company's lawyer does not have a duty to point out to an individual founder, officer, or investor if the deal terms that benefit the company overall would in turn diminish an individual's personal interests.  If they're not your lawyer, then you're not the client that they're protecting.

Silicon Valley's Wisdom Takeaway: A lawyer's professional loyalty runs with their client. Even if they called you "Richie" only moments before, when you're not the client then counsel won't join your dramatic exit or  advise you on your next move. 


Elizabeth is a business and policy attorney specializing in information security and unmanned systems.  While Elizabeth is an attorney, nothing in this post is intended as legal advice.  If you need legal advice, get your own lawyer.  (An earlier version of Elizabeth's post first appeared via LinkedIn Pulse but has since been updated and expanded.)

by Elizabeth Wharton (noreply@blogger.com) at April 27, 2016 05:48 PM

Chris Siebenmann

How 'there are no technical solutions to social problems' is wrong

One of the things that you will hear echoing around the Internet is the saying that there are no technical solutions to social problems. This is sometimes called 'Ranum's Law', where it's generally phrased as 'you can't fix people problems with software' (cf). Years ago you probably could have found me nodding along sagely to this and full-heartedly agreeing with it. However, I've changed; these days, I disagree with the spirit of the saying.

It is certainly true you cannot outright solve social problems with technology (well, almost all of the time). Technology is not that magical, and the social is more powerful than the technical barring very unusual situations. And in general social problems are wicked problems, and those are extremely difficult to tackle in general. This is an important thing to realize, because social problems matter and computing has a great tendency to either ignore them outright or assume that our technology will magically solve them for us.

However, the way that this saying is often used is for technologists to wash their hands of the social problems entirely, and this is a complete and utter mistake. It is not true that technical measures are either useless or socially neutral, because the technical is part of the world and so it basically always affects the social. In practice, in reality, technical features often strongly influence social outcomes, and it follows that they can make social problems more or less likely. That social problems matter means that we need to explicitly consider them when building technical things.

(The glaring example of this is all the various forms of spam. Spam is a social problem, but it can be drastically enabled or drastically hindered by all sorts of technical measures and so sensible modern designers aggressively try to design spam out of their technical systems.)

If we ignore the social effects of our technical decisions, we are doing it wrong (and bad things usually ensue). If we try to pretend that our technical decisions do not have social ramifications, we are either in denial or fools. It doesn't matter whether we intended the social ramifications or didn't think about them; in either case, we may rightfully be at least partially blamed for the consequences of our decisions. The world does not care why we did something, all it cares about is what consequences our decisions have. And our decisions very definitely have (social) consequences, even for small and simple decisions like refusing to let people change their login names.

Ranum's Law is not an excuse to live in a rarefied world where all is technical and only technical, because such a rarefied world does not exist. To the extent that we pretend it exists, it is a carefully cultivated illusion. We are certainly not fooling other people with the illusion; we may or may not be fooling ourselves.

(I feel I have some claim to know what the original spirit of the saying was because I happened to be around in the right places at the right time to hear early versions of it. At the time it was fairly strongly a 'there is no point in even trying' remark.)

by cks at April 27, 2016 03:51 AM

Errata Security

My next scan

So starting next week, and running for a week, I plan on scanning ports 0-65535 (TCP). Each probe will be a completely random selection of IP+port. The purpose is to answer the question of which open ports are most common.

Scanning every port on every address would take a couple of years, so I'm not going to do that. But scanning for a week should give me a good statistical sampling of 1% of the total possible combinations.

Specifically, the scan will open a connection and wait a few seconds for a banner. Protocols like FTP, SSH, and VNC reply first with data, before you send requests. Doing this should find such things lurking at odd ports. We know that port 22 is the most common for SSH, but what is the second most common?

Then, if I get no banner in response, I'll send an SSL "Hello" message. We know that port 443 is the most common SSL port, but what is the second most common?

In other words, by waiting for SSH, then sending SSL, I'll find SSH even if it's on the (wrong) port of 443, and I'll find SSL even if it's on port 22. And all other ports, too.
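
For illustration, here is the probe logic sketched in Python; this is just to show the order of operations, not the actual scanner:

import socket, ssl

def probe(ip, port, timeout=5):
    s = socket.create_connection((ip, port), timeout=timeout)
    try:
        banner = s.recv(256)  # FTP, SSH, VNC and friends speak first
        if banner:
            return ('banner', banner)
    except socket.timeout:
        pass
    # Nothing volunteered: try a TLS ClientHello on the same connection
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        return ('tls', ctx.wrap_socket(s).version())
    except (ssl.SSLError, OSError):
        return ('open', None)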

Anyway, I point this out because people will start to see a lot of strange things in their logs. Also, I'm hoping that people will have suggestions for additional things to do during the scan before I start it.

Update: I'll be scanning from addresses between 209.126.230.70 and 209.126.230.78.



BTW, yes '0' is a valid port.

BTW, numbers larger than 65535 or smaller than 0 (negative numbers) aren't valid -- but they'll work in most applications because they simply use the lower 16 bits of any number they're given. Thus, port number -1 is just 65535, and port number 65536 is the same as 0.
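
The truncation is easy to demonstrate (a Python sketch; C code casting to uint16_t behaves the same way):

def to_port(n):
    return n & 0xffff  # keep only the low 16 bits, as many apps do

print(to_port(-1))     # 65535
print(to_port(65536))  # 0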


by Robert Graham (noreply@blogger.com) at April 27, 2016 12:14 AM

Raymii.org

Build a FreeBSD 10.3-release Openstack Image with bsd-cloudinit

We are going to prepare a FreeBSD image for Openstack deployment. We do this by creating a FreeBSD 10.3-RELEASE instance, installing it and converting it using bsd-cloudinit. We'll use the CloudVPS public Openstack cloud for this. We'll be using the Openstack command line tools, like nova, cinder and glance.

April 27, 2016 12:00 AM

April 26, 2016

Racker Hacker

Talk Recap: Automated security hardening with OpenStack-Ansible

Today is the second day of the OpenStack Summit in Austin and I offered up a talk on host security hardening in OpenStack clouds. The video should be available sometime soon, but you can download the slides now.

Here’s a quick recap of the talk and the conversations afterward:

Security tug-of-war

Information security is a challenging task, mainly because it is more than just a technical problem. Technology is a big part of it, but communication, culture, and compromise are also critical. I flashed up this statement on the slides:

In the end, the information security teams, the developers and the auditors must be happy. This can be a challenging tightrope to walk, but automating some security allows everyone to get what they want in a scalable and repeatable way.

Meeting halfway

The openstack-ansible-security role allows information security teams to meet developers or OpenStack deployers halfway. It can easily bolt onto existing Ansible playbooks and manage host security hardening for Ubuntu 14.04 systems. The role works just as well in non-OpenStack environments. All of the documentation, configuration, and Ansible tasks are included with the role.

The role itself applies security configurations to each host in an environment. Those configurations are based on the Security Technical Implementation Guide (STIG) from the Defense Information Systems Agency (DISA), which is part of the United States Department of Defense. The role takes the configurations from the STIG and makes small tweaks to fit an OpenStack environment. All of the tasks are carefully translated from the STIG for Red Hat Enterprise Linux 6 (there is no STIG for Ubuntu currently).

The role is available now as part of OpenStack-Ansible in the Liberty, Mitaka, and Newton releases. Simply adjust apply_security_hardening from false to true and deploy. For other users, the role can easily be used in any Ansible playbook. (Be sure to review the configuration to ensure its defaults meet your requirements.)
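
For the standalone case, a minimal playbook might look like the sketch below; the paths and the toggle location are from memory, so check the role's documentation:

# site.yml -- apply the hardening role outside of OpenStack-Ansible
- hosts: all
  become: yes
  roles:
    - openstack-ansible-security

# In an OpenStack-Ansible deployment, the toggle lives in
# /etc/openstack_deploy/user_variables.yml:
#   apply_security_hardening: true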

Getting involved

We need your help! Upcoming plans include Ubuntu 16.04 and CentOS support, a rebase onto the RHEL 7 STIG (which will be finalized soon), and better reporting.

Join us later this week for the OpenStack-Ansible design summit sessions or anytime on Freenode in #openstack-ansible. We're on the OpenStack development mailing list as well (be sure to use the [openstack-ansible][security] tags).

Hallway conversations

Lots of people came by to chat afterwards and offered to join in the development. A few people were hoping it would have been the security “silver bullet”, and I reset some expectations.

Some attendees had good ideas around making the role more generic and adding an "OpenStack switch" that would configure many variables to fit an OpenStack environment. That would allow people to use it easily with non-OpenStack environments.

Other comments were around hardening inside of Linux containers. These users had “heavy” containers where the entire OS is virtualized and multiple processes might be running at the same time. Some of the configuration changes (especially the kernel tunables) don’t make sense inside a container like that, but many of the others could be useful. For more information on securing Linux containers, watch the video from Thomas Cameron’s talk here at the summit.

Thank you

I’d like to thank everyone for coming to the talk today and sharing their feedback. It’s immensely useful and I pile all of that feedback into future talks. Also, I’d like to thank all of the people at Rackspace who helped me review the slides and improve them.

The post Talk Recap: Automated security hardening with OpenStack-Ansible appeared first on major.io.

by Major Hayden at April 26, 2016 09:19 PM

Everything Sysadmin

Watch us live today! LISA Conversations Episode 9: kc claffy on "Named Data Networking"

Today (Tuesday, April 26, 2016) we'll be recording episode #9 of LISA Conversations. Join the Google Hangout and submit questions live via this link.

Our guest will be kc claffy. We'll be discussing her talk Named Data Networking from LISA '15.

The recorded episode will be available shortly afterwards on YouTube.

You won't want to miss this!

by Tom Limoncelli at April 26, 2016 03:00 PM

Electricmonk.nl

Ansible-cmdb v1.14: Generate a host overview of Ansible facts.

I've just released ansible-cmdb v1.14. Ansible-cmdb takes the output of Ansible's fact gathering and converts it into a static HTML overview page containing system configuration information. It supports multiple templates and extending information gathered by Ansible with custom data.
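
Typical usage is a two-step affair, roughly as follows (the output paths are illustrative):

# 1. Have Ansible gather facts from all hosts into a directory
ansible -m setup --tree out/ all

# 2. Render the collected facts into a static HTML overview
ansible-cmdb out/ > overview.html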

This release includes the following bugfixes and feature improvements:

  • Look for ansible.cfg and use hostfile setting.
  • html_fancy: Properly sort vcpu and ram columns.
  • html_fancy: Remember which columns the user has toggled.
  • html_fancy: display groups and hostvars even if no host information was collected.
  • html_fancy: support for facter and custom facts.
  • html_fancy: Hide sections if there are no items for it.
  • html_fancy: Improvements in the rendering of custom variables.
  • Apply Dynamic Inventory vars to all hostnames in the group.
  • Many minor bugfixes.

As always, packages are available for Debian, Ubuntu, Redhat, Centos and other systems. Get the new release from the Github releases page.

by admin at April 26, 2016 01:30 PM

April 25, 2016

The Geekess

White Corporate Feminism

When I first went to the Grace Hopper Celebration of Women in Computing conference, it was magical. Being a woman in tech means I’m often the only woman in a team of male engineers, and if there’s more than one woman on a team, it’s usually because we have a project manager or marketing person who is a woman.

Going to the Grace Hopper conference, and being surrounded by women engineers and women computer science students, allowed me to relax in a way that I could never do in a male-centric space. I could talk with other women who just understood things like the glass ceiling and having to be constantly on guard in order to “fit in” with male colleagues. I had crafted a persona, an armor of collared shirts and jeans, trained myself to interrupt in order to make my male colleagues listen, and lied to myself and others that I wasn’t interested in “girly” hobbies like sewing or knitting. At Grace Hopper, surrounded by women, I could stop pretending, and try to figure out how to just be myself. To take a breath, stop interrupting, and cherish the fact that I was listened to and given space to listen to others.

However, after a day or so, I began to feel uneasy about two particular aspects of the Grace Hopper conference. I felt uneasy watching how aggressively the corporate representatives at the booths tried to persuade the students to join their companies. You couldn’t walk into the ballroom for keynotes without going through a gauntlet of recruiters. When I looked around the ballroom at the faces of the women surrounding me, I realized the second thing that made me uneasy. Even though Grace Hopper was hosted in Atlanta that year, a city that is 56% African American, there weren’t that many women of color attending. We’ve also seen the Grace Hopper conference feature more male keynote speakers, which is problematic when the goal of the conference is to allow women to connect to role models that look like them.

When I did a bit of research for this blog post, I looked at the board member list for Anita Borg Institute, who organizes the Grace Hopper Conference. I was unsurprised to see major corporate executives hold the majority of Anita Borg Institute board seats. However, I was curious why the board member page had no pictures on it. I used Google Image search in combination with the board member’s name and company to create this image:
[Image: collage of the Anita Borg Institute board members]

My unease was recently echoed by Cate Huston, who also noticed the trend towards corporations trying to co-opt women’s only spaces to feed women into their toxic hiring pipeline. Last week, I also found this excellent article on white feminism, and how white women need to allow people of color to speak up about the problematic aspects of women-only spaces. There was also an interesting article last week about how “women’s only spaces” can be problematic for trans women to navigate if they don’t “pass” the white-centric standard of female beauty. The article also discusses that by promoting women-only spaces as “safe”, we are unintentionally promoting the assumption that women can’t be predators, unconsciously sending the message to victims of violent or abusive women that they should remain silent about their abuse.

So how to do we expand women-only spaces to be more inclusive, and move beyond white corporate feminism? It starts with recognizing the problem often lies with the white women who start initiatives, and fail to bring in partners who are people of color. We also need to find ways to fund inclusive spaces and diversity efforts without big corporate backers.

We also need to take a critical look at how well-meaning diversity efforts often center around improving tech for white women. When you hear a white male say, “We need more women in X community,” take a moment to question them on why they’re so focused on women and not also bringing in more people of color, people who are differently abled, or LGBTQ people. We need to figure out how to expand the conversation beyond white women in tech, both in external conversations, and in our own projects.

One of the projects I volunteer for is Outreachy, a three-month paid internship program to increase diversity in open source. In 2011, the coordinators were told the language around encouraging “only women” to apply wasn’t trans-inclusive, so they changed the application requirements to clarify the program was open to both cis and trans women. In 2013, they clarified that Outreachy was also open to trans men and gender queer people. Last year, we wanted to open the program to men who were traditionally underrepresented in tech. After taking a long hard look at the statistics, we expanded the program to include all people in the U.S. who are Black/African American, Hispanic/Latin@, American Indian, Alaska Native, Native Hawaiian, or Pacific Islander. We want to expand the program to additional people who are underrepresented in tech in other countries, so please contact us if you have good sources of diversity data for your country.

But most importantly, white people need to learn to listen to people of color instead of being a “white savior”. We need to believe people of color’s lived experience, amplify their voices when people of color tell us they feel isolated in tech, and stop insisting “not all white women” when people of color critique a problematic aspect of the feminist community.

Trying to move into more intersectional feminism is one of my goals, which is why I’m really excited to speak at the Richard Tapia Celebration of Diversity in Computing. I hadn’t heard of it until about a year ago (probably because they have less corporate sponsorship and less marketing), but it’s been described to me as “Grace Hopper for people of color”. I’m excited to talk to people about open source and Outreachy, but most importantly, I want to go and listen to people who have lived experiences that are different from mine, so I can promote their voices.

If you can kick in a couple dollars a month to help me cover costs for the conference, please donate on my Patreon. I’ll be writing about the people I meet at Tapia on my blog, so look for a follow-up post in late September!

by sarah at April 25, 2016 04:25 PM

April 23, 2016

That grumpy BSD guy

Does Your Email Provider Know What A "Joejob" Is?

Anecdotal evidence seems to indicate that Google and possibly other mail service providers are either quite ignorant of history when it comes to email and spam, or are applying unsavoury tactics to capture market dominance.

The first inklings that Google had reservations about delivering mail coming from my bsdly.net domain came earlier this year, when I was contacted by friends who have left their email service in the hands of Google, and it turned out that my replies to their messages did not reach their recipients, even when my logs showed that the Google mail servers had accepted the messages for delivery.

Contacting Google about matters like these means you first need to navigate some web forums. In this particular case (I won't give a direct reference, but a search on the likely keywords will turn up the relevant exchange), the denizens of that web forum appeared to be more interested in demonstrating their BOFHishness than actually providing input on debugging and resolving an apparent misconfiguration that was making valid mail disappear without a trace after it had entered Google's systems.

The forum is primarily intended as a support channel for people who host their mail at Google (this becomes very clear when you try out some of the web accessible tools to check domains not hosted by Google), so the only practical result was that I finally set up DKIM signing for outgoing mail from the domain, in addition to the SPF records that were already in place. I'm in fact less than fond of either of these SMTP addons, but there were other channels for contact with my friends anyway, and I let the matter rest there for a while.

If you've read earlier instalments in this column, you will know that I've operated bsdly.net with an email service since 2004 and a handful of other domains from some years before the bsdly.net domain was set up, sharing to varying extents the same infrastructure. One feature of the bsdly.net and associated domains setup is that in 2006, we started publishing a list of known bad addresses in our domains that we use as spamtrap addresses, as well as publishing the blacklist that the greytrapping generates.

Over the years the list of spamtrap addresses -- harvested almost exclusively from records in our logs and greylists of apparent bounces of messages sent with forged From: addresses in our domains -- has grown to a total of 29757 spamtraps, a full 7387 in the bsdly.net domain alone. At the time I'm writing this 31162 hosts have attempted to deliver mail to one of those spamtrap addresses. The exact numbers will likely change by the time you read this -- blacklisted addresses expire 24 hours after last contact, and a few new spamtrap addresses generally turn up each week. With some simple scriptery, we pick them out of logs and greylists as they appear, and sometimes entire days pass without new candidates appearing. For a more general overview of how I run the blacklist, see this post from 2013.

In addition to the spamtrap addresses, the bsdly.net domain has some valid addresses including my own, and I've set up a few addresses for specific purposes (actually aliases), mainly set up so I can filter them into relevant mailboxes at the receiving end. Despite all our efforts to stop spam, occasionally spam is delivered to those aliases too (see eg the ethics of running the traplist page for some amusing examples).

Then this morning a piece of possibly well intended but actually quite clearly unwanted commercial email turned up, addressed to one of those aliases. For no actually good reason, I decided to send an answer to the message, telling them that whoever sold them the address list they were using were ripping them off.

That message bounced, and it turns out that the domain was hosted at Google.

Reading that bounce message is quite interesting, because if you read the page they link to, it looks very much like whoever runs Google Mail doesn't know what a joejob is.

The page, which again is intended mainly for Google's own customers, specifies that you should set up SPF and DKIM for domains. But looking at the headers, the message they reject passes both those criteria:

Received-SPF: pass (google.com: domain of peter@bsdly.net designates 2001:16d8:ff00:1a9::2 as permitted sender) client-ip=2001:16d8:ff00:1a9::2;
Authentication-Results: mx.google.com;
dkim=pass (test mode) header.i=@bsdly.net;
spf=pass (google.com: domain of peter@bsdly.net designates 2001:16d8:ff00:1a9::2 as permitted sender) smtp.mailfrom=peter@bsdly.net
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=bsdly.net; s=x;
h=Content-Transfer-Encoding:Content-Type:In-Reply-To:MIME-Version:Date:Message-ID:From:References:To:Subject; bh=OonsF8beQz17wcKmu+EJl34N5bW6uUouWw4JVE5FJV8=;
b=hGgolFeqxOOD/UdGXbsrbwf8WuMoe1vCnYJSTo5M9W2k2yy7wtpkMZOmwkEqZR0XQyj6qoCSriC6Hjh0WxWuMWv5BDZPkOEE3Wuag9+KuNGd7RL51BFcltcfyepBVLxY8aeJrjRXLjXS11TIyWenpMbtAf1yiNPKT1weIX3IYSw=;

Then for reasons known only to themselves, or most likely due to the weight they assign to some unknown data source, they reject the message anyway.

We do not know what that data source is. But with more than seven thousand registered bogus addresses that have generated bounces, it's likely that the number of invalid bsdly.net From: addresses Google's systems have seen is far larger than the number of valid ones. The actual number of bogus addresses is likely higher, though: in the early days the collection process had enough manual steps that we're bound to have missed some. Valid bsdly.net addresses that do not eventually resolve to a mailbox I read are rare if not entirely non-existent. But the 'bulk mail' classification is bizarre if you even consider checking Received: headers.

The reason Google's systems have most likely seen more bogus bsdly.net From: addresses than valid ones is that, by historical accident, faking sender email addresses in SMTP dialogues is trivial.

Anecdotal evidence indicates that if a domain exists it will sooner or later be used in the from: field of some spam campaign where the messages originate somewhere else completely, and for that very reason the SPF and DKIM mechanisms were specified. I find both mechanisms slightly painful and inelegant, but used in their proper context, they do have their uses.
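
For reference, both mechanisms boil down to DNS TXT records along these lines; this is a generic sketch for example.com with a made-up, truncated key, not bsdly.net's actual records (the headers quoted earlier use selector s=x, so the corresponding real record would live at x._domainkey.bsdly.net):

; SPF: which hosts may send mail claiming to be from this domain
example.com.              IN TXT "v=spf1 mx ip4:192.0.2.10 -all"

; DKIM: public key that receivers use to verify message signatures
x._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq...AQAB"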

For the domains I've administered, we started seeing log entries (and, in the cases where the addresses were actually deliverable, actual bounce messages) for messages that definitely did not originate at our site and never went through our infrastructure, long before bsdly.net was created. We didn't even think about recording those addresses until a practical use for them suddenly appeared with the greytrapping feature in OpenBSD 3.3 in 2003.

A little while after upgrading the relevant systems to OpenBSD 3.3, we had a functional greytrapping system going, and at some point before the 2007 blog post I started publishing the generated blacklist. The rest is, well, what got us to where we are today.

From the data we see here, mail sent with faked sender addresses happens continuously and most likely to all domains, sooner or later. Joejobs that actually hit deliverable addresses happen too. Raw data from a campaign in late 2014 that used my main address as the purported sender is preserved here, collected with a mind to writing an article about the incident and comparing to a similar batch from 2008. That article could still be written at some point, and in the meantime the messages and specifically their headers are worth looking into if you're a bit like me. (That is, if you get some enjoyment out of such things as discovering the mindbogglingly bizarre and plain wrong mail configurations some people have apparently chosen to live with. And unless your base64 sightreading skills are far better than mine, some of the messages may require a bit of massaging with MIME tools to extract text you can actually read.)

Anyone who runs a mail service and bothers even occasionally to read mail server logs will know that joejobs and spam campaigns with fake and undeliverable return addresses happen all the time. If Google's mail admins are not aware of that fact, well, I'll stop right there and refuse to believe that they can be that incompetent.

The question then becomes, why are they doing this? Are they giving other independent operators the same treatment? If this is part of some kind of intimidation campaign (think "sign up for our service and we'll get your mail delivered, but if you don't, delivering to domains that we host becomes your problem"), it seems a less than useful strategy when there are already antitrust probes underway; these things can change direction as discoveries dictate.

Normally I would put oddities like the ones I saw in this case down to a silly misconfiguration, some combination of incompetence and arrogance and, quite possibly, some out of control automation thrown in. But here we are seeing clearly wrong behavior from a company that prides itself on hiring only the smartest people they can find. That doesn't totally rule out incompetence or plain bad luck, but it makes for a very strange episode. (And lest we forget, here is some data on a previous episode involving a large US corporation, spam and general silliness.)

One other interesting question is whether other operators, big and small behave in any similar ways. If you have seen phenomena like this involving Google or other operators, I would like to hear from you by email (easily found in this article) or in comments.

[Opinions offered here are my own and may or may not reflect the views of my employer.]


I will be giving a PF tutorial at BSDCan 2016, and I welcome your questions now that I'm revising the material for that session. See this blog post for some ideas (note that I'm only giving the PF tutorial this time around).

by Peter N. M. Hansteen (noreply@blogger.com) at April 23, 2016 05:56 PM

April 22, 2016

Everything Sysadmin

Reminder: Do your homework for next week's LISA Conversations: kc claffy on "Named Data Networking"

This weekend is a good time to watch the video we'll be discussing on the next episode of LISA conversations: kc claffy's talk from LISA '15 titled Named Data Networking.

  • Homework: Watch her talk ahead of time.

Then you'll be prepared when we record the episode on Tuesday, April 26, 2016 at 3:30-4:30 p.m. Pacific Time. Register (optional) and watch via this link. Watching live makes it possible to participate in the Q&A.

The recorded episode will be available shortly afterwards on YouTube.

You won't want to miss this!

by Tom Limoncelli at April 22, 2016 03:00 PM

Racker Hacker

Lessons learned: Five years of colocation

Back in 2011, I decided to try out a new method for hosting my websites and other applications: colocation. Before that, I used shared hosting, VPS providers (“cloud” wasn’t a popular thing back then), and dedicated servers. Each had their drawbacks in different areas. Some didn’t perform well, some couldn’t recover from failure well, and some were terribly time consuming to maintain.

This post will explain why I decided to try colocation and will hopefully help you avoid some of my mistakes.

Why choose colocation?

For the majority of us, hosting an application involves renting something from another company. This includes tangible things, such as disk space, servers, and networking, as well as the intangible items, like customer support. We all choose how much we rent (and hope our provider does well) and how much we do ourselves.

Colocation usually involves renting space in a rack, electricity, and network bandwidth. In these environments, the customer is expected to maintain the server hardware, perimeter network devices (switches, routers, and firewalls), and all of the other equipment downstream from the provider’s equipment. Providers generally offer good price points for these services since they only need to ensure your rack is secure, powered, and networked.

As an example, a quarter rack in Dallas at a low to mid-range colocation provider can cost between $200-400 per month. That normally comes with a power allotment (mine is 8 amps) and network bandwidth (more on that later). In my quarter rack, I have five 1U servers plus a managed switch. One of those servers acts as my firewall.

Let’s consider a similar scenario at a dedicated hosting provider. I checked some clearance prices in Dallas while writing this post and found pricing for servers which have similar CPUs as my servers, but with much less storage. The pricing for five servers runs about $550/month and we haven’t even considered a switch and firewall yet.

Cost is one factor, but I have some other preferences which push me strongly towards colocation:

  • Customized server configuration: I can choose the quantity and type of components I need
  • Customized networking: My servers run an OpenStack private cloud and I need complex networking
  • Physical access: Perhaps I’m getting old, but I enjoy getting my hands dirty from time to time

If you have similar preferences, the rest of this post should be a good guide for getting started with a colocation environment.

Step 1: Buy quality parts

Seriously — buy quality parts. That doesn’t mean you need to go buy the newest server available from a well-known manufacturer, but you do need to find reliable parts in a new or used server.

I’ve built my environment entirely with Supermicro servers. The X9SCi-LN4F servers have been extremely reliable. My first two came brand new from Silicon Mechanics and I’m a big fan of their products. Look at their Rackform R.133 for something very similar to what I have.

My last three have all come from Mr. Rackables (an eBay seller). They’re used X9SCi-LN4F servers, but they’re all in good condition. I couldn’t beat the price for those servers, either.

Before you buy, do some reading on brand reliability and compatibility. If you plan to run Linux, do a Google search for the model of the server and the version of Linux you plan to run. Also, take a moment to poke around some forums to see what other people think of the server. Go to WebHostingTalk and search for hardware there, too.

Buying quality parts gives you a little extra peace of mind, especially if your colo is a long drive from your home or work.

Step 2: Ask questions

Your first conversation with most providers will be through a salesperson. That’s not necessarily a bad thing since they will give you a feature and pricing overview fairly quickly. However, I like to ask additional questions of the salesperson to learn more about the technical staff.

Here are some good conversation starters for making it past a salesperson to the technical staff:

  • Your website says I get 8 amps of power. How many plugs on the PDU can I use?
  • My applications need IPv6 connectivity. Is it possible to get something larger than a /64?
  • For IPv6 connectivity, will I use DHCP-PD or will you route a larger IPv6 block to me via SLAAC?
  • I’d like to delegate reverse DNS to my own nameservers. Do you support that?
  • If I have a problem with a component, can I ship you a part so you can replace it?

When you do get a response, look for a few things:

  • Did the response come back in a reasonable timeframe? (Keep in mind that you aren’t a paying customer yet.)
  • Did the technical person take the time to fully explain their response?
  • Does the response leave a lot of room for creative interpretation? (If so, ask for clarification.)
  • Does the technical person invite additional questions?

It’s much better to find problems now than after you’ve signed a contract. I’ll talk more about contracts later.

Step 3: Get specific on networking

Look for cities with very diverse network options. Big cities, or cities with lots of datacenters, will usually have good network provider choices and better prices. In Texas, your best bet is Dallas (which is where I host).

I’ve learned that bandwidth measurement is one of the biggest areas of confusion and “creative interpretation” in colocation hosting. Every datacenter I’ve hosted with has handled things differently.

There are four main ways that most colocation providers handle networking:

95th percentile

This is sometimes called burstable billing since it allows you to have traffic bursts without getting charged a lot for it. In the old days, you had to commit to a rate and stick with it (more on that method next). 95th percentile billing allows you to have some spikes during the measurement period without requiring negotiations for a higher transfer rate.

Long story short, this billing method measures your bandwidth at regular intervals and throws out the top 5% of those intervals. This means that some unusual spikes (so long as they total less than 36 hours in a month) won’t force you into a higher committed rate. For most people, this measurement method is beneficial since your spikes are thrown out. However, if you have sustained spikes, this billing method can be painful.
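
The arithmetic is simple enough to sketch (assuming five-minute samples of megabits per second, which is the common setup):

def billable_rate(samples_mbps):
    # Sort the measurement intervals, discard the top 5%,
    # and bill at the highest remaining rate.
    ordered = sorted(samples_mbps)
    return ordered[int(len(ordered) * 0.95) - 1]

# A 30-day month has 8640 five-minute intervals, so the 432 busiest
# intervals -- exactly 36 hours' worth -- are thrown away.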

Committed rate

If you see a datacenter say something like “$15 per megabit”, they’re probably measuring with committed rates. In this model, you choose a rate, like 10 megabits/second, and pay based on that rate. At $15 per megabit, your bandwidth charge would be $150 per month. The actual space in the rack and the power often costs extra.

Some people have good things to say about this billing method, but it seems awful to me. If you end up using 1 megabit per second, you still get billed for 10. If you have some spikes that creep over 10 megabit, even if you stay well under that rate as an average, the datacenter folks will ask you to do a higher commit. That means more money for something you may or may not use.

In my experience, this is a serious red flag. This suggests that the datacenter network probably doesn’t have much extra capacity available within the datacenter, with its providers, or both. You sometimes see this in cities where bandwidth is more expensive. If you find a datacenter that’s totally perfect, but they want to do a committed rate, ask if there’s an option for 95th percentile. If not, I strongly suggest looking elsewhere.

Total bandwidth

Total bandwidth billing is what you normally see from most dedicated server or VPS providers. They will promise a certain transfer allotment and a network port speed. You will often see something like “10TB on a 100mbps port” and that means you can transfer 10TB in a month at 100 megabits per second. They often don’t care how you consume the bandwidth, but if you pass 10TB, you will likely pay overages per gigabyte. (Be sure to ask about the overage price.)

This method is nice because you can watch it like a water meter. You can take your daily transfer, multiply by 30, and see if the transfer allotment is enough. Most datacenters will allow you to upgrade to a gigabit port for a fee and this will allow you to handle spikes a little easier.

For personal colocation, this bandwidth billing method is great. It could be a red flag for larger customers because it gives a hint that the datacenter might be oversubscribed on networking. If every customer wanted to burst to 100 megabit at the same time, there probably isn’t enough network connectivity to allow everyone to get that guaranteed rate.

Bring your own

Finally, you could always negotiate with bandwidth providers and bring your own bandwidth. This comes with its own challenges and it’s probably not worth it for a personal colocation environment. Larger businesses like this method because they often have rates negotiated with providers for their office connectivity and they can often get a good rate within the colocation datacenter.

There are plenty of up-front costs with bringing your own bandwidth provider. Some of those costs may come from the datacenter itself, especially if new cabling is required.

Step 4: Plan for failure

Ensure that spare parts are available at a moment’s notice if something goes wrong. In my case, I keep two extra hard drives in the rack with my servers as well as two sticks of RAM. My colocation datacenter is about four hours away by car, and I certainly don’t want to make an emergency trip there to replace a broken hard drive.

Buying servers with out of band management can be a huge help during an emergency. Getting console access remotely can shave plenty of time off of an outage. The majority of Supermicro servers come with an out of band management controller by default. You can use simple IPMI commands to reboot the server or use the iKVM interface to interact with the console. Many of these management controllers allow you to mount USB drives remotely and re-image a server at any time.
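
If the management controllers are reachable over the network, the day-to-day commands are one-liners (the address and credentials here are placeholders):

# Power-cycle a hung server via its BMC
ipmitool -I lanplus -H 203.0.113.21 -U admin -P secret chassis power cycle

# Attach to the serial console over the network (serial-over-LAN)
ipmitool -I lanplus -H 203.0.113.21 -U admin -P secret sol activate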

Ask the datacenter if they have a network KVM that you could use or rent during an emergency. Be sure to ask about pricing and the time expectations when you request one to be connected to your servers.

Step 5: Contracts and payment

Be sure to read the contract carefully. Pay special attention to how the datacenter handles outages and how you can request SLA credits. Take time to review any sections on what rights you have when they don’t hold up their end of the deal.

As with any contract, find out what happens when the contract ends. Does it auto-renew? Do you keep the same rates? Can you go month to month? Reviewing these sections before signing could save a lot of money later.

Wrapping up

Although cloud hosting has certainly made it easier to serve applications, there are still some people out there that prefer to have more customization than cloud hosting allows. For some applications, cloud hosting can be prohibitively expensive. Some applications don’t tolerate a shared platform and the noisy neighbor issues that come with it.

Colocation can be a challenging, but rewarding, experience. As with anything, you must do your homework. I certainly hope this post helps make that homework a little easier.


Since I know someone will ask me: I host with Corespace and I’ve been with them for a little over a year. They have been great so far and their staff has been friendly in person, via tickets, and via telephone.

Photo credit: Kev (Flickr)

The post Lessons learned: Five years of colocation appeared first on major.io.

by Major Hayden at April 22, 2016 01:30 PM

April 21, 2016

Michael Biven

Our Resistance to Change has Sent Us the Long Way Round to Where We’re Going

Eight years after Mark Mayo wrote about how cloud computing would change our profession, pretty much everything he said still applies, and people are still trying to adapt to the changes he highlighted. The post is primarily about the changes individual systems administrators would be facing, but it also describes how the way we work would need to adapt.

The problem is that on the operations and systems side of web development our overall reaction to this was slow, and I think we confused what the actual goal should be with something that was only temporary.

Configuration Management is the configure; make; make install of infrastructure.

The big change that happened during that time was that the public cloud became a reality and we were able to programmatically interact with the infrastructure. That change opened up the possibility of adopting software development best practices. The reaction to this was better configuration management with tools like Puppet and Chef.

Five years after Mark's post (in 2013), people started working on containers again, along with a new read-only OS. Just as the public cloud was important for bringing an API to interact with and manage an environment, Docker and CoreOS provided the opportunity for early adopters to start thinking about delivering artifacts instead of updating existing things with config management. Immutable infrastructure finally became practical. Years ago we gave up on manually building software and started deploying packages. Configure; make; make install was replaced with rpm, apt-get, or yum.

We stopped deploying a process that would be repeated either manually or scripted and replaced it with the results. That’s what we’ve been missing. We set our sights lower and settled on just more manual work. We would never pull down a diff and do a make install. We would just update to the latest package. Puppet, Chef, Ansible and Salt have held us back.

Wasn’t a benefit of infrastructure as code the idea that we could move the environment into a CI/CD pipeline as a first class citizen?

The build pipelines that have long existed in software development are now really a thing for infrastructure. Testing and a way to create artifacts (Packer and Docker) have been available for a while, but now we have tools (Terraform, Mesos, and Kubernetes) to manage and orchestrate how those things are used.

You better learn to code, because we only need so many people to write documentation.

There are only two things that we all really work on. Communication and dealing with change. The communication can be writing text or writing code. Sometimes the change is planned and sometimes it’s unplanned. But at the end of the day these two things are really the only things we do. We communicate, perform and react to changes.

If you cannot write code then you can’t communicate, perform and react as effectively as you could. I think it’s safe to say this is why Mark highlighted those three points eight years ago.

  1. We have to accept and plan for a future where getting access to bare metal will be a luxury. Clients won’t want to pay for it, and frankly we won’t have the patience or time to deal with waiting more than 5 minutes for any infrastructure soon anyways.
  2. We need to learn to program better, with richer languages.
  3. Perhaps the biggest challenge we’re going to have is one we share with developers: How to deal with a world where everything is distributed, latency is wildly variable, where we have to scale for throughput more than anything else. I believe we’re only just starting to see the need for “scale” on the web and in systems work.

– Mark Mayo, Cloud computing is a sea change - How sysadmins can prepare

April 21, 2016 02:47 PM

Racker Hacker

OpenStack Summit in Austin is almost here!

OpenStack comes home to Austin on Monday for the OpenStack Summit! I will be there with plenty of other Rackers to learn, collaborate, and share our story.

If you’re interested in applying automated security hardening to an OpenStack cloud, be sure to drop by my talk on Tuesday from 11:15 to 11:55 AM. I’ll explain how Rackspace uses the openstack-ansible-security Ansible role to automatically apply hardening standards to servers in an OpenStack private cloud. If you aren’t able to attend, don’t worry. I will post my slides and a summary here on the blog.

Here are the sessions at the top of my list:

If you’re at the Summit and you want to meet, feel free to email me at major@mhtx.net or connect with me on Twitter. I hope it’s a great week!

The post OpenStack Summit in Austin is almost here! appeared first on major.io.

by Major Hayden at April 21, 2016 01:16 PM

April 19, 2016

R.I.Pienaar

Hiera Node Classifier 0.7

A while ago I released a Puppet 4 Hiera based node classifier to see what is next for hiera_include(). This had the major drawback that you couldn’t set an environment with it like with a real ENC since Puppet just doesn’t have that feature.

I’ve released a update to the classifier that now include a small real ENC that takes care of setting the environment based on certname and then boots up the classifier on the node.

Usage


ENCs tend to know only about the certname; you could imagine fetching the most recently seen facts from PuppetDB etc., but I do not really want to assume things about people’s infrastructure. So for now this sticks to supporting classification based on certname only.

It’s really pretty simple, lets assume you are wanting to classify node1.example.net, you just need to have a node1.example.net.yaml (or JSON) file somewhere in a path. Typically this is going to be in a directory environment somewhere but could of course also be a site wide hiera directory.

In it you put:

classifier::environment: development

And this node will form part of that environment. Past that, everything in the previous post just applies, so you make rules or assign classes as normal, and while doing so you have full access to node facts.

The classifier now exposes some extra information to help you determine if the ENC is in use and which file it used to classify the node:

  • $classifier::enc_used – boolean that indicates if the ENC is in use
  • $classifier::enc_source – path to the data file that set the environment. undef when not found
  • $classifier::enc_environment – the environment the ENC is setting

It supports a default environment, which you configure when setting up Puppet to use an ENC, as below.

Configuring Puppet


Configuring Puppet is pretty simple for this:

[main]
node_terminus = exec
external_nodes = /usr/local/bin/classifier_enc.rb --data-dir /etc/puppetlabs/code/hieradata --node-pattern nodes/%%.yaml

Apart from these you can pass --default development to default to that environment instead of production, and you can add --debug /tmp/enc.log to get a bunch of debug output.

The data-dir above is for your classic Hiera single data dir setup, but you can also use globs to support environment data, like --data-dir /etc/puppetlabs/code/environments/*/hieradata. It will then search the entire glob until it finds a match for the certname.
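To see what a given node will get, you can run the ENC script by hand exactly as Puppet would, passing the certname as the final argument (paths and flags as in the configuration above):

/usr/local/bin/classifier_enc.rb --data-dir /etc/puppetlabs/code/hieradata --node-pattern nodes/%%.yaml node1.example.net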

That's really all there is to it; it produces a classification like this:

---
environment: production
classes:
  classifier:
    enc_used: true
    enc_source: /etc/puppetlabs/code/hieradata/node.example.yaml
    enc_environment: production

Conclusion


That's really all there is to it. I think this might hit a high percentage of use cases and brings a key ability to the hiera classifiers. It's a tad annoying that there is no way to do better granularity than per node here; I might come up with something else, but I don't really want to go too deep down that hole.

In the future I'll look at adding a class to install the classifier into some path and configure Puppet; for now that's up to the user. It's shipped in the bin dir of the module.

by R.I. Pienaar at April 19, 2016 03:14 PM

April 18, 2016

Slaptijack

Two Column for Loop in bash

I had this interesting question the other day. Someone had a file with two columns of data in it. They wanted to assign each column to a different variable and then take action using those two variables. Here's the example I wrote for them:

# Iterate over lines (not words) by setting IFS to newline only
IFS=$'\n';
for LINE in $(cat data_file); do
    VARA=$(echo ${LINE} | awk '{ print $1 }')   # first column
    VARB=$(echo ${LINE} | awk '{ print $2 }')   # second column
    echo "VARA is ${VARA}"
    echo "VARB is ${VARB}"
done

The key here is to set the internal field separator ($IFS) to $'\n' so that the for loop iterates on lines rather than words. Then it's simply a matter of splitting each line into individual variables. In this case, I chose to use awk since it's a simple procedure and speed is not really an issue. Long-term, I would probably re-write this using arrays, as sketched below.
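For the record, a minimal sketch of that array-based rewrite might look like this (same data_file; read -r -a splits each line into an array on whitespace, so no IFS fiddling is needed):

while read -r -a COLS; do
    echo "VARA is ${COLS[0]}"
    echo "VARB is ${COLS[1]}"
done < data_file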

by Scott Hebert at April 18, 2016 01:00 PM

April 14, 2016

Errata Security

Defining "Gray Hat"

WIRED has written an article defining “White Hat”, “Black Hat”, and “Gray Hat”. It’s incomplete and partisan.

Black Hats are the bad guys: cybercriminals (like Russian cybercrime gangs), cyberspies (like the Chinese state-sponsored hackers that broke into OPM), or cyberterrorists (ISIS hackers who want to crash the power grid). They may or may not include cybervandals (like some Anonymous activity) that simply defaces websites. Black Hats are those who want to cause damage or profit at the expense of others.

White Hats do the same things as Black Hats, but they are the good guys. They break into networks (as pentesters), but only with permission, when a company/organization hires them to break into their own network. They research the security art, such as vulnerabilities, exploits, and viruses. When they find vulnerabilities, they typically work to fix/patch them. (That you frequently have to apply security updates to your computers/devices is primarily due to White Hats.) They develop products and tools for use by good guys (even though they sometimes can be used by the bad guys). The movie “Sneakers” refers to a team of White Hat hackers.

Gray Hat is anything that doesn’t fit nicely within these two categories. There are many objective meanings. It can sometimes refer to those who break the law but don’t have criminal intent. It can sometimes include the cybervandals, whose activities are more of a prank than a serious enterprise. It can refer to “Search Engine Optimizers” who use unsavory methods to trick search engines like Google into ranking certain pages higher in search results, to generate advertising profits.

But, it’s also used subjectively, to simply refer to activities the speaker disagrees with. Our community has many debates over proper behavior. Those on one side of a debate frequently use Gray Hat to refer to those on the other side of the debate.

The biggest recent debate is “0day sales to the NSA”, which blew up after Stuxnet, and in particular, after Snowden. This is when experts look for bugs/vulnerabilities, but instead of reporting them to the vendor to be fixed (as White Hats typically do), they sell the bugs to the NSA, so the vulnerabilities (called “0days” in this context) can be used to hack computers in intelligence and military operations. Partisans who don’t like the NSA use “Gray Hat” to refer to those who sell 0days to the NSA.

WIRED’s definition is this partisan definition. Kim Zetter has done more to report on Stuxnet than any other journalist, which is why her definition is so narrow.

But Google is your friend. If you search for “Gray Hat” on Google and set the time range to pre-Stuxnet, then you’ll find no use of the term that corresponds to Kim’s definition, despite the term being in widespread use for more than a decade by that point. Instead, you’ll find things like this EFF “Gray Hat Guide”. You’ll also find how L0pht used the term to describe themselves when selling their password cracking tool called “L0phtcrack”, from back in 1998.

Fast forward to today: activists from the EFF and ACLU call 0day sellers “merchants of death”. But those on the other side of the debate point out how the 0days in Stuxnet saved thousands of lives. The US government had decided to stop Iran’s nuclear program, and 0days gave them a way to do that without bombs, assassinations, or a shooting war. Those who engage in 0day sales do so with the highest professional ethics. If that WaPo article about Gray Hats unlocking the iPhone is true, then it’s almost certain it’s the FBI side of things who leaked the information, because 0day sellers don’t. It’s the government that is full of people who forswear their oaths for petty reasons, not those who do 0day research.

The point is, the ethics of 0day sales are a hot debate. Using either White Hat or Gray Hat to refer to 0day sellers prejudices that debate. It reflects your own opinion, not that of the listener, who might choose a different word. The definition by WIRED, and the use of “Gray Hat” in the WaPo article, are obviously biased and partisan.

by Robert Graham (noreply@blogger.com) at April 14, 2016 07:50 AM

April 12, 2016

Raymii.org

IPv6 in a Docker container on a non-ipv6 network

At work and at home my ISPs have native IPv6. I was recently at a client's location where they had no IPv6 at all, and I had to set up and demonstrate an application in a Docker container with IPv6 functionality. They said they had IPv6, but on location it appeared that IPv6 wasn't working. Since IPv6 was required for the demo, the container needed a workaround. This article describes the workaround I used to add IPv6 to a Docker container on a non-IPv6 network.

April 12, 2016 12:00 AM

April 11, 2016

Colin Percival

Write opinionated workarounds

A few years ago, I decided that I should aim for my code to be as portable as possible. This generally meant targeting POSIX; in some cases I required slightly more, e.g., "POSIX with OpenSSL installed and cryptographic entropy available from /dev/urandom". This dedication made me rather unusual among software developers; grepping the source code for the software I have installed on my laptop, I cannot find any other examples of code with strictly POSIX compliant Makefiles, for example. (I did find one other Makefile which claimed to be POSIX-compatible; but in actual fact it used a GNU extension.) As far as I was concerned, strict POSIX compliance meant never having to say you're sorry for portability problems; if someone ran into problems with my standard-compliant code, well, they could fix their broken operating system.

And some people did. Unfortunately, despite the promise of open source, many users were unable to make such fixes themselves, and for a rather large number of operating systems the principle of standards compliance seems to be more aspirational than actual. Given the limits which would otherwise be imposed on the user base of my software, I eventually decided that it was necessary to add workarounds for some of the more common bugs. That said, I decided upon two policies:

  1. Workarounds should be disabled by default, and only enabled upon detecting an afflicted system.
  2. Users should be warned that a workaround is being applied.
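To make those policies concrete, here is a minimal sketch of what they might look like in a POSIX sh build script. The probe and the macro name are entirely hypothetical; this is an illustration, not the author's actual code:

# Probe for a (hypothetical) broken printf; the workaround stays off by default.
if [ "$(printf '%d' 42 2>/dev/null)" != "42" ]; then
    # Policy 2: warn the user that a workaround is being applied.
    echo "WARNING: broken printf detected; enabling workaround" >&2
    # Policy 1: only enabled because an afflicted system was detected.
    CFLAGS="$CFLAGS -DPOSIXFAIL_PRINTF"
fi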

April 11, 2016 10:00 AM

April 10, 2016

Vincent Bernat

Testing network software with pytest and Linux namespaces

Started in 2008, lldpd is an implementation of IEEE 802.1AB-2005 (aka LLDP) written in C. While it contains some unit tests, like many other network-related software of the time, their coverage is pretty poor: they are hard to write because the code is written in an imperative style and tightly coupled with the system. Testing it would require extensive mocking1. While a rewrite (complete or iterative) would help make the code more test-friendly, it would be quite an effort and would likely introduce operational bugs along the way.

To get better test coverage, the major features of lldpd are now verified through integration tests. Those tests leverage Linux network namespaces to setup a lightweight and isolated environment for each test. They run through pytest, a powerful testing tool.

pytest in a nutshell

pytest is a Python testing tool whose primary use is to write tests for Python applications but is versatile enough for other creative usages. It is bundled with three killer features:

  • you can directly use the assert keyword,
  • you can inject fixtures in any test function, and
  • you can parametrize tests.

Assertions

With unittest, the unit testing framework included with Python, and many similar frameworks, unit tests have to be encapsulated into a class and use the provided assertion methods. For example:

class testArithmetics(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 3, 4)

The equivalent with pytest is simpler and more readable:

def test_addition():
    assert 1 + 3 == 4

pytest will analyze the AST and display useful error messages in case of failure. For further information, see Benjamin Peterson’s article.

Fixtures

A fixture is the set of actions performed in order to prepare the system to run some tests. With classic frameworks, you can only define one fixture for a set of tests:

class testInVM(unittest.TestCase):

    def setUp(self):
        self.vm = VM('Test-VM')
        self.vm.start()
        self.ssh = SSHClient()
        self.ssh.connect(self.vm.public_ip)

    def tearDown(self):
        self.ssh.close()
        self.vm.destroy()

    def test_hello(self):
        stdin, stdout, stderr = self.ssh.exec_command("echo hello")
        stdin.close()
        self.assertEqual(stderr.read(), b"")
        self.assertEqual(stdout.read(), b"hello\n")

In the example above, we want to test various commands on a remote VM. The fixture launches a new VM and configures an SSH connection. However, if the SSH connection cannot be established, the fixture will fail and the tearDown() method won’t be invoked. The VM will be left running.

Instead, with pytest, we could do this:

@pytest.yield_fixture
def vm():
    r = VM('Test-VM')
    r.start()
    yield r
    r.destroy()

@pytest.yield_fixture
def ssh(vm):
    ssh = SSHClient()
    ssh.connect(vm.public_ip)
    yield ssh
    ssh.close()

def test_hello(ssh):
    stdin, stdout, stderr = ssh.exec_command("echo hello")
    stdin.close()
    assert stderr.read() == b""
    assert stdout.read() == b"hello\n"

The first fixture will provide a freshly booted VM. The second one will set up an SSH connection to the VM provided as an argument. Fixtures are used through dependency injection: just give their names in the signature of the test functions and fixtures that need them. Each fixture only handles the lifetime of one entity. Whether a dependent test function or fixture succeeds or fails, the VM will always be destroyed in the end.

Parameters

If you want to run the same test several times with a varying parameter, you can dynamically create test functions or use one test function with a loop. With pytest, you can parametrize test functions and fixtures:

@pytest.mark.parametrize("n1, n2, expected", [
    (1, 3, 4),
    (8, 20, 28),
    (-4, 0, -4)])
def test_addition(n1, n2, expected):
    assert n1 + n2 == expected

Testing lldpd

The general plan to test a feature in lldpd is the following:

  1. Setup two namespaces.
  2. Create a virtual link between them.
  3. Spawn a lldpd process in each namespace.
  4. Test the feature in one namespace.
  5. Check with lldpcli that we get the expected result in the other.

Here is a typical test using the most interesting features of pytest:

@pytest.mark.skipif('LLDP-MED' not in pytest.config.lldpd.features,
                    reason="LLDP-MED not supported")
@pytest.mark.parametrize("classe, expected", [
    (1, "Generic Endpoint (Class I)"),
    (2, "Media Endpoint (Class II)"),
    (3, "Communication Device Endpoint (Class III)"),
    (4, "Network Connectivity Device")])
def test_med_devicetype(lldpd, lldpcli, namespaces, links,
                        classe, expected):
    links(namespaces(1), namespaces(2))
    with namespaces(1):
        lldpd("-r")
    with namespaces(2):
        lldpd("-M", str(classe))
    with namespaces(1):
        out = lldpcli("-f", "keyvalue", "show", "neighbors", "details")
        assert out['lldp.eth0.lldp-med.device-type'] == expected

First, the test will be executed only if lldpd was compiled with LLDP-MED support. Second, the test is parametrized. We will execute four distinct tests, one for each role that lldpd should be able to take as an LLDP-MED-enabled endpoint.

The signature of the test has four parameters that are not covered by the parametrize() decorator: lldpd, lldpcli, namespaces and links. They are fixtures. A lot of magic happens in those to keep the actual tests short:

  • lldpd is a factory to spawn an instance of lldpd. When called, it will set up the current namespace (setting up the chroot, creating the user and group for privilege separation, replacing some files to be distribution-agnostic, …), then call lldpd with the additional parameters provided. The output is recorded and added to the test report in case of failure. The module also contains the creation of the pytest.config.lldpd object that is used to record the features supported by lldpd and to skip non-matching tests. You can read fixtures/programs.py for more details.

  • lldpcli is also a factory, but it spawns instances of lldpcli, the client to query lldpd. Moreover, it will parse the output into a dictionary to reduce boilerplate.

  • namespaces is one of the most interesting pieces. It is a factory for Linux namespaces. It will spawn a new namespace or refer to an existing one. It is possible to switch from one namespace to another (with with) as they are context managers. Behind the scenes, the factory maintains the appropriate file descriptors for each namespace and switches to them with setns(). Once the test is done, everything is wiped out as the file descriptors are garbage collected. You can read fixtures/namespaces.py for more details. It is quite reusable in other projects2.

  • links contains helpers to handle network interfaces: creation of virtual ethernet links between namespaces, creation of bridges, bonds and VLANs, etc. It relies on the pyroute2 module. You can read fixtures/network.py for more details.

You can see an example of a test run on the Travis build for 0.9.2. Since each test is correctly isolated, it’s possible to run parallel tests with pytest -n 10 --boxed. To catch even more bugs, both the address sanitizer (ASAN) and the undefined behavior sanitizer (UBSAN) are enabled. In case of a problem, notably a memory leak, the faulty program will exit with a non-zero exit code and the associated test will fail.


  1. A project like cwrap would definitely help. However, it lacks support for Netlink and raw sockets that are essential in lldpd operations. 

  2. There are three main limitations in the use of namespaces with this fixture. First, when creating a user namespace, only root is mapped to the current user. With lldpd, we have two users (root and _lldpd). Therefore, the tests have to run as root. The second limitation is with the PID namespace. It’s not possible for a process to switch from one PID namespace to another. When you call setns() on a PID namespace, only children of the current process will be in the new PID namespace. The PID namespace is convenient to ensure everyone gets killed once the tests are terminated, but you must keep in mind that /proc must be mounted in children only. The third limitation is that, for some namespaces (PID and user), all threads of a process must be part of the same namespace. Therefore, don’t use threads in tests; use the multiprocessing module instead. 

by Vincent Bernat at April 10, 2016 04:22 PM

April 09, 2016

HolisticInfoSec.org

toolsmith #115: Volatility Acuity with VolUtility

Yes, we've definitely spent our share of toolsmith time on memory analysis tools such as Volatility and Rekall, but for good reason. I contend that memory analysis is fundamentally one of the most important skills you'll develop and utilize throughout your DFIR career.
By now you should have read The Art of Memory Forensics. If you haven't, it's money well spent; consider it an investment.
If there is one complaint, albeit a minor one, that analysts might raise specific to memory forensics tools, it's that they're very command-line oriented. While I appreciate this for speed and scripting, there are those among us who prefer a GUI. Who are we to judge? :-)
Kevin Breen's (@kevthehermit) VolUtility is a full-featured web UI for Volatility, which fills a gap that's been on user wishlists for some time now.
When I reached out to Kevin regarding the current state of the project, he offered up a few good tidbits for user awareness.

1. Pull often. The project is still in its early stages and its early life is going to see a lot of tweaks, fixes, and enhancements as he finishes each of them.
2. If there is something that doesn’t work, could be better, or removed, open an issue. Kevin works best when people tell him what they want to see.
3. He's working with SANS to see VolUtility included in the SIFT distribution, and release a Debian package to make it easier to install. Vagrant and Docker instances are coming soon as well. 

The next two major VolUtility additions are:
1. Pre-Select plugins to run on image import.
2. Image Threat Score.

Notifications recently moved from notification bars to the toolbar, and there is now a right click context menu on the plugin output, which adds new features.

Installation

VolUtility installation is well documented on its GitHub site, but for the TLDR readers amongst you, here's the abbreviated version, step by step. This installation guidance assumes Ubuntu 14.04 LTS where Volatility has not yet been installed, nor have tools such as Git or Pip.
Follow this command set verbatim and you should be up and running in no time:
  1. sudo apt-get install git python-dev python-pip
  2. git clone https://github.com/volatilityfoundation/volatility
  3. cd volatility/
  4. sudo python setup.py install
  5. sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
  6. echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list
  7. sudo apt-get update
  8. sudo apt-get install -y mongodb-org
  9. sudo pip install pymongo pycrypto django virustotal-api distorm3
  10. git clone https://github.com/kevthehermit/VolUtility
  11. cd VolUtility/
  12. ./manage.py runserver 0.0.0.0:8000
Point your browser to http://localhost:8000 and there you have it.

VolUtility and an Old Friend

I pulled out an old memory image (hiomalvm02.raw) from September 2011's toolsmith, the issue where we first really explored Volatility; it was version 2.0 back then. :-) This memory image gives us the ability to do a quick comparison of our results from 2011 against a fresh run with VolUtility and Volatility 2.5.

VolUtility will ask you for the path to Volatility plugins and the path to the memory image you'd like to analyze. I introduced my plugins path as /home/malman/Downloads/volatility/volatility/plugins.


The image I'd stashed in Downloads as well, the full path being /home/malman/Downloads/HIOMALVM02.raw.


Upon clicking Submit, cats began loading stuffs. If you enjoy this as much as I do, the Help menu allows you to watch the loading page as often as you'd like.


If you notice any issues, such as the image load hanging, check your console; it will have captured any errors encountered.
On my first run I had not yet installed distorm3; the console view allowed me to troubleshoot the issue quickly.

Now down to business. In our 2011 post using this image, I ran imageinfo, connscan, pslist, pstree, and malfind. I also ran cmdline for good measure via VolUtility. Running plugins in VolUtility is as easy as clicking the associated green arrow for each plugin. The results will accumulate on the toolbar and at the top of the plugin selection pane, while the raw output for each plugin will appear beneath the plugin selection pane when you select View Output under Actions.


Results were indeed consistent with those from 2011, but enhanced by a few features. Imageinfo yielded WinXPSP3x86 as expected, connscan returned 188.40.138.148:80 as our evil IP along with the associated suspect process ID of 1512, and pslist and pstree then confirmed parent processes: the evil emanated from an ill-conceived click via explorer.exe. If you'd like to export your results, it's as easy as selecting Export Output from the Action menu. I did so for pstree, as it is that plugin from whom all blessings flow; the results were written to pstree.csv.


We're reminded that explorer.exe (PID 1512) is the parent of cleansweep.exe (PID 3328), and that cleansweep.exe owns no current threads but is likely the original source of pwn. We're thus reminded to explore (hah!) PID 1512 for information. VolUtility allows you to run commands directly from the Tools Bar; I did so with vol.py -f /home/malman/Downloads/HIOMALVM02.raw malfind -p 1512.


Rather than regurgitate the malfind results as noted in 2011 (you can review those for yourself), I instead used the VolUtility Tools Bar feature Yara Scan Memory. Be sure to follow Kevin's Yara installation guidance if you want to use this feature. Also remember to git pull! Kevin updated the Yara capabilities between the time I started this post and when I ran yarascan. Like he said, pull often. There is a yararules folder now in the VolUtility file hierarchy; I added spyeye.yar, as created from Jean-Philippe Teissier's rule set. Remember, from the September 2011 post, we know that hiomalvm02.raw was taken from a system infected with SpyEye. I then selected Yara Scan Memory from the Tools Bar and pointed it to the just-added spyeye.yar file.


The results were immediate, and many, as expected.
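For reference, the same hunt can be run from the Volatility command line; a sketch, assuming the Volatility 2.x yarascan plugin's --yara-file option and the paths used in this post:

vol.py -f /home/malman/Downloads/HIOMALVM02.raw yarascan --yara-file=spyeye.yar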


You can also use String Search / yara rule from the Tools Bar Search Type field to accomplish similar goals, and you can get very granular with your string searches to narrow down results.
Remember that your sessions will persist thanks to VolUtility's use of MongoDB, so if you pull and then restart VolUtility, you'll quickly be right back where you left off.

In Closing

VolUtility is a great effort, getting better all the time, and I find its convenience irresistible. Kevin's doing fine work here, pull down the project, use it, and offer feedback or contributions. It's a no-brainer for VolUtility to belong in SIFT by default, but as you've seen, building it for yourself is straightforward and quick. Add it to your DFIR utility belt today.
As always, ping me via email or Twitter if you have questions: russ at holisticinfosec dot org or @holisticinfosec.

ACK

Thanks to Kevin (@kevthehermit) Breen for VolUtility and his feedback for this post.

by Russ McRee (noreply@blogger.com) at April 09, 2016 08:45 PM

Sean's IT Blog

What’s New in NVIDIA GRID Licensing

When GRID 2.0 was announced at VMworld 2015, it included a licensing component for the driver and software portions of the product.  NVIDIA has recently revised and simplified the licensing model.  They have also added a subscription-based model for customers that don’t want to buy a perpetual license and pay for support on a yearly basis.

Major Changes

There are a few major changes to the licensing model.  The first is that the Virtual Workstation Extended licensing tier has been deprecated, and the features from this level have been added into the Virtual Workstation licensing tier.  This means that high-end features, such as dedicating an entire GPU to a VM and CUDA support, are now available in the Virtual Workstation licensing tier.

The second major change is a licensing SKU for XenApp and Published Applications.  In the early version of GRID 2.0, licensing support for XenApp and Horizon Published Applications was complicated.  The new model provides for per-user licensing for server-based computing.

The third major change is a change to how the license is enforced.  In the original incarnation of GRID 2.0, a license server was required for utilizing the GRID 2.0 features.  That server handled license enforcement, and if it wasn’t available, or there were no licenses available, the desktops were not usable.  In the latest license revision, the license has shifted to EULA enforcement.  The license server is still required, but it is now used for reporting and capacity planning.

The final major change is the addition of a subscription-based licensing model.  This new model allows organizations to purchase licenses as they need them without having to do a large capital outlay.  The subscription model includes software support baked into the price.  Subscriptions can also be purchased in multi-year blocks, so I can pay for three years at one time.

One major difference between the perpetual and subscription models is what happens when support expires.  In the perpetual model, you own the license: if you allow support to expire, you can still use the features, but you will not be able to get software updates.  In the subscription model, the licensed features are no longer available as soon as the subscription expires. 

The new pricing for GRID 2.0 is:

Name                  Perpetual Licensing   Subscription Licensing (yearly)
Virtual Apps          $20 + $5 SUMS         $10
Virtual PC            $100 + $25 SUMS       $50
Virtual Workstation   $450 + $100 SUMS      $250

Software support for the 1st year is not included in the price of a perpetual license; purchasing the 1st year of support is required when buying perpetual licenses.  A license is also required if you plan to use direct pass-through with a GRID card.


by seanpmassey at April 09, 2016 06:54 PM

April 04, 2016

Cryptography Engineering

What's the matter with PGP?

Image source: @bcrypt.
Last Thursday, Yahoo announced their plans to support end-to-end encryption using a fork of Google's end-to-end email extension. This is a Big Deal. With providers like Google and Yahoo onboard, email encryption is bound to get a big kick in the ass. This is something email badly needs.

So great work by Google and Yahoo! Which is why the following complaint is going to seem awfully ungrateful. I realize this, and I couldn't feel worse about it.

As transparent and user-friendly as the new email extensions are, they're fundamentally just re-implementations of OpenPGP -- and non-legacy-compatible ones, too. The problem with this is that, for all the good PGP has done in the past, it's a model of email encryption that's fundamentally broken.

It's time for PGP to die. 

In the remainder of this post I'm going to explain why this is so, what it means for the future of email encryption, and some of the things we should do about it. Nothing I'm going to say here will surprise anyone who's familiar with the technology -- in fact, this will barely be a technical post. That's because, fundamentally, most of the problems with email encryption aren't hyper-technical problems; they're baked into the cake.

Background: PGP

Back in the late 1980s a few visionaries realized that this new 'e-mail' thing was awfully convenient and would likely be the future -- but that Internet mail protocols made virtually no effort to protect the content of transmitted messages. In those days (and still in these days) email transited the Internet in cleartext, often coming to rest in poorly-secured mailspools.

This inspired folks like Phil Zimmermann to create tools to deal with the problem. Zimmermann's PGP was a revolution. It gave users access to efficient public-key cryptography and fast symmetric ciphers in a package you could install on a standard PC. Even better, PGP was compatible with legacy email systems: it would convert your ciphertext into a convenient ASCII armored format that could be easily pasted into the sophisticated email clients of the day -- things like "mail", "pine" or "the Compuserve e-mail client".

It's hard to explain what a big deal PGP was. Sure, it sucked badly to use. But in those days, everything sucked badly to use. Possession of a PGP key was a badge of technical merit. Folks held key signing parties. If you were a geek and wanted to discreetly share this fact with other geeks, there was no better time to be alive.

We've come a long way since the 1990s, but PGP mostly hasn't. While the protocol has evolved technically -- IDEA replaced BassOMatic, and was in turn replaced by better ciphers -- the fundamental concepts of PGP remain depressingly similar to what Zimmermann offered us in 1991. This has become a problem, and sadly one that's difficult to change.

Let's get specific.

PGP keys suck

Before we can communicate via PGP, we first need to exchange keys. PGP makes this downright unpleasant. In some cases, dangerously so.

Part of the problem lies in the nature of PGP public keys themselves. For historical reasons they tend to be large and contain lots of extraneous information, which makes it difficult to print them on a business card or compare them manually. You can write this off as a quirk of older technology, but even modern elliptic curve implementations still produce surprisingly large keys.
Three public keys offering roughly the same security level. From top-left: (1) Base58-encoded Curve25519 public key used in miniLock. (2) OpenPGP 256-bit elliptic curve public key format. (3a) GnuPG 3,072 bit RSA key and (3b) key fingerprint.
Since PGP keys aren't designed for humans, you need to move them electronically. But of course humans still need to verify the authenticity of received keys, as accepting an attacker-provided public key can be catastrophic.

PGP addresses this with a hodgepodge of key servers and public key fingerprints. These components respectively provide (untrustworthy) data transfer and a short token that human beings can manually verify. While in theory this is sound, in practice it adds complexity, which is always the enemy of security.

Now you may think this is purely academic. It's not. It can bite you in the ass.

Imagine, for example, you're a source looking to send secure email to a reporter at the Washington Post. This reporter publishes his fingerprint via Twitter, which means the most obvious (and recommended) approach is to ask your PGP client to retrieve the key by fingerprint from a PGP key server. On the GnuPG command line this can be done roughly as follows (a sketch, with a hypothetical placeholder for the fingerprint):
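# the keyserver is just an example; substitute the fingerprint from Twitter
gpg --keyserver pgp.mit.edu --recv-keys '<fingerprint>'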


Now let's ignore the fact that you've just leaked your key request to an untrusted server via HTTP. At the end of this process you should have the right key with high reliability. Right?

Except maybe not: if you happen to do this with GnuPG 2.0.18 -- one version off from the very latest GnuPG -- the client won't actually bother to check the fingerprint of the received key. A malicious server (or HTTP attacker) can ship you back the wrong key and you'll get no warning. This is fixed in the very latest versions of GPG but... Oy Vey.

(PGP Key IDs are also pretty terrible, due to their short length and continued support for the broken V3 key format.)
You can say that it's unfair to pick on all of PGP over an implementation flaw in GnuPG, but I would argue it speaks to a fundamental issue with the PGP design. PGP assumes keys are too big and complicated to be managed by mortals, but then in practice it practically begs users to handle them anyway. This means we manage them through a layer of machinery, and it happens that our machinery is far from infallible.

Which raises the question: why are we bothering with all this crap infrastructure in the first place? If we must exchange things via Twitter, why not simply exchange keys? Modern EC public keys are tiny. You could easily fit three or four of them in the space of this paragraph. If we must use an infrastructure layer, let's just use it to shunt all the key metadata around.

PGP key management sucks

(If you can't trust Phil, who can you trust?)
Manual key management is a mug's game. Transparent (or at least translucent) key management is the hallmark of every successful end-to-end secure encryption system. Now often this does involve some tradeoffs -- e.g., the need to trust a central authority to distribute keys -- but even this level of security would be lightyears better than the current situation with webmail.

To their credit, both Google and Yahoo have the opportunity to build their own key management solutions (at least, for those who trust Google and Yahoo), and they may still do so in the future. But today's solutions don't offer any of this, and it's not clear when they will. Key management, not pretty web interfaces, is the real weakness holding back widespread secure email.

(ZRTP authentication string as used in Signal.)
For the record, classic PGP does have a solution to the problem. It's called the "web of trust", and it involves individuals signing each others' keys. I refuse to go into the problems with WoT because, frankly, life is too short. The TL;DR is that 'trust' means different things to you than it does to me. Most OpenPGP implementations do a lousy job of presenting any of this data to their users anyway.

The lack of transparent key management in PGP isn't unfixable. For those who don't trust Google or Yahoo, there are experimental systems like Keybase.io that attempt to tie keys to user identities. In theory we could even exchange our offline encryption keys through voice-authenticated channels using apps like OpenWhisperSystems' Signal. So far, nobody's bothered to do this -- all of these modern encryption tools are islands with no connection to the mainland. Connecting them together represents one of the real challenges facing widespread encrypted communications.

No forward secrecy

Try something: go delete some mail from your Gmail account. Most likely you've just hit the archive button. Presumably you've also permanently wiped your Deleted Items folder. Now make sure you wipe your browser cache and the mailbox files for any IMAP clients you might be running (e.g., on your phone). Do any of your devices use SSD drives? Probably a safe bet to securely wipe those devices entirely. And at the end of all this, Google may still have a copy which could be vulnerable to a law enforcement request or civil subpoena.

(Let's not get into the NSA's collect-it-all policy for encrypted messages. If the NSA is your adversary just forget about PGP.)

Forward secrecy (usually misnamed "perfect forward secrecy") ensures that if you can't destroy the ciphertexts, you can at least dispose of keys when you're done with them. Many online messaging systems like off-the-record messaging use PFS by default, essentially deriving a new key with each message volley sent. Newer 'ratcheting' systems like Trevor Perrin's Axolotl (used by TextSecure) have also begun to address the offline case.

Adding forward secrecy to asynchronous offline email is a much bigger challenge, but fundamentally it's at least possible to some degree. While securing the initial 'introduction' message between two participants may be challenging*, each subsequent reply can carry a new ephemeral key to be used in future communications. However this requires breaking changes to the PGP protocol and to clients -- changes that aren't likely to happen in a world where webmail providers have doubled down on the PGP model.

The OpenPGP format and defaults suck

(If Will Smith looked like this when your cryptography was current, you need better cryptography.)
Poking through a modern OpenPGP implementation is like visiting a museum of 1990s crypto. For legacy compatibility reasons, many clients use old ciphers like CAST5 (a cipher that predates the AES competition). RSA encryption uses padding that looks disturbingly like PKCS#1v1.5 -- a format that's been relentlessly exploited in the past. Key size defaults don't reach the 128-bit security level. MACs are optional. Compression is often on by default. Elliptic curve crypto is (still!) barely supported.

Most of these issues are not exploitable unless you use PGP in a non-standard way, e.g., for instant messaging or online applications. And some people do use PGP this way.

But even if you're using PGP just to send one-off emails to your grandmother, these bad defaults are pointless and unnecessary. It's one thing to provide optional backwards compatibility for that one friend who runs PGP on his Amiga. But few of my contacts do -- and moreover, client versions are clearly indicated in public keys.** Even if these archaic ciphers and formats aren't exploitable today, the current trajectory guarantees we'll still be using them a decade from now. Then all bets are off.

On the bright side, both Google and Yahoo seem to be pushing towards modern implementations that break compatibility with the old. Which raises a different question. If you're going to break compatibility with most PGP implementations, why bother with PGP at all?

Terrible mail client implementations

This is by far the worst aspect of the PGP ecosystem, and also the one I'd like to spend the least time on. In part this is because UX isn't technically PGP's problem; in part because the experience is inconsistent between implementations, and in part because it's inconsistent between users: one person's 'usable' is another person's technical nightmare.

But for what it's worth, many PGP-enabled mail clients make it ridiculously easy to send confidential messages with encryption turned off, to send unimportant messages with encryption turned on, to accidentally send to the wrong person's key (or the wrong subkey within a given person's key). They demand you encrypt your key with a passphrase, but routinely bug you to enter that passphrase in order to sign outgoing mail -- exposing your decryption keys in memory even when you're not reading secure email.

Most of these problems stem from the fact that PGP was designed to retain compatibility with standard (non-encrypted) email. If there's one lesson from the past ten years, it's that people are comfortable moving past email. We now use purpose-built messaging systems on a day-to-day basis. The startup cost of a secure-by-default environment is, at this point, basically an app store download.

Incidentally, the new Google/Yahoo web-based end-to-end clients dodge this problem by providing essentially no user interface at all. You enter your message into a separate box, and then plop the resulting encrypted data into the Compose box. This avoids many of the nastier interface problems, but only by making encryption non-transparent. This may change; it's too soon to know how.

So what should we be doing?

Quite a lot actually. The path to a proper encrypted email system isn't that far off. At minimum, any real solution needs:
  • A proper approach to key management. This could be anything from centralized key management as in Apple's iMessage -- which would still be better than nothing -- to a decentralized (but still usable) approach like the one offered by Signal or OTR. Whatever the solution, in order to achieve mass deployment, keys need to be made much more manageable or else hidden from the user altogether.
  • Forward secrecy baked into the protocol. This should be a pre-condition to any secure messaging system. 
  • Cryptography that post-dates the Fresh Prince. Enough said.
  • Screw backwards compatibility. Securing both encrypted and unencrypted email is too hard. We need dedicated networks that handle this from the start.
A number of projects are already going in this direction. Aside from the above-mentioned projects like Axolotl and TextSecure -- which pretend to be text messaging systems, but are really email in disguise -- projects like Mailpile are trying to re-architect the client interface (though they're sticking with the PGP paradigm). Projects like SMIMP are trying to attack this at the protocol level.*** At least in theory, projects like DarkMail are also trying to adapt text messaging protocols to the email case, though details remain few and far between.

It also bears noting that many of the issues above could, in principle at least, be addressed within the confines of the OpenPGP format. Indeed, if you view 'PGP' to mean nothing more than the OpenPGP transport, a lot of the above seems easy to fix -- with the exception of forward secrecy, which really does seem hard to add without some serious hacks. But in practice, this is rarely all that people mean when they implement 'PGP'. 

Conclusion

I realize I sound a bit cranky about this stuff. But as they say: a PGP critic is just a PGP user who's actually used the software for a while. At this point there's so much potential in this area, and so many opportunities to do better. It's time for us to adopt those ideas and stop looking backwards.

Notes:

* Forward security even for introduction messages can be implemented, though it either requires additional offline key distribution (e.g., TextSecure's 'pre-keys') or else the use of advanced primitives. For the purposes of a better PGP, just handling the second message in a conversation would be sufficient.

** Most PGP keys indicate the precise version of the client that generated them (which seems like a dumb thing to do). However, if you want to add metadata to your key that indicates which ciphers you prefer, you have to use an optional command.
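That optional command is setpref in GnuPG's interactive key editor; a sketch, with a hypothetical key ID:

gpg --edit-key you@example.org
gpg> setpref SHA512 SHA384 AES256 AES192 ZLIB BZIP2 ZIP Uncompressed
gpg> save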

*** Thanks to Taylor Hornby for reminding me of this.

by Matthew Green (noreply@blogger.com) at April 04, 2016 08:18 PM

April 01, 2016

Anton Chuvakin - Security Warrior

Monthly Blog Round-Up – March 2016

Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month:
  1. “Why No Open Source SIEM, EVER?” contains some of my SIEM thinking from 2009. Is it relevant now? Well, you be the judge.  Succeeding with SIEM requires a lot of work, whether you paid for the software or not. BTW, this post has an amazing “staying power” that is hard to explain – I suspect it has to do with people wanting “free stuff” and googling for “open source SIEM” …  [230 pageviews]
  2. “Simple Log Review Checklist Released!” is often at the top of this list – this aging checklist is still a very useful tool for many people. “On Free Log Management Tools” is a companion to the checklist (updated version) [112 pageviews]
  3. “New SIEM Whitepaper on Use Cases In-Depth OUT!” (dated 2010) presents a whitepaper on select SIEM use cases described in depth with rules and reports [using a now-defunct SIEM product]; also see this SIEM use case in depth and this for a more current list of popular SIEM use cases. Finally, see our new 2016 research on security monitoring use cases here! [92 pageviews]
  4. My classic PCI DSS Log Review series is always popular! The series of 18 posts covers a comprehensive log review approach (OK for PCI DSS 3+ as well), useful for building log review processes and procedures, whether regulatory or not. It is also described in more detail in our Log Management book and mentioned in our PCI book (out in its 4th edition!) [84+ pageviews to the main tag]
  5. “Top 10 Criteria for a SIEM?” came from one of the last projects I did when running my SIEM consulting firm in 2009-2011 (for my recent work on evaluating SIEM, see this document [2015 update]) [78 pageviews of total 4113 pageviews to all blog pages]
In addition, I’d like to draw your attention to a few recent posts from my Gartner blog [which, BTW, now has about 3X the traffic of this blog]: 
 
Current research on IR:
Current research on EDR:
Miscellaneous fun posts:

(see all my published Gartner research here)
Also see my past monthly and annual “Top Popular Blog Posts” – 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015.
Disclaimer: most content at SecurityWarrior blog was written before I joined Gartner on Aug 1, 2011 and is solely my personal view at the time of writing. For my current security blogging, go here.

Previous post in this endless series:

by Anton Chuvakin (anton@chuvakin.org) at April 01, 2016 04:02 PM

March 31, 2016

LZone - Sysadmin

Gerrit Howto Remove Changes

Sometimes you want to delete a Gerrit change to make it invisible to everyone (for example, when you committed an unencrypted secret...). AFAIK this is only possible via the SQL interface, which you can enter with
ssh -p 29418 <gerrit host> gerrit gsql
and issue a delete with:
update changes set status='d' where change_id='<change id>';
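Before flipping the status, it may be worth checking the row from the same gsql session (plain SQL, same table and columns as above):
select change_id, status from changes where change_id='<change id>';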
For more Gerrit hints check out the Gerrit Cheat Sheet

March 31, 2016 10:15 PM

SSH ProxyCommand Examples

Use "ProxyCommand" in your ~/.ssh/config to easily access servers hidden behind port knocking and jump hosts.


Use Gateway/Jumphost

Host unreachable_host
  ProxyCommand ssh gateway_host exec nc %h %p
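With that stanza in place, ordinary commands transparently hop through the gateway (host names as in the example above):

ssh unreachable_host
scp somefile unreachable_host:/tmp/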

Automatic Jump Host Proxying

Host <your jump host>
  ForwardAgent yes
  Hostname <your jump host>
  User <your user name on jump host>

# Note the server list can have wild cards, e.g. "webserver-* database*"
Host <server list>
  ForwardAgent yes
  User <your user name on all these hosts>
  ProxyCommand ssh -q <your jump host> nc -q0 %h 22

Automatic Port Knocking

Host myserver
   User myuser
   Hostname myserver.com
   ProxyCommand bash -c '/usr/bin/knock %h 1000 2000 3000 4000; sleep 1; exec /bin/nc %h %p'

March 31, 2016 08:11 PM

syslog.me

An init system in a Docker container

Here's another quick post about Docker; sorry again if it comes out a bit raw.

In my previous post I talked about my first experiments with Docker. There were a number of unanswered questions at first, which got answers through updates to the blog post during the following days. All but one. When talking about a containerized process that needs to log through syslog to an external server, the post concluded:

if the dockerized process itself needs to communicate with a syslog service “on board”, this may not be enough…

Let’s restate the problem to make it clear:

  • the logs from a docker container are available either through the docker logs command or through logging drivers;
  • the log of a docker container consists of the output of the containerized process to the usual filehandles; if the process has no output, no logs are exported; if the process logs to file, those files must be collected, either by copying them out of the container or through an external volume/filesystem, otherwise they will be lost;
  • if we want a containerized process to send its logs through syslog, and the process expects to have a syslog daemon on board the same host (in this case, the same container), you are in trouble.

Well, OK, maybe "in trouble" is a bit too much. Let's say instead that running more than one process in the same container requires a bit more work than the "one container, one process" case. Putting more than one process in a container is actually so common that it's even covered in the official documentation. That's probably why supervisor seems to be so common among docker users.

Since the solution is explained in the documentation I could well have used it, but I was more intrigued by understanding what a real init system looks like in a container. So rather than just study and slavishly apply the supervisor approach, I decided to research how to run systemd inside a docker container. It turned out it may not be easy or super-safe, but it's definitely possible. This is what I did:

And that worked! According to the logs I actually have cf-serverd and syslog-ng running in the container. For example:

root@tyrrell:/home/bronto# docker run -ti --rm -P --cap-add=SYS_ADMIN -v /sys/fs/cgroup:/sys/fs/cgroup:ro --log-driver=syslog --log-opt tag="poc-systemd" bronto/poc-systemd
systemd 215 running in system mode. (+PAM +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ -SECCOMP -APPARMOR)
Detected virtualization 'other'.
Detected architecture 'x86-64'.

Welcome to Debian GNU/Linux 8 (jessie)!

Set hostname to <ffb5129613a3>.
[ OK ] Reached target Paths.
[ OK ] Created slice Root Slice.
[ OK ] Listening on Delayed Shutdown Socket.
[ OK ] Listening on Journal Socket (/dev/log).
[ OK ] Listening on Journal Socket.
[ OK ] Created slice System Slice.
[ OK ] Listening on Syslog Socket.
[ OK ] Reached target Sockets.
[ OK ] Reached target Slices.
[ OK ] Reached target Swap.
[ OK ] Reached target Local File Systems.
 Starting Create Volatile Files and Directories...
 Starting Journal Service...
[ OK ] Started Journal Service.
[ OK ] Started Create Volatile Files and Directories.
[ OK ] Reached target System Initialization.
[ OK ] Reached target Timers.
[ OK ] Reached target Basic System.
 Starting CFEngine 3 deamons...
 Starting Regular background program processing daemon...
[ OK ] Started Regular background program processing daemon.
 Starting /etc/rc.local Compatibility...
 Starting System Logger Daemon...
 Starting Cleanup of Temporary Directories...
[ OK ] Started /etc/rc.local Compatibility.
[ OK ] Started Cleanup of Temporary Directories.
[ OK ] Started System Logger Daemon.
[ OK ] Started CFEngine 3 deamons.
[ OK ] Reached target Multi-User System.

“I want to try that, too! Can you share your dockerfiles?”

Sure thing, just head out to GitHub and enjoy!

Things to be sorted out

The systemd container doesn't stop properly. When you run docker stop, docker sends a signal, which the container defiantly ignores; docker waits 10 seconds, then kills it. Dunno why yet, but hey! I've only been using this contraption for a few days!!!
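A hedged guess as to why: systemd as PID 1 ignores SIGTERM, which is what docker stop sends by default, and instead treats SIGRTMIN+3 as its halt signal. So asking docker to send that signal instead may let the container shut down cleanly:

# --stop-signal requires Docker 1.9 or later; untested sketch
docker run -ti --rm --stop-signal=SIGRTMIN+3 --cap-add=SYS_ADMIN -v /sys/fs/cgroup:/sys/fs/cgroup:ro bronto/poc-systemd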

References


Tagged: cfengine, Debian, DevOps, docker, Github, Sysadmin, systemd

by bronto at March 31, 2016 03:24 PM

March 30, 2016

Sarah Allen

lessons learned since 1908

I was the only grand-daughter of Frances Clara Kiefer Bragg, who passed away on March 11, just 15 days before her 108th birthday. She was a loving, yet formidable force in the world. As a child, we would often visit her at her home in Cambridge when we lived near Boston, and when we lived in other countries, she would visit us for a few weeks or months. Small lessons were embedded in our interactions, and there was always a proper way to do things. We needed to know how to conduct ourselves in society: when you set the table, the knife points toward the plate; always hang paintings at eye level. There was a right way to make borscht and chicken soup. I learned sewing, cooking, and appreciation of fine art from my grandmother. She told funny stories of her travels and family history, sprinkled with anecdotes of poets, writers and presidents.

As a teenager, I rebelled and stopped doing all of the things I was told to do. I figured maybe one didn’t always need to write a thank you letter – sometimes a phone call would be even better. And also, as a teenager sometimes I just didn’t feel like doing what was right and proper.

Now, reflecting on those early lessons learned from my grandmother, I wonder if the impression stuck with me that whatever my grandmother did was the appropriate and proper thing to do in society. I followed her lead in other ways. It felt appropriate and proper for a young woman to travel alone in Europe, and that one must speak one’s mind, as long as it is done politely.

Somehow it all seemed normal that my Grandma visited my cousin in India and rode on the back of his scooter. I couldn’t find that photo, but I found this one, I think from that same trip…

fran-in-india

Fran spoke French, German and Russian quite fluently at times, along with quite a bit of Spanish and Italian, and random phrases in half a dozen other languages. She lived in the real world of typewriters and tea. She could remember when the ice-box was kept cold with real blocks of ice. She told stories of when her family got a telephone, and when her father proudly drove their first car.

In the 90s, I tried to get her connected to the internet, telling her that she could send email in an instant for $20 per month. She asked, “why would I want to do that, when sending a letter costs just 32 cents? I don’t send that many letters every month.”

While we lived on this earth in some of the same decades, during these years of my adult life, we inhabited quite different worlds. From my world of software and silicon, I recall that people used to say of Steve Jobs that he created a reality distortion field, where people would believe his vision of the future — a future which would otherwise be unrealistic, yet by believing in it, they helped him create it.

I believe Fran had that power to create her own reality distortion field where people are witty, and heroes emerge from mundane events. Memories are versed in rhyme, silliness is interwoven in adult conversation, and children are ambassadors of wisdom. Memories of my grandmother are splashed in vivid watercolor or whimsical strokes of ink. Postcards can share a moment. Words have layers of meaning, which evaporate upon inspection.

sarah-and-fran

The post lessons learned since 1908 appeared first on the evolving ultrasaurus.

by sarah at March 30, 2016 01:21 AM

March 28, 2016

Carl Chenet

On the danger of a non-community actor in your Free Software production chain

Follow me also on Diaspora* or Twitter 

The recent affair now known as npmgate (see below if this event doesn't ring a bell) is, in my view, yet another illustration of the intrinsic danger of using a product owned and/or controlled by a non-community actor in your Free Software production chain. I have already tried several times to warn my fellow Free Software colleagues about this danger, in particular regarding Github in my blog post Le danger Github.

The npmgate affair

As a reminder of the specifics of the npmgate affair: the American company Kik.com asked npm Inc., also an American company, which runs the npmjs.com site and the infrastructure behind npm, the automated Node.js module installer, to rename a module named kik written by Azer Koçulu, a prolific author in that community. He also happens to be the author of an 11-line left-pad function massively used as a dependency in a great many Node.js programs. He had refused kik.com's request.


This company, which for obscure copyright reasons seemed ready to do anything to enforce its silly demand (it uses Free Software itself, by the way: at least jquery, Python, Packer and Jenkins according to its blog, thereby showing its deep gratitude to the community), therefore turned to npm Inc., which, frightened by a possible copyright violation, complied, surely to avoid any legal conflict.

Azer Koçulu, deeply angered at seeing his own copyright and his will trampled, then decided to unpublish all of his modules from npmjs.com, including the heavily used left-pad. Although npmjs.com apparently tried to publish a module with the same name, the interplay of dependencies and versions triggered a chain reaction that broke the builds of a great many projects.

Sourceforge, Github, npmjs.com and… who's next?

Sourceforge, facing a growing need for profitability and the desertion of its users, presumably to Github, stopped respecting the essential prerequisites of a software forge that respects its users.

sourceforge-logo

I presented fairly exhaustively the dangers of using Github in a software production chain in my blog post Le danger Github. In short: centralization. With so many automated installers now fetching their components directly from Github, if Github ever removes a dependency, a chain reaction equivalent to npmgate will follow. It is only a matter of time before it happens.

Other risks include the use of proprietary software such as Github's bug tracker, Github issues; the closed nature and limited portability of the data they host; and the more insidious but increasingly real effect of a kind of community laziness, unprecedented in our community, in seeking free alternatives to existing, hegemonic proprietary products, a feeling shared by others.

github-logo

The danger of the non-community actor

Given the massive growth of interdependencies between projects and the substantial development of Free Software, now joined by yesterday's main proprietary software players such as Microsoft or Apple, who today adopt our practices and try to co-opt them for their own benefit, it is more dangerous than ever to rely on a non-community actor, generally a company, in the chain that produces our software.

microsoft-free.png

It is a seductive model in many respects. Easy, attractive, generally requiring little participation (the goal is to make you a happy user), and often with a higher level of service than a nascent community project can provide, a company investing in a new project, often with heavy marketing, will at first appear to be a good solution.

Unfortunately, things turn out very differently over the medium or long term, as the examples above prove. Investing in the Free Software community, as both a user and a creator of code, ensures a solution that satisfies all parties and, above all, endures over the long term. Which is what every one of us is looking for, since nobody can really predict the success or lifespan of a new software project.


by Carl Chenet at March 28, 2016 09:00 PM

March 26, 2016

Slaptijack

OpenSSH: Using a Bastion Host

Quick and dirty OpenSSH configlet here. If you have a set of hosts or devices that require you to first jump through a bastion host, the following will allow you to run a single ssh command:

Host *
    ProxyCommand ssh -A <bastion_host> nc %h %p

...
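If you'd rather not proxy every connection, the same idea can be scoped to just the hosts behind the bastion (the Host pattern below is hypothetical; on reasonably recent OpenSSH, -W can replace the nc trick):

Host 10.0.0.*
    ProxyCommand ssh -W %h:%p <bastion_host>

With that in ~/.ssh/config, a plain ssh 10.0.0.5 hops through the bastion transparently.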

by Scott Hebert at March 26, 2016 01:00 PM

March 25, 2016

syslog.me

A dockerized policy hub

Docker

This is a quick post; apologies in advance if it comes out a bit raw.

I've been reading about docker for a while and even attended the day of docker in Oslo. I decided it was about time to try something myself, to get a better understanding of the technology and of whether it could be useful for my use cases.

As always, I despise the “hello world” style examples so I leaned immediately towards something closer to a real case: how hard would it be to make CFEngine’s policy hub a docker service? After all it’s just one process (cf-serverd) with all its data (the files in /var/cfengine/masterfiles) which looks like a perfect fit, at least for a realistic test. I went through the relevant parts of the documentation (see “References” below) and I’d say that it pretty much worked and, where it didn’t, I got an understanding of why and how that should be fixed.

Oh, by the way, a run of docker search cfengine will tell you that I'm not the only one to have played with this 😉

Building the image

To build the image I created the following directory structure:

cf-serverd
|-- Dockerfile
`-- etc
    `-- apt
        |-- sources.list.d
        |   `-- cfengine-community.list
        `-- trusted.gpg.d
            `-- cfengine.gpg

The cfengine-community.list and the cfengine.gpg files were created according to the instructions in the page Configuring CFEngine Linux Package Repositories. The Dockerfile is as follows:

FROM debian:jessie
MAINTAINER Marco Marongiu
LABEL os.distributor="Debian" os.codename="jessie" os.release="8.3" cfengine.release="3.7.2"
RUN apt-get update ; apt-get install -y apt-transport-https
ADD etc/apt/trusted.gpg.d/cfengine.gpg /tmp/cfengine.gpg
RUN apt-key add /tmp/cfengine.gpg
ADD etc/apt/sources.list.d/cfengine-community.list /etc/apt/sources.list.d/cfengine-community.list
RUN apt-get update && apt-get install -y cfengine-community=3.7.2-1
ENTRYPOINT [ "/var/cfengine/bin/cf-serverd" ]
CMD [ "-F" ]
EXPOSE 5308

With this Dockerfile, running docker build -t bronto/cf-serverd:3.7.2 . in the build directory will create an image bronto/cf-serverd based on the official Debian Jessie image:

  • we update the APT sources and install apt-transport-https;
  • we copy the APT key into the container and run apt-key add on it (I tried copying it in /etc/apt/trusted.gpg.d directly but it didn’t work for some reason);
  • we add the APT source for CFEngine;
  • we update the APT sources once again and we install CFEngine version 3.7.2 (if you don’t specify the version, it will install the latest highest version available, 3.8.1 to date);
  • we run cf-serverd and we expose the TCP port 5308, the port used by cf-serverd.

Thus, running docker run -d -P bronto/cf-serverd will create a container that exposes cf-serverd on a random port of the host.
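If you need to check which host port Docker picked, docker port will tell you (the container name below is a placeholder for whatever docker ps reports):

docker port <container_id_or_name> 5308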

Update: having defined an entry point in the cf-serverd image, you may have trouble running a shell in the container to take a look around. There is a solution, and it is simple indeed:

docker run -ti --entrypoint /bin/bash bronto/cf-serverd:3.7.2

What you should add

[Update: in the first edition of this post this section was a bit different; some of the things I had doubts about are now solved and the section has been mostly rewritten]

The cf-serverd image (and the containers generated from it) is functional but doesn't do anything useful, for two reasons:

  1. it doesn’t have a file /var/cfengine/inputs/promises.cf, so cf-serverd doesn’t have a proper configuration
  2. it doesn’t have anything but the default masterfiles in /var/cfengine/masterfiles.

Both of those (a configuration for cf-serverd and a full set of masterfiles) should be added in images built upon this one, so that the new image is fully configured for your specific needs and environment. Besides, you should ensure that /var/cfengine/ppkeys is persistent, e.g. by mounting it in the container from the docker host, so that neither the “hub’s” nor clients’ public keys are lost when you replace the container with a new one. For example, you could have the keys on the docker host at /var/local/cf-serverd/ppkeys, and mount that on the container’s volume via docker run’s -v option:

docker run -p 5308:5308 -d -v/var/local/cf-serverd/ppkeys:/var/cfengine/ppkeys bronto/cf-serverd:3.7.2

A possible deployment workflow

What’s normally done in a standard CFEngine set-up is that policy updates are deployed on the policy hub in /var/cfengine/masterfiles and the hub distributes the updates to the clients. In a dockerized scenario, one would probably build a new image based on the cf-serverd image, adding an updated set of masterfiles and a configuration for cf-serverd, and then instantiate a container from the new image. You would do that every time the policy is updated.
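As a rough sketch of such a derived image (the file names and layout here are assumptions about your policy repository, not something the base image mandates):

FROM bronto/cf-serverd:3.7.2
# Hypothetical: bake the site policy and the cf-serverd configuration into the image
ADD masterfiles /var/cfengine/masterfiles
ADD promises.cf /var/cfengine/inputs/promises.cf

Rebuilding this image and replacing the running container would then be the whole deployment step.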

Things to be sorted out

[Update: This section, too, went through a major rewriting after I’ve learnt more about the solutions.]

Is the source IP of a client rewritten?

Problem statement: I don’t know enough about Docker’s networking so I don’t know if the source IP of the clients is rewritten. In that case, cf-serverd’s ACLs would be completely ineffective and a proper firewall should be established on the host.

Solution: the address is rewritten if the connection is made through localhost (the source address appears as the host address on the docker bridge); if the connection is made through a “standard” interface, the address is not rewritten, which is good from the perspective of cf-serverd as that allows it to apply the standard ACLs.

Can logs be sent to syslog?

Problem statement: How can one send the logs of a containerized process to a syslog server?

Solution: a docker container can, by itself, send logs to a syslog server, as well as to other logging platforms and destinations, through logging drivers. In the case of our cf-serverd container, the following command line sends the logs to a standard syslog service on the docker host:

docker run -P -d --log-driver=syslog --log-opt tag="cf-serverd" -v /var/cfengine/ppkeys:/var/cfengine/ppkeys bronto/cf-serverd:3.7.2

This produces log lines like, for example:

Mar 29 15:10:57 tyrrell docker/cf-serverd[728]: notice: Server is starting...

Notice how the tag is reflected in the process name: without that tag, you’d get the short ID of the container, that is something like this:

Mar 29 11:15:27 tyrrell docker/a4f332cfb5e9[728]: notice: Server is starting...

You can also send logs directly to journald on systemd-based systems. You gain some flexibility in that you can filter logs via metadata. However, to my knowledge, it's not possible to get a clear log message like the first one above, one that clearly states you have cf-serverd running in docker: with journald, all you get when running journalctl --follow _SYSTEMD_UNIT=docker.service is something like:

Mar 29 15:22:02 tyrrell docker[728]: notice: Server is starting...

However, if you started the service in a named container (e.g. with the --name=cf-serverd option) you can still enjoy some cool stuff, e.g. you can see the logs from just that container by running

journalctl --follow CONTAINER_NAME=cf-serverd

Actually, if you ran several containers with the same name at different times, journalctl will return the logs from all of them, which to me is a nice feature. But if all you want is the logs from, say, the currently running container named cf-serverd, you can still filter by CONTAINER_ID. Not bad.

However, if the dockerized process itself needs to communicate with a syslog service “on board”, this may not be enough…

References

 


Tagged: cfengine, Configuration management, Debian, DevOps, docker, Sysadmin

by bronto at March 25, 2016 03:49 PM

March 22, 2016

R.I.Pienaar

A Puppet 4 Hiera Based Node Classifier

When I first wrote Hiera I included a simple little hack called hiera_include() that would do an Array lookup and include everything it found. I only included it because include at the time did not take Array arguments. In time this has become quite widely used, and many people do their node classification using just this and the built-in hierarchical nature of Hiera.
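For reference, the classic pattern amounts to something like this (the classes key name is the widespread convention rather than anything mandated):

node default {
  hiera_include('classes')
}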

I've always wanted to do better though, like maybe write an actual ENC that uses Hiera data keyed on the provided certname? It seemed like the only real win would be the ability to set the node environment from Hiera; I guess that might be valuable enough on its own.

Anyway, I think the ENC interface is really pretty bad and should be replaced by something better. So I’ve had the idea of a Hiera based classifier in my mind for years.

Some time ago Ben Ford made an interesting little hack project that used a set of rules to classify nodes, and this stuck in my mind as quite an interesting approach. I guess it's a bit like the new PE node classifier.

Anyway, so I took this as a starting point and started working on a Hiera based classifier for Puppet 4 – and by that I mean the very very latest Puppet 4, it uses a bunch of the things I blogged about recently and the end result is that the module is almost entirely built using the native Puppet 4 DSL.

Simple list-of-classes based Classification


So first let's take a look at how this replaces/improves on the old hiera_include().

Not really much to be done I am afraid, it’s an array with some entries in it. It now uses the Knockout Prefix features of Puppet Lookup that I blogged about before to allow you to exclude classes from nodes:

So we want to include the sysadmins and sensu classes on all nodes, stick this in your common tier:

# common.yaml
classifier::extra_classes:
 - sysadmins
 - sensu

Then you have some nodes that need some more classes:

# clients/acme.yaml
classifier::extra_classes:
 - acme_sysadmins

At this point it's basically same old same old, but let's see if we had some node that needed Nagios and not Sensu:

# nodes/example.net.yaml
classifier::extra_classes:
 - --sensu
 - nagios

Here we use the knockout prefix of "--" to remove the sensu class and add the nagios one instead. That's already a big win over the old hiera_include(), but to be fair this is just a result of the new Lookup features.

It really gets interesting later when you throw in some rules.

Rule Based Classification


The classifier is built around a set of Classifications, each made up of one or more rules; if the rules match a host, that classification applies to the node. Classifications can include classes and create data.

Here's a sample rule where I want to do some extra special handling of RedHat-like machines, but handle VMs differently from physical machines.

# common.yaml
classifier::rules:
  RedHat VMs:
    match: all
    rules:
      - fact: "%{facts.os.family}"
        operator: ==
        value: RedHat
      - fact: "%{facts.is_virtual}"
        operator: ==
        value: "true"
    data:
      redhat_vm: true
    classes:
      - centos::vm
 
  RedHat:
    rules:
      - fact: "%{facts.os.family}"
        operator: ==
        value: RedHat
    data:
      redhat_os: true
    classes:
      - centos::common

This shows 2 Classifications, one called “RedHat VMs” and one just “RedHat”. You can see the VMs one contains 2 rules and sets match: all, so both have to match.

End result here is that all RedHat machines get centos::common and RedHat VMs also get centos::vm. Additionally 2 pieces of data will be created, a bit redundant in this example but you get the idea.

Using the Classifier


So using the classifier in the basic sense is just like hiera_include():

node default {
  include classifier
}

This will process all the rules and include the resulting classes. It will also expose a bunch of information via this class; the most interesting is $classifier::data, which is a Hash of all the data that the rules emit. But you can also access the included classes via $classifier::classes, and even the whole post-processed classification structure in $classifier::classification. Some others are mentioned in the README.
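As a small sketch of what you can do with that exposed data (the redhat_vm key comes from the rules example earlier; the rest is illustrative):

node default {
  include classifier

  if $classifier::data['redhat_vm'] {
    notice('this node was classified as a RedHat VM')
  }
}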

You can do very impressive Hiera based overrides, here’s an example of adjusting a rule for a particular node:

# clients/acme.yaml
classifier::rules:
  RedHat VMs:
    classes:
      - some::other
    data:
      extra_data: true

This has the result that for this particular client additional data will be produced and additional classes will be included – but only on their RedHat VMs. You can even use the knockout feature here to really adjust the data and classes.

The classes get included automatically for you and if you set classifier::debug you’ll get a bunch of insight into how classification happens.

Hiera Inception


So at this point things are pretty neat, but I wanted both to see how the new Data Provider API looks and to see if I could expose my classifier back to Hiera.

Imagine I am making all these classifications, but with what I showed above it's quite limited because it's just creating data for the $classifier::data hash. What you really want is to create Hiera data and be able to influence Automatic Parameter Lookup.

So a rule like:

# clients/acme.yaml
classifier::rules:
  RedHat:
    data:
      centos::common::selinux: permissive

Here I am taking the earlier RedHat rule and setting centos::common::selinux: permissive; now you want this to be data that the Automatic Parameter Lookup system will use to set the selinux parameter of the centos::common class.
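To make the end-to-end flow concrete, the receiving class might look like this minimal sketch (the parameter name matches the lookup key above; the body is hypothetical):

class centos::common (
  String $selinux = 'enforcing',
) {
  notice("selinux mode requested: ${selinux}")
}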

You can configure your Environment with this hiera.yaml

# environments/production/hiera.yaml
---
version: 4
datadir: "hieradata"
hierarchy:
  - name: "%{trusted.certname}"
    backend: "yaml"
 
  - name: "classification data"
    backend: "classifier"
 
  # ... and the rest

Here I allow node-specific YAML files to override the classifier, and then have a new Data Provider called classifier that exposes the classification back to Hiera. Doing it this way is super important: the priority the classifier has on a site is not a one-size-fits-all choice, and doing it this way means the site admins can decide where in their hierarchy classification sits so it best fits their workflows.

So this is where the inception reference comes in: you extract data from Hiera, process it using the Puppet DSL and expose it back to Hiera. At first thought this is a bit insane, but it works and it's really nice. Basically this lets you take Hiera, something hierarchical in nature, and completely redesign it into a rule based system – or a hybrid.

And you can even test it from the CLI:

% puppet lookup --compile --explain centos::common::selinux
Merge strategy first
  Data Binding "hiera"
    No such key: "centos::common::selinux"
  Data Provider "Hiera Data Provider, version 4"
    ConfigurationPath "environments/production/hiera.yaml"
    Merge strategy first
      Data Provider "%{trusted.certname}"
        Path "environments/production/hieradata/dev2.devco.net.yaml"
          Original path: "%{trusted.certname}"
          No such key: "centos::common::selinux"
      Data Provider "classification data"
        Found key: "centos::common::selinux" value: "permissive"
      Merged result: "permissive"
  Merged result: "permissive"

I hope to also expose here which rule provided this data, like the other lookup explanations do.

Clearly this feature is a bit crazy, so consider this an exploration of what's possible rather than a strong endorsement of this kind of thing :)

Implementation


Implementing this has been pretty interesting; I got to use a lot of the new Puppet 4 features. As I mentioned, all the data processing, iteration and deriving of classes and data is done using the native Puppet DSL; take a look at the functions directory for example.

It also makes use of the new Type system and Type Aliases all over the place to create a strong schema for the incoming data that gets validated at all levels of the process. See the types directory.

The new Modules in Data feature is used to set lookup strategies so that there is no manual calling of lookup(); see the module data.

Writing a Data Provider, i.e. a Hiera Backend for the new lookup system, is pretty nice. I think the APIs around it are still maturing, so this is definitely bleeding edge stuff. You can see the bindings and data provider in the lib directory.

As such this module only really has a hope of working on Puppet 4.4.0 or newer, and I expect it to use new features as they come along.

Conclusion


There’s a bunch more going on, check the module README. It’s been quite interesting to be able to really completely rethink how Hiera data is created and what a modern take on classification can achieve.

With this approach, if you're really not too keen on the hierarchy, you can totally just use this as a rules-based Hiera instead; that's pretty interesting! I wonder what other strategies for creating data could be prototyped like this?

I realise this is very similar to the PE node classifier, but with some additional benefits: being exposed to Hiera via the Data Provider, being something you can commit to git, and being adjustable and overridable using the new Hiera features. I think it will appeal to a different kind of user. But yeah, it's quite similar. Credit to Ben Ford for his original Ruby based implementation of this idea, which I took and iterated on. Regardless, the 'like an iTunes smart list' node classifier isn't exactly a new idea and has been discussed for literally years :)

You can get the module on the forge as ripienaar/classifier and I’d greatly welcome feedback and ideas.

by R.I. Pienaar at March 22, 2016 04:38 PM

Evaggelos Balaskas

Let's Encrypt on Prosody & enable Forward secrecy

Below is my setup to enable Forward secrecy

Generate DH parameters:

# openssl dhparam -out /etc/pki/tls/dh-2048.pem 2048

and then configure your prosody with Let’s Encrypt certificates

VirtualHost "balaskas.gr"

  ssl = {
      key = "/etc/letsencrypt/live/balaskas.gr/privkey.pem";
      certificate = "/etc/letsencrypt/live/balaskas.gr/fullchain.pem";
      cafile = "/etc/pki/tls/certs/ca-bundle.crt";

      # enable strong encryption
      ciphers="EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4";
      dhparam = "/etc/pki/tls/dh-2048.pem";
    }

If you only want to accept TLS connections from clients and servers, change your settings to these:

c2s_require_encryption = true
s2s_secure_auth = true

Check your setup

XMPP Observatory

or check your certificates with openssl:

Server: # openssl s_client -connect balaskas.gr:5269  -starttls xmpp < /dev/null
Client: # openssl s_client -connect balaskas.gr:5222  -starttls xmpp < /dev/null

March 22, 2016 11:14 AM

syslog.me

The future of configuration management – mini talks

On March 7th I was at the DevOps Norway meetup where both Jan Ivar Beddari and me presented an extended version of the ignite talks we held at Config Management Camp in February. The talks were streamed and recorded through Hangouts and the recording is available on YouTube.

The meeting gave me the opportunity to explain in a bit more detail my viewpoint about why we need a completely new design for our configuration management tools. I had tried already in a blog post that caused some amount of controversy and it was good to get a second chance.

I’d recommend you watch both Jan Ivar’s talk and mine, but if you’re interested only in my part then you can check out here:

And don’t forget to check out the slides, both Jan Ivar’s and mine.


Tagged: Configuration management, DevOps, Sysadmin

by bronto at March 22, 2016 09:00 AM

[bc-log]

Create self-managing servers with Masterless Saltstack Minions

Over the past two articles I've described building a Continuous Delivery pipeline for my blog (the one you are currently reading). The first article covered packaging the blog into a Docker container and the second covered using Travis CI to build the Docker image and perform automated testing against it.

While the first two articles covered quite a bit of the CD pipeline there is one piece missing; automating deployment. While there are many infrastructure and application tools for automated deployments I've chosen to use Saltstack. I've chosen Saltstack for many reasons but the main reason is that it can be used to manage both my host system's configuration and the Docker container for my blog application. Before I can start using Saltstack however, I first need to set it up.

I've covered setting up Saltstack before, but for this article I am planning on setting up Saltstack in a Masterless architecture. A setup that is quite different from the traditional Saltstack configuration.

Masterless Saltstack

A traditional Saltstack architecture is based on a Master and Minion design. With this architecture the Salt Master will push desired states to the Salt Minion. This means that in order for a Salt Minion to apply the desired states it needs to be able to connect to the master, download the desired states and then apply them.

A masterless configuration, on the other hand, involves only the Salt Minion. With a masterless architecture the Salt state files are stored locally on the Minion, bypassing the need to connect and download states from a Master. This architecture provides a few benefits over the traditional Master/Minion architecture. The first is removing the need to have a Salt Master server, which will help reduce infrastructure costs, an important consideration as the environment in question is dedicated to hosting a simple personal blog.

The second benefit is that in a masterless configuration each Salt Minion is independent, which makes it very easy to provision new Minions and scale out. The ability to scale out is useful for a blog, as there are times when an article is reposted and traffic suddenly increases. By making my servers self-managing I am able to meet that demand very quickly.

A third benefit is that Masterless Minions have no reliance on a Master server. In a traditional architecture if the Master server is down for any reason the Minions are unable to fetch and apply the Salt states. With a Masterless architecture, the availability of a Master server is not even a question.

Setting up a Masterless Minion

In this article I will walk through how to install and configure Salt in a masterless configuration.

Installing salt-minion

The first step to creating a Masterless Minion is to install the salt-minion package. To do this we will follow the official steps for Ubuntu systems outlined at docs.saltstack.com, which primarily use the Apt package manager to perform the installation.

Importing Saltstack's GPG Key

Before installing the salt-minion package we will first need to import Saltstack's Apt repository key. We can do this with a simple bash one-liner.

# wget -O - https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -
OK

This GPG key will allow Apt to validate packages downloaded from Saltstack's Apt repository.

Adding Saltstack's Apt Repository

With the key imported we can now add Saltstack's Apt repository to our /etc/apt/sources.list file. This file is used by Apt to determine which repositories to check for available packages.

# vi /etc/apt/sources.list

Once the file is open, simply append the following line to the bottom.

deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty main

With the repository defined, we can now update Apt's repository inventory, a step that is required before we can start installing packages from the new repository.

Updating Apt's cache

To update Apt's repository inventory, we will execute the command apt-get update.

# apt-get update
Ign http://archive.ubuntu.com trusty InRelease                                 
Get:1 http://security.ubuntu.com trusty-security InRelease [65.9 kB]           
Get:2 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]             
Get:3 http://repo.saltstack.com trusty InRelease [2,813 B]                     
Get:4 http://repo.saltstack.com trusty/main amd64 Packages [8,046 B]           
Get:5 http://security.ubuntu.com trusty-security/main Sources [105 kB]         
Hit http://archive.ubuntu.com trusty Release.gpg                              
Ign http://repo.saltstack.com trusty/main Translation-en_US                    
Ign http://repo.saltstack.com trusty/main Translation-en                       
Hit http://archive.ubuntu.com trusty Release                                   
Hit http://archive.ubuntu.com trusty/main Sources                              
Hit http://archive.ubuntu.com trusty/universe Sources                          
Hit http://archive.ubuntu.com trusty/main amd64 Packages                       
Hit http://archive.ubuntu.com trusty/universe amd64 Packages                   
Hit http://archive.ubuntu.com trusty/main Translation-en                       
Hit http://archive.ubuntu.com trusty/universe Translation-en                   
Ign http://archive.ubuntu.com trusty/main Translation-en_US                    
Ign http://archive.ubuntu.com trusty/universe Translation-en_US                
Fetched 3,136 kB in 8s (358 kB/s)                                              
Reading package lists... Done

With the above complete we can now access the packages available within Saltstack's repository.

Installing with apt-get

Specifically, we can now install the salt-minion package. To do this we will execute the command apt-get install salt-minion.

# apt-get install salt-minion
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
  dctrl-tools libmysqlclient18 libpgm-5.1-0 libzmq3 mysql-common
  python-dateutil python-jinja2 python-mako python-markupsafe python-msgpack
  python-mysqldb python-tornado python-zmq salt-common
Suggested packages:
  debtags python-jinja2-doc python-beaker python-mako-doc
  python-egenix-mxdatetime mysql-server-5.1 mysql-server python-mysqldb-dbg
  python-augeas
The following NEW packages will be installed:
  dctrl-tools libmysqlclient18 libpgm-5.1-0 libzmq3 mysql-common
  python-dateutil python-jinja2 python-mako python-markupsafe python-msgpack
  python-mysqldb python-tornado python-zmq salt-common salt-minion
0 upgraded, 15 newly installed, 0 to remove and 155 not upgraded.
Need to get 4,959 kB of archives.
After this operation, 24.1 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu/ trusty-updates/main mysql-common all 5.5.47-0ubuntu0.14.04.1 [13.5 kB]
Get:2 http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/ trusty/main python-tornado amd64 4.2.1-1 [274 kB]
Get:3 http://archive.ubuntu.com/ubuntu/ trusty-updates/main libmysqlclient18 amd64 5.5.47-0ubuntu0.14.04.1 [597 kB]
Get:4 http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/ trusty/main salt-common all 2015.8.7+ds-1 [3,108 kB]
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
Processing triggers for ureadahead (0.100.0-16) ...

After a successful installation of the salt-minion package we now have a salt-minion instance running with the default configuration.

Configuring the Minion

With a traditional Master/Minion setup, this is the point where we would configure the Minion to connect to the Master server and restart the running service.

For this setup however, we will be skipping the Master server definition. Instead we need to tell the salt-minion service to look for Salt state files locally. To alter the salt-minion's configuration we can either edit the /etc/salt/minion configuration file, which is the default configuration file, or add a new file into /etc/salt/minion.d/; this .d directory is used to override default configurations defined in /etc/salt/minion.

My personal preference is to create a new file within the minion.d/ directory, as this keeps the configuration easy to manage. However, there is no right or wrong method, as this is a personal and environmental preference.

For this article we will go ahead and create the following file /etc/salt/minion.d/masterless.conf.

# vi /etc/salt/minion.d/masterless.conf

Within this file we will add two configurations.

file_client: local
file_roots:
  base:
    - /srv/salt/base
  bencane:
    - /srv/salt/bencane

The first configuration item above is file_client. By setting this configuration to local we are telling the salt-minion service to search locally for desired state configurations rather than connecting to a Master.

The second configuration is the file_roots dictionary. This defines the location of Salt state files. In the above example we are defining both /srv/salt/base and /srv/salt/bencane. These two directories will be where we store our Salt state files for this Minion to apply.
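For orientation, each of those file_roots environments is expected to carry a top.sls mapping states to minions. A minimal sketch (the state names below are just two of the states shipped in the base repository cloned later in this article):

base:
  '*':
    - ssh
    - timezone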

Stopping the salt-minion service

While in most cases we would need to restart the salt-minion service to apply configuration changes, in this case we actually need to do the opposite: we need to stop the salt-minion service.

# service salt-minion stop
salt-minion stop/waiting

The salt-minion service does not need to be running when set up as a Masterless Minion. This is because the salt-minion service is only running to listen for events from the Master. Since we have no Master there is no reason to keep this service running. If left running, the salt-minion service will repeatedly try to connect to the defined Master server, which by default is a host that resolves to salt. To remove unnecessary overhead, it is best to simply stop this service in a Masterless Minion configuration.

Populating the desired states

At this point we have a Salt Minion that has been configured to run masterless, but one that has no Salt states to apply. In this section we will provide the salt-minion agent two sets of Salt states to apply. The first will be placed into the /srv/salt/base directory. This file_roots directory will contain a base set of Salt states that I have created to manage a basic Docker host.

Deploying the base Salt states

The states in question are available via a public GitHub repository. To deploy these Salt states we can simply clone the repository into the /srv/salt/base directory. Before doing so however, we will need to first create the /srv/salt directory.

# mkdir -p /srv/salt

The /srv/salt directory is Salt's default state directory; it is also the parent directory for both the base and bencane directories we defined within the file_roots configuration. Now that the parent directory exists, we will clone the base repository into it using git.

# cd /srv/salt/
# git clone https://github.com/madflojo/salt-base.git base
Cloning into 'base'...
remote: Counting objects: 50, done.
remote: Total 50 (delta 0), reused 0 (delta 0), pack-reused 50
Unpacking objects: 100% (50/50), done.
Checking connectivity... done.

With the salt-base repository cloned into the base directory, the Salt states within that repository are now available to the salt-minion agent.

# ls -la /srv/salt/base/
total 84
drwxr-xr-x 18 root root 4096 Feb 28 21:00 .
drwxr-xr-x  3 root root 4096 Feb 28 21:00 ..
drwxr-xr-x  2 root root 4096 Feb 28 21:00 dockerio
drwxr-xr-x  2 root root 4096 Feb 28 21:00 fail2ban
drwxr-xr-x  2 root root 4096 Feb 28 21:00 git
drwxr-xr-x  8 root root 4096 Feb 28 21:00 .git
drwxr-xr-x  3 root root 4096 Feb 28 21:00 groups
drwxr-xr-x  2 root root 4096 Feb 28 21:00 iotop
drwxr-xr-x  2 root root 4096 Feb 28 21:00 iptables
-rw-r--r--  1 root root 1081 Feb 28 21:00 LICENSE
drwxr-xr-x  2 root root 4096 Feb 28 21:00 ntpd
drwxr-xr-x  2 root root 4096 Feb 28 21:00 python-pip
-rw-r--r--  1 root root  106 Feb 28 21:00 README.md
drwxr-xr-x  2 root root 4096 Feb 28 21:00 screen
drwxr-xr-x  2 root root 4096 Feb 28 21:00 ssh
drwxr-xr-x  2 root root 4096 Feb 28 21:00 swap
drwxr-xr-x  2 root root 4096 Feb 28 21:00 sysdig
drwxr-xr-x  3 root root 4096 Feb 28 21:00 sysstat
drwxr-xr-x  2 root root 4096 Feb 28 21:00 timezone
-rw-r--r--  1 root root  208 Feb 28 21:00 top.sls
drwxr-xr-x  2 root root 4096 Feb 28 21:00 wget

From the above directory listing we can see that the base directory has quite a few Salt states. These states are very useful for managing a basic Ubuntu system, as they perform steps ranging from installing Docker (dockerio) to setting the system timezone (timezone). Everything needed to run a basic Docker host is available and defined within these base states.

Applying the base Salt states

Even though the salt-minion agent can now use these Salt states, nothing is telling the salt-minion agent it should do so; therefore the desired states are not being applied.

To apply our new base states we can use the salt-call command to tell the salt-minion agent to read the Salt states and apply the desired states within them.

# salt-call --local state.highstate

The salt-call command is used to interact with the salt-minion agent from the command line. In the above, the salt-call command was executed with the state.highstate option.

This tells the agent to look for all defined states and apply them. The salt-call command also included the --local option, which is specifically used when running a Masterless Minion. This flag tells the salt-minion agent to look through its local state files rather than attempting to pull from a Salt Master.

The below shows the results of the execution above, within this output we can see the various states being applied successfully.

----------
          ID: GMT
    Function: timezone.system
      Result: True
     Comment: Set timezone GMT
     Started: 21:09:31.515117
    Duration: 126.465 ms
     Changes:   
              ----------
              timezone:
                  GMT
----------
          ID: wget
    Function: pkg.latest
      Result: True
     Comment: Package wget is already up-to-date
     Started: 21:09:31.657403
    Duration: 29.133 ms
     Changes:   

Summary for local
-------------
Succeeded: 26 (changed=17)
Failed:     0
-------------
Total states run:     26

In the above output we can see that all of the defined states were executed successfully. We can validate this further by checking the status of the docker service, which, as we can see below, is now running; before executing salt-call, Docker was not even installed on this system.

# service docker status
docker start/running, process 11994

With a successful salt-call execution our Salt Minion is now officially a Masterless Minion. However, even though our server has Salt installed, and is configured as a Masterless Minion, there are still a few steps we need to take to make this Minion "Self Managing".

Self-Managing Minions

In order for our Minion to be Self-Managed, the Minion server should not only apply the base states above but also keep the salt-minion service and configuration up to date. To do this, we will be cloning yet another git repository.

Deploying the blog specific Salt states

This second repository contains Salt states used to manage the salt-minion agent, not only for this Minion but for any other Masterless Minion used to host this blog.

# cd /srv/salt
# git clone https://github.com/madflojo/blog-salt bencane
Cloning into 'bencane'...
remote: Counting objects: 25, done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 25 (delta 4), reused 20 (delta 2), pack-reused 0
Unpacking objects: 100% (25/25), done.
Checking connectivity... done.

In the above command we cloned the blog-salt repository into the /srv/salt/bencane directory. Like the /srv/salt/base directory, the /srv/salt/bencane directory is also defined within the file_roots that we set up earlier.

Applying the blog specific Salt states

With these new states copied to the /srv/salt/bencane directory, we can once again run the salt-call command to trigger the salt-minion agent to apply these states.

# salt-call --local state.highstate
[INFO    ] Loading fresh modules for state activity
[INFO    ] Fetching file from saltenv 'base', ** skipped ** latest already in cache u'salt://top.sls'
[INFO    ] Fetching file from saltenv 'bencane', ** skipped ** latest already in cache u'salt://top.sls'
----------
          ID: /etc/salt/minion.d/masterless.conf
    Function: file.managed
      Result: True
     Comment: File /etc/salt/minion.d/masterless.conf is in the correct state
     Started: 21:39:00.800568
    Duration: 4.814 ms
     Changes:   
----------
          ID: /etc/cron.d/salt-standalone
    Function: file.managed
      Result: True
     Comment: File /etc/cron.d/salt-standalone updated
     Started: 21:39:00.806065
    Duration: 7.584 ms
     Changes:   
              ----------
              diff:
                  New file
              mode:
                  0644

Summary for local
-------------
Succeeded: 37 (changed=7)
Failed:     0
-------------
Total states run:     37

Based on the output of the salt-call execution we can see that 37 Salt states were executed successfully, 7 of which resulted in changes. This means that the new Salt states within the bencane directory were applied. But what exactly did these states do?

Understanding the "Self-Managing" Salt states

This second repository has a handful of states that perform various tasks specific to this environment. The "Self-Managing" states are all located within the /srv/salt/bencane/salt directory.

$ ls -la /srv/salt/bencane/salt/
total 20
drwxr-xr-x 5 root root 4096 Mar 20 05:28 .
drwxr-xr-x 5 root root 4096 Mar 20 05:28 ..
drwxr-xr-x 3 root root 4096 Mar 20 05:28 config
drwxr-xr-x 2 root root 4096 Mar 20 05:28 minion
drwxr-xr-x 2 root root 4096 Mar 20 05:28 states

Within the salt directory there are several more directories that define Salt states. To get started, let's look at the minion directory; specifically, the salt/minion/init.sls file.

# cat salt/minion/init.sls
salt-minion:
  pkgrepo:
    - managed
    - humanname: SaltStack Repo
    - name: deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest {{ grains['lsb_distrib_codename'] }} main
    - dist: {{ grains['lsb_distrib_codename'] }}
    - key_url: https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub
  pkg:
    - latest
  service:
    - dead
    - enable: False

/etc/salt/minion.d/masterless.conf:
  file.managed:
    - source: salt://salt/config/etc/salt/minion.d/masterless.conf

/etc/cron.d/salt-standalone:
  file.managed:
    - source: salt://salt/config/etc/cron.d/salt-standalone

Within the minion/init.sls file there are 5 Salt states defined.

Breaking down the minion/init.sls states

Let's break down some of these states to better understand what actions they are performing.

  pkgrepo:
    - managed
    - humanname: SaltStack Repo
    - name: deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest {{ grains['lsb_distrib_codename'] }} main
    - dist: {{ grains['lsb_distrib_codename'] }}
    - key_url: https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub

The first state defined is a pkgrepo state. We can see based on the options that this state is used to manage the Apt repository that we defined earlier. We can also see from the key_url option that even the GPG key we imported earlier is managed by this state.

  pkg:
    - latest

The second state defined is a pkg state. This is used to manage a specific package, specifically in this case the salt-minion package. Since the latest option is present the salt-minion agent will not only install the latest salt-minion package but also keep it up to date with the latest version if it is already installed.

  service:
    - dead
    - enable: False

The third state is a service state. This state is used to manage the salt-minion service. With the dead and enable: False settings specified the salt-minion agent will stop and disable the salt-minion service.

So far these states are performing the same steps we performed manually above. Let's keep breaking down the minion/init.sls file to understand what other steps we have told Salt to perform.

/etc/salt/minion.d/masterless.conf:
  file.managed:
    - source: salt://salt/config/etc/salt/minion.d/masterless.conf

The fourth state is a file state; it deploys the /etc/salt/minion.d/masterless.conf file, which just happens to be the same file we created earlier. Let's take a quick look at the file being deployed to understand what Salt is doing.

$ cat salt/config/etc/salt/minion.d/masterless.conf
file_client: local
file_roots:
  base:
    - /srv/salt/base
  bencane:
    - /srv/salt/bencane

The contents of this file are exactly the same as the masterless.conf file we created in the earlier steps. That means the file being deployed right now matches what is already in place; in the future, if any changes are made to masterless.conf within this git repository, those changes will be deployed on the next state.highstate execution.

/etc/cron.d/salt-standalone:
  file.managed:
    - source: salt://salt/config/etc/cron.d/salt-standalone

The fifth state is also a file state; while it deploys a file as well, the file in question is very different. Let's take a look at this file to understand what it is used for.

$ cat salt/config/etc/cron.d/salt-standalone
*/2 * * * * root su -c "/usr/bin/salt-call state.highstate --local 2>&1 > /dev/null"

The salt-standalone file is an /etc/cron.d based cron job that runs the same salt-call command we ran earlier to apply the local Salt states. In a masterless configuration there is no scheduled task telling the salt-minion agent to apply all of the Salt states; the above cron job takes care of this by simply executing a local state.highstate run every 2 minutes.

Summary of minion/init.sls

Based on the contents of the minion/init.sls we can see how this salt-minion agent is configured to be "Self-Managing". From the above we were able to see that the salt-minion agent is configured to perform the following steps.

  1. Configure the Saltstack Apt repository and GPG keys
  2. Install the salt-minion package or update to the newest version if already installed
  3. Deploy the masterless.conf configuration file into /etc/salt/minion.d/
  4. Deploy the /etc/cron.d/salt-standalone file which deploys a cron job to initiate state.highstate executions

These steps ensure that the salt-minion agent is both configured correctly and applying desired states every 2 minutes.

While the above steps are useful for applying the current states, the whole point of continuous delivery is to deploy changes quickly. To do this we need to also keep the Salt states up-to-date.

Keeping Salt states up-to-date with Salt

One way to keep our Salt states up to date is to tell the salt-minion agent to update them for us.

Within the /srv/salt/bencane/salt directory exists a states directory that contains two files, base.sls and bencane.sls. These two files contain similar Salt states. Let's break down the contents of the base.sls file to understand what actions it tells the salt-minion agent to perform.

$ cat salt/states/base.sls
/srv/salt/base:
  file.directory:
    - user: root
    - group: root
    - mode: 700
    - makedirs: True

base_states:
  git.latest:
    - name: https://github.com/madflojo/salt-base.git
    - target: /srv/salt/base
    - force: True

In the above we can see that the base.sls file contains two Salt states. The first is a file state that ensures the /srv/salt/base directory exists with the defined permissions.

The second state is a bit more interesting as it is a git state which is set to pull the latest copy of the salt-base repository and clone it into /srv/salt/base.

With this state defined, every time the salt-minion agent runs (which is every 2 minutes via the cron.d job), the agent will check the repository for new updates and deploy them to /srv/salt/base.

The bencane.sls file contains similar states, with the difference being the repository cloned and the location to deploy the state files to.

$ cat salt/states/bencane.sls
/srv/salt/bencane:
  file.directory:
    - user: root
    - group: root
    - mode: 700
    - makedirs: True

bencane_states:
  git.latest:
    - name: https://github.com/madflojo/blog-salt.git
    - target: /srv/salt/bencane
    - force: True

At this point, we now have a Masterless Salt Minion that is configured to "self-manage" not only its own packages, but also the Salt state files that drive it.

As the state files within the git repositories are updated, those updates are pulled by each Minion every 2 minutes. Whether that change is adding the screen package or deploying a new Docker container, it is deployed across many Masterless Minions all at once.
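To make that concrete, rolling out a package across the fleet is nothing more than committing a small state like this to one of the repositories (the path is hypothetical; the base repository already ships a screen state of its own):

# e.g. screen/init.sls under one of the file_roots directories
screen:
  pkg:
    - installed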

What's next

With the above steps complete, we now have a method for taking a new server and turning it into a Self-Managed Masterless Minion. What we didn't cover, however, is how to automate the initial installation and configuration.

In next month's article, we will talk about using salt-ssh to automate the first-time installation and configuration of the salt-minion agent using the same Salt states we used today.


Posted by Benjamin Cane

March 22, 2016 07:30 AM

March 21, 2016

Cryptography Engineering

Attack of the Week: Apple iMessage

Today's Washington Post has a story entitled "Johns Hopkins researchers poke a hole in Apple’s encryption", which describes the results of some research my students and I have been working on over the past few months.

As you might have guessed from the headline, the work concerns Apple, and specifically Apple's iMessage text messaging protocol. Over the past months my students Christina Garman, Ian Miers, Gabe Kaptchuk and Mike Rushanan and I have been looking closely at the encryption used by iMessage, in order to determine how the system fares against sophisticated attackers. The results of this analysis include some very neat new attacks that allow us to -- under very specific circumstances -- decrypt the contents of iMessage attachments, such as photos and videos.

The research team. From left:
Gabe Kaptchuk, Mike Rushanan, Ian Miers, Christina Garman
Now before I go further, it's worth noting that the security of a text messaging protocol may not seem like the most important problem in computer security. And under normal circumstances I might agree with you. But today the circumstances are anything but normal: encryption systems like iMessage are at the center of a critical national debate over the role of technology companies in assisting law enforcement.

A particularly unfortunate aspect of this controversy has been the repeated call for U.S. technology companies to add "backdoors" to end-to-end encryption systems such as iMessage. I've always felt that one of the most compelling arguments against this approach -- an argument I've made along with other colleagues -- is that we just don't know how to construct such backdoors securely. But lately I've come to believe that this position doesn't go far enough -- in the sense that it is woefully optimistic. The fact of the matter is that forget backdoors: we barely know how to make encryption work at all. If anything, this work makes me much gloomier about the subject.

But enough with the generalities. The TL;DR of our work is this:
Apple iMessage, as implemented in versions of iOS prior to 9.3 and Mac OS X prior to 10.11.4, contains serious flaws in the encryption mechanism that could allow an attacker -- who obtains iMessage ciphertexts -- to decrypt the payload of certain attachment messages via a slow but remote and silent attack, provided that one sender or recipient device is online. While capturing encrypted messages is difficult in practice on recent iOS devices, thanks to certificate pinning, it could still be conducted by a nation state attacker or a hacker with access to Apple's servers. You should probably patch now.
For those who want the gory details, I'll proceed with the rest of this post using the "fun" question and answer format I save for this sort of post.
What is Apple iMessage and why should I care?
Those of you who read this blog will know that I have a particular obsession with Apple iMessage. This isn’t because I’m weirdly obsessed with Apple — although it is a little bit because of that. Mostly it's because I think iMessage is an important protocol. The text messaging service, which was introduced in 2011, has the distinction of being the first widely-used end-to-end encrypted text messaging system in the world. 

To understand the significance of this, it's worth giving some background. Before iMessage, the vast majority of text messages were sent via SMS or MMS, meaning that they were handled by your cellular provider. Although these messages are technically encrypted, this encryption exists only on the link between your phone and the nearest cellular tower. Once an SMS reaches the tower, it’s decrypted, then stored and delivered without further protection. This means that your most personal messages are vulnerable to theft by telecom employees or sophisticated hackers. Worse, many U.S. carriers still use laughably weak encryption and protocols that are vulnerable to active interception.

So from a security point of view, iMessage was a pretty big deal. In a single stroke, Apple deployed encrypted messaging to millions of users, ensuring (in principle) that even Apple itself couldn't decrypt their communications. The even greater accomplishment was that most people didn’t even notice this happened — the encryption was handled so transparently that few users are aware of it. And Apple did this at very large scale: today, iMessage handles peak throughput of more than 200,000 encrypted messages per second, with a supported base of nearly one billion devices. 
So iMessage is important. But is it any good?
Answering this question has been kind of a hobby of mine for the past couple of years. In the past I've written about Apple's failure to publish the iMessage protocol, and on iMessage's dependence on a vulnerable centralized key server. Indeed, the use of a centralized key server is still one of iMessage's biggest weaknesses, since an attacker who controls the keyserver can use it to inject keys and conduct man in the middle attacks on iMessage users.

But while key servers are a risk, attacks on a key server seem fundamentally challenging to implement -- since they require the ability to actively manipulate Apple infrastructure without getting caught. Moreover, such attacks are only useful for prospective surveillance. If you fail to substitute a user's key before they have an interesting conversation, you can't recover their communications after the fact.

A more interesting question is whether iMessage's encryption is secure enough to stand up against retrospective decryption attacks -- that is, attempts to decrypt messages after they have been sent. Conducting such attacks is much more interesting than the naive attacks on iMessage's key server, since any such attack would require the existence of a fundamental vulnerability in iMessage's encryption itself. And in 2016 encryption seems like one of those things that we've basically figured out how to get right.

Which means, of course, that we probably haven't.
How does iMessage encryption work?
What we know about the iMessage encryption protocol comes from a previous reverse-engineering effort by a group from Quarkslab, as well as from Apple's iOS Security Guide. Based on these sources, we arrive at the following (simplified) picture of the basic iMessage encryption scheme:


To encrypt an iMessage, your phone first obtains the RSA public key of the person you're sending to. It then generates a random AES key k and encrypts the message with that key using CTR mode. Then it encrypts k using the recipient's RSA key. Finally, it signs the whole mess using the sender's ECDSA signing key. This prevents tampering along the way.

So what's missing here?

Well, the most obviously missing element is that iMessage does not use a Message Authentication Code (MAC) or authenticated encryption scheme to prevent tampering with the message. To simulate this functionality, iMessage simply uses an ECDSA signature formulated by the sender. Naively, this would appear to be good enough. Critically, it's not.

The attack works as follows. Imagine that a clever attacker intercepts the message above and is able to register her own iMessage account. First, the attacker strips off the original ECDSA signature made by the legitimate sender, and replaces it with a signature of her own. Next, she sends the newly signed message to the original recipient using her own account:


The outcome is that the user receives and decrypts a copy of the message, which has now apparently originated from the attacker rather than from the original sender. Ordinarily this would be a pretty mild attack -- but there's a useful wrinkle. In replacing the sender's signature with one of her own, the attacker has gained a powerful capability. Now she can tamper with the AES ciphertext (red) at will.

Specifically, since in iMessage the AES ciphertext is not protected by a MAC, it is therefore malleable. As long as the attacker signs the resulting message with her key, she can flip any bits in the AES ciphertext she wants -- and this will produce a corresponding set of changes when the recipient ultimately decrypts the message. This means that, for example, if the attacker guesses that the message contains the word "cat" at some position, she can flip bits in the ciphertext to change that part of the message to read "dog" -- and she can make this change even though she can't actually read the encrypted message.
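To see why CTR mode without a MAC is malleable, here is a minimal, self-contained Python sketch that models CTR as XOR with a keystream (an illustration of the malleability property only, not Apple's code):

import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)           # stands in for the AES-CTR keystream; the attacker never sees it
plaintext = b"the cat is here!"
ciphertext = xor(plaintext, keystream)

# The attacker guesses that "cat" sits at offset 4 and wants it to read "dog".
offset, guess, target = 4, b"cat", b"dog"
mauled = bytearray(ciphertext)
mauled[offset:offset + 3] = xor(bytes(mauled[offset:offset + 3]), xor(guess, target))

# The recipient decrypts as usual -- and gets the attacker's edit.
print(xor(bytes(mauled), keystream))   # b'the dog is here!'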

Only one more big step to go.

Now further imagine that the recipient's phone will decrypt the message correctly provided that the underlying plaintext that appears following decryption is correctly formatted. If the plaintext is improperly formatted -- for a silly example, our tampering made it say "*7!" instead of "pig" -- then on receiving the message, the recipient's phone might return an error that the attacker can see.

It's well known that this kind of behavior -- where decryption visibly succeeds or fails depending on the format of the plaintext -- allows an attacker to learn information about the original message, provided that she can send many "mauled" variants to be decrypted. By mauling the underlying message in specific ways -- e.g., attempting to turn "dog" into "pig" and observing whether decryption succeeds -- the attacker can gradually learn the contents of the original message. The technique is known as a format oracle, and it's similar to the padding oracle attack discovered by Vaudenay.
So how exactly does this format oracle work?
The format oracle in iMessage is not a padding oracle. Instead it has to do with the compression that iMessage uses on every message it sends.

You see, prior to encrypting each message payload, iMessage applies a complex formatting that happens to conclude with gzip compression. Gzip is a modestly complex compression scheme that internally identifies repeated strings, applies Huffman coding, then tacks a CRC checksum, computed over the original data, onto the end of the compressed message. It's this gzip-compressed payload that's encrypted within the AES portion of an iMessage ciphertext.

It turns out that given the ability to maul a gzip-compressed, encrypted ciphertext, there exists a fairly complicated attack that allows us to gradually recover the contents of the message by mauling the original message thousands of times and sending the modified versions to be decrypted by the target device. The attack turns on our ability to maul the compressed data by flipping bits, then "fix up" the CRC checksum correspondingly so that it reflects the change we hope to see in the uncompressed data. Depending on whether that test succeeds, we can gradually recover the contents of a message -- one byte at a time.
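
The CRC "fix up" works because CRC-32 is linear over GF(2): the checksum of a modified message is predictable from the original checksum and the flip pattern alone. A quick sketch of that property (standard-library Python; an illustration, not the attack code):

import os, zlib

x = os.urandom(64)                 # stands in for data we can't see
delta = bytearray(64)
delta[10] ^= 0xff                  # the bits we flip via the ciphertext
mauled = bytes(a ^ b for a, b in zip(x, delta))

# For equal lengths: crc(x ^ delta) == crc(x) ^ crc(delta) ^ crc(zeros),
# so the attacker can fix up the encrypted CRC without ever seeing x.
predicted = zlib.crc32(x) ^ zlib.crc32(bytes(delta)) ^ zlib.crc32(bytes(64))
assert predicted == zlib.crc32(mauled)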

While I'm making this sound sort of simple, the truth is it's not. The message is encoded using Huffman coding, with a dynamic Huffman table we can't see -- since it's encrypted. This means we need to make surgically precise changes to the ciphertext such that we can predict the effect of those changes on the decrypted message, and we need to do this blind. Worse, iMessage has various countermeasures that make the attack more complex.

The complete details of the attack appear in the paper, and they're pretty eye-glazing, so I won't repeat them here. In a nutshell, we are able to decrypt a message under the following conditions:
  1. We can obtain a copy of the encrypted message
  2. We can send approximately 2^18 (invisible) encrypted messages to the target device
  3. We can determine whether or not those messages decrypted successfully
The first condition can be satisfied by obtaining ciphertexts from a compromise of Apple's Push Notification Service servers (which are responsible for routing encrypted iMessages) or by intercepting TLS connections using a stolen certificate -- something made more difficult due to the addition of certificate pinning in iOS 9. The third element is the one that initially seems the most challenging. After all, when I send an iMessage to your device, there's no particular reason that your device should send me any sort of response when the message decrypts. And yet this information is fundamental to conducting the attack!

It turns out that there's a big exception to this rule: attachment messages.
How do attachment messages differ from normal iMessages?
When I include a photo in an iMessage, I don't actually send you the photograph through the normal iMessage channel. Instead, I first encrypt that photo using a random 256-bit AES key, then I compute a SHA1 hash and upload the encrypted photo to iCloud. What I send you via iMessage is actually just an iCloud.com URL to the encrypted photo, the SHA1 hash, and the decryption key.

Contents of an "attachment" message.
When you successfully receive and decrypt an iMessage from some sender, your Messages client will automatically reach out and attempt to download that photo. It's this download attempt, which happens only when the phone successfully decrypts an attachment message, that makes it possible for an attacker to know whether or not the decryption has succeeded.

One approach for the attacker to detect this download attempt is to gain access to the victim's local network and monitor the connections. But this seems impractical. A more sophisticated approach is to actually maul the URL within the ciphertext so that rather than pointing to iCloud.com, it points to a related URL such as i8loud.com. Then the attacker can simply register that domain, place a server there and allow the client to reach out to it. This requires no access to the victim's local network.

By capturing an attachment message, repeatedly mauling it, and monitoring the download attempts made by the victim device, we can gradually recover all of the digits of the encryption key stored within the attachment. Then we simply reach out to iCloud and download the attachment ourselves. And that's game over. The attack is currently quite slow -- it takes more than 70 hours to run -- but that's mostly because our code is slow and not optimized. We believe with more engineering it could be made to run in a fraction of a day.

Result of decrypting the AES key for an attachment. Note that the ? represents digits we could not recover for various reasons, typically due to string repetitions. We can brute-force the remaining digits.
The need for an online response is why our attack currently works against attachment messages only: those are simply the messages that make the phone do visible things. However, this does not mean the flaw in iMessage encryption is somehow limited to attachments -- it could very likely be used against other iMessages, given an appropriate side-channel.
How is Apple fixing this?
Apple's fixes are twofold. First, starting in iOS 9.0 (and before our work), Apple began deploying aggressive certificate pinning across iOS applications. This doesn't fix the attack on iMessage crypto, but it does make it much harder for attackers to recover iMessage ciphertexts to decrypt in the first place.

Unfortunately even if this works perfectly, Apple still has access to iMessage ciphertexts. Worse, Apple's servers will retain these messages for up to 30 days if they are not delivered to one of your devices. A vulnerability in Apple Push Network authentication, or a compromise of these servers, could allow all of them to be read out. This means that pinning is only a mitigation, not a true fix.

As of iOS 9.3, Apple has implemented a short-term mitigation that my student Ian Miers proposed. This relies on the fact that while the AES ciphertext is malleable, the RSA-OAEP portion of the ciphertext is not. The fix maintains a "cache" of recently received RSA ciphertexts and rejects any repeated ciphertexts. In practice, this shuts down our attack -- provided the cache is large enough. We believe it probably is.

In the long term, Apple should drop iMessage like a hot rock and move to Signal/Axolotl.
So what does it all mean?
As much as I wish I had more to say, fundamentally, security is just plain hard. Over time we get better at this, but for the foreseeable future we'll never be ahead. The only outcome I can hope for is that people realize how hard this process is -- and stop asking technologists to add unacceptable complexity to systems that already have too much of it.

by Matthew Green (noreply@blogger.com) at March 21, 2016 09:44 PM

March 18, 2016

Slaptijack

Interleave Two Lists of Variable Length (Python)

I was recently asked to take two lists and interleave them. Although I can not think of a scenario off the top of my head where this might be useful, it doesn't strike me as a completely silly thing to do. StackOverflow's top answer for this problem us...
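
For the curious, one common recipe (a sketch in the spirit of the usual StackOverflow approach, not necessarily the code from the full post) pads the shorter list and then drops the padding:

from itertools import chain, zip_longest

def interleave(a, b):
    # Pair the lists up, padding the shorter one with a sentinel,
    # then flatten the pairs and drop the padding again.
    pad = object()
    return [x for x in chain.from_iterable(zip_longest(a, b, fillvalue=pad))
            if x is not pad]

print(interleave([1, 2, 3], ["a", "b", "c", "d", "e"]))
# [1, 'a', 2, 'b', 3, 'c', 'd', 'e']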

by Scott Hebert at March 18, 2016 01:00 PM

R.I.Pienaar

Puppet 4 Type Aliases

Back when I first took a look at Puppet 4 features I explored the new Data Types and said:

Additionally I cannot see myself using a Struct like above in the argument list – to which Henrik says they are looking to add a typedef thing to the language so you can give complex Structs a more convenient name and use that. This will help that a lot.

And since Puppet 4.4.0 this has now become a reality. So a quick post to look at that.

The Problem


I’ve been writing a Hiera based node classifier both to scratch an itch and to have something fairly complex to explore the new features in Puppet 4.

The classifier takes a set of classification rules and produces classifications – classes to include and parameters – from them. Here’s a sample classification:

classifier::rules:
  RedHat VMs:
    match: all
    rules:
      - fact: "%{facts.os.family}"
        operator: ==
        value: RedHat
      - fact: "%{facts.is_virtual}"
        operator: ==
        value: "true"
    data:
      redhat_vm: true
      centos::vm::someprop: someval
    classes:
      - centos::vm

This is a classification with 2 rules that match machines running RedHat-like operating systems which are also virtual. If both rules are true it will:

  • Include the class centos::vm
  • Create some data redhat_vm => true and centos::vm::someprop => someval

You can have an arbitrary number of classifications made up of an arbitrary number of rules. This data lives in Hiera so you can have all sorts of merging, overriding and knock-out fun with it.

The amazing thing is that since Puppet 4.4.0 there is now no Ruby code involved in doing what I said above; all the parsing, looping, evaluating of rules and building of data structures is done using functions written in the pure Puppet DSL.

There’s some Ruby there in the form of a custom backend for the new lookup based hiera system – but this is experimental, optional and a bit crazy.

Anyway, so here’s the problem, before Puppet 4.4.0 my main class had this in:

class classifier (
  Hash[String,
    Struct[{
      match    => Enum["all", "any"],
      rules    => Array[
        Struct[{
          fact     => String,
          operator => Enum["==", "=~", ">", ">=", "<", "<="],
          value    => Data,
          invert   => Optional[Boolean]
        }]
      ],
      data     => Optional[Hash[Pattern[/\A[a-z0-9_][a-zA-Z0-9_]*\Z/], Data]],
      classes  => Array[Pattern[/\A([a-z][a-z0-9_]*)?(::[a-z][a-z0-9_]*)*\Z/]]
    }]
  ] $rules = {}
) {
....
}

This describes the full valid rule as a Puppet Type. It’s pretty horrible. Worse, I have a number of functions and classes, all of which receive the full classification or parts of it, and I’d have to duplicate all this all over.

The Solution


So as of yesterday I can now make this a lot better:

class classifier (
  Classifier::Classifications  $rules = {},
) {
....
}

To do this I made a few files in the module:

# classifier/types/matches.pp
type Classifier::Matches = Enum["all", "any"]
# classifier/types/classname.pp
type Classifier::Classname = Pattern[/\A([a-z][a-z0-9_]*)?(::[a-z][a-z0-9_]*)*\Z/]
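
The post doesn't show the remaining alias files, but reconstructed from the Struct in the original parameter list above they would plausibly look like this (my sketch, not copied from the module):

# classifier/types/rule.pp
type Classifier::Rule = Struct[{
  fact     => String,
  operator => Enum["==", "=~", ">", ">=", "<", "<="],
  value    => Data,
  invert   => Optional[Boolean]
}]
# classifier/types/data.pp
type Classifier::Data = Optional[Hash[Pattern[/\A[a-z0-9_][a-zA-Z0-9_]*\Z/], Data]]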

and a few more, eventually ending up in:

# classifier/types/classification.pp
type Classifier::Classification = Struct[{
  match    => Classifier::Matches,
  rules    => Array[Classifier::Rule],
  data     => Classifier::Data,
  classes  => Array[Classifier::Classname]
}]

Which you can see solves the problem quite nicely. Now in classes and functions where I need, let’s say, just a Rule, all I do is use Classifier::Rule instead of all the crazy.

This makes the native Puppet Data Types perfectly usable for me; they are well worth adopting.

by R.I. Pienaar at March 18, 2016 07:40 AM

March 14, 2016

Electricmonk.nl

Terrible Virtualbox disk performance

For a while I've noticed that Virtualbox has terrible performance when installing Debian / Ubuntu packages. A little top, iotop and atop research later, and it turns out the culprit is the disk I/O, which is just ludicrously slow. The cause is the fact that Virtualbox doesn't have "Use host IO cache" turned on by default for SATA controllers. Turning that option on gives a massive improvement to speed.

To turn on "Host I/O cache":

  • Open the Virtual Machine's Settings dialog
  • Go to "Storage"
  • Click on the "Controller: SATA" controller
  • Enable the "Host I/O Cache" setting.
  • You may also want to check any other controllers and / or disks to see if this option is on there.
  • Save the settings and start the VM and you'll see a big improvement.

(screenshot: hostiocache)

Update:

Craig Gill was kind enough to mail me with the reasoning behind why the "Host I/O cache" setting is off by default:

Hello, thanks for the blog post, I just wanted to comment on it below:

In your blog post "Terrible Virtualbox disk performance", you mention
that the 'use host i/o cache' is off by default.
In the VirtualBox help it explains why it is off by default, and it
basically boils down to safety over performance.

Here are the disadvantages of having that setting on (which these
points may not apply to you):

1. Delayed writing through the host OS cache is less secure. When the
guest OS writes data, it considers the data written even though it has
not yet arrived on a physical disk. If for some reason the write does
not happen (power failure, host crash), the likelihood of data loss
increases.

2. Disk image files tend to be very large. Caching them can therefore
quickly use up the entire host OS cache. Depending on the efficiency
of the host OS caching, this may slow down the host immensely,
especially if several VMs run at the same time. For example, on Linux
hosts, host caching may result in Linux delaying all writes until the
host cache is nearly full and then writing out all these changes at
once, possibly stalling VM execution for minutes. This can result in
I/O errors in the guest as I/O requests time out there.

3. Physical memory is often wasted as guest operating systems
typically have their own I/O caches, which may result in the data
being cached twice (in both the guest and the host caches) for little
effect.

"If you decide to disable host I/O caching for the above reasons,
VirtualBox uses its own small cache to buffer writes, but no read
caching since this is typically already performed by the guest OS. In
addition, VirtualBox fully supports asynchronous I/O for its virtual
SATA, SCSI and SAS controllers through multiple I/O threads."

Thanks,
Craig

That's important information to keep in mind. Thanks Craig!

My reply:

Thanks for the heads up!

For what it's worth, I already wanted to write the article a few weeks ago, and I have been running all my VirtualBox hosts with Host I/O cache set to "on" since then without noticing any problems. I haven't lost any data in databases running on Virtualbox (even though they've crashed a few times; or actually, I killed them with ctrl-alt-backspace), and the host is actually more responsive when VirtualBoxes are under heavy disk load. I do have 16 GB of memory, which may make the difference.

The speedups in the guests are more than worth any other drawbacks though. The gitlab omnibus package now installs in 4 minutes rather than 5+ hours (yes, hours).

I'll keep this blog post updated in case I run into any weird problems.

by admin at March 14, 2016 08:34 PM

Evaggelos Balaskas

Top Ten Linux Distributions and https

Top Ten Linux Distributions and https

A/A |  Distro    |          URL               | Verified by       | Begin      | End        | Key
01. | ArchLinux  | https://www.archlinux.org/ | Let's Encrypt     | 02/24/2016 | 05/24/2016 | 2048
02. | Linux Mint | https://linuxmint.com/     | COMODO CA Limited | 02/24/2016 | 02/24/2017 | 2048
03. | Debian     | https://www.debian.org/    | Gandi             | 12/11/2015 | 01/21/2017 | 3072
04. | Ubuntu     | http://www.ubuntu.com      | -                 | -          | -          | -
05. | openSUSE   | https://www.opensuse.org/  | DigiCert Inc      | 02/17/2015 | 04/23/2018 | 2048
06. | Fedora     | https://getfedora.org/     | DigiCert Inc      | 11/24/2014 | 11/28/2017 | 4096
07. | CentOS     | https://www.centos.org/    | DigiCert Inc      | 07/29/2014 | 08/02/2017 | 2048
08. | Manjaro    | https://manjaro.github.io/ | DigiCert Inc      | 01/20/2016 | 04/06/2017 | 2048
09. | Mageia     | https://www.mageia.org/    | Gandi             | 03/01/2016 | 02/07/2018 | 2048
10. | Kali       | https://www.kali.org/      | GeoTrust Inc      | 11/09/2014 | 11/12/2018 | 2048
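
For what it's worth, the raw material for a table like this can be gathered with a few lines of Python (a rough sketch using only the standard library; it reads the issuer and validity dates, though not the key size):

import ssl, socket

def cert_info(host):
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((host, 443)),
                         server_hostname=host) as s:
        cert = s.getpeercert()
    issuer = dict(pair[0] for pair in cert["issuer"])
    return issuer.get("organizationName"), cert["notBefore"], cert["notAfter"]

print(cert_info("www.archlinux.org"))
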
Tag(s): https

March 14, 2016 04:27 PM

March 12, 2016

Evaggelos Balaskas

Baïkal - CalDAV & CardDAV server

Baïkal is a CalDAV and CardDAV server, based on sabre/dav.

Self-hosting your own CalDAV & CardDAV server is one of the first steps to better control your data and keep your data, actually, yours! So here comes Baikal, which is really easy to set up. Just as easily you can configure any device (mobile/tablet/laptop/desktop) to use your baikal instance and synchronize your calendar & contacts everywhere.

 

In this blog post are some personal notes on installing or upgrading baikal on your web server.

 

[ The latest version at the time this article was written is 0.4.1 ]

 

Change to your web directory (usually something like /var/www/html/) and download baikal:

Clean Install - Latest release 0.4.1
based on sabre/dav 3.1.2
You need at least PHP 5.5, but preferably use 5.6.

# wget -c https://github.com/fruux/Baikal/releases/download/0.4.1/baikal-0.4.1.zip
# yes | unzip baikal-0.4.1.zip

# chown -R apache:apache baikal/

That’s it !

 

Be aware that there is a big difference between 0.2.7 and versions greater than 0.3.x.
And that is that the URL has an extra part: html

from: https://baikal.example.com/admin
to : https://baikal.example.com/html/admin

If you had already installed baikal-0.2.7 and you want to upgrade to version 0.4.x or later, then you have to follow the steps below:

# wget -c http://baikal-server.com/get/baikal-flat-0.2.7.zip
# unzip baikal-flat-0.2.7.zip
# mv baikal-flat baikal

# wget -c https://github.com/fruux/Baikal/releases/download/0.4.1/baikal-0.4.1.zip
# yes | unzip baikal-0.4.1.zip

# touch baikal/Specific/ENABLE_INSTALL
# chown -R apache:apache baikal/

 

I prefer to create a new virtualhost every time I need to add a new functionality to my domain.

Be smart & use encryption !
Below is my virtualhost as an example:

<VirtualHost *:443>

    ServerName	baikal.example.com

    # SSL Support
    SSLEngine on

    SSLProtocol ALL -SSLv2 -SSLv3
    SSLHonorCipherOrder on
    SSLCipherSuite HIGH:!aNULL:!MD5

    SSLCertificateFile /etc/letsencrypt/live/baikal.example.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/baikal.example.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/baikal.example.com/chain.pem

    # Logs
    CustomLog logs/baikal.access.log combined
    ErrorLog  logs/baikal.error.log

    DocumentRoot /var/www/html/baikal/

    <Directory /var/www/html/baikal/>
            Order allow,deny
            Allow from all
    </Directory>

</VirtualHost>

 

The next step is to open your browser and browse to your baikal location,

eg. https://baikal.example.com/html/

admin interface:

https://baikal.example.com/html/admin/

or

if you have an older version (0.2.7) on your system

eg. https://baikal.example.com

 

I use SQLite for personal use (it makes the backup process easy) but you can always choose MySQL.

Dashboard on 0.4.1:

(screenshot: baikal_041d.jpg)

Useful URIs are:

Principals: (screenshot: baikal_041c.jpg)

Plugins: (screenshot: baikal_041b.jpg)

Nodes: (screenshot: baikal_041a.jpg)

Here is a screen-guide of the latest versions:

(screenshots: baikal_01.jpg, baikal_02.jpg, baikal_03.jpg, baikal_04.jpg)

Login to the admin dashboard and create your user through the Users and resources tab,

and you are done with the baikal installation & configuration process.

Principals

Applications (caldav/carddav and task clients) can now be accessed by visiting principals URI:

https://baikal.example.com/html/card.php/principals

or via dav.php

https://baikal.example.com/html/dav.php

but if your client does not support the above all-in-one URI, then try the below for calendar & contacts:

CalDAV

https://baikal.example.com/html/cal.php/calendars/test/default

CardDAV

https://baikal.example.com/html/card.php/addressbooks/test/default

On android devices, I use: DAVdroid

If you have a problem with your self-signed certificate,
try adding it to your device through the security settings.

(screenshots: davdroid_01.jpg, davdroid_03.jpg)

March 12, 2016 06:54 PM

March 10, 2016

HolisticInfoSec.org

toolsmith #114: WireEdit and Deep Packet Modification




PCAPs or it didn't happen, right? 



Introduction
Packet heads, this toolsmith is for you. Social media to the rescue. Packet Watcher (jinq102030) Tweeted using the #toolsmith hashtag to say that WireEdit would make a great toolsmith topic. Right you are, sir! Thank you. Many consider Wireshark the eponymous tool for packet analysis; it was only my second toolsmith topic almost ten years ago in November 2006. I wouldn't dream of conducting network forensic analysis without NetworkMiner (August 2008) or CapLoader (October 2015). Then there's Xplico, Security Onion, NST, Hex, the list goes on and on...
Time to add a new one. Ever want to more easily edit those packets? Me too. Enter WireEdit, a comparatively new player in the space. Michael Sukhar (@wirefloss) wrote and maintains WireEdit, the first universal WYSIWYG (what you see is what you get) packet editor. Michael identifies WireEdit as a huge productivity booster for anybody working with network packets, in a manner similar to other industry groundbreaking WYSIWYG tools.

In Michael's own words: "Network packets are complex data structures built and manipulated by applying complex but, in most cases, well known rules. Manipulating packets with C/C++, or even Python, requires programming skills not everyone possesses and often lots of time, even if one has to change a single bit value. The other existing packet editors support editing of low stack layers like IPv4, TCP/UDP, etc, because the offsets of specific fields from the beginning of the packet are either fixed or easily calculated. The application stack layers supported by those pre-WireEdit tools are usually the text based ones, like SIP, HTTP, etc. This is because no magic is required to edit text. WireEdit's main innovation is that it allows editing binary encoded application layers in a WYSIWYG mode."

I've typically needed to edit packets to anonymize or reduce captures, but why else would one want to edit packets?
1) Sanitization: Often, PCAPs contain sensitive data. To date, there has been no easy mechanism to “sanitize” a PCAP, which, in turn, makes traces hard to share.
2) Security testing: Engineers often want to vary or manipulate packets to see how the network stack reacts to it. To date, that task is often accomplished via programmatic means.
WireEdit allows you to do so in just a few clicks.
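
For contrast, here's roughly what the programmatic route looks like (a sketch assuming scapy is installed; the filenames and payload match the test below, but this is my illustration, not anything WireEdit ships):

from scapy.all import rdpcap, wrpcap, IP, TCP, Raw

pkts = rdpcap("toolsmith.pcap")
for pkt in pkts:
    if pkt.haslayer(Raw) and b"GET /cgi-bin/man.cgi" in pkt[Raw].load:
        pkt[Raw].load = pkt[Raw].load.replace(
            b"sec=8", b"sec=8%22onmouseover%3Dalert(1337)%2F%2F")
        # Delete lengths and checksums so scapy recomputes them on write.
        del pkt[IP].len, pkt[IP].chksum, pkt[TCP].chksum
wrpcap("toolsmithXSS.pcap", pkts)

Note that growing a payload this way desynchronizes later TCP sequence numbers, which a stream reassembler may notice; getting that fully right in code is exactly the hassle WireEdit removes.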

Michael describes a demo video he published in April 2015, where he edits the application layer of the SS7 stack (GSM MAP packets). GSM MAP is the protocol responsible for much of the application logic in “classic” mobile networks, and is still widely deployed. The packet he edits carries an SMS text message, and the layer he edits is way up the stack and binary encoded. Michael describes the message displayed as text, but notes that if you look at the binary of the packet, you won't find it there due to complex encoding. If you choose to decode in order to edit the text, your only option is to look up the offset of the appropriate bytes in Wireshark or a similar tool, and try to edit the bytes directly.
This often completely breaks the packet, and Michael proudly points out that he's not aware of any tool that allows such editing in WYSIWYG mode. Nor am I, and I enjoyed putting WireEdit through a quick validation of my own.

Test Plan

I conceived a test plan to modify a PCAP of normal web browsing traffic with web application attacks written in to the capture with WireEdit. Before editing the capture, I'd run it through a test harness to validate that no rules were triggered resulting in any alerts, thus indicating that the capture was clean. The test harness was a Snort 2.9.8.0 instance I'd implemented on a new Ubuntu 14.04 LTS, configured with Snort VRT and Emerging Threats emerging-web_server and emerging-web_specific_apps rules enabled. To keep our analysis all in the family I took a capture while properly browsing the OpenBSD entry for tcpdump.
A known good request for such a query would, in text, as a URL, look like:
http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/tcpdump.8?query=tcpdump&sec=8
Conversely, if I were to pass a cross-site scripting attack (I did not, you do not) via this same URL and one of the available parameters, in text, it might look something like:
http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/tcpdump.8?query=tcpdump&sec=8%22onmouseover%3Dalert(1337)%2F%2F
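
For reference, a quick sketch of what that encoded tail decodes to (Python standard library):

from urllib.parse import unquote

tail = "%22onmouseover%3Dalert(1337)%2F%2F"
print(unquote(tail))  # "onmouseover=alert(1337)//
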
Again though, my test plan was one where I wasn't conducting any actual attacks against any website, and instead used WireEdit to "maliciously" modify the packet capture from a normal browsing session. I would then parse it with Snort to validate that the related web application security rules fired correctly.
This in turn would validate WireEdit's capabilities as a WYSIWYG PCAP editor, as you'll see in the walk-through. Such a testing scenario is a very real method for testing the efficacy of your IDS for web application attack detection, assuming it utilizes a Snort-based rule set.

Testing

On my Ubuntu Snort server VM I ran sudo tcpdump -i eth0 -w toolsmith.pcap while browsing http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/tcpdump.8?query=tcpdump&sec=8.
Next, I ran sudo snort -c /etc/snort/snort.conf -r toolsmith.pcap against the clean, unmodified PCAP to validate that no alerts were received, results noted in Figure 1.

Figure 1: No alerts triggered via initial OpenBSD browsing PCAP
I then dragged the capture (407 packets) over to my Windows host running WireEdit.
Now wait for it, because using WireEdit is a lot to grasp in a short period of time.
In the WireEdit UI, click the Open icon, then select the PCAP you wish to edit, and...that's it, you're ready to edit. :-) WireEdit tagged packet #9 with the pre-requisite GET request marker I was interested in, so I expanded that packet and drilled down to the HTTP: GET descriptor and the Request-URI under Request-Line. More massive complexity; here, take notes because it's gonna be rough. I right-clicked the Request-URI, selected Edit PDU, and edited the PDU with a cross-site scripting (JavaScript) payload (URL encoded) as part of the GET request. I told you, really difficult, right? Figure 2 shows just how easy it really is.

Figure 2: Using WireEdit to modify Request-URI with XSS payload
I then saved the edited PCAP as toolsmithXSS.pcap and dragged it back over to my Snort server and re-ran it through Snort. The once clean, pristine PCAP elicited an entirely different response from Snort this time. Figure 3 tells no lies.

Figure 3: XSS ET Snort alert fires
Perfect: in what was literally a 30-second edit with WireEdit, I validated that my ten-minute Snort setup catches cross-site scripting attempts with at least one rule. And no websites were actually harmed in the making of this test scenario, just a quick tweak with WireEdit.
That was fun, so let's do it again, this time with a SQL injection payload. Continuing with toolsmithXSS.pcap, I jumped to the GET request in frame 203, as it included a request for a different query, and again edited the Request-URI with an attack specific to MySQL, as seen in Figure 4.


(Figure 4: the Request-URI edited with a MySQL-specific SQL injection payload)

I saved this PCAP modification as toolsmithXSS_SQLi.pcap and returned to the Snort server for yet another happy trip down Snort Rule Lane. As Figure 5 shows, we had an even better result this time.


Figure 5: WireEdited PCAP triggers multiple SQL injection alerts
In addition to the initial XSS alert firing again, this time we collected four alerts for:

  • ET WEB_SERVER MYSQL SELECT CONCAT SQL Injection Attempt
  • ET WEB_SERVER SELECT USER SQL Injection Attempt in URI
  • ET WEB_SERVER Possible SQL Injection Attempt UNION SELECT
  • ET WEB_SERVER Possible SQL Injection Attempt SELECT FROM

That's a big fat "hell, yes" for WireEdit.
Still with me that I never actually executed these attacks? I just edited the PCAP with WireEdit and fed it back to the Snort beast. Imagine a PCAP like this, optimized for the OWASP Top 10, being added to your security test library without your needing to conduct any actual web application attacks. Thanks WireEdit!

Conclusion

WireEdit is beautifully documented, with a great Quickstart. Peruse the WireEdit website and FAQ, and watch the available videos. The next time you need to edit packets, you'll be well armed and ready to do so with WireEdit, and you won't be pulling your hair out trying to accomplish it quickly, effectively, and correctly. WireEdit made a huge leap from unknown to me to the top five on my favorite tools list. WireEdit is everything it is claimed to be. Outstanding.
Ping me via email or Twitter if you have questions: russ at holisticinfosec dot org or @holisticinfosec.

ACK

Thanks to Michael Sukhar for WireEdit and Packet Watcher for the great suggestion.

by Russ McRee (noreply@blogger.com) at March 10, 2016 06:46 AM

March 09, 2016

That grumpy BSD guy

Domain Name Scams Are Alive And Well, Thank You

Is somebody actually trying to register your company name as a .cn or .asia domain? Not likely. And don't pay them.

It has been a while since anybody tried to talk me into registering a domain name I wasn't sure I wanted in the first place, but it has happened before. Scams more or less like the Swedish one are as common as they are transparent, but apparently enough people take the bait that the scammers keep trying.

After a few quiet years in my backwater of the Internet, in March of 2016, we saw a new sales push that came from China. The initial contact on March 4th, from somebody calling himself Jim Bing, read as follows (preserved here with headers for reference; you may need MIME tools to actually extract text due to character set handling):

Subject: Notice for "bsdly"

Dear CEO,

(If you are not the person who is in charge of this, please forward this to your CEO, because this is urgent, Thanks)

We are a Network Service Company which is the domain name registration center in China.
We received an application from Huabao Ltd on March 2, 2016. They want to register " bsdly " as their Internet Keyword and " bsdly.cn "、" bsdly.com.cn " 、" bsdly.net.cn "、" bsdly.org.cn " 、" bsdly.asia " domain names, they are in China and Asia domain names. But after checking it, we find " bsdly " conflicts with your company. In order to deal with this matter better, so we send you email and confirm whether this company is your distributor or business partner in China or not?

Best Regards,

Jim
General Manager
Shanghai Office (Head Office)
8006, Xinlong Building, No. 415 WuBao Road,
Shanghai 201105, China
Tel: +86 216191 8696
Mobile: +86 1870199 4951
Fax: +86 216191 8697
Web: www.cnweb-registry.com


The message was phrased a bit oddly in parts (as in, why would anybody register an "internet keyword"?), but not entirely unintelligible as English-language messages from Asia sometimes are.

I had a slight feeling of deja vu -- I remembered a very similar message turning up in 2008 while we were in the process of selling the company we'd started a number of years earlier. In the spirit of due diligence (after asking the buyer) we replied then that the company did not have any plans for expanding into China, and if my colleagues ever heard back, it likely happened after I'd left the company.

This time around I was only taking a break between several semi-urgent tasks, so I quickly wrote a reply, phrased in a way that I thought would likely make them just go away (also preserved here):

Subject: Re: Notice for "bsdly"
 
Dear Jim Bing,

We do not have any Chinese partners at this time, and we are not
currently working to establish a presence in Chinese territory. As to
Huabao Ltd's intentions for registering those domains, I have no idea
why they should want to.

Even if we do not currently plan to operate in China and see no need
to register those domains ourselves at this time, there is a risk of
some (possibly minor) confusion if those names are to be registered
and maintained by a third party. If you have the legal and practical
authority to deny these registrations that would be my preference.

Yours,
Peter N. M. Hansteen


Then on March 7th, a message from "Jiang zhihai" turned up (preserved here, again note the character set issues):

Subject: " bsdly "
Dear Sirs,

Our company based in chinese office, our company has submitted the " bsdly " as CN/ASIA(.asia/.cn/.com.cn/.net.cn/.org.cn) domain name and Internet Keyword, we are waiting for Mr. Jim's approval. We think these names are very important for our business in Chinese and Asia market. Even though Mr. Jim advises us to change another name, we will persist in this name.

Best regards

Jiang zhihai

Now, if they're in a formal process of getting approval for that domain name, why would they want to screw things up by contacting me directly? I was beginning to smell a rat, but I sent them an answer anyway (preserved here):

Subject: Re: " bsdly "

Dear Jiang zhihai,

You've managed to make me a tad curious as to why the "bsdly" name
would be important in these markets.

While there is a very specific reason why I chose that name for my
domains back in 2004, I don't see any reason why you wouldn't be
perfectly well served by picking some other random sequence of characters.

So out of pure curiosity, care to explain why you're doing this?

Sincerely,
Peter N. M. Hansteen

Yes, that domain name has been around for a while. I didn't immediately remember exactly when I'd registered the domain, but a quick look at the whois info (preserved here) confirmed what I thought. I've had it since 2004.

Anyone who is vaguely familiar with the stuff I write about will have sufficient wits about them to recognize the weak pun the domain name is. If "bsdly" has any other significance whatsoever in other languages including the several Chinese ones, I'd genuinely like to know.

But by now I was pretty sure this was a scam. Registrars may or may not do trademark searches before registering domains, but in most cases the registrar would not care either way. Domain registration is for the most part a purely technical service that extends to making sure whether any requested domains are in fact available, while any legal disputes such as trademark issues could very easily be sent off to the courts for the end users at both ends to resolve. The supposed Chinese customer contacting me directly just does not make sense.

Then of course a few hours after I'd sent that reply, our man Jim fired off a new message (preserved here, MIME and all):

Subject: CN/ASIA domain names & Internet Keyword

Dear Peter N. M. Hansteen,

Based on your company having no relationship with them, we have suggested they should choose another name to avoid this conflict but they insist on this name as CN/ASIA domain names (asia/ cn/ com.cn/ net.cn/ org.cn) and internet keyword on the internet. In our opinion, maybe they do the similar business as your company and register it to promote his company.
According to the domain name registration principle: The domain names and internet keyword which applied based on the international principle are opened to companies as well as individuals. Any companies or individuals have rights to register any domain name and internet keyword which are unregistered. Because your company haven't registered this name as CN/ASIA domains and internet keyword on the internet, anyone can obtain them by registration. However, in order to avoid this conflict, the trademark or original name owner has priority to make this registration in our audit period. If your company is the original owner of this name and want to register these CN/ASIA domain names (asia/ cn/ com.cn/ net.cn/ org.cn) and internet keyword to prevent anybody from using them, please inform us. We can send an application form and the price list to you and help you register these within dispute period.

Kind regards

Jim
General Manager
Shanghai Office (Head Office)
8006, Xinlong Building, No. 415 WuBao Road,
Shanghai 201105, China
Tel: +86 216191 8696
Mobile: +86 1870199 4951
Fax: +86 216191 8697
Web: www.cnwebregistry.com

So basically he's fishing for me to pony up some cash and register those domains myself through their outfit. Quelle surprise.

I'd already checked whether my regular registrar offers .cn registrations (they don't), and checking for what looked like legitimate .cn domain registrars turned up that registering a .cn domain would likely cost to the tune of USD 35. Not a lot of money, but more than I care to spend (and keep spending on a regular basis) on something I emphatically do not need.

So I decided to do my homework. It turns out that this is a scam that's been going on for years. A search on the names of persons and companies turned up Matt Lowe's 2012 blog post Chinese Domain Name Registration Scams with a narrative identical to my experience, with only minor variations in names and addresses.

Checking whois while writing this it turns out that apparently bsdly.cn has been registered:

[Wed Mar 09 20:34:34] peter@skapet:~$ whois bsdly.cn
Domain Name: bsdly.cn
ROID: 20160229s10001s82486914-cn
Domain Status: ok
Registrant ID: 22cn120821rm22yr
Registrant: 徐新荣
Registrant Contact Email: 1725093@qq.com
Sponsoring Registrar: 浙江贰贰网络有限公司
Name Server: ns1.22.cn
Name Server: ns2.22.cn
Registration Time: 2016-02-29 20:55:09
Expiration Time: 2017-02-28 20:55:09
DNSSEC: unsigned

But it doesn't resolve more than a week after registration:

[Wed Mar 09 20:34:47] peter@skapet:~$ host bsdly.cn
Host bsdly.cn not found: 2(SERVFAIL)


That likely means they thought me a prospect and registered with an intent to sell, and they've already spent some amount of cash they're not getting back from me. I think we can consider them LARTed, however on a very small scale.

What's more, none of the name servers specified in the whois info seem to answer DNS queries:

[Wed Mar 09 20:35:36] peter@skapet:~$ dig @ns1.22.cn bsdly.cn any

; <<>> DiG 9.4.2-P2 <<>> @ns1.22.cn bsdly.cn any
; (2 servers found)
;; global options:  printcmd
;; connection timed out; no servers could be reached
[Wed Mar 09 20:36:14] peter@skapet:~$ dig @ns2.22.cn bsdly.cn any

; <<>> DiG 9.4.2-P2 <<>> @ns2.22.cn bsdly.cn any
; (2 servers found)
;; global options:  printcmd
;; connection timed out; no servers could be reached



So summing up,
  • This is a scam that appears to have been running for years.
  • If something similar to those messages start turning up in your inbox, the one thing you do not want to do is to actually pay for the domains they're offering.

    Most likely you do not need those domains, and it's easy to check how far along they are in the registration process. If you have other contacts that will cheaply and easily let you register those domains yourself, there's an element of entertainment to consider. But keep in mind that automatic renewals for domains you don't actually need can turn irritating once you've had a few laughs over the LARTing.
  • If you are actually considering setting up shop in the markets they're offering domains for and you receive those messages before you've come around to registering domains matching your trademarks, you are the one who's screwed up.
If this makes you worried about Asian cyber-criminals or the Cyber Command of the People's Liberation Army out to get your cyber-whatever, please calm down.

Sending near-identical email messages to people listed in various domains' whois info does not require a lot of resources, and as Matt says in his article, there are indications that this could very well be the work (for some values of) of a single individual. As cybercrime goes, this is the rough equivalent of some petty, if unpleasant, street crime.

I'm all ears for suggestions for further LARTing (at least those that do not require a lot of effort on my part), and if you've had similar experiences, I'd like to hear from you (in comments or email). Do visit Matt Lowe's site too, and add to his collection if you want to help him keep track.

And of course, if "Jim Bing" or "Jiang zhihai" actually answer any of my questions, I'll let you know with an update to this article.

Update 2016-03-15: As you can imagine I've been checking whether bsdly.cn resolves and the registration status of the domain via whois at semi-random intervals of at least a few hours since I started the blog post. I was a bit surprised to find that the .cn whois server does not answer requests at the moment:

[Tue Mar 15 10:23:31] peter@portal:~$ whois bsdly.cn
whois: cn.whois-servers.net: connect: Connection timed out


It could of course be a coincidence and an unrelated technical issue. I'd appreciate independent verification.
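
For what it's worth, the semi-random polling itself is trivial to script; a quick sketch (Python standard library; it checks resolution only, not whois):

import socket, time, random

def resolves(name):
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

while True:
    print(time.strftime("%Y-%m-%d %H:%M:%S"),
          resolves("bsdly.cn") or "does not resolve")
    time.sleep(random.randint(3600, 6 * 3600))  # at least a few hours, semi-random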

by Peter N. M. Hansteen (noreply@blogger.com) at March 09, 2016 08:17 PM

March 08, 2016

Anton Chuvakin - Security Warrior

Links for 2016-03-07 [del.icio.us]

March 08, 2016 08:00 AM

March 07, 2016

Anton Chuvakin - Security Warrior

Monthly Blog Round-Up – February 2016

Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month:
  1. “Why No Open Source SIEM, EVER?” contains some of my SIEM thinking from 2009. Is it relevant now? Well, you be the judge. Succeeding with SIEM requires a lot of work, whether you paid for the software, or not. BTW, this post has an amazing “staying power” that is hard to explain – I suspect it has to do with people wanting “free stuff” and googling for “open source SIEM” …  [267 pageviews]
  2. “New SIEM Whitepaper on Use Cases In-Depth OUT!” (dated 2010) presents a whitepaper on select SIEM use cases described in depth with rules and reports [using now-defunct SIEM product]; also see this SIEM use case in depth and this for a more current list of popular SIEM use cases. Finally, see our new 2016 research on security monitoring use cases here! [106 pageviews]
  3. “Top 10 Criteria for a SIEM?” came from one of my last projects I did when running my SIEM consulting firm in 2009-2011 (for my recent work on evaluating SIEM, see this document [2015 update]) [104 pageviews]
  4. “SIEM Resourcing or How Much the Friggin’ Thing Would REALLY Cost Me?” is a quick framework for assessing the SIEM project (well, a program, really) costs at an organization (much more details on this here in this paper). [70 pageviews]
  5. “Simple Log Review Checklist Released!” is often at the top of this list – this aging checklist is still a very useful tool for many people. “On Free Log Management Tools” is a companion to the checklist (updated version) [65 pageviews of total 3861 pageviews to all blog pages]
In addition, I’d like to draw your attention to a few recent posts from my Gartner blog
 
Current research on IR:
Current research on EDR:
Past research on SIEM:
Miscellaneous fun posts:

(see all my published Gartner research here)
Also see my past monthly and annual “Top Popular Blog Posts” – 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015.
Disclaimer: most content at SecurityWarrior blog was written before I joined Gartner on Aug 1, 2011 and is solely my personal view at the time of writing. For my current security blogging, go here.

Previous post in this endless series:

by Anton Chuvakin (anton@chuvakin.org) at March 07, 2016 04:16 PM

Sean's IT Blog

It’s Time To Reconsider My Thoughts on GPUs in VDI…

Last year, I wrote that it was too early to consider GPUs for general VDI use and that they should be reserved only for VDI use cases where they are absolutely required. There were a number of reasons for this, including user density per GPU, lack of monitoring and vMotion, and economics. That led to a Frontline Chatter podcast discussing this topic in more depth with industry expert Thomas Poppelgaard.

When I wrote that post, I said that there would be a day when GPUs would make sense for all VDI deployments.  That day is coming soon.  There is a killer app that will greatly benefit all users (in certain cases) who have access to a GPU.

Last week, I got to spend some time out at NVIDIA’s Headquarters in Santa Clara taking part in NVIDIA GRID Days.  GRID Days was a two day event interacting with the senior management of NVIDIA’s GRID product line along with briefings on the current and future technology in GRID.

Disclosure: NVIDIA paid for my travel, lodging, and some of my meals while I was out in Santa Clara.  This has not influenced the content of this post.

The killer app that will drive GPU adoption in VDI environments is Blast Extreme.  Blast Extreme is the new protocol being introduced in VMware Horizon 7 that utilizes H.264 as the codec for the desktop experience.  The benefit of using H.264 over other codecs is that many devices include hardware for encoding and decoding H.264 streams.  This includes almost every video card made in the last decade.

So what does this have to do with VDI?

When a user is logged into a virtual desktop or is using a published application on an RDSH server, the desktop and applications that they're interacting with are being rendered, captured, encoded, or converted into a stream of data, and then transported over the network to the client. Normally, this encoding happens in software and uses CPU cycles. (PCoIP has hardware offload in the form of APEX cards, but these only handle the encoding phase; rendering happens somewhere else.)

When GPUs are available to virtual desktops or RDSH/XenApp servers, the rendering and encoding tasks can be pushed into the GPU, where dedicated and optimized hardware can take over these tasks. This reduces the amount of CPU overhead on each desktop, and it can lead to a snappier user experience. NVIDIA's testing has also shown that Blast Extreme with GPU offload uses less bandwidth and has lower latency compared to PCoIP.

Note: These aren’t my numbers, and I haven’t had a chance to validate these findings in my lab. When Horizon 7 is released, I plan to do similar testing of my own, comparing PCoIP and Blast Extreme in both LAN and WAN environments.

If I use Blast Extreme, and I install GRID cards in my hosts, I gain two tangible user experience benefits.  Users now have access to a GPU, which many applications, especially Microsoft Office and most web browsers, tap into for processing and rendering.  And they gain the benefits of using that same GPU to encode the H.264 streams that Blast Extreme uses, potentially lowering the bandwidth and latency of their session.  This, overall, translates into significant improvements in their virtual desktop and published applications experience*.

Many of the limitations of vGPU still exist.  There is no vMotion support, and performance analytics are not fully exposed to the guest OS.  But density has improved significantly with the new M6 and M60 cards.  So while it may not be cost effective to retrofit GPUs into existing Horizon deployments, GPUs are now worth considering for new Horizon 7 deployments.

*Caveat: If users are on a high latency network connection, or if the connection has a lot of contention, you may have different results.


by seanpmassey at March 07, 2016 03:38 PM

March 04, 2016

Electricmonk.nl

mdpreview, a Markdown previewer to be used with an external editor

There are many Markdown previewers out there, from the simplest commandline tool + webbrowser to full-fledged Markdown IDEs. I've tried quite a few, and I like none of them. I write my Markdown in an external editor (Vim), something very few Markdown previewers take into account. The ones that do are buggy. So I wrote mdpreview, a standalone Markdown previewer for Linux that works great with an external editor such as Vim. The main selling points:

  • Automatic reload when your Markdown file changes. Unlike many other previewers, it remembers your scroll position during reload and doesn't put you back at the top.
  • Themes that closely resemble Github and Bitbucket, so you actually know what it's going to look like when published. There are also some additional themes that are nice on the eyes (solarized). 
  • An option to set Keep-on-top window hinting, so the previewer always stays on top of other windows.
  • Vi motion keys (jkGg)
  • Append detection. If the end of the document is being viewed and new content is appended, mdpreview automatically scrolls to the bottom.
  • mdpreview remembers your window size and position. A very basic feature you'd think most previewers would support, but don't.

A feature to automatically scroll to the last made change in the Markdown file is currently being implemented.
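
The reload behavior boils down to watching the file; a minimal sketch of the idea (mtime polling in Python; an illustration of the approach, not mdpreview's actual code):

import os, time

def watch(path, on_change, interval=0.5):
    # Poll the file's modification time and fire a callback when it changes.
    last = os.stat(path).st_mtime
    while True:
        time.sleep(interval)
        mtime = os.stat(path).st_mtime
        if mtime != last:
            last = mtime
            on_change(path)

watch("README.md", lambda p: print(p, "changed; reloading preview"))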

Here's mdpreview running the Solarized theme:

(screenshot: mdpreview-sol)

The Github theme:

(screenshot: mdpreview-github)

And the BitBucket theme:

(screenshot: mdpreview-bitbucket)

More information and installation instructions are available on the Github page.

by admin at March 04, 2016 03:43 PM

The Lone Sysadmin

Use Microsoft Excel For Your Text Manipulation Needs

I’m just going to lay it out there: sysadmins should use Microsoft Excel more. I probably will be labeled a traitor and a heathen for this post. It’s okay, I have years of practice having blasphemous opinions on various IT religious beliefs. Do I know how to use the UNIX text tools like sed, awk, xargs, find, […]

The post Use Microsoft Excel For Your Text Manipulation Needs appeared first on The Lone Sysadmin. Head over to the source to read the full post!

by Bob Plankers at March 04, 2016 05:16 AM

March 02, 2016

SysAdmin1138

How Dunbar's Number intersects with scaling a startup

Dunbar's Number is a postulate claiming that, due to how human brains are structured, there is an upper limit to the number of personal relationships we can keep track of. Commonly, that number is presumed to be about 150, though the science is not nearly as sure of that number. This 150 includes all of your personal relationships:

  • Family
  • Coworkers
  • Friends
  • People you run into every day and have learned their names

And so on.

This postulate has an intersection with growing a company, and how the office culture evolves. When a company is 4 people in a shared open-plan office, it's quite easy for everyone to know everything about everyone else. You can still kind of do that at 20 people. Getting to 50 starts pushing things, since 'coworkers' begins to take up a large piece of a person's personal-relationship social graph. At 100, there are going to be people you don't know in your company. As it evolves, office-culture needs to deal with all of these stages.

One of the critiques to Dunbar's Number, more of a refinement, is a report by Matthew Brashears (doi 10.1038/srep01513) that claims humans use compression techniques to expand the number of storable relationships. The idea is, in essence:

If the structure of relationships has an external framework, it is possible to infer relationships. Therefore, humans do not have to memorize every leg of the social-graph, which allows them to have more connections than otherwise are possible.

One such external structure has direct relevancy to how offices work: kin-labels.

The examples used in the report are things like son, daughter, uncle, father. English is not a concise language for describing complex family structures, so I'll use something it is good at: company org-charts.

(diagram: Dunbar's OrgChart)

If you are a Senior Software Engineer in the Dev Team, you probably have a good idea what your relationship is with a generic QA Engineer in the QA Team. This relationship is implied in the org-chart, so you don't have to keep track of it. The QA team and engineering work together a lot, so it's pretty likely that a true personal relationship may be formed. That's cool.

Scaling a company culture begins to hit problems when you get big enough that disparate functional groups don't know each other except through Company Events. Take a structure like this one:

(diagram: Dunbar's OrgChart - bigger)

When the company is 20 people, it is entirely reasonable for one of the software engineers to personally know all of the marketing and sales team (all three of them). At 75 people, when each of these directorates has been fleshed out with full teams, and both Marketing and Engineering have split into sub-teams for sales, marketing, front-end, and back-end, it is far less reasonable for everyone to know everyone else; there is little business need for the back-end engineers to talk to the sales team other than at Company Events.

This is where the cunning Director of Culture can start building in structure to stand in for the personal relationships it is increasingly impossible to maintain. All-hands Company Events help maintain the illusion of knowing everyone else, at least on a nodding basis. Another way is to start fostering team-relationships across org-chart boundaries using non-business goals, such as shared charity events. This would allow members of the Back-End Team to have a relationship with the Sales Team, which would further allow the individual members of the teams to infer personal relationships with the other team.

This only kicks the can down the road, though.

There will come a time when it will be simply impossible for everyone to know everyone else, even with fully implicit relationships. There will be parts of the company that are black boxes, shrouded in mystery, filled with people you didn't even know existed. A 500 person company is a very different thing than a 100 person company.

As a company grows, they will encounter these inflection points:

  1. Personal relationships can't be held with everyone in the company.
  2. Implicit relationships can't be held with everyone in the company.
  3. Parts of the company are largely unknowable.

As the company gets ever larger, the same progression holds within divisions, departments, and even teams. The wise manager of culture plans for each of these inflection points.

by SysAdmin1138 at March 02, 2016 03:33 PM

March 01, 2016

OpenSSL

An OpenSSL User's Guide to DROWN

Today, an international group of researchers unveiled DROWN (Decrypting RSA with Obsolete and Weakened eNcryption), aka CVE-2016-0800, a novel cross-protocol attack that uses SSLv2 handshakes to decrypt TLS sessions.

Over the past weeks, the OpenSSL team worked closely with the researchers to determine the exact impact of DROWN on OpenSSL and devise countermeasures to protect our users. Today’s OpenSSL release makes it impossible to configure a TLS server in such a way that it is vulnerable to DROWN.

Background on DROWN

DROWN research combines brute-force decryption of deliberately weakened EXPORT-grade ciphersuites with a Bleichenbacher padding oracle exposed by an SSLv2 server to uncover TLS session keys. Bleichenbacher oracle attacks are well known and defended-against but, ironically, the attack relies on exactly these widely implemented countermeasures to succeed.

The original Bleichenbacher attack, while mathematically brilliant, is still relatively infeasible to carry out in practice, requiring the attacker to make hundreds of thousands to millions of connections to the victim server in order to compromise a single session key. This is where the research truly shines: beautiful mathematical techniques reduce the number of connections to the ten thousands, bringing the attack down to a practical level. The researchers spent a mere US$440 on the EC2 cloud platform to decrypt a victim client session in a matter of hours.

The attack works against every known SSL/TLS implementation supporting SSLv2. It is, however, particularly dangerous against OpenSSL versions predating March 2015 (more on that below). Bottom line: if you are running OpenSSL 1.0.2 (the first, no-letter release) or OpenSSL 1.0.1l or earlier, you should upgrade immediately.

The DROWN attack is nuanced and non-trivial to implement, so we will likely not see immediate exploitation. But, once implemented, the effects are chilling. DROWN attackers can decrypt sessions recorded in the past. All client sessions are vulnerable if the target server still supports SSLv2 today, irrespective of whether the client ever supported it. There is, however, a silver lining. Only plain RSA handshakes are directly vulnerable - forward-secure sessions can perhaps be compromised by a powerful man-in-the-middle attacker, but all passively recorded forward-secure sessions remain secure.

Two devastating practical findings make DROWN much more impactful than it ever should have been. First, it is surprisingly common for services to share keys. DROWN can target your HTTPS server even if only your email server supports SSLv2, so long as the two share the same private key. While 11% of HTTPS servers with browser-trusted certificates are directly vulnerable to DROWN, another whopping 11% fall victim through some other service (most commonly SMTP on port 25).

Second, in the OpenSSL security releases of March 2015, we rewrote a section of code, which coincidentally fixed a security bug (CVE-2016-0703). This bug, if present in the server, makes the DROWN attack run in just a few minutes on, well, our laptops. This reduced complexity could lead to successful real-time man-in-the-middle attacks, hijacking a session even if the client and the server would otherwise negotiate a forward-secure Diffie-Hellman ciphersuite. Much to our surprise, DROWN scans found over 4 million HTTPS servers that, almost a year later, are still unpatched. In the wake of Heartbleed the world saw a surge of upgrades, but awareness appears to have dropped since. Thus, while the following FAQ will guide you through defending your services against DROWN, we encourage you to upgrade to the latest OpenSSL even if you're not vulnerable, and to keep doing so with every security release.

DROWN FAQ for OpenSSL users

Is my server vulnerable to DROWN?

It depends on your server configuration. You can only be sure that you are not vulnerable if none of your services sharing a given private key enable SSLv2. Your secure TLS-only HTTPS server is vulnerable if you expose the same key on an email server that supports SSLv2.

Apache HTTP Server users

If you use an Apache HTTP server with keys that you don’t share with any other services, and that server runs Apache httpd 2.4.x, you are not affected (because httpd 2.4 unconditionally disabled SSLv2). If the server runs Apache httpd 2.2.x, SSLv2 is supported by default, and you are likely to be vulnerable. You should ensure that your configuration disables SSLv2 (and preferably, SSLv3 too):

SSLProtocol All -SSLv2 -SSLv3

nginx users

Versions 0.7.64, 0.8.18, and earlier enable SSLv2 by default. You should similarly disable SSLv2 in your configuration:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2

Later nginx versions have safe default configurations, but you should double-check the ssl_protocols setting in your config.

Postfix users

Postfix releases 2.9.14, 2.10.8, 2.11.6 and 3.0.2 (released on 20/Jul/2015), and all later releases, are not vulnerable in their default configuration. The recommended TLS settings for Postfix below are sufficient to avoid exposure to DROWN; many of them are defaults in sufficiently recent releases. Nevertheless, in addition to ensuring that your Postfix configuration disables SSLv2 and weak or obsolete ciphers, you should also deploy the appropriate OpenSSL upgrade.

   # Minimal recommended settings.  Whenever the built-in defaults are
   # sufficient, let the built-in defaults stand by deleting any explicit
   # overrides.  The default mandatory TLS protocols have never included
   # SSLv2; check to make sure you have not inadvertently enabled it.
   #
   smtpd_tls_protocols = !SSLv2, !SSLv3
   smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3
   tlsproxy_tls_protocols = $smtpd_tls_protocols
   tlsproxy_tls_mandatory_protocols = $smtpd_tls_mandatory_protocols

   smtp_tls_protocols = !SSLv2, !SSLv3
   smtp_tls_mandatory_protocols = !SSLv2, !SSLv3
   lmtp_tls_protocols = !SSLv2, !SSLv3
   lmtp_tls_mandatory_protocols = !SSLv2, !SSLv3

   smtpd_tls_ciphers = medium
   smtp_tls_ciphers = medium

   # Other best practices

   # Strongly recommended:
   # http://www.postfix.org/FORWARD_SECRECY_README.html#server_fs
   #
   smtpd_tls_dh1024_param_file=${config_directory}/dh2048.pem
   smtpd_tls_eecdh_grade = strong

   # Suggested, not strictly needed:
   #
   smtpd_tls_exclude_ciphers =
        EXPORT, LOW, MD5, SEED, IDEA, RC2
   smtp_tls_exclude_ciphers =
        EXPORT, LOW, MD5, aDSS, kECDHe, kECDHr, kDHd, kDHr, SEED, IDEA, RC2

Everyone

Note that if you’re running anything but the latest OpenSSL releases from January 2016 (1.0.2f and 1.0.1r), a subtle bug (CVE-2015-3197) allows the server to accept SSLv2 EXPORT handshakes even if EXPORT ciphers are not configured. Therefore, disabling weak ciphers is not a universally reliable countermeasure against DROWN; you should really disable SSLv2 entirely.

The DROWN researchers have also set up a more precise test for vulnerable services.
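
If you want a rough local self-check as well, the idea behind such tests can be sketched in a few lines of Python: send a raw SSLv2 ClientHello and see whether the server answers with an SSLv2 ServerHello. This is an illustration only, written from the SSLv2 record layout and cipher codes as commonly documented; it is no substitute for the researchers' scanner.

   import socket
   import struct

   # Three-byte SSLv2 cipher-spec codes (RC4-128, 40-bit export RC4,
   # 40-bit export RC2, 3DES), as commonly listed for the protocol.
   SSLV2_CIPHERS = bytes.fromhex("010080" "020080" "040080" "0700c0")

   def offers_sslv2(host, port=443, timeout=5):
       """Send a bare SSLv2 ClientHello; True if the server answers
       with an SSLv2 ServerHello (handshake message type 0x04)."""
       challenge = b"\x00" * 16
       body = (b"\x01"                              # ClientHello
               + b"\x00\x02"                        # version: SSLv2
               + struct.pack(">H", len(SSLV2_CIPHERS))
               + b"\x00\x00"                        # no session-id
               + struct.pack(">H", len(challenge))
               + SSLV2_CIPHERS + challenge)
       record = struct.pack(">H", 0x8000 | len(body)) + body
       try:
           with socket.create_connection((host, port), timeout=timeout) as s:
               s.sendall(record)
               resp = s.recv(3)
       except OSError:
           return False
       return len(resp) == 3 and resp[2] == 0x04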

Should I upgrade to the latest OpenSSL version?

In short: yes, everyone should upgrade. You can mitigate DROWN completely with a configuration change. But today’s release fixes a number of other vulnerabilities, and we cannot emphasize the importance of timely upgrades enough.

How do I know which OpenSSL version my system is running?

If you obtained OpenSSL directly from us (from https://www.openssl.org, or from https://github.com/openssl/openssl), run the following command to find out:

$ openssl version

If you are using the system OpenSSL provided with your Linux distribution, or obtained OpenSSL from another vendor, the version number is not a reliable indicator of the security status. Some third party distributions have a policy of only backporting selected security updates without changing to a newer version, to provide stability. Each distribution varies; you should install the updates provided by your vendor and contact them for questions about this or any other security issues.

Debian users can also track the security status of Debian releases, using Debian’s security tracker. All issues affecting OpenSSL can be found in the search by source package and information about DROWN will appear under the tracker for CVE-2016-0800.

Red Hat has an information page about this issue for their users.

I run an OpenSSL-based client application. Do I need to do anything?

DROWN is primarily a server-side vulnerability. However, as a matter of best practices, you should ensure that your client supports the highest current protocol version (TLSv1.2), and negotiates forward-secure ciphersuites.
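
For instance, one quick way to see what a Python-based client negotiates is to connect to a server you control and print the result (a sketch; the hostname and port are placeholders):

   import socket
   import ssl

   ctx = ssl.create_default_context()
   # Belt and braces: refuse the legacy protocols outright.
   ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3

   with socket.create_connection(("example.com", 443)) as sock:
       with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
           # e.g. TLSv1.2 ('ECDHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128)
           print(tls.version(), tls.cipher())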

You should also upgrade to the latest OpenSSL. (Did we say that already?)

Does DROWN compromise my private key?

No. DROWN attacks can only target individual sessions, not the server’s key. Even if there has been a successful DROWN attack against you, there is no need to regenerate your private key, so long as you can confidently identify all services that share this key, and disable SSLv2 for them.

The OpenSSL team would like to thank Nimrod Aviram, J. Alex Halderman and the entire DROWN team for their cooperation.

March 01, 2016 02:59 PM

Cryptography Engineering

Attack of the week: DROWN

To every thing there is a season. And in the world of cryptography, today we have the first signs of the season of TLS vulnerabilities.

This year's season is off to a roaring start with not one, but two serious bug announcements by the OpenSSL project, each of which guarantees that your TLS connections are much less private than you'd like them to be. I can't talk about both vulnerabilities and keep my sanity, so today I'm going to confine myself to the more dramatic of the two: a new cross-protocol attack on TLS named "DROWN".

Technically DROWN stands for "Decrypting RSA using Obsolete and Weakened eNcryption", but honestly feel free to forget that because the name itself is plenty descriptive. In short, due to a series of dumb mistakes on the part of a vast number of people, DROWN means that TLS connections to a depressingly huge slice of the web (and mail servers, VPNs etc.) are essentially open to attack by fairly modest adversaries.

So that's bad news. The worse news -- as I'll explain below -- is that this whole mess was mostly avoidable.

For a detailed technical explanation of DROWN, you should go read the complete technical paper by Aviram et al. Or visit the DROWN team's excellent website. If that doesn't appeal to you, read on for a high level explanation of what DROWN is all about, and what it means for the security of the web. Here's the TL;DR:
If you're running a web server configured to use SSLv2, and particularly one that's running OpenSSL (even with all SSLv2 ciphers disabled!), you may be vulnerable to a fast attack that decrypts many recorded TLS connections made to that box. Most worryingly, the attack does not require the client to ever make an SSLv2 connection itself, and it isn't a downgrade attack. Instead, it relies on the fact that SSLv2 -- and particularly the legacy "export" ciphersuites it incorporates -- are pure poison, and simply having these active on a server is enough to invalidate the security of all connections made to that device.
For the rest of this post I'll use the "fun" question and answer format I save for this kind of attack. First, some background.
What are TLS and SSLv2, and why should I care?
Transport Layer Security (TLS) is the most important security protocol on the Internet. You should care about it because nearly every transaction you conduct on the Internet relies on TLS (v1.0, 1.1 or 1.2) to some degree, and failures in TLS can flat out ruin your day.

But TLS wasn't always TLS. The protocol began its life at Netscape Communications under the name "Secure Sockets Layer", or SSL. Rumor has it that the first version of SSL was so awful that the protocol designers collected every printed copy and buried them in a secret New Mexico landfill site. As a consequence, the first public version of SSL is actually SSL version 2. It's pretty terrible as well -- but not (entirely) for the reasons you might think.

Let me explain.

Working group last call, SSL version 2.
The reason you might think SSLv2 is terrible is because it was a product of the mid-1990s, which modern cryptographers view as the "dark ages of cryptography". Many of the nastier cryptographic attacks we know about today had not yet been discovered. As a result, the SSLv2 protocol designers were forced to essentially grope their way in the dark, and so were frequently devoured by grues -- to their chagrin and our benefit, since the attacks on SSLv2 offered priceless lessons for the next generation of protocols.

And yet, these honest mistakes are not the worst thing about SSLv2. The most truly awful bits stem from the fact that the SSLv2 designers were forced to ruin their own protocol. This was the result of needing to satisfy the U.S. government's misguided attempt to control the export of cryptography. Rather than using only secure encryption, the designers were forced to build in a series of "export-grade ciphersuites" that offered abysmal 40-bit session keys and other nonsense. I've previously written about the effect of export crypto on today's security. Today we'll have another lesson.
Wait, isn't SSLv2 ancient history?
For some time in the early 2000s, SSLv2 was still supported by browsers as a fallback protocol, which meant that active attackers could downgrade an SSLv3 or TLS connection by tricking a browser into using the older protocol. Fortunately those attacks are long gone now: modern web browsers have banished SSLv2 entirely -- along with export cryptography in general. If you're using a recent version of Chrome, IE or Safari, you should never have to worry about accidentally making an SSLv2 connection.

The problem is that while clients (such as browsers) have done away with SSLv2, many servers still support the protocol. In most cases this is the result of careless server configuration. In others, the blame lies with crummy and obsolete embedded devices that haven't seen a software update in years -- and probably never will. (You can see if your server is vulnerable here.)

And then there's the special case of OpenSSL, which helpfully provides a configuration option that's intended to disable SSLv2 ciphersuites -- but which, unfortunately, does no such thing. In the course of their work, the DROWN researchers discovered that even when this option is set, clients may still request arbitrary SSLv2 ciphersuites. (This issue was quietly patched in January. Upgrade.)

The reason this matters is that SSL/TLS servers do a very silly thing. You see, since people don't like to buy multiple certificates, a server that's configured to use both TLS and SSLv2 will generally use the same RSA private key to support both protocols. This means any bugs in the way SSLv2 handles that private key could very well affect the security of TLS.

And this is where DROWN comes in.
So what is DROWN?
DROWN is a classic example of a "cross protocol attack". This type of attack makes use of bugs in one protocol implementation (SSLv2) to attack the security of connections made under a different protocol entirely -- in this case, TLS. More concretely, DROWN is based on the critical observation that while SSLv2 and TLS both support RSA encryption, TLS properly defends against certain well-known attacks on this encryption -- while SSLv2's export suites emphatically do not.

I will try to make this as painless as possible, but here we need to dive briefly into the weeds.

You see, both SSLv2 and TLS use a form of RSA encryption padding known as RSA-PKCS#1v1.5. In the late 1990s, a man named Daniel Bleichenbacher proposed an amazing attack on this encryption scheme that allows an attacker to decrypt an RSA ciphertext efficiently -- under the sole condition that they can ask an online server to decrypt many related ciphertexts, and give back only one bit of information for each one -- namely, the bit representing whether decryption was successful or not.

Bleichenbacher's attack proved particularly devastating for SSL servers, since the standard SSL RSA-based handshake involves the client encrypting a secret (called the Pre-Master Secret, or PMS) under the server's RSA public key, and then sending this value over the wire. An attacker who eavesdrops the encrypted PMS can run the Bleichenbacher attack against the server, sending it thousands of related values (in the guise of new SSL connections), and using the server's error responses to gradually decrypt the PMS itself. With this value in hand, the attacker can compute SSL session keys and decrypt the recorded SSL session.

A nice diagram of the SSL RSA handshake, courtesy Cloudflare (who don't know I'm using it, thanks guys!)
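
The "related values" come from RSA's multiplicative structure: given a captured ciphertext c, the attacker can blind it with s^e and have the server decrypt s*m instead of m. A single oracle query can be sketched like this; the names (n and e are the server's public key, oracle() stands for whatever server behaviour leaks the padding check) are illustrative, not from any real library:

   def bleichenbacher_probe(c, s, n, e, oracle):
       """One query of the attack: blind the captured ciphertext c by
       s^e mod n, so the server decrypts s*m mod n; oracle() returns
       the one bit 'was that PKCS#1 v1.5 conformant?'."""
       c_blinded = (c * pow(s, e, n)) % n
       return oracle(c_blinded)
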
The main SSL/TLS countermeasure against Bleichenbacher's attack is basically a hack. When the server detects that an RSA ciphertext has decrypted improperly, it lies. Instead of returning an error, which the attacker could use to implement the attack, it generates a random pre-master secret and continues with the rest of the protocol as though this bogus value was what it actually decrypted. This causes the protocol to break down later on down the line, since the server will compute essentially a random session key. But it's sufficient to prevent the attacker from learning whether the RSA decryption succeeded or not, and that kills the attack dead.
Anti-Bleichenbacher countermeasure from the TLS 1.2 spec.
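
In code, the countermeasure looks roughly like this sketch (not the spec's exact pseudocode; rsa_decrypt is a hypothetical helper that raises on a padding failure, and 48 bytes is the TLS pre-master secret length):

   import os

   def handle_client_key_exchange(ciphertext, rsa_decrypt):
       """Never reveal whether RSA unpadding failed: substitute a
       random pre-master secret and let the handshake die later."""
       pms = os.urandom(48)                 # the lie, prepared up front
       try:
           decrypted = rsa_decrypt(ciphertext)
           if len(decrypted) == 48:
               pms = decrypted
       except ValueError:
           pass                             # padding error: keep the random value
       return pms
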
Now let's take a moment to reflect and make an observation.

If the attacker sends a valid RSA ciphertext to be decrypted, the server will decrypt it and obtain some PMS value. If the attacker sends the same valid ciphertext a second time, the server will decrypt and obtain the same PMS value again. Indeed, the server will always get the same PMS even if the attacker sends the same valid ciphertext a hundred times in a row.

On the other hand, if the attacker repeatedly sends the same invalid ciphertext, the server will choose a different PMS every time. This observation is crucial.

In theory, if the attacker holds a ciphertext that might be valid or invalid -- and the attacker would like to know which is true -- they can send the same ciphertext to be decrypted repeatedly. This will lead to two possible conditions. In condition (1), where the ciphertext is valid, decryption will produce the "same PMS every time". Condition (2), for an invalid ciphertext, will produce a "different PMS each time". If the attacker could somehow tell the difference between condition (1) and condition (2), they could determine whether the ciphertext was valid. That determination alone would be enough to resurrect the Bleichenbacher attack. Fortunately in TLS, the PMS is never used directly; it's first passed through a strong hash function and combined with a bunch of random nonces to obtain a Master Secret. That result is then fed into further strong ciphers and hash functions. Thanks to the strength of the hash function and ciphers, the resulting keys are so garbled that the attacker literally cannot tell whether she's observing condition (1) or (2).

And here we finally run into the problem of SSLv2.

You see, SSLv2 implementations include a similar anti-Bleichenbacher countermeasure. Only here there are some key differences. In SSLv2 there is no PMS -- the encrypted value is used as the Master Secret and employed directly to derive the encryption session key. Moreover, in export modes, the Master Secret may be as short as 40 bits, and used with correspondingly weak export ciphers. This means an attacker can send multiple ciphertexts, then brute-force the resulting short keys. After recovering these keys for a tiny number of sessions, they will be able to determine whether they're in condition (1) or (2). This would effectively resurrect the Bleichenbacher attack.
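
To see why 40-bit keys are hopeless, here is the entire brute-force step in outline. It is a sketch: check_key is a hypothetical predicate (say, a trial decryption checked against known plaintext), and a serious attacker would run this in C or on GPUs rather than in Python.

   def brute_force_40bit(check_key):
       """Walk the whole 2**40 keyspace; feasible on rented hardware."""
       for k in range(2**40):
           key = k.to_bytes(5, "big")
           if check_key(key):
               return key
       return None
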
This still sounds like an attack on SSLv2, not on TLS. What am I missing? 
SSLv2 export ciphers.
And now we come to the full horror of SSLv2.

Since most servers configured with both SSLv2 and TLS support will use the same RSA private key for decrypting sessions from either protocol, a Bleichenbacher attack on the SSLv2 implementation -- with its vulnerable crappy export ciphersuites -- can be used to decrypt the contents of a normal TLS-based RSA ciphertext. After all, both protocols are using the same darned secret key. Due to formatting differences in the RSA ciphertext between the two protocols, this attack doesn't work all the time -- but it does work for approximately one out of a thousand TLS handshakes.

To put things succinctly: with access to a whole hell of a lot of computation, an attacker can intercept a TLS connection, then at their leisure make many thousands of queries to the SSLv2-enabled server, and decrypt that connection. The "general DROWN" attack actually requires watching about 1,000 TLS handshakes to find a vulnerable RSA ciphertext, about 40,000 queries to the server, and about 2^50 offline operations.
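
As a back-of-envelope sanity check on that 2^50 figure (the throughput numbers below are assumptions of mine, not figures from the paper):

   ops = 2**50            # offline work quoted for general DROWN
   rate = 10**9           # assume ~1e9 symmetric-crypto ops/sec per core
   cores = 200            # a modest rented cluster
   print(ops / (rate * cores) / 3600, "hours")   # roughly 1.6 hours
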
LOL. That doesn't sound practical at all. You cryptographers suck. 
First off, that isn't really a question, it's more of a rude statement. But since this is exactly the sort of reaction cryptographers often get when they point out perfectly practical theoretical attacks on real protocols, I'd like to take a moment to push back.

While the attack described above seems costly, it can be conducted in several hours, for about $440, on Amazon EC2. Are your banking credentials worth $440? Probably not. But someone else's probably are. Given all the things we have riding on TLS, it's better for it not to be broken at all.

More urgently, the reason cryptographers spend time on "impractical attacks" is that attacks always get better. And sometimes they get better fast.

The attack described above is called "General DROWN" and yes, it's a bit impractical. But in the course of writing just this single paper, the DROWN researchers discovered a second variant of their attack that's many orders of magnitude faster than the general one described above. This attack, which they call "Special DROWN" can decrypt a TLS RSA ciphertext in about one minute on a single CPU core.

This attack relies on a bug in the way OpenSSL handles SSLv2 key processing, a bug that was (inadvertently) fixed in March 2015 but remains unpatched on millions of servers across the Internet. The Special DROWN bug puts DROWN squarely in the domain of script kiddies, for thousands of websites across the Internet.
So how many sites are vulnerable?
This is probably the most depressing part of the entire research project. According to wide-scale Internet scans run by the DROWN researchers, more than 2.3 million HTTPS servers with browser-trusted certificates are vulnerable to special DROWN, and 3.5 million HTTPS servers are vulnerable to General DROWN. That's a sizeable chunk of the encrypted web, including a surprising chunk of the Chinese and Colombian Internet.

And while I've focused on the main attacks in this post, it's worth pointing out that DROWN also affects other protocol suites, like TLS with ephemeral Diffie-Hellman and even Google's QUIC. So these vulnerabilities should not be taken lightly.

If you want to know whether your favorite site is vulnerable, you can use the DROWN researchers' handy test.
What happens now?
In January, OpenSSL patched the bug that allows the SSLv2 ciphersuites to remain alive. Last March, the project inadvertently fixed the bug that makes Special DROWN possible. But that's hardly the end. The patch they're announcing today is much more direct: hopefully it will make it impossible to turn on SSLv2 altogether. This will solve the problem for everyone... at least for everyone willing to patch. Which, sadly, is unlikely to be anywhere near enough.

More broadly, attacks like DROWN illustrate the cost of having old, vulnerable protocols on the Internet. And they show the terrible cost that we're still paying for export cryptography systems that introduced deliberate vulnerabilities in encryption so that intelligence agencies could pursue a small short-term advantage -- at the cost of long-term security.

Given that we're currently in the midst of a very important discussion about the balance of short- and long-term security, let's hope that we won't make the same mistake again.

by Matthew Green (noreply@blogger.com) at March 01, 2016 02:36 PM

The Lone Sysadmin

Big Trouble in Little Changes

I was making a few changes today when I ran across this snippet of code. It bothers me.

   /bin/mkdir /var/lib/docker
   /bin/mount /dev/Volume00/docker_lv /var/lib/docker
   echo "/dev/Volume00/docker_lv /var/lib/docker ext4 defaults 1 2" >> /etc/fstab

“Why does it bother you, Bob?” you might ask. “They’re just mounting a filesystem.” My problem is that any change that affects booting […]

The post Big Trouble in Little Changes appeared first on The Lone Sysadmin. Head over to the source to read the full post!

by Bob Plankers at March 01, 2016 05:49 AM

February 29, 2016

Carl Chenet

Making your community manager's life easier with the Twitter Out Of The Browser project

To exist in the media today, you have to be present on several social networks, and you have to feed each of them with content daily if you hope to build an audience.

Fortunately for you, or for your community manager, the Twitter Out Of The Browser project can greatly reduce that workload for the Twitter network.

The Twitter Out Of The Browser (TOOTB) project is a suite of tools that automates most of the actions that make up your presence on the Twitter social network. The TOOTB tools are written in Python and are properly documented on Readthedocs. Today the TOOTB project comprises the following programs:

  • Feed2tweet automatically forwards an RSS feed to the Twitter network
  • db2twitter extracts content from a database (several are supported), builds a tweet from it and posts it to Twitter
  • Retweet increases the visibility of your tweets by retweeting a tweet, according to various criteria, across different Twitter accounts
  • Twitterwatch monitors the age of the latest tweet to check that your Twitter account is being fed regularly, letting you spot a failure (for example in feed2tweet or db2twitter, or with the Twitter account itself)

We will walk through a concrete example of chaining these tools together, namely their use by LinuxJobs.fr, the job board of the Free and open source software community.

1. Content to Twitter with feed2tweet and db2twitter

1.1 db2twitter to exploit your database

LinuxJobs.fr is a job board that relays the offers recruiters post there to the Twitter social network. For this it uses db2twitter, which fetches several fields from a table in a MySQL database in order to build a tweet.

db2twitter depends on SQLAlchemy and therefore supports all the database types that project supports. The official db2twitter documentation details the possible configurations, but here is the concrete section that fetches the data to post to Twitter:

[twitter]
...
tweet={} recrute un {} https://www.linuxjobs.fr/jobs/{}
[database]
...
dbtables=offres,
jobs_rows=entreprise,titre,id
  • In the [twitter] section, we provide the tweet template that will be posted, with 3 placeholders to be filled with data coming from the database.
  • In the [database] section, we define the table(s) to query, then the exact fields containing the data to inject into the tweet.

db2twitter supports a large number of databases, and its very flexible syntax lets you fetch information from deep inside your databases without having to bother with some middleware to reshape your data before sending it to Twitter.
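
As a rough illustration of what such a tool does under the hood (this is not db2twitter's actual code; the connection string and schema are hypothetical, mirroring the config above):

   from sqlalchemy import create_engine, text

   TWEET = "{} recrute un {} https://www.linuxjobs.fr/jobs/{}"

   # Hypothetical MySQL DSN; db2twitter supports anything SQLAlchemy does.
   engine = create_engine("mysql+pymysql://user:secret@localhost/linuxjobs")

   with engine.connect() as conn:
       rows = conn.execute(text("SELECT entreprise, titre, id FROM offres"))
       for entreprise, titre, job_id in rows:
           print(TWEET.format(entreprise, titre, job_id))  # post instead of print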

1.2 Feed2tweet to exploit your RSS feeds

While LinuxJobs.fr uses db2twitter to post job offers from its database to Twitter, the LinuxJobs.fr blog runs a database-less blog engine that offers its readers several RSS feeds. The articles feed is forwarded to Twitter with Feed2tweet.

Feed2tweet is a fork of rss2twitter that aims to document the project and extend its features. Feed2tweet takes as input a very simple configuration file, whose most interesting part is the following setting:

uri: https://blog.linuxjobs.fr/feed.php?rss

Your configuration file will still need to be a little longer than that; the official feed2tweet documentation describes the various parameters to set or add, in particular the possible handling of tags, since Twitter does not cope with multi-word hashtags.

2. Increase the visibility of your tweets with Retweet

Unfortunately, posting content on a social network is no longer enough to make your account known. In this era of massive mainstream use of social networks, and for Twitter in particular, it is important that your tweets get picked up by accounts with a large number of followers, in order to publicize your content and attract new followers to your own account. This can happen organically over time, or, if you already own one or more accounts whose followers might be interested in the content of your new project, you can use the Retweet project. That is what we did with the LinuxJobs.fr Twitter account.

Retweet can automatically retweet every tweet coming from an account, or only the subset of tweets that meet certain criteria, such as:

  • the number of retweets a tweet has already received
  • tweets carrying one or more hashtags
  • tweets older than a given number of minutes
  • and many more…

Consult the official Retweet documentation for the full set of available criteria.
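
A minimal filter in this spirit might look like the following sketch (illustrative only, not Retweet's actual code; the tweet object shape follows the classic Twitter v1.1 REST API as exposed by libraries such as tweepy):

   import datetime

   def should_retweet(tweet, hashtag="linuxjobs", min_age_minutes=10):
       """Retweet only tweets carrying a given hashtag that are at
       least min_age_minutes old (created_at assumed naive UTC)."""
       age = datetime.datetime.utcnow() - tweet.created_at
       tags = {h["text"].lower() for h in tweet.entities.get("hashtags", [])}
       return (hashtag.lower() in tags
               and age >= datetime.timedelta(minutes=min_age_minutes))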

In the specific case of LinuxJobs.fr, the site was relayed through the founders' Twitter accounts, which had a large number of followers interested in the Free Software community. Seeing job offers for the FOSS community appear, interested people naturally followed the LinuxJobs.fr Twitter account.

3. Monitor your Twitter activity with Twitterwatch

The tools presented so far push content to Twitter. But how do you automatically detect an anomaly in the pipeline feeding your Twitter account? Staying with our LinuxJobs.fr example, it is essential to detect when the Twitter account stops being fed with new tweets for new job offers. Twitterwatch handles this by checking the age of the latest post and verifying that it stays within user-defined limits; otherwise, an email alert is sent.
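
The core check is simple enough to sketch (again illustrative, not Twitterwatch's real code; it assumes an authenticated tweepy API object and the v1.1 timeline call):

   import datetime

   MAX_AGE = datetime.timedelta(hours=6)   # user-defined threshold

   def last_tweet_is_stale(api, screen_name):
       """True if the newest tweet on the account is older than MAX_AGE
       (api is an authenticated tweepy.API; created_at assumed naive UTC)."""
       latest = api.user_timeline(screen_name=screen_name, count=1)[0]
       return datetime.datetime.utcnow() - latest.created_at > MAX_AGE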

The project is only just starting out, but it has already let us identify several failures in our automated tweeting pipeline. A bit like functional tests in a programming project, Twitterwatch lets you test the very end of your production chain and check the final result directly.

4. Should you fire your community manager?

Of course not. On the contrary, the point of the Twitter Out Of The Browser tool suite is to make your community manager's life easier, or more simply to relieve you of repetitive and therefore tedious tasks that also waste your time when you are carrying every aspect of your new project or startup launch yourself. TOOTB is already in production on several online services. We only await your feedback to improve the existing tools or to integrate new ones.

Do you think you could use this kind of tooling? We would be curious to hear your feedback in the comments on this post.

by Carl Chenet at February 29, 2016 11:00 PM

February 27, 2016

Raymii.org

Active Directory and Exchange Command Line Powershell

This is a collection of Powershell snippets to install Active Directory, create a new Active Directory domain, join an existing Active Directory domain, and install Microsoft Exchange 2013. The snippets were tested on Windows Server 2012 R2.

February 27, 2016 12:00 AM