Planet SysAdmin


Upcoming sysadmin conferences/events

Contact me to have your event listed here.


July 29, 2014

Everything Sysadmin

Honest Life Hacks

I usually don't blog about "funny web pages" I found but this is relevant to the blog. People often forward me these "amazing life hacks that will blow your mind" articles because of the Time Management Book.

First of all, I shouldn't have to tell you that these are linkbait (warning: autoplay).

Secondly, here's a great response to all of these: Honest Life Hacks.

July 29, 2014 04:02 PM

Racker Hacker

Adventures in live booting Linux distributions

We’re all familiar with live booting Linux distributions. Almost every Linux distribution under the sun has a method for making live CDs, writing live USB sticks, or booting live images over the network. The primary use case for some distributions is on a live medium (like KNOPPIX).

However, I embarked on an adventure to look at live booting Linux for a different use case. Sure, many live environments are used for demonstrations or installations — temporary activities for a desktop or a laptop. My goal was to find a way to boot a large fleet of servers with live images. These would need to be long-running, stable, feature-rich, and highly configurable live environments.

Finding off-the-shelf solutions wasn’t easy. Finding cross-platform off-the-shelf solutions for live booting servers was even harder. I worked on a solution with a coworker to create a cross-platform live image builder that we hope to open source soon. (I’d do it sooner but the code is horrific.) ;)

Debian jessie (testing)

First off, we took a look at Debian’s Live Systems project. It consists of two main parts: something to build live environments, and something to help live environments boot well off the network. At the time of this writing, the live build process leaves a lot to be desired. There’s a peculiar tree of directories required to get started, and although there’s a fair amount of documentation available, it’s difficult to follow and seems to skip some critical details. (In all fairness, I’m an experienced Debian user but I haven’t gotten into the innards of Debian package/system development yet. My shortcomings there could be the cause of my problems.)

The second half of the Live Systems project consists of multiple packages that help with the initial boot and configuration of a live instance. These tools work extremely well. Version 4 (currently in alpha) has tools for doing all kinds of system preparation very early in the boot process and it’s compatible with SysVinit or systemd. The live images boot up with a simple SquashFS (mounted read only) and they use AUFS to add on a writeable filesystem that stays in RAM. Reads and writes to the RAM-backed filesystem are extremely quick and you don’t run into a brick wall when the filesystem fills up (more on that later with Fedora).

Ubuntu 14.04

Ubuntu uses casper, which seems to predate Debian’s Live Systems project, or it could be a fork (please correct me if I’m wrong). Either way, it seemed a bit less mature than Debian’s project and left a lot to be desired.

Fedora and CentOS

Fedora 20 and CentOS 7 are very close in software versions and they use the same mechanisms to boot live images. They use dracut to create the initramfs and there are a set of dmsquash modules that handle the setup of the live image. The livenet module allows the live images to be pulled over the network during the early part of the boot process.

Building the live images is a little tricky. You’ll find good documentation and tools for standard live bootable CDs and USB sticks, but booting a server isn’t as straightforward. Dracut expects to find a squashfs which contains a filesystem image. When the live image boots, that filesystem image is connected to a loopback device and mounted read-only. A snapshot is made via device mapper that gives you a small overlay for adding data to the live image.
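
To give a rough idea of how the pieces fit together, a network live boot driven by the livenet and dmsquash modules is controlled by kernel command line arguments along these lines (an illustrative sketch rather than a tested configuration; the server name and image path are placeholders):

label live-server
  kernel vmlinuz
  append initrd=initrd.img ip=dhcp root=live:http://imageserver.example.com/images/squashfs.img rd.live.image rd.live.ram=1

Here rd.live.image tells dracut to treat the root as a live image, the root=live: URL is fetched by the livenet module early in boot, and rd.live.ram=1 asks dracut to copy the image into RAM.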

This overlay comes with some caveats. Keeping tabs on how quickly the overlay is filling up can be tricky. Using tools like df is insufficient since device mapper snapshots are concerned with blocks. As you write 4k blocks in the overlay, you’ll begin to fill the snapshot, just as you would with an LVM snapshot. When the snapshot fills up and there are no blocks left, the filesystem in RAM becomes corrupt and unusable. There are some tricks to force it back online but I didn’t have much luck when I tried to recover. The only solution I could find was to hard reboot.
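
If you want to keep an eye on the overlay anyway, the device mapper tools are more honest than df here. On Fedora-style live images the snapshot device is typically named live-rw (that name is an assumption about your particular setup; dmsetup ls will show what is actually there), and the snapshot target reports allocated versus total sectors:

$ sudo dmsetup ls
$ sudo dmsetup status live-rw

Watching the allocated/total figure from dmsetup status gives you some warning before the overlay fills up and the snapshot is invalidated.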

Arch

The ArchLinux live boot environments seem very similar to the ones I saw in Fedora and CentOS. All of them use dracut and systemd, so this makes sense. Arch once used a project called Larch to create live environments but it’s fallen out of support due to AUFS2 being removed (according to the wiki page).

Although I didn’t build a live environment with Arch, I booted one of their live ISOs and found their live environment to be much like Fedora and CentOS. There was a device mapper snapshot available as an overlay and once it’s full, you’re in trouble.

OpenSUSE

The path to live booting an OpenSUSE image seems quite different. The live squashfs is mounted read-only onto /read-only. An ext3 filesystem is created in RAM and is mounted on /read-write. From there, overlayfs is used to lay the writeable filesystem on top of the read-only squashfs. You can still fill up the overlay filesystem and cause some temporary problems, but you can back out those errant files and still have a usable live environment.
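
The layering step itself looks roughly like this (a sketch using the pre-mainline overlayfs option names; the version that later went upstream also wants a workdir, so treat the exact mount options as an assumption):

$ mount -t overlayfs -o lowerdir=/read-only,upperdir=/read-write overlayfs /mnt/newroot

with the read-only squashfs as the lower layer and the RAM-backed ext3 filesystem as the writeable upper layer.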

Here’s the problem: overlayfs was given the green light for consideration in the Linux kernel by Linus in 2013. It’s been proposed for several kernel releases and it didn’t make it into 3.16 (which will be released soon). OpenSUSE has wedged overlayfs into their kernel tree just as Debian and Ubuntu have wedged AUFS into theirs.

Wrap-up

Building highly customized live images isn’t easy and running them in production makes it more challenging. Once the upstream kernel has a stable, solid, stackable filesystem, it should be much easier to operate a live environment for extended periods. There has been a parade of stackable filesystems over the years (remember funion-fs?) but I’ve been told that overlayfs seems to be a solid contender. I’ll keep an eye out for those kernel patches to land upstream but I’m not going to hold my breath quite yet.

Adventures in live booting Linux distributions is a post from: Major Hayden's blog.


by Major Hayden at July 29, 2014 01:05 PM

The Nubby Admin

I Don’t Always Mistype My Password

It never fails. At the last two or three characters of a password, I’ll strike two keys at the same time. Maybe I pressed both, maybe I only pressed one. Whatever happened, certainly it doesn’t make any sense to press return and roll the dice. I’ll either get in or fail and have to type it all over again, and sysadmins can’t admit failure nor can we entertain the possibility of failing!!

So what's the most logical maneuver? Backspace 400 times to make sure you got it all and try over. Make sure you fumble the password a few more times in a row so you can get a nice smooth spot worn on your backspace key.


by WesleyDavid at July 29, 2014 11:12 AM

Chris Siebenmann

FreeBSD, cultural bad blood, and me

I set out to write a reasoned, rational elaboration of a tweet of mine, but in the course of writing it I've realized that I have some of those sticky human emotions involved too, much like my situation with Python 3. What it amounts to is that in addition to my rational reasons I have some cultural bad blood with FreeBSD.

It certainly used to be the case that a vocal segment of *BSD people, FreeBSD people among them, were elitists who looked down their noses at Linux (and sometimes other Unixes too, although that was usually quieter). They would say that Linux was not a Unix. They would say that Linux was clearly used by people who didn't know any better or who didn't have any taste. There was a fashion for denigrating Linux developers (especially kernel developers) as incompetents who didn't know anything. And so on; if you were around at the right time you probably can think of other things. In general these people seemed to venerate the holy way of UCB BSD and find little or no fault in it. Often these people believed (and propagated) other Unix mythology as well.

(This sense of offended superiority is in no way unique to *BSD people, of course. Variants have happened all through computing's history, generally from the losing side of whatever shift is going on at the time. The *BSD attitude of 'I can't believe so many people use this stupid Linux crud' echoes the Lisp reaction to Unix and the Unix reaction to Windows and Macs, at least (and the reaction of some fans of various commercial Unixes to today's free Unixes).)

This whole attitude irritated me for various reasons and made me roll my eyes extensively; to put it one way, it struck me as more religious than well informed and balanced. To this day I cannot completely detach my reaction to FreeBSD from my association of it with a league of virtual greybeards who are overly and often ignorantly attached to romantic visions of the perfection of UCB BSD et al. FreeBSD is a perfectly fine operating system and there is nothing wrong with it, but I wish it had kept different company in the late 1990s and early to mid 00s. Even today there is a part of me that doesn't want to use 'their' operating system because some of the company I'd be keeping would irritate me.

(FreeBSD keeping different company was probably impossible, though, because of where the Unix community went.)

(I date this *BSD elitist attitude only through the mid 00s because of my perception that it's mostly died down since then. Hopefully this is an accurate perception and not due to selective 'news' sources.)

by cks at July 29, 2014 03:33 AM

July 28, 2014

Chris Siebenmann

An interesting picky difference between Bourne shells

Today we ran into an interesting bug in one of our internal shell scripts. The script had worked for years on our Solaris 10 machines, but on a new OmniOS fileserver it suddenly reported an error:

script[77]: [: 232G: arithmetic syntax error

Cognoscenti of ksh error messages have probably already recognized this one and can tell me the exact problem. To show it to everyone else, here is line 77:

if [ "$qsize" -eq "none" ]; then
   ....

In a strict POSIX shell, this is an error because test's -eq operator is specifically for comparing numbers, not strings. What we wanted is the = operator.
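
With the string comparison operator, line 77 simply becomes:

if [ "$qsize" = "none" ]; then
   ....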

What makes this error more interesting is that the script had been running for some time on the OmniOS fileserver without this error. However, until now the $qsize variable had always had the value 'none'. So why hadn't it failed earlier? After all, 'none' (on either side of the expression) is just as much not a number as '232G' is.

The answer is that this is a picky difference between shells in terms of how they actually behave. Bash, for example, always complains about such misuse of -eq; if either side is not a number you get an error saying 'integer expression expected' (as does Dash, with a slightly different error). But on our OmniOS, /bin/sh is actually ksh93 and ksh93 has a slightly different behavior. Here:

$ [ "none" -eq "none" ] && echo yes
yes
$ [ "bogus" -eq "none" ] && echo yes
yes
$ [ "none" -eq 0 ] && echo yes
yes
$ [ "none" -eq "232G" ] && echo yes
/bin/sh: [: 232G: arithmetic syntax error

The OmniOS version of ksh93 clearly has some sort of heuristic about number conversions such that strings with no numbers are silently interpreted as '0'. Only invalid numbers (as opposed to things that aren't numbers at all) produce the 'arithmetic syntax error' message. Bash and dash are both more straightforward about things (as is the FreeBSD /bin/sh, which is derived from ash).

Update: my description isn't actually what ksh93 is doing here; per opk's comment, it's actually interpreting the none and bogus as variable names and giving them a value of 0 when unset.

Interestingly, the old Solaris 10 /bin/sh seems to basically be calling atoi() on the arguments for -eq; the first three examples work the same, the fourth is silently false, and '[ 232 -eq 232G ]' is true. This matches the 'let's just do it' simple philosophy of the original Bourne shell and test program and may be authentic original V7 behavior.

(Technically this is a difference in test behavior, but test is a builtin in basically all Bourne shells these days. Sometimes the standalone test program in /bin or /usr/bin is actually a shell script to invoke the builtin.)

by cks at July 28, 2014 06:19 PM

Standalone Sysadmin

Kerbal Space System Administration

I came to an interesting cross-pollination of ideas yesterday while talking to my wife about what I'd been doing lately, and I thought you might find it interesting too.

I've been spending some time lately playing video games. In particular, I'm especially fond of Kerbal Space Program, a space simulation game where you play the role of spaceflight director of Kerbin, a planet populated by small, green, mostly dumb (but courageous) people known as Kerbals.

Initially the game was a pure sandbox, as in, "You're in a planetary system. Here are some parts. Go knock yourself out", but recent additions to the game include a career mode in which you explore the star system and collect "science" points for doing sensor scans, taking surface samples, and so on. It adds a nice "reason" to go do things, and I've been working on building out more efficient ways to collect science and get it back to Kerbin.

Part of the problem is that when you use your sensors, whether they detect gravity, temperature, or materials science, you often lose a large percentage of the data when you transmit it back, rather than deliver it in ships - and delivering things in ships is expensive.

There is an advanced science lab called the MPL-LG-2 which allows greater fidelity in transmitted data, so my recent work in the game has been to build science ships which consist of a "mothership" with a lab, and a smaller lightweight lander craft which can go around whatever body I'm orbiting and collect data to bring to the mothership. It's working pretty well.

At the same time, I'm working on building out a collectd infrastructure that can talk to my graphite installation. It's not as easy as I'd like because we're standardized on Ubuntu Precise, which only has collectd 4.x, and the write_graphite plugin began with collectd 5.1.

To give you background, collectd is a daemon that runs and collects information, usually from the local machine, but there are an array of plugins to collect data from any number of local or remote sources. You configure collectd to collect data, and you use a write_* plugin to get that data to somewhere that can do something with it.

It was in the middle of explaining these two things - KSP's science missions and collectd - that I saw the amusing parity between them. In essence, I'm deploying science ships around my infrastructure to make it easier to get science back to my central repository so that I advance my own technology. I really like how analogous they are.

I talked about doing the collectd work on Twitter, and Martijn Heemels expressed interest in what I was doing, since he would also like write_graphite on Precise. I figured that other people probably want to get in on the action too, so to speak. I could give you the package I made, or I could show you how I made it. That sounds more fun.

Like all good things, this project involves software from Jordan Sissel - namely fpm, effing package management. Ever had to make packages and deal with spec files, control files, or esoteric rulesets that made you go into therapy? Not anymore!

So first we need to install it, which is easy, because it's a gem:


$ sudo gem install fpm

Now, let's make a place to stage files before they get packaged:

$ mkdir ~/collectd-package

And grab the source tarball and untar it:

$ wget https://collectd.org/files/collectd-5.4.1.tar.gz
$ tar zxvf collectd-5.4.1.tar.gz
$ cd collectd-5.4.1/

(If you're reading this later, make sure to go to collectd.org and get the newest version, not the one I have listed here.)

Configure the Makefile, just like you did when you were a kid:

$ ./configure --enable-debug --enable-static --with-perl-bindings="PREFIX=/usr"

Hat tip to Mike Julian, who let me know that you can't actually enable debugging in the collectd tool unless you use the flag here, so save yourself some heartbreak by turning that on. Also, if I'm going to be shipping this around, I want to make sure that it's compiled statically, and for whatever reason, I found that the perl bindings were sad unless I added that flag.

Now we compile:

$ make

Now we "install":

$ make DESTDIR="/home/YOURUSERNAME/collectd-package" install

I've found that the install script is very grumpy about relative directory names, so I appeased it by giving it the full path to where the things would be dropped (the directory we created earlier).

We're going to be using a slightly customized init script. I took this from the version that comes with the precise 4.x collectd installation and added a prefix variable that can be changed. We didn't change the installation directories above, so by default, everything is going to eventually wind up in /opt/collectd/ and the init script needs to know about that:

$ cd ~
$ mkdir -p collectd-package/etc/init.d/
$ wget --no-check-certificate -O collectd-package/etc/init.d/collectd http://bit.ly/1mUaB7G
$ chmod +x collectd-package/etc/init.d/collectd

This is pulling in the file from this gist.

Now, we're finally ready to create the package:

fakeroot fpm -t deb -C collectd-package/ --name collectd \
--version 5.4.1 --iteration 1 --depends libltdl7 -s dir opt/ usr/ etc/

Since you may not be familiar with fpm, some of the options are obvious, but for the ones that aren't: -C changes to the given directory before looking for files, --version is the version of the software, while --iteration is the version of the package. If you package this, deploy it, and then find a bug in the packaging, you increment the iteration flag when you package it again after fixing the problem, and your package management can treat it as an upgrade. The --depends is a library that collectd needs on the end systems. -s sets the source type to "directory", and then we give it a list of directories to include (remembering that we've changed directories with the -C flag).

Also, this was my first foray into the world of fakeroot, which you should probably read about if you run Debian-based systems.

At this point, in the current directory, there should be "collectd_5.4.1-1.deb", a package file that works for installing using 'dpkg -i' or in a PPA or in a repo, if you have one of those.
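
Before shipping it anywhere, it's worth a quick sanity check of the package metadata and contents with dpkg (this assumes the filename above; fpm may include the architecture in the name it actually writes out):

$ dpkg --info collectd_5.4.1-1.deb
$ dpkg --contents collectd_5.4.1-1.deb
$ sudo dpkg -i collectd_5.4.1-1.deb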

Once collectd is installed, you'll probably want to configure it to talk to your graphite host. Just edit the config in /opt/collectd/etc/collectd.conf. Make sure to uncomment the write_graphite plugin line, and change the write_graphite section. Here's mine:

<Plugin write_graphite>
  <Node "graphite">
    Host "YOURGRAPHITESERVER"
    Port "2003"
    Protocol "tcp"
    LogSendErrors true
    # remember the trailing period in prefix
    #    otherwise you get CCIS.systems.linuxTHISHOSTNAME
    #    You'll probably want to change it anyway, because
    #    this one is mine. ;-)
    Prefix "CCIS.systems.linux."
    StoreRates true
    AlwaysAppendDS false
    EscapeCharacter "_"
  </Node>
</Plugin>
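
Once that's in place, a quick way to confirm metrics are actually flowing is to restart the daemon and watch the graphite port (a rough check; the init script is the one we packaged above):

$ sudo /etc/init.d/collectd restart
$ sudo tcpdump -l -A -i any port 2003 | grep CCIS.systems.linux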

Anyway, hopefully this helped you in some way. Building a puppet module is left as an exercise to the reader. I think I could do a simplistic one in about 5 minutes, but as soon as you want to intelligently decide which modules to enable and configure, then it gets significantly harder. Hey, knock yourself out! (and let me know if you come up with anything cool!)

by Matt Simmons at July 28, 2014 10:22 AM

Chris Siebenmann

Go is still a young language

Once upon a time, young languages showed their youth by having core incapabilities (important features not implemented, important platforms not supported, or the like). This is no longer really the case today; now languages generally show their youth through limitations in their standard library. The reality is that a standard library that deals with the world of the modern Internet is both a lot of work and the expression of a lot of (painful) experience with corner cases, how specifications work out in practice, and so on. This means that such a library takes time, time to write everything and then time to find all of the corner cases. When (and while) the language is young, its standard library will inevitably have omissions, partial implementations, and rough corners.

Go is a young language. Go 1.0 was only released two years ago, which is not really much time as these things go. It's unsurprising that even today portions of the standard library are under active development (I mostly notice the net packages because that's what I primarily use) and keep gaining additional important features in successive Go releases.

Because I've come around to this view, I now mostly don't get irritated when I run across deficiencies in the corners of Go's standard packages. Such deficiencies are the inevitable consequence of using a young language, and while they're obvious to me that's because I'm immersed in the particular area that exposes them. I can't expect authors of standard libraries to know everything or to put their package to the same use that I am. And time will cure most injuries here.

(Sometimes the omissions are deliberate and done for good reason, or so I've read. I'm not going to cite my primary example yet until I've done some more research about its state.)

This does mean that development in Go can sometimes require a certain sort of self-sufficiency and willingness to either go diving into the source of standard packages or deliberately find packages that duplicate the functionality you need but without the limitations you're running into. Sometimes this may mean duplicating some amount of functionality yourself, even if it seems annoying to have to do it at the time.

(Not mentioning specific issues in, say, the net packages is entirely deliberate. This entry is a general thought, not a gripe session. In fact I've deliberately written this entry as a note to myself instead of writing another irritated grump, because the world does not particularly need another irritated grump about an obscure corner of any standard Go package.)

by cks at July 28, 2014 03:07 AM

Security Monkey

Blackhat 2014 & Def Con 22 Cheat Sheets!

For those of you going to BlackHat USA 2014 in Las Vegas on Aug 5th, be sure to download the cheat sheet!  Click below for a breakdown of speakers, bios, references, and talk summaries.  


It's hosted on Google Docs, so feel free to add it to your Google Drive for quick and easy viewing.  I'm saving a copy off and highlighting the talks I'm most interested in.


July 28, 2014 01:21 AM

Blackhat & DefCon Tips: 2014 Edition

[Edited for 2014]


It's that time of year folks! BlackHat and DefCon kick into high gear next week! Mandalay Bay is hosting BlackHat 2014, and DefCon will be at the Rio again. 


Every year I post tips that I hope will help regular attendees and newbies alike. This year is no different, as you'll see below.

Firstly, if you have no idea what I'm talking about… 

July 28, 2014 01:17 AM

Adnans Sysadmin/Dev Blog

Customized Feeds

People are advocating for customized feeds.

I really don't like this idea. I've been moving cities/countries lately, and I see content based on where I am. Whether it's Google or Facebook, it's just far more likely that a feed will show content based on my location. Sites popular in those areas will rank higher and more prominently in the feeds than if I was located elsewhere.

I don't like this. If I subscribe to a feed I want to see all content. Otherwise I wouldn't have subscribed to the feed. What you read shapes what you think about. I want to see all the news, not just those items that people find compelling in that particular location.

Customized content gives you the impression that something is really popular, when really the opposite might be true. Yet because a reader moves around in circles where one viewpoint is popular, it will seem to the reader that this view is the prevalent one. How can you make informed decisions on a topic if the feeds provide a skewed perspective?

And rarely have I seen content that is better when the feed is customized.

by Adnan Wasim (noreply@blogger.com) at July 28, 2014 12:10 AM

July 27, 2014

Chris Siebenmann

Save your test scripts and other test materials

Back in 2009 I tested ssh cipher speeds (although it later turned out to be somewhat incomplete). Recently I redid those tests on OmniOS, with some interesting results. I was able to do this (and do it easily) because I originally did something that I don't do often enough: I saved the script I used to run the tests for my original entry. I didn't save full information, though; I didn't save information on exactly how I ran it (and there are several options). I can guess a bit but I can't be completely sure.

I should do this more often. Saving test scripts and test material has two uses. First, you can go back later and repeat the tests in new environments and so on. This is not just an issue of getting comparison data, it's also an issue of getting interesting data. If the tests were interesting enough to run once in one environment they're probably going to be interesting in another environment later. Making it easy or trivial to test the new environment makes it more likely that you will. Would I have bothered to do these SSH speed tests on OmniOS and CentOS 7 if I hadn't had my test script sitting around? Probably not, and that means I'd have missed learning several things.

The second use is that saving all of this test information means that you can go back to your old test results with a lot more understanding of what they mean. It's one thing to know that I got network speeds of X Mbytes/sec between two systems, but there are a lot of potential variables in that simple number. Recording the details will give me (and other people) as many of those variables as possible later on, which means we'll understand a lot more about what the simple raw number means. One obvious aspect of this understanding is being able to fully compare a number today with a number from the past.

(This is an aspect of scripts capturing knowledge, of course. But note that test scripts by themselves don't necessarily tell you all the details unless you put a lot of 'how we ran this' documentation into their comments. This is probably a good idea, since it captures all of this stuff in one place.)

by cks at July 27, 2014 04:50 AM

Adnans Sysadmin/Dev Blog

Macbook Air

I think it's interesting that Apple decreased the price of the lowest-spec MacBook Air rather than increasing the specs. I wonder if this was because of the later introduction of the rumored 12 inch retina MacBook Air. Or is the 11/13 inch MacBook Air so perfect that there is no way to improve on it without increasing the price drastically?

I recently bought a 15 inch Dell laptop with a touch screen. The only thing new and exciting in the machine was the touch screen. It turns out that on a traditional laptop a touch screen isn't that useful or exciting. I barely use the touch screen, or the touch features of Windows 8.1. Usage patterns might be different on a 2-in-1 laptop though. Since the MacBook Air is a traditional laptop, I don't see why Apple would add a touch screen.

Even a 2011 Macbook Air is fast and capable enough these days for most tasks. With that in mind, I guess I'm glad that Apple hasn't made any changes to the machine, and has left it perfect as it is. 

by Adnan Wasim (noreply@blogger.com) at July 27, 2014 04:13 AM

July 25, 2014

RISKS Digest

Everything Sysadmin

System Administrator Appreciation Day Spotlight: SuperUser.com

This being System Administrator Appreciation Day, I'd like to give a shout out to all the people of the superuser.com community.

If you aren't a system administrator, but have technical questions that you want to bring to your system administrator, this is the place for you. Do you feel bad that you keep interrupting your office IT person with questions? Maybe you should try SuperUser first. You might get your answers faster.

Whether it is a question about your home cable modem or the mysteries of having to reboot after uninstalling software (this discussion will surprise you), this community will probably reduce the number of times each week that you interrupt your IT person.

If you are a system administrator, and like to help people, consider poking around the unanswered questions page!

Happy Sysadmin Appreciation Day!

Tom

P.S. "The Practice of System and Network Administration" is InformIT's "eBook deal of the day" and can be purchased at an extreme discount for 24 hours.

July 25, 2014 04:30 PM

System Administrator Appreciation Day Spotlight: ServerFault.com

This being System Administrator Appreciation Day, I'd like to give a shout out to all the people on ServerFault.com who help system administrators find the answers to their questions. If you are patient, understanding, and looking to help fellow system administrators, this is a worthy community to join.

I like to click on the "Unanswered" button and see what questions are most in need of a little love.

Sometimes I click on the "hot this week" tab and see what has been answered recently. I always learn a lot. Today I learned:

ServerFault also has a chatroom, which is a useful place to hang out and meet the other members.

Happy Sysadmin Appreciation Day!

Tom

P.S. "The Practice of System and Network Administration" is InformIT's "eBook deal of the day" and can be purchased at an extreme discount for 24 hours.

July 25, 2014 03:30 PM

I'm a system administrator one day a year.

Many years ago I was working at a company and our department's administrative assistant was very strict about not letting people call her a "secretary". She made one exception, however, on "Secretary Appreciation Day".

I'm an SRE at StackExchange.com. That's a Site Reliability Engineer. We try to hold to the ideals of what an SRE is, as set forth in Ben Treynor's keynote at SRECon.

My job title is "SRE". Except one day each year. Since today is System Administrator Appreciation Day, I'm definitely a system administrator... today.

Sysadmins go by many job titles. The company I work for provides Question and Answer communities on over 120 topics. Many of them are technical and appeal to the many kinds of people who are system administrators.

We also have fun sites of interest to sysadmins like Video Games and Skeptics.

All these sites are fun places to poke around and read random answers. Of course, contributing answers is fun too.

Happy System Administrator Appreciation Day!

P.S. "The Practice of System and Network Administration" is InformIT's "eBook deal of the day" and can be purchased at an extreme discount for 24 hours.

July 25, 2014 02:00 PM

The Lone Sysadmin

Coho Data Giving Away a VMworld US 2014 Pass

If you’re trying to get to VMworld US and want a free pass, a number of vendors are giving them away, including my friends over at Coho Data:

We are giving away a FULL conference pass to VMworld 2014 to one lucky winner along with a few goodies to others as well.

Register for a chance to win:

  • One Full-Conference Pass to VMworld 2014 in San Francisco*
  • “Don’t FSCK with the Fish” T-Shirt
  • Coho Data Chrome Industries Backpack

*The prize includes the conference pass only. The winner will be responsible for their own hotel, airfare, and other expenses. Pass valued at $1,995 USD.

Winner will be announced on August 1st via email, so register for a chance to win!

A great deal especially if you live in San Francisco or the Bay Area anyhow. We’re within a month of the conference now, so I’d suggest registering for this and others today (I think SimpliVity’s giveaway deadline is tonight, too).



This post was written by Bob Plankers for The Lone Sysadmin - Virtualization, System Administration, and Technology. Licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License and copyrighted © 2005-2013. All rights reserved.

by Bob Plankers at July 25, 2014 01:47 PM

Everything Sysadmin

50% off Time Management for System Administrators

O'Reilly is running a special deal to celebrate SysAdmin Day. For one day only, SAVE 50% on a wide range of system administration ebooks and training videos from shop.oreilly.com. If you scroll down to page 18, you'll find Time Management for System Administrators is included.

50% off is pretty astounding. Considering that the book is cheaper than most ($19.99 for the eBook), there's practically no excuse not to have a copy. Finding the time to read it... that may be another situation entirely. I hate to say "you owe it to yourself" but, seriously, if you are stressed out, overworked, and underappreciated, this might be a good time to take a break and read the first chapter or two.

July 25, 2014 01:39 PM

Steve Kemp's Blog

The selfish programmer

Once upon a time I wrote a piece of software for scheduling the classes available to a college.

There was a bug in the scheduler: Students who happened to be named 'Steve Kemp' had a significantly higher chance (>=80% IIRC) of being placed in lessons where the class makeup was more than 50% female.

This bug was never fixed. Which was nice, because I spent several hours both implementing and disguising this feature.

I was a bad coder when I was a teenager.

These days I'm still a bad coder, but in different ways.

July 25, 2014 01:16 PM

Standalone Sysadmin

Happy SysAdmin Appreciation Day 2014!

Well, well, well, look what Friday it is!

In 1999, SysAdmin Ted Kekatos decided that, like administrative professionals, teachers, and cabbage, system administrators needed a day to recognize them, to appreciate what they do, their culture, and their impact on the business, and so he created SysAdmin Appreciation Day. And we all reap the benefit of that! Or at least, we can treat ourselves to something nice and have a really great excuse!

Speaking of, there are several things I know of going on around the world today:

As always, there are a lot of videos released on the topic. Some of the best I've seen are here:

  • A karaoke power-ballad from Spiceworks:
  • A very funny piece on users copping to some bad behavior from Sophos:
  • A heartfelt thanks from ManageEngine:
  • Imagine what life would be like without SysAdmins, by SysAid:

Also, I feel like I should mention that one of my party sponsors, Goverlan, is also giving away their software today to the first thousand people who sign up for it. It's a $900 value, so if you do Active Directory management, you should probably check that out.

Sophos was giving away socks, but it was so popular that they ran out. Not before sending me the whole set, though!

I don't even remember signing up for that, but apparently I did, because they came to my work address. Awesome!

I'm sure there are other things going on, too. Why not comment below if you know of one?

All in all, have a great day and try to find some people to get together with and hang out. Relax, and take it easy. You deserve it.

by Matt Simmons at July 25, 2014 12:52 PM

Everything Sysadmin

Special deal for Sysadmin Appreciation Day

Today is Sysadmin Appreciation Day! We appreciate all of you! The Practice of System and Network Administration is today's InformIT eBook Deal of the Day. Click on the link to get a special discount.

July 25, 2014 06:00 AM

Chris Siebenmann

The OmniOS version of SSH is kind of slow for bulk transfers

If you look at the manpage and so on, it's sort of obvious that the Illumos and thus OmniOS version of SSH is rather behind the times; Sun branched from OpenSSH years ago to add some features they felt were important and it has not really been resynchronized since then. It (and before it the Solaris version) also has transfer speeds that are kind of slow due to the SSH cipher et al overhead. I tested this years ago (I believe close to the beginning of our ZFS fileservers), but today I wound up retesting it to see if anything had changed from the relatively early days of Solaris 10.

My simple tests today were on essentially identical hardware (our new fileserver hardware) running OmniOS r151010j and CentOS 7. Because I was doing loopback tests with the server itself for simplicity, I had to restrict my OmniOS tests to the ciphers that the OmniOS SSH server is configured to accept by default; at the moment that is aes128-ctr, aes192-ctr, aes256-ctr, arcfour128, arcfour256, and arcfour. Out of this list, the AES ciphers run from 42 MBytes/sec down to 32 MBytes/sec while the arcfour ciphers mostly run around 126 MBytes/sec (with hmac-md5) to 130 Mbytes/sec (with hmac-sha1).

(OmniOS unfortunately doesn't have any of the umac-* MACs that I found to be significantly faster.)
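
If you want to repeat this sort of loopback test on your own hardware, one quick and dirty approach (a sketch, not necessarily the exact script used here; pick whatever cipher and MAC you want to measure) is to time pushing a stream of zeros through ssh to localhost:

$ time dd if=/dev/zero bs=1024k count=1024 | \
    ssh -c arcfour128 -m hmac-md5 -o Compression=no localhost 'cat > /dev/null'

Dividing the 1024 MBytes by the elapsed wall clock time gives a rough cipher plus MAC throughput figure; /dev/zero keeps the sending side from being disk limited.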

This is actually an important result because aes128-ctr is the default cipher for clients on OmniOS. In other words, the default SSH setup on OmniOS is about a third of the speed that it could be. This could be very important if you're planning to do bulk data transfers over SSH (perhaps to migrate ZFS filesystems from old fileservers to new ones).

The good news is that this is faster than 1G Ethernet; the bad news is that this is not very impressive compared to what Linux can get on the same hardware. We can make two comparisons here to show how slow OmniOS is compared to Linux. First, on Linux the best result on the OmniOS ciphers and MACs is aes128-ctr with hmac-sha1 at 180 Mbytes/sec (aes128-ctr with hmac-md5 is around 175 MBytes/sec), and even the arcfour ciphers run about 5 Mbytes/sec faster than on OmniOS. If we open this up to the more extensive set of Linux ciphers and MACs, the champion is aes128-ctr with umac-64-etm at around 335 MBytes/sec and all of the aes GCM variants come in with impressive performances of 250 Mbytes/sec and up (umac-64-etm improves things a bit here but not as much as it does for aes128-ctr).

(I believe that one reason Linux is much faster on the AES ciphers is that the version of OpenSSH that Linux uses has tuned assembly for AES and possibly uses Intel's AES instructions.)

In summary, through a combination of missing optimizations and missing ciphers and MACs, OmniOS's normal version of OpenSSH is leaving more than half the performance it could be getting on the table.

(The 'good' news for us is that we are doing all transfers from our old fileservers over 1G Ethernet, so OmniOS's ssh speeds are not going to be the limiting factor. The bad news is that our old fileservers have significantly slower CPUs and as a result max out at about 55 Mbytes/sec with arcfour (and interestingly, hmac-md5 is better than hmac-sha1 on them).)

PS: If I thought that network performance was more of a limit than disk performance for our ZFS transfers from old fileservers to the new ones, I would investigate shuffling the data across the network without using SSH. I currently haven't seen any sign that this is the case; our 'zfs send | zfs recv' runs have all been slower than this. Still, it's an option that I may experiment with (and who knows, a slow network transfer may have been having knock-on effects).

by cks at July 25, 2014 05:36 AM

July 24, 2014

Racker Hacker

X11 forwarding request failed on channel 0

Forwarding X over ssh is normally fairly straightforward when you have the correct packages installed. I have another post about the errors that appear when you’re missing the xorg-x11-xauth (CentOS, Fedora, RHEL) or xauth (Debian, Ubuntu) packages.

Today’s error was a bit different. Each time I accessed a particular Debian server via ssh with X forwarding requested, I saw this:

$ ssh -YC myserver.example.com
X11 forwarding request failed on channel 0

The xauth package was installed and I found a .Xauthority file in root’s home directory. Removing the .Xauthority file and reconnecting via ssh didn’t help. After some searching, I stumbled upon a GitHub gist that had some suggestions for fixes.

On this particular server, IPv6 was disabled. That caused the error. The quickest fix was to restrict sshd to IPv4 only by adding this line to /etc/ssh/sshd_config:

AddressFamily inet

I restarted the ssh daemon and I was able to forward X applications over ssh once again.
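
On a Debian box of that vintage, the restart and a quick re-test look something like this (a sketch; the hostname is the placeholder from above and the service name assumes Debian's init scripts):

$ sudo service ssh restart
$ ssh -Y myserver.example.com 'echo $DISPLAY'

If forwarding is working, the second command prints a forwarded display (something like localhost:10.0) rather than an empty line.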

X11 forwarding request failed on channel 0 is a post from: Major Hayden's blog.


by Major Hayden at July 24, 2014 07:24 PM

The Tech Teapot

Software the old fashioned way

I was clearing out my old bedroom, after many years of nagging by my parents, when I came across two of my old floppy disk boxes. Contained within is a small snapshot of my personal computing from 1990 through until late 1992. Everything before and after those dates doesn’t survive, I’m afraid.

[Photos: XLISP source code disk, CLIPS expert system sources disk, Little Smalltalk interpreter disk, Micro EMACS disks]

The archive contains loads of backups of work I produced, now stored on Github, as well as public domain / shareware software, magazine cover disks and commercial software I purchased. Yes, people used to actually buy software. With real money. A PC game back in the late 1980s cost around £50 in 1980s money. According to this historic inflation calculator, that would be £117 now. Pretty close to a week’s salary for me at the time.

One of my better discoveries from the late 1980s was public domain and shareware software libraries. Back then there were a number of libraries, usually advertised in the small ads at the back of computer magazines.

This is a run down of how you’d use your typical software library:

  1. Find an advert from a suitable library and write them a nice little letter requesting they send you a catalog. Include payment as necessary;
  2. Wait for a week or two;
  3. Receive a small, photocopied catalog with lists of floppies and a brief description of the contents;
  4. Send the order form back to the library with payment, usually by cheque;
  5. Wait for another week or two;
  6. Receive a small padded envelope through the post with your selection of floppies;
  7. Explore and enjoy!

If you received your order in two weeks you were doing well. After the first order, when you have your catalog to hand, you could get your order in around a week. A week was pretty quick for pretty well anything back then.

The libraries were run as small home businesses. They were the perfect second income. Everything was done by mail, all you had to do was send catalogs when requested and process orders.

One of the really nice things about shareware libraries was that you never really knew what you were going to get. Whilst you’d have an idea of what was on the disk from the description in the catalog, there’d be a lot of programs that were not described. Getting a new delivery was like a mini MS-DOS based text adventure, discovering all of the neat things on the disks.

The libraries contained lots of different things, mostly shareware applications of every kind you can think of. The most interesting to me as an aspiring programmer was the array of public domain software. Public domain software was distributed with the source code. There is no better learning tool when programming than reading other people’s code. The best code I’ve ever read was the CLIPS sources for a forward chaining expert system shell written by NASA.

Happy days :)

PS All of the floppies I’ve tried so far still work :) Not bad after 23 years.

PPS I found a letter from October 1990 ordering ten disks from the library.

[Photo: letter ordering disks]


The post Software the old fashioned way appeared first on Openxtra Tech Teapot.

by Jack Hughes at July 24, 2014 07:00 AM

Chris Siebenmann

What influences SSH's bulk transfer speeds

A number of years ago I wrote How fast various ssh ciphers are because I was curious about just how fast you could do bulk SSH transfers and how to get them to go fast under various circumstances. Since then I have learned somewhat more about SSH speed and what controls what things you have available and can get.

To start with, my years ago entry was naively incomplete because SSH encryption has two components: it has both a cipher and a cryptographic hash used as the MAC. The choice of both of them can matter, especially if you're willing to deliberately weaken the MAC. As an example of how much of an impact this might make, in my testing on a Linux machine I could almost double SSH bandwidth by switching from the default MAC to 'umac-64-etm@openssh.com'.

(At the same time, no other MAC choice made much of a difference within a particular cipher, although hmac-sha1 was sometimes a bit faster than hmac-md5.)

Clients set the cipher list with -c and the MAC with -m, or with the Ciphers and MACs options in your SSH configuration file (either a personal one or a global one). However, what the client wants to use has to be both supported by the server and accepted by it; this is set in the server's Ciphers and MACs configuration options. The manpages for ssh_config and sshd_config on your system will hopefully document both what your system supports at all and what it's set to accept by default. Note that this is not necessarily the same thing; I've seen systems where sshd knows about ciphers that it will not accept by default.

(Some modern versions of OpenSSH also report this information through 'ssh -Q <option>'; see the ssh manpage for details. Note that such lists are not necessarily reported in preference order.)
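
For example, on a machine with a recent enough OpenSSH you can list what your client knows about and then explicitly pick a combination (the user and host here are just placeholders):

$ ssh -Q cipher
$ ssh -Q mac
$ ssh -c aes128-ctr -m umac-64-etm@openssh.com user@somehost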

At least some SSH clients will tell you the server's list of acceptable ciphers (and MACs) if you tell the client to use options that the server doesn't support. If you wanted to, I suspect that you could write a program in some language with SSH protocol libraries that dumped all of this information for you for an arbitrary server (without the fuss of having to find a cipher and MAC that your client knew about but your server didn't accept).

Running 'ssh -v' will report the negotiated cipher and MAC that are being used for the connection. Technically there are two sets of them, one for the client to server and one for the server back to the client, but I believe that under all but really exceptional situations you'll use the same cipher and MAC in both directions.

Different Unix OSes may differ significantly in their support for both ciphers and MACs. In particular Solaris effectively forked a relatively old version of OpenSSH and so modern versions of Illumos (and Illumos distributions such as OmniOS) do not offer you anywhere near a modern list of choices here. How recent your distribution is will also matter; our Ubuntu 14.04 machines naturally offer us a lot more choice than our Ubuntu 10.04 ones.

PS: helpfully the latest OpenSSH manpages are online (cf), so the current manpage for ssh_config will tell you the latest set of ciphers and MACs supported by the official OpenSSH and also show the current preference order. To my interest it appears that OpenSSH now defaults to the very fast umac-64-etm MAC.

by cks at July 24, 2014 03:22 AM

July 23, 2014

The Tech Teapot

Early 1990s Software Development Tools for Microsoft Windows

The early 1990s were an interesting time for software developers. Many of the tools that are taken for granted today made their debut for a mass market audience.

I don’t mean that the tools were not available previously. Both Smalltalk  and LISP sported what would today be considered modern development environments all the way back in the 1970s, but hardware requirements put the tools well beyond the means of regular joe programmers. Not too many people had workstations at home or in the office for that matter.

I spent the early 1990s giving most of my money to software development tool companies of one flavour or another.

[Photos: Actor v4.0 floppy disks, Whitewater Object Graphics floppy disk, MKS LEX / YACC floppy disks, Turbo Pascal for Windows floppy disks]

Actor was a combination of object oriented language and programming environment for very early versions of Microsoft Windows. There is a review in Info World magazine of Actor version 3 that makes interesting reading. It was somewhat similar to Smalltalk, but rather more practical for building distributable programs. Unlike Smalltalk, it was not cross platform but on the plus side, programs did look like native Windows programs. It was very much ahead of its time in terms of both the language and the programming environment and ran on pretty modest hardware.

I gave Borland quite a lot of money too. I bought Turbo Pascal for Windows when it was released, having bought regular old Turbo Pascal v6 for DOS a year or so earlier. The floppy disks don’t have a version number on them so I have no idea which version it is. Turbo Pascal for Windows eventually morphed into Delphi.

I bought Microsoft C version 6; although it introduced a DOS based IDE, it was still very much an old school C compiler. If you wanted to create Windows software you needed to buy the Microsoft Windows SDK at considerable extra cost.

Asymetrix Toolbook was marketed in the early 1990s as a generic Microsoft Windows development tool. There are old Info World reviews here and here. Asymetrix later moved the product to be a learning authorship tool. I rather liked the tool, though it didn’t really have the performance and flexibility I was looking for. Distributing your finished work was also not a strong point.

Microsoft Quick C for Windows version 1.0 was released in late 1991. Quick C bundled a C compiler with the Windows SDK so that you could build 16 bit Windows software. It also sported an integrated C text editor, resource editor  and debugger.

The first version of Visual Basic was released in 1991. I am not sure why I didn’t buy it; I imagine there was some programming language snobbery on my part. I know there are plenty of programmers of a certain age who go all glassy eyed at the mere thought of BASIC, but I’m not one of them. Visual Basic also had an integrated editor and debugger.

Both Quick C and Visual Basic are the immediate predecessors of the Visual Studio product of today.

The post Early 1990s Software Development Tools for Microsoft Windows appeared first on Openxtra Tech Teapot.

by Jack Hughes at July 23, 2014 10:59 AM

Chris Siebenmann

One of SELinux's important limits

People occasionally push SELinux as the cure for security problems and look down on people who routinely disable it (as we do). I have some previously expressed views on this general attitude, but what I feel like pointing out today is that SELinux's security has some important intrinsic limits. One big one is that SELinux only acts at process boundaries.

By its nature, SELinux exists to stop a process (or a collection of them) from doing 'bad things' to the rest of the system and to the outside environment. But there are any number of dangerous exploits that do not cross a process's boundaries this way; the most infamous recent one is Heartbleed. SELinux can do nothing to stop these exploits because they happen entirely inside the process, in spheres fully outside its domain. SELinux can only act if the exploit seeks to exfiltrate data (or influence the outside world) through some new channel that the process does not normally use, and in many cases the exploit doesn't need to do that (and often doesn't bother).

Or in short, SELinux cannot stop your web server or your web browser from getting compromised, only from doing new stuff afterwards. Sending all of the secrets that your browser or server already has access to to someone in the outside world? There's nothing SELinux can do about that (assuming that the attacker is competent). This is a large and damaging territory that SELinux doesn't help with.

(Yes, yes, privilege separation. There are a number of ways in which this is the mathematical security answer instead of the real one, including that most network related programs today are not privilege separated. Chrome exploits also have demonstrated that privilege separation is very hard to make leak-proof.)

by cks at July 23, 2014 04:25 AM

RISKS Digest

July 22, 2014

Everything Sysadmin

Tom will be the October speaker at Philly Linux Users Group (central)

I've just been booked to speak at PLUG/Central in October. I'll be speaking about our newest book, The Practice of Cloud System Administration.

For a list of all upcoming speaking engagements, visit our appearances page: http://everythingsysadmin.com/appearances/

July 22, 2014 07:40 PM

Standalone Sysadmin

Goverlan press release on the Boston SysAdmin Day party!

Thanks again to the folks over at Goverlan for supporting my SysAdmin Appreciation Day party here in Boston! They're so excited, they issued a Press Release on it!

[Image: Goverlan press release]

I'm really looking forward to seeing everyone on Friday. We've got around 60 people registered, and I imagine we'll have more soon. Come have free food and drinks. Register now!

by Matt Simmons at July 22, 2014 02:12 PM

Racker Hacker

Etsy reminds us that information security is an active process

I’m always impressed with the content published by folks at Etsy and Ben Hughes’ presentation from DevOpsDays Minneapolis 2014 is no exception.

Ben adds some levity to the topic of information security with some hilarious (but relevant) images and reminds us that security is an active process that everyone must practice. Everyone plays a part — not just traditional corporate security employees.

I’ve embedded the presentation here for your convenience:

Here’s a link to the original presentation on SpeakerDeck:

Etsy reminds us that information security is an active process is a post from: Major Hayden's blog.


by Major Hayden at July 22, 2014 01:06 PM

Chris Siebenmann

What I know about the different types of SSH keys (and some opinions)

Modern versions of SSH support up to four different types of SSH keys (both for host keys to identify servers and for personal keys): RSA, DSA, ECDSA, and as of OpenSSH 6.5 we have ED25519 keys as well. Both ECDSA and ED25519 use elliptic curve cryptography, DSA uses finite fields, and RSA is based on integer factorization. EC cryptography is said to have a number of advantages, particularly in that it uses smaller key sizes (and thus needs smaller exchanges on the wire to pass public keys back and forth).

(One discussion of this is this cloudflare blog post.)

RSA and DSA keys are supported by all SSH implementations (well, all SSH v2 implementations which is in practice 'all implementations' these days). ECDSA keys are supported primarily by reasonably recent versions of OpenSSH (from OpenSSH 5.7 onwards); they may not be in other versions, such as the SSH that you find on Solaris and OmniOS or on a Red Hat Enterprise 5 machine. ED25519 is only supported in OpenSSH 6.5 and later, which right now is very recent; of our main machines, only the Ubuntu 14.04 ones have it (especially note that it's not supported by the RHEL 7/CentOS 7 version of OpenSSH).

(I think ED25519 is also supported on Debian test (but not stable) and on up to date current FreeBSD and OpenBSD versions.)

SSH servers can offer multiple host keys in different key types (this is controlled by what HostKey files you have configured). The order that OpenSSH clients will try host keys in is controlled by two things: the setting of HostKeyAlgorithms (see 'man ssh_config' for the default) and what host keys are already known for the target host. If no host keys are known, I believe that the current order is ECDSA, ED25519, RSA, and then DSA; once there are known keys, they're tried first. What this really means is that for an otherwise unknown host you will be prompted to save the first of these key types that the host has and thereafter the host will be verified against it. If you already know an 'inferior' key (eg a RSA key when the host also advertises an ECDSA key), you will verify the host against the key you know and, as far as I can tell, not even save its 'better' key in .ssh/known_hosts.

(If you have a mixture of SSH client versions, people can wind up with a real mixture of your server key types in their known_hosts files or equivalent. This may mean that you need to preserve and restore multiple types of SSH host keys over server reinstalls, and probably add saving and restoring ED25519 keys when you start adding Ubuntu 14.04 servers to your mix.)

In terms of which key type is 'better', some people distrust ECDSA because the elliptic curve parameters are magic numbers from NIST and so could have secret backdoors, as appears to be all but certain for another NIST elliptic curve based cryptography standard (see also and also and more). I reflexively dislike both DSA and ECDSA because DSA implementation mistakes can be explosively fatal, as in 'trivially disclose your private keys'. While ED25519 also uses DSA it takes specific steps to avoid at least some of the explosive failures of plain DSA and ECDSA, failures that have led to eg the compromise of Sony's Playstation 3 signing keys.

(RFC 6979 discusses how to avoid this particular problem for DSA and ECDSA but it's not clear to me if OpenSSH implements it. I would assume not until explicitly stated otherwise.)

As a result of all of this I believe that the conservative choice is to advertise and use only RSA keys (both host keys and personal keys) with good bit sizes. The slightly daring choice is to use ED25519 when you have it available. I would not use either ECDSA or DSA although I wouldn't go out of my way to disable server ECDSA or DSA host keys except in a very high security environment.
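
If you follow that advice, generating the keys themselves is straightforward (the ed25519 one assumes a client with OpenSSH 6.5 or later, and the 4096-bit size is just one reasonable choice of a 'good bit size', not something this entry specifies):

$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519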

(I call ED25519 'slightly daring' only because I don't believe it's undergone as much outside scrutiny as RSA, and I could be wrong about that. See here and here for a discussion of ECC and ED25519 security properties and general security issues. ED25519 is part of Dan Bernstein's work and in general he has a pretty good reputation on these issues. Certainly the OpenSSH people were willing to adopt it.)

PS: If you want to have your eyebrows raised about the choice of elliptic curve parameters, see here.

PPS: I don't know what types of keys non-Unix SSH clients support over and above basic RSA and DSA support. Some casual Internet searches suggest that PuTTY doesn't support ECDSA yet, for example. And even some Unix software may have difficulties; for example, currently GNOME Keyring apparently doesn't support ECDSA keys (via archlinux).

by cks at July 22, 2014 03:13 AM

July 21, 2014

Racker Hacker

icanhazip and CORS

I received an email from an icanhazip.com user last week about enabling cross-origin resource sharing. He wanted to use AJAX calls on a different site to pull data from icanhazip.com and use it for his visitors.

Those headers are now available for all requests to the services provided by icanhazip.com! Here’s what you’ll see:

$ curl -i icanhazip.com
---
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET
---
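
If you want to check the headers from the command line, you can supply an Origin header yourself and look for the Access-Control-Allow-* lines in the response (example.org is just a placeholder origin):

$ curl -i -H "Origin: http://example.org" icanhazip.com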

icanhazip and CORS is a post from: Major Hayden's blog.


by Major Hayden at July 21, 2014 02:31 PM

Steve Kemp's Blog

An alternative to devilspie/devilspie2

Recently I was updating my dotfiles, because I wanted to ensure that media-players were "always on top" when launched, as this suits the way I work.

For many years I've used devilspie to script the placement of new windows, and once I googled a recipe I managed to achieve my aim.

However during the course of my googling I discovered that devilspie is unmaintained, and has been replaced by something using Lua - something I like.

I'm surprised I hadn't realized that the project was dead; although I've always hated the configuration syntax, it is something that I've used on a constant basis since I found it.

Unfortunately the replacement, despite using Lua and despite being functional, just didn't seem to gel with me. So I figured "How hard could it be?".

In the past I've written software which iterated over all (visible) windows, and obviously I'm no stranger to writing Lua bindings.

However I did run into a snag. My initial implementation did two things:

  • Find all windows.
  • For each window invoke a lua script-file.

This worked. This worked well. This worked too well.

The problem I ran into was that if I wrote something like "Move window 'emacs' to desktop 2" that action would be applied, over and over again. So if I launched emacs, and then manually moved the window to desktop3 it would jump back!

In short I needed to add a "stop()" function, which would cause further actions against a given window to cease. (By keeping a linked list of windows-to-ignore, and avoiding processing them.)

The code did work, but it felt wrong to have an ever-growing linked-list of processed windows. So I figured I'd look at the alternative - the original devilspie used libwnck to operate. That library allows you to nominate a callback to be executed every time a new window is created.

If you apply your magic only on a window-create event - well you don't need to bother caching prior-windows.

So in conclusion :

I think my code is better than devilspie2 because it is smaller, simpler, and does things more neatly - for example instead of a function to get geometry and another to set it, I use one. (e.g. "xy()" returns the position of a window, but xy(3,3) sets it.).

kpie also allows you to run as a one-off job, and using the simple primitives I wrote a file to dump your windows, and their size/placement, which looks like this:

shelob ~/git/kpie $ ./kpie --single ./samples/dump.lua
-- Screen width : 1920
-- Screen height: 1080
..
if ( ( window_title() == "Buddy List" ) and
     ( window_class() == "Pidgin" ) and
     ( window_application() == "Pidgin" ) ) then
     xy(1536,24 )
     size(384,1032 )
     workspace(2)
end
if ( ( window_title() == "feeds" ) and
     ( window_class() == "Pidgin" ) and
     ( window_application() == "Pidgin" ) ) then
     xy(1,24 )
     size(1536,1032 )
     workspace(2)
end
..

As you can see that has dumped all my windows, along with their current state. This allows a simple starting-point - Configure your windows the way you want them, then dump them to a script file. Re-run that script file and your windows will be set back the way they were! (Obviously there might be tweaks required.)

I used that starting-point to define a simple recipe for configuring pidgin, which is more flexible than what I ever had with pidgin, and suits my tastes.

Bug-reports welcome.

July 21, 2014 02:30 PM