Internet Explorer’s User Agent Madness

by Fabian via Fabian Fischer »

Microsoft is introducing the next level of User Agent madness with IE11: Now the browser tells the server that it is basically every browser on the planet…

IE Edge mode in Windows Developer Preview and RemoteIE builds is a new “living” document mode designed for maximum interoperability with other modern browsers and contemporary web content



by Jesco Freund via My Universe »

On my workstation I have replaced KDE 4 with the new Plasma 5 – the corresponding packages are now in the official Arch Linux repository and no longer experimental. With the dark variant of the Breeze theme, my desktop now looks like this:

Unfortunately, at least on Arch Linux, only the Plasma 5 desktop environment itself is available at the moment; important companion packages such as a file manager or a document viewer are still missing (the Dolphin and Okular packages are linked against KDE 4, which cannot be installed alongside the Plasma 5 packages).

Some important applications are supposed to be released in December – until then I can manage as is; after all, my most important tool is the terminal anyway. Since there is no Plasma 5 package for that yet either, I am currently simply using the XFCE4 Terminal, which fits in very well and whose dependencies do not conflict with Plasma 5.

DBus, FreeDesktop, and lots of madness

by Patrick via Patrick's playground »

For various reasons I've spent a bit of time dissecting how dbus is supposed to work. It's a rather funny game, but it is confusing to see people trying to use these things. It starts quite hilarious, the official documentation (version 0.25) says:
The D-Bus protocol is frozen (only compatible extensions are allowed) as of November 8, 2006. However, this specification could still use a fair bit of work to make interoperable reimplementation possible without reference to the D-Bus reference implementation. Thus, this specification is not marked 1.0
Ahem. After over a decade (first release in Sept. 2003!!) people still haven't documented what it actually does.
Allrighty then!
So we start reading:
"D-Bus is low-overhead because it uses a binary protocol"
"Immediately after connecting to the server, the client must send a single nul byte."
followed by
"A nul byte in any context other than the initial byte is an error; the protocol is ASCII-only."
Mmmh. Hmmm. Whut?
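To make the mix concrete, here is a small Python sketch of the opening bytes a client sends according to the rules quoted above: a single nul byte, followed by an ASCII, line-based auth exchange. The uid "1000" is a placeholder; EXTERNAL auth transmits the uid hex-encoded.

```python
# The connection opener described above: one nul byte, then ASCII-only
# line-based authentication. Pure byte-building, no socket involved.
uid = "1000"  # placeholder uid; EXTERNAL auth sends it hex-encoded
auth_line = "AUTH EXTERNAL " + uid.encode("ascii").hex() + "\r\n"
opening = b"\x00" + auth_line.encode("ascii")

assert opening[0] == 0               # the mandatory initial nul byte
assert b"\x00" not in opening[1:]    # ...and no nul anywhere else
print(opening)
```

So the "binary" protocol opens with a byte whose only job is to be the one nul you are otherwise forbidden from sending.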
So anyway, let's not get confused ... we continue:
The string-like types are basic types with a variable length. The value of any string-like type is conceptually 0 or more Unicode codepoints encoded in UTF-8, none of which may be U+0000. The UTF-8 text must be validated strictly: in particular, it must not contain overlong sequences or codepoints above U+10FFFF. Since D-Bus Specification version 0.21, in accordance with Unicode Corrigendum #9, the "noncharacters" U+FDD0..U+FDEF, U+nFFFE and U+nFFFF are allowed in UTF-8 strings (but note that older versions of D-Bus rejected these noncharacters).
So there seems to be some confusion what things like "binary" mean, and UTF-8 seems to be quite challenging too, but no worry: At least we are endian-proof!
A block of bytes has an associated byte order. The byte order has to be discovered in some way; for D-Bus messages, the byte order is part of the message header as described in the section called “Message Format”. For now, assume that the byte order is known to be either little endian or big endian.
Hmm? Why not just define network byte order and be happy? Well ... we're even smarterer:
The signature of the header is: "yyyyuua(yv)"
Ok, so ...
1st BYTE Endianness flag; ASCII 'l' for little-endian or ASCII 'B' for big-endian. Both header and body are in this endianness.
We actually waste a BYTE on each message to encode endianness, because ... uhm ... we run on ... have been ... I don't get it. And why a full byte (with a random ASCII mnemonic) instead of a bit? The whole 32bit bytemash at the beginning of the header could be collapsed into 8 bit if it were designed. Of course performance will look silly if you use an ASCII protocol over the wire with generous padding ... so let's kdbus because performance. What teh? Just reading this "spec" makes me want to get more drunk.
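For illustration, a minimal Python sketch of decoding that "yyyyuua(yv)" fixed header: the first byte selects the struct byte order used for every integer that follows. The sample bytes are hand-built for the demonstration, not captured from a real bus.

```python
import struct

# Hand-built sample header: little-endian flag 'l', message type 1
# (METHOD_CALL), flags 0, protocol version 1, body length 0, serial 42,
# then an empty a(yv) header-field array (length 0).
raw = b"l" + bytes([1, 0, 1]) + struct.pack("<II", 0, 42) + struct.pack("<I", 0)

endian = "<" if raw[0:1] == b"l" else ">"      # the per-message endianness byte
msg_type, flags, version = raw[1], raw[2], raw[3]
body_len, serial = struct.unpack(endian + "II", raw[4:12])
(field_array_len,) = struct.unpack(endian + "I", raw[12:16])

print(endian, msg_type, body_len, serial, field_array_len)
```

Note that the decoder cannot interpret a single integer until it has looked at byte zero, which is exactly the complaint: a fixed network byte order would make the flag, and this branch, unnecessary.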

Here's a radical idea: Fix the wire protocol to be more sane, then fix the configuration format to not be hilarious XML madness (which even the spec admits is bad, so what were the authors thinking?)
But enough about this idiocy, let's go up the stack one layer. The freedesktop wiki uses gdbus output as an API dump (would be boring if it were an obvious format), so we have a look at it:
Looking through the man page there's no documentation of what a line beginning with "@" means. Because it's obvious!
So we read through the gdbus sourcecode and start crying (thanks glib, I really needed to be reminded that there are worse coders than me). And finally we can correlate it with "Annotations"

Back to the spec:
Method, interface, property, and signal elements may have "annotations", which are generic key/value pairs of metadata. They are similar conceptually to Java's annotations and C# attributes.
I have no idea what that means, but I guess as the name implies it's just a textual hint for the developer. Or not? Great to see a specification not define its own terms.

So what I read into this fuzzy text is most likely very wrong, and someone should define these items in the spec in a way that can be understood before you already understand the spec. Less tautological circularity and all that ...
Let's assume then that we can ignore annotations for now ... here's our next funny:
The output of gdbus is not stable. If you were to write a dbus listener based on the documentation, well, the order of functions is random, so it's very hard to compare the wiki API dump 'documentation' with your output.
Oh great ... sigh. Grumble. Let's just do fuzzy matching then. Kinda looks similar, so that must be good enough.
(Radical thought: Shouldn't a specification be a bit less ambiguous and maybe more precise?)

Anyway. Ahem. Let's just assume we figure out a way to interact with dbus that is tolerable. Now we need to figure out what the dbus calls are supposed to do. Just for fun, we read the logind 'documentation':
      CreateSession(in  u arg_0,
                    in  u arg_1,
                    in  s arg_2,
                    in  s arg_3,
                    in  s arg_4,
                    in  s arg_5,
                    in  u arg_6,
                    in  s arg_7,
                    in  s arg_8,
                    in  b arg_9,
                    in  s arg_10,
                    in  s arg_11,
                    in  a(sv) arg_12,
                    out s arg_13,
                    out o arg_14,
                    out s arg_15,
                    out h arg_16,
                    out u arg_17,
                    out s arg_18,
                    out u arg_19,
                    out b arg_20);
with the details being:
CreateSession() and ReleaseSession() may be used to open or close login sessions. These calls should never be invoked directly by clients. Creating/closing sessions is exclusively the job of PAM and its pam_systemd module.
*nudge nudge wink wink*
Zero of the twenty-one (!) parameters are defined in the 'documentation', and the same document says that it's an internal function that accidentally ended up in the public API (instead of, like, being, ah, a private API in a different namespace?)
Since it's actively undefined, and not to be used, a valid action on calling it would be to shut down the machine.

Dogbammit. What kind of code barf is this? Oh well. Let's try to figure out the other functions -
LockSession() asks the session with the specified ID to activate the screen lock.
And then we look at the sourcecode to learn that it actually just calls:
session_send_lock(session, streq(sd_bus_message_get_member(message), "LockSession"));
Which calls a function somewhere else which then does:
        return sd_bus_emit_signal(
                        lock ? "Lock" : "Unlock",
So in the end it internally sends a dbus message to a different part of itself, and that sends a dbus signal that "everyone" is supposed to listen to.
And the documentation doesn't define what is supposed to happen, instead it speaks in useless general terms.
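In Python terms, the two quoted snippets boil down to something like the following. This is a paraphrase of the C source for illustration, not a real API:

```python
# Paraphrase of the logind code quoted above: the method name is turned
# into a boolean via streq(), and that boolean picks which signal
# ("Lock" or "Unlock") gets broadcast to whoever happens to listen.
def signal_for_member(member: str) -> str:
    lock = member == "LockSession"       # streq(..., "LockSession")
    return "Lock" if lock else "Unlock"  # sd_bus_emit_signal(lock ? ...)

print(signal_for_member("LockSession"))
print(signal_for_member("UnlockSession"))
```

The entire contract, in other words, is "a signal named Lock or Unlock is emitted", and what any listener does with it is left to folklore.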

      PowerOff(in  b arg_0);
      Reboot(in  b arg_0);
      Suspend(in  b arg_0);
      Hibernate(in  b arg_0);
      HybridSleep(in  b arg_0);
Here we have a new API call for each flavour of power management. And there's this stuff:
The main purpose of these calls is that they enforce PolicyKit policy
And I have absolutely no idea about the mechanism(s) involved. Do I need to query PK myself? How does the dbus API know? Oh well, just read more code, and interpret it how you think it might work. Must be good enough.
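For what it's worth, the lone boolean argument on each of these calls is described elsewhere in the logind docs as the "interactive" flag, i.e. whether PolicyKit may pop up an interactive authorization prompt; the API dump above just doesn't say so. A hypothetical sketch of how the five per-flavour methods could collapse into one parameterized call (names here are illustrative, not a real API):

```python
# Hypothetical sketch: five per-flavour methods collapsed into one call
# taking the action name plus the 'interactive' boolean (the lone
# "in b arg_0" above). Not a real logind or sd-bus API.
ACTIONS = {"PowerOff", "Reboot", "Suspend", "Hibernate", "HybridSleep"}

def power_request(action: str, interactive: bool) -> str:
    if action not in ACTIONS:
        raise ValueError(f"unknown action: {action}")
    # a real implementation would consult PolicyKit here, prompting the
    # user only when interactive is True
    return f"{action}(interactive={interactive})"

print(power_request("Suspend", True))
```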

While this exercise has been quite educational in many ways, I am surprised that this undocumented early-alpha quality code base is used for anything serious. Many concepts are either not defined, or defined by the behaviour of the implementation. The APIs are ad-hoc without any obvious structure, partially redundant (what's the difference between Terminate and Kill?), and not documented in a way that allows a reimplementation.
If this is the future I'll stay firmly stuck in the past ...

Changing Gears, 1 Year after RTW trip

by Jeremy Olexa via »

About a year ago, I was writing about my Round the World trip winding down and returning to the workforce, my career. I’ve gone through a whole bunch of ‘things’ in the past year which mostly remind me that 1) life is short and random, and 2) I can do anything I want to.

On the first topic, I did a number on my spine and compressed my L1 and L2 vertebrae. After about a three-month resting period I'm still recovering from that one, and probably will be for the rest of my life. Oh, I broke my wrist too. All this from a little skydiving accident, the details of which I'll spare you. I'm back at the gym, eating well, and really inspired to build myself better than I was. I count my lucky stars that I'm able to make a full recovery. Ergo, life is short and random. However, it really opened up my viewpoints on many of life's topics and made me realize all the calculated risks that humans take every day.

Speaking of risks, enter new job…

A year ago, I was writing about starting a new job, getting a new apartment, and new car all within two weeks. Now I’m able to say that I’m at it again. While it may not be the same as traveling to a new country every few weeks, it is still very exciting. In December, I’ll be starting a new role at a new company, SPS Commerce. It was great working at Reeher, and I have nothing but good things to say about the company and the people. I’m also moving, but only 15 minutes away.

I’m thrilled to accelerate my career and position myself back where I was before my career break started. It hasn’t been exactly what I envisioned, but does anything work out like we think? Now, for all the naysayers who say a career break on your mid-20s resume is career suicide… I challenge you to go for your dreams, because life is short and you can do anything you want to.

When (multimedia) fiefdoms crumble

by Flameeyes via Flameeyes's Weblog »

Mike coined the term multimedia fiefdoms recently. He points to a number of different streaming, purchase and rental services for video content (movies, TV series) as the new battleground for users (consumers in this case). There are of course a few more sides in this battle, including music and books, but the idea is still perfectly valid.

What he didn't get into is what happens when one of those fiefdoms capitulates, declares itself won over, and goes away. It's not a fun situation to be in, but we actually have plenty of examples of it, and these, more than anything else, should drive the discourse around and against DRM, in my opinion.

For some reason, the main examples of failed fiefdoms are to be found in books, and I lived through (and recounted) a few of those instances. For me personally, it all started four years ago, when I discovered Sony had given up on their LRF format and decided to adopt the "industry standard" ePub by supporting the Adobe Digital Editions (ADEPT) DRM scheme on their devices. I was slow on the uptake; the announcement had come two years earlier. For Sony, this meant tearing down their walled garden, even though they kept supporting the LRF format and their store for a while – they may even do still, I stopped following two years ago when I moved onto a Kindle – for the user it meant being free to buy books from a number of stores, including some publishers, bookstores with online presence, and dedicated ebookstores.

But things didn't always go smoothly: two years later, WHSmith partnered with Kobo, essentially handing the latter their whole online ebook market. When I read the announcement I was actually happy, especially since I could no longer buy books off WHSmith, as they had started requiring UK billing addresses. Unfortunately it also meant that only a third of the books I had bought from WHSmith were going to be ported over to Kobo, due to an extreme cock-up with global rights even to digital books. Had I not gone and broken the DRM off all my ebooks for the sake of it, I would have lost four books and had to buy them anew. Given this was not a case of the seller going bankrupt but of them selling out their customers, their refusal to compensate people was hard to justify. Luckily, it did port The Gone-Away World, which is one of my favourite books.

Fast forward another year, and the Italian bookstore LaFeltrinelli decided to go the same way, with a major exception: they decided to keep users on both platforms — that way if you want to buy a digital version of a book you'll still buy it on the same website, but it'll be provided by Kobo and land in your Kobo library. And it seems they at least struck a better deal regarding books' rights, as they appear to have ported over most books anyway. But of course it did not work out as well as it should have, throwing an error in my face and forcing me to call up Kobo (Italy) to have my accounts connected and the books ported.

The same year, I ended up buying a Samsung Galaxy Note 10.1 2014 Edition, which is a pretty good tablet with a great digitizer. Samsung ships Google Play in full (Store, Movies, Music, Books) but at the same time installs its own App, Video, Music and Book store apps, which is not surprising. But it did not take six months for them to decide that this was not their greatest idea: in May this year, Samsung announced the shutdown of their Music and Books stores — outside of South Korea at least. In this case there is no handover of the content to other providers, so any content bought on those platforms is simply gone.

It is not a complete loss, though: if you still have access to a Samsung device (and if you don't, well, you had no access to the content anyway), a different kind of almost-compensation kicks in: the Korean company partnered with Amazon of all bookstores — surprising, given that Samsung is behind the new "Nook Tablet" by Barnes & Noble. Besides a branded «Kindle for Samsung» app, they provide one out of a choice of four books every month — the books are taken from Amazon's KDP Select pool as far as I can tell, which is the same pool used as a base for the Kindle Owners' Lending Library and the Kindle Unlimited offerings; they are not great, but some of them are enjoyable enough. Amazon is also keeping it honest and does not force you to read the books on your Samsung device — indeed I prefer reading on my Kindle.

Now the question is: how do you loop back all this to multimedia? Sure, books are entertaining, but they are by definition a single medium, unless you refer to the Kindle Edition of American Gods. Well, for me it's still the same problem of fiefdoms that Mike referred to; indeed every store used to be a walled garden for a long while, then Adobe came and conquered most with ePub and ADEPT — but then between Apple and their iBooks (which uses its own, incompatible DRM), and Amazon with the Kindle, the walls started crumbling down. Nowadays plenty of publishers allow you to buy the book in ePub, and usually many other formats at the same time, without DRM, because the publishers don't care which device you read your book on (a Kindle, a Kobo, a Nook, an iPad, a Sony Reader, an Android tablet …), they only want you to read the book, get hooked, and buy more books.

Somehow the same does not seem to work for video content, although it did work to an extent, for a while at least, with music. But this is a different topic.

The reason why I'm posting this right now is that just today I got an email from Samsung saying they are shutting down their video store too — now their "Samsung Hub" platform only gets to push you games and apps, unless you happen to live in South Korea. It's interesting to see how the battles between giants are causing small players to just get off the playing field… but at the same time they take their toys with them.

Once again, there is no compensation; if you rented something, watch it by the end of the year; if you bought something, sorry, you won't be able to access it after new year. It's a tough world. There is a lesson, somewhere, to be learnt about this.

Convicted ID Thief, Tax Fraudster Now Fugitive

by BrianKrebs via Krebs on Security »

In April 2014, this blog featured a story about Lance Ealy, an Ohio man arrested last year for buying Social Security numbers and banking information from an underground identity theft service that relied in part on data obtained through a company owned by big-three credit bureau Experian. Earlier this week, Ealy was convicted of using the data to fraudulently claim tax refunds with the IRS in the names of more than 175 U.S. citizens, but not before he snipped his monitoring anklet and skipped town.

Lance Ealy, in a selfie he uploaded to Twitter before absconding.
On Nov. 18, a jury in Ohio convicted Ealy, 28, on all 46 charges, including aggravated identity theft, and wire and mail fraud. Government prosecutors presented evidence that Ealy had purchased Social Security numbers and financial data on hundreds of consumers, using an identity theft service called (later renamed The jury found that Ealy used that information to fraudulently file at least 179 tax refund requests with the Internal Revenue Service, and to open up bank accounts in other victims’ names — accounts he set up to receive and withdraw tens of thousands of dollars in refund payments from the IRS.

The identity theft service that Ealy used was dismantled in 2013, after investigators with the U.S. Secret Service arrested its proprietor and began tracking and finding many of his customers. Investigators later discovered that the service’s owner had obtained much of the consumer data from data brokers by posing as a private investigator based in the United States.

In reality, the owner of was a Vietnamese man paying for his accounts at data brokers using cash wire transfers from a bank in Singapore. Among the companies that Ngo signed up with was Court Ventures, a California company that was bought by credit bureau Experian nine months before the government shut down

Court records show that Ealy went to great lengths to delay his trial, and even reached out to this reporter hoping that I would write about his allegations that everyone from his lawyer to the judge in the case was somehow biased against him or unfit to participate in his trial. Early on, Ealy fired his attorney and opted to represent himself. When the court appointed him a public defender, Ealy again chose to represent himself.

“Mr. Ealy’s motions were in a lot of respects common delay tactics that defendants use to try to avoid the inevitability of a trial,” said Alex Sistla, an assistant U.S. attorney in Ohio who helped prosecute the case.

Ealy also continued to steal people’s identities while he was on trial (although no longer buying from the same service), according to the government. His bail was revoked for several months, but in October the judge in the case ordered him released on a surety bond.

It is said that a man who represents himself in court has a fool for a client, and this seems doubly true when facing criminal charges by the U.S. government. Ealy’s trial lasted 11 days, and involved more than 70 witnesses — many of them ID theft victims. His last appearance in court was on Friday. When investigators checked in on Ealy at his home over the weekend, they found his electronic monitoring bracelet but not Ealy.

Ealy faces up to 10 years in prison on each count of possessing 15 or more unauthorized access devices with intent to defraud and using unauthorized access devices to obtain items of $1,000 or more in value; up to five years in prison on each count of filing false claims for income tax refunds with the IRS; up to 20 years in prison on each count of wire fraud and each count of mail fraud; and mandatory two-year sentences on each count of aggravated identity theft that must run consecutive to whatever sentence may ultimately be handed down. Each count of conviction also carries a fine of up to $250,000.

I hope they find Mr. Ealy soon and lock him up for a very long time. Unfortunately, he is one of countless fraudsters perpetrating this costly and disruptive form of identity theft. In 2014, both my sister and I were the victims of tax ID theft, learning that unknown fraudsters had already filed tax refunds in our names when we each filed our taxes with the IRS.

I would advise all U.S. readers to request a tax filing PIN from the IRS (sadly, it turns out that I applied for mine in February, only days after the thieves filed my tax return). If approved, the PIN is required on any tax return filed for that consumer before a return can be accepted. To start the process of applying for a tax return PIN from the IRS, check out the steps at this link. You will almost certainly need to file an IRS form 14039 (PDF), and provide scanned or photocopied records, such as a driver’s license or passport.

To read more about other ID thieves who were customers of the service that the Secret Service has nabbed and put on trial, check out the stories in this series. Ealy’s account on Twitter is also an eye-opener.

Free of earthly burdens

by Zach via The Z-Issue »

So I was perusing Reddit—an activity that can be nothing more than a way to pass time or, on occasion, can be rewarding—this evening, and found a picture of a tombstone that a father designed for his differently abled child, who passed away far too young.

The picture will certainly resonate with anyone who has a child with a “disability.” The image, though, was not the part of the post that really stuck out to me. No, there was a comment about it that really put the concept of death into perspective:

When your parents or elders die, you feel like you’ve lost a connection to the past. I’ve been told that losing a child is like living through the process of losing the future.

I agree with the person who responded by saying that it is a “crushingly profound statement.” The death of a child is not only untimely, but a chronological anomaly that simply shouldn’t occur. We as humans regularly recognise items in space and time that are out of place—they catch our attention. For instance, have you ever been watching a film set in a time period of long ago and noticed something that wasn’t available at that time (known as an anachronism, by the way)? The loss of a child is arguably the epitome of disturbances in the natural order of time.

For good measure, here is the full thread on Reddit, a link to the particular comment that I referenced, and the image hosted on imgur.

As a side note, the wonderful comment came from a user named Turkeybuzzard, which should be an indication to not pre-judge.


... mmm emulators.

by Adrian via Adrian Chadd's Ramblings »

I occasionally get asked to test out FreeBSD/MIPS patches for people, as they don't have physical hardware present. I can understand that - the hardware is cheap and plentiful, but not everyone wants to have a spare access point around just to test out MIPS changes on.

However, QEMU does a pretty good job of emulating MIPS if you're just testing out non-hardware patches. There are even instructions on the FreeBSD wiki for how to do this! So I decided to teach my wifi build system about the various QEMU MIPS emulator targets so it can spit out a kernel and mfsroot to use for QEMU.


It turns out that it wasn't all that hard. The main trick was to use qemu-devel, not qemu. There are bugs in the non-development QEMU branch that mean it works great for Linux but not FreeBSD.

The kernel configurations in FreeBSD had bitrotted a little bit (they were missing the random device, for example) but besides that the build, install and QEMU startup just worked. I now have FreeBSD/MIPS of each variety (32 bit, 64 bit, Little-Endian, Big-Endian) running under QEMU and building FreeBSD-HEAD as a basic test.
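For reference, the sort of invocation involved looks roughly like this. It is a sketch, not the exact command from my build system: the kernel and mfsroot paths are placeholders, while `-M malta` (QEMU's emulated MIPS Malta board) and `-nographic` (console on the terminal) are standard qemu-system-mips options.

```shell
# Illustrative only: assemble the QEMU command line for booting a
# FreeBSD/MIPS kernel on the emulated Malta board. Paths are placeholders.
kernel=/path/to/kernel
mfsroot=/path/to/mfsroot.img
cmd="qemu-system-mips -M malta -m 128 -kernel $kernel -hda $mfsroot -nographic"
echo "$cmd"
# run it with: eval "$cmd"
```

The big-endian and 64-bit variants use qemu-system-mips64 and the *eb-suffixed binaries in the same way.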

Next is figuring out how to build gdb to target each of the above and have it speak to the QEMU GDB stub. That should make it very easy to do MIPS platform debugging.

I also hear rumours about this stuff working somewhat for ARM and PPC, so I'll see how hard it is to run QEMU for those platforms and whether FreeBSD will just boot and run on each.

RIP ns2

by ultrabug via Ultrabug »

Today we shut down our oldest-running Gentoo Linux production server: ns2.

Obviously this machine was happily spreading our DNS records around the world, but what’s remarkable about it is that it had been doing so for 2717 straight days!

$ uptime
 13:00:45 up 2717 days,  2:20,  1 user,  load average: 0.13, 0.04, 0.01
As I mentioned when we shut down stabber, our beloved firewall, our company has been running Gentoo Linux servers in production for a long time now, and we’re always a bit sad when we have to power off one of them.

As usual, I want to take this chance to thank everyone contributing to Gentoo Linux! Without our collective work, none of this would have been possible.

Languages, native speakers, culture

by Flameeyes via Flameeyes's Weblog »

People who follow me probably know already that I'm not a native English speaker. Those who don't but read this whole post will probably notice it by the end just from my style, even if I were not to say it right at the start as I did.

It's not easy for me and it's often not easy for my teammates, especially when they are native speakers. It's too easy for both of us to underestimate or overestimate, sometimes at the same time, how much information we're conveying with a given phrase.

Something might sound absolutely rude in English to a native speaker, even though when I was forming the thought in my head it was intended to be much softer, even kind. Or the other way around: it might actually be quite polite in English, while my own interpretation of it would be much ruder. And this is neither an easy nor a quick problem to solve; I have been working within English-based communities for a long while – this weblog is almost ten years old! – and still to this day the confusion is not completely gone.

Interestingly, it is sometimes easier to interact with other non-native speakers, because we both realize the disconnect; at other times it is even harder, because one side or the other is not making the right amount of effort. I find it easier to talk with speakers of other Romance languages (French, Spanish, Portuguese), as the words and expressions are close enough that it can be easy to port them over — with a colleague and friend who's a native French speaker, we got to the point where it's sometimes faster to tell the other a word in our own language than to go through English and back again; I promised him and other friends that I'll try to learn proper French.

It is not limited to language; culture is connected too: I found that there are many connections between Italian and Balkan culture, sometimes creeping up in niches where nobody would have expected them, such as rude gestures — the "umbrella gesture" seems to work just as well for Serbs as it does for Italians. This is less obvious when interacting with people exclusively online, but it is something useful when meeting people face to face.

I can only expect that newcomers – whether they are English speakers who have never worked closely with foreigners, or people whose main language is not English and who are doing their best to communicate in this language for the first time – will have a hard time.

This is not just a matter of lacking grammar or vocabulary: languages and societal customs are often interleaved and shape each other, so not understanding someone else's language very well may also mean not understanding their society and thus their point of view, and I would argue that points of view are everything.

I will give an example, but please remember I'm clearly not a philologist, so I may be misspeaking; please be gentle with me. Some months ago, I was told that English is a sexist language. While no formal definition or reasoning was given for that statement, I was pointed at the fact that you have to use "he" or "she" when talking about a third party.

I found this funny: in Italian you have to do so not only when talking about a third party, but also when talking about a second party (you) and even a first party (me) — indeed, most adjectives and verbs require a gender. And while English can cop out with the singular "they", this does not apply to Italian as easily. You can use a generic, plural "you", but the words still need a gender — it usually becomes feminine, to match "your persons".

Because of the need for a gender in words, it is common to assume the male gender as a "default" in Italian; some documentation, especially paperwork from the public administration, will use the equivalent of "he/she" in the form of "signore/a", but it becomes cumbersome if you're writing something longer than a bank form, as every single word needs a different suffix.

I'm not trying to defend the unfortunate common mistake of assuming a male gender when talking about "the user" or any other actor in a discussion, but I think it's generally a bad idea to assume that people have a perfect understanding of the language, and thus to assign maliciousness where there is simple naïve ignorance, as was the case with Lennart, systemd and the male pronouns. I know I try hard to use the singular "they", and I know I fall short of it too many times.

But the main point I'm trying to get across here is that, in this world that keeps getting smaller, it's not easy to avoid the shocking contrast of different languages and cultures. And it can't be just one side accommodating: we all have to make an effort, by understanding the other side's limits, and by brokering between sides that would otherwise talk past each other.

It's not easy, and it takes time, and effort. But I think it's all worth it.

2005 Volkswagen Jetta Fuse Diagram

by Jeremy Olexa via »

It is surprisingly hard to find this fuse diagram online. I actually had the diagram in the glove box of my car but it is cold out and I didn’t want to sit outside reading the manual. I went in trying to find the source of my rear window defroster failure and found the fuse blown and “melted” to the plastic. I broke the fuse when I removed it and then replaced it with a spare fuse. It looks like the previous owner used a 30A when it should have been 25A. Anyway, works like a charm now – ready for winter.

Request Tracker

by titanofold via titanofold »

So, I’ve kind of taken over Request Tracker (

Initially I took it because I’m interested in using RT at work to track customer service emails. All I did at the time was bump the version and remove old, insecure versions from the tree.

However, as I’ve finally gotten around to working on getting it setup, I’ve discovered there were a lot of issues that had gone unreported.

The intention is for RT to run out of its virtual host root, like /var/www/localhost/rt-4.2.9/bin/rt, configured by /var/www/localhost/rt-4.2.9/etc/, and for it to reference any supplementary packages with ${VHOST_ROOT} as its root. However, because of a broken install process and a broken hook script used by webapp-config, that didn't happen. Further, the rt_apache.conf we included was outdated by a few years too, which in itself isn't a bad thing, except that it was wrong for RT 4+.

I spent much longer than I care to admit trying to figure out why my settings weren't sticking when I edited the configuration. I was trying to run RT under its own path rather than on a subdomain, but Set($WebPath, '/rt') wasn't doing what it should.

It also complained about not being able to write to /usr/share/webapps/rt/rt-4.2.9/data/mason_data/obj, which clearly wasn’t right.

Once I tried moving to /usr/share/webapps/rt/rt-4.2.9/etc/, and chmod and chown on ../data/mason_data/obj, everything worked as it should.

Knowing this was wrong, and that it would prevent anyone using our package from having multiple installations (aka vhosts), I set out to fix it.

It was a descent into madness. Things I expected to happen did not. Things that shouldn’t have been a problem were. Much of the trouble I had circled around webapp-config and webapp.eclass.

But, I prevailed, and now you can really have multiple RT installations side-by-side. Also, I’ve added an article ( to our wiki with updated instructions on getting RT up and running.

Caveat: I didn’t use FastCGI, so that part may be wrong still, but mod_perl is good to go.

bsdtalk248 - DragonFlyBSD with Matthew Dillon

by Mr via bsdtalk »

An interview with Matthew Dillon about the upcoming 4.0 release of DragonFly BSD.

File Info: 43Min, 20MB.

Ogg Link:

More information on PC-BSD’s new Role System

by Josh Smith via Official PC-BSD Blog »

Just a short video to explain in a little more detail what our plan is for the role system. Make sure to check it out and subscribe if you haven’t already :).

Lumina Version 0.7.2 Tagged

by Ken Moore via Official PC-BSD Blog »

The next version of the Lumina desktop environment has just been tagged in source! PC-BSD users on the “Edge” package repository should expect to see an updated package available in the next couple days. Please test it out and file bug reports or feature requests as necessary on the PC-BSD bug tracker.

What has changed in this version:

  • Streamline the startup process and Lumina utilities, with many of the utilities now being multi-threaded.
  • Enable login/logout chimes (can be disabled in the lumina-config session settings)
  • New Desktop Plugins:
    • Note Pad:  Take text notes on your desktop
    • Desktop View: Auto-generate icons for everything in the ~/Desktop folder
  • New Utility: “lumina-search”
    • Quickly search for and run applications/files/directories
    • Registered on the system applications menu under “Utilities -> Lumina Search”
    • For new Lumina users, this utility is set to automatically run with the “Alt-F2” keyboard shortcut
  • New Color Schemes:
    • Lumina-[Green, Gold, Purple, Red, Glass] now available out of box (default: Glass)
  • New backend system for registering default applications:
    • Uses mime-types instead of extensions now
    • All lumina utilities have been updated to work with the new system
    • WARNING: Previously registered defaults might not be transferred to the new system, so you may need to re-set your default web browser/email client through lumina-config after updating to the new version.
  • Miscellaneous bug fixes and minor improvements

Links for 17.11.2014

by Fabian via Fabian Fischer »

Reaching the Summit of Web Performance: Performance has become critical to the success of websites, and of e-commerce sites in particular. With customers expecting web pages to load increasingly faster, they will often lose patience, especially in a purchase process, if they have to wait for too long. – by Oliver Wegner

ASP.NET 5 Overview: ASP.NET 5 is a significant redesign of ASP.NET. This topic introduces the new concepts in ASP.NET 5 and explains how they help you develop modern web apps. Introduction to ASP.NET 5 ASP.NET 5 is a lean .NET stack for building modern web apps. – by Tom FitzMacken

Introducing Gulp, Grunt, Bower, and npm support for Visual Studio: Web Development, specifically front end web development, is fast becoming as complex and sophisticated as traditional back end development. Most projects don’t just upload some JS and CSS files via FTP.

Feedback-Centric Development – The One Hacker Way: Erik Meijer got something right in his talk “One Hacker Way”. There’s a lot of bashing and ranting… but at the core there also is a precious diamond to be found. It’s his admonition to be driven by feedback.

Link Found in Staples, Michaels Breaches

by BrianKrebs via Krebs on Security »

The breach at office supply chain Staples impacted roughly 100 stores and was powered by some of the same criminal infrastructure seen in the intrusion disclosed earlier this year at Michaels craft stores, according to sources close to the investigation.

Multiple banks interviewed by this author say they’ve received alerts from Visa and MasterCard about cards impacted in the breach at Staples, and that to date those alerts suggest that a subset of Staples stores were compromised between July and September 2014.

Sources briefed on the ongoing investigation say it involved card-stealing malicious software that the intruders installed on cash registers at approximately 100 Staples locations. Framingham, Mass.-based Staples has more than 1,800 stores nationwide.

In response to questions about these details, Staples spokesman Mark Cautela would say only that the company believes it has found and removed the malware responsible for the attack. 

“We are continuing to investigate a data security incident involving an intrusion into some of our retail point of sale and computer systems,” Cautela said in a statement emailed to KrebsOnSecurity. “We believe we have eradicated the malware used in the intrusion and have taken steps to further enhance the security of our network.  The Company is working with law enforcement and is investigating whether any retail transaction data may have been compromised. It is important to note that customers are not responsible for any fraudulent activity on their credit cards that is reported on a timely basis.”

A source close to the investigation said the malware found in Staples stores was communicating with some of the same control networks that attackers used in the intrusion at Michaels, another retail breach that was first disclosed on this blog. Michaels would later acknowledge that the incident was actually two separate, eight-month long breaches that resulted in the theft of more than three million customer credit and debit cards.

The same source compared the breach at Staples to the intrusion recently disclosed at the nationwide grocery chain Albertsons, noting that both breaches resulted in the theft of far fewer customer credit and debit cards than thieves might have stolen in these attacks. It remains unclear what factors may have limited the number of cards stolen in these breaches, particularly compared to the tens of millions of cards stolen in breaches at similar nationwide retail chains like Target and Home Depot.

I fully expect that we’ll hear about another major retail chain getting hacked as we approach another Black Friday. Any retailers that are still handling unencrypted credit card data on their networks remain an attractive and lucrative target for attackers.

MOOC: the next hyped bubble?

by erwin via Droso »

There’s been a lot of talk in the (tech) media about the future of education being online, especially in the form of Massive Open Online Courses (MOOC). So far, I hadn’t looked closely due to not enough interest and/or time, but I did stumble over an interesting one on edX today and looked a bit closer. edX has an impressive list of schools and partners listed, so it must be good, right? The course description pages have a nice summary sidebar, with school, start date, course length, estimated effort in hours per week, and prerequisites, but do not mention the price. While the price actually is well-hidden half-way through the page, I at this point assumed I had to sign up before seeing the price. During this process, there are not only Terms and Conditions, but also a Privacy Policy and an Honor Code, as the usual click-through bullets on any webpage these days. I did wonder about the Honor Code though and had a quick glance. It includes wonderful titbits like this:

EdX reserves the right to modify these TOS at any time without advance notice

My interest was piqued. It continues:

Any changes to these TOS will be effective immediately upon posting on this page, with an updated effective date. By accessing the Site after any changes have been made, you signify your agreement on a prospective basis to the modified TOS and all of the changes

Aha! Not only do I have to give carte blanche to whatever they feel like writing in there in the future without telling me, but after reading the new terms I have also already accepted them, as I had to access the site to find out about them. Catch 22.

Glancing a bit further down the page, I noticed I had to grant edX a license. Interesting, I thought, let’s find out. Anyone remember the hot water Facebook found itself in a while back?

License Grant to edX. By submitting or distributing your User Postings, you hereby grant to edX a worldwide, non-exclusive, transferable, assignable, sub licensable, fully paid-up, royalty-free, perpetual, irrevocable right and license to host, transfer, display, perform, reproduce, modify, distribute, re-distribute, relicense and otherwise use, make available and exploit your User Postings, in whole or in part, in any form and in any media formats and through any media channels (now known or hereafter developed).

Clearly, that’s a lot broader than what’s needed for the operation of the site where users’ comments are shared between other users and instructors.

Needless to say, I stopped reading here, not even bothering with the ToS or Privacy Policy. Next time a reporter goes raving that MOOCs are the greatest thing invented since sliced bread, solving everything from the bored life of university professors to literacy and poverty in the third world, point them to the small print. At a $250 fee and with that kind of legalese, it seems the nice people behind these MOOCs are not just doing it for the good of society.


PC-BSD 10.1-RELEASE Now Available

by dru via Official PC-BSD Blog »

The PC-BSD team is pleased to announce the availability of PC-BSD 10.1 release!

A very special thank you goes out to all the contributors for this release, your help and feedback was greatly appreciated!

PC-BSD 10.1 Highlights

* KDE 4.14.2
* GNOME 3.12.2
* Cinnamon 2.2.16
* Chromium 38.0.2125.104_1
* Firefox 33.1
* NVIDIA Driver 340.24
* Lumina desktop 0.7.1-beta
* Pkg 1.3.8_3
* New AppCafe HTML5 web/remote interface, for both desktop / server usage
* New CD-sized text-installer ISO files for TrueOS / server deployments
* New CentOS 6.6 Linux emulation base
* New HostAP mode for Wifi GUI utilities
* UEFI support for boot and installation
* Automatic tuning of ZFS memory usage at install time
* Support for full-disk (GELI) encryption without an unencrypted /boot partition (Also on mirror/raidz setups!)
* New VirtualBox / VMware / RAW disk images of desktop / server installations

For a more complete list of changes, please check our wiki page.


Along with our traditional PC-BSD DVD ISO image, we have also created a CD-sized ISO image of TrueOS, our server edition.

This is a text-based installer which includes FreeBSD 10.1-Release under the hood. It includes the following features:

* ZFS on Root installation
* Boot-Environment support
* Command-Line versions of PC-BSD utilities, such as Warden, Life-Preserver and more.
* Support for enabling the AppCafe web-interface for remote usage out of box
* Support for full-disk (GELI) encryption without an unencrypted /boot partition  (Also on mirror/raidz setups!)


WARNING: As with any upgrade, please ensure you have backups of all important data beforehand!

Users running 10.0-RELEASE can now update to 10.1 via the online updater GUI, or via the ‘pc-updatemanager’ command as detailed here.

Users running previous RC’s of 10.1 can also update using the following commands:

# freebsd-update fetch
# freebsd-update install
(With a possible second “freebsd-update install”, if the utility requests it)

# pkg update -f
# pkg upgrade -f

Getting media

10.1-RELEASE DVD/USB/VMs can be downloaded from this URL via HTTP or Torrent.

Reporting Bugs

Found a bug in 10.1? Please report it (with as much detail as possible) to our bugs database.

Postfix SMTP server: errors – TLS not available due to local problem

by Dan Langille via Dan Langille's Other Diary »

Postfix has been trying to tell me something: your configuration is wrong. Most cleverly, Postfix has been emailing me about this. The first email came on 22 Oct 2014. I ignored it. The second email arrived five days later on 27 October 2014: I recall looking around, but I didn’t do anything. I think I [...]

Links for 16.11.2014

by Fabian via Fabian Fischer »

Thanks to WP Stacker there is an easy way to post links from Pocket to WordPress. I am going to play with that during the next weeks and post interesting stuff from my reading list here.

Making Sense of CSP Reports @ Scale: Content Security Policy isn’t new, but it is so powerful that it still feels like the new hotness. The ability to add a header to HTTP responses that tightens user-agent security rules and reports on violations is really powerful. – by Iván L.

.NET vs. MEAN: Migrating from Microsoft to Open Source: Moving from .NET to node.js

Tracking Protection on Firefox: Tracking is the collection of a person’s browsing data across multiple sites, usually via included content. Tracking domains attempt to uniquely identify a person through the use of cookies or other technologies such as fingerprinting. Firefox is going to improve the situation for the users.

Berlin’s digital exiles: where tech activists go to escape the NSA: It’s the not knowing that’s the hardest thing, Laura Poitras tells me. “Not knowing whether I’m in a private place or not.” Not knowing if someone’s watching or not. Though she’s under surveillance, she knows that. It makes working as a journalist “hard but not impossible”. – by Carole Cadwalladr

How to do VPN on Demand for iOS at zero cost despite Apple’s best efforts to prevent this: Apple supports a feature called ‘VPN on Demand’, despite the best efforts of both Apple and VirnetX to sabotage it. There are three main requirements in order to implement this. – by John Lockwood

The Apple iPad Air 2 Review: As we approach the holidays, Apple has launched a new iPad as expected. As one might expect from the name, the iPad Air 2 is more of an evolution of the original iPad Air than a clean-sheet design. – by Joshua Ho

Fleet Additions and Prospects

by Jesco Freund via My Universe »

I have now added a turboprop aircraft to my fleet, namely the Bombardier Dash 8Q–400 by FlyJSim. The aircraft is nicely modelled, but does not come with an FMS of its own (and does not yet support the Garmin G430 and G530 GPS systems introduced with X-Plane 10.30).

Handling the aircraft is demanding, as is typical for turboprops: the torque of the propellers makes turboprop aircraft yaw and roll if the pilot does not counteract it. In addition, it is of course not only the engine power that needs to be managed (the Dash 8Q–400 has no FADEC system), but also the propeller speed.

While the Dash 8Q–400 has been available for X-Plane for quite some time, a lot of exciting things are currently brewing in the designers' project kitchens: Flight Factor is working on an Airbus A350 model, IXEG no longer seems far from completing its Boeing 737 Classic project, and Marian Günther has resumed development of his Dornier Do 328 model.

So there is a very good chance that at least one new "big" model for X-Plane will be released before Christmas (big in the sense of high fidelity, both in the 3D model and in the simulation of complex systems). Personally, I would bet on the Airbus A350, but IXEG might yet surprise us with a release this year. We have certainly waited long enough …

bsdtalk247 - FreeBSD: The Next 10 Years with Jordan Hubbard

by Mr via bsdtalk »

A recording from MeetBSD 2014 in California.  A talk by Jordan Hubbard titled "FreeBSD: The Next 10 Years."

File Info: 39Min, 18MB.

Ogg Link:

RDepending on Perl itself

by Andreas via the dilfridge blog »

Writing correct dependency specifications is an art in itself. So, here's a small guide for Gentoo developers on how to specify runtime dependencies on dev-lang/perl. First, the general rule.
Check the following two things: 1) is your package linking anywhere to, 2) is your package installing any Perl modules into Perl's vendor directory (e.g., /usr/lib64/perl5/vendor_perl/5.20.1/)? If at least one of these two questions is answered with yes, you need a slot operator in your dependency string, i.e. "dev-lang/perl:=". Obviously, your ebuild will have to be EAPI=5 for that. If neither 1) nor 2) are the case, "dev-lang/perl" is enough.
Now, with eclasses. If you use perl-module.eclass or perl-app.eclass, two variables control automatic adding of dependencies. GENTOO_DEPEND_ON_PERL sets whether the eclass automatically adds a dependency on Perl, and defaults to yes in both cases. GENTOO_DEPEND_ON_PERL_SUBSLOT controls whether the slot operator ":=" is used. It defaults to yes in perl-module.eclass and to no in perl-app.eclass. (This is actually the only difference between the eclasses.) The idea behind that is that a Perl module package always installs modules into vendor_dir, while an application can have its own separate installation path for its modules or not install any modules at all.
In many cases, if a package installs Perl modules you'll need Perl at build time as well since the module build system is written in Perl. If a package links to Perl, that is obviously needed at build time too.
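As a quick illustration, the no-eclass case with 1) or 2) true might look like this in an ebuild (a hypothetical fragment, not from the original post; EAPI=5 is required for the slot operator):

```shell
# Hypothetical ebuild fragment. The package links against Perl or
# installs modules into vendor_perl, so the := slot operator forces
# a rebuild whenever Perl's subslot changes.
EAPI=5
RDEPEND="dev-lang/perl:="
DEPEND="${RDEPEND}"
```

If neither condition holds, a plain RDEPEND="dev-lang/perl" without the slot operator is enough.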

So, summarizing:
  • no eclass: if 1) or 2) is true, "dev-lang/perl:=" is needed in RDEPEND and most likely also DEPEND; otherwise "dev-lang/perl" is needed in RDEPEND, maybe also in DEPEND.
  • perl-module.eclass: if 1) or 2) is true, no need to do anything; otherwise GENTOO_DEPEND_ON_PERL_SUBSLOT=no is possible before the inherit.
  • perl-app.eclass: if 1) or 2) is true, GENTOO_DEPEND_ON_PERL_SUBSLOT=yes is needed before the inherit; otherwise no need to do anything.

Making a new demuxer

by lu_zero via Luca Barbato »

Maxim asked me to check a stream from a security camera that he could not decode with avconv without forcing the format to mjpeg.

Mysterious stream

Since it is served over HTTP, the first step was checking the mime type. Time to use curl -I.

# curl -I "http://host/some.cgi?user=admin&pwd=pwd" | grep Content-Type
Interestingly enough, it is multipart/x-mixed-replace:

Content-Type: multipart/x-mixed-replace;boundary=object-ipcamera
Basically the cgi sends jpeg images one after the other; we even have an (old and ugly) muxer for it!

Time to write a demuxer.

Libav demuxers

We already have some documentation on how to write a demuxer, but it is not complete so this blogpost will provide an example.


Libav code is quite object oriented: every component is a C structure containing a description of it and pointers to a set of functions, and there are fixed patterns that make it easier to fit new code in.

Every major library has an all${components}.c in which the components are registered to be used. In our case we talk about libavformat so we have allformats.c.

The components are built according to CONFIG_${name}_${component} variables generated by configure. The actual code resides in the ${component} directory with a pattern such as ${name}.c, or ${name}dec.c/${name}enc.c if both demuxer and muxer are available.

The code can be split in multiple files if it starts growing to an excess of 500-1000 LOCs.

We have some REGISTER_ macros that abstract some logic to make every component selectable at configure time since in Libav you can enable/disable every muxer, demuxer, codec, IO/protocol from configure.

We already had a muxer for the format.

    REGISTER_MUXER   (MPJPEG,           mpjpeg);
Now we register both in a single line:

    REGISTER_MUXDEMUX(MPJPEG,           mpjpeg);
The all${components} files are parsed by configure to generate the appropriate Makefile and C definitions. On the next configure run we’ll get a new CONFIG_MPJPEG_DEMUXER variable in config.mak and config.h.

Now we can add to libavformat/Makefile a line like

OBJS-$(CONFIG_MPJPEG_DEMUXER)            += mpjpegdec.o
and put our mpjpegdec.c in libavformat and we are ready to write some code!

Demuxer structure

Usually I start by putting down a skeleton file with the bare minimum: the AVInputFormat and the core _read_probe, _read_header and _read_packet callbacks.

#include "avformat.h"

static int ${name}_read_probe(AVProbeData *p)
{
    return 0;
}

static int ${name}_read_header(AVFormatContext *s)
{
    return AVERROR(ENOSYS);
}

static int ${name}_read_packet(AVFormatContext *s, AVPacket *pkt)
{
    return AVERROR(ENOSYS);
}

AVInputFormat ff_${name}_demuxer = {
    .name           = "${name}",
    .long_name      = NULL_IF_CONFIG_SMALL("Longer ${name} description"),
    .read_probe     = ${name}_read_probe,
    .read_header    = ${name}_read_header,
    .read_packet    = ${name}_read_packet,
};
I make it so that all the functions return a no-op value.


_read_probe

This function will be called by the av_probe_input functions; it receives some probe information in the form of a buffer. The function returns a score between 0 and 100; AVPROBE_SCORE_MAX, AVPROBE_SCORE_MIME and AVPROBE_SCORE_EXTENSION are provided to make the expected confidence more evident. 0 means that we are sure that the probed stream is not parsable by this demuxer.


_read_header

This function will be called by avformat_open_input. It reads the initial format information (e.g. number and kind of streams) when available; in this function the initial set of streams should be mapped with avformat_new_stream. It must return 0 on success. The skeleton is made to return ENOSYS so it can be run and just exit cleanly.


_read_packet

This function will be called by av_read_frame. It should return an AVPacket containing demuxed data as contained in the bytestream; it will be parsed and collated (or split) to a frame-worth amount of data by the optional parsers. It must return 0 on success. The skeleton again returns ENOSYS.


Now let’s implement the mpjpeg support! The format in itself is quite simple:
- a boundary line starting with --
- a Content-Type line stating image/jpeg.
- a Content-Length line with the actual buffer length.
- the jpeg data

Probe function

We basically just want to check that the Content-Type is what we expect, so we go over the lines (\r\n-terminated) and check if there is a Content-Type tag with a value of image/jpeg.

static int get_line(AVIOContext *pb, char *line, int line_size)
{
    int i, ch;
    char *q = line;

    for (i = 0; !pb->eof_reached; i++) {
        ch = avio_r8(pb);
        if (ch == '\n') {
            if (q > line && q[-1] == '\r')
                q--;
            *q = '\0';

            return 0;
        } else {
            if ((q - line) < line_size - 1)
                *q++ = ch;
        }
    }

    if (pb->error)
        return pb->error;
    return AVERROR_EOF;
}

static int split_tag_value(char **tag, char **value, char *line)
{
    char *p = line;

    while (*p != '\0' && *p != ':')
        p++;
    if (*p != ':')
        return AVERROR_INVALIDDATA;

    *p   = '\0';
    *tag = line;
    p++;

    while (av_isspace(*p))
        p++;

    *value = p;

    return 0;
}

static int check_content_type(char *line)
{
    char *tag, *value;
    int ret = split_tag_value(&tag, &value, line);

    if (ret < 0)
        return ret;

    if (av_strcasecmp(tag, "Content-type") ||
        av_strcasecmp(value, "image/jpeg"))
        return AVERROR_INVALIDDATA;

    return 0;
}

static int mpjpeg_read_probe(AVProbeData *p)
{
    AVIOContext *pb;
    char line[128] = { 0 };
    int ret;

    pb = avio_alloc_context(p->buf, p->buf_size, 0, NULL, NULL, NULL, NULL);
    if (!pb)
        return AVERROR(ENOMEM);

    while (!pb->eof_reached) {
        ret = get_line(pb, line, sizeof(line));
        if (ret < 0)
            break;

        ret = check_content_type(line);
        if (!ret)
            return AVPROBE_SCORE_MAX;
    }

    return 0;
}
Here we are using avio to be able to reuse get_line later.

Reading the header

The format is pretty much header-less; we just check for the boundary for now and set up the minimum amount of information regarding the stream: media type, codec id and frame rate. The boundary, by specification, is less than 70 characters, with -- as the initial marker.

static int mpjpeg_read_header(AVFormatContext *s)
{
    AVStream *st;
    char boundary[70 + 2 + 1];
    int ret;

    ret = get_line(s->pb, boundary, sizeof(boundary));
    if (ret < 0)
        return ret;

    if (strncmp(boundary, "--", 2))
        return AVERROR_INVALIDDATA;

    st = avformat_new_stream(s, NULL);
    if (!st)
        return AVERROR(ENOMEM);

    st->codec->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codec->codec_id   = AV_CODEC_ID_MJPEG;

    avpriv_set_pts_info(st, 60, 1, 25);

    return 0;
}

Reading packets

Even this function is quite simple; note that AVFormatContext provides an
AVIOContext. The bulk of the function boils down to reading the size of the frame,
allocating a packet using av_new_packet and filling it using avio_read.

static int parse_content_length(char *line)
{
    char *tag, *value;
    int ret = split_tag_value(&tag, &value, line);
    long int val;

    if (ret < 0)
        return ret;

    if (av_strcasecmp(tag, "Content-Length"))
        return AVERROR_INVALIDDATA;

    val = strtol(value, NULL, 10);
    if (val == LONG_MIN || val == LONG_MAX)
        return AVERROR(errno);
    if (val > INT_MAX)
        return AVERROR(ERANGE);
    return val;
}

static int mpjpeg_read_packet(AVFormatContext *s, AVPacket *pkt)
{
    char line[128];
    int ret, size;

    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        return ret;

    ret = check_content_type(line);
    if (ret < 0)
        return ret;

    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        return ret;

    size = parse_content_length(line);
    if (size < 0)
        return size;

    // Skip the empty line separating the headers from the data
    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        return ret;

    ret = av_new_packet(pkt, size);
    if (ret < 0)
        return ret;

    ret = avio_read(s->pb, pkt->data, size);
    if (ret < 0)
        goto fail;

    // Consume the boundary marker
    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        goto fail;

    return ret;

fail:
    av_free_packet(pkt);
    return ret;
}

What next

For now I walked you through the fundamentals; hopefully next week I’ll show you some additional features I’ll need to implement in this simple demuxer to make it land in Libav: AVOptions to make it possible to override the framerate, and some additional code to be able to do without Content-Length and just use the boundary line.

PS: wordpress support for syntax highlighting is quite subpar; if somebody has a blog engine that can use pygments or equivalent, please tell me and I’d switch to it.

Process tracking in Upstart

by Sebastian Marsching via Open-Source, Physik & Politik »

Recently, I experienced a problem with the process tracking in Upstart:

I wanted to start a daemon process running as a specific user. However, I needed root privileges in the pre-start script, so I could not use the setuid/setgid options. I tried to use su in the exec option, but then Upstart would not track the right process. The expect fork and expect daemon options did not help either. As a nasty side effect, these options cannot be tested easily, because having the wrong option will leave Upstart waiting for an already dead process to die, and there is no way to reset the status in Upstart. At least, there is a workaround for effectively resetting the status without restarting the whole computer.

The problem is that su forks when running the command it is asked to run instead of calling exec from the main process. Unfortunately, the process I had to run would fork again because I had to run it as a daemon (not running it as a daemon had some undesirable side effects). Finally, I found the solution: instead of using su, start-stop-daemon can be used. This tool will not fork and therefore will not upset Upstart's process tracking. For example, the line

exec start-stop-daemon --start --chuid daemonuser --exec /bin/server_cmd
will run /bin/server_cmd as daemonuser without forking. This way, expect fork or expect daemon can be used, depending on the fork behavior of the final process.
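Putting the pieces together, a complete job file might look like the following sketch (the file name /etc/init/myserver.conf and the pre-start contents are my assumptions for illustration; daemonuser and /bin/server_cmd are taken from the line above):

```
# /etc/init/myserver.conf -- hypothetical Upstart job sketch
description "example daemon started via start-stop-daemon"

start on runlevel [2345]
stop on runlevel [!2345]

# server_cmd daemonizes itself (double fork), hence:
expect daemon

pre-start script
    # root-privileged setup that the setuid/setgid options would prevent
    mkdir -p /var/run/myserver
    chown daemonuser /var/run/myserver
end script

# start-stop-daemon execs the target without forking itself,
# so Upstart's process tracking stays correct
exec start-stop-daemon --start --chuid daemonuser --exec /bin/server_cmd
```

The expect stanza must still match the fork behavior of the final process; drop it entirely if the daemon stays in the foreground.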

Small differences don't matter (to unpaper)

by Flameeyes via Flameeyes's Weblog »

After my challenge with the fused multiply-add instructions I managed to find some time to write a new test utility. It's written ad hoc for unpaper but it can probably be used for other things too. It's trivial and stupid but it got the job done.

What it does is simple: it loads both a golden and a result image file, compares the size and format, and then goes through all the bytes to identify how many differences there are between them. If less than 0.1% of the image surface changed, it considers the test a pass.
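The pass/fail rule can be sketched as a byte-level approximation in shell (compare_images is a hypothetical name of my own, and the real utility compares decoded image surfaces rather than raw file bytes):

```shell
# Pass if fewer than 0.1% of the bytes differ between the golden
# and the result file (files assumed to have the same size/format).
compare_images() {
    total=$(( $(wc -c < "$1") ))
    diffs=$(( $(cmp -l "$1" "$2" | wc -l) ))
    # diffs/total < 0.1%  <=>  diffs * 1000 < total
    if [ $(( diffs * 1000 )) -lt "$total" ]; then
        echo PASS
    else
        echo FAIL
    fi
}
```

cmp -l prints one line per differing byte, which makes counting the differences trivial.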

It's not a particularly nice system, especially as it requires me to bundle some 180MB of golden files (they compress to just about 10 MB so it's not a big deal), but it's a strict improvement compared to what I had before, which is good.

This change actually allowed me to explore one change that I abandoned before because it resulted in non-pixel-perfect results. In particular, unpaper now uses single-precision floating point all over, rather than doubles. This is because the slight imperfections caused by this change are not relevant enough to warrant the ever-so-slight loss in performance due to the bigger variables.

But even up to here, there is very little gain in performance. Sure some calculation can be faster this way, but we're still using the same set of AVX/FMA instructions. This is unfortunate, unless you start rewriting the algorithms used for searching for edges or rotations, there is no gain to be made by changing the size of the code. When I converted unpaper to use libavcodec, I decided to make the code simple and as stupid as I could make it, as that meant I could have a baseline to improve from, but I'm not sure what the best way to improve it is, now.

I still have a branch that uses OpenMP for the processing, but since most of the filters applied are dependent on each other it does not work very well. Per-row processing gets slightly better results but they are really minimal as well. I think the most interesting parallel processing low-hanging fruit would be to execute processing in parallel on the two pages after splitting them from a single sheet of paper. Unfortunately, the loops used to do that processing right now are so complicated that I'm not looking forward to touch them for a long while.

I tried some basic profile-guided optimisation, just to figure out what needs to be improved, and compared, with codiff, a proper release and a PGO version trained on the tests. Unfortunately the results are a bit vague and it means I'll probably have to profile it properly if I want to get data out of it. If you're curious here is the output when using rbelf-size -D on the unpaper binary when built normally, with profile-guided optimisation, with link-time optimisation, and with both profile-guided and link-time optimisation:

% rbelf-size -D ../release/unpaper ../release-pgo/unpaper ../release-lto/unpaper ../release-lto-pgo/unpaper
    exec         data       rodata        relro          bss     overhead    allocated   filename
   34951         1396        22284            0        11072         3196        72899   ../release/unpaper
   +5648         +312         -192           +0         +160           -6        +5922   ../release-pgo/unpaper
    -272           +0        -1364           +0         +144          -55        -1547   ../release-lto/unpaper
   +7424         +448        -1596           +0         +304          -61        +6519   ../release-lto-pgo/unpaper
It's unfortunate that GCC does not give you any diagnostics on what it's trying to achieve when doing LTO; it would be interesting to see whether you could steer the compiler to produce equally good code without it.

Anyway, enough with the micro-optimisations for now. If you want to make unpaper faster, feel free to send me pull requests; I'll be glad to take a look at them!

Having fun with networking

by Patrick via Patrick's playground »

Since the last minor upgrade my notebook has been misbehaving in funny ways.
I presumed that it was NetworkManager being itself, but ... this is even more fun. To quote from the manpage:
If the hostname is currently blank, (null) or localhost, or force_hostname is YES or TRUE or 1 then dhcpcd sets the hostname to the one supplied by the DHCP server.
Guess what. Now my hostname is, I mean, err...
And as a bonus this even breaks X in funny ways so that starting new apps becomes impossible. The fix?
Now the hostname is set to "localhorst". Because that's the name of the machine!111 (It doesn't have an explicit name, so localhost used to be ok)
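The quoted rule boils down to roughly this logic (a Python sketch; dhcpcd itself is written in C):

```python
def should_set_hostname(current: str, force_hostname: bool) -> bool:
    """Sketch of the quoted dhcpcd rule: a blank, "(null)" or "localhost"
    hostname gets replaced by whatever the DHCP server supplies."""
    return force_hostname or current in ("", "(null)", "localhost")

# This is exactly how the surprise rename happens: "localhost" is
# treated as "no name set", so the DHCP server's choice wins.
assert should_set_hostname("localhost", False)
assert should_set_hostname("", False)
assert not should_set_hostname("localhorst", False)   # an explicit name survives
assert should_set_hostname("localhorst", True)        # unless forced
```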

The tenth man

by Fabian via Fabian Fischer »

A follow up to the text about responsive enterprises in video form. Take the time, it’s worth seeing.

One Hacker Way – Erik Meijer from Reaktor on Vimeo.

Thanks Tom.

‘Microsoft Partner’ Claims Fuel Support Scams

by BrianKrebs via Krebs on Security »

You can’t make this stuff up: A tech support company based in the United States that outsources its work to India says its brand is being unfairly maligned by — wait for it… support scammers based in India. In an added twist, the U.S.-based tech support firm acknowledges that the trouble may be related to its admittedly false statements about being a Microsoft Certified Partner — the same false statements made by most telephone-based tech support scams.

Tech support scams are, unfortunately, an extremely common scourge. Most such scams are the telephonic equivalent of rogue antivirus attacks, which try to frighten consumers into purchasing worthless security software and services. Both types of scams try to make the consumer believe that the caller is somehow associated with Microsoft or with a security company, and each caller tries to cajole or scare the consumer into giving up control over his or her PC.

Earlier this month, a reader shared a link to a lengthy YouTube video by freelance journalist Carey Holzman, in which Holzman turns the tables on the tech support scammers. During the video, Holzman plays along and gives the scammer remote control access to a test computer he’s set up specifically for this video. The scammer, who speaks with a strong Indian accent but calls himself “Steve Wilson” from the “Microsoft technical department,” tries to convince Holzman that he works for a company that is a legitimate Microsoft support partner.

“Let me show you who we are,” the scammer says, opening up a search engine and typing SB3 Inc. Clicking on the first result brings up sb3inc[dot]com, which proudly displays an icon in the upper right corner of its home page stating that it is a Microsoft Certified Partner. “This is our mother company. Can you see that we are a Microsoft certified partner?”

When Holzman replies that this means nothing and that anyone can just put a logo on their site saying they’re associated with Microsoft, the scammer runs a search for SB3. The scammer shows true chutzpah when he points to the first result, which — if clicked — leads to a page on Microsoft’s community site where members try to warn the poster away from SB3 as a scam.

When Holzman tries to get the scammer to let him load the actual search result link about SB3, the caller closes the browser window and proceeds to enable the SysKey utility on Windows, which allows the scammer to set a secret master password that must be entered before the computer will boot into Windows (effectively an attempt at locking Holzman out of his test computer if he tries to reboot).

The video goes on for some time more, but I decided to look more closely at SB3. The Web site registration records for the company state that it is based in New Jersey, and it took less than a minute to find the Facebook page of the company’s owner — a Suvajit “Steve” Basu in Ridgewood, NJ. Basu’s Facebook feed has him traveling the world, visiting the World Cup in Brazil in 2014, the Ryder Cup in 2012, and more recently taking delivery on a brand new Porsche.

Less than 24 hours after reaching out to him on Facebook and by phone, Basu returns my call and says he’s working to get to the bottom of this. Before I let him go, I tell Basu that I can’t find on Microsoft’s Partner Site any evidence to support SB3’s claim that it is a Microsoft Certified Partner. Basu explains that while the company at one time was in fact a partner, this stopped being the case “a few months ago.” For its part, Microsoft would only confirm that SB3 is not currently a Microsoft partner of any kind.

SB3’s homepage, before it removed the false “Microsoft Partner” claim.
Basu explained that Microsoft revoked SB3’s partner status after receiving complaints that customers were being cold-called by SB3 technicians claiming to be associated with Microsoft. “Microsoft had gotten complaints and we took out all references to Microsoft as part of our script,” Basu said, referring to the script the company gives to tech support callers.

As for why SB3 still falsely claimed to be a Microsoft Partner, Basu said his instructions to take the logo down from the site had apparently been ignored by his site’s administrators.

“That was a mistake for which we do take the blame and responsibility,” Basu said in a follow-up email. “We have corrected this immediately on hearing from you and you will no longer find a mention of Microsoft on our SB3Inc Website.”

Basu said SB3 is a legitimate company based in the USA which uses off-shore manpower and expertise to sell tech support services through its iFixo arm, and that the company never participates in the sort of scammy activities depicted in Holzman’s video. Basu maintains that scammers are impersonating the company and taking advantage of its good name, and points to a section of the video where the scammer loads a payment page at support2urpc[dot]com, suggesting that Support to Your PC is the real culprit (the latter company did not return messages seeking comment).

“After viewing your video it is obvious to us that one or more persons out there are misusing our brand and good-will,” Basu wrote. “We feel horrible and feel that along with the unknowing consumers we are also victims. This is corporate identity theft.”

SB3 may well be a legitimate company that is being scammed by the scammers, but if that’s true the company has done itself and its reputation no favors by falsely stating it is a Microsoft partner. What’s more, complaints about tech support scammers claiming to be from SB3 are numerous and date back more than a year. I find it remarkable that a tech support company with the uncommon distinction of having secured a good name in this line of work would not act more zealously to guard that reputation. Alas, a simple Internet search on the SB3 brand would have alerted the company to these shenanigans.

SB3 has since removed the Microsoft Certified Partner logo from its home page, but the image is still on its server. Running a search on that image at a reverse image search Web site — an extremely useful kind of investigative tool — produces more than 11,700 results. No doubt Microsoft and other scam hunters have used such tools to locate tech support scams, which may explain why support2urpc[dot]com does not appear to include the same image on its site but instead claims association with sites that do.

FreeBSD 10.1-RELEASE Available

by Webmaster Team via FreeBSD News Flash »

FreeBSD 10.1-RELEASE is now available. Please be sure to check the Release Notes and Release Errata before installation for any late-breaking news and/or issues with 10.1. More information about FreeBSD releases can be found on the Release Information page.

Network Hijackers Exploit Technical Loophole

by BrianKrebs via Krebs on Security »

Spammers have been working methodically to hijack large chunks of Internet real estate by exploiting a technical and bureaucratic loophole in the way that various regions of the globe keep track of the world’s Internet address ranges.

Last week, KrebsOnSecurity featured an in-depth piece about a well-known junk email artist who acknowledged sending from two Bulgarian hosting providers. These two providers had commandeered tens of thousands of Internet addresses from ISPs around the globe, including Brazil, China, India, Japan, Mexico, South Africa, Taiwan and Vietnam.

For example, a closer look at the Internet addresses hijacked by one of the Bulgarian providers — aptly named “Mega-Spred” with an email contact of “abuse@grimhosting” — shows that this provider has been slowly gobbling up far-flung IP address ranges since late August 2014.

This table, with data from the RIPE NCC, one of the regional Internet Registries, shows IP address hijacking activity by Bulgarian host Mega-Spred.
According to several security and anti-spam experts who’ve been following this activity, Mega-Spred and the other hosting provider in question (known as Kandi EOOD) have been taking advantage of an administrative weakness in the way that some countries and regions of the world keep tabs on the IP address ranges assigned to various hosting providers and ISPs. Neither Kandi nor Mega-Spred responded to requests for comment.

IP address hijacking is hardly a new phenomenon. Spammers sometimes hijack Internet address ranges that go unused for periods of time. Dormant or “unannounced” address ranges are ripe for abuse partly because of the way the global routing system works: Miscreants can “announce” to the rest of the Internet that their hosting facilities are the authorized location for given Internet addresses. If nothing or nobody objects to the change, the Internet address ranges fall into the hands of the hijacker.
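The "announce and see if anyone objects" dynamic described above can be caricatured in a few lines. This is a toy model for illustration only, not real BGP path selection:

```python
# Toy model: the routing system believes whoever announced a prefix
# last, unless the legitimate holder actively objects. Dormant
# ("unannounced") ranges have no one watching, so hijacks stick.
routes = {}  # prefix -> AS currently believed to originate it

def announce(prefix, asn, holder_objects=False):
    """Accept an announcement unless the rightful holder pushes back."""
    if not holder_objects:
        routes[prefix] = asn

announce("203.0.113.0/24", "AS-LEGIT")      # rightful holder announces
announce("203.0.113.0/24", "AS-HIJACKER")   # dormant range: nobody objects
assert routes["203.0.113.0/24"] == "AS-HIJACKER"

announce("198.51.100.0/24", "AS-HIJACKER", holder_objects=True)
assert "198.51.100.0/24" not in routes      # an attentive holder stops it
```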

Experts say the hijackers also are exploiting a fundamental problem with record-keeping activities of RIPE NCC, the regional Internet registry (RIR) that oversees the allocation and registration of IP addresses for Europe, the Middle East and parts of Central Asia. RIPE is one of several RIRs, including ARIN (which handles mostly North American IP space) and APNIC (Asia Pacific), LACNIC (Latin America) and AFRINIC (Africa).

Ron Guilmette, an anti-spam crusader who is active in numerous Internet governance communities, said the problem is that a network owner in RIPE’s region can hijack Internet addresses that belong to network owners in regions managed by other RIRs, and if the hijackers then claim to RIPE that they’re the rightful owners of those hijacked IP ranges, RIPE will simply accept that claim without verifying or authenticating it.

Worse yet, Guilmette and others say, those bogus entries — once accepted by RIPE — get exported to other databases that are used to check the validity of global IP address routing tables, meaning that parties all over the Internet who are checking the validity of a route may be doing so against bogus information created by the hijacker himself.

“RIPE is now acutely aware of what is going on, and what has been going on, with the blatantly crooked activities of this rogue provider,” Guilmette said. “However, due to the exceptionally clever way that the proprietors of Mega-Spred have performed their hijacks, the people at RIPE still can’t even agree on how to even undo this mess, let alone how to prevent it from happening again in the future.”

And here is where the story perhaps unavoidably warps into Geek Factor 5. For its part, RIPE said in an emailed statement to KrebsOnSecurity that the RIPE NCC has no knowledge of the agreements made between network operators or with address space holders.

“It’s important to note the distinction between an Internet Number Registry (INR) and an Internet Routing Registry (IRR). The RIPE Database (and many of the other RIR databases) combine these separate functionalities. An INR records who holds which Internet number resources, and the sub-allocations and assignments they have made to End Users.

On the other hand, an IRR contains route and other objects — which detail a network’s policies regarding who it will peer with, along with the Internet number resources reachable through a specific ASN/network. There are 34 separate IRRs globally — therefore, this isn’t something that happens at the RIR level, but rather at the Internet Routing Registry level.”

“It is not possible therefore for the RIRs to verify the routing information entered into Internet Routing Registries or monitor the accuracy of the route objects,” the organization concluded.

Guilmette said RIPE’s response seems crafted to draw attention away from RIPE’s central role in this mess.

“It is somewhat disingenuous, I think, for this RIPE representative to wave this whole mess off as a problem with the IRRs when in this specific case, the IRR that first accepted and then promulgated these bogus routing validation records was RIPE,” he said.

RIPE notes that network owners can reduce the occurrence of IP address hijacking by taking advantage of Resource Certification (RPKI), a free service to RIPE members and non-members that allows network operators to request a digital certificate listing the Internet number resources they hold. This allows other network operators to verify that routing information contained in this system is published by the legitimate holder of the resources. In addition, the system enables the holder to receive notifications when a routing prefix is hijacked, RIPE said.
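The route-origin check RPKI enables can be illustrated with a much-simplified validator. This is a sketch of the idea only — real validators work on cryptographically signed ROAs, and the prefixes and AS numbers below are made up:

```python
import ipaddress

def roa_validate(prefix, origin_asn, roas):
    """Return 'valid', 'invalid', or 'unknown' for an announcement,
    given ROAs as (prefix, max_length, authorized_asn) tuples."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.subnet_of(roa_net):
            covered = True  # some ROA speaks for this address space
            if asn == origin_asn and net.prefixlen <= max_len:
                return "valid"
    # Covered but wrong origin/length -> invalid (possible hijack);
    # no ROA at all -> unknown (legacy space, no protection).
    return "invalid" if covered else "unknown"

roas = [("198.51.100.0/22", 24, 64501)]
assert roa_validate("198.51.100.0/24", 64501, roas) == "valid"
assert roa_validate("198.51.100.0/24", 64666, roas) == "invalid"   # hijack
assert roa_validate("192.0.2.0/24", 64501, roas) == "unknown"      # no ROA
```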

While RPKI (and other solutions to this problem, such as DNSSEC) have been around for years, obviously not all network providers currently deploy these security measures. Erik Bais, a director at A2B Internet BV — a Dutch ISP — said that while broader adoption of solutions like RPKI would certainly help in the long run, one short-term fix is for RIPE to block its Internet providers from claiming routes in address ranges managed by other RIRs.

“This is a quick fix, but it will break things in the future for legitimate usage,” Bais said.

According to RIPE, this very issue was discussed at length at the recent RIPE 69 Meeting in London last week.

“The RIPE NCC is now working with the RIPE community to investigate ways of making such improvements,” RIPE said in a statement.

This is a complex problem to be sure, but I think this story is a great reminder of two qualities about Internet security in general that are fairly static (for better or worse): First, much of the Internet works thanks to the efforts of a relatively small group of people who work very hard to balance openness and ease-of-use with security and stability concerns. Second, global Internet address routing issues are extraordinarily complex — not just in technical terms but also because they require coordination and consensus among multiple stakeholders with sometimes radically different geographic and cultural perspectives. Unfortunately, complexity is the enemy of security, and spammers and other ne’er-do-wells understand and exploit this gap as often as possible.

Soon every company will be a software company

by Fabian via Fabian Fischer »

Good article regarding “Responsive Enterprises” by Erik Meijer and Vikram Kapoor.

In the next decade every business will be digitized and effectively become a software company. Leveraging software, and, in general, computational thinking, to make a business responsive to change using a closed-loop feedback system will be crucial to surviving in this new world where business = data + algorithms. Some examples of responsive companies follow.


Wireless Diagnostics on OSX – check your wifi

by Dan Langille via Dan Langille's Other Diary »

I wanted to know how many wireless access points (WAPs) were using what channels near my place. I googled and found a reference to the built-in OSX tool, Wireless Diagnostics. But to be fair, the app is hidden. To access the app, hold the Command key while clicking on the WIFI icon. This will change [...]

New attack scenarios in the cloud

by Fabian via Fabian Fischer »

Browserstack had a security incident. It is quite interesting to see what happened and how the attacker gained access to the EC2 infrastructure.

The old prototype machine had our AWS API access key and secret key. Once the hacker gained access to the keys, he created an IAM user, and generated a key-pair. He was then able to run an instance inside our AWS account using these credentials, and mount one of our backup disks. This backup was of one of our component services, used for production environment, and contained a config file with our database password. He also whitelisted his IP on our database security group, which is the AWS firewall.
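Given the attack chain quoted above (a new IAM user, a new key pair, a whitelisted attacker IP), a periodic audit comparing the live account state against what was actually provisioned would have raised an early alarm. The following is a hedged pure-Python sketch: in practice the lists would come from the AWS API, and every name below is invented.

```python
def audit(iam_users, sg_cidrs, expected_users, expected_cidrs):
    """Flag IAM users and security-group CIDRs that nobody provisioned."""
    return {
        "rogue_users": sorted(set(iam_users) - set(expected_users)),
        "rogue_cidrs": sorted(set(sg_cidrs) - set(expected_cidrs)),
    }

report = audit(
    iam_users=["deploy", "backup", "tempadmin"],   # "tempadmin": attacker-created
    sg_cidrs=["10.0.0.0/8", "198.51.100.77/32"],   # attacker's whitelisted IP
    expected_users=["deploy", "backup"],
    expected_cidrs=["10.0.0.0/8"],
)
assert report["rogue_users"] == ["tempadmin"]
assert report["rogue_cidrs"] == ["198.51.100.77/32"]
```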


Veteran’s Day is one of uncertainty

by Zach via The Z-Issue »

Today, 11 November, is an interesting holiday in the United States. It is the day in which we honour those individuals who have served in the armed forces and have defended their country. I say that it is an interesting holiday because I am torn on how I feel about the entire concept. On one hand, I am incredibly grateful for those people that have fought to defend the principles and freedoms on which the United States was founded. However, the fight itself is one that I cannot condone.

There is no flag large enough to cover the shame of killing innocent people
Threats to freedom in any nation are brought about by political groups, and should be handled in a political manner. I understand that my viewpoint here is one of pseudoutopian cosmography, but it is one that I hope will become more and more realistic as both time and humanity march onward. The “wars” should be fought by national leaders, and done so via discussion and debate; not by citizens (military or civilian) via guns, bombs, or other weaponry.

I also understand that there will be many people who disagree (in degrees that result in emotions ranging from mild irritation to infuriated hostility) with my viewpoint, and that is completely fine. Again, my dilemma comes from being simultaneously thankful for those individuals who have given their all to defend “freedom” (whatever concept that word may represent) and sorrowful that they were the ones that had to give anything at all. These men and women had to leave their families knowing that they may never return to them; knowing that they may die trying to defend something that shouldn’t be challenged in the first place—human freedoms.

Who will explain it to him?
Let us not forget a quote by former President of the United States John F. Kennedy, who stated that “mankind must put an end to war before war puts an end to mankind.”


Simple DirecTV Hack to get Netflix without another ethernet drop

by Warner Losh via Warner's Random Hacking Blog »

DirecTV has been using MoCA for some time to implement their whole-home DVR. I've been a DirecTV subscriber for years. In my previous house, I had easy access to the locations where the TVs were, and it was trivial to pull Cat5 cable to them. In my new house, however, a number of issues made it tricky to pull new cable. The finished basement sealed in many of the cable runs without a nice pull-string. The cables were run through the joists in a point-to-point manner. The exterior walls of the house are insulated with hay bales. These are great for insulation, but impossible to run electrical cable through (and even if it weren't, fire codes prevent it). Finally, the attic has lots (48") of blown-in cellulose insulation. Great for my heating bills, but not so good for access.

I went ahead and had my DirecTV professionally installed. As a lifelong DIY person, this was tough for me. However, time was short and this saved me a day of dinking with it. The installers have gotten much better since the bad experiences I had 15 years ago. They got me all set up. I have a nice new HR24 in the main TV area, and an HR23 from my old house. Sadly, I had to let go of the HR10 which I'd been pulling video off of for years. Such is the price of progress.

In the basement, they installed a DECABB1MR0 (DirecTV Ethernet to Coax Adapter). This is a simple device that connects your ethernet to the cable plant coming off the SWM. They connected this so the HR23 and HR24 would be connected to the internet. This technology is called MoCA, and it drives much of the home sharing. The HR24 has MoCA built in, but the HR23 didn't. The cable guys installed the HR23 directly into a slightly different device (a DECA-II that was powered from the HR23 box and provided a single ethernet port). They told me it was the only way it could work. However, they were mistaken. I now have an ethernet switch connecting the DECA-II, my HR23 and my TV with Netflix. This works great. It is a bit anti-climactic, though, since I also had a Cat5 port near this TV I could have used instead. But since I needed the switch anyway, I thought I'd save myself the time and trouble of troubleshooting the Cat5 port (all of them in this house have been wired wrong) and use this setup. I've watched movies from Netflix, DirecTV shows, shows recorded on my HR24, as well as on-demand shows streamed over the internet. It's all good.

Next up, I had to tackle the HR24. There's no power for the DECA-II adapters coming off this box. And it didn't occur to me until I was writing up this blog post that I could have just used a DECA-II by switching the port on the SWM from the unpowered to the powered one and plugging the ethernet directly into my Sony player. However, I was able to do the next best thing. I got a second DECABB1MR0 on ebay for $10.00 or so. I got a good 2-1 splitter (3.5dB loss) and some good 1' RG-6 cables online. I was then able to wire the cable from the wall into the splitter. One leg went to the HR24. The other leg went to the DECA to go on to my Sony player. The splitter did result in a small loss of signal to the HR24, but all the signal strength levels look good on the setup screen. I've been able to stream Netflix off the Sony DVD player as well. I've not yet seen how well this responds in snow and ice.

Anyway, just a simple connection of different technologies. I learned that these devices are nice MoCA bridges, and that you can get decent performance over MoCA (decent enough for streaming video), though the ping times are much greater (3-4ms rather than the 10-100us I'm used to seeing over GB ethernet). The technology is a bridging technology, with multiple devices allowed on either side of the bridge. The 3.5dB splitter worked better than I feared it might and is a simple way to make the connection. The powered port option should be kept in mind for future expansion if and when I add another receiver.
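As a sanity check on the splitter numbers above: a loss in decibels converts to a surviving power fraction as 10^(-dB/10), so a 3.5dB splitter passes a bit under half the power to each leg. A quick sketch:

```python
def db_to_power_ratio(db_loss):
    """Fraction of power surviving a given dB loss."""
    return 10 ** (-db_loss / 10)

# A 3.5dB splitter leg passes roughly 45% of the input power, which
# is consistent with the signal levels still reading good on the
# receiver's setup screen.
ratio = db_to_power_ratio(3.5)
assert 0.44 < ratio < 0.45
```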

Need community feedback on new role system for PC-BSD

by Josh Smith via Official PC-BSD Blog »

Hey everyone! We are considering a new way to install a more
customized PC-BSD experience called “Roles”. Roles would be an
installation experience for PC-BSD that allows more flexibility
and a more focused package installation based on what you need or want
for your role. If you are a web developer, maybe you need an IDE or
packages specifically focused on that. If you want the best
desktop workstation experience, maybe you would get an installation
with LibreOffice and some other productivity apps.

We hope to also be able to bring these different roles to you in the
form of pre-made VirtualBox / VMware images that are ready to be
rolled out. This would hopefully save you a little bit of time, as
they’d be significantly smaller by not including a bunch of packages
unnecessary for your role. You would also be able to select during a
normal PC-BSD DVD / USB installation whether or not you want to use a
pre-defined role to set up your system.

We need your help and input to define what roles are important to you
as users and what packages you would suggest that they include (i.e.
if you are installing a
{developer/web-designer/network-admin/consumer} workstation, what
would be the custom set of packages you need?). You can contribute to
the discussion by responding on the forums, blog, or mailing lists.

Forum link:

NJP accepted: Thermally induced subgap features in the cotunneling spectroscopy of a carbon nanotube

by Andreas via the dilfridge blog »

Today's good news is that our manuscript "Thermally induced subgap features in the cotunneling spectroscopy of a carbon nanotube" has been accepted for publication by New Journal of Physics.
In a way, this work builds directly on our previous publication on thermally induced quasiparticles in niobium-carbon nanotube hybrid systems. As a contribution mainly from our theory colleagues, the modelling of transport processes is now enhanced and extended to cotunneling processes within Coulomb blockade. A generalized master equation based on the reduced density matrix approach in the charge-conserved regime is derived, applicable to any strength of the intradot interaction and to finite values of the superconducting gap.
We show both theoretically and experimentally that distinct thermal "replica lines", caused by the finite quasiparticle occupation of the superconductor, also occur in cotunneling spectroscopy at higher temperatures T~1K: the now-possible transport processes lead to additional conductance both at zero bias and at finite voltage corresponding to an excitation energy; experiment and theory match very well.

"Thermally induced subgap features in the cotunneling spectroscopy of a carbon nanotube"
S. Ratz, A. Donarini, D. Steininger, T. Geiger, A. Kumar, A. K. Hüttel, Ch. Strunk, and M. Grifoni
accepted for publication by New Journal of Physics, arXiv:1408.5000 (PDF)

PyPy is back, and for real this time!

by Michał Górny via Michał Górny »

As you may recall, I was looking for a dedicated PyPy maintainer for quite some time. Sadly, all the people who helped (and who I’d like to thank a lot) ended up lacking time soon enough. So finally I’ve decided to look into the hacks reducing build-time memory use and take care of the necessary ebuild and packaging work myself.

So first of all, you may notice that the new PyPy (source-code) ebuilds have a new USE flag called low-memory. When this flag is enabled, the translation process is done using PyPy with some memory-reducing adjustments suggested by upstream. The net result is that it is finally possible to build PyPy with 3.5G RAM (on amd64) and 1G of swap (the latter being used once the compiler is spawned and the memory used during translation is no longer needed), at the cost of a slightly increased build time.

As noted above, the low-memory option requires using PyPy to perform the translation. Since I had to enforce that anyway, I went a bit further and made the ebuild default to using PyPy whenever available. In fact, even for a first PyPy build you are recommended to install dev-python/pypy-bin first and let the ebuild use it to bootstrap your own PyPy.

Next, I have cleaned up the ebuilds a bit and enforced more consistency. Changing maintainers and binary package builders had resulted in the ebuilds becoming a bit inconsistent. Now you can finally expect pypy-bin to install exactly the same set of files as a source-built pypy.

I have also cleaned up the remaining libpypy-c symlinks. The library is not packaged upstream currently, and therefore has no proper public name. Using is just wrong, and packages can’t reliably refer to that. I’d rather wait with installing it till there’s some precedence in renaming. The shared library is still built but it’s kept inside the PyPy home directory.

All those changes were followed by a proper version bump to 2.4.0. While you still may have issues upgrading PyPy, Zac already committed a patch to Portage and the next release should be able to handle PyPy upgrades seamlessly. I have also built all the supported binary package variants, so you can choose those if you don’t want to spend time building PyPy.

Finally, I have added the ebuilds for PyPy 3. They are a little bit more complex than regular PyPy, especially because the build process and some of the internal modules still require Python 2. Sadly, PyPy 3 is based on Python 3.2 with small backports, so I don’t expect package compatibility much greater than CPython 3.2 had.

If you want to try building some packages with PyPy 3, you can use the convenience PYTHON_COMPAT_OVERRIDE hack:

PYTHON_COMPAT_OVERRIDE='pypy3' emerge -1v mypackage
Please note that it is only a hack, and as such it doesn’t set proper USE flags (PYTHON_TARGETS are simply ignored) or enforce dependencies.

If someone wants to help PyPy on Gentoo a bit, there are still unsolved issues needing a lot of specialist work. More specifically:

  1. #465546; PyPy needs to be modified to support a /usr prefix properly (right now, it requires the prefix to be /usr/lib*/pypy, which breaks distutils packages that assume otherwise).
  2. #525940; non-SSE2 JIT does not build.
  3. #429372; we lack proper sandbox install support.

iTunes Bug: Apps not syncing to iPhone or iPad

by Sebastian Marsching via Open-Source, Physik & Politik »

While setting up my new iPad yesterday, I experienced a strange problem. iTunes (on Windows) would repeatedly crash with a problem in msvcrt10.dll when trying to copy the apps to the iPad.

In the Apple support forums, I found the explanation and a workaround for this problem: it seems that iTunes 11.4 introduced a bug (still present in iTunes 12) that causes a crash when apps are stored on a network share and referenced using a UNC path. In my case, the Music folder (which is the default location for the iTunes library) is redirected to a UNC path pointing to a DFS share. Interestingly, this bug only affects apps, not music or videos.

In order to make the apps sync again, the path that iTunes uses for referencing the files needs to be changed to a regular path with a drive letter. This can be achieved either by copying the apps to a local drive or by mapping the network share to a drive letter. Either way, all apps need to be deleted from the iTunes library (but not deleted on disk) and re-added using the regular path. Obviously, iTunes has to be configured not to automatically copy files to its default library location. After this change, the synchronization should work. Finally, the apps can be deleted again and re-added using the UNC path - once the apps are on the device (with the newest version) iTunes will not try to copy them again, thus avoiding the bug.
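The workaround hinges on the difference between UNC paths and drive-lettered paths. A small Python sketch (illustrative only, using the standard ntpath module; the paths are invented) shows the distinction that iTunes apparently mishandles:

```python
import ntpath

def is_unc(path):
    r"""True for \\server\share style paths -- the kind that trips the bug."""
    drive, _ = ntpath.splitdrive(path)
    return drive.startswith("\\\\")

# Mapping the share to a drive letter turns the first form into the
# second, which iTunes handles correctly.
assert is_unc(r"\\fileserver\music\iTunes")
assert not is_unc(r"M:\music\iTunes")
```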

However, I find it annoying that this bug has been known since mid-September and still has not been fixed by Apple.

gentooJoin 2004/04/11

by Patrick via Patrick's playground »

How time flies!
gentooJoin: 2004/04/11

Now I feel ooold

Getting to know your portmgr-lurker: ehaupt@

by culot via The Ports Management Team »

Let us welcome Emanuel, our second lurker who will learn a bit more about portmgr duties for the next four months and who started by answering our usual questionnaire.



Emanuel Haupt

Committer name


Inspiration for your IRC nick

Same as my default UID so that people can find me.

TLD of origin



System Engineer

When did you join portmgr@

Beginning of November 2014, as a lurker.

Inspiration for using FreeBSD

It’s been my primary server/desktop OS for years. I always liked the
documentation and found things generally easier to achieve than with
Linux. I was also fascinated by ports. At the time I was manually
downloading Solaris packages from “sun freeware” when someone showed me
ports. I think it is no surprise that I switched to FreeBSD. I always
found the community to be very friendly and helpful. Finally, with pkgng
I feel the same sense of excitement all over again.

Who was your first contact in FreeBSD

Pav Lucistnik (pav)

Who was your mentor(s)

Roman Bogorodskiy (novel)

What was your most embarrassing moment in FreeBSD

Can’t think of any particular one. In general breaking things tends to
be embarrassing.

Boxers / Briefs / other


vi(m) /  emacs / other

Mostly nvi but more and more vim.

What keeps you motivated in FreeBSD

pkgng, poudriere, the friendly and helpful community, ZFS, geli,
stability of the OS to name just a few.

What book do you have on your bedside table

Arnaldur Indriðason – The Draining Lake

coffee / tea / other


Do you have a guilty pleasure

Reddit and coin mining.

How would you describe yourself

Sysadmin, traveller, adventurer, motorcycler, dog person.

sendmail / postfix / other


Do you have a hobby outside of FreeBSD

I am a passionate motorcycler. I love riding my motorcycle in the more
mountainous regions of Europe. After a long day at work you often see
me on my motorcycle riding towards the sunset. I also have a
fascination for Nordic culture and literature. I’ve been taking Swedish
lessons since 2011.

Claim to Fame

Driving from Oberstaufen, Germany to Amman/Jordan using no highways in
3 weeks in a 20 year old Audi A4. Maintaining 195 ports and keeping
them all up to date and working.

What did you have for breakfast today

Swiss-Muesli with Coffee

What sports team do you support


What else do you do in the world of FreeBSD

Porting and maintaining ports that I’m interested in.

Any parting words you want to share

I’m just glad to have the opportunity to work with so many highly
skilled people on the FreeBSD project.

Home Depot: Hackers Stole 53M Email Addresses

by BrianKrebs via Krebs on Security »

As if the credit card breach at Home Depot didn’t already look enough like the Target breach: Home Depot said yesterday that the hackers who stole 56 million customer credit and debit card accounts also made off with 53 million customer email addresses.

In an update (PDF) released to its site on Thursday, Home Depot warned customers about the potential for thieves to use the email addresses in phishing attacks (think a Home Depot “survey” that offers a gift card for the first 10,000 people who open the booby-trapped attachment, for example). Home Depot stressed that the files containing the stolen email addresses did not contain passwords, payment card information or other sensitive personal information.

Home Depot said the crooks initially broke in using credentials stolen from a third-party vendor. The company said thieves used the vendor’s user name and password to enter the perimeter of Home Depot’s network, but that these stolen credentials alone did not provide direct access to the company’s point-of-sale devices. For that, they had to turn to a vulnerability in Microsoft Windows that was patched only after the breach occurred, according to a story in Thursday’s Wall Street Journal.

Recall that the Target breach also started with a hacked vendor — a heating and air conditioning company in Pennsylvania that was relieved of remote-access credentials after someone inside the company opened a virus-laden email attachment. Target also came out in the days after the breach became public and revealed that the attackers had stolen more than 70 million customer email addresses.

Home Depot also confirmed that thieves targeted its self-checkout systems, a pattern first reported on this blog on Sept. 18. The Wall Street Journal reported that the intruders targeted the 7,500 self-checkout lanes at Home Depot because those terminals were clearly referenced by the company’s internal computer system as payment terminals, whereas another 70,000 regular registers were identified simply by a number.

News of the Home Depot breach broke on this blog on Sept. 2, after multiple banks confirmed that tens of thousands of their cards had just shown up for sale on the underground cybercrime shop rescator[dot]cc. That same carding shop was also the tip-off for the breach at Target, which came only after Rescator and his band of thieves pushed millions of cards stolen from Target shoppers onto the black market.

Feds Arrest Alleged ‘Silk Road 2′ Admin, Seize Servers

by BrianKrebs via Krebs on Security »

Federal prosecutors in New York today announced the arrest and charging of a San Francisco man they say ran the online drug bazaar and black market known as Silk Road 2.0. In conjunction with the arrest, U.S. and European authorities have jointly seized control over the servers that hosted the Silk Road 2.0 marketplace.

The home page of the Silk Road 2.0 market has been replaced with this message indicating the community’s Web servers were seized by authorities.
On Wednesday, agents with the FBI and the Department of Homeland Security arrested 26-year-old Blake Benthall, a.k.a. “Defcon,” in San Francisco, charging him with drug trafficking, conspiracy to commit computer hacking, and money laundering, among other alleged crimes.

Benthall’s LinkedIn profile says he is a native of Houston, Texas and was a programmer and “construction worker” at Codespike, a company he apparently founded using another company, Benthall Group, Inc. Benthall’s LinkedIn and Facebook profiles both state that he was a software engineer at Space Exploration Technologies Corp. (SpaceX), although this could not be immediately confirmed. Benthall describes himself on Twitter as a “rocket scientist” and a “bitcoin dreamer.”

Blake Benthall’s public profile page at
Benthall’s arrest comes approximately a year after the launch of Silk Road 2.0, which came online less than a month after federal agents shut down the original Silk Road community and arrested its alleged proprietor — Ross William Ulbricht, a/k/a “Dread Pirate Roberts.” Ulbricht is currently fighting similar charges, and made a final pre-trial appearance in a New York court earlier this week.

According to federal prosecutors, since about December 2013, Benthall has secretly owned and operated Silk Road 2.0, which the government describes as “one of the most extensive, sophisticated, and widely used criminal marketplaces on the Internet today.” Like its predecessor, Silk Road 2.0 operated on the “Tor” network, a special network of computers on the Internet, distributed around the world, designed to conceal the true IP addresses of the computers on the network and thereby the identities of the network’s users.

“Since its launch in November 2013, Silk Road 2.0 has been used by thousands of drug dealers and other unlawful vendors to distribute hundreds of kilograms of illegal drugs and other illicit goods and services to buyers throughout the world, as well as to launder millions of dollars generated by these unlawful transactions,” reads a statement released today by Preet Bharara, the United States Attorney for the Southern District of New York. “As of September 2014, Silk Road 2.0 was generating sales of at least approximately $8 million per month and had approximately 150,000 active users.”

Benthall’s profile on Github.
The complaint against Benthall claims that by October 17, 2014, Silk Road 2.0 had over 13,000 listings for controlled substances, including, among others, 1,783 listings for “Psychedelics,” 1,697 listings for “Ecstasy,” 1,707 listings for “Cannabis,” and 379 listings for “Opioids.” Apart from the drugs, Silk Road 2.0 also openly advertised fraudulent identification documents and computer-hacking tools and services. The government alleges that in October 2014, the Silk Road 2.0 was generating at least approximately $8 million in monthly sales and at least $400,000 in monthly commissions.

The complaint describes how federal agents infiltrated Silk Road 2.0 from the very start, after an undercover agent working for Homeland Security investigators managed to infiltrate the support staff involved in the administration of the Silk Road 2.0 website.

“On or about October 7, 2013, the HSI-UC [the Homeland Security Investigations undercover agent] was invited to join a newly created discussion forum on the Tor network, concerning the potential creation of a replacement for the Silk Road 1.0 website,” the complaint recounts. “The next day, on or about October 8, 2013, the persons operating the forum gave the HSI‐UC moderator privileges, enabling the HSI‐UC to access areas of the forum available only to forum staff. The forum would later become the discussion forum associated with the Silk Road 2.0 website.”

The complaint also explains how the feds located and copied data from the Silk Road 2.0 servers. “In May 2014, the FBI identified a server located in a foreign country that was believed to be hosting the Silk Road 2.0 website at the time. On or about May 30, 2014, law enforcement personnel from that country imaged the Silk Road 2.0 Server and conducted a forensic analysis of it. Based on posts made to the SR2 Forum, complaining of service outages at the time the imaging was conducted, I know that once the Silk Road 2.0 server was taken offline for imaging, the Silk Road 2.0 website went offline as well, thus confirming that the server was used to host the Silk Road 2.0 website.”

The government’s documents detail how Benthall allegedly hatched a selfless plan to help the Silk Road 2.0 community recover from an incident in February 2014, wherein thieves stole millions of dollars worth of Bitcoins from community users.

“On or about September 11, 2014, Defcon had an online conversation with the HSI-UC, in which he discussed, in sum and substance, his intention to reopen the Silk Road 2.0 marketplace, and his plan to recoup the deficit of Bitcoins that had been stolen from Silk Road 2.0. Specifically, Defcon confirmed that the site needed to recoup approximately 2,900 Bitcoins to cover the loss, and stated that he intended to donate approximately 1,000 of his own Bitcoins to return liquidity to Silk Road 2.0 (“I’m planning to throw my 1000 BTC to kickstart the thing.”).”

“Defcon further acknowledged that the site had approximately 150,000 monthly active users (“We have 150,000 monthly active users. That’s why we have to save this thing.”). The HSI‐UC asked how long it would take to recover from the theft, and Defcon replied that it would take approximately three months’ worth of commission payments, if sales on Silk Road 2.0 continued at a steady rate (“Three months if sales continue at current pace and we don’t bottom out”). Thus, Defcon appears to have expected Silk Road 2.0 to generate approximately $6 million in monthly sales over the next three months, which would have resulted in commissions over that three‐month period totaling approximately $900,000 ‐ equal to approximately 1,900 Bitcoins at the then prevailing exchange rate.“
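The figures quoted from the complaint hang together arithmetically; the 5% commission rate and the roughly $474/BTC exchange rate below are inferred from the quoted numbers, not stated explicitly in the complaint:

```python
# Back-of-the-envelope check of the complaint's figures.
monthly_sales = 6_000_000   # expected monthly sales in USD
months = 3
commission_rate = 0.05      # inferred: $900,000 / ($6M * 3 months) = 5%

commissions = monthly_sales * months * commission_rate
print(commissions)          # 900000.0 -- matches the ~$900,000 in the complaint

btc = 1_900                 # approximate BTC equivalent per the complaint
usd_per_btc = commissions / btc
print(round(usd_per_btc))   # 474 -- the implied "then prevailing exchange rate"
```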

Benthall’s biggest mistake may have been using his own personal email to register the servers used for the Silk Road 2.0 marketplace. In the complaint against Benthall, an undercover agent who worked the case said that “based on a review of records provided by the service provider for the Silk Road 2.0 Server, I have discovered that the server was controlled and maintained during the relevant time by an individual using the email account”

“To me, it appears that both the human element, an undercover agent, plus technical attacks in discovering the hidden service, both played a key part in this arrest,” said Nicholas Weaver, a researcher at the International Computer Science Institute (ICSI) and at the University of California, Berkeley.

Federal agents also say they tracked Benthall administering Silk Road 2.0 from his own computer, and using Bitcoin exchanges to make large cash withdrawals. In one instance, he allegedly cashed out $270,000, and used $70,000 for a down payment on a Tesla Model S, a luxury electric car worth approximately $127,000.

Benthall faces a raft of serious charges that could send him to federal prison for life. He is facing one count of conspiring to commit narcotics trafficking, which carries a maximum sentence of life in prison and a mandatory minimum sentence of 10 years in prison; one count of conspiring to commit computer hacking, which carries a maximum sentence of five years in prison; one count of conspiring to traffic in fraudulent identification documents, which carries a maximum sentence of 15 years in prison; and one count of money laundering conspiracy, which carries a maximum sentence of 20 years in prison.

A copy of the complaint against Benthall is available here.

Update, Nov 7, 9:01 a.m. ET: The National Crime Agency in the United Kingdom is reporting that the demise of Silk Road 2.0 was part of a much larger operation targeting more than 400 “dark web” sites. From their press release:

“The six people arrested on suspicion of being concerned in the supply of controlled drugs were a 20-year-old man from Liverpool city centre, a 19-year-old man from New Waltham, Lincolnshire; a 30-year-old man from Cleethorpes; a 29-year-old man from Aberdovey, Wales; a 58-year-old man from Aberdovey, Wales; and a 58-year-old woman from Aberdovey, Wales. All six were interviewed and have been bailed pending further enquiries.” Read more here.

phpMyFAQ 2.8.17 Released!

by Thorsten via phpMyFAQ devBlog »

The phpMyFAQ Team would like to announce the availability of phpMyFAQ 2.8.17, the “Beware of bugs in the code” release. This release fixes a typo in the update script.

Still Spamming After All These Years

by BrianKrebs via Krebs on Security »

A long trail of spam, dodgy domains and hijacked Internet addresses leads back to a 37-year-old junk email purveyor in San Diego who was the first alleged spammer to have been criminally prosecuted 13 years ago for blasting unsolicited commercial email.

Last month, security experts at Cisco blogged about spam samples caught by the company’s SpamCop service, which maintains a blacklist of known spam sources. When companies or Internet service providers learn that their address ranges are listed on spam blacklists, they generally get in touch with the blacklister to determine and remediate the cause for the listing (because usually at that point legitimate customers of the blacklisted company or ISP are having trouble sending email).
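Blacklists like SpamCop are conventionally queried over DNS: the IPv4 octets are reversed and prepended to the list's zone, and an address is listed if the resulting name resolves. A minimal sketch (the zone name is SpamCop's commonly documented query zone; a production check would also distinguish the specific return codes a list publishes):

```python
import socket

def dnsbl_query_name(ip: str, zone: str = "bl.spamcop.net") -> str:
    """Build the DNSBL query name: reverse the IPv4 octets, append the zone."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip: str, zone: str = "bl.spamcop.net") -> bool:
    """An address is treated as listed if the query name resolves at all."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:  # NXDOMAIN and friends: not listed
        return False
```

For example, checking 192.0.2.1 issues a lookup for `1.2.0.192.bl.spamcop.net`.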

In this case, a hosting firm in Ireland reached out to Cisco to dispute being listed by SpamCop, insisting that it had no spammers on its networks. Upon investigating further, the hosting company discovered that the spam had indeed come from its Internet addresses, but that the addresses in question weren’t actually being hosted on its network. Rather, the addresses had been hijacked by a spam gang.

Spammers sometimes hijack Internet address ranges that go unused for periods of time. Dormant or “unannounced” address ranges are ripe for abuse partly because of the way the global routing system works: Miscreants can “announce” to the rest of the Internet that their hosting facilities are the authorized location for given Internet addresses. If nothing or nobody objects to the change, the Internet address ranges fall into the hands of the hijacker (for another example of IP address hijacking, also known as “network identity theft,” check out this story I wrote for The Washington Post back in 2008).

So who’s benefitting from the Internet addresses wrested from the Irish hosting company? According to Cisco, the addresses were hijacked by Mega-Spred and Visnet, hosting providers in Bulgaria and Romania, respectively. But what of the spammers using this infrastructure?

One of the domains promoted in the spam that caused this ruckus — unmetegulzoo[dot]com — leads to some interesting clues. It was registered recently by a Mike Prescott in San Diego, to the email address That email was used to register more than 1,100 similarly spammy domains that were recently seen in junk email campaigns (for the complete list, see this CSV file compiled by

Enter Ron Guilmette, an avid anti-spam researcher who tracks spammer activity not by following clues in the junk email itself but by looking for patterns in the way spammers use the domains they’re advertising in their spam campaigns. Guilmette stumbled on the domains registered to the Mike Prescott address while digging through the registration records on more than 14,000 spam-advertised domains that were all using the same method (Guilmette asked to keep that telltale pattern out of this story so as not to tip off the spammers, but I have seen his research and it is solid).

Of the 5,000 or so domains in that bunch that have accessible WHOIS registration records, hundreds of them were registered to variations on the Mike Prescott email address and to locations in San Diego. Interestingly, one email address found in the registration records for hundreds of domains advertised in this spam campaign was registered to a “” in San Diego, which also happens to be the email address tied to the Facebook account for one Michael Persaud in San Diego.

Persaud is an unabashed bulk emailer who’s been sued by AOL, the San Diego District Attorney’s office and by anti-spam activists multiple times over the last 15 years. Reached via email, Persaud doesn’t deny registering the domains in question, and admits to sending unsolicited bulk email for a variety of “clients.” But Persaud claims that all of his spam campaigns adhere to the CAN-SPAM Act, the main anti-spam law in the United States — which prohibits the sending of spam that spoofs that sender’s address and which does not give recipients an easy way to opt out of receiving future such emails from that sender.

As for why his spam was observed coming from multiple hijacked Internet address ranges, Persaud said he had no idea.

“I can tell you that my company deals with many different ISPs both in the US and overseas and I have seen a few instances where smaller ones will sell space that ends up being hijacked,” Persaud wrote in an email exchange with KrebsOnSecurity. “When purchasing IP space you assume it’s the ISP’s to sell and don’t really think that they are doing anything illegal to obtain it. If we find out IP space has been hijacked we will refuse to use it and demand a refund. As for this email address being listed with domain registrations, it is done so with accordance with the CAN-SPAM guidelines so that recipients may contact us to opt-out of any advertisements they receive.”

Guilmette says he’s not buying Persaud’s explanation of events.

“He’s trying to make it sound as if IP address hijacking is a very routine sort of thing, but it is still really quite rare,” Guilmette said.

The anti-spam crusader says the mere fact that Persaud has admitted that he deals with many different ISPs both in the US and overseas is itself telling, and typical of so-called “snowshoe” spammers — junk email purveyors who try to avoid spam filters and blacklists by spreading their spam-sending systems across a broad swath of domains and Internet addresses.

“The vast majority of all legitimate small businesses ordinarily just find one ISP that they are comfortable with — one that provides them with decent service at a reasonable price — and then they just use that” to send email, Guilmette said. “Snowshoe spammers who need lots of widely dispersed IP space do often obtain that space from as many different ISPs, in the US and elsewhere, as they can.”

Persaud declined to say which companies or individuals had hired him to send email, but cached copies of some of the domains flagged by Cisco show the types of businesses you might expect to see advertised in junk email: payday loans, debt consolidation services, and various nutraceutical products.

In 1998, Persaud was sued by AOL, which charged that he committed fraud by using various names to send millions of get-rich-quick spam messages to America Online customers. In 2001, the San Diego District Attorney’s office filed criminal charges against Persaud, alleging that he and an accomplice crashed a company’s email server after routing their spam through the company’s servers. In 2000, Persaud admitted to one felony count (PDF) of stealing from the U.S. government, after being prosecuted for fraud related to some asbestos removal work that he did for the U.S. Navy.

Update, 11:35 p.m. ET: Persaud says that the Michael Persaud who pleaded guilty to defrauding the government in 2000 was not him but a different Michael Persaud in San Diego. Persaud has provided a document from the U.S. District Court for the Southern District of California that appears to support this claim.

Original post:

Many network operators remain unaware of the threat of network address hijacking, but as Cisco notes, network administrators aren’t completely helpless in the fight against network-hijacking spammers: Resource Public Key Infrastructure (RPKI) can be leveraged to prevent this type of activity. Another approach known as DNSSEC can also help.

Just a simple webapp, they said ...

by Patrick via Patrick's playground »

The complexity of modern software is quite insanely insane. I just realized ...
Writing a small webapp with flask, I've had to deal with the following technologies/languages:
  • System package manager, in this case portage
  • SQL DBs, both SQLite (local testing) and PostgreSQL (production)
  • python/flask, the core of this webapp
  • jinja2, the template language usually used with it
  • HTML, because the templates don't just appear magically
  • CSS (mostly hidden in Bootstrap) to make it look sane
  • JavaScript, because dynamic shizzle
  • (flask-)sqlalchemy, ORMs are easier than writing SQL by hand when you're in a hurry
  • alembic, for DB migrations and updates
  • git, because version control
So that's about a dozen things that each would take years to master. And for a 'small' project there's not much time to learn them deeply, so we staple together what we can, learning as we go along ...
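For flavour, here is roughly what the Flask core of such a stack looks like. This is a generic sketch, not the author's app: where he uses flask-sqlalchemy and alembic, this example talks to an in-memory SQLite database via the stdlib sqlite3 module to stay self-contained, and the route and table names are made up:

```python
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

def get_items():
    # SQLite for local testing; production would point at PostgreSQL instead,
    # which is exactly the kind of dual-backend juggling described above.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE item (name TEXT)")
    con.execute("INSERT INTO item VALUES ('widget')")
    return [row[0] for row in con.execute("SELECT name FROM item")]

@app.route("/items")
def items():
    # In the real app a jinja2 template would render HTML (plus Bootstrap
    # CSS and JavaScript); JSON keeps the sketch template-free.
    return jsonify(get_items())
```

Each of these few lines quietly leans on a different layer of the stack listed above, which is the point being made.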

And there's an insane amount of context switching going on, you go from mangling CSS to rewriting SQL in the span of a few minutes. It's an impressive polyglot marathon, but how is this supposed to generate sustainable and high-quality results?

And then I go home in the evening and play around with OpenCL and such things. Learning never ends - but how are we going to build things that last for more than 6 months? Too many moving parts, too much change, and never enough time to really understand what we're doing :)

Getting to know your portmgr-lurker: ak@

by culot via The Ports Management Team »

From now on and for the next four months the FreeBSD ports team is pleased to welcome two new portmgr-lurkers: ak@ and ehaupt@. Alex was the first to answer our questionnaire so let’s get to know him a bit better.


Alex (Олександр)

Committer name

Inspiration for your IRC nick
Committer name

TLD of origin

Current TLD (if different from above)

Independent contractor

When did you join portmgr@
Just a lurker atm, 2014-11-01


Inspiration for using FreeBSD
I tried Linux first.

Who was your first contact in FreeBSD

Who was your mentor(s)
eadler, itetcu. Thanks, guys.

What was your most embarrassing moment in FreeBSD
I broke the INDEX, twice.

vi(m) /  emacs / other

What keeps you motivated in FreeBSD
It sucks less.

Favourite musician/band
Ritchie Blackmore/Queen

What book do you have on your bedside table
A Night in the Lonesome October (Halloween was a few days ago).

coffee / tea / other
Tea, green.

Do you have a guilty pleasure
Sometimes I watch The Muppets instead of doing something useful.

How would you describe yourself
Too lazy.

sendmail / postfix / other

Do you have a hobby outside of FreeBSD
Tons. The bikes, skydiving, amateur Martial Arts, poetry.

What is your favourite TV show
The Muppets

Claim to Fame
Nothing so far.

What did you have for breakfast today
Fried rice with mushrooms

What sports team do you support
I don’t support teams, I play when I can (regrettably, not that often lately).

What else do you do in the world of FreeBSD
Fiddling with portlint, xorg, zfs filestorages, various odd jobs.

Any parting words you want to share
Have more fun.

What is your .sig at the moment

phpMyFAQ 2.8.16 Released!

by Thorsten via phpMyFAQ devBlog »

The phpMyFAQ Team is pleased to announce phpMyFAQ 2.8.16, the “Jack Bruce” release. It fixes a bug when restoring from backups and we fixed a lot of minor issues.

Notes from the PulseAudio Mini Summit 2014

by Arun via Arun Raghavan »

The third week of October was quite action-packed, with a whole bunch of conferences happening in Düsseldorf. The Linux audio developer community as well as the PulseAudio developers each had a whole day of discussions related to a wide range of topics. I’ll be summarising the events of the PulseAudio mini summit day here. The discussion was split into two parts, the first half of the day with just the current core developers and the latter half with members of the community participating as well.

I’d like to thank the Linux Foundation for sparing us a room to carry out these discussions — it’s fantastic that we are able to colocate such meetings with a bunch of other conferences, making it much easier than it would otherwise be for all of us to converge to a single place, hash out ideas, and generally have a good time in real life as well!

Happy faces — incontrovertible proof that everyone loves PulseAudio!
With a whole day of discussions, this is clearly going to be a long post, so you might want to grab a coffee now. :)

Release plan

We have a few blockers for 6.0, and some pending patches to merge (mainly HSP support). Once this is done, we can proceed to our standard freeze → release candidate → stable process.

Build simplification for BlueZ HFP/HSP backends

For simplifying packaging, it would be nice to be able to build all the available BlueZ module backends in one shot. There wasn’t much opposition to this idea, and David (Henningsson) said he might look at this. (as I update this before posting, he already has)

srbchannel plans

We briefly discussed plans around the recently introduced shared ringbuffer channel code for communication between PulseAudio clients and the server. We talked about the performance benefits, and future plans such as direct communication between the client and server-side I/O threads.

Routing framework patches

Tanu (Kaskinen) has a long-standing set of patches to add a generic routing framework to PulseAudio, developed notably by Jaska Uimonen, Janos Kovacs, and other members of the Tizen IVI team. This work adds a set of new concepts that we’ve not been entirely comfortable merging into the core. To unblock these patches, it was agreed that doing this work in a module and using a protocol extension API would be more beneficial. (Tanu later did a demo of the CLI extensions that have been made for the new routing concepts)


As a consequence of the discussion around the routing framework, David mentioned that he’d like to take forward Colin’s priority list work in the meantime. Based on our discussions, it looked like it would be possible to extend module-device-manager to make it port aware and get the kind of functionality we want (the ability to have a priority-ordered list of devices). David was to look into this.

Module writing infrastructure

Relatedly, we discussed the need to export the PA internal headers to allow externally built modules. We agreed that this would be okay to have if it was made abundantly clear that this API would have absolutely no stability guarantees, and is mostly meant to simplify packaging for specialised distributions.

Which led us to the other bit of infrastructure required to write modules more easily — making our protocol extension mechanism more generic. Currently, we have a static list of protocol extensions in our core. Changing this requires exposing our pa_tagstruct structure as public API, which we haven’t done. If we don’t want to do that, then we would expose a generic “throw this blob across the protocol” mechanism and leave it to the module/library to take care of marshalling/unmarshalling.
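The "throw this blob across the protocol" idea amounts to the core carrying opaque, length-prefixed bytes tagged with an extension name, with all marshalling left to the module and its client library. A language-agnostic sketch of that framing, in Python for brevity (the function names are illustrative, not PulseAudio API):

```python
import struct

def pack_extension(name: str, payload: bytes) -> bytes:
    """Frame layout: name length (u16), name, payload length (u32), payload.
    The core never inspects the payload; only the module and its client
    library agree on what the bytes mean."""
    nb = name.encode()
    return struct.pack("!H", len(nb)) + nb + struct.pack("!I", len(payload)) + payload

def unpack_extension(frame: bytes) -> tuple[str, bytes]:
    """Inverse of pack_extension: recover the extension name and raw blob."""
    (nlen,) = struct.unpack_from("!H", frame, 0)
    name = frame[2:2 + nlen].decode()
    off = 2 + nlen
    (plen,) = struct.unpack_from("!I", frame, off)
    return name, frame[off + 4:off + 4 + plen]
```

The design trade-off discussed above is visible here: the core stays ignorant of payload structure, at the cost of every module reinventing its own (un)marshalling.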

Resampler quality evaluation

Alexander shared a number of his findings about resampler quality on PulseAudio, vs. those found on Windows and Mac OS. Some questions were asked about other parameters, such as relative CPU consumption, etc. There was also some discussion on how to try to carry this work to a conclusion, but no clear answer emerged.

It was also agreed on the basis of this work that support for libsamplerate and ffmpeg could be phased out after deprecation.

Addition of a “hi-fi” mode

The discussion came around to the possibility of having a mode where (if the hardware supports it) PulseAudio just plays out samples without resampling, conversion, etc. This has been brought up in the past for “audiophile” use cases where the card supports 88.2/96 kHz and higher sample rates.

No objections were raised to having such a mode — I’d like to take this up at some point.
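The decision such a mode has to make is simple in principle: if the hardware supports the stream's native rate, skip resampling entirely. A toy sketch of that policy (rates and the fallback default are illustrative, not PulseAudio code):

```python
def pick_output_rate(stream_rate: int, supported_rates: set[int],
                     default: int = 48000) -> tuple[int, bool]:
    """Return (device rate, needs_resampling). In a 'hi-fi' passthrough
    mode, a stream at e.g. 88200 or 96000 Hz plays natively when the
    card advertises that rate."""
    if stream_rate in supported_rates:
        return stream_rate, False   # bit-rate-exact passthrough
    return default, True            # fall back to resampling

# e.g. a card advertising 44100/48000/96000 Hz:
rate, resample = pick_output_rate(96000, {44100, 48000, 96000})
```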

LFE channel module

Alexander has some code for filtering low frequencies for the LFE channel, currently as a virtual sink, that could eventually be integrated into the core.
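The usual building block for deriving an LFE feed is a low-pass filter. A one-pole IIR sketch in Python, using the textbook RC-derived coefficient (this is an illustration of the technique, not Alexander's actual code, which works as a virtual sink inside PulseAudio):

```python
import math

def lowpass(samples, cutoff_hz=120.0, rate=48000):
    """One-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1]),
    with a derived from the RC time constant for the given cutoff."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / rate
    a = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)   # each output moves a fraction a toward the input
        out.append(y)
    return out
```

Low frequencies pass through nearly unchanged while content far above the cutoff is strongly attenuated, which is exactly what an LFE channel wants.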


rtkit

David raised a question about the current status of rtkit and whether it needs to exist, and if so, where. Lennart brought up the fact that rtkit currently does not work on systemd+cgroups based setups (I don’t seem to have why in my notes, and I don’t recall off the top of my head).

The conclusion of the discussion was that some alternate policy method for deciding RT privileges, possibly within systemd, would be needed, but for now rtkit should be used (and fixed!)


kdbus/memfd

Discussions came up about the possibility of using kdbus and/or memfd for the PulseAudio transport. This is interesting to me, but there doesn’t seem to be an immediately clear benefit over our SHM mechanism in terms of performance; some work needs to be done to evaluate how this could be used and what the benefit would be.

ALSA controls spanning multiple outputs

David has now submitted patches for controls that affect multiple outputs (such as “Headphone+LO”). These are currently being discussed.

Audio groups

Tanu would like to add code to support collecting audio streams into “audio groups” to apply collective policy to them. I am supposed to help review this, and Colin mentioned that module-stream-restore already uses similar concepts.

Stream and device objects

Tanu proposed the addition of new objects to represent streams and devices. There didn’t seem to be consensus on adding these, but there was agreement on a clear need to consolidate common code from the sink-input/source-output and sink/source implementations. The idea was that having a common parent object for each pair might be one way to do this. I volunteered to help with this if someone takes it up.

Filter sinks

Alexander brought up the need for a filter API in PulseAudio, and this is something I really would like to have. I am supposed to sketch out an API (though implementing this is non-trivial and will likely take time).

Dynamic PCM for HDMI

David plans to see if we can use profile availability to help determine when an HDMI device is actually available.

Browser volumes

The usability of flat-volumes for browser use cases (where the volume of streams can be controlled programmatically) was discussed, and my patch to allow optional opt-out by a stream from participating in flat volumes came up. Tanu and I are to continue the discussion already on the mailing list to come up with a solution for this.

Handling bad rewinding code

Alexander raised concerns about the quality of rewinding code in some of our filter modules. The agreement was that we needed better documentation on handling rewinds, including how to explicitly not allow rewinds in a sink. The example virtual sink/source code also needs to be adjusted accordingly.

BlueZ native backend

Wim Taymans’ work on adding back HSP support to PulseAudio came up. Since the meeting, I’ve reviewed and merged this code with the changes we wanted. Speaking to Luiz Augusto von Dentz from the BlueZ side, something we should also be able to add back is support for PulseAudio to act as an HSP headset (using the same approach as for HSP gateway support).

Containers and PA

Takashi Iwai raised a question about what a good way to run PA in a container was. The suggestion was that a tunnel sink would likely be the best approach.

Common ALSA configuration

Based on discussion from the previous day at the Linux Audio mini-summit, I’m supposed to look at the possibility of consolidating the various mixer configuration formats we currently have to deal with (primarily UCM and its implementations, and Android’s XML format).

(thanks to Tanu, David and Peter for reviewing this)

Dancing protocols, POODLEs and other tales from TLS

by Hanno Böck via Hanno's blog »

The latest SSL attack was called POODLE. Image source
The world of SSL/TLS Internet encryption is in trouble again. You may have heard that recently a new vulnerability called POODLE was found in the ancient SSLv3 protocol. Shortly before that, another vulnerability called BERserk was found (which hasn't received the attention it deserved, because it was published on the same day as Shellshock).
I think it is crucial to understand what led to these vulnerabilities. I find POODLE and BERserk so interesting because these two vulnerabilities were both unnecessary and could've been avoided by intelligent design choices. Okay, let's start by investigating what went wrong.

The mess with CBC

POODLE (Padding Oracle On Downgraded Legacy Encryption) is a weakness in the CBC block mode and the padding of the old SSL protocol. If you've followed previous stories about SSL/TLS vulnerabilities this shouldn't be news. There have been a whole number of CBC-related vulnerabilities, most notably the Padding oracle (2003), the BEAST attack (2011) and the Lucky Thirteen attack (2013) (Lucky Thirteen is kind of my favorite, because it was already more or less mentioned in the TLS 1.2 standard). The POODLE attack builds on ideas already used in previous attacks.

CBC is a so-called block mode. For now it should be enough to understand that we use two kinds of ciphers to authenticate and encrypt connections: block ciphers and stream ciphers. Block ciphers need a block mode to operate. There's nothing necessarily wrong with CBC itself; it's the way CBC is used in SSL/TLS that causes problems. There are two weaknesses in it: early versions (before TLS 1.1) use a so-called implicit Initialization Vector (IV), and they use a method called MAC-then-Encrypt (used up until the very latest TLS 1.2, though there's a new extension to fix it), which turned out to be quite fragile when it comes to security. The CBC details would be a topic of their own and I won't go into them now. The long-term goal should be to get rid of all these (old-style) CBC modes; however, that won't be possible for quite some time due to compatibility reasons. As most of these problems have been known since 2003, it's about time.
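The difference between the two compositions can be sketched with a toy example (the XOR "keystream" below is purely illustrative and not secure, and the function names are mine, not from any TLS implementation):

```python
import hashlib
import hmac
import itertools

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy 'keystream': XOR the data with the key repeated -- NOT secure."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

def mac_then_encrypt(enc_key: bytes, mac_key: bytes, msg: bytes) -> bytes:
    # The SSL/TLS order: authenticate the plaintext, then encrypt msg + MAC.
    tag = hmac.new(mac_key, msg, hashlib.sha256).digest()
    return xor_stream(enc_key, msg + tag)

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, msg: bytes) -> bytes:
    # The robust order: encrypt first, then MAC the ciphertext itself.
    ct = xor_stream(enc_key, msg)
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag
```

With encrypt-then-MAC the receiver can reject a tampered record before touching the plaintext and its padding at all, which is exactly the window the padding-oracle family of attacks (POODLE, Lucky Thirteen) exploits in the MAC-then-encrypt construction.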

The evil Protocol Dance

The interesting question with POODLE is: why does a security issue in an ancient protocol like SSLv3 bother us at all? SSL was developed by Netscape in the mid-90s and has two public versions: SSLv2 and SSLv3. In 1999 (15 years ago) the old SSL was deprecated and replaced with TLS 1.0, standardized by the IETF. People still used SSLv3 until very recently, mostly for compatibility reasons. But even that in itself isn't the problem: SSL/TLS has a mechanism to safely choose the best protocol available. In a nutshell, it works like this:

a) A client (e. g. a browser) connects to a server and may say something like "I want to connect with TLS 1.2"
b) The server may answer "No, sorry, I don't understand TLS 1.2, can you please connect with TLS 1.0?"
c) The client says "Ok, let's connect with TLS 1.0"

The point here is: Even if both server and client support the ancient SSLv3, they'd usually not use it. But this is the idealized world of standards. Now welcome to the real world, where things like this happen:

a) A client (e. g. a browser) connects to a server and may say something like "I want to connect with TLS 1.2"
b) The server thinks "Oh, TLS 1.2, never heard of that. What should I do? I better say nothing at all..."
c) The browser thinks "Ok, the server doesn't answer, maybe we should try something else. Hey, server, I want to connect with TLS 1.1"
d) The browser will retry all SSL versions down to SSLv3 until it can connect.

The Protocol Dance is a Dance with the Devil. Image source
So here's our problem: there are broken servers out there that don't answer at all if they see a connection attempt with an unknown protocol. The well-known SSL test by Qualys checks for this behaviour and calls it "protocol intolerance" (though "protocol brokenness" would be more precise). When a connection fails, browsers will try all the older protocols they know until they can connect. This behaviour is now known as the "Protocol Dance", and it causes all kinds of problems.
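The dance can be sketched roughly like this (a simplified illustration using Python's ssl module; real browsers implement this deep inside their own TLS stacks, and modern Python/OpenSSL builds no longer even offer SSLv3):

```python
import socket
import ssl

# Versions tried from newest to oldest; historically the dance continued
# all the way down to SSLv3.
FALLBACK_ORDER = [
    ssl.TLSVersion.TLSv1_2,
    ssl.TLSVersion.TLSv1_1,
    ssl.TLSVersion.TLSv1,
]

def connect_with_dance(host: str, port: int = 443) -> ssl.SSLSocket:
    """Retry the handshake with ever-older protocol versions until one works."""
    for version in FALLBACK_ORDER:
        try:
            ctx = ssl.create_default_context()
            ctx.minimum_version = version
            ctx.maximum_version = version
            sock = socket.create_connection((host, port), timeout=5)
            return ctx.wrap_socket(sock, server_hostname=host)
        except (OSError, ssl.SSLError, ValueError):
            continue  # no answer or handshake failure: silently try older
    raise ConnectionError("all protocol versions failed")
```

Note what this loop cannot distinguish: a server that genuinely speaks only an old version, and an attacker who simply drops the handshakes for the newer ones. Both land the client on the oldest version, which is the downgrade POODLE exploits.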

I first encountered the Protocol Dance back in 2008. Back then I was already using a technology called SNI (Server Name Indication) that allows multiple websites with multiple certificates to be hosted on a single IP address. I regularly received complaints from people who saw the wrong certificates on those SNI webpages. A bug report to Firefox and some analysis revealed the reason: the protocol downgrades don't just happen when servers don't answer to new protocol requests; they can also happen on faulty or weak internet connections. SSLv3 does not support SNI, so when a downgrade to SSLv3 happens, you get the wrong certificate. This was quite frustrating: a compatibility feature that was purely there to support broken hardware caused my completely legitimate setup to fail every now and then.

But the more severe problem is this: The Protocol Dance will allow an attacker to force downgrades to older (less secure) protocols. He just has to stop connection attempts with the more secure protocols. And this is why the POODLE attack was an issue after all: The problem was not backwards compatibility. The problem was attacker-controlled backwards compatibility.

The idea that the Protocol Dance might be a security issue wasn't completely new either. At the Black Hat conference this year Antoine Delignat-Lavaud presented a variant of an attack he calls "Virtual Host Confusion“ where he relied on downgrading connections to force SSLv3 connections.

"Whoever breaks it first“ - principle

The Protocol Dance is an example of what I feel is an unwritten rule of browser development today: browser vendors don't want things to break, even if the breakage is someone else's fault. So they add all kinds of compatibility technologies that are purely there to support broken hardware. The idea is: when someone introduced broken hardware at some point, and it worked because the brokenness wasn't triggered back then, the broken stuff is allowed to stay and everyone else has to deal with it.

To avoid the Protocol Dance, a new feature is now on its way: it's called SCSV, and the idea is that the Protocol Dance is stopped if both the server and the client support this new protocol feature. I'm extremely uncomfortable with that solution, because it just adds another layer of duct tape and increases the complexity of TLS, which is already much too complex.

There's another, very similar recent example: at some point people found out that BIG-IP load balancers by the company F5 had trouble with TLS connection attempts larger than 255 bytes. However, it was later revealed that connection attempts bigger than 512 bytes also succeed. So a padding extension was invented, and it is now widespread behaviour of TLS implementations to avoid connection attempts between 256 and 511 bytes. To make matters completely insane: it was later found out that there is other broken hardware – SMTP servers by Ironport – that breaks when the handshake is larger than 511 bytes.

I have a principle when it comes to fixing things: fix it where it's broken. But the browser world works differently. It works by the "whoever breaks it first defines the new standard of brokenness" principle. This is partly due to unhealthy competition between browsers. Unfortunately, they often don't compete very well on security. What you'll constantly hear is that browsers can't break any webpages, because that would drive people to other browsers.

I'm not sure I entirely buy this kind of reasoning. For a couple of months, support for the ftp protocol in Chrome / Chromium was broken. I'm no fan of plain, unencrypted ftp, and its only legitimate use case – unauthenticated file download – can just as easily be fulfilled with unencrypted http, but there are a number of live ftp servers that implement a legitimate, working protocol. I like Chromium and it's my everyday browser, but for a while the broken ftp support was the most prevalent reason for me to start Firefox. This little episode makes it hard for me to believe that browsers can't break connections to some (broken) ancient SSL servers. (I just noticed that the very latest version of Chromium has fixed ftp support again.)

BERserk, small exponents and PKCS #1 1.5

We have a problem with weak keys. Image source
Okay, now let's talk about the other recent TLS vulnerability: BERserk. Antoine Delignat-Lavaud and researchers at Intel independently found this vulnerability, which affected NSS (and thus Chrome and Firefox), CyaSSL, some unreleased development code of OpenSSL, and maybe others.

BERserk is actually a variant of a quite old vulnerability (you may begin to see a pattern here): the Bleichenbacher attack on RSA signatures, first presented at Crypto 2006. Here things get confusing, because the cryptographer Daniel Bleichenbacher found two independent vulnerabilities in RSA: one in RSA encryption in 1998 and one in RSA signatures in 2006. For convenience, I'll call them BB98 (encryption) and BB06 (signatures). Both of these vulnerabilities expose faulty implementations of the old RSA standard PKCS #1 1.5. And both are what I like to call "zombie vulnerabilities": they keep coming back, no matter how often you try to fix them. In April, the BB98 vulnerability was re-discovered in the code of Java, and it was silently fixed in OpenSSL some time last year.

But BERserk is about the other one: BB06. BERserk exposes the fact that inside the RSA function an algorithm identifier for the hash function in use is embedded, and it's encoded with BER. BER is part of ASN.1. I could tell horror stories about ASN.1, but I'll spare you that for now; maybe that's a topic for another blog entry. It's enough to know that it's a complicated format, and that is what bites us here: with some trickery in the BER encoding, one can smuggle further data into the RSA function – and in certain situations this allows the creation of forged signatures.

One thing should be made clear: both the original BB06 attack and BERserk are flaws in the implementation of PKCS #1 1.5; if you do everything correctly, you're fine. These attacks exploit the relatively simple structure of the old PKCS standard, and they only work when RSA is used with a very small exponent. RSA public keys consist of two large numbers: the modulus N (a product of two large primes) and the exponent e.

In his presentation at Crypto 2006, Daniel Bleichenbacher already proposed what would have prevented this attack: just don't use RSA keys with very small exponents like three. This advice also went into various recommendations (e. g. by NIST), and today almost everyone uses 65537 (the reason for this particular number is that its binary structure makes calculations with it reasonably fast).
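The "binary structure" argument is easy to see directly; a quick sketch (the toy modulus is the classic textbook example, not a real key):

```python
# e = 65537 = 2^16 + 1: a prime with only two set bits. Square-and-multiply
# exponentiation therefore needs just 16 squarings plus one extra
# multiplication, which keeps public-key operations fast.
e = 65537
assert bin(e) == "0b10000000000000001"
assert bin(e).count("1") == 2

# A toy modulus to show that pow(s, e, N) is the operation verification runs;
# N = 61 * 53 is the standard textbook example, far too small for real use.
N = 3233
s = 123
print(pow(s, e, N))  # fast despite the 17-bit exponent
```

For comparison, e = 3 is even cheaper to verify, which is exactly why such keys were once popular, and why they enable the attacks discussed here.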

There's just one problem: a small number of keys that use the exponent e=3 are still around, and six of them are used by root certificates installed in every browser. These root certificates are the trust anchors of TLS (which in itself is a problem, but that's another story). And here's our problem: as long as a single root certificate with e=3 exists, such an attack lets you create as many fake certificates as you want. If we had deprecated e=3 keys, BERserk would have been mostly a non-issue.
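To make the e=3 danger concrete, here is a minimal sketch of the arithmetic behind such forgeries against a verifier that checks only the leading bytes of s^3 and ignores the rest. All sizes and names here are illustrative assumptions, not BERserk's exact mechanics:

```python
def icbrt(n: int) -> int:
    """Integer cube root (floor), via binary search on big ints."""
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

MOD_BITS = 2048      # pretend we face a 2048-bit modulus
PREFIX_BITS = 288    # pretend padding + DigestInfo + hash sit in the top bits
K = MOD_BITS - PREFIX_BITS

prefix = (1 << (PREFIX_BITS - 1)) | 0xC0FFEE   # the bytes the verifier expects
target = prefix << K                           # desired value of s^3

s = icbrt(target)
if s ** 3 < target:
    s += 1   # round the cube root up

# (s^3 - target) < 3*s^2 + 3*s + 1, which is far smaller than 2^K here, so
# cubing the rounded root cannot disturb the top PREFIX_BITS bits: a sloppy
# verifier sees exactly the prefix it expects and accepts the forgery.
assert (s ** 3) >> K == prefix
```

With e = 65537 this trick is hopeless: an attacker would need a 65537th root, and the rounding error would be astronomically larger than any plausible slack in the padding check.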

There is one more aspect of this story: What's this PKCS #1 1.5 thing anyway? It's an old standard for RSA encryption and signatures. I want to quote Adam Langley on the PKCS standards here: "In a modern light, they are all completely terrible. If you wanted something that was plausible enough to be widely implemented but complex enough to ensure that cryptography would forever be hamstrung by implementation bugs, you would be hard pressed to do better."

Now there's a successor to the PKCS #1 1.5 standard: PKCS #1 2.1, which is based on technologies called PSS (Probabilistic Signature Scheme) and OAEP (Optimal Asymmetric Encryption Padding). It's from 2002, and in many respects it's much better. I'm kind of a fan here, because I wrote my thesis about this. There's just one problem: although it was standardized in 2002, people still prefer the much weaker old PKCS #1 1.5. TLS doesn't have any way to use the newer PKCS #1 2.1, and even the current drafts for TLS 1.3 stick to the older – and weaker – variant.

What to do

I would take bets that POODLE wasn't the last TLS/CBC-issue we saw and that BERserk wasn't the last variant of the BB06-attack. Basically, I think there are a number of things TLS implementers could do to prevent further similar attacks:

* The Protocol Dance should die. Don't put another layer of duct tape around it (SCSV), just get rid of it. It will break a small number of already broken devices, but that is a reasonable price for avoiding the next protocol downgrade attack scenario. Backwards compatibility shouldn't compromise security.
* More generally, I think working around broken devices has to stop. Replace the "whoever broke it first" paradigm with a "fix it where it's broken" paradigm. That also means I think the padding extension should be scrapped.
* Weak key choices need to be deprecated at some point. In a long process, browsers removed most certificates with short 1024-bit keys. They're working hard on deprecating signatures with the weak SHA1 algorithm. I think e=3 RSA keys should be next on the deprecation list.
* At some point we should deprecate the weak CBC modes. This is probably the trickiest part, because up until very recently TLS 1.0 was all that most major browsers supported. The only way to avoid them is either using the GCM mode of TLS 1.2 (most browsers just got support for that in recent months) or using a very new extension that's rarely used at all today.
* If we have better technologies, we should start using them. PKCS #1 2.1 is clearly superior to PKCS #1 1.5; at the very least, newly written standards should switch to it.

AWK und variable Feldanzahl

by Jesco Freund via My Universe »

AWK is a handy tool; especially when writing shell scripts, I like to use it for parsing text files because of its flexibility. AWK is a silver bullet when it comes to accessing individual fields of a line:

BEGIN { FS = "," }  
      { print $2 }
Today, however, I stumbled upon an interesting question: what do you do when a line contains an arbitrary number of fields? As an illustration, let's look at the file /etc/group as it is commonly found on Unix-like systems:

Obviously, each line consists of four fields, separated by colons – no problem so far; AWK can take that apart just fine:

BEGIN { FS = ":" }  
      { print $1 }
This AWK script yields an (iterable) list of all group names. Things get unpleasant, however, when you tackle the fourth field: it contains a comma-separated list of the users assigned to the group. The million-dollar question: how do you take that apart so it can be iterated over in a shell script? Picture the following pseudo-code:

groups=$(awk -F ":" '{ print $1 }' /etc/group)  
for group in ${groups}; do  
    users=$(awk '{ do some black magic }')
    for user in ${users}; do
        # do some clever operation which requires both group and user
AWK has no built-in command that prints all the fields of a line sequentially. There are several ways to solve the problem; a fairly flexible one is the makeshift use of the split function in combination with a for loop:

    c = split($0, a, ",");
    for(n = 1; n <= c; ++n)
    print a[n]
Applied to our example program from above, the list of users in a group can now be determined as follows:

groups=$(awk -F ":" '{ print $1 }' /etc/group)  
for group in ${groups}; do  
    users=$(grep "^${group}:" /etc/group | awk -F ":" '{ c=split($4, a, ","); for(n=1;n<=c;++n)print a[n] }')
    for user in ${users}; do
        # do some clever operation which requires both group and user
Since shell variables are not expanded inside regular expressions within an AWK program, the little detour via grep is unfortunately necessary – not particularly elegant, but at least it works …
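Incidentally, the grep detour can be avoided by passing the shell variable into AWK with its -v option and comparing it to $1 as a plain string (no regex expansion involved). A self-contained sketch, using a small sample file in place of the real /etc/group:

```shell
# Sample data so the sketch can run anywhere (a real script would read
# /etc/group directly).
cat > /tmp/group.sample <<'EOF'
wheel:x:10:alice,bob
audio:x:63:alice
EOF

groups=$(awk -F ":" '{ print $1 }' /tmp/group.sample)
for group in ${groups}; do
    # -v hands the shell variable to AWK; '$1 == g' is a string comparison.
    users=$(awk -F ":" -v g="${group}" '$1 == g { c = split($4, a, ","); for (n = 1; n <= c; ++n) print a[n] }' /tmp/group.sample)
    for user in ${users}; do
        echo "${group}: ${user}"   # placeholder for the actual operation
    done
done
```

This also avoids a subtle pitfall of the grep approach: group names containing regex metacharacters would otherwise be interpreted as patterns.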

Thieves Cash Out Rewards, Points Accounts

by BrianKrebs via Krebs on Security »

A number of readers have complained recently about having their Hilton Honors loyalty accounts emptied by cybercrooks. This type of fraud often catches consumers off-guard, but the truth is that the recent spike in fraud against Hilton Honors members is part of a larger trend that’s been worsening for years as more companies offer rewards programs.

Many companies give customers the ability to earn “loyalty” or “award” points and miles that can be used to book travel, buy goods and services online, or be redeemed for cash. Unfortunately, the online accounts used to manage these reward programs tend to be less well secured, by both consumers and the companies that operate them, and increasingly cyber thieves are swooping in to take advantage.

Brendan Brothers, a frequent traveler from St. John’s in Newfoundland, Canada, discovered a few days ago that his Hilton Honors account had been relieved of more than a quarter-million points, rewards that he’d accumulated using a credit card associated with the account. Brothers said the fraudsters were brazen in their theft, using his account to redeem a half-dozen hotel stays in the last week of September, booking rooms all along the East Coast of the United States, from Atlanta, Ga., to Charlotte, N.C., all the way up to Stamford, Conn.

The thieves reserved rooms at more affordable Hilton properties, probably to make the points stretch further, Brothers said. When they exhausted his points, they used the corporate credit card that was already associated with the account to purchase additional points.

“They got into the account and of course the first thing they did was change my primary and secondary email accounts, so that neither me nor my travel agent were getting notifications about new travel bookings,” said Brothers, co-founder of Verafin, a Canadian software security firm that focuses on anti-money laundering and fraud detection.

Brothers said he plans to dispute the credit card charges, but he’s unsure what will happen with his purloined points; nearly a week after he complained to Hilton about the fraud, Brothers has yet to receive a response from the company. Hilton also did not respond to requests for comment from KrebsOnSecurity.


Hilton gives users two ways to log into accounts: With a user name and password, or a member number and a 4-digit PIN. What could go wrong here?  Judging from changes that Hilton made recently to its login process, thieves have been breaking into Hilton Honors accounts using the latter method. According to the travel loyalty Web site LoyaltyLobby, Hilton recently added a CAPTCHA to its login process, ostensibly to make it more difficult for crooks to use brute-forcing programs (or botnets) to automate the guessing of PINs associated with member accounts.

In a post on October 30, LoyaltyLobby’s John Ollila wrote about a hacker selling Hilton Honors accounts for a tiny fraction of the real world value of points in those accounts. For example, the points stolen from Brothers would have fetched around USD $12 — even though the thieves in his case managed to redeem the stolen miles for approximately USD $1,200 worth of hotel reservations.

I did a bit of sleuthing on my own and was able to find plenty of sellers on shady forums offering hijacked points for about three to five percent of their actual value. As this ad from the online black market/drug bazaar known as Evolution Market indicates, the points can be redeemed for gift cards (as good as cash) at locations that convert points to currency. The points can also be used to buy items from the Hilton shopping mall, including golf clubs, watches, Apple products and other electronics.

A merchant on the Evolution black market hawking hijacked Hilton points for a fraction of their value.
“I don’t recommend using them for personal hotel stays, but they ARE safer (and cheaper) than using a carded hotel service,” the Evolution seller advises, referring to the risks associated with using purloined points versus trying to book a stay somewhere using a stolen credit card.

Hilton Honors is hardly alone in allowing logins via account numbers and 4-digit PINs; United Airlines is another big name company that lets customers log in to view, spend and transfer points balances with little more than a member number and a PIN. These companies should offer their customers additional security options, such as the ability to secure accounts with multi-factor authentication (e.g. via Security Keys or Google’s Authenticator mobile app).

If it wasn’t already painfully obvious that a lot of companies and their customers could benefit from adding multi-factor authentication, check out the screen shot below, which shows an underground site that offers automated account checking tools for more than two-dozen retail destinations online. Some of these tools will help streamline the process of dumping available awards and points to a prepaid card.

Stolen points and miles would be a great way to fund a criminal-friendly travel agency. By the way, that’s actually a thing: check out this story about services in the underground that will book stolen flights and hotel rooms for a fraction of their actual cost.

PC-BSD 10.1-RC2 Released

by dru via Official PC-BSD Blog »

The PC-BSD team is pleased to announce the availability of RC2 images for the upcoming PC-BSD 10.1 release. This RC includes many minor bug fixes since RC1, along with new UEFI support for boot / install.

PC-BSD 10.1 Notable Changes

* KDE 4.14.2
* GNOME 3.12.2
* Cinnamon 2.2.16
* Chromium 38.0.2125.104_1
* Firefox 33.0
* NVIDIA Driver 340.24
* Lumina desktop 0.7.0-beta
* Pkg 1.3.8_3
* New AppCafe HTML5 web/remote interface, for both desktop / server usage
* New CD-sized text-installer ISO files for TrueOS / server deployments
* New CentOS 6.5 Linux emulation base
* New HostAP mode for Wifi GUI utilities
* Misc bug fixes and other stability improvements
* NEW! — UEFI support for boot and installation


Along with our traditional PC-BSD DVD ISO image, we have also created a CD-sized ISO image of TrueOS, our server edition.

This is a text-based installer which includes FreeBSD 10.0-Release under the hood. It includes the following features:

* ZFS on Root installation
* Boot-Environment support
* Command-Line versions of PC-BSD utilities, such as Warden, Life-Preserver and more.
* Support for full-disk (GELI) encryption without an unencrypted /boot partition


A testing update is available for 10.0.3 users to upgrade to 10.1-RC2. To apply this update, do the following:

As root, edit /usr/local/share/pcbsd/pc-updatemanager/conf/sysupdate.conf and change

PATCHSET: updates

to

PATCHSET: test-updates

Then run:

% sudo pc-updatemanager check

This should show you a new “Update system to 10.1-RELEASE” patch available. To install it, run the following:

% sudo pc-updatemanager install 10.1-update-10152014-10


As with any major system upgrade, please back up important data and files beforehand!

This update will automatically reboot your system several times during the various upgrade phases; expect it to take between 30 and 60 minutes.

Getting media

10.1-RC2 DVD/USB media can be downloaded from this URL via HTTP or Torrent.

Reporting Bugs
Found a bug in 10.1? Please report it (with as much detail as possible) to our bugs database.

No more DEPENDs for SELinux policy package dependencies

by swift via Simplicity is a form of art... »

I just finished updating 102 packages. The change? Removing the following from the ebuilds:

DEPEND="selinux? ( sec-policy/selinux-${packagename} )"
In the past, we needed this construction in both DEPEND and RDEPEND. Recently however, the SELinux eclass got updated with some logic to relabel files after the policy package is deployed. As a result, the DEPEND variable no longer needs to refer to the SELinux policy package.

This change also means that those moving from a regular Gentoo installation to an SELinux installation will have far fewer packages to rebuild. In the past, getting USE="selinux" (through the SELinux profiles) would rebuild all packages that have a DEPEND dependency on the SELinux policy package. No more: only packages that depend on the SELinux libraries (like libselinux) or utilities need to be rebuilt. The rest will just pull in the proper policy package.

Trouble with IPv6 in a KVM guest running the 3.13 kernel

by Sebastian Marsching via Open-Source, Physik & Politik »

Some time ago, I wrote about two problems with the 3.13 kernel shipping with Ubuntu 14.04 LTS Trusty Tahr: One turned out to be a problem with KSM on NUMA machines acting as Linux KVM hosts and was fixed in later releases of the 3.13 kernel. The other one affected IPv6 routing between virtual machines on the same host. Finally, I figured out the cause of the second problem and how it can be solved.

I use two different kinds of network setups for Linux KVM hosts: For virtual-machine servers in our own network, the virtual machines get direct bridged access to the network (actually I use OpenVSwitch on the VM hosts for bridging specific VLANs, but this is just a technical detail). For this kind of setup, everything works fine, even when using the 3.13 kernel. However, we also have some VM hosts that are actually not in our own network, but are hosted in various data centers. For these VM hosts, I use a routed network configuration. This means that all traffic coming from and going to the virtual machines is routed by the VM host. On layer 2 (Ethernet), the virtual machines only see the VM host and the hosting provider's router only sees the physical machine.

This kind of setup has two advantages: First, it always works, even if the hosting provider expects to only see a single, well-known MAC address (which might be desirable for security reasons). Second, the VM host can act as a firewall, only allowing specific traffic to and from the outside world. In fact, the VM host can also act as a router between different virtual machines, thus protecting them from each other should one be compromised.

The problems with IPv6 only appear when using this kind of setup, where the Linux KVM host acts as a router, not a bridge. The symptoms are that IPv6 packets between two virtual machines are occasionally dropped, while communication with the VM host and the outside world continues to work fine. This is caused by the neighbor-discovery mechanism in IPv6. From the perspective of the VM host, all virtual machines are in the same network. Therefore, it sends an ICMPv6 redirect message to indicate that the VM should contact the other VM directly. However, this does not work, because the network setup only allows traffic between the VM host and individual virtual machines, but no traffic between two virtual machines (otherwise the host could not act as a firewall). Therefore, the neighbor-discovery mechanism determines that the other VM is unavailable (it should be on the same network but does not answer). After some time, the entry in the neighbor table (which you can inspect with ip neigh show) expires and communication works again for a short while, until the next redirect message is received and the same story starts over.

There are two possible solutions to this: The proper one would be to use an individual interface for each guest on the VM host. In this case, the VM host would not expect the virtual machines to be on the same network and would thus stop sending redirect packets. Unfortunately, this makes the setup more complex and, if using a separate /64 for each interface, needs a lot of address space. The simpler albeit sketchy solution is to prevent the redirect messages from having any effect. For IPv4, one could disable the sending of redirect messages through the sysctl option net.ipv4.conf.<interface>.send_redirects. For IPv6, however, this option is not available. So one could either use an ip6tables rule on the OUTPUT chain to block those packets, or simply configure the KVM guests to ignore such packets. I chose the latter approach and added

# IPv6 redirects cause problems because of our routing scheme.
net.ipv6.conf.default.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
to /etc/sysctl.conf in all affected virtual machines.

I do not know why this behavior changed with kernel 3.13. One would expect the same problem to appear with older kernel versions, but I guess there must have been some change in the details of how NDP and redirect messages are handled.

Addendum (2014-11-02):

Adding the suggested options to sysctl.conf does not seem to fix the problem completely. For some reason, an individual network interface can still have this setting enabled. Therefore, I have now added the following line to the IPv6 configuration of the affected interface in /etc/network/interfaces:

        post-up sysctl net.ipv6.conf.$IFACE.accept_redirects=0
This finally fixes it, even if the other options are not added to sysctl.conf.

Refreshingly Old-Fashioned

by Jesco Freund via My Universe »

That, or something like it, is probably a fair way to characterize the newest addition to my virtual air fleet: FlyJSim's Boeing 737-200 model.

For airline pilots accustomed to today's modern aircraft with glass cockpits and, above all, flight management systems (FMS at Boeing, FMGC at Airbus), the Boeing 737-200 offers a special challenge: old-school instrument flying.

No GPS, no INS, and certainly no ADIRS – all that remains is good old radio navigation with ADF/NDB and VOR. Accordingly, a flight has to be planned quite differently: if waypoints come into play that are not themselves marked by a radio beacon, the corresponding RNAV has to be done manually (e.g. by determining their position relative to a beacon with DME, or by cross-bearing).

On longer legs over water, in the end only the rather imprecise magnetic compass is available – a real challenge, especially since the deviation between magnetic and true north can at least be looked up in tables, while the drift caused by crosswinds can at best be estimated for lack of any reference.

Ähnliches gilt selbstverständlich auch für die vertikale Navigation und die Schubregelung: Fehlanzeige bei der Automatisierung. Der Autopilot kann zwar eine bestimmte Höhe oder Steigrate oder Geschwindigkeit halten (das »oder« ist exklusiv gemeint); die Berechnung der optimalen Steigrate, Reiseflughöhe sowie die Bestimmung des T/D obliegt jedoch allein dem Piloten, genauso wie die Schubregelung – anders als spätere Modelle verfügt die Boeing 737–200 noch nicht über ein FADEC System zur elektronischen Triebwerksregelung.

FlyJSim hat das alles wunderbar in seinem Modell eingebaut. Das Cockpit ist sehr detailliert gestaltet; Knöpfe, Schalter und Schutzkappen lassen sich größtenteils betätigen und verhalten sich je nach Bauart des Schalters auch unterschiedlich. Das Exterieur ist solide – sicherlich kein solches Highlight wie etwa die Modelle von Flight Factor oder JARDesign, aber ohne Schnitzer umgesetzt – und letztlich bietet die Boeing 737–200 einfach weniger Komplexität (z. B. beim Fahrwerk), an der sich ein Designer austoben könnte.

Auch die Geräuschkulisse stimmt. Gut zu hören ist etwa der „Woodpecker“, sobald das Cockpit mit Strom versorgt wird. Auch die Schalter geben bei Betätigung entsprechende Laute von sich, ebenso wie APU, Klimaanlage und natürlich die Triebwerke ihren eigenen Klang in die Geräuschkulisse einbringen.

Das Flugzeug lässt sich sehr gut von Hand steuern, und auch das Taxi–Verhalten ist sehr realistisch gestaltet (bei virtuellen Airlinern eher eine Seltenheit). Nach relativ kurzer Eingewöhnung macht das Fliegen dieses Retro–Jets großen Spaß. Gerade auf längeren Flügen kommt keine Langeweile auf; hat der Pilot doch deutlich mehr zu tun als bei moderneren Flugzeugen – schließlich will jeder Streckenabschnitt manuell eingeleitet werden.

FreeBSD turns 21 today!

by Webmaster Team via FreeBSD News Flash »

FreeBSD 1.0, the first official production-ready release of FreeBSD, was announced 21 years ago today, on November 2nd, 1993. See the original announcement here.

FreeBSD 10.1-RC4 Available

by Webmaster Team via FreeBSD News Flash »

The fourth RC build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.

EVE Online on Gentoo Linux

by titanofold via titanofold »

Good news, everyone! I’m finally rid of Windows.

A couple of weeks ago my Windows installation corrupted itself on the 5-minute trip home from the community theatre. I didn’t command it to go to sleep; I just unplugged it and closed the lid. Somehow it managed to screw up its startup files, and the restore process didn’t do what it was supposed to, so I was greeted with a blank screen. No errors. Just staring into the void.

I’ve been using Windows as the sole OS on this machine with Gentoo running in VirtualBox for various reasons related to minor annoyances of unsupported hardware. But as I needed a working machine sooner rather than later, and the only tools I could find to solve my Windows problem appeared to be old, defunct, and/or suspicious, I downloaded an ISO of SystemRescueCd and installed Gentoo in the sliver of space left on the drive.

There were only two real reasons why I was intent on keeping Windows: Netflix and EVE Online. I intended to get Windows up and running once the show was over at the theatre, but then I read about Netflix being supported in Linux. That left me with just one reason to keep Windows: EVE. I turned to Wine and discovered reports of it running EVE quite well. I also learned that the official Mac OS release of EVE runs on Cider, which is based on Wine.

I had another hitch: I had chosen the no-multilib stage3 for that original sliver, thinking I wouldn’t be running anything other than 64-bit software, and drive space was at a premium. EVE Online is 32-bit.

So I had to begin my adventure by switching to multilib. This didn’t involve reinstalling Gentoo, thanks to a handy, but unsupported and unofficial, guide by Jaco Kroon.

As explained in Multilib System without emul-linux Packages, I decided it’s better to build my own 32-bit libraries. So, the next step is to mask the emulation packages:

# /etc/portage/package.mask
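The mask entries themselves are not shown in the post; based on the emul-linux-x86 package set of that era, they would presumably have looked something like this (the atoms are an assumption, not copied from the author's file):

```shell
# /etc/portage/package.mask (sketch; exact atoms are assumed)
app-emulation/emul-linux-x86-baselibs
app-emulation/emul-linux-x86-xlibs
app-emulation/emul-linux-x86-soundlibs
app-emulation/emul-linux-x86-opengl
```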
Because I didn’t want to build a 32-bit variant of everything on my system, I iterated through what Portage wanted and marked several packages to build their 32-bit variant via USE flags. This is what I wound up with:

# /etc/portage/package.use
app-arch/bzip2 abi_x86_32
app-emulation/wine mono abi_x86_32
dev-libs/elfutils static-libs abi_x86_32
dev-libs/expat abi_x86_32
dev-libs/glib abi_x86_32
dev-libs/gmp abi_x86_32
dev-libs/icu abi_x86_32
dev-libs/libffi abi_x86_32
dev-libs/libgcrypt abi_x86_32
dev-libs/libgpg-error abi_x86_32
dev-libs/libpthread-stubs abi_x86_32
dev-libs/libtasn1 abi_x86_32
dev-libs/libxml2 abi_x86_32
dev-libs/libxslt abi_x86_32
dev-libs/nettle abi_x86_32
dev-util/pkgconfig abi_x86_32
media-libs/alsa-lib abi_x86_32
media-libs/fontconfig abi_x86_32
media-libs/freetype abi_x86_32
media-libs/glu abi_x86_32
media-libs/libjpeg-turbo abi_x86_32
media-libs/libpng abi_x86_32
media-libs/libtxc_dxtn abi_x86_32
media-libs/mesa abi_x86_32
media-libs/openal abi_x86_32
media-sound/mpg123 abi_x86_32
net-dns/avahi abi_x86_32
net-libs/gnutls abi_x86_32
net-print/cups abi_x86_32
sys-apps/dbus abi_x86_32
sys-devel/llvm abi_x86_32
sys-fs/udev gudev abi_x86_32
sys-libs/gdbm abi_x86_32
sys-libs/ncurses abi_x86_32
sys-libs/zlib abi_x86_32
virtual/glu abi_x86_32
virtual/jpeg abi_x86_32
virtual/libffi abi_x86_32
virtual/libiconv abi_x86_32
virtual/libudev abi_x86_32
virtual/opengl abi_x86_32
virtual/pkgconfig abi_x86_32
x11-libs/libX11 abi_x86_32
x11-libs/libXau abi_x86_32
x11-libs/libXcursor abi_x86_32
x11-libs/libXdamage abi_x86_32
x11-libs/libXdmcp abi_x86_32
x11-libs/libXext abi_x86_32
x11-libs/libXfixes abi_x86_32
x11-libs/libXi abi_x86_32
x11-libs/libXinerama abi_x86_32
x11-libs/libXrandr abi_x86_32
x11-libs/libXrender abi_x86_32
x11-libs/libXxf86vm abi_x86_32
x11-libs/libdrm abi_x86_32
x11-libs/libvdpau abi_x86_32
x11-libs/libxcb abi_x86_32
x11-libs/libxshmfence abi_x86_32
x11-proto/damageproto abi_x86_32
x11-proto/dri2proto abi_x86_32
x11-proto/dri3proto abi_x86_32
x11-proto/fixesproto abi_x86_32
x11-proto/glproto abi_x86_32
x11-proto/inputproto abi_x86_32
x11-proto/kbproto abi_x86_32
x11-proto/presentproto abi_x86_32
x11-proto/randrproto abi_x86_32
x11-proto/renderproto abi_x86_32
x11-proto/xcb-proto abi_x86_32 python_targets_python3_4
x11-proto/xextproto abi_x86_32
x11-proto/xf86bigfontproto abi_x86_32
x11-proto/xf86driproto abi_x86_32
x11-proto/xf86vidmodeproto abi_x86_32
x11-proto/xineramaproto abi_x86_32
x11-proto/xproto abi_x86_32
Now emerge both Wine — the latest and greatest of course — and the questionable library so textures will be rendered:

emerge -av media-libs/libtxc_dxtn =app-emulation/wine-1.7.29
You may get some messages along the lines of:

emerge: there are no ebuilds to satisfy ">=sys-libs/zlib-1.2.8-r1".
This was a bit of a head-scratcher for me. I have sys-libs/zlib-1.2.8-r1 installed. I didn’t have to accept its keyword. It’s already stable! I haven’t really looked into why, but you have to accept its keyword to press forward:

# echo '=sys-libs/zlib-1.2.8-r1' >> /etc/portage/package.accept_keywords
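Before appending each such line, it can be worth confirming what is actually installed; eix (from app-portage/eix) is one option, and the atom below is just an example:

```shell
# Show exactly-matching installed versions of sys-libs/zlib
# (requires app-portage/eix; run eix-update first if the cache is stale)
eix --installed --exact sys-libs/zlib
```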
You’ll have to do the above several times for other packages when you try to emerge Wine. Most of the time the particular version it wants is something you already have installed. Check what you do have installed with eix or another favorite tool so you don’t downgrade anything. Once Wine is installed, run the following as your user:

$ winecfg
Download the EVE Online Windows installer and run it using Wine:

$ wine EVE_Online_Installer_*.exe
Once that’s done, invoke the launcher as:

$ force_s3tc_enable=true wine 'C:\Program Files (x86)\CCP\EVE\eve.exe'
force_s3tc_enable=true is needed to enable texture rendering. Without it, EVE will freeze during start up. (If you didn’t emerge media-libs/libtxc_dxtn, EVE will start, but none of the textures will load, and you’ll have a lot of black on black objects.) I didn’t have to do any of the other things I’ve found, such as disabling DirectX 11.

As for my Linux setup: I have a Radeon HD6480G (SUMO/r600) in my ThinkPad Edge E525, and I’m using the open source radeon drivers with graphics on high and medium anti-aliasing with Mesa and OpenGL. For the most part, I find the game play to be smooth and indistinguishable from my experience on Windows.

There are a few things that don’t work well. There are psychedelic rendering artifacts galore when I open the in-game browser (IGB) or switch to another application, but that’s resolved, without logging out of EVE, by changing the graphics quality to something else. It may be related to resource caching, but I need to do more testing. I haven’t tried going into the Captain’s Quarters (other users have reported crashes entering there), as back on Windows that brought my system to a crawl, and there isn’t anything particularly interesting about going in there…yet.

Overall, I’m quite happy with the EVE/Wine experience on Gentoo. It was quite easy and there wasn’t any real troubleshooting for me to do.

If you’re a fellow Gentoo-er in EVE, drop me a line. If you want to give EVE a go, have an extra week on me.

Update: I’ve been informed by Aatos Taavi that running EVE in windowed mode works quite well. I’ve also been informed that we need to declare stable packages in package.accept_keywords because abi_x86_32 is use-masked.

bsdtalk246 - Playing with tor

by Mr via bsdtalk »

Looking forward to attending MeetBSD in California this weekend.  Still working on finding a new /home for all my stuff, but thank you all who have offered suggestions and hosting.
Playing with tor, but under no illusions that it makes me safe.

One trick I did was to add the following to my pf.conf file on OpenBSD:

block out all
pass out proto { tcp, udp } all user _tor
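To apply a rule set like this safely, pf's standard workflow is a syntax check followed by a load (flags as documented in pfctl(8)):

```shell
# Parse /etc/pf.conf without loading it, to catch syntax errors
pfctl -nf /etc/pf.conf
# Load the rules for real, then print what is active
pfctl -f /etc/pf.conf
pfctl -sr
```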

File info: 19Min, 9MB

Ogg Link:

Using multiple priorities with modules

by swift via Simplicity is a form of art... »

One of the new features of the 2.4 SELinux userspace is support for module priorities. The idea is that distributions and administrators can override a (pre)loaded SELinux policy module with another module without removing the previous one. The lower-priority module will remain in the store, but will not be active until the higher-priority module is disabled or removed again.

The “old” modules (pre-2.4) are loaded with priority 100. When policy modules with the 2.4 SELinux userspace series are loaded, they get loaded with priority 400. As a result, the following message occurs:

~# semodule -i screen.pp
libsemanage.semanage_direct_install_info: Overriding screen module at lower priority 100 with module at priority 400
So unlike the previous situation, where the older module is substituted with the new one, we now have two “screen” modules loaded; the last one gets priority 400 and is active. To see all installed modules and priorities, use the --list-modules option:

~# semodule --list-modules=all | grep screen
100 screen     pp
400 screen     pp
Older versions of modules can be removed by specifying the priority:

~# semodule -X 100 -r screen
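The priority flag also works the other way around: a locally built module can be installed above the distribution's copy. A sketch, with a hypothetical module file name:

```shell
# Install a local policy module at priority 500, shadowing the
# distribution module loaded at priority 400
semodule -X 500 -i myscreen_local.pp

# Verify which copies exist; only the highest priority is active
semodule --list-modules=all | grep screen
```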

KrebsOnSecurity Honored for Fraud Reporting

by BrianKrebs via Krebs on Security »

The Association of Certified Fraud Examiners today announced they have selected Yours Truly as the recipient of this year’s “Guardian Award,” an honor given annually to a journalist “whose determination, perseverance, and commitment to the truth have contributed significantly to the fight against fraud.”

The Guardian Award bears the inscription “For Vigilance in Fraud Reporting.”

Previous honorees include former Washington Post investigative reporter and two-time Pulitzer Prize winner Susan Schmidt; Diana Henriques, a New York Times contributing writer and author of The Wizard of Lies (a book about Bernie Madoff); and Allan Dodds Frank, a regular contributor to and The Daily Beast.

I’d like to thank the ACFE for this prestigious award, and offer a special note of thanks to all of you dear readers who continue to support my work as an independent journalist.

The ACFE’s blog post about the award is here.

Chip & PIN vs. Chip & Signature

by BrianKrebs via Krebs on Security »

The Obama administration recently issued an executive order requiring that federal agencies migrate to more secure chip-and-PIN based credit cards for all federal employees that are issued payment cards. The move marks a departure from the far more prevalent “chip-and-signature” standard, an approach that has been overwhelmingly adopted by a majority of U.S. banks that are currently issuing chip-based cards. This post seeks to explore some of the possible reasons for the disparity.

Chip-based cards are designed to be far more expensive and difficult for thieves to counterfeit than regular credit cards that most U.S. consumers have in their wallets. Non-chip cards store cardholder data on a magnetic stripe, which can be trivially copied and re-encoded onto virtually anything else with a magnetic stripe.

Magnetic-stripe based cards are the primary target for hackers who have been breaking into retailers like Target and Home Depot and installing malicious software on the cash registers: The data is quite valuable to crooks because it can be sold to thieves who encode the information onto new plastic and go shopping at big box stores for stuff they can easily resell for cash (think high-dollar gift cards and electronics).

The United States is the last of the G20 nations to move to more secure chip-based cards. Other countries that have made this shift have done so by government fiat mandating the use of chip-and-PIN. Requiring a PIN at each transaction addresses both the card counterfeiting problem, as well as the use of lost or stolen cards.

Here in the States, however, the movement to chip-based cards has evolved overwhelmingly toward the chip-and-signature approach. Naturally, if your chip-and-signature card is lost or stolen and used fraudulently, there is little likelihood that a $9-per-hour checkout clerk is going to bat an eyelash at a thief who signs your name when using your stolen card to buy stuff at retailers. Nor will a signature card stop thieves from using a counterfeit card at automated payment terminals (think gas pumps).

But just how broadly adopted is chip-and-signature versus chip-and-PIN in the United States? According to an unscientific poll that’s been running for the past two years at the travel forum Flyertalk, only a handful of major U.S. banks issue chip-and-PIN cards; most have pushed chip-and-signature. Check out Flyertalk’s comprehensive Google Docs spreadsheet here for a member-contributed rundown of which banks support chip-and-PIN versus chip-and-signature.

I’ve been getting lots of questions from readers who are curious about or upset at the prevalence of chip-and-signature over chip-and-PIN cards here in the United States, and I realized I didn’t know much about the reasons behind the disparity vis-a-vis other nations that have already made the switch to chip cards. So I reached out to several experts to get their take on it.

Julie Conroy, a fraud analyst with The Aite Group, said that by and large Visa has been pushing chip-and-signature and that MasterCard has been promoting chip-and-PIN. Avivah Litan, an analyst at Gartner Inc., said MasterCard is neutral on the technology. For its part, Visa maintains that it is agnostic on the technology, saying in an emailed statement that the company believes “requiring stakeholders to use just one form of cardholder authentication may unnecessarily complicate the adoption of this important technology.”

BK: A lot of readers seem confused about why more banks wouldn’t adopt chip-and-PIN over chip-and-signature, given that the former protects against more forms of fraud.

Conroy: The PIN only addresses fraud when the card is lost or stolen, and in the U.S. market lost-and-stolen fraud is very small in comparison with counterfeit card fraud. Also, as we looked at other geographies — and our research has substantiated this — as you see these geographies go chip-and-PIN, the lost-and-stolen fraud dips a little bit, but then the criminals adjust. So in the UK, the lost-and-stolen fraud is now back above where it was before the migration. The criminals there have adjusted, and that increased focus on capturing the PIN gives them more opportunity, because if they do figure out ways to compromise that PIN, then they can perpetrate ATM fraud and get more bang for their buck.

So, the PIN at the end of the day is a static data element, and it only goes so far from a security perspective. And as you weigh that potential for attrition against the potential to address the relatively small amount of fraud that is lost-and-stolen fraud, the business case for chip-and-signature is really a no-brainer.

Litan: Most card issuing banks and Visa don’t want PINs because the PINs can be stolen and used with the magnetic stripe data on the same cards (which also have a chip) to withdraw cash from ATMs. Banks eat the ATM fraud costs. This scenario has happened with the roll-out of chip cards with PIN – in Europe and in Canada.

BK: What are some of the things that have pushed more banks in the US toward chip-and-signature?

Conroy: As I talk to the networks and the issuers who have made their decision about where to go, there are a few things that are moving folks toward chip-and-signature. The first is that we are the most competitive market in the world, and so as you look at the business case for chip-and-signature versus chip-and-PIN, no issuer wants to have the card in the wallet that is the most difficult card to use.

BK: Are there recent examples that have spooked some of the banks away from embracing chip-and-PIN?

Conroy: There was a Canadian issuer that — when they did their migration to chip — really botched their chip-and-PIN roll out, and consumers were forgetting their PIN at the point-of-sale. That issuer saw a significant dip in transaction volume as a result. One of the missteps this issuer made was that they sent their PIN mailers out too soon before you could actually do PIN transactions at the point of sale, and consumers forgot. Also, at the time they sent out the cards, [the bank] didn’t have the capability at ATMs or IVRs (automated, phone-based customer service systems) for consumers to reset their PINs to something they could remember.

BK: But the United States has a much more complicated and competitive financial system, so wouldn’t you expect more issuers to be going with chip-and-PIN?

Conroy: With consumers having an average of about 3.3 cards in their wallet, and the US being a far more competitive card market, the issuers are very sensitive to that. As I was doing my chip-and-PIN research earlier this year, there was one issuer that said quite bluntly, “We don’t really think we can teach Americans to do two things at once. So we’re going to start with teaching them how to dip, and if we have another watershed event like the Target breach and consumers start clamoring for PIN, then we’ll adjust.” So the issuers I spoke with wanted to keep it simple: Go to market with plain vanilla, and once we get this working, we can evaluate adding some sprinkles and toppings later.

BK: What about the retailers? I would think more of them are in favor of chip-and-PIN over signature.

Litan: Retailers want PINs because they strengthen the security of the point-of-sale (POS) transaction and lessen the chances of fraud at the POS (which they would have to eat if they don’t have chip-accepting card readers but are presented with a chip card). Also retailers have traditionally been paying lower rates on PIN transactions as opposed to signature transactions, although those rates have more or less converged over time, I hear.

BK: Can you talk about the ability to use these signature cards outside the US? That’s been a sticking point in the past, no?

Conroy: The networks have actually done a good job over the last year to 18 months in pushing the [merchant banks] and terminal manufacturers to include “no cardholder verification method” as one of the options in the terminals. Which means that chip-and-signature cards are increasingly working. There was one issuer I spoke with that had issued chip-and-signature cards already for their traveling customers and they said that those moves by the networks and adjustments overseas meant that their chip-and-signature cards were working 98 percent of the time, even at the unattended kiosks, which were some of the things that were causing problems a lot of the time.

BK: Is there anything special about banks that have chosen to issue chip-and-PIN cards over chip-and-signature?

Conroy: Where we are seeing issuers go with chip-and-PIN, largely it is issuers where consumers have a very compelling reason to pull that particular card out of their wallet. So, we’re talking mostly about merchants who are issuing their own cards and have loyalty points for using that card at that store. That is where we don’t see folks worrying about the attrition risks so much, because they have another point of stickiness for that card.

BK: What did you think about the White House announcement that specifically called out chip-and-PIN as the chip standard the government is endorsing?

Conroy: The White House announcement, I thought, was pure political window dressing, especially when they claimed to be taking the lead on credit card security. Visa, for example, made their initial road map announcement back in 2011. And [the White House is] coming to the table three years later thinking that it’s going to influence the direction the market is taking, when many banks have spent in some cases upwards of a year coding toward these specifications? That just seems ludicrous to me. The chip-card train has been out of the station for a long time. And it seemed like political posturing at its best, or worst, depending on how you look at it.

Litan: I think it is very significant. It’s basically the White House taking the side of the card acceptors and what they prefer. Whatever the government does will definitely help drive trends, so I think it’s a big statement.

BK: So, I guess we should all be grateful that banks and retailers in the United States are finally taking steps to move toward chip cards, but it seems to me that as long as these chip cards still also store cardholder data on a magnetic stripe as a backup, that the thieves can still steal and counterfeit this card data — even from chip cards.

Litan: Yes, that’s the key problem for the next few years. Once mag stripe goes away, chip-and-PIN will be a very strong solution. The estimates are now that by the end of 2015, 50 percent of the cards and terminals will be chip-enabled, but it’s going to be several years before we get closer to full compliance. So, we’re probably looking at about 2018 before we can start making plans to get rid of the magnetic stripe on these cards.

On professionalism (my first and last post explicitly about systemd)

by Flameeyes via Flameeyes's Weblog »

I have been trying my best not to comment on systemd one way or another for a while, for the most part because I don't want a trollfest on my blog; moderating it is something I hate, and I'm sure it would be needed. On the other hand, it seems people have started to bring me into the conversation from time to time.

What I would like to point out at this point is that both extreme sides of the vision are, in my opinion, behaving childishly and being totally unprofessional. Whether it is name-calling of the people or the software, death threats, insults, satirical websites, labeling of 300 people for a handful of them, etc.

I don't think I have ever been as happy to have a job that allows me not to care about open source as in the past few weeks, as things keep escalating and escalating. You guys are the worst. And again, I refer to both supporters and detractors, devs of systemd, devs of eudev, Debian devs and Gentoo devs, and so on and so forth.

And the reason why I say this is because you both want to bring this to extremes that I think are totally uncalled for. I don't see the world in black and white and I think I said that before. Gray is nuanced and interesting, and needs skills to navigate, so I understand it's easier to just take a stand and never revise your opinion, but the easy way is not what I care about.

Myself, I decided to migrate my non-server systems to systemd a few months ago. It works fine. I've considered migrating my servers, and I decided for the moment to wait. The reason is technical for the most part: I don't think I trust the stability promises for the moment and I don't reboot servers that often anyway.

There are good things to the systemd design. And I'm sure that very few people will really miss sysvinit as is. Most people, especially in Gentoo, have not been using sysvinit properly, but rather through OpenRC, which shares more spirit with systemd than sysv, either by coincidence or because they are just the right approach to things (declarativeness to begin with).

At the same time, I don't like Lennart's approach on this to begin with, and I don't think it's uncalled for to criticize the product based on the person in this case, as the two are tightly coupled. I don't like moderating people away from a discussion, because it just ends up making the discussion even more confrontational on the next forum you stumble across them — this is why I never blacklisted Ciaran and friends from my blog even after a group of them started pasting my face on pictures of nazi soldiers from WW2. Yes I agree that Gentoo has a good chunk of toxic supporters, I wish we got rid of them a long while ago.

At the same time, if somebody were to try to categorize me the same way as the people who decided to fork udev without even thinking of what they were doing, I would want to point out that I was reproaching them from day one for their absolutely insane (and inane) starting announcement and first few commits. And I have not been using it ever, since for the moment they seem to have made good on the promise of not making it impossible to run udev without systemd.

I don't agree with the complete direction right now, and especially with the one-size-fit-all approach (on either side!) that tries to reduce the "software biodiversity". At the same time there are a few designs that would be difficult for me to attack given that they were ideas of mine as well, at some point. Such as the runtime binary approach to hardware IDs (that Greg disagreed with at the time and then was implemented by systemd/udev), or the usage of tmpfs ACLs to allow users at the console to access devices — which was essentially my original proposal to get rid of pam_console (that played with owners instead, making it messy when having more than one user at console), when consolekit and its groups-fiddling was introduced (groups can be used for setgid, not a good idea).

So why am I posting this? Mostly to tell everybody out there that if you plan on using me for either side point to be brought home, you can forget about it. I'll probably get pissed off enough to try to prove the exact opposite, and then back again.

Neither of you is perfectly right. You both make mistakes. And you are both unprofessional. Try to grow up.

Edit: I mistyped eudev in the original article and it read euscan. Sorry Corentin, was thinking one thing and typing another.

Migrating to SELinux userspace 2.4 (small warning for users)

by swift via Simplicity is a form of art... »

In a few moments, SELinux users who have the ~arch KEYWORDS set (either globally or for the SELinux utilities in particular) will notice that the SELinux userspace will upgrade to version 2.4 (release candidate 5 for now). This upgrade comes with a manual step that needs to be performed after the upgrade. The information is mentioned in the post-installation message of the policycoreutils package, and basically says that you need to execute:

~# /usr/libexec/selinux/semanage_migrate_store
The reason is that the SELinux utilities expect the SELinux policy module store (and the semanage related files) to be in /var/lib/selinux and no longer in /etc/selinux. Note that this does not mean that the SELinux policy itself is moved outside of that location, nor is the basic configuration file (/etc/selinux/config). It is what tools such as semanage manage that is moved outside that location.

I tried to automate the migration as part of the packages themselves, but this would require the portage_t domain to be able to move, rebuild and load policies, which it can’t (and to be honest, shouldn’t). Instead of augmenting the policy or making updates to the migration script as delivered by the upstream project, we currently decided to have the migration done manually. It is a one-time migration anyway.

If for some reason end users forget to do the migration, then that does not mean that the system breaks or becomes unusable. SELinux still works, SELinux aware applications still work; the only thing that will fail are updates on the SELinux configuration through tools like semanage or setsebool – the latter when you want to persist boolean changes.

~# semanage fcontext -l
ValueError: SELinux policy is not managed or store cannot be accessed.
~# setsebool -P allow_ptrace on
Cannot set persistent booleans without managed policy.
If you get those errors or warnings, all that is left to do is the migration. Note in the following that there is a warning about ‘else’ blocks no longer being supported: that’s okay; as far as I know (it was mentioned on the upstream mailing list as well, as something not to worry about) it does not have any impact.

~# /usr/libexec/selinux/semanage_migrate_store
Migrating from /etc/selinux/mcs/modules/active to /var/lib/selinux/mcs/active
Attempting to rebuild policy from /var/lib/selinux
sysnetwork: Warning: 'else' blocks in optional statements are unsupported in CIL. Dropping from output.
You can also add -c so that the old policy module store is cleaned up, and you can rerun the command multiple times:

~# /usr/libexec/selinux/semanage_migrate_store -c
warning: Policy type mcs has already been migrated, but modules still exist in the old store. Skipping store.
Attempting to rebuild policy from /var/lib/selinux
You can manually clean up the old policy module store like so:

~# rm -rf /etc/selinux/mcs/modules
So… don’t worry – the change is small and does not break stuff. And for those wondering about CIL I’ll talk about it in one of my next posts.

Nuke Windows!

by Jesco Freund via My Universe »

In my last post about Sublime I made the bold claim that Windows users were being systematically neglected by the Atom developers. That is no longer the case, so consider that statement hereby revised.

It is true that for quite a while only an OS X build was provided. Since v0.109 (released in July), however, the Atom makers have also been providing a Windows build (a portable one, even); since v0.128 (mid-September) there has also been a 64-bit Ubuntu package, and since the latest build, v0.140 (released two days ago), a 64-bit RPM package for Red Hat as well.

PC-BSD Forums Now Support Tapatalk

by dru via Official PC-BSD Blog »

The PC-BSD forums are now accessible on Tapatalk. Tapatalk is a free forum app for your smartphone that lets you share photos, post, and reply to discussions easily on-the-go.

Tapatalk can be downloaded from here. Once installed, search “” from the Tapatalk Explore tab. Be sure to add the “PC-BSD Forums” search result, not just “PC-BSD”.


by dru via Official PC-BSD Blog »

Several members of the PC-BSD and FreeNAS teams will be attending MeetBSD, to be held on November 1 and 2 in San Jose, CA. There are some great presentations lined up for this event, and registration is only $75.

As usual, we’ll be giving out PC-BSD and FreeNAS media and swag. There will also be a FreeBSD Foundation booth that will accept donations to the Foundation. The BSDA certification exam will be held at 18:00 on November 1.

Pootle Translation System is now Updated to Version!

by Josh Smith via Official PC-BSD Blog »

If any of you have tried to use the PC-BSD Translation / Pootle web interface in the last year, you probably don’t have a lot of good things to say about it.  A 35-word translation might take you a good 30 minutes between the load times (if you could even log in to the site without it timing out).  Thankfully those days are behind us!  PC-BSD has upgraded its translation system to the new Pootle version and it is blazingly fast.  I went through localizing a small 35-word applet for PC-BSD and it took me roughly 4 minutes, compared to the half hour it would have taken before due to the slowness of the old Pootle software.  Check out the new translation site at

There are a couple of things you are going to want to keep in mind about the new translation system.  You will have to create a new account.  Upgrading Pootle directly was proving disastrous, so we exported all the strings and imported them into the new Pootle server.  This means no accounts were transferred, since a direct upgrade was not done.  It also means that the strings that were brought in appear as “fuzzy” translations.  If you have some free time, you can help by going to the translation site and approving some of the fuzzy translations.  Many languages have already been done; they just need to be reviewed and marked as acceptable (uncheck the fuzzy box if you are 100% certain of the translation).

I hope you guys are as excited as I am over the new translation possibilities!  For more information on how you can help with localization / translating contact me at

Best Regards,


Happy 17th!

by Zach via The Z-Issue »

Just wanted to wish you a Happy 17th Birthday, Noah. I hope that it is a great day for you, and that the upcoming year is even better than this past one! My wish for you this year is that you are able to take time to enjoy the truly important things in life: family, friends, your health, and the events that don’t require anything more than your attention. Take the time—MAKE the time—to stop and appreciate the world around you.


How to Tell Data Leaks from Publicity Stunts

by BrianKrebs via Krebs on Security »

In an era when new consumer data breaches are disclosed daily, fake claims about data leaks are sadly becoming more common. These claims typically come from fame-seeking youngsters who enjoy trolling journalists and corporations, and otherwise wasting everyone’s time. Fortunately, a new analysis of recent bogus breach claims provides some simple tools that anyone can use to quickly identify fake data leak claims.

The following scenario plays out far too often. E-fame seekers post a fake database dump to a site like Pastebin and begin messaging journalists on Twitter and other social networks, claiming that the dump is “proof” that a particular company has been hacked. Inevitably, some media outlets will post stories questioning whether the company was indeed hacked, and the damage has been done.

Fortunately, there are some basic steps that companies, journalists and regular folk can take to quickly test whether a claimed data leak is at all valid, while reducing unwarranted damage to reputation caused by media frenzy and public concern. The fact-checking tips come in a paper from Allison Nixon, a researcher with Deloitte who — for nearly the past two years — has been my go-to person for vetting public data breach claims.

According to Nixon, the easiest way to check a leak claim is to run a simple online search for several of its components. As Nixon explains, seeking out unique-looking artifacts — such as odd passwords or email addresses — very often reveals that the supposed leak is in fact little more than a recycled leak from months or years prior. While this may seem like an obvious tip, it’s appalling how often reporters fail to take even this basic step in fact-checking a breach claim.

A somewhat more advanced test seeks to measure how many of the “leaked” accounts are already registered at the supposedly breached organization. Most online services do not allow two different user accounts to have the same email address, so attempting to sign up for an account using an email address in the claimed leak data is an effective way to test leak claims. If several of the email addresses in the claimed leak list do not already have accounts associated with them at the allegedly breached Web site, the claim is almost certainly bogus.

To determine whether the alleged victim site requires email uniqueness for user accounts, the following test should work: Create two different accounts at the service, each using a unique email address. Then attempt to change one account’s email address to the other’s. If the site disallows that change, no duplicate emails are allowed, and the analysis can proceed.
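As a rough illustration, the registration test described above can be sketched as a short script. Note that `account_exists` here is a hypothetical stand-in for whatever probe the target site allows (for example, its signup form rejecting an already-registered address); it is not part of any real API.

```python
# Sketch of the registration test. 'account_exists' is a hypothetical
# callback you would implement against the allegedly breached site
# (e.g. by checking whether its signup form rejects the address as taken).
def fraction_registered(leaked_emails, account_exists):
    """Return the share of claimed-leak emails that already have accounts."""
    if not leaked_emails:
        return 0.0
    hits = sum(1 for email in leaked_emails if account_exists(email))
    return hits / len(leaked_emails)

# Example with a fake "site" that only knows one of the three addresses:
known = {"alice@example.com"}
emails = ["alice@example.com", "bob@example.com", "carol@example.com"]
share = fraction_registered(emails, lambda e: e in known)
print(share)  # a low share suggests the leak claim is bogus
```

If most of the claimed addresses have no account at the site, the claim is almost certainly fake, per Nixon's test.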

Importantly, Nixon notes that these techniques only demonstrate a leak is fake — not that a compromise has or hasn’t occurred. One of the sneakier ways that ne’er-do-wells produce convincing data leak claims is through the use of what’s called a “combolist.” With combolists, miscreants will try to build lists of legitimate credentials from a specific site using public lists of credentials from previous leaks at other sites.

This technique works because a fair percentage of users re-use passwords at multiple sites. Armed with various account-checking programs, e-fame seekers can quickly build a list of working credential pairs for any number of sites, and use that information to back up claims that the site has been hacked.

Account checking tools sold on the cybercriminal underground by one vendor.
But according to Nixon, there are some basic patterns that appear in lists of credentials that are essentially culled from combolists.

“Very often, you can tell a list of credentials is from a combolist because the list will be nothing more than username and password pairs, instead of password hashes and a whole bunch of other database information,” Nixon said.

A great example of this came earlier this month when multiple media outlets repeated a hacker’s claim that he’d stolen a database of almost seven million Dropbox login credentials. The author of that hoax claimed he would release on Pastebin more snippets of Dropbox account credentials as he received additional donations to his Bitcoin account. Dropbox later put up a blog post stating that the usernames and passwords posted in that “leak” were likely stolen from other services.

Other ways of vetting a claimed leak involve more detailed and time-intensive research, such as researching the online history of the hacker who’s making the leak claims.

“If you look at the motivation, it’s mostly ego-driven,” Nixon said. “They want to be a famous hacker. If they have a handle attached to the claim — a name they’ve used before — that tells me that they want a reputation, but that also means I can check their history to see if they have posted fake leaks in the past. If I see a political manifesto at the top of a list of credentials, that tells me that the suspected leak is more about the message and the ego than any sort of breach disclosure.”

Nixon said that while attackers can use the techniques contained in her paper to produce higher-quality fake leaks, the awareness the document provides will benefit the public far more than it benefits the attackers.

“For the most part, there are a few fake breaches that get posted over and over again on Pastebin,” she said. “There is just a ton of background noise, and I would say only a tiny percentage of these breach claims are legitimate.”

A full copy of the Deloitte report is available here (PDF).

More RSS UDP tests - this time on a Dell R720

by Adrian via Adrian Chadd's Ramblings »

I've recently had the chance to run my RSS UDP test suite up on a pair of Dell R720s. They came with on-board 10G Intel NICs (ixgbe(4) in FreeBSD) so I figured I'd run my test suite up on it.

Thank you to the Enterprise Storage Division at Dell for providing hardware for me to develop on!

The config is like in the previous blog post, but now I have two 8-core Sandy Bridge Xeon CPUs to play with. To simplify things (and to not have to try and solve NUMA related issues) I'm running this on the first socket. The Intel NIC is attached to the first CPU socket.


  • CPU: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz (2000.04-MHz K8-class CPU) x 2
  • RAM: 64GiB
  • HTT disabled

# ... until ncpus is tunable, make it use 8 buckets.

This time I want to test with 8 streams, so after some trial and error I found the right IPv4 addresses to use:

  • Server:
  • Client:,,,,,,,
The test was like before - the server ran one rss-udp-srv program that spawns one thread per RSS bucket. The client side runs rss-clt programs to generate traffic - but now there's eight of them instead of four.

The results are what I expected: the contention is in the same place (UDP receive) and it's per-core - it doesn't contend between CPU cores.

Each CPU is transmitting and receiving 215,000 510-byte UDP frames a second. It scales linearly - 1 CPU is 215,000 TX/RX frames a second. 8 CPUs is 215,000 TX/RX frames a second * 8. There's no degradation as the CPU core count increases.

That's 1.72 million packets per second. At 510-byte frames it's about 7 gigabits/sec in and out.
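The throughput figures above check out; a quick sanity calculation:

```python
# Sanity-check the quoted numbers: 215k frames/sec per core,
# 8 cores, 510-byte UDP frames.
frames_per_core = 215_000
cores = 8
frame_bytes = 510

pps = frames_per_core * cores          # total packets per second
gbps = pps * frame_bytes * 8 / 1e9     # bits per second, in Gbit/s

print(pps)   # 1,720,000 packets/sec
print(gbps)  # ~7.0 Gbit/s in each direction
```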

The other 8 cores are idle. Ideally we'd be able to run an application in those cores - so hopefully I can get my network / rss library up and running enough to prototype an RSS-aware memcached and see if it'll handle this particular workload.

It's a far cry from what I think we can likely achieve - but please keep in mind that I know I could do more awesome looking results with netmap, PF_RING or Intel's DPDK software. What I'm trying to do is push the existing kernel networking subsystem to its limits so the issues can be exposed and fixed.

So, where's the CPU going?

In the UDP server program (pid 1620), it looks thus:

# pmcstat -P CPU_CLK_UNHALTED_CORE -T -w 1 -p 1620

PMC: [CPU_CLK_UNHALTED_CORE] Samples: 34298 (100.0%) , 155 unresolved

  8.0 kernel     fget_unlocked        kern_sendit:4.2 kern_recvit:3.9
  7.0 kernel     copyout              soreceive_dgram:5.6 amd64_syscall:0.9
  3.6 kernel     __mtx_unlock_flags   ixgbe_mq_start
  3.5 kernel     copyin               m_uiotombuf:1.8 amd64_syscall:1.2
  3.4 kernel     memcpy               ip_output:2.9 ether_output:0.6
  3.4 kernel     toeplitz_hash        rss_hash_ip4_2tuple
  3.3 kernel     bcopy                rss_hash_ip4_2tuple:1.4 rss_proto_software_hash_v4:0.9
  3.0 kernel     _mtx_lock_spin_cooki pmclog_reserve
  2.7 kernel     udp_send             sosend_dgram
  2.5 kernel     ip_output            udp_send

In the NIC receive / transmit thread(s) (pid 12), it looks thus:

# pmcstat -P CPU_CLK_UNHALTED_CORE -T -w 1 -p 12

PMC: [CPU_CLK_UNHALTED_CORE] Samples: 79319 (100.0%) , 0 unresolved

 10.3 kernel     ixgbe_rxeof          ixgbe_msix_que
  9.3 kernel     __mtx_unlock_flags   ixgbe_rxeof:4.8 netisr_dispatch_src:2.1 in_pcblookup_mbuf:1.3
  8.3 kernel     __mtx_lock_flags     ixgbe_rxeof:2.8 netisr_dispatch_src:2.4 udp_append:1.2 in_pcblookup_mbuf:1.1 knote:0.6
  3.8 kernel     bcmp                 netisr_dispatch_src
  3.6 kernel     uma_zalloc_arg       sbappendaddr_locked_internal:2.0 m_getjcl:1.6
  3.4 kernel     ip_input             netisr_dispatch_src
  3.4 kernel     lock_profile_release __mtx_unlock_flags
  3.4 kernel     in_pcblookup_mbuf    udp_input
  3.0 kernel     ether_nh_input       netisr_dispatch_src
  2.4 kernel     udp_input            ip_input
  2.4 kernel     mb_free_ext          m_freem
  2.2 kernel     lock_profile_obtain_ __mtx_lock_flags
  2.1 kernel     ixgbe_refresh_mbufs  ixgbe_rxeof

It looks like there are some obvious optimisations to poke at (what the heck is fget_unlocked() doing up there?) and yes, copyout/copyin are really terrible but currently unavoidable. The toeplitz hash and bcopy aren't very nice, but they're occurring in the transmit path because at the moment there's no nice way to efficiently set both the outbound RSS hash and RSS bucket ID and send to a non-connected socket destination (ie, specify the destination IP:port as part of the send). There's also some lock contention that needs to be addressed.
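For readers wondering what the toeplitz_hash entry in the profile above actually computes: RSS hashes the packet's address tuple with a Toeplitz hash to pick a bucket. A minimal sketch of the 32-bit Toeplitz hash as defined in the Microsoft RSS specification follows; the key and input bytes here are purely illustrative, not FreeBSD's defaults.

```python
def toeplitz_hash(key: bytes, data: bytes) -> int:
    """32-bit Toeplitz hash as used by RSS: for every set bit of the input
    (MSB first), XOR in the 32-bit window of the key starting at that bit."""
    key_int = int.from_bytes(key, "big")
    key_bits = len(key) * 8
    result = 0
    for i, byte in enumerate(data):
        for bit in range(8):
            if byte & (0x80 >> bit):
                shift = key_bits - 32 - (i * 8 + bit)
                result ^= (key_int >> shift) & 0xFFFFFFFF
    return result

# Illustrative 40-byte key and a 2-tuple (src IP, dst IP) input:
key = bytes(range(40))
data = bytes([192, 168, 1, 1, 10, 0, 0, 1])
bucket = toeplitz_hash(key, data) % 8   # map the hash to one of 8 RSS buckets
print(bucket)
```

Because the hash is linear over GF(2), it also has the property that hash(a) XOR hash(b) equals the hash of a XOR b, which is occasionally handy for reasoning about bucket placement.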

The output of the netisr queue statistics looks good:

root@abaddon:/home/adrian/git/github/erikarn/freebsd-rss # netstat -Q
Setting                        Current        Limit
Thread count                         8            8
Default queue limit                256        10240
Dispatch policy                 direct          n/a
Threads bound to CPUs          enabled          n/a

Name   Proto QLimit Policy Dispatch Flags
ip         1    256    cpu   hybrid   C--
igmp       2    256 source  default   ---
rtsock     3    256 source  default   ---
arp        4    256 source  default   ---
ether      5    256    cpu   direct   C--
ip6        6    256   flow  default   ---
ip_direct     9    256    cpu   hybrid   C--

WSID CPU   Name     Len WMark   Disp'd  HDisp'd   QDrops   Queued  Handled
   0   0   ip         0    25        0 839349259        0       49 839349308
   0   0   igmp       0     0        0        0        0        0        0
   0   0   rtsock     0     2        0        0        0       92       92
   0   0   arp        0     0      118        0        0        0      118
   0   0   ether      0     0 839349600        0        0        0 839349600
   0   0   ip6        0     0        0        0        0        0        0
   0   0   ip_direct     0     0        0        0        0        0        0
   1   1   ip         0    20        0 829928186        0      286 829928472
   1   1   igmp       0     0        0        0        0        0        0
   1   1   rtsock     0     0        0        0        0        0        0
   1   1   arp        0     0        0        0        0        0        0
   1   1   ether      0     0 829928672        0        0        0 829928672
   1   1   ip6        0     0        0        0        0        0        0
   1   1   ip_direct     0     0        0        0        0        0        0
   2   2   ip         0     0        0 835558437        0        0 835558437
   2   2   igmp       0     0        0        0        0        0        0
   2   2   rtsock     0     0        0        0        0        0        0
   2   2   arp        0     0        0        0        0        0        0
   2   2   ether      0     0 835558610        0        0        0 835558610
   2   2   ip6        0     0        0        0        0        0        0
   2   2   ip_direct     0     0        0        0        0        0        0
   3   3   ip         0     1        0 850271162        0       23 850271185
   3   3   igmp       0     0        0        0        0        0        0
   3   3   rtsock     0     0        0        0        0        0        0
   3   3   arp        0     0        0        0        0        0        0
   3   3   ether      0     0 850271163        0        0        0 850271163
   3   3   ip6        0     0        0        0        0        0        0
   3   3   ip_direct     0     0        0        0        0        0        0
   4   4   ip         0    23        0 817439448        0      345 817439793
   4   4   igmp       0     0        0        0        0        0        0
   4   4   rtsock     0     0        0        0        0        0        0
   4   4   arp        0     0        0        0        0        0        0
   4   4   ether      0     0 817439625        0        0        0 817439625
   4   4   ip6        0     0        0        0        0        0        0
   4   4   ip_direct     0     0        0        0        0        0        0
   5   5   ip         0    19        0 817862508        0      332 817862840
   5   5   igmp       0     0        0        0        0        0        0
   5   5   rtsock     0     0        0        0        0        0        0
   5   5   arp        0     0        0        0        0        0        0
   5   5   ether      0     0 817862675        0        0        0 817862675
   5   5   ip6        0     0        0        0        0        0        0
   5   5   ip_direct     0     0        0        0        0        0        0
   6   6   ip         0    19        0 817281399        0      457 817281856
   6   6   igmp       0     0        0        0        0        0        0
   6   6   rtsock     0     0        0        0        0        0        0
   6   6   arp        0     0        0        0        0        0        0
   6   6   ether      0     0 817281665        0        0        0 817281665
   6   6   ip6        0     0        0        0        0        0        0
   6   6   ip_direct     0     0        0        0        0        0        0
   7   7   ip         0     0        0 813562616        0        0 813562616
   7   7   igmp       0     0        0        0        0        0        0
   7   7   rtsock     0     0        0        0        0        0        0
   7   7   arp        0     0        0        0        0        0        0
   7   7   ether      0     0 813562620        0        0        0 813562620
   7   7   ip6        0     0        0        0        0        0        0
   7   7   ip_direct     0     0        0        0        0        0        0
root@abaddon:/home/adrian/git/github/erikarn/freebsd-rss # 

It looks like everything is being dispatched correctly; nothing is being queued and/or dropped.

But yes, we're running out of socket buffers because each core is 100% pinned:

root@abaddon:/home/adrian/git/github/erikarn/freebsd-rss # netstat -sp udp
        6773040390 datagrams received
        0 with incomplete header
        0 with bad data length field
        0 with bad checksum
        0 with no checksum
        17450880 dropped due to no socket
        136 broadcast/multicast datagrams undelivered
        1634117674 dropped due to full socket buffers
        0 not for hashed pcb
        5121471700 delivered
        5121471044 datagrams output
        0 times multicast source filter matched

There's definitely room for improvement.

Getting openconnect & tuntap working on Yosemite OSX

by Dan Langille via Dan Langille's Other Diary »

I upgraded to Yosemite today. It was not without pain. I use openconnect, which in turn uses tuntap. After upgrading, my connection attempts resulted in: Failed to open tun device: No such file or directory Set up tun device failed I was also seeing this in /var/log/system.log: Oct 28 15:39:39[19]: ERROR: invalid signature [...]


by Karsten via »

This day has long been awaited, and now it has arrived. Cisco is sending the "legacy IPS" into retirement. And completely:

Product Migration Options
This end-of-life announcement covers the entire Cisco IPS Family, including all hardware, software, and licenses, with no exceptions. The IPS software also includes management applications: IPS Device Manager (IDM) and IPS Manager Express (IME).
Cisco IPS Appliance 4xxx customers are encouraged to migrate to the Cisco FirePOWER Appliance.
Cisco ASA IPS Module 5xxx customers are encouraged to migrate to Cisco ASA with FirePOWER Services.

The EOS/EOL announcement does not mention the IOS IPS. Since that is integrated into IOS, it would presumably be too much effort to replace it with a FirePOWER implementation.
Update: The current plan is to announce the EOS/EOL of the IOS IPS in about a year.

The complete announcement can be found here:

OS X Yosemite

by Karsten via »

The new OS X version has now been running on one of my Macs for over a week, and I am not just favourably impressed but genuinely surprised. (Almost) everything just works … While with older major upgrades you were better off waiting for the .2 or .3 release, in my opinion you can install this version right away.
A few things worth mentioning about this version:

  • Java has to be reinstalled after the update; for OS X 10.10 only Java 8 is available. My most important Java application, the ASDM (at least in fairly recent versions), works without complaint. For some reason the JRE causes problems, but if you install the JDK everything runs as it should.
  • The Cisco AnyConnect client works (version 3.1.05187 officially supports OS X 10.10). That really surprised me. With the last few OS X updates it seemed to me as if Cisco was regularly caught off guard by the new releases …
  • Shimo (my VPN session manager, which I use for EasyVPN and OpenVPN) works without problems. The same goes for the Fortinet VPN client.
  • The mail plugins from Chungasoft (I have Face2face and ForgetMeNot) run fine, as does Letter Opener Pro from Creative in Austria.
  • VMware Fusion also runs well in version 6, so the upgrade to version 7 is not strictly necessary.
  • Two important productivity tools for me are TotalFinder and Default Folder X. These two system tools also run without problems.
  • So far Yosemite does not feel any slower to use than Mavericks. (Almost) everything runs quite smoothly.
  • Access to volumes with very many folders and files feels worse, though. Under Mountain Lion, access to external disks with thousands of folders and files was still quite fluid; with the move to Mavericks it became noticeably slower, and now with Yosemite the waits are longer still. That is, however, the only negative point I have noticed so far.
  • Another small disappointment: the OpenSSH version shipped with Yosemite is 6.2p2, the same as under Mavericks, so the interesting crypto extensions of newer OpenSSH releases are not available.
  • The GPGMail plugin is unfortunately not yet available for Yosemite either. For Mavericks this plugin was adapted very quickly; for Yosemite the new version is expected in a few weeks. It will then no longer be free but paid. A pity that Apple does not integrate this directly into the Mail application, even if PGP is no longer quite state of the art.
  • Apple has finally closed a security bug that I (and apparently a few others) reported quite a while ago. I was even mentioned in the security advisory for it. With that I could now print "security researcher" on my business card …
The official list of what's new in Yosemite is available from Apple:

The subtlety of modern CPUs, or the search for the phantom bug

by Flameeyes via Flameeyes's Weblog »

Yesterday I released a new version of unpaper, which is now in Portage, even though its dependencies are not exactly straightforward after making it use libav. When I packaged it, I realized that the tests were failing — yet I had been sure to run the tests all the time while making changes, so as not to break the algorithms which (as you may remember) I did not design or write — I don't really have enough math to figure out what's going on with them. I was able to simplify a few things, but I needed Luca's help for the most part.

Turned out that the problem only happened when building with -O2 -march=native so I decided to restrict tests and look into it in the morning again. Indeed, on Excelsior, using -march=native would cause it to fail, but on my laptop (where I have been running the test after every single commit), it would not fail. Why? Furthermore, Luca was also reporting test failures on his laptop with OSX and clang, but I had not tested there to begin with.

A quick inspection of one of the failing tests' outputs with vbindiff showed that the diffs would be quite minimal, one bit off at some non-obvious interval. It smelled like a very minimal change. After complaining on G+, Måns pushed me to the right direction: some instruction set that differs between the two.

My laptop uses the core-avx-i arch, while the server uses bdver1. They have different levels of SSE4 support – AMD having their own SSE4a implementation – and different extensions. I should probably have paid more attention here and noticed that the Bulldozer has FMA4 instructions, but I did not; it'll become important later.

I decided to start disabling extensions in alphabetical order, mostly expecting the problem to be in AMD's implementation of some instructions pending some microcode update. When I disabled AVX, the problem went away — AVX has essentially a new encoding of instructions, so enabling AVX causes all the instructions otherwise present in SSE to be re-encoded, and is a dependency for FMA4 instructions to be usable.

The problem was reducing the code enough to be able to figure out if the problem was a bug in the code, in the compiler, in the CPU or just in the assumptions. Given that unpaper is over five thousands lines of code and comments, I needed to reduce it a lot. Luckily, there are ways around it.

The first step is to look in which part of the code the problem appears. Luckily unpaper is designed with a bunch of functions that run one after the other. I started disabling filters and masks and I was able to limit the problem to the deskewing code — which is when most of the problems happened before.

But even the deskewing code is a lot — and it depends on at least some part of the general processing to be run, including loading the file and converting it to an AVFrame structure. I decided to try to reduce the code to a standalone unit calling into the full deskewing code. But when I copied over and looked at how much code was involved, between the skew detection and the actual rotation, it was still a lot. I decided to start looking with gdb to figure out which of the two halves was misbehaving.

The interface between the two halves is well-defined: the first returns the detected skew, and the latter takes the rotation to apply (the negative of what the first returned) and the image to apply it to. It's easy. A quick look through gdb at the call to rotate() in both a working and a failing setup told me that the returned value from the first half matched perfectly, which is great because it meant that the surface to inspect was heavily reduced.

Since I did not want to have to test all the code that loads the file from disk and decodes it into a RAW representation, I looked into the gdb manual and found the dump command, which allows you to dump part of the process's memory into a file. I dumped the AVFrame::data content and decided to use that as the input. At first I decided to just compile it into the binary (you only need to use xxd -i to generate C code that declares the whole binary file as a byte array), but it turns out that GCC is not designed to efficiently compile a 17MB binary blob passed in as a byte array. I then opted for just opening the raw binary file and fread()ing it into the AVFrame object.

My original plan involved using creduce to find the minimal set of code needed to trigger the problem, but it was tricky, especially when trying to match a complete file output to the md5. I decided to proceed with the reduction manually, starting from all the conditionals for pixel formats that were not exercised… and then I realized that I could again split the code into two operations. Indeed, while the main interface is only rotate(), there were two logical parts of the code in use: one translating the coordinates before and after the rotation, and the interpolation code that reads the old pixels and writes the new ones. This latter part also depended on all the code to set a pixel in place starting from its components.

By writing as output the calls to the interpolation function, I was able to restrict the issue to the coordinate translation code, rather than the interpolation one, which made it much better: the reduced test case went down to a handful of lines:

void rotate(const float radians, AVFrame *source, AVFrame *target) {
    const int w = source->width;
    const int h = source->height;

    // create 2D rotation matrix
    const float sinval = sinf(radians);
    const float cosval = cosf(radians);
    const float midX = w / 2.0f;
    const float midY = h / 2.0f;

    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            const float srcX = midX + (x - midX) * cosval + (y - midY) * sinval;
            const float srcY = midY + (y - midY) * cosval - (x - midX) * sinval;
            externalCall(srcX, srcY);
        }
    }
}
Here externalCall is a simple function to extrapolate the values; the only thing it does is print them on the standard error stream. In this version there is still a reference to the input and output AVFrame objects, but as you can notice there is no usage of them, which means that the testcase is now self-contained and does not require any input or output file.

Much better, but still too much code to go through. The inner loop over x was simple to remove: just hardwire x to zero, and the compiler was still able to reproduce the problem. But if I hardwired y to zero, the compiler would trigger constant propagation and just pre-calculate the right value, whether or not AVX was in use.

At this point I was able to execute creduce; I only needed to check for the first line of the output to match the "incorrect" version, and no input was requested (the radians value was fixed). Unfortunately it turns out that using creduce with loops is not a great idea, because it is well possible for it to reduce away the y++ statement or the y < h comparison for exit, and then you're in trouble. Indeed it got stuck multiple times in infinite loops on my code.

But it did help a little bit to simplify the calculation. And with, again, a lot of help from Måns on making sure that the sinf()/cosf() functions would not return different values – they don't; they are actually collapsed by the compiler into a single call to sincosf(), so you don't have to write ugly code to leverage it! – I brought the code down to

extern void externCall(float);
extern float sinrotation();
extern float cosrotation();

static const float midX = 850.5f;
static const float midY = 1753.5f;

void main() {
    const float srcX = midX * cosrotation() - midY * sinrotation();
    externCall(srcX);
}
No external libraries, not even libm. The external functions are in a separate source file, and besides providing fixed values for the sine and cosine, the externCall() function only calls printf() with the provided value. Oh, if you're curious, the radians parameter became 0.6f, because 0, 1 and 0.5 would not trigger the behaviour, but 0.6, which is the truncated version of the actual parameter coming from the test file, would.

Checking the generated assembly code for the function then pointed out the problem, at least to Måns, who actually knows Intel assembly. Here follows a diff of the code above, built with -march=bdver1 and with -march=bdver1 -mno-fma4 — because it turns out the instruction causing the problem is not an AVX instruction but an FMA4 one; more on that after the diff.

        movq    -8(%rbp), %rax
        xorq    %fs:40, %rax
        jne     .L6
-       vmovss  -20(%rbp), %xmm2
-       vmulss  .LC1(%rip), %xmm0, %xmm0
-       vmulss  .LC0(%rip), %xmm2, %xmm1
+       vmulss  .LC1(%rip), %xmm0, %xmm0
+       vmovss  -20(%rbp), %xmm1
+       vfmsubss        %xmm0, .LC0(%rip), %xmm1, %xmm0
        .cfi_def_cfa 7, 8
-       vsubss  %xmm0, %xmm1, %xmm0
        jmp     externCall@PLT
It's interesting that it's changing the order of the instructions as well as the constants — for this diff I have manually swapped .LC0 and .LC1 on one side, as they would otherwise just end up with different names due to instruction ordering.

As you can see, the FMA4 version has one instruction fewer: vfmsubss replaces one of the vmulss instructions together with the vsubss instruction. vfmsubss is an FMA4 instruction that performs a Fused Multiply and Subtract operation — and midX * cosrotation() - midY * sinrotation() indeed has a multiply and a subtract!

Originally, since I was disabling the whole AVX instruction set, all the vmulss instructions would end up replaced by mulss, which is the SSE version of the same instruction. But once I realized that the instruction with no SSE counterpart was vfmsubss and googled for it, it became obvious that FMA4 was the culprit, not AVX as a whole.

Great, but how does that explain the failure on Luca's laptop? He's not so crazy as to use an AMD laptop — nobody would be! Well, it turns out that Intel also has its own Fused Multiply-Add instruction set, just with three operands rather than four, available starting from Haswell CPUs, which include… Luca's laptop. A quick check on my NUC, which also has a Haswell CPU, confirms that the problem exists for the core-avx2 architecture as well, even though the code diff is slightly less obvious:

        movq    -24(%rbp), %rax
        xorq    %fs:40, %rax
        jne     .L6
-       vmulss  .LC1(%rip), %xmm0, %xmm0
-       vmovd   %ebx, %xmm2
-       vmulss  .LC0(%rip), %xmm2, %xmm1
+       vmulss  .LC1(%rip), %xmm0, %xmm0
+       vmovd   %ebx, %xmm1
+       vfmsub132ss     .LC0(%rip), %xmm0, %xmm1
        addq    $24, %rsp
+       vmovaps %xmm1, %xmm0
        popq    %rbx
-       vsubss  %xmm0, %xmm1, %xmm0
        popq    %rbp
        .cfi_def_cfa 7, 8
Once again I swapped .LC0 and .LC1 afterwards for consistency.

The main difference here is that the fused multiply-subtract instruction is vfmsub132ss, and a vmovaps is involved as well. If I read the documentation correctly, this is because it stores the result in %xmm1 but needs to move it to %xmm0 to pass it to the external function. I'm not enough of an expert to tell whether gcc is doing extra work here.

So why is this instruction causing problems? Well, Måns knew, and pointed out that the result is now more precise, so I should not work around it. Wikipedia, as linked before, also points out why this happens:

A fused multiply–add is a floating-point multiply–add operation performed in one step, with a single rounding. That is, where an unfused multiply–add would compute the product b×c, round it to N significant bits, add the result to a, and round back to N significant bits, a fused multiply–add would compute the entire sum a+b×c to its full precision before rounding the final result down to N significant bits.

Unfortunately this does mean that we can't have bit-exactness of images across CPUs that implement fused operations. Which means my current test harness is not good enough, as it compares the MD5 of the output with the golden output from the original test. My probable next move is to use cmp to count how many bytes differ from the "golden" output (the version built without the optimisations in use), and if the number is low, say less than 1‰, accept it as valid. It's probably not ideal and could lead to further variation in output, but it might be a good start.

Optimally, as I said a long time ago, I would like to use a tool like pdiff to tell whether there are actual changes in the pixels, and to identify things like a 1-pixel translation in any direction, which would be harmless… but until I can figure something out, it'll be an imperfect testsuite anyway.

A huge thanks to Måns for the immense help, without him I wouldn't have figured it out so quickly.

Why is U2F better than OTP?

by Flameeyes via Flameeyes's Weblog »

It is not really obvious to many people how U2F is better than OTP for two-factor authentication; in particular I've seen it compared with full-blown smartcard-based authentication, and I think that's a bad comparison to make.

Indeed, the Security Key is not protected by a PIN, and the NEO-n is designed to be semi-permanently attached to a laptop or desktop. At first this seems pretty insecure, as insecure as storing the authorization straight on the computer, but that's not the case.

But let's start from the target users: the Security Key is not designed to replace the pure-paranoia security devices such as 16Kibit-per-key smartcards, but rather the on-phone or by-SMS OTP two-factor authenticators: those that use Google Authenticator or other open-source implementations, or that are configured to receive codes by SMS.

Why replace those? At first sight they all sound like perfectly good ideas; what's to be gained by replacing them? Well, there is plenty, the first thing being the user-friendliness of this approach. I know it's an overused metaphor, but I do actually judge features by whether my mother would be able to use them or not — she's not a stupid person and can use a computer mostly just fine, but adding any more procedures is something that would frustrate her quite a bit.

So either having to open an application and figure out which of many codes to use at a given time, or having to receive an SMS and then re-type the code, would not be something she'd be happy with. Even more so because she does not have a smartphone, and she does not keep her phone on all the time, as she does not want to be bothered by people. Which makes both the Authenticator and the SMS ways a poor choice — and let's not try to suggest that there are ways to not be available on the phone without turning it off; that would be yet more to learn, about something she does not care about.

Similar to the "phone-is-not-connected" problem, but for me rather than my mother, is the "wrong-country-for-the-phone" problem: I travel a lot, this year aiming for over a hundred days on the road, and there are very few countries in which I keep my Irish phone number available – namely Italy and the UK, where Three is available and I don't pay roaming, when the roaming system works… the last time I was in London the roaming system was not working – in the others, including the US, which is obviously my main destination, I have a local SIM card so I can use data and calls. This means that if my 2FA setup sends an SMS to the Irish number, I won't receive it easily.

Admittedly, an alternative way to do this would be for me to buy a cheap featurephone, so that instead of losing access to that SIM, I can at least receive calls/SMS.

This is not only theoretical. I have been at two conferences already (USENIX LISA 13, and Percona MySQL Conference 2014) where I realized I had locked myself out of my LinkedIn account: the connection came from a completely different country than usual (the US rather than Ireland) and so required reauthentication… but it was configured to send the SMS to my Irish phone, which I had no access to. Given that conferences are exactly where you meet people you may want to look up on LinkedIn, it's quite inconvenient — luckily the authentication on the phone persists.

The authenticator apps are definitely more reliable than that when you travel, but they come with their own set of problems. Besides the incomplete coverage of services (LinkedIn, noted above, for instance does not support authenticator apps), which is going to be a problem for U2F as well, at least at the beginning, neither Google's nor Fedora's authenticator app allows you to take a backup of the private keys used for OTP authentication, which means that when you change your phone you'll have to replace, one by one, the OTP generation parameters. For some services, such as Gandi, there is also no way to get a backup code, so if you happen to lose, break, or reset your phone without first disabling the second-factor auth, you're in trouble.

Then there are a few more technical problems. HOTP, like other OTP implementations, relies on shared state between the generator and the validator: in its case, a counter of how many times a code has been generated. The client increases it with every generation; the server should only increase it after a successful authentication. Even discounting bugs on the server side, a malicious actor whose intent is to lock you out can just generate enough codes on your device that the server will not look far enough ahead to find the valid code.

TOTP instead relies on the synchronization of time between server and generator, which is a much safer assumption. Unfortunately, this also means you have a limited amount of time to type in your code, which is tricky for many people who aren't used to typing quickly — Luca, for instance.

There is one more problem with both implementations: they rely on the user to choose the right entry in the list and to copy the right OTP value. This means you can still phish a user into typing in an OTP and use it to authenticate against the service: 2FA is a protection against third parties gaining access to your account when your password is posted online, rather than a protection against phishing.

U2F helps here, as it lets the browser handshake with the service before providing the current token to authenticate the access. Sure, there might still be gaps in its implementation, and since I have not studied it in depth I'm not going to vouch for it being untouchable, but I trust the people who worked on it and I feel safer with it than I would with a simple OTP.

‘Replay’ Attacks Spoof Chip Card Charges

by BrianKrebs via Krebs on Security »

An odd new pattern of credit card fraud emanating from Brazil and targeting U.S. financial institutions could spell costly trouble for banks that are just beginning to issue customers more secure chip-based credit and debit cards.

Over the past week, at least three U.S. financial institutions reported receiving tens of thousands of dollars in fraudulent credit and debit card transactions coming from Brazil and hitting card accounts stolen in recent retail heists, principally cards compromised as part of the breach at Home Depot.

The most puzzling aspect of these unauthorized charges? They were all submitted through Visa and MasterCard‘s networks as chip-enabled transactions, even though the banks that issued the cards in question haven’t even yet begun sending customers chip-enabled cards.

The most frustrating aspect of these unauthorized charges? They’re far harder for the bank to dispute. Banks usually end up eating the cost of fraud from unauthorized transactions when scammers counterfeit and use stolen credit cards. Even so, a bank may be able to recover some of that loss through dispute mechanisms set up by Visa and MasterCard, as long as the bank can show that the fraud was the result of a breach at a specific merchant (in this case Home Depot).

However, banks are responsible for all of the fraud costs that occur from any fraudulent use of their customers’ chip-enabled credit/debit cards — even fraudulent charges disguised as these pseudo-chip transactions.


The bank I first heard from about this fraud — a small financial institution in New England — battled some $120,000 in fraudulent charges from Brazilian stores in less than two days beginning last week. The bank managed to block $80,000 of those fraudulent charges, but the bank’s processor, which approves incoming transactions when the bank’s core systems are offline, let through the other $40,000. All of the transactions were debit charges, and all came across MasterCard’s network looking to MasterCard like chip transactions without a PIN.

The fraud expert with the New England bank said the institution had decided against reissuing customer cards that were potentially compromised in the five-month breach at Home Depot, mainly because that would mean reissuing a sizable chunk of the bank’s overall card base and because the bank had until that point seen virtually no fraud on the accounts.

“We saw very low penetration rates on our Home Depot cards, so we didn’t do a mass reissue,” the expert said. “And then in one day we matched a month’s worth of fraud on those cards thanks to these charges from Brazil.”

A chip card. Image: First Data
The New England bank initially considered the possibility that the perpetrators had somehow figured out how to clone chip cards and had encoded the cards with their customers’ card data. In theory, however, it should not be possible to easily clone a chip card. Chip cards are synonymous with a standard called EMV (short for Europay, MasterCard and Visa), a global payment system that has already been adopted by every other G20 nation as a more secure alternative to cards that simply store account holder data on a card’s magnetic stripe. EMV cards contain a secure microchip that is designed to make the card very difficult and expensive to counterfeit.

In addition, there are several checks that banks can use to validate the authenticity of chip card transactions. The chip stores encrypted data about the cardholder account, as well as a “cryptogram” that allows banks to tell whether a card or transaction has been modified in any way. The chip also includes an internal counter mechanism that gets incremented with each sequential transaction, so that a duplicate counter value or one that skips ahead may indicate data copying or other fraud to the bank that issued the card.

And this is exactly what has bank fraud fighters scratching their heads: Why would the perpetrators go through all the trouble of taking plain old magnetic stripe cards stolen in the Home Depot breach (and ostensibly purchased in the cybercrime underground) and making those look like EMV transactions? Why wouldn’t the scammers do what fraudsters normally do with this data, which is simply to create counterfeit cards and use the phony cards to buy gift cards and other high-priced merchandise from big box retailers?

More importantly, how were these supposed EMV transactions on non-EMV cards being put through the Visa and MasterCard network as EMV transactions in the first place?

The New England bank said MasterCard initially insisted that the charges were made using physical chip-based cards, but the bank protested that it hadn’t yet issued its customers any chip cards. Furthermore, the bank’s processor hadn’t even yet been certified by MasterCard to handle chip card transactions, so why was MasterCard so sure that the phony transactions were chip-based?


MasterCard did not respond to multiple requests to comment for this story. Visa also declined to comment on the record. But the New England bank told KrebsOnSecurity that in a conversation with MasterCard officials the credit card company said the most likely explanation was that fraudsters were pushing regular magnetic stripe transactions through the card network as EMV purchases using a technique known as a “replay” attack.

According to the bank, MasterCard officials explained that the thieves were probably in control of a payment terminal and had the ability to manipulate data fields for transactions put through that terminal. After capturing traffic from a real EMV-based chip card transaction, the thieves could insert stolen card data into the transaction stream, while modifying the merchant and acquirer bank account on the fly.

Avivah Litan, a fraud analyst with Gartner Inc., said banks in Canada saw the same EMV-spoofing attacks emanating from Brazil several months ago. One of the banks there suffered a fairly large loss, she said, because the bank wasn’t checking the cryptograms or counters on the EMV transactions.

“The [Canadian] bank in this case would take any old cryptogram and they weren’t checking that one-time code because they didn’t have it implemented correctly,” Litan said. “If they saw an EMV transaction and didn’t see the code, they would just authorize the transaction.”

Litan said the fraudsters likely knew that the Canadian bank wasn’t checking the cryptogram and that it wasn’t looking for the dynamic counter code.

“The bad guys knew that if they encoded these as EMV transactions, the banks would loosen other fraud detection controls,” Litan said. “It appears with these attacks that the crooks aren’t breaking the EMV protocol, but taking advantage of bad implementations of it. Doing EMV correctly is hard, and there are lots of ways to break not the cryptography but to mess with the implementation of EMV.”

The thieves also seem to be messing with the transaction codes and other aspects of the EMV transaction stream. Litan said it’s likely that the perpetrators of this attack had their own payment terminals and were somehow able to manipulate the transaction fields in each charge.

“I remember when I went to Brazil a couple of years ago, their biggest problem was merchants were taking point-of-sale systems home, and then running stolen cards through them,” she said. “I’m sure they could rewire them to do whatever they wanted. That was the biggest issue at the time.”

The New England bank shared with this author a list of the fraudulent transactions pushed through by the scammers in Brazil. The bank said MasterCard is currently in the process of checking with the Brazilian merchants to see whether they had physical transactions that matched transactions shown on paper.

In the meantime, it appears that the largest share of those phony transactions were put through using a payment system called Payleven, a mobile payment service popular in Europe and Brazil that is similar in operation to Square. Most of the transactions were for escalating amounts — nearly doubling with each transaction — indicating the fraudsters were putting through debit charges to see how much money they could drain from the compromised accounts.

Litan said attacks like this one illustrate the importance of banks setting up EMV correctly. She noted that while the New England bank was able to flag the apparent EMV transactions as fraudulent in part because it hadn’t yet begun issuing EMV cards, the outcome might be different for a bank that had issued at least some chip cards.

“There’s going to be a lot of confusion when banks roll out EMV, and one thing I’ve learned from clients is how hard it is to implement properly,” Litan said. “A lot of banks will loosen other fraud controls right away, even before they verify that they’ve got EMV implemented correctly. They won’t expect the point-of-sale codes to be manipulated by fraudsters. That’s the irony: We think EMV is going to solve all our card fraud problems, but doing it correctly is going to take a lot longer than we thought. It’s not that easy.”

Packaging for the Yubikey NEO, a frustrating adventure

by Flameeyes via Flameeyes's Weblog »

I have already posted a howto on how to set up the YubiKey NEO and YubiKey NEO-n for U2F, and I promised I would write a bit more on the adventure to get the software packaged in Gentoo.

You have to realize first of all that my relationship with Yubico has not always been straightforward. I had at least once decided against working on the Yubico set of libraries in Gentoo because I could not get hold of a device the way I wanted to use it. But luckily I have now been able to place an order with them (for some two thousand euro) and I have my devices.

But Yubico's code is usually quite well written and designed to be packaged much more easily than most other device-specific middleware, so I cannot complain too much. Indeed, they split and release the different libraries separately, each with its own goal, so that you don't need to wait for enough changes to accumulate before they make a new release. They also actively maintain their code on GitHub, and then push proper make dist releases to their website. They are in many ways a packager's dream company.

But let's get back to the devices themselves. The NEO and NEO-n come with three different interfaces: OTP (old-style YubiKey, just with much longer keys), CCID (smartcard interface) and U2F. By default the devices are configured as OTP only, which I find a bit strange, to be honest. It is also the case that at the moment you cannot enable both U2F and OTP modes, I assume because there is a conflict in how the "touch" interaction behaves; indeed there is a touch-based interaction in CCID mode that gets entirely disabled once you enable either U2F or OTP, but the two can't share it.

What is not obvious from the website is that to enable the U2F (or CCID) modes, you need to use yubikey-neo-manager, an open-source app that can reconfigure the basics of the Yubico devices. So of course I had to package the app for Gentoo, together with its dependencies, which turned out to be two libraries (okay, actually three, but the third one, sys-auth/ykpers, was already packaged in Gentoo — and originally committed by me, with Brant proxy-maintaining it; the world is small, sometimes). It was not too bad, but there were a few things that might be worth noting down.

First of all, I had to deal with dev-libs/hidapi, which allows programmatic access to raw HID USB devices: the ebuild failed for me, both because it did not depend on udev and because it was unable to find the libusb headers — which turned out to be caused by bashisms in the file, and became obvious as I moved to dash. I have now fixed the ebuild and sent a pull request upstream.

This was the only really hard part at first, since the rest of the ebuilds, for app-crypt/libykneomgr and app-crypt/yubikey-neo-manager, were mostly straightforward — only I had to figure out how to install a Python package, as I had never done so before. It's actually fun how distutils will error out with a violation of install paths if easy_install tries to bring in a non-installed package such as nose, way before the Portage sandbox triggers.

The problems started when trying to use the programs, doubly so because I don't keep a copy of the Gentoo tree on the laptop, so I wrote the ebuilds on the headless server and then tried to run them on the actual hardware. First of all, you need to have access to the devices to be able to set them up; the libu2f-host package will install udev rules to allow the plugdev group access to the hidraw devices — but it also needed a pull request to fix them. I also added an alternative version of the rules for systemd users that does not rely on the group but rather uses the ACL support (I was surprised; I essentially suggested the same approach to replace pam_console years ago!)

Unfortunately that only works once the device is already set in U2F mode, which does not work when you're setting up the NEO for the first time, so I originally set it up using kdesu. I have since decided that the better way is to use the udev rules I posted in my howto post.

After this, I switched off OTP and enabled the U2F and CCID interfaces on the device — and I couldn't make it stick: the manager would keep telling me that the CCID interface was disabled, even though the USB descriptor properly called it "Yubikey NEO U2F+CCID". It took me a while to figure out that the problem was in the app-crypt/ccid driver, and indeed the change log for the latest version points out support specifically for the U2F+CCID device.

I have updated the ebuilds afterwards, not only to depend on the right version of the CCID driver – the README for libykneomgr does tell you to install pcsc-lite but not about the CCID driver you need – but also to check for the HIDRAW kernel driver, as otherwise you won't be able to either configure or use the U2F device for non-Google domains.

Now there is one more part of the story that needs to be told, but in a different post: getting GnuPG to work with the OpenPGP applet on the NEO-n. It was not as straightforward as it could have been, and it did lead to disappointment. It'll be a good post for next week.

Setting up Yubikey NEO and U2F on Gentoo (and Linux in general)

by Flameeyes via Flameeyes's Weblog »

When the Google Online Security blog announced earlier this week the general availability of Security Key, everybody at the office was thrilled, as we had been waiting for this day for a while. I've been using these devices for a while already, and my hope is for them to be easy enough for my mother and my sister, as well as my friends, to start using.

While the promise is for a hassle-free second factor authenticator, it turns out it might not be as simple as originally intended, at least on Linux, at least right now.

Let's start with the hardware, as there are four different options of hardware that you can choose from:

  • Yubico FIDO U2F which is a simple option only supporting the U2F protocol, no configuration needed;
  • Plug-up FIDO U2F which is a cheaper alternative with the same features — I have not seen whether it is as sturdy as the Yubico one, so I can't vouch for it;
  • Yubikey NEO which provides multiple interfaces, including OTP (not usable together with U2F), OpenPGP and NFC;
  • Yubikey NEO-n the same as above, without NFC, and in a very tiny form factor designed to be left semi-permanently in a computer or laptop.
I got the NEO, but mostly to be used with LastPass – the NFC support allows you to have 2FA on the phone without having to type it back from a computer – and a NEO-n to leave installed in one of my computers. I already had a NEO from work to use as well. The NEO requires configuration, so I'll get back to it in a moment.

The U2F devices are accessible via hidraw, a driverless access protocol for USB devices, originally intended for devices such as keyboards and mice but also leveraged by UPSes. What happens, though, is that you need access to the device node, which the Linux kernel by default makes accessible only to root, for good reasons.

To make the device accessible to you, the user actually at the keyboard of the computer, you have to use udev rules, and those are, as always, not straightforward. My personal hacky choice is to make all the Yubico devices accessible — the main reason being that I don't know all of the compatible USB Product IDs, as some of them are not really available to buy but come, for instance, from developer-mode devices that I may or may not end up using.

If you're using systemd with device ACLs (in Gentoo, that would be sys-apps/systemd with acl USE flag enabled), you can do it with a file as follows:

# /etc/udev/rules.d/90-u2f-securitykey.rules
ATTRS{idVendor}=="1050", TAG+="uaccess"
ATTRS{idVendor}=="2581", ATTRS{idProduct}=="f1d0", TAG+="uaccess"
If you're not using systemd or ACLs, you can use the plugdev group and instead do it this way:

# /etc/udev/rules.d/90-u2f-securitykey.rules
ATTRS{idVendor}=="1050", GROUP="plugdev", MODE="0660"
ATTRS{idVendor}=="2581", ATTRS{idProduct}=="f1d0", GROUP="plugdev", MODE="0660"
-These rules do not include support for the Plug-up because I have no idea what their VID/PID pairs are, I asked Janne who got one so I can amend this later.- Edit: added the rules for the Plug-up device. Cute, their use of f1d0 as the device ID.

Also note that there are probably less hacky solutions to get the ownership of the devices right, but I'll leave it to the systemd devs to figure out how to include them in the default ruleset.

These rules will not only allow your user to access /dev/hidraw0 but also the /dev/bus/usb/* devices. This is intentional: Chrome (and Chromium, the open-source version, works as well) uses the U2F devices in two different modes: one is through a built-in extension that works with Google assets and accesses the low-level device as /dev/bus/usb/*; the other is through a Chrome extension which uses /dev/hidraw* and is meant to be used by all websites. The latter follows the actual standardized specification and is how you're supposed to use it right now. I don't know if the former workflow is going to be deprecated at some point, but I wouldn't be surprised.

For those like me who bought the NEO devices, you'll have to enable the U2F mode — while Yubico provides the linked step-by-step guide, it was not entirely correct for me on Gentoo, but things should be less complicated now: I packaged the app-crypt/yubikey-neo-manager app, which already brings in all the necessary software, including the latest version of app-crypt/ccid, required to use the CCID interface on U2F-enabled NEOs. And if you have already created the udev rules file as noted above, it'll work without root privileges. Just remember that if you are interested in the OpenPGP support, you'll need the pcscd service (it should auto-start with both OpenRC and systemd anyway).

I'll recount the issues with packaging the software separately. In the meantime, make sure you keep your accounts safe, and let's all hope that more sites will start protecting your accounts with U2F — I'll also write a separate opinion piece on why U2F is important and why it is better than OTP; this is just meant as documentation, a howto for setting up the U2F devices on your Linux systems.


by Jesco Freund via My Universe »

Purely out of curiosity, I analyzed which protocols are actually used to access my mirror server. Here is the result:
I looked at a period of roughly two weeks, counting only actual file accesses — redirects and errors were not included in the evaluation.

The vast majority of accesses still happen over IPv4 — even among nerds, IPv6 does not yet seem to be particularly popular. Sure, native IPv6 straight from the DSL provider is still the exception in many places. But I would have expected a higher share of users of IPv6 tunnel services such as SixXS.

While the small IPv6 share was not entirely unexpected, I initially found the rather small share of HTTPS users shocking. Downloading packages over an unsecured connection saves a small amount of computing power, but it also makes it easier for an attacker to slip in manipulated packages. Or does it…?

Granted, the higher the share of encrypted connections on the Internet, ideally with PFS, the harder it becomes to tamper with software distributed over the net. But what does the situation look like for Arch Linux specifically? Let's take a look at all the places where an attacker could step in to put manipulated packages (e.g. with a built-in backdoor) into circulation.

At the source

A theoretical possibility of course already exists upstream, i.e. before software even enters a distribution's sphere of responsibility. However, this is far less trivial than it appears at first glance. One would have to concentrate on software that is part of the basic toolkit almost everywhere — such as glibc, bash, OpenSSL, or the Linux kernel itself.

As a rule, though, that means having to infiltrate a group of developers — a lengthy and resource-intensive undertaking with an uncertain outcome. A whole series of additional factors is decisive for the success of such an operation:

  • The infiltrator must earn the trust of the other developers (e.g. in order to gain commit rights)
  • The software's development process must not include public peer review (private peer review can be countered with a larger number of infiltrators)
  • The development model must not rely on a single central committer (as is the case with the Linux kernel, for example)
  • The software itself must not be subject to overly intense scrutiny by third parties (as OpenSSL is right now, for example)
  • The software must not be modified too heavily by the distributors through their own patches, as happens with the Linux kernel.
Sure, your own backdoor in a central component of many Linux systems is an extremely rewarding target (for cyber criminals as well as for intelligence agencies and law enforcement), but it comes with so much effort and risk of discovery that there are simpler and, above all, cheaper routes.


Einfacher dürfte es da schon sein, bei der Paketierung von Software für eine bestimmte Distribution manipulierend einzugreifen. Code Reviews für von Maintainern erstellten Patches sind eher die Ausnahme als die Regel, und das Risiko einer Entdeckung ist hier geringer — schließlich haben es Maintainer mit einer Vielzahl an Paketen zu tun, so dass Spezialwissen über die innere Struktur einer bestimmten Software eher die Ausnahme als die Regel darstellt.

Still, the problem remains that an existing community has to be infiltrated by an "agent"; here too a lengthy and resource-intensive affair, especially since the reach of such an attack is already reduced to a fraction of what an upstream manipulation could achieve.


Manipulating software on the distribution path is more of a technical problem and does not require the lengthy subversion of a particular group of people. Both the distribution of software packages to the various mirror servers and the transfer of the software to the end user can be attacked.

On Arch Linux, mirror servers are synchronized via the rsync protocol, and thus unfortunately unencrypted. While efficient, this opens the door to man-in-the-middle attacks. Other projects such as FreeBSD also synchronize their mirrors with rsync, but use the variant tunneled through SSH.

When it comes to delivery to the end user, Arch Linux leaves the choice to the user: many mirrors offer both HTTP and HTTPS, so it is ultimately up to each individual which type of connection is used.
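Since the choice of transport is left to the user, it can also be enforced mechanically. A small Python sketch (hypothetical mirror URLs; standard pacman mirrorlist "Server =" syntax) that keeps only the active HTTPS entries of a mirrorlist:

```python
# Sketch: prefer HTTPS mirrors in a pacman-style mirrorlist.
# The URLs below are hypothetical; the line format matches
# /etc/pacman.d/mirrorlist on Arch Linux.

MIRRORLIST = """\
## Germany
#Server = http://mirror.example.org/archlinux/$repo/os/$arch
Server = http://ftp.example.net/pub/archlinux/$repo/os/$arch
Server = https://secure.example.com/archlinux/$repo/os/$arch
"""

def https_servers(mirrorlist: str) -> list[str]:
    """Return the active (uncommented) Server entries that use HTTPS."""
    servers = []
    for line in mirrorlist.splitlines():
        line = line.strip()
        if line.startswith("Server") and "=" in line:
            url = line.split("=", 1)[1].strip()
            if url.startswith("https://"):
                servers.append(url)
    return servers

print(https_servers(MIRRORLIST))
```

Filtering like this only hardens the last hop, of course; it does nothing about the unencrypted mirror-to-mirror synchronization discussed above.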

So what does that mean?

As an end user, you can hardly protect yourself against attacks at the upstream and packaging level. For criminals, however, the comparatively high effort is likely to be a deterrent, while for law enforcement and intelligence agencies this method is presumably not targeted enough to infiltrate the computer of a specific person of interest (which, in times of Big Data, does not mean they won't do it anyway).

Attacks on the distribution path are easier to carry out, and the closer to the end user the attack takes place, the more precisely it can be aimed at specific individuals. You can protect yourself against this with suitable transport encryption, but then please on every leg of the journey a software package travels. Here the Arch Linux infrastructure team still has significant catching up to do.

Using HTTPS between mirror and end user therefore currently only protects against a targeted attack on a specific person; anyone who wants more reach simply attacks the synchronization between the mirror servers and thereby also catches the end users who believe themselves safe because they use HTTPS. I would still recommend using HTTPS, since it at least makes one form of attack harder and at the same time increases the share of encrypted traffic on the Internet.

Finally, package signatures remain as a last line of defense; about two years ago Arch Linux caught up with Debian & Co and equipped Pacman with the corresponding functionality. Fortunately, Arch Linux does not use the mirror servers themselves to distribute the keys, but relies on (public) key servers and the HKP protocol. The user, however, still bears the important responsibility of accepting new keys only after they have been thoroughly verified.

PC-BSD 10.1-RC1 Released

by dru via Official PC-BSD Blog »

The PC-BSD team is pleased to announce the availability of RC1 images for the upcoming PC-BSD 10.1 release.

PC-BSD Notable Changes

* KDE 4.14.2
* GNOME 3.12.2
* Cinnamon 2.2.16
* Chromium 38.0.2125.104_1
* Firefox 33.0
* NVIDIA Driver 340.24
* Lumina desktop 0.7.0-beta
* Pkg 1.3.8_3
* New AppCafe HTML5 web/remote interface, for both desktop / server usage
* New CD-sized text-installer ISO files for TrueOS / server deployments
* New CentOS 6.5 Linux emulation base
* New HostAP mode for Wifi GUI utilities
* Misc bug fixes and other stability improvements


Along with our traditional PC-BSD DVD ISO image, we have also created a CD-sized ISO image of TrueOS, our server edition.

This is a text-based installer which includes FreeBSD 10.0-Release under the hood. It includes the following features:

* ZFS on Root installation
* Boot-Environment support
* Command-Line versions of PC-BSD utilities, such as Warden, Life-Preserver and more.
* Support for full-disk (GELI) encryption without an unencrypted /boot partition


A testing update is available for 10.0.3 users to upgrade to 10.1-RC1. To apply this update, do the following:

As root, edit /usr/local/share/pcbsd/pc-updatemanager/conf/sysupdate.conf and change

“PATCHSET: updates”

to

“PATCHSET: test-updates”

Then run:

% sudo pc-updatemanager check

This should show you a new “Update system to 10.1-RELEASE” patch available. To install run the following:

% sudo pc-updatemanager install 10.1-update-10152014-10


As with any major system upgrade, please back up important data and files beforehand!

This update will automatically reboot your system several times during the various upgrade phases; expect it to take between 30 and 60 minutes.

Getting media

10.1-RC1 DVD/USB media can be downloaded from here via HTTP or Torrent.

Reporting Bugs

Found a bug in 10.1? Please report it (with as much detail as possible) to our bugs database.

Gone to the Dogs

by Jesco Freund via My Universe »

The latest annoyance around SSL goes by the name POODLE, and for once it is not an implementation accident (as Heartbleed was), but a design flaw in SSLv3 that comes into play when the cipher block chaining (CBC) mode is used for symmetric encryption.

SSLv3 has long ceased to be a current protocol, but many servers still support it so as not to lock out older clients such as Internet Explorer 6 or older mobile clients entirely. In doing so, however, they make themselves equally vulnerable to a man-in-the-middle attack in which an attacker forces the use of an old protocol version and a vulnerable cipher suite.
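That forced downgrade to SSLv3 is exactly what a client or server can refuse by configuration. As a minimal Python sketch (my illustration, not from the original post), a client context pinned to TLS 1.0 or newer:

```python
# Sketch: a TLS client context that will not negotiate SSLv3 (or SSLv2),
# mirroring the post-POODLE advice of allowing only TLS 1.0 and newer.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # SSLv2/SSLv3 already excluded
ctx.minimum_version = ssl.TLSVersion.TLSv1     # make the protocol floor explicit

print("minimum version:", ctx.minimum_version.name)
```

A man-in-the-middle who tears down handshakes to force a retry with SSLv3 gets nowhere against such a context: the library simply has no SSLv3 to fall back to.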

Ivan Ristic rates POODLE as considerably less critical than Heartbleed. Compared to the BEAST attack, however, which targets a similar weakness, Ristic considers the new hole significantly more critical, since exploiting it requires fewer preconditions: JavaScript enabled in the victim's browser is all it takes if the server does not explicitly forbid SSLv3.

For me, POODLE was the impetus to rework my SSL configuration once again. Unfortunately, very old clients now have to stay outside (this also applies to the odd bot); in return, users of modern browsers can now be reasonably sure that the connection stays private and in most cases also offers forward secrecy.

I have left TLS 1.0 enabled, though (and accepted a 5% deduction in the "Protocol Support" score for it), since quite a few clients still do not support TLS 1.1 and 1.2, or not cleanly; among them Internet Explorer up to and including version 10, Android up to and including 4.3, Java 6 and Java 7, the Bing and Google bots, …
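The post does not show the actual server configuration; on nginx, a setup matching the described trade-off (SSLv3 gone, TLS 1.0 kept for compatibility, forward-secrecy ciphers preferred) might look roughly like this hypothetical fragment:

```
# Hypothetical nginx TLS settings matching the described policy:
# drop SSLv3 (POODLE), keep TLS 1.0 for older clients,
# put ECDHE (forward secrecy) suites first, ban RC4/MD5/anonymous ciphers.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-SHA:!aNULL:!MD5:!RC4";
ssl_prefer_server_ciphers on;
```

Simply omitting SSLv3 from ssl_protocols is the whole POODLE fix on the server side; the cipher list is what trades SSLLabs points against client reach.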

It is a similar story with the cipher suites: restrictions that would earn you 100% in the SSLLabs rating currently lock out a number of clients, including Java 7 and 8 as well as various search-engine bots. Current browsers are not a problem, but some feed readers would be locked out as well, and that is the last thing you want for a blog…

Tor-ramdisk 20141022 released

by blueness via Anthony G. Basile »

Following the latest and greatest exploit in openssl, CVE-2014-3566, aka the POODLE issue, the tor team released a new version. For those of you not familiar, tor is a system for online anonymity which encrypts and bounces your traffic through relays so as to obfuscate its origin.  Back in 2008, I started a uClibc-based micro Linux distribution, called tor-ramdisk, whose only purpose is to host a tor relay in a hardened Gentoo environment purely in RAM.

While the POODLE bug is an openssl issue and is resolved by the latest release 1.0.1j, the tor team decided to turn off the affected protocol, SSLv3, and require TLS 1.0 or later.  They also fixed tor to avoid a crash when built using openssl 0.9.8zc, 1.0.0o, or 1.0.1j with the ‘no-ssl3’ configuration option.  These important fixes to two major components of tor-ramdisk warranted a new release.  Take a look at the upstream ChangeLog for more information.

Since I was upgrading stuff, I also upgraded the kernel to vanilla 3.17.1 + Gentoo’s hardened-patches-3.17.1-1.extras.  All the other components remain the same as in the previous release.



‘Spam Nation’ Publisher Discloses Card Breach

by BrianKrebs via Krebs on Security »

In the interests of full disclosure: Sourcebooks – the company that on Nov. 18 is publishing my upcoming book about organized cybercrime — disclosed last week that a breach of its Web site shopping cart software may have exposed customer credit card and personal information.

Fortunately, this breach does not affect readers who have pre-ordered Spam Nation through the retailers I’ve been recommending — Amazon, Barnes & Noble, and Politics & Prose.  I mention this breach mainly to get out in front of it, and because of the irony and timing of this unfortunate incident.

From Sourcebooks’ disclosure (PDF) with the California Attorney General’s office:

“Sourcebooks recently learned that there was a breach of the shopping cart software that supports several of our websites on April 16, 2014 – June 19, 2014 and unauthorized parties were able to gain access to customer credit card information. The credit card information included card number, expiration date, cardholder name and card verification value (CVV2). The billing account information included first name, last name, email address, phone number, and address. In some cases, shipping information was included as first name, last name, phone number, and address. In some cases, account password was obtained too. To our knowledge, the data accessed did not include any Track Data, PIN Number, Printed Card Verification Data (CVD). We are currently in the process of having a third-party forensic audit done to determine the extent of this breach.”

So again, if you have pre-ordered the book from somewhere other than Sourcebooks’ site (and that is probably 99.9999 percent of you who have already pre-ordered), you are unaffected.

I think there are some hard but important lessons here about the wisdom of smaller online merchants handling credit card transactions. According to Sourcebooks founder Dominique Raccah, the breach affected approximately 5,100 people who ordered from the company’s Web site between mid-April and mid-June of this year. Raccah said the breach occurred after hackers found a security vulnerability in the site’s shopping cart software.

Experts say tens of thousands of businesses that rely on shopping cart software are a major target for malicious hackers, mainly because shopping cart software is generally hard to do well.

“Shopping cart software is extremely complicated and tricky to get right from a security perspective,” said Jeremiah Grossman, founder and chief technology officer for WhiteHat Security, a company that gets paid to test the security of Web sites.  “In fact, no one in my experience gets it right their first time out. That software must undergo serious battlefield testing.”

Grossman suggests that smaller merchants consider outsourcing the handling of credit cards to a solid and reputable third-party. Sourcebooks’ Raccah said the company is in the process of doing just that.

“Make securing credit cards someone else’s problem,” Grossman said. “Yes, you take a little bit of a margin hit, but in contrast to the effort of do-it-yourself [approaches] and breach costs, it’s worth it.”

What’s more, as an increasing number of banks begin issuing more secure chip-based cards  — and by extension more main street merchants in the United States make the switch to requiring chip cards at checkout counters — fraudsters will begin to focus more of their attention on attacking online stores. The United States is the last of the G20 nations to move to chip cards, and in virtually every country that’s made the transition the fraud on credit cards didn’t go away, it just went somewhere else. And that somewhere else in each case manifested itself as increased attacks against e-commerce merchants.

If you haven’t pre-ordered Spam Nation yet, remember that all pre-ordered copies will ship signed by Yours Truly. Also, the first 1,000 customers to order two or more copies of the book (including any combination of digital, audio or print editions) will also get a Krebs On Security-branded ZeusGard. So far, approximately 400 readers have taken us up on this offer! Please make sure that if you do pre-order, that you forward a proof-of-purchase (receipt, screen shot of your Kindle order, etc.) to

Pre-order two or more copies of Spam Nation and get this “Krebs Edition” branded ZeusGard.

FreeBSD 10.1-RC3 Available

by Webmaster Team via FreeBSD News Flash »

The third RC build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.

Getting snmpwalk to talk to snmpd on FreeBSD

by Dan Langille via Dan Langille's Other Diary »

Contrary to all the examples I found, it’s not easy to get snmpwalk to communicate with snmpd. I am using the net-mgmt/net-snmp port with the default configuration options. It was installed with: pkg install net-mgmt/net-snmp. This is the minimal configuration file, which should be placed at /usr/local/etc/snmp/snmpd.conf: rocommunity public. When starting snmpd for the *first* [...]
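The minimal setup described in the excerpt can be sketched as a config-plus-query pair (local host and the default v2c community are assumptions on my part; treat this as an unverified sketch):

```
# /usr/local/etc/snmp/snmpd.conf -- minimal read-only configuration
rocommunity public

# with snmpd running, a local walk would then be queried as:
#   snmpwalk -v 2c -c public localhost
```

The rocommunity directive grants read-only access for the given community string; "public" is conventional for testing and should be changed for anything reachable from outside.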

Google Accounts Now Support Security Keys

by BrianKrebs via Krebs on Security »

People who use Gmail and other Google services now have an extra layer of security available when logging into Google accounts. The company today incorporated into these services the open Universal 2nd Factor (U2F) standard, a physical USB-based second factor sign-in component that only works after verifying the login site is truly a Google site.

A $17 U2F device made by Yubico.

The U2F standard (PDF) is a product of the FIDO (Fast IDentity Online) Alliance, an industry consortium that’s been working to come up with specifications that support a range of more robust authentication technologies, including biometric identifiers and USB security tokens.

The approach announced by Google today essentially offers a more secure way of using the company’s 2-step authentication process. For several years, Google has offered an approach that it calls “2-step verification,” which sends a one-time pass code to the user’s mobile or land line phone.

2-step verification makes it so that even if thieves manage to steal your password, they still need access to your mobile or land line phone if they’re trying to log in with your credentials from a device that Google has not previously seen associated with your account. As Google notes in a support document, security key “offers better protection against this kind of attack, because it uses cryptography instead of verification codes and automatically works only with the website it’s supposed to work with.”

Unlike a one-time token approach, the security key does not rely on mobile phones (so no batteries needed), but the downside is that it doesn’t work for mobile-only users because it requires a USB port. Also, the security key doesn’t work for Google properties on anything other than Chrome.

The move comes a day after Apple launched its Apple Pay platform, a wireless payment system that takes advantage of the near-field communication (NFC) technology built into the new iPhone 6, which allows users to pay for stuff at participating merchants merely by tapping the phone on the store’s payment terminal.

I find it remarkable that Google, Apple and other major tech companies continue to offer more secure and robust authentication options than are currently available to consumers by their financial institutions. I, for one, will be glad to see Apple, Google or any other legitimate player give the entire mag-stripe based payment infrastructure a run for its money. They could hardly do worse.

Soon enough, government Web sites may also offer consumers more authentication options than many financial sites.  An Executive Order announced last Friday by The White House requires the National Security Council Staff, the Office of Science and Technology Policy and the Office of Management and Budget (OMB) to submit a plan to ensure that all agencies making personal data accessible to citizens through digital applications implement multiple layers of identity assurance, including multi-factor authentication. Verizon Enterprise has a good post with additional details of this announcement.

KDEConnect in PC-BSD — Remote Control Your Desktop From Your Android Phone

by Josh Smith via Official PC-BSD Blog »

Hey guys check out our video on KDEConnect in PC-BSD on YouTube!  It’s an awesome new app that allows you to receive text messages, phone notifications, incoming calls notifications, media remote control, and more!

Banks: Credit Card Breach at Staples Stores

by BrianKrebs via Krebs on Security »

Multiple banks say they have identified a pattern of credit and debit card fraud suggesting that several Staples Inc. office supply locations in the Northeastern United States are currently dealing with a data breach. Staples says it is investigating “a potential issue” and has contacted law enforcement.

According to more than a half-dozen sources at banks operating on the East Coast, it appears likely that fraudsters have succeeded in stealing customer card data from some subset of Staples locations, including seven Staples stores in Pennsylvania, at least three in New York City, and another in New Jersey.

Framingham, Mass.-based Staples has more than 1,800 stores nationwide, but so far the banks contacted by this reporter have traced a pattern of fraudulent transactions on a group of cards that had all previously been used at a small number of Staples locations in the Northeast.

The fraudulent charges occurred at other (non-Staples) businesses, such as supermarkets and other big-box retailers. This suggests that the cash registers in at least some Staples locations may have fallen victim to card-stealing malware that lets thieves create counterfeit copies of cards that customers swipe at compromised payment terminals.

Asked about the banks’ claims, Staples’s Senior Public Relations Manager Mark Cautela confirmed that Staples is in the process of investigating a “potential issue involving credit card data and has contacted law enforcement.”

“We take the protection of customer information very seriously, and are working to resolve the situation,” Cautela said. “If Staples discovers an issue, it is important to note that customers are not responsible for any fraudulent activity on their credit cards that is reported on [in] a timely basis.”  

Spike in Malware Attacks on Aging ATMs

by BrianKrebs via Krebs on Security »

This author has long been fascinated with ATM skimmers, custom-made fraud devices designed to steal card data and PINs from unsuspecting users of compromised cash machines. But a recent spike in malicious software capable of infecting and jackpotting ATMs is shifting the focus away from innovative, high-tech skimming devices toward the rapidly aging ATM infrastructure in the United States and abroad.

Last month, media outlets in Malaysia reported that organized crime gangs had stolen the equivalent of about USD $1 million with the help of malware they’d installed on at least 18 ATMs across the country. Several stories about the Malaysian attack mention that the ATMs involved were all made by ATM giant NCR. To learn more about how these attacks are impacting banks and the ATM makers, I reached out to Owen Wild, NCR’s global marketing director, security compliance solutions.

Wild said ATM malware is here to stay and is on the rise.

BK: I have to say that if I’m a thief, injecting malware to jackpot an ATM is pretty money. What do you make of reports that these ATM malware thieves in Malaysia were all knocking over NCR machines?

OW: The trend toward these new forms of software-based attacks is occurring industry-wide. It’s occurring on ATMs from every manufacturer, multiple model lines, and is not something that is endemic to NCR systems. In this particular situation for the [Malaysian] customer that was impacted, it happened to be an attack on a Persona series of NCR ATMs. These are older models. We introduced a new product line for new orders seven years ago, so the newest Persona is seven years old.

BK: How many of your customers are still using this older model?

OW: Probably about half the install base is still on Personas.

BK: Wow. So, what are some of the common trends or weaknesses that fraudsters are exploiting that let them plant malware on these machines? I read somewhere that the crooks were able to insert CDs and USB sticks in the ATMs to upload the malware, and they were able to do this by peeling off the top of the ATMs or by drilling into the facade in front of the ATM. CD-ROM and USB drive bays seem like extraordinarily insecure features to have available on any customer-accessible portions of an ATM.

OW: What we’re finding is these types of attacks are occurring on standalone, unattended types of units where there is much easier access to the top of the box than you would normally find in the wall-mounted or attended models.

BK: Unattended….meaning they’re not inside of a bank or part of a structure, but stand-alone systems off by themselves.

OW: Correct.

BK: It seems like the other big factor with ATM-based malware is that so many of these cash machines are still running Windows XP, no?

This new malware, detected by Kaspersky Lab as Backdoor.MSIL.Tyupkin, affects ATMs from a major ATM manufacturer running Microsoft Windows 32-bit.

OW: Right now, that’s not a major factor. It is certainly something that has to be considered by ATM operators in making their migration move to newer systems. Microsoft discontinued updates and security patching on Windows XP, with very expensive exceptions. Where it becomes an issue for ATM operators is that maintaining Payment Card Industry (credit and debit card security standards) compliance requires that the ATM operator be running an operating system that receives ongoing security updates. So, while many ATM operators certainly have compliance issues, to this point we have not seen the operating system come into play.

BK: Really?

OW: Yes. If anything, the operating systems are being bypassed or manipulated with the software as a result of that.

BK: Wait a second. The media reports to date have observed that most of these ATM malware attacks were going after weaknesses in Windows XP?

OW: It goes deeper than that. Most of these attacks come down to two different ways of jackpotting the ATM. The first is what we call “black box” attacks, where some form of electronic device is hooked up to the ATM — basically bypassing the infrastructure in the processing of the ATM and sending an unauthorized cash dispense code to the ATM. That was the first wave of attacks we saw that started very slowly in 2012, went quiet for a while and then became active again in 2013.

The second type that we’re now seeing more of is attacks that start with the introduction of malware into the machine, and that kind of attack is a little less technical to get on the older machines if protective mechanisms aren’t in place.

BK: What sort of protective mechanisms, aside from physically securing the ATM?

OW: If you work on the configuration setting…for instance, if you lock down the BIOS of the ATM to eliminate its capability to boot from USB or CD drive, that gets you about as far as you can go. In high risk areas, these are the sorts of steps that can be taken to reduce risks.

BK: Seems like a challenge communicating this to your customers who aren’t anxious to spend a lot of money upgrading their ATM infrastructure.

OW: Most of these recommendations and requirements have to be considerate of the customer environment. We make sure we’ve given them the best guidance we can, but at end of the day our customers are going to decide how to approach this.

BK: You mentioned black-box attacks earlier. Is there one particular threat or weakness that makes this type of attack possible? One recent story on ATM malware suggested that the attackers may have been aided by the availability of ATM manuals online for certain older models.

OW: The ATM technology infrastructure is all designed on multivendor capability. You don’t have to be an ATM expert or have inside knowledge to generate or code malware for ATMs. Which is what makes the deployment of preventative measures so important. What we’re faced with as an industry is a combination of vulnerability on aging ATMs that were built and designed at a point where the threats and risk were not as great.

According to security firm F-Secure, the malware used in the Malaysian attacks was “PadPin,” a family of malicious software first identified by Symantec. Also, Russian antivirus firm Kaspersky has done some smashing research on a prevalent strain of ATM malware that it calls “Tyupkin.” Their write-up on it is here, and the video below shows the malware in action on a test ATM.

In a report published this month, the European ATM Security Team (EAST) said it tracked at least 20 incidents involving ATM jackpotting with malware in the first half of this year. “These were ‘cash out’ or ‘jackpotting’ attacks and all occurred on the same ATM type from a single ATM deployer in one country,” EAST Director Lachlan Gunn wrote. “While many ATM Malware attacks have been seen over the past few years in Russia, Ukraine and parts of Latin America, this is the first time that such attacks have been reported in Western Europe. This is a worrying new development for the industry in Europe.”

Card skimming incidents fell by 21 percent compared to the same period in 2013, while overall ATM-related fraud losses of €132 million (~USD $158 million) were reported, up 7 percent from the same time last year.

How do I run ~arch Perl on a stable Gentoo system?

by Andreas via the dilfridge blog »

Here's a small piece of advice for everyone who wants to upgrade their Perl to the very newest available while still running an otherwise stable Gentoo installation: these three lines are exactly what needs to go into /etc/portage/package.keywords:
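The three lines themselves are missing from this excerpt; a plausible reconstruction, assuming the usual Gentoo Perl package sets (treat this as an unverified sketch, not the post's verbatim content):

```
# /etc/portage/package.keywords -- accept ~arch (testing) keywords
# for the Perl interpreter and its split modules (hypothetical sketch)
dev-lang/perl
virtual/perl-*
perl-core/*
```

An atom listed without an explicit keyword accepts the ~arch keyword for your architecture, so these entries pull in testing Perl packages while the rest of the system stays stable.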
Of course, as always, bugs may be present; what you get as a Perl installation is called unstable or testing for a reason. We're looking forward to your reports on our bugzilla.

VC4 driver status update

by Eric Anholt via anholt's lj »

I've just spent another week hanging out with my Broadcom and Raspberry Pi teammates, and it's unblocked a lot of my work.

Notably, I learned some unstated rules about how loading and storing from the tilebuffer work, which has significantly improved stability on the Pi (as opposed to simulation, which only asserts on about half of these rules).

I got an intro on the debug process for GPU hangs, which ultimately just looks like "run it through simpenrose (the simulator) directly. If that doesn't catch the problem, you capture a .CLIF file of all the buffers involved and feed it into RTL simulation, at which point you can confirm for yourself that yes, it's hanging, and then you hand it to somebody who understands the RTL and they tell you what the deal is." There's also the opportunity to use JTAG to look at the GPU's perspective of memory, which might be useful for some classes of problems. I've started on .CLIF generation (currently simulation-environment-only), but I've got some bugs in my generated files because I'm using packets that the .CLIF generator wasn't prepared for.

I got an overview of the cache hierarchy, which pointed out that I wasn't flushing the ARM dcache to get my writes out into system L2 (more like an L3) so that the GPU could see it. This should also improve stability, since before we were only getting lucky that the GPU would actually see our command stream.

Most importantly, I ended up fixing a mistake in my attempt at reset using the mailbox commands, and now I've got working reset. Testing cycles for GPU hangs have dropped from about 5 minutes to 2-30 seconds. Between working reset and improved stability from loads/stores, we're at the point that X is almost stable. I can now run piglit on actual hardware! (it takes hours, though)

On the X front, the modesetting driver is now merged to the X Server with glamor-based X rendering acceleration. It also happens to support DRI3 buffer passing, but not Present's pageflipping/vblank synchronization. I've submitted a patch series for DRI2 support with vblank synchronization (again, no pageflipping), which will get us more complete GLX extension support, including things like GLX_INTEL_swap_event that gnome-shell really wants.

In other news, I've been talking to a developer at Raspberry Pi who's building the KMS support. Combined with the discussions with keithp and ajax last week about compositing inside the X Server, I think we've got a pretty solid plan for what we want our display stack to look like, so that we can get GL swaps and video presentation into HVS planes, and avoid copies on our very bandwidth-limited hardware. Baby steps first, though -- he's still working on putting giant piles of clock management code into the kernel module so we can even turn on the GPU and displays on our own without using the firmware blob.

Testing status:
- 93.8% passrate on piglit on simulation
- 86.3% passrate on piglit on Raspberry Pi

All those opcodes I mentioned in the previous post are now completed -- sadly, I didn't get people up to speed fast enough to contribute before those projects were the biggest things holding back the passrate. I've started a page at for documenting the setup process and status.

And now, next steps. Now that I've got GPU reset, a high priority is switching to interrupt-based render job tracking and putting an actual command queue in the kernel so we can have multiple GPU jobs queued up by userland at the same time (the VC4 sadly has no ringbuffer like other GPUs have). Then I need to clean up user <-> kernel ABI so that I can start pushing my linux code upstream, and probably work on building userspace BO caching.

Lots of new challenges ahead

by swift via Simplicity is a form of art... »

I’ve been pretty busy lately, albeit behind the scenes, which has led to lower activity within the free software communities that I’m active in. Still, I’m not planning any exit; on the contrary, lots of ideas are just waiting for some free time to engage. So what are the challenges that have been taking up my time?

One of them is that I recently moved. And with moving comes a lot of work in getting the place into good shape and getting settled. Today I finished the last job that I wanted to finish in my apartment in the short term, so that’s one thing off my TODO list.

Another one is that I started an intensive master-after-master programme with the subject of Enterprise Architecture. This not only takes up quite some ex-cathedra time, but also additional hours of studying (and for the moment also exams). But I’m really satisfied that I can take up this course, as I’ve been wandering around in the world of enterprise architecture for some time now and want to grow even further in this field.

But that’s not all. One of my side activities has been blooming a lot, and I recently reached the 200th server that I’m administering (although I think this number will reduce to about 120 as I’m helping one organization with handing over management of their 80+ systems to their own IT staff). Together with some friends (who also have non-profit customers’ IT infrastructure management as their side-business) we’re now looking at consolidating our approach to system administration (and engineering).

I’m also looking at investing time and resources in a start-up, depending on the business plan and the effort required. But more information on this later, when things are clearer :-)

Fix ALL the BUGS!

by lu_zero via Luca Barbato »

Vittorio started (with some help from me) to fix all the issues pointed by Coverity.

Static analysis

Coverity (and scan-build) are quite useful for spotting mistakes, even if their false-positive ratio tends to be quite high. Even the false positives are usually interesting, since they spot code that is unnecessarily convoluted. The code should be as simple as possible, but not simpler.

The basic idea behind those tools is to follow the code paths while compiling them and spot what could go wrong (e.g. you are feeding a NULL to a function that would dereference it).

This approach usually has two problems: false positives due to the limited scope of the analyzer, and false negatives due to shadowing.

False Positives

Coverity might assume certain inputs are valid even if they are made impossible by some initial checks earlier in the code flow.

In those cases you should spend enough time to make sure Coverity is wrong and those faulty inputs aren’t slipping in somewhere. NEVER just add some checks to the flagged code as a first move; you might hide real issues (e.g. if Coverity complains about an uninitialized variable, do not just initialize it to a dummy value: check why it happens and whether the logic behind it is wrong).

If Coverity is confused, your compiler is confused as well and will produce suboptimal executables. Properly fixing those issues can result in useful speedups: simpler code is usually faster.

Ever increasing issue count

While fixing issues using those tools you might notice to your surprise that every time you fix something, something new appears out of thin air.

This is not magic; the static analyzers usually keep some limit on how deep they go, depending on the issues already present and how much time has been spent already.

That surprise has been fun, since apparently some of the time limit is per compilation unit, so splitting large files into smaller chunks gets us more results (while speeding up the build process thanks to better parallelism).

Usually fixing a high-impact issue gets us three to five new small-impact issues.

I like solving puzzles, so I do not mind having more fun; sadly I have not had much spare time to play this game lately.

Merge ALL the FIXES

Fixing all the issues properly is a lofty goal, and as usual having a patch is just half of the work. Usually two sets of eyes work better than one, and an additional brain with different expertise can prevent a good chunk of mistakes. The review process is the other, sometimes neglected, half of solving issues.
So far 100+ patches have piled up over the past weeks, and now they are being sent in small batches to ease the work of review. (I have something brewing to make reviewing simpler, as you might know.)

During the review, probably about one in ten patches will be rejected, and the corresponding Coverity report updated with enough information to explain why it is a false positive or why the dangerous or strange behaviour pointed out is intentional.

These fixes will feed into the next point releases for our four maintained major releases: 0.8, 9, 10 and 11. Many thanks to the volunteers who spend their free time keeping all the branches up to date!

Tracking patches

by lu_zero via Luca Barbato »

You need good tools to do a good job.

Even the best tool in the hands of a novice is a club.

I’m quite fond of improving the tools I use. That’s why I started getting involved in Gentoo, Libav, VLC and plenty of other projects.

I already discussed lldb and asan/valgrind; now my current focus is patch trackers, in part due to the current effort to improve the Libav one.


Before talking about patches and their tracking, I’d like to digress a little on who produces them: the mythical Contributor. Without contributions an opensource project would not exist.

You might have recurring contributions and unique/seldom contributions. Both are quite important.
In general you should try to turn seldom contributors into recurring contributors.

A recurring contributor can accept spending some additional time to set up the environment needed to actually provide contributions back to the community; a sporadic contributor could easily be put off if the effort required to send a patch is larger than writing the patch itself.

The project maintainers should make the life of contributors as simple as possible.

Patches and Revision Control

Lately most opensource projects have seen the light and started to use decentralized revision control systems, and thanks to github and many others the concept of pull requests is becoming part of our culture, and with it hopefully comes a wider acceptance of the fact that code should be reviewed before it is merged.

Pull Request

In a decentralized development scenario, new code is usually developed in topic branches, routinely rebased against master until the set is ready; then the set of changes (called a series or patchset) is reviewed and, after some rounds of fixes, eventually merged. Thanks to bitbucket we now have forking, spooning and knifing as part of the jargon.

The review (and merge) step is, quite properly, called knifing (or stabbing): you have to dice, slice and polish the code before merging it.

Reviewing code

During a review, bugs are usually spotted and ways to improve the code are suggested. Patches might be split or merged together, and the series reworked and improved a lot.

The process is usually time consuming, even more so for an organization made of volunteers: writing code is fun, addressing the issues spotted is less so, and reviewing someone else’s code even less.

Sadly it is a necessary annoyance, since otherwise the errors (and horrors) that slipped through would be much bigger and probably much more numerous. If you do not care about code quality and what you are writing is not used by other people, you can probably ignore that; otherwise you should feel somewhat concerned that what you wrote might turn some people’s lives into a sea of pain. (On the other hand, some gratitude for such a daunting effort is usually welcome.)

Pull request management

The old-fashioned way to issue a pull request is either to poke somebody, telling them that your branch is ready for merging, or just to make a set of patches and mail them to whoever is in charge of integrating code into the main branch.

git provides a nifty tool for this called git send-email, and it is quite common to send sets of patches (usually called a series) to a mailing list. You get feedback by email, and you can update the set using the --in-reply-to option and the message id.

Platforms such as github and similar ones are more web-centric and require you to use the web interface to issue and review the request. No additional tools are required besides git and a browser.

gerrit and reviewboard provide custom scripts to set up ephemeral branches in a staging area; the review process then requires a browser again. Every commit gets some tool-specific metadata to ease tracking changes across series revisions. This approach is the most setup-intensive.

Pro and cons

Mailing list approach

Testing patches from the mailing list is quite simple thanks to git am. And if the In-Reply-To field is used properly, updates appear sorted in a sensible way.

This method is the simplest for people used to having an email client always open next to a console (if they are using a well-configured emacs or vim, they literally never move away from the editor).

On the other hand, people using webmail or a basic email client might find the approach more cumbersome than a web-based one.

If your only way to track contributions is a mailing list, it gets quite easy to forget the status of a set. Patches can be neglected, and even their author might forget about them for a long time.

Patchwork approach

Patchwork tracks which patches hit a mailing list and tries to figure out automatically whether they eventually get merged.

It is quite basic: it provides a web interface to check the status and a means to update the patch status. The review must happen on the mailing list, and there is no concept of a series.

As basic as it is, it works as a reminder of pending patches, but it tends to get cluttered easily, and keeping it clean requires some effort.

Github approach

The web interface makes it much easier to spot what is pending and what its status is; people used to having everything in the browser (chrome and mozilla can lately be made to work as a decent IDE) might like it much better.

Reviewing small series or single patches is usually nicer, but the current UIs do not scale to larger (5+) patchsets.

People not living in a browser find it quite annoying to switch context, and contributing requires additional effort, since you have to register on a website and issuing a patch involves many additional steps, while the email approach just requires typing git send-email -1.

Gerrit approach

The gerrit interfaces tend to be richer than their Github counterparts. That can be good or bad, since they aren’t as immediate and tend to overwhelm new contributors.

You need to make an additional effort to set up your environment, since you need some custom scripts.

The series are tracked with additional precision, but for all practical purposes it is the same as github, with an additional burden for the contributor.

Introducing plaid

Plaid is my attempt to tackle the problem. It is currently unfinished and in dire need of more hands working on it.

Its basic concept is to be as non-intrusive as possible, retaining all the pros of the simple git+email workflow, as patchwork does.

It already provides additional features such as the ability to manage series of patches and to track updates to them. It sports a view that breaks out which series require a review and which have long been waiting for an update.

What’s pending is the ability to review directly in the browser, to send the review email from the web to the mailing list, and some more.

I will probably complete it within the year or by next spring; if you like Flask or Python, contributions are warmly welcome!