paps: heap-based buffer overflow in read_file() (paps.c)

Postby ago via agostino's blog »

Paps is a UTF-8 to PostScript converter that makes use of Pango. It provides both a stand-alone command-line tool and a library.

It was discovered that a crafted (in this case, empty) file can cause a heap-based buffer overflow.
Apparently the project has had no release since 2007 and seemed to be dead, but I just discovered that it has quietly moved to GitHub, where the PR with the fix has been sent.

The complete ASan output:

# paps $crafted.file
==30527==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000dfaf at pc 0x0000004e122d bp 0x7ffd8f3dfe90 sp 0x7ffd8f3dfe88                                                                                                                                      
READ of size 1 at 0x60200000dfaf thread T0                                                                                                                                                                                                                                     
    #0 0x4e122c in read_file /tmp/portage/app-text/paps-0.6.8-r1/work/paps-0.6.8/src/paps.c:573:7                                                                                                                                                                              
    #1 0x4e122c in main /tmp/portage/app-text/paps-0.6.8-r1/work/paps-0.6.8/src/paps.c:493                                                                                                                                                                                     
    #2 0x7fd8aff707af in __libc_start_main (/lib64/                                                                                                                                                                                                          
    #3 0x436968 in _start (/usr/bin/paps+0x436968)                                                                                                                                                                                                                             
0x60200000dfaf is located 1 bytes to the left of 4-byte region [0x60200000dfb0,0x60200000dfb4)                                                                                                                                                                                 
allocated by thread T0 here:                                                                                                                                                                                                                                                   
    #0 0x4bdc75 in realloc (/usr/bin/paps+0x4bdc75)                                                                                                                                                                                                                            
    #1 0x7fd8b111c35d in g_realloc (/usr/lib64/                                                                                                                                                                                                       
SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/app-text/paps-0.6.8-r1/work/paps-0.6.8/src/paps.c:573 read_file                                                                                                                                                   
Shadow bytes around the buggy address:                                                                                                                                                                                                                                         
  0x0c047fff9ba0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
  0x0c047fff9bb0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
  0x0c047fff9bc0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
  0x0c047fff9bd0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
  0x0c047fff9be0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
=>0x0c047fff9bf0: fa fa fa fa fa[fa]04 fa fa fa 00 02 fa fa 00 02                                                                                                                                                                                                              
  0x0c047fff9c00: fa fa fd fd fa fa fd fd fa fa fd fd fa fa fd fa                                                                                                                                                                                                              
  0x0c047fff9c10: fa fa fd fa fa fa 00 00 fa fa 00 00 fa fa 00 00                                                                                                                                                                                                              
  0x0c047fff9c20: fa fa 00 00 fa fa 00 fa fa fa 00 00 fa fa fd fa                                                                                                                                                                                                              
  0x0c047fff9c30: fa fa fd fa fa fa fd fa fa fa 00 00 fa fa 00 00                                                                                                                                                                                                              
  0x0c047fff9c40: fa fa 00 fa fa fa 00 00 fa fa 00 00 fa fa 00 fa                                                                                                                                                                                                              
Shadow byte legend (one shadow byte represents 8 application bytes):                                                                                                                                                                                                           
  Addressable:           00                                                                                                                                                                                                                                                    
  Partially addressable: 01 02 03 04 05 06 07                                                                                                                                                                                                                                  
  Heap left redzone:       fa                                                                                                                                                                                                                                                  
  Heap right redzone:      fb                                                                                                                                                                                                                                                  
  Freed heap region:       fd                                                                                                                                                                                                                                                  
  Stack left redzone:      f1                                                                                                                                                                                                                                                  
  Stack mid redzone:       f2                                                                                                                                                                                                                                                  
  Stack right redzone:     f3                                                                                                                                                                                                                                                  
  Stack partial redzone:   f4                                                                                                                                                                                                                                                  
  Stack after return:      f5                                                                                                                                                                                                                                                  
  Stack use after scope:   f8                                                                                                                                                                                                                                                  
  Global redzone:          f9                                                                                                                                                                                                                                                  
  Global init order:       f6                                                                                                                                                                                                                                                  
  Poisoned by user:        f7                                                                                                                                                                                                                                                  
  Container overflow:      fc                                                                                                                                                                                                                                                  
  Array cookie:            ac                                                                                                                                                                                                                                                  
  Intra object redzone:    bb                                                                                                                                                                                                                                                  
  ASan internal:           fe                                                                                                                                                                                                                                                  
  Left alloca redzone:     ca                                                                                                                                                                                                                                                  
  Right alloca redzone:    cb                                                                                                                                                                                                                                                  

Affected version:
All versions.

Fixed version:
0.6.8-r2 (in Gentoo)

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.
This bug was fixed by Jason A. Donenfeld of Gentoo.


2015-06-09: bug discovered
2015-11-17: bug reported downstream (Gentoo)
2016-07-12: fix produced downstream
2016-07-28: advisory released

This bug was found with American Fuzzy Lop.




Building and Developing GStreamer using Visual Studio

Postby Nirbheek via Nirbheek’s Rantings »

Two months ago, I talked about how we at Centricular have been working on a Meson port of GStreamer and its basic dependencies (glib, libffi, and orc) for various reasons — faster builds, better cross-platform support (particularly Windows), better toolchain support, ease of use, and for a better build system future in general.

Meson also has built-in support for things like gtk-doc, gobject-introspection, translations, etc. It can even generate Visual Studio project files at build time so projects don't have to expend resources maintaining those separately.

Today I'm here to share instructions on how to use Cerbero (our “aggregating” build system) to build all of GStreamer on Windows using MSVC 2015 (wherever possible). Note that this means you won't see any Meson invocations at all because Cerbero does all that work for you.

Note that this is still all unofficial and has not been proposed for inclusion upstream. We still have a few issues that need to be ironed out before we can do that¹.

First, you need to set up the environment on Windows by installing a bunch of external tools: Python 2, Python 3, Git, etc. You can find the instructions for that here:

This is very similar to the old Cerbero instructions, but some new tools are needed. Once you've done everything there (Visual Studio especially takes a while to fetch and install itself), the next step is fetching Cerbero:

$ git clone
This will clone and check out the meson-1.8 branch, which will build GStreamer 1.8.x. Next, we bootstrap it:

Now we're (finally) ready to build GStreamer. Just invoke the package command:

python2 cerbero-uninstalled -c config/win32-mixed-msvc.cbc package gstreamer-1.0
This will build all the `recipes` that constitute GStreamer, including the core libraries and all the plugins including their external dependencies. This comes to about 76 recipes. Out of all these recipes, only the following are ported to Meson and are built with MSVC:

libffi.recipe (only 32-bit)

The rest still mostly use Autotools, plain GNU Make, or CMake. Almost all of these are still built with MinGW. The only exception is libvpx, which uses its own custom make-based build system but is built with MSVC.

Eventually we want to build everything including all external dependencies with MSVC by porting everything to Meson, but as you can imagine it's not an easy task. :-)

However, even with just these recipes, there is a large improvement in how quickly you can build all of GStreamer inside Cerbero on Windows. For instance, the time required for building gstreamer-1.0.recipe which builds gstreamer.git went from 10 minutes to 45 seconds. It is now easier to do GStreamer development on Windows since rebuilding doesn't take an inordinate amount of time!

As a further improvement for doing GStreamer development on Windows, for all these recipes (except libffi because of complicated reasons), you can also generate Visual Studio 2015 project files and use them from within Visual Studio for editing, building, and so on.

Go ahead, try it out and tell me if it works for you!

As an aside, I've also been working on some proper in-depth documentation of Cerbero that explains how the tool works, the recipe format, supported configurations, and so on. You can see the work-in-progress if you wish to.

1. Most importantly, the tests cannot be built yet because GStreamer bundles a very old version of libcheck. I'm currently working on fixing that.

April-June 2016 Status Report

Postby Webmaster Team via FreeBSD News Flash »

The April to June 2016 Status Report is now available.

Kimpton Hotels Probes Card Breach Claims

Postby BrianKrebs via Krebs on Security »

Kimpton Hotels, a boutique hotel brand that includes 62 properties across the United States, said today it is investigating reports of a credit card breach at multiple locations.

On July 22, KrebsOnSecurity reached out to San Francisco-based Kimpton after hearing from three different sources in the financial industry about a pattern of card fraud that suggested a card breach at close to two-dozen Kimpton hotels across the country.

Today, Kimpton responded by issuing and posting the following statement:

“Kimpton Hotels & Restaurants takes the protection of payment card data very seriously. Kimpton was recently made aware of a report of unauthorized charges occurring on cards that were previously used legitimately at Kimpton properties. As soon as we learned of this, we immediately launched an investigation and engaged a leading security firm to provide us with support.”

“We are committed to swiftly resolving this matter. In the meantime, and in line with best practice, we recommend that individuals closely monitor their payment card account statements. If there are unauthorized charges, individuals should immediately notify their bank. Payment card network rules generally state that cardholders are not responsible for such charges.”

Assuming a breach at Kimpton is confirmed, the company would join a long list of hotel brands that have acknowledged card breaches over the last year after prompting by KrebsOnSecurity, including Trump Hotels (twice), Hilton, Mandarin Oriental, and White Lodging (twice). Breaches also have hit hospitality chains Starwood Hotels and Hyatt.

In many of those incidents, thieves had planted malicious software on the point-of-sale devices at restaurants and bars inside of the hotel chains. However, the source and extent of the apparent breach at Kimpton properties is still unknown.

Point-of-sale based malware has driven most of the credit card breaches over the past two years, including intrusions at Target and Home Depot, as well as breaches at a slew of point-of-sale vendors. The malware usually is installed via hacked remote administration tools. Once the attackers have their malware loaded onto the point-of-sale devices, they can remotely capture data from each card swiped at that cash register.

Thieves can then sell the data to crooks who specialize in encoding the stolen data onto any card with a magnetic stripe, and using the cards to buy gift cards and high-priced goods from big-box stores like Target and Best Buy.

Readers should remember that they’re not liable for fraudulent charges on their credit or debit cards, but they still have to report the unauthorized transactions. There is no substitute for keeping a close eye on your card statements. Also, consider using credit cards instead of debit cards; having your checking account emptied of cash while your bank sorts out the situation can be a hassle and lead to secondary problems (bounced checks, for instance).

Trump, DNC, RNC Flunk Email Security Test

Postby BrianKrebs via Krebs on Security »

Donald J. Trump has repeatedly bashed Sen. Hillary Clinton for handling classified documents on her private email server, suggesting that anyone who is so lax with email security isn’t fit to become president. But a closer look at the Web sites for each candidate shows that, in contrast to Clinton’s campaign site, Trump’s has failed to take full advantage of a free and open email security technology designed to stymie email spoofing and phishing attacks.

At issue is a fairly technical proposed standard called DMARC. Short for “Domain-based Message Authentication, Reporting and Conformance,” DMARC tries to solve a problem that has plagued email since its inception: It’s surprisingly difficult for email providers and end users alike to tell whether a given email is real – i.e. that it really was sent by the person or organization identified in the “from:” portion of the missive.

DMARC may not yet be widely deployed beyond the major email providers, but that’s about to change. Google announced late last year that it will soon move to a policy of rejecting any messages that don’t pass the authentication checks spelled out in the DMARC specification. And others are already moving in the same direction.

Probably the easiest way to understand DMARC is to walk through a single site’s records. According to the DMARC compliance lookup tool at Dmarcian — a DMARC awareness, training and support site — Clinton’s campaign site has fully implemented DMARC. This means that the campaign has posted a public policy that enables email providers like Google, Microsoft and Yahoo to quickly determine whether a message claiming to have been sent from the campaign’s domain actually was sent from that domain.

Specifically, (and this is where things can quickly descend into a Geek Factor 5 realm of nerdiness) DMARC sits on top of two existing technologies that try to make email easy to identify: Sender Policy Framework (SPF), and DomainKeys Identified Mail (DKIM).

SPF is basically a list of the Internet addresses and domains which are authorized to send email on behalf of a given domain. DKIM allows email receivers to verify that a piece of email originated from an Internet domain through the use of public key cryptography. Deploying both technologies gives email receivers two ways to figure out if a piece of email is legitimate.
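For illustration, an SPF policy is published as a DNS TXT record on the sending domain; everything below (the domain, the address range, the included sender) is hypothetical:

```
example.com.  IN TXT  "v=spf1 ip4:192.0.2.0/24 include:_spf.example.net -all"
```

The “-all” terminator tells receivers that mail from any source not matched by the preceding mechanisms should fail the SPF check.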

The DMARC record for Clinton’s site includes the text string “p=quarantine.” The “p” bit stands for policy, and “quarantine” means the Web site’s administrators have instructed email providers to quarantine all messages sent from addresses or domains not on that list and not signed with DKIM – effectively consigning them to the intended recipient’s “spam” or “junk” folder. Another blocking option available is “p=reject,” which tells email providers to outright drop or reject any mail sent from domains or addresses not specified in the organization’s SPF records and lacking any appropriate DKIM signatures.

Turning Dmarcian’s lookup tool against Trump’s site, we can see that although the site is thinking about turning on DMARC, it hasn’t actually done so yet. The site’s DMARC record is set to the third option — “p=none” — which means the site administrators haven’t yet asked email providers to block or quarantine any messages that fail to match the site’s SPF records. Rather, the site merely asks email providers to report back on the source of any email messages claiming to have been sent by that domain.

Dmarcian founder Tim Draegen said this “p=none” setting of DMARC is a data collection feature designed to give organizations a better idea of their total email footprint before setting strict DMARC “reject” or “quarantine” rules.
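To make the policy options concrete, here is what the “_dmarc” TXT record might look like at each setting (hypothetical domain and reporting address); only the “p=” tag changes:

```
_dmarc.example.com.  IN TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
_dmarc.example.com.  IN TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
_dmarc.example.com.  IN TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

The “rua=” tag names the mailbox that receives the aggregate feedback reports described above.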

Why on earth would any organization not know where its email was coming from? As this video at Dmarcian notes, one reason is that anti-spam and anti-malware filters at major email providers have essentially spawned an entire email deliverability industry that exists solely to help organizations keep their emails flowing into inboxes worldwide. As a result, many companies rely on an array of third-party providers to send messages on their behalf, yet those business relationships may not be immediately evident to the geeks in charge of setting up DMARC rules for the organization.

“DMARC was designed so that it says, ‘All you email providers….give me feedback on how you’re seeing email from us being received,'” Draegen said. “Based on that feedback, the organization can then can go back and identify and specify their legitimate sources of email, and then tell the email providers, ‘Hey, if you get a piece of email not covered by these sources, reject or quarantine it.”

As for why more organizations haven’t deployed DMARC already, Draegen said larger entities often have multiple divisions (think marketing and sales teams) that may develop their own methods of getting their email messages out. Trouble is, those divisions don’t always do a great job at informing the tech folks of what they’re up to.

The “p=none” option thus gives organizations an easy and free way to tell email providers to report any and all mail claiming to be sent by the domain in question. Armed with that information, the organization can then set strict, global policies about which emails to reject or quarantine going forward.

“It really depends on the size of the infrastructure or complexity of the company,” Draegen said. “The tech part of DMARC is pretty easy, but what we tend to see in large companies is that there’s a domain that has traditionally been shared by everyone at the organization, and it often involves a lot of hard work to find all the legitimate sources of email for the organization.”

Alexander Garcia-Tobar, CEO and co-founder of email security firm Valimail, agreed.

“The answer is that it’s extremely tricky to get right,” he said, of identifying all of a company’s legitimate email sending activity. “Most organizations are a lot more concerned about blocking good stuff going out, until they get phished.”

So how long does it take for organizations to gather enough information with DMARC’s “p=none” option in order to build an effective (yet not overly restrictive) “quarantine” or “reject” policy? For larger organizations, this can often amount to a long, laborious process, Draegen said. For smaller outfits — such as presidential campaigns — it shouldn’t take long to gather enough data with DMARC’s “p=none” option to fashion targeted rules that block phishing and spoofing attacks without endangering legitimate outgoing emails.

I asked Draegen whether he thought the Trump Campaign was somehow derelict in not fully adopting DMARC, given the candidate’s statements about how anyone who’s lax with email security doesn’t deserve to be the next Commander-in-Chief of the United States. Draegen admitted he “can’t stomach” Trump, and that he found Clinton’s email scandal likewise nauseating given a lifetime of experience as an email administrator and the challenges involved in protecting a private email server from determined cyber adversaries.

But Draegen said DMARC compliance is one of the easiest and cheapest ways that any organization can use to better protect itself and its customers from email-based phishing and malware attacks.

“If you’re going to invest in click-tracking technologies or enterprise security products of any flavor and you haven’t yet done DMARC, you’re wasting your time,” Draegen said, noting that enabling DMARC can often help organizations increase delivery rates by as much as five or ten percent. And for campaigns that aren’t adopting DMARC, that may mean lots of email appeals to voters (and, more importantly, potential donors) go undelivered.

“Get the easy, free tech stuff done first because you’re going to get a lot of bang for your buck by deploying DMARC,” he said. “And if you’re a presidential candidate, someone at your campaign should recognize that the first thing you do is enable DMARC.”

Incidentally, given the breaking news today about Russian hackers reportedly hacking into networks at the Democratic National Committee (DNC) — allegedly to make Mr. Trump a more sympathetic candidate — it’s worth noting that while Clinton’s site takes full advantage of DMARC, the same cannot be said of the Web sites for the DNC or the Republican National Committee (RNC).


FreeBSD 11.0-BETA2 Available

Postby Webmaster Team via FreeBSD News Flash »

The second BETA build for the FreeBSD 11.0 release cycle is now available. ISO images for the amd64, armv6, i386, aarch64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.

Canadian Man Behind Popular ‘Orcus RAT’

Postby BrianKrebs via Krebs on Security »

Far too many otherwise intelligent and talented software developers these days apparently think they can get away with writing, selling and supporting malicious software and then couching their commerce as a purely legitimate enterprise. Here’s the story of how I learned the real-life identity of a Canadian man who’s laboring under that same illusion as the proprietor of one of the most popular and affordable tools for hacking into someone else’s computer.

Earlier this week I heard from Daniel Gallagher, a security professional who occasionally enjoys analyzing new malicious software samples found in the wild. Gallagher said he and members of @malwrhunterteam and @MalwareTechBlog recently got into a Twitter fight with the author of Orcus RAT, a tool they say was explicitly designed to help users remotely compromise and control computers that don’t belong to them.

A still frame from a YouTube video demonstrating Orcus RAT’s keylogging ability to steal passwords from Facebook and other sites.

The author of Orcus — a person going by the nickname “Ciriis Mcgraw” a.k.a. “Armada” on Twitter and other social networks — claimed that his RAT was in fact a benign “remote administration tool” designed for use by network administrators and not a “remote access Trojan,” as critics charged. Gallagher and others took issue with that claim, pointing out that they were increasingly encountering computers that had been infected with Orcus unbeknownst to the legitimate owners of those machines.

The malware researchers noted another reason that Mcgraw couldn’t so easily distance himself from how his clients used the software: He and his team are providing ongoing technical support and help to customers who have purchased Orcus and are having trouble figuring out how to infect new machines or hide their activities online.

What’s more, the range of features and plugins supported by Armada, they argued, go well beyond what a system administrator would look for in a legitimate remote administration client like Teamviewer, including the ability to launch a keylogger that records the victim’s every computer keystroke, as well as a feature that lets the user peek through a victim’s Web cam and disable the light on the camera that alerts users when the camera is switched on.

A new feature of Orcus announced July 7 lets users configure the RAT so that it evades digital forensics tools used by malware researchers, including an anti-debugger and an option that prevents the RAT from running inside of a virtual machine.

Other plugins offered directly from Orcus’s tech support page (PDF) and authored by the RAT’s support team include a “survey bot” designed to “make all of your clients do surveys for cash;” a “USB/.zip/.doc spreader,” intended to help users “spread a file of your choice to all clients via USB/.zip/.doc macros;” a “VirusTotal checker” made to “check a file of your choice to see if it had been scanned on VirusTotal;” and an “Adsense Injector,” which will “hijack ads on pages and replace them with your Adsense ads and disable adblocker on Chrome.”


Gallagher said he was so struck by the guy’s “smugness” and sheer chutzpah that he decided to take a closer look at any clues that Ciriis Mcgraw might have left behind as to his real-world identity and location. Sure enough, he found that Ciriis Mcgraw also has a YouTube account under the same name, and that a video Mcgraw posted in July 2013 pointed to a 33-year-old security guard from Toronto, Canada.

Gallagher noticed that the video — a bystander recording on the scene of a police shooting of a Toronto man — included a link to the domain policereview[dot]info. A search of the registration records attached to that Web site name shows that the domain was registered to a John Revesz in Toronto and to an email address.

A reverse WHOIS lookup shows the same email address was used to register at least 20 other domains, including revesztechnologies[dot]com and — perhaps most tellingly — johnrevesz[dot]com.

Johnrevesz[dot]com is no longer online, but a cached copy of the site includes his personal résumé, which states that John Revesz is a network security administrator whose most recent job in that capacity was as an IT systems administrator for TD Bank. Revesz’s LinkedIn profile indicates that for at least the past year he has served as a security guard for GardaWorld International Protective Services, a private security firm based in Montreal.

Revesz’s CV also says he’s the owner of the aforementioned Revesz Technologies, but it’s unclear whether that business actually exists; the company’s Web site currently redirects visitors to a series of sites promoting spammy and scammy surveys, come-ons and giveaways.


Contacted by KrebsOnSecurity, Revesz seemed surprised that I’d connected the dots, but beyond that did not try to disavow ownership of the Orcus RAT.

“Profit was never the intentional goal, however with the years of professional IT networking experience I have myself, knew that proper correct development and structure to the environment is no free venture either,” Revesz wrote in reply to questions about his software. “Utilizing my 15+ years of IT experience I have helped manage Orcus through its development.”

Revesz continued:

“As for your legalities question.  Orcus Remote Administrator in no ways violates Canadian laws for software development or sale.  We neither endorse, allow or authorize any form of misuse of our software.  Our EULA [end user license agreement] and TOS [terms of service] is very clear in this matter. Further we openly and candidly work with those prudent to malware removal to remove Orcus from unwanted use, and lock out offending users which may misuse our software, just as any other company would.”

Revesz said none of the aforementioned plugins were supported by Orcus and that all of them were developed by third-party developers, adding that “Orcus will never allow implementation of such features, and or plugins would be outright blocked on our part.”

In apparent contradiction to that claim, plugins that allow Orcus users to disable the Webcam light on a computer running the software, and one that enables the RAT to be used as a “stresser” to knock sites and individual users offline, are available directly from Orcus Technologies’ Github page.

Revesz also offers a service to help people cover their tracks online. Using his alter ego “Armada” on the hacker forum Hackforums[dot]net, he sells a “bulletproof dynamic DNS service” that promises not to keep records of customer activity.

Dynamic DNS services allow users to have Web sites hosted on servers that frequently change their Internet addresses. This type of service is useful for people who want to host a Web site on a home-based Internet address that may change from time to time, because dynamic DNS services can be used to easily map the domain name to the user’s new Internet address whenever it happens to change.

Unfortunately, these dynamic DNS providers are extremely popular in the attacker community, because they allow bad guys to keep their malware and scam sites up even when researchers manage to track the attacking IP address and convince the ISP responsible for that address to disconnect the malefactor. In such cases, dynamic DNS allows the owner of the attacking domain to simply re-route the attack site to another Internet address that he controls.
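The mechanics behind that re-routing are simple. The sketch below is a minimal illustration in Python, modeled on the de-facto “dyndns2” update protocol that many dynamic DNS providers support; the server and hostnames are hypothetical placeholders, not any real provider’s endpoint.

```python
from urllib.parse import urlencode

def build_update_url(server: str, hostname: str, new_ip: str) -> str:
    """Build the GET request a dynamic DNS client sends when its IP changes."""
    query = urlencode({"hostname": hostname, "myip": new_ip})
    return f"https://{server}/nic/update?{query}"

# A client behind a home connection would send this request (typically with
# HTTP basic auth) each time it notices a new public address; the provider
# repoints the DNS record, and the domain follows the machine wherever it goes.
url = build_update_url("dyn.example.net", "myhost.example.net", "203.0.113.7")
```

That same convenience is what lets an attacker repoint a malicious domain to a fresh server within minutes of the old one being disconnected.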

Free dynamic DNS providers tend to report or block suspicious or outright malicious activity on their networks, and may well share evidence about the activity with law enforcement investigators. In contrast, Armada’s dynamic DNS service is managed solely by him, and he promises in his ad on Hackforums that the service — to which he sells subscriptions of various tiers for between $30-$150 per year — will not log customer usage or report anything to law enforcement.

According to writeups by Kaspersky Lab and Heimdal Security, Revesz’s dynamic DNS service has been seen used in connection with malicious botnet activity by another RAT known as Adwind. Indeed, Revesz’s service appears to involve the domain “nullroute[dot]pw”, which is one of 21 domains registered to a “Ciriis Mcgraw” (as well as orcus[dot]pw and orcusrat[dot]pw).

I asked Gallagher (the researcher who originally tipped me off about Revesz’s activities) whether he was persuaded at all by Revesz’s arguments that Orcus was just a tool and that Revesz wasn’t responsible for how it was used.

Gallagher said he and his malware researcher friends had private conversations with Revesz in which he seemed to acknowledge that some aspects of the RAT went too far, and promised to release software updates to remove certain objectionable functionalities. But Gallagher said those promises felt more like the actions of someone trying to cover himself.

“I constantly try to question my assumptions and make sure I’m playing devil’s advocate and not jumping the gun,” Gallagher said. “But I think he’s well aware that what he’s doing is hurting people, it’s just now he knows he’s under the microscope and trying to do and say enough to cover himself if it ever comes down to him being questioned by law enforcement.”


Cici’s Pizza: Card Breach at 130+ Locations

Postby BrianKrebs via Krebs on Security »

Cici’s Pizza, a Coppell, Texas-based fast-casual restaurant chain, today acknowledged a credit card breach at more than 135 locations. The disclosure comes more than a month after KrebsOnSecurity first broke the news of the intrusion, offering readers a sneak peek inside the sprawling cybercrime machine that thieves used to siphon card data from Cici’s customers in real-time.

In a statement released Tuesday evening, Cici’s said that in early March 2016, the company received reports from several of its restaurant locations that point-of-sale systems were not working properly.

“The point-of-sale vendor immediately began an investigation to assess the problem and initiated heightened security measures,” the company said in a press release. “After malware was found on some point-of-sale systems, the company began a restaurant-by-restaurant review and remediation, and retained a third-party cybersecurity firm, 403 Labs, to perform a forensic analysis.”

According to Cici’s, “the vast majority of the intrusions began in March of 2016,” but the company acknowledges that the breach started as early as 2015 at some locations. Cici’s said it was confident the malware has been removed from all stores. A list of affected locations is here (PDF).

On June 3, 2016, KrebsOnSecurity reported that sources at multiple financial institutions suspected a card breach at Cici’s. That story featured a quote from Stephen P. Warne, vice president of service and support for Datapoint POS, a point-of-sale provider that services a large number of Cici’s locations. Warne told this author that the fraudsters responsible for the intrusions had tricked employees into installing the card-stealing malicious software.

On June 8, 2016, this author published Slicing Into a Point-of-Sale Botnet, which brought readers inside of the very crime machine the perpetrators were using to steal credit card data in real-time from Cici’s customers. Along with card data, the malware had intercepted private notes that Cici’s Pizza employees left to one another about important developments between job shifts.

Point-of-sale based malware has driven most of the credit card breaches over the past two years, including intrusions at Target and Home Depot, as well as breaches at a slew of point-of-sale vendors. The malware usually is installed via hacked remote administration tools. Once the attackers have their malware loaded onto the point-of-sale devices, they can remotely capture data from each card swiped at that cash register.
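To make the capture step concrete: Track 2 data read from a card’s magnetic stripe follows a rigid format (card number, a “=” separator, expiry and service code), which is exactly the pattern that both POS malware and defensive data-loss-prevention scanners hunt for in process memory. A minimal sketch of that pattern matching, using a well-known test card number rather than real data:

```python
import re

# Track 2 records look like ";PAN=YYMM<service code and discretionary data>?"
TRACK2_RE = re.compile(rb";(\d{13,19})=(\d{4})\d*\?")

def find_track2(buffer: bytes):
    """Return the card numbers of any Track 2 records found in a raw buffer."""
    return [m.group(1).decode() for m in TRACK2_RE.finditer(buffer)]

# 4111111111111111 is a standard test PAN, not a real account number.
sample = b"junk;4111111111111111=25121015432112345678?junk"
print(find_track2(sample))  # ['4111111111111111']
```

The same regular expression powers many breach-detection tools, which is one reason memory-scraping malware is comparatively easy to spot once defenders know to look for it.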

Thieves can then sell the data to crooks who specialize in encoding the stolen data onto any card with a magnetic stripe, and using the cards to buy gift cards and high-priced goods from big-box stores like Target and Best Buy.

Readers should remember that they’re not liable for fraudulent charges on their credit or debit cards, but they still have to report the phony transactions. There is no substitute for keeping a close eye on your card statements. Also, consider using credit cards instead of debit cards; having your checking account emptied of cash while your bank sorts out the situation can be a hassle and lead to secondary problems (bounced checks, for instance).

Carbanak Gang Tied to Russian Security Firm?

Postby BrianKrebs via Krebs on Security »

Among the more plunderous cybercrime gangs is a group known as “Carbanak,” Eastern European hackers blamed for stealing more than a billion dollars from banks. Today we’ll examine some compelling clues that point to a connection between the Carbanak gang’s staging grounds and a Russian security firm that claims to work with some of the world’s largest brands in cybersecurity.

The Carbanak gang derives its name from the banking malware used in countless high-dollar cyberheists. The gang is perhaps best known for hacking directly into bank networks using poisoned Microsoft Office files, and then using that access to force bank ATMs into dispensing cash. Russian security firm Kaspersky Lab estimates that the Carbanak Gang has likely stolen upwards of USD $1 billion — but mostly from Russian banks.

Image: Kaspersky
I recently heard from security researcher Ron Guilmette, an anti-spam crusader whose sleuthing has been featured on several occasions on this site and in the blog I wrote for The Washington Post. Guilmette said he’d found some interesting commonalities in the original Web site registration records for a slew of sites that all have been previously responsible for pushing malware known to be used by the Carbanak gang.

For example, the domains “weekend-service[dot]com,” “coral-trevel[dot]com” and “freemsk-dns[dot]com” all were documented by multiple security firms as distribution hubs for Carbanak crimeware. Historic registration or “WHOIS” records for all three domains contain the same phone and fax numbers for what appears to be a Xicheng Co. in China — 1066569215 and 1066549216, each preceded by either a +86 (China’s country code) or +01 (USA). Each domain record also includes the same contact email address.

According to data gathered by ThreatConnect, a threat intelligence provider [full disclosure: ThreatConnect is an advertiser on this blog], at least 484 domains were registered to that email address or to one of 26 other email addresses that listed the same phone numbers and Chinese company. “At least 304 of these domains have been associated with a malware plugin [that] has previously been attributed to Carbanak activity,” ThreatConnect told KrebsOnSecurity.

Going back to those two phone numbers, 1066569215 and 1066549216: at first glance they appear to be sequential, but closer inspection reveals they differ slightly in the middle. Among the very few domains registered to those Chinese phone numbers that haven’t been seen launching malware is a Web site called “cubehost[dot]biz,” which according to registration records was registered in Sept. 2013 to a 28-year-old Artem Tveritinov of Perm, Russia.

Cubehost[dot]biz is a dormant site, but it appears to be the sister property of a Russian security firm called Infocube (also spelled “Infokube”). The InfoKube web site is also registered to Mr. Tveritinov of Perm, Russia; there are dozens of records in the site’s WHOIS history, but only the oldest, original record from 2011 contains a personal email address.

That same email address was used to register a four-year-old profile account at the popular Russian social networking site Vkontakte for Artyom “LioN” Tveritinov from Perm, Russia. The “LioN” bit is an apparent reference to an Infokube anti-virus product by the same name.

Mr. Tveritinov is quoted as “the CEO of InfoKub” in a press release from FalconGaze, a Moscow-based data security firm that partnered with the InfoKube to implement “data protection and employee monitoring” at a Russian commercial research institute. InfoKube’s own press releases say the company also has been hired to develop “a system to protect information from unauthorized access” undertaken for the City of Perm, Russia, and for consulting projects relating to “information security” undertaken for and with the State Ministry of Interior of Russia.

The company’s Web site claims that InfoKube partners with a variety of established security firms — including Symantec and Kaspersky. The latter confirmed InfoKube was “a very minor partner” of Kaspersky’s, mostly involved in systems integration. Zyxel, another partner listed on InfoKube’s partners page, said it had no partners named InfoKube. Slovakia-based security firm ESET said “Infokube is not and has never been a partner of ESET in Russia.”

Presented with Guilmette’s findings, I was keen to ask Mr. Tveritinov how the phone and fax numbers for a Chinese entity whose phone number has become synonymous with cybercrime came to be copied verbatim into Cubehost’s Web site registration records. I sent requests for comment to Mr. Tveritinov via email and through his Vkontakte page.

Initially, I received a friendly reply from Mr. Tveritinov via email expressing curiosity about my inquiry, and asking how I’d discovered his email address. In the midst of composing a more detailed follow-up reply, I noticed that the Vkontakte social networking profile that Tveritinov had maintained regularly since April 2012 was being permanently deleted before my eyes. Tveritinov’s profile page and photos actually disappeared from the screen I had up on one monitor as I was in the process of composing an email to him in the other.

Not long after Tveritinov’s Vkontakte page was deleted, I heard from him via email. Ignoring my question about the sudden disappearance of his social media account, Tveritinov said he never registered the cubehost[dot]biz domain, and that his personal information was stolen and used in its registration records.

“Our company never did anything illegal, and conducts all activities according to the laws of Russian Federation,” Tveritinov said in an email. “Also, it’s quite stupid to use our own personal data to register domains to be used for crimes, as [we are] specialists in the information security field.”

Turns out, InfoKube/Cubehost also runs an entire swath of Internet addresses managed by Petersburg Internet Network (PIN) Ltd., an ISP in Saint Petersburg, Russia that has a less-than-stellar reputation for online badness.

For example, many of the aforementioned domain names that security firms have conclusively tied to Carbanak distribution (e.g., freemsk-dns[dot]com) are hosted in Internet address space assigned to Cubehost. A search of the RIPE registration records for that block of addresses turns up a physical address in Ras al Khaimah, an emirate of the United Arab Emirates (UAE) that has sought to build a reputation as a tax shelter and a place where it is easy to create completely anonymous offshore companies. The same listing names a contact address for abuse complaints about Internet addresses in that block.

This PIN hosting provider in St. Petersburg has achieved a degree of notoriety in its own right and is probably worthy of additional scrutiny given its reputation as a haven for all kinds of online ne’er-do-wells. In fact, Doug Madory, director of Internet analysis at Internet performance management firm Dyn, has referred to the company as “…perhaps the leading contender for being named the Mos Eisley of the Internet” (a clever reference to the spaceport full of alien outlaws in the 1977 movie Star Wars).

Madory explained that PIN’s hard-won bad reputation stems from the ISP’s documented propensity for absconding with huge chunks of Internet address blocks that don’t actually belong to it, and then re-leasing that purloined Internet address space to spammers and other Internet miscreants.

For his part, Guilmette points to a decade’s worth of other nefarious activity within the Internet address space apparently assigned to Tveritinov and his company. For example, in 2013 Microsoft seized a number of domains parked there that were used as controllers for Citadel online banking malware, and all of those domains had the same “Xicheng Co.” data in their WHOIS records. A Sept. 2011 report on a security blog notes several domains with that Xicheng Co. WHOIS information showing up in online banking heists powered by the Sinowal banking Trojan as far back as 2006.

“If Mr. Tveritinov has either knowledge of, or direct involvement in even a fraction of the criminal goings-on within his address block, then the possibility that he may perhaps also have a role in other and additional criminal enterprises… including perhaps even the Carbanak cyber banking heists… becomes all the more plausible and probable,” Guilmette said.

It remains unclear to what extent the Carbanak gang is still active. Last month, authorities in Russia arrested 50 people allegedly tied to the organized cybercrime group, whose members reportedly hail from Russia, China, Ukraine and other parts of Europe. The action was billed as the biggest ever crackdown on financial hackers in Russia.


Postby via dietanu »

You could almost think that July is a month of farewells.

Moers Weather

Not only am I withdrawing more and more from Twitter (the why and wherefore follows further below); no, now my weather station has given up the ghost as well. After 4 1/2 years of faithful service, the station’s controller is done for. Sure, buying a new station or replacing the controller would certainly be easy enough, but it costs money - and right now that money is going into a hobby that has become much more important to me: photography. My Open Weather Data Service is therefore history.

I have to admit, though, that building the weather station and the whole service around it gave me a great deal of joy. I never thought I would one day be able to look back on 4 1/2 years of nearly continuous weather data logs. Still, it was also a great deal of work that I voluntarily took on. The data not only has to be collected, turned into pretty graphs by script and written into a very large database, it also has to be pushed to a web server you can reach over the Internet. Most recently I ran a setup with a reverse proxy, which, however, tended to give up whenever the IP address of the client (that is, my Internet connection at home) changed, and then threw nothing but a “Bad Gateway” error. Only a restart of nginx helped. Not exactly ideal either.

I let the service die with a small tear in my eye. It pushed me forward enormously on the Linux side, because back in 2012, when I started scripting, I was not yet as fit in Linux as I am today. The feedback for the service was always great; people even called here wanting to know whether it had snowed on a particular day in the past, or why I was doing all this with no payment and no advertising. The City of Moers pushed my data incredibly hard on social media, and it felt good to be able to give something back. Open Data in reverse, as Claus Arndt wrote just recently.

Farewell, weather station. It was a wild, but fun ride!


Now it’s Twitter’s turn. The service I have used every day since late 2008 to keep up with everything going on in the world, and to start, maintain, and yes, also end friendships. To put it plainly: it became too much for me. There is no middle ground for me. I was permanently online on Twitter, across all the mobile devices I have used over the past years. I was always reachable, whether during the day or at 3 a.m., tweeting with acquaintances here in Moers about the latest storm. I ended up with the phone constantly in my hand. That is annoying - not just for me, but also for my family, and they are what matters most. In short: my Twitter account is going to become more inactive. As in the last 14 days, I will use Twitter at most to spread a few important bits of information or to tease a photo of mine. 2008. That is almost 8 years now. Unbelievable how time flies. It is a bit of a small digital exit for me. Dexit? Whatever.

What astonished me is how quickly you get “forgotten.” If you stop writing, you are out of the picture. I had never noticed that in the past almost 8 years; how could I, since I was constantly posting something and getting responses (favs, likes, retweets).

Well then, one US corporation fewer that I make myself dependent on ;) Those who matter to me, and to whom I matter, know where and how they can still reach me. For everyone else: there is still such a thing as email ;)


This is a topic I have engaged with quite intensively in recent years, off and on. I am talking, of course, about my photography. As an almost foolish fanboy I accepted the vendor’s good sides, but also its bad ones. I will report on this in detail on my photo site; here let me just note briefly: I sold my a6000 two days ago, and my 24mm f/1.8 Sony/Zeiss lens has already been sold as well. I will be switching to another manufacturer, one that not only has more tradition in film and photography than Sony, but that also makes considerably cheaper and, if pretty much everyone on the net is to be believed, considerably higher-quality lenses. But as I said - more on that once I have put the new camera through its paces. Perhaps the curious among you will already find a few subtle hints on my Instagram profile.

Ironically, I will then promote this post via Twitter once again. Life in 2016 has become complicated…

Insecure updates in Joomla before 3.6

Postby Hanno Böck via Hanno's blog »

In early April I reported security problems with the update process to the security contact of Joomla. While the issue has been fixed in Joomla 3.6, the communication process was far from ideal.

The issue itself is pretty simple: Up until recently Joomla fetched information about its updates over unencrypted and unauthenticated HTTP without any security measures.

The update process works in three steps. First, the Joomla backend fetches a file list.xml that contains information about current versions. If a newer version than the one installed is found, the user gets a button that allows him to update Joomla. The file list.xml references a URL for each version with further information about the update, called extension_sts.xml. Interestingly, this file is fetched over HTTPS, while - in version 3.5 - the file list.xml is not. However this does not help, as the attacker can already intervene at the first step and serve a malicious list.xml that references whatever he wants. extension_sts.xml in turn contains a download URL for a zip file that holds the update.

Exploiting this as a man-in-the-middle attacker is trivial: requests to the update server need to be redirected to an attacker-controlled host. The attacker can then place his own list.xml, which will reference his own extension_sts.xml, which will contain a link to a backdoored update. I have created a trivial proof of concept for this (just place it on the HTTP host that the requests get redirected to).
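The chain can be illustrated with a short sketch (the XML element and attribute names below are illustrative, not Joomla's actual schema): everything the updater ends up trusting flows from that first unauthenticated file.

```python
import xml.etree.ElementTree as ET

# Both files come from hosts the network attacker controls after the redirect.
ATTACKER_LIST_XML = """<updates>
  <extension version="999.0" detailsurl="http://evil.example/extension_sts.xml"/>
</updates>"""

ATTACKER_DETAILS_XML = """<updates>
  <update>
    <version>999.0</version>
    <downloadurl>http://evil.example/backdoored-update.zip</downloadurl>
  </update>
</updates>"""

def pick_download_url(list_xml: str, details_xml: str) -> str:
    # Step 1: the updater reads list.xml (over plain HTTP in Joomla <= 3.5).
    details_url = ET.fromstring(list_xml).find("extension").get("detailsurl")
    # Step 2: it follows detailsurl - but the attacker already chose that URL,
    # so fetching it over HTTPS at this point changes nothing.
    # Step 3: it downloads whatever zip the details file names.
    return ET.fromstring(details_xml).find(".//downloadurl").text

print(pick_download_url(ATTACKER_LIST_XML, ATTACKER_DETAILS_XML))
# -> http://evil.example/backdoored-update.zip
```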

I think it should be obvious that software updates are a security-sensitive area and need to be protected. Using HTTPS is one way of doing that; using some kind of cryptographic signature system is another. Unfortunately, common web applications seem to be learning this only slowly. Drupal only switched to HTTPS updates earlier this year. It's probably worth checking whether other web applications with integrated update processes are secure (Wordpress is secure fwiw).
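A minimal sketch of the HTTPS half of that advice: refuse plain-HTTP update channels outright and let the TLS layer authenticate the server. The URL below is a placeholder, and a real updater should additionally verify signatures on the downloaded files rather than trust the transport alone.

```python
import ssl
import urllib.request

def fetch_update_manifest(url: str) -> bytes:
    """Fetch an update manifest, insisting on an authenticated channel."""
    if not url.startswith("https://"):
        raise ValueError("refusing unauthenticated update channel: " + url)
    # The default context verifies the certificate chain and the hostname.
    ctx = ssl.create_default_context()
    with urllib.request.urlopen(url, context=ctx) as resp:
        return resp.read()

# fetch_update_manifest("http://update.example.org/list.xml")  raises ValueError
```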

Now here's how the Joomla developers handled this issue: I contacted Joomla via their webpage on April 6th. Their webpage form didn't have a way to attach files, so I offered them to contact me via email so I could send them the proof of concept. I got a reply to that shortly after asking for it. This was the only communication from their side. Around two months later, on June 14th, I asked about the status of this issue and warned that I would soon publish it if I don't get a reaction. I never got any reply.

In the meantime Joomla had published beta versions of the then-upcoming version 3.6. I checked one of them and noticed that they had changed the update URL from HTTP to HTTPS. So while they weren't communicating with me, a fix seemed to be on its way. I then found a pull request and a Github discussion that had started even before I first contacted them. Joomla 3.6 was released recently, so the issue is fixed. However, the release announcement doesn't mention it.

So all in all I contacted them about a security issue they were already in the process of fixing. The problem itself is therefore solved. But the lack of communication about the issue certainly doesn't cast a good light on Joomla's security process.

Cybercrime Overtakes Traditional Crime in UK

Postby BrianKrebs via Krebs on Security »

In a notable sign of the times, cybercrime has now surpassed all other forms of crime in the United Kingdom, the nation’s National Crime Agency (NCA) warned in a new report. It remains unclear how closely the rest of the world tracks the U.K.’s experience, but the report reminds readers that the problem is likely far worse than the numbers suggest, noting that cybercrime is vastly under-reported by victims.

The NCA’s Cyber Crime Assessment 2016, released July 7, 2016, highlights the need for stronger law enforcement and business partnership to fight cybercrime. According to the NCA, cybercrime emerged as the largest proportion of total crime in the U.K., with “cyber enabled fraud” making up 36 percent of all crime reported, and “computer misuse” accounting for 17 percent.

One explanation for the growth of cybercrime reports in the U.K. may be that the Brits are getting better at tracking it. The report notes that the U.K. Office of National Statistics only began including cybercrime for the first time last year in its annual Crime Survey for England and Wales.

“The ONS estimated that there were 2.46 million cyber incidents and 2.11 million victims of cyber crime in the U.K. in 2015,” the report’s authors wrote. “These figures highlight the clear shortfall in established reporting, with only 16,349 cyber dependent and approximately 700,000 cyber-enabled incidents reported to Action Fraud over the same period.”

The report also focuses on the increasing sophistication of organized cybercrime gangs that develop and deploy targeted, complex malicious software — such as Dridex and Dyre, which are aimed at emptying consumer and business bank accounts in the U.K. and elsewhere.

Avivah Litan, a fraud analyst with Gartner Inc., said cyber fraudsters in the U.K. bring their best game when targeting U.K. banks, which generally require far more stringent customer-facing security measures than U.S. banks — including smart cards and one-time tokens.

“I’m definitely hearing more about advanced attacks on U.K. banks than in the U.S.,” Litan said, adding that the anti-fraud measures put in place by U.K. banks have forced cybercriminals to focus more on social engineering U.K. retail and commercial banking customers.

Litan said if organized cybercrime gangs prefer to pick on U.K. banks, businesses and consumers, it may have more to do with convenience for the fraudsters than anything else. After all, she said, London is just two time zones behind Moscow, whereas the closest time zone in the U.S. is 7 hours behind.

“In most cases, the U.K. banks are pretty close to the fraudster’s own time zone, it’s a language the criminals can speak, and they’ve studied the banks’ systems up close and know how to get around security controls,” Litan said. “Just because you have more fraud controls doesn’t mean the fraudsters can’t beat them, it just forces the [crooks] to stay on top of their game. Why would you want to stay up all night doing online fraud against banks in the U.S. when you’d rather be out drinking with your buddies?”

The report observes that “despite the growth in scale and complexity of cyber crime and intensifying attacks, visible damage and losses have not (yet) been large enough to impact long term on shareholder value. The UK has yet to experience a cyber attack on business as damaging and publicly visible as the attack on the Target US retail chain.”

Although it would likely be difficult for a large, multinational European company to hide a breach similar in scope to that of the 2013 breach at Target, European nations generally have not had to adhere to the same data breach disclosure laws that are currently on the books in nearly every U.S. state — laws which prompt multiple U.S. companies each week to publicly acknowledge when they’ve suffered data breaches.

However, under the new European Union General Data Protection Regulation, companies that do business in Europe or with European customers will need to disclose “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to, personal data transmitted, stored or otherwise processed.”

It may be some time yet before U.K. and European businesses start coming forward about data breaches: For better or worse, the GDPR requirements don’t go into full effect for two more years.

Hat tip to Trend Micro’s blog as the inspiration for this post.

The Value of a Hacked Company

Postby BrianKrebs via Krebs on Security »

Most organizations only grow in security maturity the hard way — that is, from the intense learning that takes place in the wake of a costly data breach. That may be because so few company leaders really grasp the centrality of computer and network security to the organization’s overall goals and productivity, and fewer still have taken an honest inventory of what may be at stake in the event that these assets are compromised.

If you’re unsure how much of your organization’s strategic assets may be intimately tied up with all this technology stuff, ask yourself what would be of special worth to a network intruder. Here’s a look at some of the key corporate assets that may be of interest and value to modern bad guys.

This isn’t meant to be an exhaustive list; I’m sure we can all think of other examples, and perhaps if I receive enough suggestions from readers I’ll update this graphic. But the point is that whatever paltry monetary value the cybercrime underground may assign to these stolen assets individually, they’re each likely worth far more to the victimized company — if indeed a price can be placed on them at all.

In years past, most traditional, financially-oriented cybercrime was opportunistic: That is, the bad guys tended to focus on getting in quickly, grabbing all the data that they knew how to easily monetize, and then perhaps leaving behind malware on the hacked systems that abused them for spam distribution.

These days, an opportunistic, mass-mailed malware infection can quickly and easily morph into a much more serious and sustained problem for the victim organization (just ask Target). This is partly because many of the criminals who run large spam crime machines responsible for pumping out the latest malware threats have grown more adept at mining and harvesting stolen data.

That data mining process involves harvesting and stealthily testing interesting and potentially useful usernames and passwords stolen from victim systems. Today’s more clueful cybercrooks understand that if they can identify compromised systems inside organizations that may be sought-after targets of organized cybercrime groups, those groups might be willing to pay handsomely for such ready-made access.

It’s also never been easier for disgruntled employees to sell access to their employer’s systems or data, thanks to the proliferation of open and anonymous cybercrime forums on the Dark Web that serve as a bustling marketplace for such commerce. In addition, the past few years have seen the emergence of several very secretive crime forums wherein members routinely solicited bids regarding names of people at targeted corporations that could serve as insiders, as well as lists of people who might be susceptible to being recruited or extorted.

The sad truth is that far too many organizations spend only what they have to on security, which is often to meet some kind of compliance obligation such as HIPAA to protect healthcare records, or PCI certification to be able to handle credit card data, for example. However, real and effective security is about going beyond compliance — by focusing on rapidly detecting and responding to intrusions, and constantly doing that gap analysis to identify and shore up your organization’s weak spots before the bad guys can exploit them.

How to fashion a cybersecurity focus beyond mere compliance. Source: PWC on NIST framework.
Those weak spots very well may be your users, by the way. A number of security professionals I know and respect claim that security awareness training for employees doesn’t move the needle much. These naysayers note that there will always be employees who will click on suspicious links and open email attachments no matter how much training they receive. While this is generally true, such security training and evaluation at least offers the employer a better sense of which employees may need heavier monitoring on the job and perhaps even additional computer and network restrictions.

If you help run an organization, consider whether the leadership is investing enough to secure everything that’s riding on top of all that technology powering your mission: Chances are there’s a great deal more at stake than you realize.

Organizational leaders in search of a clue about how to increase both their security maturity and the resiliency of all their precious technology stuff could do far worse than to start with the Cybersecurity Framework developed by the National Institute of Standards and Technology (NIST), the federal agency that works with industry to develop and apply technology, measurements, and standards. This primer (PDF) from PWC does a good job of explaining why the NIST Framework may be worth a closer look.

Image: PWC.
If you liked this post, you may enjoy the other two posts in this series — The Scrap Value of a Hacked PC and The Value of a Hacked Email Account.

Adobe, Microsoft Patch Critical Security Bugs

Postby BrianKrebs via Krebs on Security »

Adobe has pushed out a critical update to plug at least 52 security holes in its widely-used Flash Player browser plugin, and another update to patch holes in Adobe Reader. Separately, Microsoft released 11 security updates to fix more than 40 flaws in Windows and related software.

First off, if you have Adobe Flash Player installed and haven’t yet hobbled this insecure program so that it runs only when you want it to, you are playing with fire. It’s bad enough that hackers are constantly finding and exploiting zero-day flaws in Flash Player before Adobe even knows about the bugs.

The bigger issue is that Flash is an extremely powerful program that runs inside the browser, which means users can compromise their computer just by browsing to a hacked or malicious site that targets unpatched Flash flaws.

The smartest option is probably to ditch this insecure program once and for all and significantly increase the security of your system in the process. I’ve got more on that approach — as well as slightly less radical solutions — in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent versions of Flash should be available from this Flash distribution page or the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). Chrome and IE should auto-install the latest Flash version on browser restart.

Happily, Adobe has delayed plans to stop distributing direct download links to its Flash Player program. The company had said it would decommission the direct download page on June 30, 2016, but the latest, patched Flash version for Windows and Mac systems is still available there. The wording on the site has been changed to indicate the download links will be decommissioned “soon.”

Adobe’s advisory on the Flash flaws is here. The company also released a security update that addresses at least 30 security holes in Adobe Reader. The latest version of Reader for most Windows and Mac users is v. 15.017.20050.

Six of the 11 patches Microsoft issued this month earned its most dire “critical” rating, which Microsoft assigns to software bugs that can be exploited to remotely commandeer vulnerable machines with little to no help from users, save from perhaps browsing to a hacked or malicious site.

In fact, most of the vulnerabilities Microsoft fixed this Patch Tuesday are in the company’s Web browsers — i.e., Internet Explorer (15 vulnerabilities) and its newer Edge browser (13 flaws). Both patches address numerous browse-and-get-owned issues.

Another critical patch from Redmond tackles problems in Microsoft Office that could be exploited through poisoned Office documents.

For further breakdown on the patches this month from Adobe and Microsoft, check out these blog posts from security vendors Qualys and Shavlik. And as ever, if you encounter any problems downloading or installing any of the updates mentioned above please leave a note about your experience in the comments below.

Serial Swatter, Stalker and Doxer Mir Islam Gets Just 1 Year in Jail

Postby BrianKrebs via Krebs on Security »

Mir Islam, a 21-year-old Brooklyn man who pleaded guilty to an impressive array of cybercrimes including cyberstalking, “doxing” and “swatting” celebrities and public officials (as well as this author), was sentenced in federal court today to two years in prison. Unfortunately, thanks to time served in this and other cases, Islam will only see a year of jail time in connection with some fairly heinous assaults that are becoming all too common.

While Islam’s sentence fell well short of the government’s request for punishment, the case raises novel legal issues as to how federal investigators intend to prosecute ongoing cases involving swatting — an extremely dangerous prank in which police are tricked into responding with deadly force to a phony hostage crisis or bomb scare at a residence or business.

Mir Islam, at his sentencing hearing today. Sketches copyright by Hennessy / Yours Truly is pictured in the blue shirt behind Islam.
On March 14, 2013, Islam and a group of as-yet-unnamed co-conspirators used a teletypewriter (TTY) service for the deaf to relay a message to our local police department stating that there was an active hostage situation going on at our modest town home in Annandale, Va. Nearly a dozen heavily-armed officers responded to the call, forcing me out of my home at gunpoint and putting me in handcuffs before the officer in charge realized it was all a hoax.

At the time, Islam and his pals were operating a Web site called Exposed[dot]su, which sought to “dox” public officials and celebrities by listing the name, birthday, address, previous address, phone number and Social Security number of at least 50 public figures and celebrities, including First Lady Michelle Obama, then-FBI Director Robert Mueller, and then-Central Intelligence Agency Director John Brennan. The site also documented which of these celebrities and public figures had been swatted, including a raft of California celebrities and public figures, such as former California Governor Arnold Schwarzenegger, actor Ashton Kutcher, and performer Jay Z.

Exposed[dot]su was built with the help of identity information obtained and/or stolen from ssndob[dot]ru.
At the time, most media outlets covering the sheer amount of celebrity exposure at Exposed[dot]su focused on the apparently startling revelation that “if they can get this sensitive information on these people, they can get it on anyone.” But for my part, I was more interested in how they were obtaining this data in the first place. On March 13, 2013, KrebsOnSecurity featured a story — Credit Reports Sold for Cheap in the Underweb — which sought to explain how the proprietors of Exposed[dot]su had obtained the records for the public officials and celebrities from a Russian online identity theft service called ssndob[dot]ru.

I noted in that story that sources close to the investigation said the assailants were using data gleaned from the ssndob[dot]ru ID theft service to gather enough information so that they could pull credit reports on targets directly from annualcreditreport.com, a site mandated by Congress to provide consumers a free copy of their credit report annually from each of the three major credit bureaus.

Peeved that I’d outed his methods for doxing public officials, Islam helped orchestrate my swatting the very next day. Within the span of 45 minutes, KrebsOnSecurity.com came under a sustained denial-of-service attack which briefly knocked my site offline.

At the same time, my hosting provider received a phony letter from the FBI stating my site was hosting illegal content and needed to be taken offline. And, then there was the swatting which occurred minutes after that phony communique was sent.

All told, the government alleges that Islam swatted at least 19 other people, although only seven of the victims (or their representatives) showed up in court today to tell similarly harrowing stories (I was asked to but did not testify).

Security camera footage of Fairfax County police officers responding to my 2013 swatting incident.
Going into today’s sentencing hearing, the court advised that under the government’s sentencing guidelines Islam was facing between 37 and 46 months in prison for the crimes to which he’d pleaded guilty. But U.S. District Court Judge Randolph Moss seemed especially curious about the government’s rationale for charging Islam with conspiracy to transmit a threat to kidnap or harm using a deadly weapon.

Judge Moss said the claim raises a somewhat novel legal question: Can the government allege the use of deadly force when the perpetrator of a swatting incident did not actually possess a weapon?

Corbin Weiss, an assistant US attorney and a cybercrime coordinator with the U.S. Department of Justice, argued that in most of the swatting attacks Islam perpetrated he expressed to emergency responders that any responding officers would be shot or blown up. Thus, the government argued, Islam was using police officers as a proxy for assault with a deadly weapon by ensuring that responding officers would be primed to expect a suspect who was armed and openly hostile to police.

Islam’s lawyer argued that his client suffered from multiple psychological disorders, and that he and his co-conspirators orchestrated the swattings and the creation of exposed[dot]su out of a sense of “anarchic libertarianism,” bent on exposing government overreach on consumer privacy and use of force issues.

As if to illustrate his point, a swatting victim identified by the court only as Victim #4 was represented by Fairfax, Va. lawyer Mark Dycio. That particular victim did not wish to be named or show up in court, but follow-up interviews confirmed that Dycio was representing Wayne LaPierre, the executive vice president of the National Rifle Association.

According to Dycio, police responded to reports of a hostage situation at the NRA boss’s home just days after my swatting in March 2013. Impersonating LaPierre, Islam told police he had killed his wife and that he would shoot any officers responding to the scene. Dycio said police initially had difficulty identifying the object in LaPierre’s hand when he answered the door. It turned out to be a cell phone, but Dycio said police assumed it was a weapon and stripped the cell phone from his hands when entering his residence. The police could have easily mistaken the mobile phone for a weapon, Dycio said.

Another victim that spoke at today’s hearing was Stephen P. Heymann, an assistant U.S. attorney in Boston. Heymann was swatted because he helped prosecute the much-maligned case against the late Aaron Swartz, a computer programmer who committed suicide after the government by most estimations overstepped its bounds by charging him with hacking for figuring out an automated way to download academic journals from the Massachusetts Institute of Technology (MIT).

Heymann, whose disability requires him to walk with a cane, recounted the early morning hours of April 1, 2013, when police officers surrounded his home in response to a swatting attack launched by Islam on his residence. Heymann recalled worrying that officers responding to the phony claim might confuse his cane with a deadly weapon.

One of the victims represented by a proxy witness in today’s hearings was the wife of a SWAT team member in Arizona who recounted several tense hours hunkered down at the University of Arizona, while her husband joined a group of heavily-armed police officers who were responding to a phony threat about a shooter on the campus.

Not everyone had nightmare swatting stories that aligned neatly with Islam’s claims. A woman representing an anonymous “Victim #3” appeared in lieu of a cheerleader at the University of Arizona whom Islam admitted to cyberstalking for several months. When the victim stopped responding to Islam’s overtures, he phoned in a threat to the local police that a crazed gunman was on the loose on the University of Arizona campus.

According to Robert Sommerfeld, police commander for the University of Arizona, that 2013 swatting incident involved 54 responding officers, all of whom were prevented from responding to a real emergency as they moved from building to building and room to room at the university, searching for a fictitious assailant. Sommerfeld estimates that Islam’s stunt cost local responders almost $40,000, and virtually brought the business district surrounding the university to a standstill for the better part of the day.

Toward the end of today’s sentencing hearing, Islam — bearded, dressed in a blue jumpsuit and admittedly 75 pounds lighter than at the time of his arrest — addressed the court. Those in attendance who were hoping for an apology or some show of remorse from the accused were left wanting as the defendant proceeded to blame his crimes on multiple psychological disorders which he claimed were not being adequately addressed by the U.S. prison system. Not once did Islam offer an apology to his victims, nor did he express remorse for his actions.

“I didn’t expect to go as far as I did, but because of these disorders I felt I was invincible,” Islam told the court. “The mistakes I made before, I have to pay for that. I understand that.”

Sentences that noticeably depart from the government’s sentencing guidelines are grounds for appeal by the defendant, and Judge Moss today seemed reluctant to imprison Islam for the maximum 46 months allowed under the criminal statutes Islam had admitted to violating. Judge Moss also seemed to ignore the fact that Islam expressed exactly zero remorse for his crimes.

Central to the judge’s reluctance to sentence Islam to the statutory maximum penalty was Islam’s 2012 arrest in connection with a separate cybercrime sting orchestrated by the FBI called Operation Card Shop, in which federal agents created a fake cybercrime forum dedicated to credit card fraud called CarderProfit[dot]biz.

U.S. law enforcement officials in Washington, D.C. involved in prosecuting Islam for his swatting, doxing and stalking crimes were confident that Islam would be sentenced to at least two years in prison for trying to sell and buy stolen credit cards from federal agents in the New York case, thanks to a law that imposes a mandatory two-year sentence for crimes involving what the government terms as “aggravated identity theft.”

Much to the government’s chagrin, however, the New York judge in that case sentenced Islam to just one day in jail. But by his own admission, even while Islam was cooperating with federal prosecutors in New York he was busy orchestrating his swatting attacks and administering the Exposed[dot]su Web site.

Islam was re-arrested in September 2013 for violating the terms of his parole, and for the swatting and doxing attacks to which he pleaded guilty. But the government didn’t detain Islam in connection with those crimes until July 2015. Islam has been in federal detention since then, and Judge Moss seemed eager to ensure that this would count as time served against Islam’s sentence, meaning that Islam will serve just 12 months of his 24-month sentence before being released.

There is absolutely no question that we need to have a serious, national conversation about excessive use of force by police officers, as well as the over-militarization of local police forces nationwide.

However, no one should be excused for perpetrating these potentially deadly swatting hoaxes, regardless of the rationale. Judge Moss, in explaining his brief deliberation on arriving at Islam’s two-year (attenuated) sentence, said he hoped to send a message to others who would endeavor to engage in swatting attacks. In my estimation, today’s sentence sent the wrong message, and missed that mark by a mile.

Common filesystem I/O pitfalls

Postby Michał Górny via Michał Górny »

Filesystem I/O is one of the key elements of the standard library in many programming languages. Most of them derive it from the interfaces provided by the standard C library, potentially wrapped in some portability and/or OO sugar. Most of them share an impressive set of pitfalls for careless programmers.

In this article, I would like to briefly go over a few more or less common pitfalls that come to my mind.

Overwriting the file in-place

I will remember this one as the ‘setuptools screwup’ for quite some time. Consider the following snippet:

if not self.dry_run:
    f = open(target,"w"+mode)
This is the code that setuptools used to install scripts. At first glance, it looks good — and seems to work well, too. However, think of what happens if the file at target exists already.

The obvious answer would be: it is overwritten. The more commonly noticed pitfall here is that the old contents are discarded before the new ones are written. If the user happens to run the script before it is completely written, they will get unexpected results. If writes fail for some reason, the user will be left with a partially written new script.

While in the case of installations this is not very significant (after all, failure in middle of installation is never a good thing, mid-file or not), this becomes very important when dealing with data. Imagine that a program would update your data this way — and a failure to add new data (as well as unexpected program termination, power loss…) would instantly cause all previous data to be erased.

However, there is another problem with this concept. In fact, it does not strictly overwrite the file — it opens it in-place and implicitly truncates it. This causes more important issues in a few cases:

  • if the file is hard-linked to other files or is a symbolic link, then the contents of all the linked files are overwritten,
  • if the file is a named pipe, the program will hang waiting for the other end of the pipe to be open for reading,
  • other special files may cause other unexpected behavior.
This is exactly what happened in Gentoo. Package-installed script wrappers were symlinked to python-exec, and setuptools used by pip attempted to install new scripts on top of those wrappers. But instead of overwriting the wrappers, it overwrote python-exec and broke everything relying on it.

The lesson is simple: don’t overwrite files like this. The easy way around it is to unlink the file first — ensuring that any links are broken and special files are removed. The more correct way is to use a (safely created) temporary file, and use the atomic rename() call to replace the target with it (no unlinking needed then). However, it should be noted that rename() can fail — e.g. across filesystem boundaries — and fallback code with unlink and an explicit copy is then necessary.
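A minimal sketch of the temporary-file-plus-rename approach in Python (the function name is illustrative, not taken from setuptools):

```python
import os
import tempfile

def replace_atomically(target, data):
    # Create the temporary file safely in the target's directory, so
    # that rename() does not have to cross filesystem boundaries.
    dir_name = os.path.dirname(os.path.abspath(target))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, 'w') as f:
            f.write(data)
        # Atomic: readers see either the complete old file or the
        # complete new one, and any symlink, hardlink or special file
        # at target is replaced rather than written through.
        os.rename(tmp_path, target)
    except BaseException:
        os.unlink(tmp_path)
        raise
```

Note that mkstemp() creates the file with mode 0600; a real installer would adjust permissions before the rename, and fall back to unlink-and-copy when rename() fails.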

Path canonicalization

For some reason, many programmers have taken a fancy to canonicalizing paths. While canonicalization itself is not that bad, it’s easy to do it wrongly and cause a major headache. Let’s take a look at the following path:

/foo//../bar

You could say it’s ugly. It has a double slash, and a parent directory reference. It almost itches to canonicalize it to the prettier:

/bar

However, this path is not necessarily the same as the original.

For a start, let’s imagine that foo is actually a symbolic link to baz/ooka. In this case, its parent directory referenced by .. is actually /baz, not /, and the obvious canonicalization fails.

Furthermore, double slashes can be meaningful. For example, on Windows double slash (yes, yes, backslashes are used normally) would mean a network resource. In this case, stripping the adjacent slash would change the path to a local one.

So, if you are really into canonicalization, first make sure to understand all the rules governing your filesystem. On POSIX systems, you really need to take symbolic links into consideration — usually you start with the left-most path component and expand all symlinks recursively (you need to take into consideration that link target path may carry more symlinks). Once all symbolic links are expanded, you can safely start interpreting the .. components.
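The difference between purely textual canonicalization and symlink-aware resolution can be demonstrated with Python’s standard library (the directory layout mirrors the foo → baz/ooka example above):

```python
import os
import tempfile

# Build a tree where 'foo' is a symbolic link into 'baz'.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'baz', 'ooka'))
os.symlink(os.path.join('baz', 'ooka'), os.path.join(root, 'foo'))

path = os.path.join(root, 'foo', '..', 'file')

# normpath() works on the text alone: 'foo/..' simply cancels out.
textual = os.path.normpath(path)       # <root>/file

# realpath() expands the symlink first, so '..' refers to baz.
resolved = os.path.realpath(path)      # <root>/baz/file
```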

However, if you are going to do that, think of another path:

/usr/lib/foo

If you expand it on a common Gentoo old-style multilib system (where /usr/lib is a symlink to lib64), you’ll get:

/usr/lib64/foo

However, now imagine that the /usr/lib symlink is replaced with a directory, and the appropriate files are moved to it. At this point, the path recorded by your program is no longer correct, since it relies on a canonicalization done using a different directory structure.

To summarize: think twice before canonicalizing. While it may seem beneficial to have pretty paths or to use real filesystem paths, you may end up discarding the user’s preferences (if I set a symlink somewhere, I don’t want the program automagically switching to another path). If you really insist on it, consider all the consequences and make sure you do it correctly.

Relying on xattr as an implementation for ACL/caps

Since common C libraries do not provide proper file copying functions, many people attempted to implement their own with better or worse results. While copying the data is a minor problem, preserving the metadata requires a more complex solution.

The simpler programs focused on copying the properties retrieved via stat() — modes, ownership and times. The more correct ones added also support for copying extended attributes (xattrs).

Now, it is a known fact that Linux filesystems implement many metadata extensions using extended attributes — ACLs, capabilities, security contexts. Sadly, this causes many people to assume that copying extended attributes is guaranteed to copy all of that extended metadata as well. This is a bad assumption to make, even though it is correct on Linux. It will cause your program to work fine on Linux but silently fail to copy ACLs on other systems.

Therefore: always use explicit APIs, and never rely on implementation details. If you want to work on ACLs, use the ACL API (provided by libacl on Linux). If you want to use capabilities, use the capability API (libcap or libcap-ng).

Using incompatible APIs interchangeably

Now for something less common. There are at least three different file locking mechanisms on Linux — the somewhat portable, non-standardized flock() function, the POSIX lockf() function, and the (also POSIX) fcntl() locking commands. The Linux manpages note that lockf() is commonly implemented on top of fcntl() locking. However, this is not guaranteed, and mixing the different mechanisms can produce unpredictable results on different systems.
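The safe approach is to pick one mechanism and use it consistently. For instance, sticking to fcntl()-style locking via Python’s fcntl module (the lock file path here is arbitrary):

```python
import fcntl
import os
import tempfile

lock_path = os.path.join(tempfile.mkdtemp(), 'app.lock')
fd = os.open(lock_path, os.O_RDWR | os.O_CREAT, 0o600)
try:
    # POSIX advisory lock; LOCK_NB fails immediately instead of
    # blocking when another process already holds the lock.
    fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    # ... perform the protected file operations here ...
finally:
    fcntl.lockf(fd, fcntl.LOCK_UN)
    os.close(fd)
```

Remember these locks are advisory only: they coordinate cooperating programs but do not stop anyone from ignoring them.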

Dealing with the two standard file APIs is even more curious. On one hand, we have high-level stdio interfaces including FILE* and DIR*. On the other, we have all the fd-oriented interfaces from unistd. Now, POSIX officially supports converting between the two — using fileno(), dirfd(), fdopen() and fdopendir().

However, it should be noted that the result of such a conversion reuses the same underlying file descriptor (rather than duplicating it). Two major points, however:

  1. There is no well-defined way to destroy a FILE* or DIR* without closing the descriptor, nor any guarantee that fclose() or closedir() will work correctly on a closed descriptor. Therefore, you should not create more than one FILE* (or DIR*) for a fd, and if you have one, always close it rather than the fd itself.
  2. The stdio streams are explicitly stateful, buffered and have some extra magic on top (like ungetc()). Once you start using stdio I/O operations on a file, you should not try to use low-level I/O (e.g. read()) or the other way around since the results are pretty much undefined. Supposedly fflush() + rewind() could help but no guarantees.
So, if you want to do I/O, decide whether you want stdio or fd-based I/O. Convert between the two types only when you need to use additional routines not available for the other one; but if those routines involve some kind of content-related operations, avoid using the other type for I/O. If you need to do separate I/O, use dup() to get a clone of the file descriptor.
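Python’s os.fdopen() and os.dup() behave analogously, so the same rules can be sketched there (the data written is arbitrary):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b'hello world\n')
os.lseek(fd, 0, os.SEEK_SET)

# os.fdopen() wraps the SAME descriptor: closing the wrapper closes
# the fd as well, so close the wrapper and never the raw fd.
f = os.fdopen(fd, 'rb')
data = f.read()

# For a descriptor with an independent lifetime, duplicate it first.
# (A dup()ed fd still shares the file offset with the original.)
fd2 = os.dup(f.fileno())
f.close()                      # closes fd; fd2 stays valid
os.lseek(fd2, 0, os.SEEK_SET)
r = os.read(fd2, 5)            # raw I/O, after the wrapper is gone
os.close(fd2)
```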

To summarize: avoid combining different APIs. If you really insist on doing that, check if it is supported and what are the requirements for doing so. You have to be especially careful not to run into undefined results. And as usual — remember that different systems may implement things differently.

Atomicity of operations

For the end, something commonly known, and even more commonly repeated — race conditions due to non-atomic operations. Long story short: all the unexpected results arising from the assumption that nothing can happen to a file between successive function calls.

I think the most common mistake is the ‘does the file exist?’ problem. It is awfully common for programs to use some wrappers over stat() (like os.path.exists() in Python) to check if a file exists, and then immediately proceed with opening or creating it. For example:

import os

def do_foo(path):
    if not os.path.exists(path):
        return False

    f = open(path, 'r')
Usually, this will work. However, if the file gets removed between the precondition check and the open(), the program will raise an exception instead of returning False. For example, this can practically happen if the file is part of a large directory tree being removed via rm -r.

The double bug here can easily be fixed by introducing explicit error handling, which also renders the precondition check unnecessary:

import errno

def do_foo(path):
    try:
        f = open(path, 'r')
    except OSError as e:
        if e.errno == errno.ENOENT:
            return False
        raise
The new snippet ensures that the file will be opened if it exists at the point of the open() call. If it does not exist, errno will indicate the appropriate error; for other errors, we re-raise the exception. If the file is removed after open(), the fd will still be valid.

We could extend this to a few generic rules:

  1. Always check for errors, even if you asserted that they should not happen. Proper error checks make many (unsafe) precondition checks unnecessary.
  2. Open file descriptors will remain valid even when the underlying files are removed; paths can become invalid (i.e. referencing non-existing files or directories) or start pointing to another file (created using the same path). So, prefer opening the file as soon as necessary, and prefer fstat(), fchown(), futimes()… over stat(), chown(), utimes().
  3. Open directory descriptors will continue to reference the same directory even when the underlying path is removed or replaced; paths may start referencing another directory. When performing operations on multiple files in a directory, prefer opening the directory and using openat(), unlinkat()… However, note that the directory can still be removed and therefore further calls may return ENOENT.
  4. If you need to atomically overwrite a file with another one, use rename(). To atomically create a new file, use open() with O_EXCL. Usually, you will want to use the latter to create a temporary file, then the former to replace the actual file with it.
  5. If you need to use temporary files, use mkstemp() or mkdtemp() to create them securely. The former can be used when you only need an open fd (the file is removed immediately), the latter if you need visible files. If you want to use tmpnam(), put it in a loop and try opening with O_EXCL to ensure you do not accidentally overwrite something.
  6. When you can’t guarantee atomicity, use locks to prevent simultaneous operations. For file operations, you can lock the file in question. For directory operations, you can create and lock lock files (however, do not rely on existence of lock files alone). Note though that the POSIX locks are non-mandatory — i.e. only prevent other programs from acquiring the lock explicitly but do not block them from performing I/O ignoring the locks.
  7. Think about the order of operations. If you create a world-readable file, and afterwards chmod() it, it is possible for another program to open it before the chmod() and retain the open handle while secure data is being written. Instead, restrict the access via mode parameter of open() (or umask()).
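As a small sketch of rules 4 and 7 combined in Python (the function and file names are illustrative):

```python
import os
import tempfile

def create_private(path, data):
    # O_EXCL makes creation atomic: open() fails with EEXIST instead
    # of silently reusing a file someone else planted at this path
    # (rule 4). The 0o600 mode restricts access from the very start,
    # leaving no window between creation and a later chmod() (rule 7).
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, 'wb') as f:
        f.write(data)

# Usage sketch:
target = os.path.join(tempfile.mkdtemp(), 'credentials')
create_private(target, b'secret data\n')
```

Calling create_private() a second time on the same path raises FileExistsError rather than clobbering the existing file.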

FreeBSD 11.0-BETA1 Available

Postby Webmaster Team via FreeBSD News Flash »

The first BETA build for the FreeBSD 11.0 release cycle is now available. ISO images for the amd64, armv6, i386, aarch64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.

Lab::Measurement 3.512 released

Postby Andreas via the dilfridge blog »

Immediately at the heels of the previous post, I've just uploaded Lab::Measurement 3.512. It fixes some problems in the Yokogawa GS200 driver introduced in the previous version. Enjoy!

1,025 Wendy’s Locations Hit in Card Breach

Postby BrianKrebs via Krebs on Security »

At least 1,025 Wendy’s locations were hit by a malware-driven credit card breach that began in the fall of 2015, the nationwide fast-food chain said Thursday. The announcement marks a significant expansion in a data breach that is costing banks and credit unions plenty: Previously, Wendy’s had said the breach impacted fewer than 300 locations.

An ad for Wendy’s (in Russian).
On January 27, 2016, this publication was the first to report that Wendy’s was investigating a card breach. In mid-May, the company announced in its first quarter financial statement that the fraud impacted just five percent of stores. But in a statement last month, Wendy’s warned that its estimates about the size and scope of the breach were about to get much meatier.

Wendy’s has published a page that breaks down the breached restaurant locations by state.

Wendy’s is placing blame for the breach on an unnamed third-party that serves franchised Wendy’s locations, saying that a “service provider” that had remote access to the compromised cash registers got hacked.

For better or worse, countless restaurant franchises outsource the management and upkeep of their point-of-sale systems to third party providers, most of whom use remote administration tools to access and manage the systems remotely over the Internet.

Unsurprisingly, the attackers have focused on hacking the third-party providers and have had much success with this tactic. Very often, the hackers just guess at the usernames and passwords needed to remotely access point-of-sale devices. But as more POS vendors start to tighten up on that front, the criminals are shifting their focus to social engineering attacks — that is, manipulating employees at the targeted organization into opening the backdoor for the attackers.

As detailed in Slicing Into a Point-of-Sale Botnet, hackers responsible for stealing millions of customer credit card numbers from pizza chain Cici’s Pizza used social engineering attacks to trick employees at third party point-of-sale providers into installing malicious software.

Perhaps predictably, Wendy’s has been hit with at least one class action lawsuit over the breach. First Choice Federal Credit Union reportedly alleged that the data breach could have been prevented or at least lessened had the company acted faster. That’s difficult to argue against: The company first learned about the breach in January 2016, and stores were still being milked of customer card data six months later.

More lawsuits are likely to come. As noted in Credit Unions Feeling Pinch in Wendy’s Breach, the CEO of the National Association of Federal Credit Unions believes the losses their members have suffered from cards compromised at Wendy’s locations so far eclipse those that came in the wake of the huge card breaches at Target and Home Depot.

People who are in the habit of regularly eating at or patronizing a company that is in the midst of responding to a data breach pose a frustrating challenge for smaller banks and credit unions that fight card fraud mainly by issuing customers a new card. Not long after a new card is shipped, these customers turn around and unwittingly re-compromise their cards, prompting institutions to weigh the costs of continuously re-issuing versus the chances that the cards will be sold in the underground and used for fraud.

A number of readers have written in this past week apparently concerned about my whereabouts and well-being. It’s nice to be missed; I took a few days off for a much-needed staycation and to visit with friends and family. I’m writing this post because some stories you just have to see through to the bitter end. But fear not: KrebsOnSecurity will be back in full swing next week!

The fire remnants

Postby Dan Langille via Dan Langille's Other Diary »

My neighbours had a fire in July 2015. Today, the dumpster left. They were not home today when the dumpster was hauled off. I took these photos and the video for their enjoyment. * Dumpster loading * Dumpster being driven away If you want the copies of these, I have them on my phone and [...]

New FreeBSD Core Team elected

Postby Webmaster Team via FreeBSD News Flash »

The FreeBSD Project is pleased to announce the completion of the 2016 Core Team election. The FreeBSD Core Team acts as the project's "board of directors" and is responsible for approving new src committers, resolving disputes between developers, appointing sub-committees for specific purposes (security officer, release engineering, port managers, webmaster, etc ...), and making any other administrative or policy decisions as needed. The Core Team has been elected by FreeBSD developers every two years since 2000.

Lab::Measurement 3.511 and Lab::VISA 3.04 released

Postby Andreas via the dilfridge blog »

It's been some time since the last Lab::Measurement blog post; we are at Lab::Measurement version 3.511 by now. Here are the most important changes since 3.31:
  • One big addition "under the hood", which is still in flux, was a generic input/output framework for status and error messages. 
  • The device cache code has seen updates and bugfixes. 
  • Agilent multimeter drivers have been cleaned up and rewritten. 
  • Minimal support has been added for the Agilent E8362A network analyzer.
  • The Oxford Instruments IPS driver has been sprinkled with consistency checks and debug output, the ITC driver has seen bugfixes.
  • Controlling an Oxford Instruments Triton system is work in progress.
  • The Stanford Research SR830 lock-in now supports using the auxiliary inputs as "multimeters" and the auxiliary outputs as voltage sources.
  • Support for the Keithley 2400 multimeter, the Lakeshore 224 temperature monitor, and the Rohde&Schwarz SMB100A RF source has been added.
  • Work on generic SCPI parsing utilities has started.
  • Sweeps can now also vary pulse length and pulse repetition rate; the "time sweep" has been enhanced.
  • Test routines (both with instruments attached and software-only) are work in progress.
 Lab::VISA has also seen a new bugfix release, 3.04. Changes since version 3.01 are:
  • Support for VXI_SERVANT resources has been removed; these are NI-specific and not available in 64bit VISA.
  • The documentation, especially on compiling and installing on recent Windows installations, has been improved. No need for Visual Studio and similar giga-downloads anymore!
  • Compiling on both 32bit and 64bit Windows 10 should now work without manual modifications in the

WireGuard, Secure Network Tunnel with Modern Crypto

Postby via Nerdling Sapple »

After quite a bit of hard work, I've at long last launched WireGuard, a secure network tunnel that uses modern crypto, is extremely fast, and is easy and pleasurable to use. You can read about it at the website, but in short, it's based on the simple idea of an association between public keys and permitted IP addresses. Along the way it uses some nice crypto tricks to achieve its goal. For performance it lives in the kernel, though cross-platform versions in safe languages like Rust, Go, etc. are on their way.

The launch was wildly successful. About 10 minutes after I ran /etc/init.d/nginx restart, somebody had already posted it to Hacker News and Twitter, and within 24 hours the site had received visits from 150,000 unique IPs. The reception has been very warm, and the mailing list has already started to get some patches. Distro maintainers have stepped up and packages are being prepared. There are currently packages for Gentoo, Arch, Debian, and OpenWRT, which is very exciting.

Although it's still experimental and not yet in final stable/secure form, I'd be interested in general feedback from experimenters and testers.


Beamforming in PulseAudio

Postby Arun via Arun Raghavan »

In case you missed it — we got PulseAudio 9.0 out the door, with the echo cancellation improvements that I wrote about. Now is probably a good time for me to make good on my promise to expand upon the subject of beamforming.

As with the last post, I’d like to shout out to the wonderful folks at Aldebaran Robotics who made this work possible!


Beamforming as a concept is used in various aspects of signal processing including radio waves, but I’m going to be talking about it only as applied to audio. The basic idea is that if you have a number of microphones (a mic array) in some known arrangement, it is possible to “point” or steer the array in a particular direction, so sounds coming from that direction are made louder, while sounds from other directions are rendered softer (attenuated).

Practically speaking, it should be easy to see the value of this on a laptop, for example, where you might want to focus a mic array to point in front of the laptop, where the user probably is, and suppress sounds that might be coming from other locations. You can see an example of this in the webcam below. Notice the grilles on either side of the camera — there is a microphone behind each of these.

Webcam with 2 mics
This raises the question of how this effect is achieved. The simplest approach is called “delay-sum beamforming”. The key idea is that when we steer an array of microphones at a particular angle, the sound we want to capture reaches each microphone at a slightly different time. This is illustrated below. The image is taken from this great article describing the principles and math in a lot more detail.

Delay-sum beamforming
In this figure, you can see that the sound from the source we want to listen to reaches the top-most microphone slightly before the next one, which in turn captures the audio slightly before the bottom-most microphone. If we know the distance between the microphones and the angle to which we want to steer the array, we can calculate the additional distance the sound has to travel to each microphone.

The speed of sound in air is roughly 340 m/s, and thus we can also calculate how much of a delay occurs between the same sound reaching each microphone. The signal at the first two microphones is delayed using this information, so that we can line up the signal from all three. Then we take the sum of the signal from all three (actually the average, but that’s not too important).

The signal from the direction we’re pointing in is going to be strongly correlated, so it will turn out loud and clear. Signals from other directions will end up being attenuated because they will only occur in one of the mics at a given point in time when we’re summing the signals — look at the noise wavefront in the illustration above as an example.
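
The delay-sum scheme described above can be sketched in a few lines of NumPy. This is my own illustration for a uniform linear array, not the code from the PulseAudio module; the function name and the geometry conventions are assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 340.0  # m/s, as used in the text

def delay_sum(signals, mic_spacing, angle_deg, sample_rate):
    """Steer a uniform linear array by delaying each channel, then averaging.

    signals: (n_mics, n_samples) array, one row per microphone
    mic_spacing: distance between adjacent mics, in metres
    angle_deg: steering angle (0 = straight ahead / broadside)
    """
    n_mics, n_samples = signals.shape
    theta = np.radians(angle_deg)
    out = np.zeros(n_samples)
    for m in range(n_mics):
        # Extra distance the wavefront travels to reach mic m,
        # and the corresponding delay in whole samples.
        extra_dist = m * mic_spacing * np.sin(theta)
        delay = int(round(extra_dist / SPEED_OF_SOUND * sample_rate))
        # Shift the channel to line it up with the first mic
        # (np.roll wraps around at the edges, which is fine for a sketch).
        out += np.roll(signals[m], -delay)
    return out / n_mics  # the average mentioned in the text
```

Signals arriving from the steering direction line up and add coherently; signals from other directions are misaligned and partially cancel, which is exactly the attenuation described above.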


(this section is a bit more technical than the rest of the article, feel free to skim through or skip ahead to the next section if it’s not your cup of tea!)

The devil is, of course, in the details. Given the microphone geometry and steering direction, calculating the expected delays is relatively easy. We capture audio at a fixed sample rate — let’s assume this is 32000 samples per second, or 32 kHz. That translates to one sample every 31.25 µs. So if we want to delay our signal by 125µs, we can just add a buffer of 4 samples (4 × 31.25 = 125). Sound travels about 4.25 cm in that time, so this is not an unrealistic example.
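
For concreteness, the arithmetic above can be checked in a few lines of Python (the figures are the ones from the text; the variable names are mine):

```python
SAMPLE_RATE = 32000                   # 32 kHz capture rate
SPEED_OF_SOUND = 340.0                # m/s

sample_period_us = 1e6 / SAMPLE_RATE  # 31.25 µs per sample

# A 125 µs delay is a whole number of samples...
delay_us = 125.0
delay_samples = delay_us / sample_period_us  # 4 samples

# ...and in that time sound travels about 4.25 cm.
distance_cm = SPEED_OF_SOUND * delay_us * 1e-6 * 100
```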

Now, instead, assume the signal needs to be delayed by 80 µs. This translates to 2.56 samples. We’re working in the digital domain — the mic has already converted the analog vibrations in the air into digital samples that have been provided to the CPU. This means that our buffer delay can either be 2 samples or 3, not 2.56. We need another way to add a fractional delay (else we’ll end up with errors in the sum).

There is a fair amount of academic work describing methods to perform filtering on a sample to provide a fractional delay. One common way is to apply an FIR filter. However, to keep things simple, the method I chose was the Thiran approximation — the literature suggests that it performs the task reasonably well, and has the advantage of not having to spend a whole lot of CPU cycles first transforming to the frequency domain (which an FIR filter requires) (edit: converting to the frequency domain isn't necessary; thanks to the folks who pointed this out).
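
For illustration, here is a first-order Thiran allpass as a plain Python sketch. This is not PulseAudio's implementation, just the textbook difference equation; a first-order section only approximates fractional delays of roughly 0.5 to 1.5 samples well, so an integer buffer delay handles the rest:

```python
def thiran_fractional_delay(x, delay):
    """Approximate a fractional delay (in samples) with a first-order
    Thiran allpass filter. Being a cheap IIR filter, it needs no
    transform to the frequency domain.
    """
    a = (1.0 - delay) / (1.0 + delay)  # Thiran coefficient for order 1
    y = []
    x_prev = y_prev = 0.0
    for sample in x:
        # Difference equation: y[n] = a*x[n] + x[n-1] - a*y[n-1]
        out = a * sample + x_prev - a * y_prev
        y.append(out)
        x_prev, y_prev = sample, out
    return y
```

With delay = 1.0 the coefficient collapses to zero and the filter degenerates into an exact one-sample delay, which makes a handy sanity check.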

I’ve implemented all of this as a separate module in PulseAudio as a beamformer filter module.

Now it’s time for a confession. I’m a plumber, not a DSP ninja. My delay-sum beamformer doesn’t do a very good job. I suspect part of it is the limitation of the delay-sum approach, partly the use of an IIR filter (which the Thiran approximation is), and it’s also entirely possible there is a bug in my fractional delay implementation. Reviews and suggestions are welcome!

A Better Implementation

The astute reader has, by now, realised that we are already doing a bunch of processing on incoming audio during voice calls — I’ve written in the previous article about how the webrtc-audio-processing engine provides echo cancellation, automatic gain control, voice activity detection, and a bunch of other features.

Another feature that the library provides is — you guessed it — beamforming. The engineers at Google (who clearly are DSP ninjas) have a pretty good beamformer implementation, and this is also available via module-echo-cancel. You do need to configure the microphone geometry yourself (which means you have to manually load the module at the moment). Details are on our wiki (thanks to Tanu for that!).

How well does this work? Let me show you. The image below is me talking to my laptop, which has two microphones about 4cm apart, on either side of the webcam, above the screen. First I move to the right of the laptop (about 60°, assuming straight ahead is 0°). Then I move to the left by about the same amount (the second speech spike). And finally I speak from the center (a couple of times, since I get distracted by my phone).

The upper section represents the microphone input — you’ll see two channels, one corresponding to each mic. The bottom part is the processed version, with echo cancellation, gain control, noise suppression, etc. and beamforming.

WebRTC beamforming
You can also listen to the actual recordings …

… and the processed output.

Feels like black magic, doesn’t it?

Finishing thoughts

The webrtc-audio-processing-based beamforming is already available for you to use. The downside is that you need to load the module manually, rather than have this automatically plugged in when needed (because we don’t have a way to store and retrieve the mic geometry). At some point, I would really like to implement a configuration framework within PulseAudio to allow users to set configuration from some external UI and have that be picked up as needed.

Nicolas Dufresne has done some work to wrap the webrtc-audio-processing library functionality in a GStreamer element (and this is in master now). Adding support for beamforming to the element would also be good to have.

The module-beamformer bits should be a good starting point for folks who want to wrap their own beamforming library and have it used in PulseAudio. Feel free to get in touch with me if you need help with that.

FreeBSD on Jetson TK1

Postby gonzo via FreeBSD developer's notebook | All Things FreeBSD »

I finally got around to BSDifying my Jetson TK1. Here is a short summary of what is involved. And to save you some scrolling, here are the artifacts obtained from the whole ordeal:


First of all – my TK1 didn’t have U-Boot. The type of bootloader depends on the version of Linux4Tegra the TK1 comes with. Mine had L4T R19, with some kind of “not u-boot” bootloader. My first attempt was to use the tegrarcm tool. It uses libusb, so it’s possible to build it on FreeBSD with some elbow grease, but once I tried to run it, it gave me cryptic errors, and USB is not my strong skill, so I took the low road and installed an Ubuntu VM. For what it’s worth, I got the same kind of error on Ubuntu.

The next step was to use the official update procedure. Since I wasn’t going to boot Linux on the board, I didn’t need the sample rootfs. So the whole procedure was:

- Go to L4T R21.4 page
- Download Tegra124_Linux_R21.4.0_armhf.tbz2
- Unpack it
- Connect microUSB port on device to Linux VM
- Get the device into recovery mode: power cycle, press and hold the recovery button, press and release the power button, release the recovery button
- Run ./ jetson-tk1 mmcblk0p1; this should rewrite the eMMC flash on the board, and after a reboot you will get a u-boot prompt on the serial console


At this point you can boot FreeBSD on the TK1. I use netboot for most of my devices, so in this case it was: build and deploy world to /src/FreeBSD/tftproot/tk1, build and install the kernel to the same directory, copy /src/FreeBSD/tftproot/tk1/boot/kernel/kernel to kernel.TK1 in the tftproot directory, add an entry to the DHCP config, and restart the DHCP server. The entry looks like this:

host tk1 {
        hardware ethernet 00:04:4b:49:08:9e;
        filename "kernel.TK1";
        option root-path "/src/FreeBSD/tftproot/tk1";
        option root-opts "nolockd";
        option routers;
}
And you also need to add this to sys/arm/conf/JETSON-TK1 before building the kernel:

options        BOOTP
options        BOOTP_NFSROOT
options        BOOTP_COMPAT
options        BOOTP_NFSV3
On the device you just run “dhcp; bootelf” and voila – it just works.


Next step was to get ubldr running. I prefer suing ubldr because it gives more control over boot process accessible from booted FreeBSD system. ubldr requires U-Boot with API support, so I had to rebuild U-Boot from sources provided by nvidia with added #define CONFIG_API and all standard patches from sysutils/u-boot-* ports. Build procedure is standard:

export ARCH=arm
export CROSS_COMPILE=arm-linux-gnueabihf-
make jetson-tk1_config
make
It will generate multiple files; u-boot-dtb-tegra.bin is the one you want.

To reflash the board with a non-standard u-boot, run ./ -L /path/to/u-boot-dtb-tegra.bin jetson-tk1 mmcblk0p1
Back to ubldr. It was easy to build and load it. Build script:

export TARGET=arm
export TARGET_ARCH=armv6
export SRCROOT=/src/FreeBSD/wip
export MAKEOBJDIRPREFIX=/src/FreeBSD/obj
export MAKESYSPATH=$SRCROOT/share/mk

set -x
set -e

buildenv=`make -C $SRCROOT TARGET_ARCH=armv6 buildenvvars`
eval $buildenv make -C $SRCROOT/sys/boot -m $MAKESYSPATH obj
eval $buildenv make -C $SRCROOT/sys/boot -m $MAKESYSPATH clean
eval $buildenv make -C $SRCROOT/sys/boot -m $MAKESYSPATH UBLDR_LOADADDR=0x80600000 all

sudo cp /src/FreeBSD/obj/arm.armv6/src/FreeBSD/wip/sys/boot/arm/uboot/ubldr /src/FreeBSD/tftpboot/ubldr.TK1
Obviously, kernel.TK1 in the DHCP config needs to be replaced with ubldr.TK1. 0x80600000 is a value I came up with by looking at the u-boot default environment: something not high enough to overlap with the kernel, and not low enough to overlap with u-boot.

And that’s where thing got hairy. To load ubldr and then netboot kernel, you need to set u-boot env loaderdev variable first: setenv loaderdev net; saveenv. And then do the same thing as above: dhcp; bootelf. Unfortunately I got this:

## Starting application at 0x81000098 ...
Consoles: U-Boot console
Compatible U-Boot API signature found @0xffa3e410

FreeBSD/armv6 U-Boot loader, Revision 1.2
(, Mon Jun 27 19:59:22 PDT 2016)

DRAM: 2048MB
MMC: no card present
MMC Device 2 not found
MMC Device 3 not found
MMC: no card present
MMC: no card present
MMC: no card present
MMC: no card present
MMC: no card present
MMC: no card present
MMC Device 2 not found
Number of U-Boot devices: 3
U-Boot env: loaderdev='net'
Found U-Boot device: disk
Found U-Boot device: net
Booting from net0:
panic: arp: no response for

--> Press a key on the console to reboot <--
resetting ...
After some heavy thinking and code digging, the problem was narrowed down to the u-boot network driver drivers/net/rtl8169.c. Instead of returning 0 on success and a negative value on error, it returns the number of bytes sent on success and zero on error. This confused ubldr into thinking nothing was sent, so the recv part of the exchange was never invoked. After fixing this issue the kernel loaded just fine, but hung right after:

Using DTB compiled into kernel.
Kernel entry at 0x0x80800100...
Kernel args: (null)
Long story short - it was caused by the enabled D-Cache, so I had to add

to the u-boot config and go through the rebuild/reflash cycle again. After this, the whole boot chain went through right to the login prompt.

My next goal is to make the TK1 a self-contained box: get the base system installed on eMMC and use an attached SSD as a scratch disk for swap and builds.

Scientology Seeks Captive Converts Via Google Maps, Drug Rehab Centers

Postby BrianKrebs via Krebs on Security »

Fake online reviews generated by unscrupulous marketers blanket the Internet these days. Although online review pollution isn’t exactly a hot-button consumer issue, there are plenty of cases in which phony reviews may endanger one’s life or well-being. This is the story about how searching for drug abuse treatment services online could cause concerned loved ones to send their addicted, vulnerable friends or family members straight into the arms of the Church of Scientology.

As explained in last year’s piece, Don’t Be Fooled by Fake Online Reviews Part II, there are countless real-world services that are primed for exploitation online by marketers engaged in false and misleading “search engine optimization” (SEO) techniques. These shady actors specialize in creating hundreds or thousands of phantom companies online, each with different generic-sounding business names, addresses and phone numbers. The phantom firms often cluster around fake listings created in Google Maps — complete with numerous five-star reviews, pictures, phone numbers and Web site links.

The problem is that calls to any of these phony companies are routed back to the same crooked SEO entity that created them. That marketer in turn sells the customer lead to one of several companies that have agreed in advance to buy such business leads. As a result, many consumers think they are dealing with one company when they call, yet end up being serviced by a completely unrelated firm that may not have to worry about maintaining a reputation for quality and fair customer service.

Experts say fake online reviews are most prevalent in labor-intensive services that do not require the customer to come into the company’s offices but instead come to the consumer. These services include but are not limited to locksmiths, windshield replacement services, garage door repair and replacement technicians, carpet cleaning and other services that consumers very often call for immediate service.

As it happens, the problem is widespread in the drug rehabilitation industry as well. That became apparent after I spent just a few hours with Bryan Seely, the guy who literally wrote the definitive book on fake Internet reviews.

Perhaps best known for a stunt in which he used fake Google Maps listings to intercept calls destined for the FBI and U.S. Secret Service, Seely knows a thing or two about this industry: Until 2011, he worked for an SEO firm that helped to develop and spread some of the same fake online reviews that he is now helping to clean up.

More recently, Seely has been tracking a network of hundreds of phony listings and reviews that lead inquiring customers to fewer than a half dozen drug rehab centers, including Narconon International — an organization that promotes the theories of Scientology founder L. Ron Hubbard regarding substance abuse treatment and addiction.

As described in Narconon’s Wikipedia entry, Narconon facilities are known not only for attempting to win over new converts, but also for treating all drug addictions with a rather bizarre cocktail consisting mainly of vitamins and long hours in extremely hot saunas. The Wiki entry documents multiple cases of accidental deaths at Narconon facilities, where some addicts reportedly died from overdoses of vitamins or neglect:

“Narconon has faced considerable controversy over the safety and effectiveness of its rehabilitation methods,” the Wiki entry reads. “Narconon teaches that drugs reside in body fat, and remain there indefinitely, and that to recover from drug abuse, addicts can remove the drugs from their fat through saunas and use of vitamins. Medical experts disagree with this basic understanding of physiology, saying that no significant amount of drugs are stored in fat, and that drugs can’t be ‘sweated out’ as Narconon claims.”

Source: Seely Security.


Seely said he learned that the drug rehab industry was overrun with SEO firms when he began researching rehab centers in Seattle for a family friend who was struggling with substance abuse and addiction issues. A simple search on Google for “drug rehab Seattle” turned up multiple local search results that looked promising.

One of the top three results was for a business calling itself “Drug Rehab Seattle,” and while it lists a toll-free phone number, it does not list a physical address (NB: this is not always the case with fake listings, which just as often claim the street address of another legitimate business). A click on the organization’s listing claims the Web site – a legitimate drug rehab search service. However, the owners of say this listing is unauthorized and unaffiliated with

As documented in this Youtube video, Seely called the toll-free number in the Drug Rehab Seattle listing, and was transferred to a hotline that took down his name, number and insurance information and promised an immediate call back. Within minutes, Seely said, he received a call from a woman who said she represented a Seattle treatment center but was vague about the background of the organization itself. A little digging showed that the treatment center was run by Narconon.

“You’re supposed to be getting a local drug rehab in Seattle, but instead you get taken to a call center, which can be owned by any number of rehab facilities around the country that pay legitimate vendors for calls,” Seely said. “If you run a rehab facility, you have to get people in the doors to make money. The guy who created these fake listings figured out you can use Google Maps to generate leads, and it’s free.”

The phony rehab establishment listed here is the third listing, which includes no physical address and routes the caller to a referral network that sells leads to Narconon, among others.
Here’s the crux of the problem: When you’re at and you search for something that Google believes to be a local search, Google adds local business results on top of the organic search results — complete with listings and reviews associated with Google Maps. Consumers might not even read them, but reviews left for businesses in these listings heavily influence their search rankings. The more reviews a business has, Seely said, the closer it gets to the coveted Number One spot in the search rankings.

That #1 rank attracts the most calls by a huge margin, and it can mean huge profits: Many rehab facilities will pay hundreds of dollars for leads that may ultimately lead to a new patient. After all, some facilities can then turn around and bill insurance providers for tens of thousands of dollars per patient.


Curious if he could track down the company or individual behind the phony review that prompted a call from Narconon, Seely began taking a closer look at the reviews for the facility he called. One reviewer in particular stood out — one “John Harvey,” a Google user who clearly has a great deal of experience with rehab centers.

A click on John Harvey’s Google Plus profile showed he reviewed no fewer than 82 phantom drug treatment centers around the country, offering very positive 5-star reviews on all of them. A brief search for John Harvey online shows that the person behind the account is indeed a guy named John Harvey from Sacramento who runs an SEO company in Kailua, Hawaii called TopSeek Inc., which bills itself as a collection of “local marketing experts.”

A visit to the company’s Web site shows that Narconon is among four of TopSeek’s listed clients, all of which either operate drug rehab centers or are in the business of marketing drug rehab centers.

TopSeek Inc’s client list includes Narconon, a Scientology front group that seeks to recruit new members via a network of unorthodox drug treatment facilities.
Calls and emails to Mr. Harvey went unreturned, but it’s clear he quickly figured out that the jig was up: Just hours after KrebsOnSecurity reached out to Mr. Harvey for comment, all of his phony addiction treatment center reviews mysteriously disappeared (some of the reviews are preserved in the screenshot below).

“This guy is sitting in Hawaii saying he’s retired and that he’s not taking any more clients,” Seely said. “Well, maybe he’s going to have to come out of retirement to go into prison, because he’s committed fraud in almost every state.”

While writing fake online reviews may not be strictly illegal or an offense that could send one to jail, several states have begun cracking down on “reputation management” and SEO companies that engage in writing or purchasing fake reviews. However, it’s unclear whether the fines being enforced for violations will act as a deterrent, since those fines are likely a fraction of the revenues that shady SEO companies stand to gain by engaging in this deceptive practice.

Some of John Harvey’s reviews. All of these have since been deleted.


Before doing business with a company you found online, don’t just pick the company that comes up tops in the search results on Google. Unfortunately, that generally guarantees little more than the company is good at marketing.

Take the time to research the companies you wish to hire before booking them for jobs or services, especially when it comes to big, expensive, and potentially risky services like drug rehab or moving companies. By the way, if you’re looking for a legitimate rehab facility, you could do worse than to start at the aforementioned, a legitimate rehab search engine.

It’s a good idea to get in the habit of verifying that the organization’s physical address, phone number and Web address shown in the search result match that of the landing page. If the phone numbers are different, use the contact number listed on the linked site.

Take the time to learn about the organization’s reputation online and in social media; if it has none (other than a Google Maps listing with all glowing, 5-star reviews), it’s probably fake. Search the Web for any public records tied to the business’ listed physical address, including articles of incorporation from the local secretary of state office online. A search of the company’s domain name registration records can give you an idea of how long its Web site has been in business, as well as additional details about the company and/or the organization itself.

Seely said one surefire way to avoid these marketing shell games is to ask a simple question of the person who answers the phone in the online listing.

“Ask anyone on the phone what company they’re with,” Seely said. “Have them tell you, take their information and then call them back. If they aren’t forthcoming about who they are, they’re most likely a scam.”

For the record, I requested comment on this story from Google — and specifically from the people at Google who handle Google Maps — but have yet to hear back from them. I’ll update this story in the event that changes.

Update, 7:47 p.m. ET: Google responded with the following statement: “We’re in a constant arms race with local business spammers who, unfortunately, use all sorts of tricks to try to game our system – and who’ve been a thorn in the Internet’s side for over a decade. Millions of businesses regularly make edits to their addresses, hours of operation and more, so we rely heavily on the community to help keep listings up-to-date and flag issues. But this kind of spam is a clear violation of our policies and we want to eradicate it. As spammers change their techniques, we’re continually working on new, better ways to keep them off Google Search and Maps. There’s work to do, and we want to keep doing better.”

Concerning Professionalism

Postby via Blog Feed »

Developing add-ons for X-Plane has become a serious business. While airport & scenery development mostly takes place in a freeware or donationware environment, the development of aircraft models in particular is currently seeing some kind of arms race. Both ambition and price have climbed steeply over the past two years, but I feel that, to a certain extent, the professionalism of developers and distributors didn't keep pace with this development. In this article, I would like to dedicate some words to observations I have made, particularly on how I perceive professionalism in the development and business behind those aircraft models.

What would X-Plane be without all the add-ons contributed by its community, making terrain meshes, photo overlays and 3D scenery replications of cities and airports, but also contributing aircraft models and a large variety of utilities and plugins enhancing X-Plane's capabilities…? Many of those contributors sacrifice their spare time for this purpose, but of course, there are also those doing this to earn a living. For lack of a better term, let's simply call them “professionals”, OK? I will apply this designation to all people involved in the development and redistribution of payware add-ons, regardless of whether it's their primary source of income or not.

Yes, you got me right: I don't care whether somebody asking me to pay money for a product or service is a full-time professional or just a hobbyist supplementing his income. I do care though how much money somebody is asking for a product or service. And here we are already, looking at one of the first things I'd ever wanted to rant about. Countless times I encountered the argument

See, developing plugin X has cost us these many hours, that's why we must ask for this sky-high price of $many
With all due respect, I don't want to dispute that developing aircraft models and plugins costs a lot of effort. However, if you want this to become your business, I'd expect you to understand market mechanisms first. The view frequently conveyed in statements like the fake quote above is that of a cost accountant. As a financial controlling professional, I'm in the trade, so I know what I'm talking about here. But this is not how a market works.

The price one can ask does not depend on the cost of development, but on the value a product brings to the customer. If you did some careful market research, found an unoccupied niche, developed a product that a lot of people urgently want and are willing to pay for, and if you kept costs (development plus support) at bay, you will run a positive business and make money. If you failed to do so, you should not blame the customers for not paying whatever you consider adequate, but ask yourself what mistakes you made. That would be a professional attitude, and it would suit you better than attacking customers on public bulletin boards, as some self-styled “professionals” tend to do.

Speaking of that, this is actually the second point that raises my blood pressure: I consider the way some developers and distributors communicate highly unprofessional. To a certain extent this may be systemic to the X-Plane ecosystem, driven by the fact that communication mostly happens on public bulletin boards. The downside: as a developer or distributor, you will face negative voices, probably even more than positive ones. This is not really surprising; customers tend to speak up when dissatisfied rather than to leave positive feedback.

This might be annoying for you, but as a professional, you have to be aware of this mechanism and rise above it, even if some postings are outright provocative. If you can't, you'd better find a communications professional to handle it for you, or choose other means of communication. Otherwise you will simply ridicule yourself, particularly when losing your composure on a public bulletin board full of (potential) customers. Not convinced yet? Look at it this way: imagine walking up to a car dealer, asking a few questions, mentioning you don't like a feature on a model he proposes… and the guy just goes ballistic. Would you buy a car from him?

So take out a pen and write down:

My customers' opinion can never be wrong.
My customers' opinion can never be wrong.
My customers' opinion can never be wrong.
This doesn't mean your customers can't get facts wrong, or reach a wrong conclusion due to a lack of information; but when it comes to personal taste (liking or disliking a feature, judging a feature's importance, etc.), nobody can be wrong per se. The best way to deal with opinions is to turn them into statistical data. Instead of arguing with customers about the way they feel about a feature or behaviour of your product, simply count the views they share. This will give you a solid foundation for future decisions without driving (potential) customers away.

Now let's turn to some more tangible stuff; yes, you guessed right, I'm talking about the quality of your product. Before entering this new rant, I would like to clarify what I consider (non-)quality when speaking about X-Plane add-ons. In the past two years, we have witnessed a constant rise in complexity and in price for add-on products, particularly aircraft models. A price of ≥60 US-$ has become the norm nowadays. That means many add-ons are priced higher than X-Plane itself. In such a price region, a customer is perfectly entitled to expect a mostly bug-free product (I already talked about adding value above, so I'll set that aside here).

There is no such thing as bug-free software.
True enough, but lately this seems to have developed into a standard excuse for developers disregarding any minimum standard of professional software production. Yes, software is a product, and I prefer speaking of production instead of development. The reason is simple: getting a software product delivered to your customer involves a lot more activities than pure development work.

I know something must be wrong when I, a customer of a finally released product, instantly stumble across obvious bugs rendering the product useless (e.g. clearly identifiable crash conditions). I honour the dedication of developers coding their hearts out to push a product release to their customers. Yet I question whether this way of working qualifies as professionalism; I'd rather say not. In modern software production, a well-defined release chain is crucial. It starts with appropriate code management; working with a modern SCM and related techniques (e.g. pull requests) is absolutely mandatory. But the tool is not everything; it will only deliver on your expectations if you define a corresponding workflow and stick to it.

And on it goes: continuous integration is a must for complex products. How embarrassing is it if your product fails due to a simple, avoidable error (e.g. a syntax error, a missing variable initialisation, an uncaught exception)? That's what unit tests were invented for. I understand pretty well that testing a full aircraft model is anything but trivial. However, unit-testing every function, class method or whatever component you write should be a matter of course. If your tests get too complex, your code structure is bad, period. This might sound like a harsh statement, particularly since I know that many people engaged in aircraft model development are not software engineering professionals, but rather aircraft engineering experts, 3D modellers or graphics designers. Yet again, this is about professionalism. If you want to draw level with other professional software, you have to cope with that.

Also when it comes to release building and distribution, I have come across a wide variety of bad, unprofessional habits. The first thing to get right is a reliable release naming scheme. How hard can it be to adopt the most commonly used scheme in software production, i.e. major.minor.patchlevel, for your product? Before going into a test or pre-release phase, make up your mind about the meaning of your release names, and adapt your SCM naming conventions and workflow accordingly. Learn about feature freeze, code slush, code freeze and testing phases. Then build your releases with an automated toolchain; this will guarantee reproducible results and (hopefully) unveil a broken release before delivery to the customers.
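One small payoff of the major.minor.patchlevel scheme is that standard tools can order your releases correctly out of the box. A quick illustration, assuming only GNU coreutils is available:

```shell
# Plain lexical sort would put 1.0.10 before 1.0.9;
# version sort (-V) compares each dotted component numerically.
printf '1.0.10\n1.0.9\n0.9.12\n' | sort -V
# → 0.9.12
#   1.0.9
#   1.0.10
```

The same property carries over to SCM tags and package managers, which is exactly why ad-hoc names like "final2" or "new_beta" keep causing confusion.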

I admit, however, that X-Plane itself does not make it easy to develop add-on software professionally. Yes, there is a plugin API, but the corresponding library (i.e. header files etc.) is not formally maintained by Laminar itself. Many state-of-the-art techniques (e.g. multi-threading) are not supported out of the box and require a profound understanding of programming concepts. On top of that, X-Plane is a fast-moving target; being subject to incremental development, it introduces new features with every minor release, and also deprecates things at nearly the same frequency. Taking all this into account, X-Plane is a hard-to-control environment from a plugin developer's point of view.

In consequence, that means your job as a developer isn't done even once you have delivered a perfect, bug-free plugin. A product that wants to be perceived as professional (and thus justify a high price) needs to keep pace with X-Plane's development. We already have more than enough products that too obviously aimed at generating some instant revenue and were then abandoned by their developers, coining the term “abandonware”. Remember, I am speaking of products priced at or above the level of X-Plane itself; all I expect is that they are supported throughout the same life cycle over which Laminar supports X-Plane. I do not necessarily expect new features to be developed or integrated (unless the developers announced they would do so); I just expect the product to keep running on current versions of X-Plane's major release branch. I think it is fair enough for a customer to make this assumption when buying an add-on, unless the developers clearly state that it is a one-shot product. That is fine too; however, this information will certainly influence my acceptance or non-acceptance of the price to pay.

The next logical step in the life cycle of a product is post-release support. I already mentioned that bulletin boards are not necessarily a good platform for communication, and to a certain extent this also holds true for support. While bulletin boards are in general very useful for engaging other customers in answering comprehension questions from new customers (we have all seen them: not knowing what pitch & roll do, but wanting to fly an airliner), they are rather poorly suited to systematically collecting bug reports. And then there is the issue of reactivity: customers are in general very impatient. Particularly when an expensive product shows behaviour that makes it unusable (e.g. it crashes or does not load), customers tend to get nervous. Some developers got it right and react quite quickly (apparently they assigned team members to police the respective bulletin board in shifts to ensure reactivity), but others got it woefully wrong, leaving support requests unattended and unanswered for weeks.

Related to that, not every reaction from a developer is a good reaction. Too many times I have seen developers outright denying the plausibility of a bug report – another common source of frustration, particularly with high-end priced products. In general, customers don't post bug reports just to annoy you. It is their way of letting you know they are not satisfied with your product in its current shape and that it needs fixing so it can deliver the promised added value. If you have an issue with bugs being discussed publicly (which I understand to a certain extent, as it is a kind of negative advertisement), don't use public boards for bug reporting. If you are frustrated because a lot of bugs are triggered by pollution (hardware, third-party add-ons) or the environment (e.g. underlying libraries not under your control), don't take it out on your customers.

Summing up, my impression is that among commercial X-Plane add-on developers there is plenty of room for improvement in basically all areas: pricing and communication, but most obviously in quality management and release policy, not to forget after-sales support. With PMDG having just entered the market, I sincerely hope this triggers a phase of rising professionalism among X-Plane add-on developers. In the end, customers will vote with their wallets.

And the times, they are a-changin'

Postby via Blog Feed »

You might have noticed this website has changed slightly – again. Of course there is a new layout, quite hard to miss; I like the rather clear structure, and banning the former sidebar to the bottom of the page further increases the focus on content. Probably even more noticeable: I switched the primary language for publication from German to English. The rationale behind it is quite simple: I mostly write about X-Plane and software development, both domains with a rather international audience that speaks English by convention. Therefore I decided to adopt this international standard, hoping of course to increase the reach of my publications. Enough said; looking forward to writing the first real content for the new platform.


py3status v3.0

Postby ultrabug via Ultrabug »

Oh boy, this new version is so amazing in terms of improvements and contributions that it’s hard to sum it up!

Before going into more explanations I want to dedicate this release to tobes, whose contributions, hard work and patience have permitted this ambitious 3.0: THANK YOU!

This is the graph of contributed commits since 2.9, just so you realise how much of this version is thanks to him:
I can’t continue without also thanking Horgix, who started this madness by splitting the code base into modular files, and pydsigner for his everlasting contributions and code reviews!

The git stat since 2.9 also speaks for itself:

 73 files changed, 7600 insertions(+), 3406 deletions(-)

So what’s new?

  • the monolithic code base has been split into modules, each responsible for one of the tasks py3status performs
  • major improvements in module output orchestration and execution, resulting in considerably lower CPU consumption and better i3bar responsiveness
  • refactoring of user notifications, with added dbus support and rate limiting
  • improved module error reporting
  • py3status can now survive an i3status crash and will try to respawn it
  • a new ‘container’ module output type gives the ability to group modules together
  • refactoring of the time and tztime module support brings support for all the time macros (%d, %Z etc)
  • support for stopping py3status and its modules when i3bar hide mode is used
  • refactoring of the general, contribution and, most noticeably, module documentation
  • more details in the rest of the changelog


Along with a cool list of improvements on the existing modules, these are the new modules:

  • new group module to cycle the display of several modules (check it out, it’s insanely handy!)
  • new fedora_updates module to check for your Fedora packages updates
  • new github module to check a github repository and notifications
  • new graphite module to check metrics from graphite
  • new insync module to check your current insync status
  • new timer module to have a simple countdown displayed
  • new twitch_streaming module to check if a Twitch streamer is online
  • new vpn_status module to check your VPN status
  • new xrandr_rotate module to rotate your screens
  • new yandexdisk_status module to display Yandex.Disk status


And of course thank you to all the others who made this version possible!

  • @egeskow
  • Alex Caswell
  • Johannes Karoff
  • Joshua Pratt
  • Maxim Baz
  • Nathan Smith
  • Themistokle Benetatos
  • Vladimir Potapev
  • Yongming Lai

Time to retire

Postby Donnie Berkholz via Striving for greatness »

I’m sad to say it’s the end of the road for me with Gentoo, after 13 years volunteering my time (my “anniversary” is tomorrow). My time and motivation to commit to Gentoo have steadily declined over the past couple of years and eventually stopped entirely. It was an enormous part of my life for more than a decade, and I’m very grateful to everyone I’ve worked with over the years.

My last major involvement was running our participation in the Google Summer of Code, which is now fully handed off to others. Prior to that, I was involved in many things from migrating our X11 packages through the Big Modularization and maintaining nearly 400 packages to serving 6 terms on the council and as desktop manager in the pre-council days. I spent a long time trying to change and modernize our distro and culture. Some parts worked better than others, but the inertia I had to fight along the way was enormous.

No doubt I’ve got some packages floating around that need reassignment, and my retirement bug is already in progress.

Thanks, folks. You can reach me by email using my nick at this domain, or on Twitter, if you’d like to keep in touch.

Tagged: gentoo

How to Spot Ingenico Self-Checkout Skimmers

Postby BrianKrebs via Krebs on Security »

A KrebsOnSecurity story last month about credit card skimmers found in self-checkout lanes at some Walmart locations got picked up by quite a few publications. Since then I’ve heard from several readers who work at retailers that use hundreds of thousands of these Ingenico credit card terminals across their stores, and all wanted to know the same thing: How could they tell if their self-checkout lanes were compromised? This post provides a few pointers.

Happily, just days before my story ran, point-of-sale vendor Ingenico produced a tutorial on how to spot a skimmer on self-checkout lanes powered by Ingenico iSC250 card terminals. Unfortunately, it doesn’t appear that this report was widely disseminated, because I’m still getting questions from readers at retailers that use these devices.

The red calipers in the image above show the size differences in various noticeable areas of the case overlay on the left compared to the actual iSC250 on the right. Source: Ingenico.
“In order for the overlay to fit atop the POS [point-of-sale] terminal, it must be longer and wider than the target device,” reads a May 16, 2016 security bulletin obtained by KrebsOnSecurity. “For this reason, the case overlay will appear noticeably larger than the actual POS terminal. This is the primary identifying characteristic of the skimming device. A skimmer overlay of the iSC250 is over 6 inches wide and 7 inches tall while the iSC250 itself is 5 9/16 inch wide and 6 1⁄2 inches tall.”

In addition, the skimming device that thieves can attach in the blink of an eye on top of the Ingenico self-checkout card reader blocks the backlight from coming through the fake PIN pad overlay.

The backlight can be best seen while shading the keypad from room lights. The image on the left is a powered-on legitimate iSC250 viewed with the keypad shaded. The backlight can be seen in comparison to a powered-off iSC250 in the right image. Source: Ingenico.

What’s more, the skimming overlay devices currently block the green LED light that is illuminated during contactless card reads like Apple Pay.

The green LED light that is lit up during contactless payments is obscured by the overlay skimmer. Source: Ingenico.
The overlay skimming devices pictured here include their own tiny magnetic read heads to snarf card data from the magnetic stripe when customers swipe their cards. Consequently, those tiny readers often interfere with the legitimate magnetic card reader on the underlying device, meaning compromised self-checkout lines may move a bit slower than others.

“The overlay design appears to occasionally interfere with the magnetic stripe reads, leading to greater numbers of read failures,” Ingenico wrote.

Finally, all checkout terminals include a tethered stylus that customers use to sign their names after swiping their cards. According to Ingenico, the skimmers made to fit the iSC250 appear to prevent the ordinary placement of the stylus due to the obtrusive overhang of the skimmer overlay.

The overlay skimmer on the left blocks the stylus tray. The picture on the right is a device that’s not been attacked.
It’s probably true that posting information like this online gives skimmer scammers an opportunity to improve their product and to make the telltale giveaways less noticeable. However, this only goes so far without significantly driving up the cost of these overlay skimmers. Each iSC250 skimmer already retails for a few hundred bucks apiece — and that’s without the electronics needed to gather and store card data. The up-front cost of these fraud devices is important because the fraudsters have no guarantee they will be able to recover their skimmers before the devices are discovered.

On the other hand, as I mentioned earlier there are countless nationwide retailers that have hundreds of thousands of these Ingenico devices installed in self-checkout lanes, and that in turn means millions of employees and customers who are the first lines of defense against skimmers. The more people know about what to look for in these fraud devices, the more likely the fraudsters will lose their up-front investments — and maybe even get busted trying to retrieve them.

Nextcloud available in portage, migration thoughts

Postby voyageur via Voyageur's corner »

I think there has been more than enough news on nextcloud’s origins and the fork from owncloud, so I will keep this post to the technical bits.

Installing nextcloud on Gentoo

For now, the differences between owncloud 9 and nextcloud 9 are mostly cosmetic, so a few quick edits to the owncloud ebuild resulted in a working nextcloud one. And thanks to the different package names, you can install both in parallel (as long as they do not use the same database, of course).

So if you want to test nextcloud, it’s just a command away:

# emerge -a nextcloud
With the default webapp parameters, it will install alongside owncloud.

Migrating owncloud data

Nothing official again here, but as I run a small instance (not that much data) with a simple sqlite backend, I could copy the data and configuration to nextcloud and test it while keeping owncloud.
Adapt the paths and web user to your setup: I have these webapps in the default /var/www/localhost/htdocs/ path, with www-server as the web server user.

First, clean the test data and configuration (if you logged into nextcloud):

# rm /var/www/localhost/htdocs/nextcloud/config/config.php
# rm -r /var/www/localhost/htdocs/nextcloud/data/*
Then clone owncloud’s data and config. If you feel adventurous (or are short on available disk space), you can move these files instead of copying them:

# cp -a /var/www/localhost/htdocs/owncloud/data/* /var/www/localhost/htdocs/nextcloud/data/
# cp -a /var/www/localhost/htdocs/owncloud/config/config.php /var/www/localhost/htdocs/nextcloud/config/
Change all owncloud occurrences in config.php to nextcloud (there should be only one, for ‘datadirectory’). Then run the (nextcloud) updater. You can do it via the web interface, or (safer) with the CLI occ tool:

# sudo -u www-server php /var/www/localhost/htdocs/nextcloud/occ upgrade
As with “standard” owncloud upgrades, you will have to reactivate additional plugins after logging in. Also check the nextcloud log for potential warnings and errors.
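The occurrence swap in config.php can also be scripted with sed rather than done by hand. Here is a minimal sketch on a scratch copy of the file, so it is safe to run anywhere; on a real system you would point it at /var/www/localhost/htdocs/nextcloud/config/config.php instead:

```shell
# Build a scratch copy of a config.php with an owncloud datadirectory.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
<?php
$CONFIG = array (
  'datadirectory' => '/var/www/localhost/htdocs/owncloud/data',
);
EOF
# -i.bak edits the file in place and keeps a .bak backup of the original.
sed -i.bak 's/owncloud/nextcloud/g' "$CONFIG"
grep datadirectory "$CONFIG"
```

Double-check the ‘datadirectory’ line afterwards before running the updater; the .bak copy lets you roll back if anything looks off.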

In my test, the only non-official plugin I use, files_reader (for ebooks), installed fine in nextcloud, and the rest worked just as well as in owncloud, with a lighter default theme.
For now, owncloud-client works if you point it to the new /nextcloud URL on your server, but this can (and probably will) change in the future.

More migration tips and news can be found in this nextcloud forum post, including some nice detailed backup steps for mysql-backed systems migration.

Recovering photos from an accidentally partially overwritten SD card with photorec

Postby via dietanu »

Rescuing photos from an SD card accidentally partially overwritten with dd is really not that hard once you know how it’s done. I have summarised it in a video:
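For reference, the core of the recovery boils down to a single command; /dev/sdX below is a placeholder for the card’s device node (check with lsblk first):

```shell
# Scan the raw device for known file signatures and write
# whatever can be recovered into ./recovered (plus a log file).
photorec /log /d recovered /dev/sdX
```

Because photorec carves files by signature from the raw device, it still finds photos even where the partition table or filesystem was destroyed; the blocks dd actually overwrote are gone for good, of course.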

Big Update

Postby via dietanu »

The last post was far too long ago. Time for a big update!

Photo blog

To bring at least a little order into the chaos between my hobbies, I have created a separate photo blog.

New hardware for the server

My media server has received an upgrade. Where until recently this hardware was in productive use:

  • AMD Athlon X2 245e CPU @ 2x 2.9 GHz
  • Gigabyte GA-MA770T-UD3
  • 12 GB ECC RAM
  • 5x 1.5 TB HDs in RAID5
  • PCIe Digital Devices Max S8
  • PCIe Intel Desktop CT 1000 NIC
  • PCIe serial port card
  • PCIe USB3 card
  • PCI VGA S3 Virge
  • PCI Intel PCI-X dual NIC
the following server is now in operation:

  • Intel Core i3-4170T CPU @ 4x 3.20 GHz
  • Asus P9D-M server board
  • 24 GB ECC RAM
  • 4x 4 TB HDs (2x WD Green 4 TB from the Synology + 2x new 4 TB HGST NAS) in RAID5
  • PCIe Digital Devices Max S8
This server again runs Ubuntu Server 14.04, even though today I would probably install Proxmox and, as shown by Nick, try passing the TV card through into an LXC container. The server has already existed for a while, though (about 2 months), so for now I’m glad it runs as a combined NAS + media server + virtualisation server for small things. For sharing data on the NAS I currently use Samba, serving my wife’s Windows client and the two KODIs in the house. My own Windows on the second disk also accesses the server via Samba and reaches the full 1 GBit/s up- and downstream from/to the server.

For a few days now I have been experimenting with Arch Linux on the desktop. For years I refused to even try this Linux, but now that I have installed it (as I said, to experiment), I am absolutely thrilled; I boot into it practically all the time, and the Windows 10 on the other SSD is gathering dust.

For the Linux client, and of course for the two KODIs in the house, NFS is a better fit than Samba, since Samba to Linux clients does not quite reach the full performance that the CIFS protocol delivers from Windows.

Virtualisation on the server is currently in a test phase. One VM has been built in KVM; it runs really well and does not max out the system. No surprise: even the old AMD could run several KVM machines in parallel just fine, and the i3 CPU has quite a bit more punch :) For performant KVMs, one of my SSDs will shortly move into the server as storage. The reason is simply that while the now almost 12 TB of storage gives me more than enough space, for VMs I do like a bit more speed. Especially since SSDs are piling up here by the stack anyway.

Besides the new server, a UPS has also moved in. My two old APCs (and by old I mean from 2008) are both worn out by now (a CS500 and an RS800), so I took some money in hand and bought an APC Back UPS Pro 900 VA (BR900GI). It has more than enough battery capacity to keep the system running for 20+ minutes (or to bridge several devices for a correspondingly shorter period). It currently supplies pretty much exactly 88 watts, consumed by the D-Link 16-port gigabit switch in the basement and by the server described above.

The main reason for the new purchases was that the old board had started acting up in some worrying ways. I could no longer set the system time properly, and within a few days I would accumulate an offset of 5 minutes or more, which was absolutely unusable for TV recordings. The board served me faithfully as a server for many years, and considering it is a desktop board, its service life was astonishingly good.

The Asus board has great new features like iKVM, two onboard Intel NICs, SATA3 (4 ports) and above all USB3, which is important for my backups, as well as enough PCIe lanes in the form of PCIe slots. Currently only one card sits in one of the 4 slots (the Digital Devices), but as soon as I add an SSD, a CSL SATA3 card will occupy another slot. Onboard I do have 6 SATA ports, but unfortunately only 4 of them are SATA3. I should add that, because of DDR4 ECC prices, I went for a Haswell board instead of a Skylake one. DDR3 ECC memory is currently still considerably more affordable, and in terms of energy consumption both come down to the same thing, since the i3 CPU has a TDP of just 35 watts.

With this server I also reached another goal: no more sprawl of systems, but one system for everything. While the Synology DS214se had served quite well so far, I did not like at all the format in which it wrote data to the disks. I had to rely on Synology’s backup tool OR first mount the share on another server in order to run a backup there. Even more important, though: more storage. Endlessly tidying up the Synology was effective but not practical. I had a (media) server with 6 TB of storage, of which, in hindsight, 3 disks have died in recent weeks, but for the NAS I only had 4 TB available. Seen that way, it was a very good idea to retire the old disks (some dating back to 2010) and use newer disks for the important data.

For backups I use a great piece of software called Borg Backup. At runtime it can not only compress and encrypt (the i3 CPU, unlike the AMD one, has AES-NI), but also deduplicate. That saves me an unbelievable amount of space on my backup volume, an external 3 TB WD Elements HD (I have two of them, one of which is (almost) always stored offsite). One does get a bit untidy now and then and may well have stored the RAW photos from one SD card or another in two folders.
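As a minimal sketch of how such a Borg setup might look (the repository and data paths below are hypothetical, not my actual ones):

```shell
# Create the encrypted repository once, on the external disk:
borg init --encryption=repokey /mnt/elements/borg-repo

# Each backup run then adds a compressed, deduplicated archive,
# named after the host and the current timestamp:
borg create --compression lz4 --stats \
    /mnt/elements/borg-repo::'{hostname}-{now}' /srv/nas
```

Deduplication works on content-defined chunks across all archives in the repository, which is exactly why accidentally duplicated RAW folders cost almost no extra space.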

Now I’m heading towards virtualisation. I’m certainly a friend of ESXi, but KVM suits me better, because my virtualisation server is likewise an AMD 240e on an identical board: old, power-hungry technology as well. No thanks. I’m also no virtualisation expert like Chris, who has built an entire VMware lab and live-migrates whole stacks of virtual machines over 10 GBit/s. Great stuff, and yes, I’d like that too, but for my purposes the free Qemu/KVM is enough, and maybe Proxmox later on. Unlike Christian, I only use ESXi in the form of virtual machines; I don’t manage an ESXi farm professionally :)

I will stay on this topic, though, and keep reporting on my virtual setup here in the future.

New hardware for the desktop

This actually happened a few months ago, but I never wrote about it. Some of you know my “White Rabbit” video on YouTube. The truth is that after a few days I sent that hardware back as “too much power for my needs” (yes, yes, something new every so often). A few weeks later I settled on a somewhat cheaper system, which you may know from my photos. The hardware installed here:

  • Intel Core i7 Skylake 6700K @ 8x 4.2 GHz
  • MSI Z170A-G43 PLUS
  • 16 GB Corsair Vengeance DDR4
  • EVGA GeForce GTX 960/4 GB SSC
  • 120 GB, 240 GB and 480 GB SSDs (SATA3; M.2 is “planned”)
  • 1 TB WD HD as temporary storage (ahem, the “move the DVD/BluRay collection to the NAS” project)
  • 2 BluRay drives (1x writer, 1x BR-ROM/DVD-writer combo), both LG
  • Fractal Design Define R5 white case with window
  • Alpenföhn Matterhorn Black/White
Apart from one board replacement that was already necessary, because the first PCIe socket was not soldered on and, when removing the old GPU, I suddenly had the socket in my hand (or rather hanging from the card’s PCIe connector), I am very happy with this hardware.


As usual, I can’t really stand Windows on the desktop. So I had OS X (pardon: macOS) running on this system for a while, but it wasn’t really performant. Ubuntu/Ubuntu GNOME and Xubuntu wouldn’t run properly either. I liked elementary OS considerably better, but “well-aged” probably describes it too. What the heck, I thought, and just last week put Arch Linux on the system, as mentioned above. With GNOME 3 I unfortunately wasn’t happy because of the compositing; tearing was the order of the day. With xfce4 and compton as compositing manager, however, I have now found a desktop that I like AND that is tearing-free. I approve!

How does the saying go? Nothing lasts as long as a makeshift solution? That is entirely within the realm of possibility :)

Something new in the trouser pocket, too

No, I still don’t like Apple, but Android simply hasn’t satisfied me in recent years. Whether it was the camera (yes, even the Xperia Z3 is not that good) or the OS itself (my Moto G could no longer get updates, though that was due to my rooting, which I could not undo), I liked it less and less. Well, the Moto G ran and was supposed to keep running for a while, but one Saturday it lay awkwardly on the table, and on probably its 1000th fall off the table edge it hit the ground badly and the display was done for. That afternoon we were at Saturn in Moers, and since no Android phone impressed me at all, I went for an iPhone 6S 64 GB. Far too expensive, but I was also somewhat tired of the endless switching. Luckily, that day (the last day of the promotion) there was still this offer: buy goods worth €100 and receive an €11 voucher, with no upper limit. Well… you can imagine that quite a large voucher came out of it.

Nach mittlerweile einem Monat mit dem iPhone 6S muss ich sagen: Ja! Tolles Teil. Ohne wenn und aber.

Puh - so das sollte aber erstmal reichen an Updates :)

Rise of Darknet Stokes Fear of The Insider

Postby BrianKrebs via Krebs on Security »

With the proliferation of shadowy black markets on the so-called “darknet” — hidden crime bazaars that can only be accessed through special software that obscures one’s true location online — it has never been easier for disgruntled employees to harm their current or former employer. At least, this is the fear driving a growing stable of companies seeking technical solutions to detect would-be insiders.

Avivah Litan, a fraud analyst with Gartner Inc., says she’s been inundated recently with calls from organizations asking what they can do to counter the following scenario: A disaffected or disgruntled employee creates a persona on a darknet market and offers to sell his company’s intellectual property or access to his employer’s network.

A darknet forum discussion generated by a claimed insider at music retailer Guitar Center.
Litan said a year ago she might have received one such inquiry a month; now Litan says she’s getting multiple calls a week, often from companies that are in a panic.

“I’m getting calls from lots of big companies, including manufacturers, banks, pharmaceutical firms and retailers,” she said. “A year ago, no one wanted to say whether they had or were seriously worried about insiders, but that’s changing.”

Insiders don’t have to be smart or sophisticated to be dangerous, as this darknet forum discussion thread illustrates.
Some companies with tremendous investments in intellectual property — particularly pharmaceutical and healthcare firms — are working with law enforcement or paying security firms to monitor and track actors on the darknet that promise access to specific data or organizations, Litan said.

“One pharma guy I talked to recently said he meets with [federal agents] once a week to see if his employees are active on the darknet,” she said. “Turns out there are a lot of disgruntled employees who want to harm their employers. Before, it wasn’t always clear how to go about doing that, but now they just need to create a free account on some darknet site.”

Statistics and figures only go so far in illustrating the size of the problem. A Sept. 2015 report from Intel found that internal actors were responsible for 43 percent of data loss — but only about half of that was intended to harm the employer.

Likewise, the 2016 Data Breach Investigations Report (DBIR), an annual survey of data breaches from Verizon Enterprise, found insiders and/or the misuse of employee privileges were present in a majority of incidents. Yet it also concluded that much of this was not malicious but instead appeared related to employees mailing sensitive information or loading it to a file-sharing service online.

Perhaps one reason insiders are so feared is that the malicious ones very often can operate for years undetected, doing major damage to employers in the process. Indeed, Verizon’s DBIR found that insider breaches usually take months or years to discover.

Noam Jolles, a senior intelligence expert at Diskin Advanced Technologies, studies darknet communities. I interviewed her last year in “Bidding for Breaches,” a story about a secretive darknet forum called Enigma where members could be hired to launch targeted phishing attacks at companies. Some Enigma members routinely solicited bids regarding names of people at targeted corporations that could serve as insiders, as well as lists of people who might be susceptible to being recruited or extorted.

Jolles said the proliferation of darkweb communities like Enigma has lowered the barriers to entry for insiders, and provided even the least sophisticated would-be insiders with ample opportunities to betray their employer’s trust.

“I’m not sure everyone is aware of how simple and practical this phenomena looks from adversary eyes and how far it is from the notion of an insider as a sophisticated disgruntled employee,” Jolles said. “The damage from the insider is not necessarily due to his position, but rather to the sophistication of the threat actors that put their hands on him.”

Who is the typical insider? According to Verizon’s DBIR, almost one third of insiders at breaches in 2015 were found to be end users who had access to sensitive data as a requirement to do their jobs.

“Only a small percentage (14%) are in leadership roles (executive or other management), or in roles with elevated access privilege jobs such as system administrators or developers (14%),” Verizon wrote, noting that insiders were most commonly found in administrative, healthcare and public sector jobs. “The moral of this story is to worry less about job titles and more about the level of access that every Joe or Jane has (and your ability to monitor them). At the end of the day, keep up a healthy level of suspicion toward all employees.”

If tech industry analysts like Litan are getting pinged left and right about the insider threat these days, it might have something to do with how easy it is to find company proprietary information or access on offer in darknet forums — many of which allow virtually anyone to register and join.

A darknet forum discussion about possible insiders at Vodafone.
The other reason may be that there are a lot more companies looking for this information and actively notifying affected organizations. These notifications invariably become sales pitches for “dark web monitoring” or “threat intelligence services,” and a lot of companies probably aren’t sure what to make of this still-nascent industry.

How can organizations better detect insiders before the damage is done? Gartner’s Litan emphasized continuous monitoring and screening for trusted insiders with high privileges. Beyond that, Litan says there are a wide range of data-driven insider threat technology solutions. On the one end of the spectrum are companies that conduct targeted keyword searches on behalf of clients on social media networks and darknet destinations. More serious and expensive offerings apply machine learning to internal human resources (HR) records, and work to discover and infiltrate online crime rings.

What’s Verizon’s answer to the insider threat? “Love your employees, bond at the company retreat, bring in bagels on Friday, but monitor the heck out of their authorized daily activity, especially ones with access to monetizable data (financial account information, personally identifiable information (PII), payment cards, medical records).”

Additional reading: Insider Threats Escalate and Thrive in the Dark Web.

Dependency classes and allowed dependency types

Postby Michał Górny via Michał Górny »

In my previous post I described a number of pitfalls regarding Gentoo dependency specifications. However, I missed a minor point: the correctness of various dependency types in specific dependency classes. I am going to address this in this short post.

There are three classes of dependencies in Gentoo: build-time dependencies that are installed before the source build happens, runtime dependencies that should be installed before the package is installed to the live system and ‘post’ dependencies which are pretty much runtime dependencies whose install can be delayed if necessary to avoid dependency loops. Now, there are some fun relationships between dependency classes and dependency types.

Blockers

Blockers are the dependencies used to prevent a specific package from being installed, or to force its uninstall. In modern EAPIs, there are two kinds of blockers: weak blockers (single !) and strong blockers (!!).

Weak blockers indicate that if the blocked package is installed, its uninstall may be delayed until the blocking package is installed. This is mostly used to solve file collisions between two packages — e.g. it allows the package manager to replace colliding files, then unmerge remaining files of the blocked package. It can also be used if the blocked package causes runtime issues on the blocking package.

Weak blockers make sense only in RDEPEND. While they’re technically allowed in DEPEND (making it possible for DEPEND=${RDEPEND} assignment to be valid), they are not really meaningful in DEPEND alone. That’s because weak blockers can be delayed post build, and therefore may not influence the build environment at all. In turn, after the build is done, build dependencies are no longer used, and unmerging the blocker does not make sense anymore.

Strong blockers indicate that the blocked package must be uninstalled before the dependency specification is considered satisfied. Therefore, they are meaningful both for build-time dependencies (where they indicate the blocker must be uninstalled before source build starts) and for runtime dependencies (where they indicate it must be uninstalled before install starts).

This leaves PDEPEND which is a bit unclear. Again, technically both blocker types are valid. However, weak blockers in PDEPEND would be pretty much equivalent to those in RDEPEND, so there is no reason to use that class. Strong blockers in PDEPEND would logically be equivalent to weak blockers — since the satisfaction of this dependency class can be delayed post install.
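
As a quick illustration, here is a hedged sketch of where each blocker type belongs (the package name is hypothetical):

```shell
# Weak blocker: only meaningful in RDEPEND; the blocked package's
# uninstall may be delayed until this package is installed.
RDEPEND="!app-misc/foo"

# Strong blocker: meaningful in DEPEND (the blocked package must be
# gone before the source build) and in RDEPEND (before the install).
DEPEND="!!app-misc/foo"
```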

Any-of dependencies and :* slot operator

This is just going to be a short reminder: those types of dependencies are valid in all dependency classes, but no binding between those occurrences is provided.

An any-of dependency in DEPEND indicates that at least one of the packages will be installed before the build starts. An any-of dependency in RDEPEND (or PDEPEND) indicates that at least one of them will be installed at runtime. There is no guarantee that the dependency used to satisfy DEPEND will be the same as the one used to satisfy RDEPEND, and the latter is fully satisfied when one of the listed packages is replaced by another.
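
To make the lack of binding concrete, here is a hypothetical specification where this can bite:

```shell
# || () only guarantees that ONE of the listed packages is installed
# at the point appropriate for each class; nothing binds the
# build-time choice to the runtime choice, and switching providers
# later still leaves RDEPEND satisfied.
DEPEND="|| ( app-misc/provider-a app-misc/provider-b )"
RDEPEND="${DEPEND}"
```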

A similar case occurs for :* operator — only that slots are used instead of separate packages.

:= slot operator

Now, the ‘equals’ slot operator is a fun one. Technically, it is valid in all dependency classes — for the simple reason of DEPEND=${RDEPEND}. However, it does not make sense in DEPEND alone as it is used to force rebuilds of installed package while build-time dependencies apply only during the build.

The fun part is that for the := slot operator to be valid, the matching package needs to be installed when the metadata for the package is recorded — i.e. when a binary package is created or the built package is installed from source. For this to happen, a dependency guaranteeing this must be in DEPEND.

So, the common rule would be that a package dependency using := operator would have to be both in RDEPEND and DEPEND. However, strictly speaking the dependencies can be different as long as a package matching the specification from RDEPEND is guaranteed by DEPEND.
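
A minimal sketch of that common rule, with a hypothetical library package:

```shell
# := records the slot/sub-slot of the best installed version when
# this package's metadata is recorded, so a matching package must be
# guaranteed installed at build time -- hence the DEPEND entry.
RDEPEND="app-misc/libfoo:="
DEPEND="${RDEPEND}"
```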

Dependency pitfalls regarding slots, slot ops and any-of deps

Postby Michał Górny via Michał Górny »

During my work on Gentoo, I have seen many types of dependency pitfalls that developers fell into. Sad to say, their number is increasing with new EAPI features: we keep introducing new ways to fail rather than working on simplifying things. I can't say the learning curve is getting much steeper, but it is considerably easier to make a mistake.

In this article, I would like to point out a few common misunderstandings and pitfalls regarding slots, slot operators and any-of (|| ()) deps. All of those constructs are used to express dependencies that can usually be satisfied by multiple packages or package versions that can be installed in parallel, and missing this point is often the cause of trouble.

Separate package dependencies are not combined into a single slot

One of the most common mistakes is to assume that multiple package dependency specifications listed in one package are going to be combined somehow. However, there is no such guarantee, and when a package becomes slotted this fact actually becomes significant.

Of course, some package managers take various precautions to prevent the following issues. However, such precautions not only can not be relied upon but may also violate the PMS.

For example, consider the following dependency specifications:

>=app-misc/foo-2
<app-misc/foo-5
It is a common way of expressing version ranges (in this case, versions 2*, 3* and 4* are acceptable). However, if app-misc/foo is slotted and there are versions satisfying the dependencies in different slots, there is no guarantee that the dependency could not be satisfied by installing foo-1 (satisfies <foo-5) and foo-6 (satisfies >=foo-2) in two slots!

Similarly, consider:

bar? ( app-misc/foo[baz] )
This one is often used to apply multiple sets of USE flags to a single package. Once again, if the package is slotted, there is no guarantee that the dependency specifications will not be satisfied by installing two slots with different USE flag configurations.

However, those problems mostly apply to fully slotted packages such as sys-libs/db where multiple slots are actually meaningfully usable by a package. With the more common use of multiple slots to provide incompatible versions of the package (e.g. binary compatibility slots), there is a more important problem: that even a single package dependency can match the wrong slot.

For non-truly multi-slotted packages, the solution to all those problems is simple: always specify the correct slot. For truly multi-slotted packages, there is no easy solution.

For example, a version range has to be expressed using an any-of dep:

|| (
Multiple sets of USE flags? Well, if you really insist, you can combine them for each matching slot separately…

|| (
    ( sys-libs/db:5.3 tools? ( sys-libs/db:5.3[cxx] ) )
    ( sys-libs/db:5.1 tools? ( sys-libs/db:5.1[cxx] ) )
)

The ‘equals’ slot operator and multiple slots

A similar problem applies to the use of the EAPI 5 ‘equals’ slot operator. The PMS notes that:

Indicates that any slot value is acceptable. In addition, for runtime dependencies, indicates that the package will break unless a matching package with slot and sub-slot equal to the slot and sub-slot of the best installed version at the time the package was installed is available.

To implement the equals slot operator, the package manager will need to store the slot/sub-slot pair of the best installed version of the matching package. […]

PMS, Slot Dependencies

The significant part is that the slot and sub-slot are recorded for the best package version matched by the specification containing the operator. So again, if the operator is used on multiple dependencies that can match multiple slots, multiple slots can actually be recorded.

Again, this becomes really significant in truly slotted packages:

|| (
While one may expect the code to record the slot of sys-libs/db used by the package, this may actually record any newer version that is installed while the package is being built. In other words, this may implicitly bind to db-6* (and pull it in too).

For this to work, you need to ensure that the dependency with the slot operator can not match any version newer than the two requested:

|| (
In this case, the dependency with the operator could still match earlier versions. However, the other dependency enforces (as long as it’s in DEPEND) that at least one of the two versions specified is installed at build-time, and therefore is used by the operator as the best version matching it.
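
Assembling the pieces described above, such a specification could look roughly like this (the exact <sys-libs/db-6 upper bound is an assumption, based on the db-6* example):

```shell
# The slotted any-of guarantees (as long as it is in DEPEND) that one
# of the two wanted slots is installed at build time; the bounded :=
# dependency then cannot bind to a newer slot such as db-6*.
DEPEND="
	|| (
		sys-libs/db:5.3
		sys-libs/db:5.1
	)
	<sys-libs/db-6:=
"
RDEPEND="<sys-libs/db-6:="
```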

The above block can easily be extended by a single set of USE dependencies (being applied to all the package dependencies including the one with slot operator). For multiple conditional sets of USE dependencies, finding a correct solution becomes harder…

The meaning of any-of dependencies

Since I have already started using the any-of dependencies in the examples, I should point out yet another problem. Many Gentoo developers do not understand how any-of dependencies work, and make wrong assumptions about them.

In an any-of group, at least one immediate child element must be matched. A blocker is considered to be matched if its associated package dependency specification is not matched.

PMS, 8.2.3 Any-of Dependency Specifications

So, PMS guarantees that if at least one of the immediate child elements (package dependencies, nested blocks) of the any-of block is matched, the dependency is considered satisfied. This is the only guarantee PMS gives you. The two common mistakes are to assume that the order is significant and that any kind of binding between packages installed at build time and at run time is provided.

Consider an any-of dependency specification like the following:

|| (
In this case, it is guaranteed that at least one of the listed packages is installed at the point appropriate for the dependency class. If none of the packages are installed already, it is customary to assume the Package Manager will prefer the first one — while this is not specified and may depend on satisfiability of the dependencies, it is a reasonable assumption to make.

If multiple packages are installed, it is undefined which one is actually going to be used. In fact, the package may even provide the user with explicit run time choice of the dependency used, or use multiple of them. Assuming that A will be preferred over B, and B over C is simply wrong.

Furthermore, if one of the packages is uninstalled, while one of the remaining ones is either already installed or being installed, the dependency is still considered satisfied. It is wrong to assume that in any case the Package Manager will bind to the package used at install time, or cause rebuilds when switching between the packages.

The ‘equals’ slot operator in any-of dependencies

Finally, I am reaching the point of lately recurring debates. Let me make it clear: our current policy states that under no circumstances may := appear anywhere inside any-of dependency blocks.

Why? Because it is meaningless, it is contradictory. It is not even undefined behavior, it is a case where requirements put for the slot operator can not be satisfied. To explain this, let me recall the points made in the preceding sections.

First of all, the implementation of the ‘equals’ slot operator requires the Package Manager to explicitly bind the slot/subslot of the dependency to the installed version. This can only happen if the dependency is installed — and an any-of block only guarantees that one of them will actually be installed. Therefore, an any-of block may trigger a case when PMS-enforced requirements can not be satisfied.

Secondly, the definition of an any-of block allows replacing one of the installed packages with another at run time, while the slot operator disallows changing the slot/subslot of one of the packages. The two requested behaviors are contradictory and do not make sense. Why bind to a specific version of one package, while any version of the other package is allowed?

Thirdly, the definition of an any-of block does not specify any particular order/preference of packages. If the listed packages do not block one another, you could end up having multiple of them installed, and bound to specific slots/subslots. Therefore, the Package Manager should allow you to replace A:1 with B:2 but not with B:1 nor with A:2. We’re reaching insanity now.

Now, all of the above is purely theoretical. The Package Manager can do pretty much anything given invalid input, and that is why many developers wrongly assume that slot operators work inside any-of. The truth is: they do not, the developer just did not test all the cases correctly. The Portage behavior varies from allowing replacements with no rebuilds to requiring two mutually exclusive packages to be installed simultaneously.

Citing Attack, GoToMyPC Resets All Passwords

Postby BrianKrebs via Krebs on Security »

GoToMyPC, a service that helps people access and control their computers remotely over the Internet, is forcing all users to change their passwords, citing a spike in attacks that target people who re-use passwords across multiple sites.

Owned by Santa Clara, Calif.-based networking giant Citrix, GoToMyPC is a popular software-as-a-service product that lets users access and control their PC or Mac from anywhere in the world. On June 19, the company posted a status update and began notifying users that a system-wide password update was underway.

“Unfortunately, the GoToMYPC service has been targeted by a very sophisticated password attack,” reads the notice posted to “To protect you, the security team recommended that we reset all customer passwords immediately. Effective immediately, you will be required to reset your GoToMYPC password before you can login again. To reset your password please use your regular GoToMYPC login link.”

John Bennett, product line director at Citrix, said once the company learned about the attack it took immediate action. But contrary to previous published reports, there is no indication Citrix or its platforms have been compromised, he said.

“Citrix can confirm the recent incident was a password re-use attack, where attackers used usernames and passwords leaked from other websites to access the accounts of GoToMyPC users,” Bennett wrote in an emailed statement. “At this time, the response includes a mandatory password reset for all GoToMyPC users. Citrix encourages customers to visit the  GoToMyPC status page to learn about enabling two-step verification, and to use strong passwords in order to keep accounts as safe as possible. ”

Citrix’s GoTo division also operates GoToAssist, which is geared toward technical support specialists, and GoToMeeting, a product marketed at businesses. The company said it has no indication that user accounts at other GoTo services were compromised, but assuming that’s true it’s likely because the attackers haven’t gotten around to trying yet.

It’s a fair bet that whoever perpetrated this attack had help from huge email and password lists recently leaked online from older breaches at LinkedIn, MySpace and Tumblr to name a few. Re-using passwords at multiple sites is a bad idea to begin with, but re-using your GoToMyPC remote administrator password at other sites seems like an exceptionally lousy idea.

On good metadata.xml maintainer descriptions

Postby Michał Górny via Michał Górny »

Since GLEP 67 was approved, bug assignment became easier. However, there were still many metadata.xml files which made this suboptimal. Today, I have fixed most of them and I would like to provide this short guide on how to write good metadata.xml files.

The bug assignment procedure

To understand the points that I am going to make, let’s take a look at how bug assignment happens these days. Assuming a typical case of bug related to a specific package (or multiple packages), the procedure for assigning the bug involves, for each package:

  1. reading all <maintainer/> elements from the package’s metadata.xml file, in order;
  2. filtering the maintainers based on restrict="" attributes (if any);
  3. filtering and reordering the maintainers based on <description/>s;
  4. assigning the bug to the first maintainer left, and CC-ing the remaining ones.
I think the procedure is quite clear. Since we no longer have <herd/> elements with special meaning applied to them, the assignment is mostly influenced by maintainer occurrence order. Restrictions can be used to limit maintenance to specific versions of a package, and descriptions to apply special rules and conditions.

Now, for semi-automatic bug assignment, only the first or the first two of the above steps can be clearly automated. Applying restrictions correctly requires understanding whether the bug can be correctly isolated to a specific version range, as some bugs (e.g. invalid metadata) may require being fixed in multiple versions of the package. Descriptions, in turn, are written for humans and require a human to interpret them.
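
The automatable part of the procedure (steps 1 and 2, minus restrict="" handling) can be sketched in a few lines. This is a rough illustration, not any official tool, and the sample metadata.xml below is hypothetical:

```python
# Sketch: read <maintainer/> entries in document order, assign the
# bug to the first maintainer and CC the remaining ones.
import xml.etree.ElementTree as ET

METADATA = """<pkgmetadata>
  <maintainer type="person">
    <email>larry@example.org</email>
    <description>Feel free to fix/update</description>
  </maintainer>
  <maintainer type="project">
    <email>proxy-maint@gentoo.org</email>
  </maintainer>
</pkgmetadata>"""

def assign_bug(metadata_xml):
    root = ET.fromstring(metadata_xml)
    # Maintainers are significant in occurrence order.
    emails = [m.findtext("email") for m in root.iter("maintainer")]
    if not emails:
        return None, []
    # First maintainer gets the bug; the rest are CC-ed.
    return emails[0], emails[1:]

assignee, cc = assign_bug(METADATA)
print(assignee)  # larry@example.org
print(cc)        # ['proxy-maint@gentoo.org']
```

The remaining steps (interpreting restrictions and descriptions) are exactly the part that still needs a human.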

What belongs in a good description

Now, many of the existing metadata.xml files had either useless or even problematic maintainer descriptions. This is a problem since it increases the time needed for bug assignment, and makes automation harder. Common examples of bad maintainer descriptions include:

  1. Assign bugs to him; CC him on bugs — this is either redundant or contradictory. Ensure that maintainers are listed in correct order, and bugs will be assigned correctly. Those descriptions only force a human to read them and possibly change the automatically determined order.
  2. Primary maintainer; proxied maintainer — this is some information but it does not change anything. If the maintainer comes first, he’s obviously the primary one. If the maintainer has non-Gentoo e-mail and there are proxies listed, he’s obviously proxied. And even if we did not know that, does it change anything? Again, we are forced to read information we do not need.
Good maintainer descriptions include:

  1. Upstream; CC on bugs concerning upstream, Only CC on bugs that involve USE="d3d9" — useful information that influences bug assignment;
  2. Feel free to fix/update, All modifications to this package must be approved by the wxwidgets herd. — important information for other developers.
So, before adding another description, please answer two questions: will the information benefit anyone? Can’t it be expressed in machine-readable form?

Proxy-maintained packages

Since a lot of the affected packages are maintained by proxied maintainers, I’d like to explicitly point out how proxy-maintained packages are to be described. This overlaps with the current Proxy maintainers team policy.

For proxy-maintained packages, the maintainers should be listed in the following order:

  1. actual package maintainers, in appropriate order — including developers maintaining or co-maintaining the package, proxied maintainers and Gentoo projects;
  2. developer proxies, preferably described as such — i.e. developers who do not actively maintain the package but only proxy for the maintainers;
  3. Proxy-maintainers project — serving as the generic fallback proxy.
I would like to put more emphasis on the key point here — the maintainers should be listed in an order making it clearly possible to distinguish packages that are maintained only by a proxied maintainer (with developers acting as proxies) from packages that are maintained by Gentoo developers and co-maintained by a proxied maintainer.
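
As an illustration, such an ordering could look like the following sketch (the person entries are hypothetical; only the Proxy-maintainers project address is real):

```
<pkgmetadata>
	<!-- 1. actual maintainer: here, a proxied (non-developer) one -->
	<maintainer type="person">
		<email>contributor@example.org</email>
	</maintainer>
	<!-- 2. developer proxy, described as such -->
	<maintainer type="person">
		<email>developer@gentoo.org</email>
		<description>proxy only; does not maintain the package</description>
	</maintainer>
	<!-- 3. generic fallback proxy -->
	<maintainer type="project">
		<email>proxy-maint@gentoo.org</email>
	</maintainer>
</pkgmetadata>
```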

Third-party repositories (overlays)

As a last point, I would like to point out the special case of unofficial Gentoo repositories. Unlike in the core repositories, metadata.xml files there can not be fully trusted. The reason for this is quite simple: many users copy (fork) packages from Gentoo along with their metadata.xml files. If we were to trust those files, we would be assigning overlay bugs to Gentoo developers maintaining the original package!

For this reason, all bugs on unofficial repository packages are assigned to the repository owners.

Debugging TDMA on the AR9380

Postby Adrian via Adrian Chadd's Ramblings »

So, it turns out that TDMA didn't work on the AR9380. I started digging into it a bit more with AR9380s in 5GHz mode and found that indeed no, it was just transmitting whenever the heck it wanted to.

The first thing I looked at was the transmit packet timing. Yes, they were going out at arbitrary times, rather than after the beacon. So I dug into the AR9380 HAL code and found the TX queue setup code just didn't know how to setup arbitrary TX queues to be beacon-gated. The CABQ does this by default, and the HAL just hard-codes that for the CAB queue, but it wasn't generic for all queues. So, I fixed that and tried again. Now, packets were exchanged, but I couldn't get more than around 1mbit of transmit throughput. The packets were correctly being beacon gated, but they were going out at very long intervals (one every 25ms or so.)

After a whole lot of digging and asking around, I found out what's going on. It turns out that the new TX DMA engine in the AR9380 treats queue gating slightly differently from previous chips. In previous chips you would see it transmit whatever it could, and then be gated until the next time it could transmit. As long as you kept poking the AR_TXE bit to re-start queue DMA, it would indeed continue along transmitting whenever it could. But the AR9380 TX DMA FIFO works differently.

Each queue has 8 TX FIFO descriptors, which can contain a list of frames or a single frame. For the CABQ I just added the whole list of frames in one hit and that works fine. But for the normal data paths it would push one frame into a TX DMA FIFO slot. If it's an A-MPDU aggregate then yes, it'd be a whole list of frames, but still a single PPDU. But for non-aggregate traffic it'd push a single frame in.

With this in mind, the TX DMA gating now works on FIFO slots, not just descriptor lists. That is, if you have the queue setup to gate on something (say a timer firing, like the beacon timer) then that un-gating is for a single FIFO slot only. If that FIFO slot has one PPDU in it then indeed it'll only burst out a single frame and then the rest of the channel burst time is ignored. It won't go to the next FIFO slot until the burst time expires and the queue is re-gated again. This is why I was only seeing one frame every 25ms - that's the beacon interval for two devices in a TDMA setup. It didn't matter that the queue had more data available - it ran out of data servicing a single TX FIFO slot and that was that.

So I did some local hacks to push more data into each TX FIFO slot. When I buffered things and only leaked out 32 frames at a time (which is roughly a whole slot time worth of large frames), it indeed behaved at roughly the expected throughput. But there are bugs and it broke non-TDMA traffic, so I won't commit it all to FreeBSD-HEAD until I figure out what's going on.

There's also something else I noticed - there was some situation where it would push in a new frame and that would cause the next frame to go out immediately. I think it's actually just scheduling for the next gated burst (ie, it isn't doing multiple frames in a single burst window, but one every beacon interval) but I need to dig into it a bit more to see what's going on.

In any case, I'm getting closer to working TDMA on the AR9380 and later chips.

Oh, and it turns out that TDMA mode doesn't add some of the IEs to the beacon announcements - notably, no Atheros fast-frames announcement. This means A-MSDUs and fast-frames aren't sent. I was hoping to leverage A-MSDU aggregation in its present state to improve things, even if it's just two frames at a time. Hopefully that'd double the throughput - I'm currently seeing 30mbit TX and 30mbit RX without it, so hopefully 60mbit with it.

Adobe Update Plugs Flash Player Zero-Day

Postby BrianKrebs via Krebs on Security »

Adobe on Thursday issued a critical update for its ubiquitous Flash Player software that fixes three dozen security holes in the widely-used browser plugin, including at least one vulnerability that is already being exploited for use in targeted attacks.

The latest update brings Flash to v. for Windows and Mac users alike. If you have Flash installed, you should update, hobble or remove Flash as soon as possible.

The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent versions of Flash should be available from this Flash distribution page or the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). Chrome and IE should auto-install the latest Flash version on browser restart (I had to manually check for updates in Chrome and restart the browser to get the latest Flash version).

For some reason that probably has nothing to do with security, Adobe has decided to stop distributing direct links to its Flash Player software. According to the company’s Flash distribution page, on June 30, 2016 Adobe will decommission direct links to various Flash Player downloads. This will essentially force Flash users to update the program using its built-in automatic updates feature (which sometimes takes days to notice a new security update is available), or to install the program from the company’s Flash Home page — a download that currently bundles McAfee Security Scan Plus and a product called True Key by Intel Security.

Anything that makes it less likely users will update Flash seems like a bad idea, especially when we’re talking about a program that often needs security fixes more than once a month.

Top bug assignment UserJS

Postby Michał Górny via Michał Górny »

Since time does not permit me to write in more extent, just a short note: yesterday, I have published a Gentoo Bugzilla bug assignment UserJS. When enabled, it automatically tries to find package names in bug summary, fetches maintainers for them (from packages.g.o) and displays them in a table with quick assignment/CC checkboxes.
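The core idea — spotting Gentoo package atoms in a bug summary before looking up their maintainers — can be sketched in a few lines. This is an illustrative Python approximation, not the actual UserJS code; the regex is a rough stand-in for the real atom-matching rules, and the maintainer lookup against packages.g.o is omitted.

```python
import re

# Rough pattern for a Gentoo package atom: category/package.
# This is an illustrative approximation, not the exact syntax from PMS.
ATOM_RE = re.compile(r'\b([a-z0-9]+(?:-[a-z0-9]+)*)/([A-Za-z0-9_+-]+)\b')

def find_atoms(summary):
    """Return candidate category/package names found in a bug summary."""
    return ['{}/{}'.format(cat, pkg) for cat, pkg in ATOM_RE.findall(summary)]

print(find_atoms("app-text/paps: heap-based buffer overflow in read_file()"))
# ['app-text/paps']
```

In the real UserJS the matched names would then be used to fetch maintainer data and render the assignment/CC table.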

Note that it’s still early work. If you find any bugs, please let me know. Patches will be welcome too, and so would some redesign, since it currently looks pretty bad: just the standard Bugzilla style applied to plain HTML.

Update: now on GitHub as bug-assign-user-js

FBI Raids Spammer Outed by KrebsOnSecurity

Postby BrianKrebs via Krebs on Security »

Michael A. Persaud, a California man profiled in a Nov. 2014 KrebsOnSecurity story about a junk email artist currently flagged by anti-spam activists as one of the world’s Top 10 Worst Spammers, was reportedly raided by the FBI in connection with a federal spam investigation.

According to a June 9 story at ABC News, on April 27, 2016 the FBI raided the San Diego home of Persaud, who reportedly has been under federal investigation since at least 2013. The story noted that on June 6, 2016, the FBI asked for and was granted a warrant to search Persaud’s iCloud account, which investigators believe contained “evidence of illegal spamming and wire fraud to further [Persaud’s] spamming activities.”

Persaud doesn’t appear to have been charged with a crime in connection with this investigation. He maintains his email marketing business is legitimate and complies with the CAN-SPAM Act, the main anti-spam law in the United States, which prohibits the sending of spam that spoofs the sender’s address or does not give recipients an easy way to opt out of receiving future such emails from that sender.

The affidavit that investigators with the FBI used to get a warrant for Persaud’s iCloud account is sealed, but a copy of it was obtained by KrebsOnSecurity. It shows that during the April 2016 FBI search of his home, Persaud told agents that he currently conducts internet marketing from his residence by sending a million emails in under 15 minutes from various domains and Internet addresses.

The affidavit indicates the FBI was very interested in the email address. In my 2014 piece Still Spamming After All These Years, I called attention to this address as the one tied to Persaud’s Facebook account — and to 5,000 or so domains he was advertising in spam. The story was about how the junk email Persaud acknowledged sending was being relayed through broad swaths of Internet address space that had been hijacked from hosting firms and other companies.

FBI Special Agent Timothy J. Wilkins wrote that investigators also subpoenaed and got access to that account, and found emails between Persaud and at least four affiliate programs that hire spammers to send junk email campaigns.

A spam affiliate program is a type of business or online retailer — such as an Internet pharmacy — that pays a third party (known as affiliates or spammers) a percentage of any sales that they generate for the program (for a much deeper dive on how affiliate programs work, check out Spam Nation).

When I wrote about Persaud back in 2014, I noted that his spam generally advertised the types of businesses you might expect to see pimped in junk email: payday loans, debt consolidation services, and various “nutraceutical” products.

Persaud did not respond to requests for comment. But in an email he sent to KrebsOnSecurity in November 2014, he said:

“I can tell you that my company deals with many different ISPs both in the US and overseas and I have seen a few instances where smaller ones will sell space that ends up being hijacked,” Persaud wrote in an email exchange with KrebsOnSecurity. “When purchasing IP space you assume it’s the ISP’s to sell and don’t really think that they are doing anything illegal to obtain it. If we find out IP space has been hijacked we will refuse to use it and demand a refund. As for this email address being listed with domain registrations, it is done so with accordance with the CAN-SPAM guidelines so that recipients may contact us to opt-out of any advertisements they receive.”

Persaud is currently listed as #10 on the World’s 10 Worst Spammers list maintained by Spamhaus, an anti-spam organization. In 1998, Persaud was sued by AOL, which charged that he committed fraud by using various names to send millions of get-rich-quick spam messages to America Online customers. In 2001, the San Diego District Attorney’s office filed criminal charges against Persaud, alleging that he and an accomplice crashed a company’s email server after routing their spam through the company’s servers.

Comparing Hadoop with mainframe

Postby Sven Vermeulen via Simplicity is a form of art... »

At my work, I have the pleasure of being involved in a big data project that uses Hadoop as the primary platform for several services. As an architect, I try to get to know the platform's capabilities, its potential use cases, its surrounding ecosystem, etc. And although the implementation at work is not in its final form (yay agile infrastructure releases) I do start to get a grasp of where we might be going.

For many analysts and architects, this Hadoop platform is a new kid on the block, so I have some work explaining what it is and what it is capable of. Not for the fun of it, but to help the company make the right decisions, to support management and operations, and to lift the fear of new environments. One thing I once said is that "Hadoop is the poor man's mainframe", because I notice some high-level similarities between the two.

Somehow, it stuck, and I was asked to elaborate. So why not bring these points into a nice blog post :)

The big fat disclaimer

Now, before embarking on this comparison, I would like to state that I am not saying that Hadoop offers the same services, or even quality and functionality of what can be found in mainframe environments. Considering how much time, effort and experience was already put in the mainframe platform, it would be strange if Hadoop could match the same. This post is to seek some similarities and, who knows, learn a few more tricks from one or another.

Second, I am not a particularly mainframe-knowledgeable person. I've been involved as an IT architect in the database and workload automation technical domains, which also spanned the mainframe parts of them, but most of the effort was within the distributed world. Mainframes remain somewhat opaque to me. Still, that shouldn't prevent me from making comparisons for those areas that I do have some grasp on.

And if my current understanding is just wrong, I'm sure that I'll learn from the comments that you can leave behind!

With that being said, here it goes...

Reliability, Availability, Serviceability

Let's start with some of the promises that both platforms make - and generally are also able to deliver. Those promises are of reliability, availability and serviceability.

For the mainframe platform, these quality attributes are shown as the mainframe strengths. The platform's hardware has extensive self-checking and self-recovery capabilities, the systems can recover from failed components without service interruption, and failures can be quickly determined and resolved. On the mainframes, this is done through a good balance and alignment of hardware and software, design decisions and - in my opinion - tight control over the various components and services.

I notice the same promises on Hadoop. Various components are checking the state of the hardware and other components, and when something fails, it is often automatically recovered without impacting services. Instead of tight control over the components and services, Hadoop uses a service architecture and APIs with Java virtual machine abstractions.

Let's consider hardware changes.

For hardware failure and component substitutions, both platforms are capable of dealing with those without service disruption.

  • Mainframe probably has a better reputation in this matter, as its components have a very high Mean Time Between Failure (MTBF), and many - if not all - of the components are set up in a redundant fashion. Lots of error detection and failure detection processes try to detect if a component is close to failure, and ensure proper transitioning of any workload towards the other components without impact.
  • Hadoop uses redundancy on a server level. If a complete server fails, Hadoop is usually able to deal with this without impact. Either the sensor-like services disable a node before it goes haywire, or the workload and data that was running on the failed node is restarted on a different node.
Hardware (component) failures on the mainframe side will not impact the services and running transactions. Component failures on Hadoop might have a noticeable impact (especially if it is OLTP-like workload), but will be quickly recovered.

Failures are more likely to happen on Hadoop clusters though, as it was designed to work with many systems that have a worse MTBF design than a mainframe. The focus within Hadoop is on resiliency and fast recoverability. Depending on the service that is being used, active redundancy can be in use (so disruptions are not visible to the user).

If the Hadoop workload includes anything that resembles online transactional processing, you're still better off with enterprise-grade hardware such as ECC memory to at least allow improved hardware failure detection (and perform proactive workload management). CPU failures are not that common (at least not those without any upfront Machine Check Exception - MCE), and disk/controller failures are handled through the abstraction of HDFS anyway.

For system substitutions, I think both platforms can deal with this in a dynamic fashion as well:

  • For the mainframe side (and I'm guessing here) it is possible to switch machines with no service impact if the services are running on LPARs that are joined together in a Parallel Sysplex setup (sort-of clustering through the use of the Coupling Facilities of mainframe, which is supported through high-speed data links and services for handling data sharing and IPC across LPARs). My company switched to the z13 mainframe last year, and was able to keep core services available during the migration.
  • For Hadoop systems, the redundancy on system level is part of its design. Extending clusters, removing nodes, moving services, ... can be done with no impact. For instance, switching the active HiveServer2 instance means de-registering it in the ZooKeeper service. New client connects are then no longer served by that HiveServer2 instance, while active client connections remain until finished. There are also in-memory data grid solutions such as through the Ignite project, allowing for data sharing and IPC across nodes, as well as building up memory-based services with Arrow, allowing for efficient memory transfers.
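The HiveServer2 drain pattern described above — once an instance is de-registered, new clients no longer reach it while existing sessions run to completion — can be mimicked with a toy registry. This is a plain-Python illustration of the pattern only; it does not use ZooKeeper's actual API, and the instance names are made up.

```python
class ServiceRegistry:
    """Toy stand-in for a ZooKeeper-backed service registry."""
    def __init__(self):
        self.instances = set()   # instances accepting NEW connections
        self.active = {}         # instance -> count of in-flight sessions

    def register(self, inst):
        self.instances.add(inst)
        self.active.setdefault(inst, 0)

    def deregister(self, inst):
        # Stop routing new clients here; existing sessions keep running.
        self.instances.discard(inst)

    def connect(self):
        inst = sorted(self.instances)[0]   # pick any registered instance
        self.active[inst] += 1
        return inst

    def finish(self, inst):
        self.active[inst] -= 1

reg = ServiceRegistry()
reg.register("hs2-a"); reg.register("hs2-b")
s = reg.connect()            # lands on hs2-a (first in sorted order)
reg.deregister("hs2-a")      # rolling restart: drain hs2-a
print(reg.connect())         # new clients now only see hs2-b
reg.finish(s)                # the old session completes later
print(reg.active)            # {'hs2-a': 0, 'hs2-b': 1}
```

In the real setup, ZooKeeper's ephemeral nodes provide the registration, and the HiveServer2 JDBC driver does the instance discovery.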
Of course, application-level code failures also tend to disrupt only that application, and not the other users. Be it because of different address spaces and tight runtime control (mainframe) or the use of different containers/JVMs for the applications (Hadoop), this is a good trait to have (even though it is not something that differentiates these platforms from other platforms or operating systems).

Let's talk workloads

When we look at a mainframe setup, we generally look at different workload patterns as well. There are basically two main workload approaches for the mainframe: batch, and On-Line Transactional Processing (OLTP) workload. In the OLTP type, there is often an additional distinction between synchronous OLTP and asynchronous OLTP (usually message-based).

Well, we have the same on Hadoop. It was once a pure batch-driven platform (and many of its components are still using batches or micro-batches in their underlying designs) but now also provides OLTP workload capabilities. Most of the OLTP workload on Hadoop is in the form of SQL-like or NoSQL database management systems with transaction manager support though.

To manage these (different) workloads, and to deal with prioritization of the workload, both platforms offer the necessary services to make things both managed as well as business (or "fit for purpose") focused.

  • Using the Workload Manager (WLM) on the mainframe, policies can be set on the workload classes so that an over-demand of resources (cross-LPARs) results in the "right" amount of allocations for the "right" workload. To actually manage the jobs themselves, the Job Entry Subsystem (JES) receives jobs and schedules them for processing on z/OS. For transactional workload, WLM provides the right resources to, for instance, the involved IMS regions.
  • On Hadoop, workload management is done through Yet Another Resource Negotiator (YARN), which uses (logical) queues for the different workloads. Workload (Application Containers) running through these queues can be, resource-wise, controlled both on the queue level (high-level resource control) as well as on the process level (low-level resource control) through the use of Linux Control Groups (CGroups - when using Linux based systems, of course).
If I were to compare the two, one might say that YARN queues are like WLM's service classes, and for batch applications, the initiators on the mainframe are like the Application Containers within YARN queues. The latter can also be somewhat compared to IMS regions in the case of long-running Application Containers.

The comparison will not hold completely though. WLM can be tuned based on goals and will do dynamic decision making on the workloads depending on its parameters, and even do live adjustments on the resources (through the System Resources Manager - SRM). Heavy focus on workload management on mainframe environments is feasible because extending the available resources on mainframes is usually expensive (additional Million Service Units - MSU). On Hadoop, large cluster users who notice resource contention just tend to extend the cluster further. It's a different approach.
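To make the queue model concrete, here is a toy calculation of how capacity-scheduler-style queues split a cluster's resources by guaranteed capacity. The queue names and percentages are invented for this sketch; real YARN configuration lives in capacity-scheduler.xml and also covers elasticity (maximum-capacity), which is ignored here.

```python
def queue_allocations(total_vcores, capacities):
    """Split cluster vcores across queues by guaranteed capacity (percent).

    Mirrors the idea behind YARN CapacityScheduler guarantees;
    queue names and percentages below are made up for illustration.
    """
    return {q: total_vcores * pct // 100 for q, pct in capacities.items()}

# A hypothetical 400-vcore cluster with three workload queues.
caps = {"batch": 50, "oltp": 30, "adhoc": 20}
print(queue_allocations(400, caps))
# {'batch': 200, 'oltp': 120, 'adhoc': 80}
```

Unlike WLM's goal-based tuning, these guarantees are static shares; elasticity only lets a queue borrow idle capacity from its siblings.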

Files and file access

Another thing that tends to confuse some new users on Hadoop is its approach to files. But when you know some things about the mainframe, this does remain understandable.

Both platforms have a sort-of master repository where data sets (mainframe) or files (Hadoop) are registered in.

  • On the mainframe, the catalog translates data set names into the right location (or points to other catalogs that do the same)
  • On Hadoop, the Hadoop Distributed File System (HDFS) NameNode is responsible for tracking where files (well, blocks) are located across the various systems
Considering the use of the repository, both platforms thus require the allocation of files and offer the necessary APIs to work with them. But this small comparison does not end here.

Depending on what you want to store (or access), the file format you use is important as well.

  • On mainframe, Virtual Storage Access Method (VSAM) provides both the methods (think of it as an API) as well as the format for a particular data organization. Inside a VSAM data set, multiple data entries can be stored in a structured way. Besides VSAM, there is also Partitioned Data Set/Extended (PDSE), which is more like a directory of sorts. Regular files are Physical Sequential (PS) data sets.
  • On Hadoop, a number of file formats are supported which optimize the use of the files across the services. One is Avro, which holds both methods and format (not unlike VSAM); another is Optimized Row Columnar (ORC). HDFS also has a number of options that can be enabled or set on certain locations (HDFS uses a folder-like structure), such as encryption, or on files themselves, such as the replication factor.

Although I don't say VSAM versus Avro are very similar (Hadoop focuses more on the concept of files and then the file structure, whereas mainframe focuses on the organization and allocation aspect if I'm not mistaken) they seem to be sufficiently similar to get people's attention back on the table.
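As a quick illustration of the per-file replication factor mentioned above: with the common Hadoop 2.x defaults (128 MB block size, replication factor 3), the raw footprint of a file is easy to estimate. A back-of-the-envelope sketch:

```python
import math

def hdfs_raw_footprint(file_mb, block_mb=128, replication=3):
    """Estimate block count and raw storage for a file under HDFS replication.

    Uses the common Hadoop 2.x defaults (128 MB blocks, replication 3);
    actual values are per-cluster and per-file configuration.
    """
    blocks = math.ceil(file_mb / block_mb)
    return blocks, file_mb * replication

blocks, raw_mb = hdfs_raw_footprint(1000)   # a 1000 MB file
print(blocks, raw_mb)                       # 8 blocks, 3000 MB raw
```

This triple-storage cost is the flip side of the server-level redundancy discussed earlier: HDFS buys resilience on cheap hardware by spending disk.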

Services all around

What makes a platform tick is its multitude of supported services. And even here we can find similarities between the two platforms.

On mainframe, DBMS services can be offered by a multitude of software products. Relational DBMS services can be provided by IBM DB2, CA Datacom/DB, NOMAD, ... while other database types are covered by titles such as CA IDMS and ADABAS. All these titles build upon the capabilities of the underlying components and services to extend the platform's abilities.

On Hadoop, several database technologies exist as well. Hive offers a SQL layer on top of Hadoop managed data (so does Drill btw), HBase is a non-relational database (mainly columnar store), Kylin provides distributed analytics, MapR-DB offers a column-store NoSQL database, etc.

When we look at transaction processing, the mainframe platform shows its decades of experience with solutions such as CICS and IMS. Hadoop is still very much in its infancy here, but with projects such as Omid or commercial software solutions such as Splice Machine, transactional processing is coming here as well. Most of these are based on underlying database management systems which are extended with transactional properties.

And services that offer messaging and queueing are also available on both platforms: mainframe can enjoy Tibco Rendezvous and IBM WebSphere MQ, while Hadoop is hitting the news with projects such as Kafka and Ignite.

Services extend even beyond the ones that are directly user facing. For instance, both platforms can easily be orchestrated using workload automation tooling. Mainframe has a number of popular schedulers up its sleeve (such as IBM TWS, BMC Control-M or CA Workload Automation) whereas Hadoop is generally easily extended with the scheduling and workload automation software of the distributed world (which, given its market, is dominated by the same vendors, although many smaller ones exist as well). Hadoop also has its "own" little scheduling infrastructure called Oozie.

Programming for the platforms

Platforms, however, are more than just the sum of the services and properties that they provide. Platforms are used to build solutions on, and that is true for both mainframe as well as Hadoop.

Let's first look at scripting - using interpreted languages. On mainframe, you can use the Restructured Extended Executor (REXX) or CLIST (Command LIST). Hadoop gives you Tez and Pig, as well as Python and R (through PySpark and SparkR).

If you want to directly interact with the systems, mainframe offers the Time Sharing Option/Extensions (TSO/E) and Interactive System Productivity Facility (ISPF). For Hadoop, regular shells can be used, as well as service-specific ones such as Spark shell. However, for end users, web-based services such as Ambari UI (Ambari Views) are generally better suited.

If you're more fond of compiled code, mainframe supports you with COBOL, Java (okay, it's "a bit" interpreted, but also compiled - don't shoot me here), C/C++ and all the other popular programming languages. Hadoop builds on top of Java, but supports other languages such as Scala and allows you to run native applications as well - it's all about using the right APIs.

To support development efforts, Integrated Development Environments (IDEs) are provided for both platforms as well. You can use Cobos, Micro Focus Enterprise Developer, Rational Developer for System z, Topaz Workbench and more for mainframe development. Hadoop has you covered with web-based notebook solutions such as Zeppelin and JupyterHub, as well as client-level IDEs such as Eclipse (with the Hadoop Development Tools plugins) and IntelliJ.

Governing and managing the platforms

Finally, there is also the aspect of managing the platforms.

When working on the mainframe, management tooling such as the Hardware Management Console (HMC) and z/OS Management Facility (z/OSMF) cover operations for both hardware and system resources. On Hadoop, central management software such as Ambari, Cloudera Manager or Zettaset Orchestrator try to cover the same needs - although most of these focus more on the software side than on the hardware level.

Both platforms also have a reasonable use for multiple roles: application developers, end users, system engineers, database administrators, operators, system administrators, production control, etc., who all need some kind of access to the platform to support their day-to-day duties. And when you talk roles, you talk authorizations.

On the mainframe, the Resource Access Control Facility (RACF) provides access control and auditing facilities, and supports a multitude of services on the mainframe (such as DB2, MQ, JES, ...). Many major Hadoop services, such as HDFS, YARN, Hive and HBase support Ranger, providing a single pane for security controls on the Hadoop platform.

Both platforms also offer the necessary APIs or hooks through which system developers can fine-tune the platform to fit the needs of the business, or develop new integrated solutions - including security oriented ones. Hadoop's extensive plugin-based design (not explicitly named) or mainframe's Security Access Facility (SAF) are just examples of this.

Playing around

Going for a mainframe or a Hadoop platform will always be a management decision. Both platforms have specific roles and need particular profiles in order to support them. They are both, in my opinion, also difficult to migrate away from once you are really using them actively (lock-in) although it is more digestible for Hadoop given its financial implications.

Once you want to start meddling with it, getting access to a full platform used to be hard (though the coming age of cloud services means this is no longer the case), and both therefore had some potential "small deployment" uses. Mainframe experience could be gained through the Hercules 390 emulator, whereas most Hadoop distributions have a single-VM sandbox available for download.

Doing a full-scale roll-out, however, is much harder on your own. You'll need quite some experience, or even expertise, on so many levels that you will soon see that you need teams (plural) to get things done.

This concludes my (apparently longer than expected) write-down of this matter. If you don't agree, or are interested in some insights, be sure to comment!

Trojita 0.7 with GPG encryption is available

Postby via jkt's blog »

Trojitá, a fast Qt IMAP e-mail client, has a shiny new release. A highlight of the 0.7 version is support for OpenPGP ("GPG") and S/MIME ("X.509") encryption -- in a read-only mode for now. Here's a short summary of the most important changes:

  • Verification of OpenPGP/GPG and S/MIME/CMS/X.509 signatures and support for decryption of these messages
  • IMAP, MIME, SMTP and general bugfixes
  • GUI tweaks and usability improvements
  • Zooming of e-mail content and improvements for vision-impaired users
  • New set of icons matching the Breeze theme
  • Reworked e-mail header display
  • This release now needs Qt 5 (5.2 or newer, 5.6 is recommended)
As usual, the code is available in our git as a "v0.7" tag. You can also download a tarball (GPG signature). Prebuilt binaries for multiple distributions are available via the OBS, and so is a Windows installer.

The Trojitá developers

Microsoft Patches Dozens of Security Holes

Postby BrianKrebs via Krebs on Security »

Microsoft today released updates to address more than three dozen security holes in Windows and related software. Meanwhile, Adobe — which normally releases fixes for its ubiquitous Flash Player alongside Microsoft’s monthly Patch Tuesday cycle — said it’s putting off today’s expected Flash patch until the end of this week so it can address an unpatched Flash vulnerability that already is being exploited in active attacks.

Yes, that’s right it’s once again Patch Tuesday, better known to mere mortals as the second Tuesday of each month. Microsoft isn’t kidding around this particular Tuesday — pushing out 16 patch bundles to address at least 44 security flaws across Windows and related software.

The usual suspects earn “critical” ratings: Internet Explorer (IE), Edge (the new, improved IE), and Microsoft Office. Critical is Microsoft’s term for a flaw that allows the attacker to remotely take control over the victim’s machine without help from the victim, save for perhaps getting him to visit a booby-trapped Web site or load a poisoned ad in IE or Edge.

Windows home users aren’t the only ones who get to have all the fun: There’s plenty enough in today’s Microsoft patch batch to sow dread in any Windows system administrator, including patches that fix serious security holes in Windows SMB Server, Microsoft’s DNS Server, and Exchange Server.

I’ll put up a note later this week whenever Adobe releases the Flash update. For now, Kaspersky has more on the Flash vulnerability and its apparent use in active espionage attacks. As ever, if you experience any issues after applying any of today’s updates, please drop a note about it in the comments below.

Other resources: takes from the SANS Internet Storm Center, Qualys and Shavlik.

bsdfb platform plugin merged to Qt dev branch

Postby gonzo via FreeBSD developer's notebook | All Things FreeBSD »

A few weeks back, Ralf Nolden, who is the *BSD champion in the Qt community, urged me to clean up and submit my Qt5-related projects upstream, and the scfb platform plugin was picked as a test dummy. It took 12 iterations to get things right; along the way the plugin was renamed to bsdfb, but eventually the patch was merged.

The next two candidates are the bsdkeyboard and bsdsysmouse input plugins.

ATM Insert Skimmers In Action

Postby BrianKrebs via Krebs on Security »

KrebsOnSecurity has featured several recent posts on “insert skimmers,” ATM skimming devices made to fit snugly and invisibly inside a cash machine’s card acceptance slot. I’m revisiting the subject again because I’ve recently acquired how-to videos produced by two different insert skimmer peddlers, and these silent movies show a great deal more than words can tell about how insert skimmers do their dirty work.

Last month I wrote about an alert from ATM giant NCR Corp., which said it was seeing an increase in cash machines compromised by what it called “deep insert” skimmers. These skimmers can hook into little nooks inside the mechanized card acceptance slot, which is generally quite a bit wider than the width of an ATM card.

“The first ones were quite fat and were the same width of the card,” said Charlie Harrow, solutions manager for global security at NCR. “The newer ones are much thinner and sit right there where the magnetic stripe reader is.”

Operating the insert skimmer pictured in the video below requires two special tools that are sold with it: One to set the skimmer in place inside the ATM’s card acceptance slot, and another to retrieve it. NCR told me its technicians had never actually found any tools crooks use to install and retrieve the insert skimmers, but the following sales video produced by an insert skimmer vendor clearly shows a different tool is used for each job:


Same goes for a different video produced by yet another vendor of insert skimming devices:


Here’s a close-up of the insert skimmer pictured in the first sales video above:

An insert skimmer. Credit: Hold Security.
This video from another insert skimmer seller shows some type of tool I can’t quite make out that is used to retrieve the skimmer. It’s unclear if this one requires a second tool to install the device.

Skimmed card data lets you counterfeit new copies of the card, but to withdraw cash from ATMs using the counterfeit cards the crooks also need to somehow steal each customer’s PIN. That task usually falls to a false keypad or a hidden camera — the latter being far more common and cheaper. The seller of the insert skimmer pictured above also sells a hidden camera setup. Below is a false overhead panel, including a cannibalized videocamera that peeps through a tiny hole down at the ATM keypad.

The insert skimmer, sold alongside a hidden camera embedded within a false overhead panel.
Once you know about all the ways that skimmer thieves are coming up with to fleece the banks and consumers, it’s difficult not to go through life seeing every ATM as a potential zombie threat — banging and pulling on the poor machines, half expecting and half hoping parts to come unglued. I’m always disappointed, but it hasn’t stopped me all the same.

Truthfully, you probably have a better chance of getting physically mugged after withdrawing cash than you do encountering a skimmer in real life. So keep your wits about you when you’re at the ATM, and avoid dodgy-looking and standalone cash machines in low-lit areas, if possible. Stick to ATMs that are physically installed in a bank. And be especially vigilant when withdrawing cash on the weekends; thieves tend to install skimming devices on a weekend — when they know the bank won’t be open again for more than 24 hours.

Lastly but most importantly, covering the PIN pad with your hand defeats the hidden camera from capturing your PIN — and hidden cameras are used on the vast majority of the more than three dozen ATM skimming incidents that I’ve covered here. Shockingly, few people bother to take this simple, effective step, as detailed in this skimmer tale from 2012, wherein I obtained hours worth of video seized from two ATM skimming operations and saw customer after customer walk up, insert their cards and punch in their digits — all in the clear.

For more on how these insert skimmers work, check out Crooks Go Deep With ‘Deep Insert’ Skimmers. If you’re here because you find skimmers of all kinds fascinating, please see my series All About Skimmers.


VirtualBox Shared Folders: One VOP at a Time

Postby gonzo via FreeBSD developer's notebook | All Things FreeBSD »

Two months ago I tried to set up a dev environment using the FreeBSD Vagrant box, just to find out that FreeBSD does not support VirtualBox shared folders. After some googling I found Li-Wen Hsu’s github repository with some work in this area. Li-Wen and Will Andrews had already done a major chunk of the work: patches to the VirtualBox build system, a skeleton VFS driver, and an API to talk to the hypervisor, but they hit a block with some implementation details in VirtualBox’s virtual-memory compatibility layer. Will provided a very comprehensive analysis of the problem.

Li-Wen was occupied with some other projects, so he gave me his OK to work on shared folder support on my own. Will’s suggestion was easy to implement – lock only userland memory, like the Solaris driver does. The VFS part was more complicated though: fs nodes, vnodes, their lifecycle and locking are too hairy for drive-by hacking. I used tmpfs as a reference to learn some VFS magic, but a lot of things are still obscure. Nevertheless, after a few weeks of tinkering the first milestone has been achieved: I can mount/unmount a shared VirtualBox folder and navigate the mounted filesystem without an immediate kernel panic. Next goal (if time permits): a stable and non-leaking read-only filesystem.

IRS Re-Enables ‘Get Transcript’ Feature

Postby BrianKrebs via Krebs on Security »

The Internal Revenue Service has re-enabled a service on its Web site that allows taxpayers to get a copy of their previous year’s tax transcript. The renewed effort to beef up taxpayer authentication methods at IRS.gov comes more than a year after the agency disabled the transcript service because tax refund fraudsters were using it to steal sensitive data on consumers.

During the height of tax-filing season in 2015, KrebsOnSecurity warned that identity thieves involved in tax refund fraud with the IRS were using IRS.gov’s “Get Transcript” feature to glean salary and personal information they didn’t already have on targeted taxpayers. In May 2015, the IRS suspended the Get Transcript feature, citing its abuse by fraudsters and noting that some 100,000 taxpayers may have been victimized as a result.

In August 2015, the agency revised those estimates up to 330,000, but in February 2016, the IRS again more than doubled its estimate, saying the actual number of victims was probably closer to 724,000.

So exactly how does the new-and-improved Get Transcript feature validate that taxpayers who are requesting information aren’t cybercriminal imposters? According to the IRS’s Get Transcript FAQ, the visitor needs to supply a Social Security number (SSN) and have the following:

  • immediate access to your email account to receive a confirmation code;
  • name, birthdate, mailing address, and filing status from your most recent tax return;
  • an account number from either a credit card, auto loan, mortgage, home equity loan or home equity line of credit;
  • a mobile phone number with your name on the account.
“If you previously registered to use IRS Get Transcript Online, Identity Protection PIN, Online Payment Agreement, or ePostcard online services, log in with the same username and password you chose before,” the IRS said. “You’ll need to provide a financial account number and mobile phone number if you haven’t already done so.”

The agency said it will then verify your financial account number and mobile phone number with big-three credit bureau Equifax. Readers who have taken my advice and placed a security freeze on their credit files will need to request a temporary thaw in that freeze with Equifax before attempting to verify their identity with the IRS.

According to Federal Computer Week, central to the new setup will be knowledge-based authentication that uses supposedly harder-to-answer questions than the tests that led to the compromise of Get Transcript.

Mike Kasper, the tax fraud victim whose story ultimately earned him a chance to testify about the experience before the U.S. Senate Committee on Homeland Security & Governmental Affairs, called the new authentication methods a good step forward. But he worries that they will simply encourage tax refund thieves to commit more acts of identity theft in victims’ names.

“Looks like the investment for a $6,000 refund went from $10 to purchase credit data or now a card number for the victim, up to about $30 to buy a prepaid number although it’s probably even cheaper now,” Kasper said. “I think the ID thieves might simply open new cell phone or credit card accounts in the name of the victim or even keep changing the name on prepaid cell phone accounts acquired just for this purpose.”

Kasper notes that the same lame authentication methods that led to the Get Transcript debacle are still used by annualcreditreport.com, a site mandated by Congress as the only site where consumers can get their by-rights guaranteed free copy of their credit report from each of the major bureaus. Credit reports contain quite a bit of information that may allow thieves to glean the mobile and credit card account numbers for the taxpayers they’re targeting. The site asks consumers to provide a bunch of personal data that can be bought for about $3-$4 from cybercrime shops online — such as date of birth, Social Security number, address and previous addresses. It also asks the visitor to answer a series of so-called knowledge-based authentication (KBA) questions supplied by the credit bureaus.

These KBA questions — which involve four multiple choice, “out of wallet” questions such as previous address, loan amounts and dates — can be successfully enumerated with random guessing.  In many cases, the answers can be found by consulting free online services, such as Zillow and Facebook.
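To put a rough number on why random guessing works at scale, here is a back-of-the-envelope sketch in Python. The five-answer-choices-per-question figure and the batch size are illustrative assumptions, not details from any actual KBA quiz:

```python
from fractions import Fraction

# Back-of-the-envelope: odds that pure random guessing passes a KBA
# quiz of four multiple-choice questions.
# Assumption (illustrative only): 5 answer choices per question.
choices_per_question = 5
questions = 4

# Probability of guessing all four answers correctly in one attempt.
p_pass = Fraction(1, choices_per_question) ** questions
print(p_pass)  # 1/625 -- i.e. 0.16% per attempt

# A fraudster working through a batch of 100,000 stolen identities can
# expect this many KBA quizzes passed by blind guessing alone.
batch = 100_000
print(int(batch * p_pass))  # 160
```

A 1-in-625 pass rate sounds safe for one victim, but against a large pool of stolen identities it translates into hundreds of successful impersonations — which is why enumeration attacks against KBA are profitable.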

Fraudsters also may opt to simply phish the phone and credit card information from victims, or turn to criminal data brokers in the underground that specialize in selling these dossiers on consumers, Kasper said.

“The real question is, when will more banks start to check that the incoming transfer from the IRS is for an account under the name of an actual customer,” Kasper said. “Most banks do not do this, but even if they did that is not a complete solution unless they also know their customer. There were probably thousands of fraudulent tax refunds last year where the [perpetrators] just opened up bank accounts in other peoples’ names to receive a refund from the IRS. Because if you’re a thief and you open an account in the victim’s name, it’s a little harder to trace.”

Comet Coffee and Microbakery in Saint Louis, MO

Postby Zach via The Z-Issue »

As with most cities, Saint Louis has a plethora of places to get a cup of coffee and some pastries or treats. The vast majority of those places are good, some of them are great, even fewer are exceptional standouts, and the top-tier is comprised of those that are truly remarkable. In my opinion, Comet Coffee and Microbakery finds its way into that heralded top tier. What determines whether or not a coffee shop or café earns such high marks, you ask? Well, to some degree, that’s up to individual preference. For me, the criteria are:

  • A relaxing and inviting environment
  • Friendly and talented people
  • Exceptional food and beverage quality
First and foremost, I look for the environment that the coffee shop provides. Having a rather hectic work schedule, I like the idea of a place that I can go to unwind and just enjoy some of the more simplistic pleasures of life. Comet Coffee offers the small, intimate café-style setting that fits the bill for me. There are five or six individual tables and some longer benches inside, and a handful of tables outside on the front patio. Though the smaller space sometimes leads to crowding and a bit of a “hustle and bustle” feel, it doesn’t ever seem distracting. Also, during non-peak times, it tends to be quiet and peaceful.

Secondly, a café—or any other eatery, really—is about more than just the space itself. The employees make all the difference! At Comet Coffee, everyone is exceptionally talented in their craft, and it’s apparent that they deeply care about not only the customers they’re serving, but also the food and drinks that they’re making!

Gretchen starting a latte

Daniel making a pour-over coffee
Thirdly, it should go without saying that the food and drink quality are incredibly important factors for any café. At Comet, the coffee choices are seemingly limitless, so there are options that will satisfy any taste. Just in pour-overs alone, there are several different roasters (like Kuma, Sweet Bloom, Intelligentsia, Saint Louis’s own Blueprint, and others), from whom Comet offers an ever-changing list of varieties based on region (South American, African, et cetera). In addition to the pour-overs, there are many of the other coffee shop standards like lattes, espressos, macchiatos, cappuccinos, flat whites, and so on. Coffee’s not your thing? That’s fine too, because they have an excellent and extensive selection of teas, ranging from the standard blacks, whites, and Darjeelings, to less common Oolongs, and my personal favourite green tea, the Genmaicha, which combines delicate green tea flavours with toasted rice.

So between the coffees, espressos, and teas, you shouldn’t have any problem finding a beverage for any occasion or mood. But it isn’t just called “Comet Coffee”, it’s “Comet Coffee and Microbakery.” Though it almost sounds like an afterthought to the coffee, I assure you that the pastries and other baked goods share the stage as costars, and ones that often steal the show! There really isn’t a way for me to describe them that will do them justice. As someone who follows a rather rigid eating regimen, I won’t settle for anything less than stellar desserts and treats. That said, I’ve been blown away by every… single… one of them.


Oat Cookies

Strawberry Rhubarb Pie

Strawberry & Chocolate Ganache Macarons

Buckwheat muffin – Strawberry & Pistachio

Tomato, Basil, & Mozzarella Quiche

Chocolate Chip Cookies & Cocoa Nibblers
You should definitely click on each image to view them in full size

Though I like essentially all of the treats, I do have my favourites. I tend to get the Oat Cookies most often because they are simple and fulfilling. One time, though, I went to Comet on a Sunday afternoon and the only thing left was a Buckwheat Muffin. Knowing that they simply don’t make any bad pastries, I went for it. Little did I know that it would become my absolute favourite! The baked goods vary with the availability of seasonal and local ingredients. For instance, the spring iteration of the Buckwheat Muffin is Strawberry Pistachio, but the previous one (which was insanely delicious) was Milk Chocolate & Hazelnut (think Nutella done in the best possible way). One other testament to the quality of the treats is that Comet makes a few items that I have never liked anywhere else. For instance, I’m not a big fan of scones because they tend to be dry and often have the texture of coarse-grit sandpaper. However, the Lemon Poppy seed Scone and the Pear, Walnut & Goat Cheese Scone are both moist and satisfying. Likewise, I don’t really think much of Macarons, because they’re so light. These ones, however, have some substance to them and don’t make me think of overpriced cotton candy.

Okay, so now that I’ve sold you on Comet’s drinks and baked goods, here’s a little background about this great place. I recently had the opportunity to sit down with owners Mark and Stephanie, and talk with them about Comet’s past, current endeavours, and their future plans.


Coffee is about subtle nuances, and it can be continually improved upon. With all those nuances, I like it when one particular flavour note pops out.–Mark


Comet Coffee first opened its doors in August of 2012, and Mark immediately started renovating in order to align the space with his visions for the perfect shop. He and his fiancée Stephanie had worked together at Kaldi’s Coffee beforehand, but were inspired to open their own place. Between the two of them—Mark holding a degree in Economics, and Stephanie with degrees in both Hotel & Restaurant Management as well as Baking & Pastry Arts—the decision to foray into the industry together seemed like a given.

Mark had originally anticipated calling the shop “Demitasse,” after the small Turkish coffee cup by the same French name. Stephanie, though, did some research on the area of Saint Louis in which the shop is located (where the Forest Park Highlands amusement parks used to stand), and eventually found out about The Comet roller coaster. The name pays homage to those parks, and may have even been a little hopeful foreshadowing that the shop would become a well-established staple of the community.

When asked what separates Comet from other coffee shops, Mark readily mentioned that they themselves do not roast their own beans. He explained that doing so “requires purchasing [beans] in large quantities,” and that would disallow them from varying the coffee choices day-to-day. Similar to Mark’s comment about the quest of continuously improving the coffee experience, Stephanie indicated that the key to baking is to constantly modify the recipe based on the freshest available ingredients.


There are no compromises when baking. You must be meticulous with measurements, and you have to taste throughout the process to make adjustments.–Stephanie


Mark and Stephanie are currently in the process of opening an ice cream and bake shop in the Kirkwood area, and plan on carrying many of those items at Comet as well. Looking further to the future, Mark would like to open a doughnut shop where everything is made to order. His rationale—which, being a doughnut connoisseur myself, I find to be completely sound—is that everything fried needs to be as fresh as possible.

I, for one, can’t wait to try the new ice creams that Mark and Stephanie will offer in their Kirkwood location. For now, though, I will continue to enjoy the outstanding brews and unparalleled pastries at Comet Coffee. It has become a weekly go-to spot for me, and one that I look forward to greatly for unwinding after those difficult “Monday through Friday” stretches.

Macchiato and Seltzer

Pour-over brew

P.S. No time to leisurely enjoy the excellent café atmosphere? No worries, at least grab one to go. It will definitely beat what you can get from any of those chain coffee shops.


There’s the Beef: Wendy’s Breach Numbers About to Get Much Meatier

Postby BrianKrebs via Krebs on Security »

When news broke last month that the credit card breach at fast food chain Wendy’s impacted fewer than 300 out of the company’s 5,800 locations, the response from many readers was, “Where’s the Breach?” Today, Wendy’s said the number of stores impacted by the breach is “significantly higher” and that the intrusion may not yet be contained.

On January 27, 2016, this publication was the first to report that Wendy’s was investigating a card breach. In mid-May, the company announced in its first quarter financial statement that the fraud impacted just five percent of stores.

But since that announcement last month, a number of sources in the fraud and banking community have complained to this author that there was no way the Wendy’s breach only affected five percent of stores — given the volume of fraud that the banks have traced back to Wendy’s customers.

What’s more, some of those same sources said they were certain the breach was still ongoing well after Wendy’s made the five percent claim in May. In my March 02 piece Credit Unions Feeling Pinch in Wendy’s Breach, I quoted B. Dan Berger, CEO of the National Association of Federal Credit Unions, saying that he’d heard from three credit union CEOs who said the fraud they’ve experienced so far from the Wendy’s breach has eclipsed what they were hit with in the wake of the Home Depot and Target breaches.

Today, Wendy’s acknowledged in a statement that the breach is now expected to be “considerably higher than the 300 restaurants already implicated.” Company spokesman Bob Bertini declined to be more specific about the number of stores involved, citing an ongoing investigation. Bertini also declined to say whether the company is confident that the breach has been contained.

“Wherever we are finding it we’ve taken action,” he said. “But we can’t rule out that there aren’t others.”

Bertini said part of the problem was that the breach happened in two waves. He said the outside forensics investigators that were assigned to the case by the credit card associations initially found 300 locations that had malware on the point-of-sale devices, but that the company’s own investigators later discovered a different strain of the malware at some locations. Bertini declined to provide additional details about either of the malware strains found in the intrusions.

“In recent days, our investigator has identified this additional strain or mutation of the original malware,” he said. “It just so happens that this new strain targets a different point of sale system than the original one, and we just within the last few days discovered this.”

The company also emphasized that all of the breached stores were franchised — not company-run — entities. Here is the statement that Wendy’s provided to KrebsOnSecurity, in its entirety:

Based on the preliminary findings of the previously-disclosed investigation, the Company reported on May 11 that malware had been discovered on the point of sale (POS) system at fewer than 300 franchised North America Wendy’s restaurants. An additional 50 franchise restaurants were also suspected of experiencing, or had been found to have, other cybersecurity issues. As a result of these issues, the Company directed its investigator to continue to investigate.

In this continued investigation, the Company has recently discovered a variant of the malware, similar in nature to the original, but different in its execution. The attackers used a remote access tool to target a POS system that, as of the May 11th announcement, the Company believed had not been affected. This malware has been discovered on some franchise restaurants’ POS systems, and the number of franchise restaurants impacted by these cybersecurity attacks is now expected to be considerably higher than the 300 restaurants already implicated. To date, there has been no indication in the ongoing investigation that any Company-operated restaurants were impacted by this activity.

Many franchisees and operators throughout the retail and restaurant industries contract with third-party service providers to maintain and support their POS systems. The Company believes this series of cybersecurity attacks resulted from certain service providers’ remote access credentials being compromised, allowing access to the POS system in certain franchise restaurants serviced by those providers.

The malware used by attackers is highly sophisticated in nature and extremely difficult to detect. Upon detecting the new variant of malware in recent days, the Company has already disabled it in all franchise restaurants where it has been discovered, and the Company continues to work aggressively with its experts and federal law enforcement to continue its investigation.

Customers may call a toll-free number (888-846-9467) or send an email with specific questions.

Wendy’s statement that the attackers got access by stealing credentials that allowed remote access to point-of-sale terminals should hardly be surprising: The vast majority of the breaches involving restaurant and hospitality chains over the past few years have been tied to hacked remote access accounts that POS service providers use to remotely manage the devices.

Wednesday’s story about a point-of-sale botnet that has stolen at least 1.2 million credit cards from more than 100 Cici’s Pizza locations and other restaurants noted that Cici’s point-of-sale provider believes the attackers in this case used social engineering and remote access tools to compromise and maintain control over hacked cash registers.

Once the attackers have their malware loaded onto the point-of-sale devices, they can remotely capture data from each card swiped at that cash register. Thieves can then sell the data to crooks who specialize in encoding the stolen data onto any card with a magnetic stripe, and using the cards to buy gift cards and high-priced goods from big-box stores like Target and Best Buy.

Many retailers are now moving to install card readers that can handle transactions from more secure chip-based credit and debit cards, which are far more expensive for thieves to clone.

Gavin Waugh, vice president and treasurer at The Wendy’s Company, declined to say whether Wendy’s has any timetable for deploying chip-based readers across its fleet of stores — the vast majority of which are franchise operations.

“I don’t think that would have solved this problem, and it’s a bit of a misnomer,” Waugh said, in response to questions about plans for the deployment of chip-based readers across the company’s U.S. footprint. “I think it makes it harder [for the attackers], but I don’t think it makes it impossible.”

Avivah Litan, a fraud analyst with Gartner Inc., said chip readers at Wendy’s would help, but only if the company can turn them on to accept chip transactions. As I noted in February, although a large number of merchants have chip card readers in place, many still face delays in getting the systems up to snuff with the chip card standards.

Litan said the biggest bottleneck right now to more merchants accepting chip cards is first getting their new systems certified as compliant with the chip card standard (known as Europay, Mastercard and Visa or EMV). And the backlog among firms that certify retailers as EMV compliant is rapidly growing.

Litan said the reality is that chip cards will continue to have magnetic stripes on them for many years to come.

“Unless the mag stripe data is not transmitted anymore and you get rid of the mag stripe, there is always going to be card data compromised, stolen and counterfeited,” Litan said.

Update June 13, 9:49 a.m. ET: Added reference to interview with NAFCU CEO Berger.

Slicing Into a Point-of-Sale Botnet

Postby BrianKrebs via Krebs on Security »

Last week, KrebsOnSecurity broke the news of an ongoing credit card breach involving CiCi’s Pizza, a restaurant chain in the United States with more than 500 locations. What follows is an exclusive look at a point-of-sale botnet that appears to have enslaved dozens of hacked payment terminals inside of CiCi’s locations that are being relieved of customer credit card data in real time.

Over the weekend, I heard from a source who said that since November 2015 he’s been tracking a collection of hacked cash registers. This point-of-sale botnet currently includes more than 100 infected systems, and according to the administrative panel for this crime machine, at least half of the compromised systems are running a malicious Microsoft Windows process called cicipos.exe.

This admin panel shows the Internet address of a number of infected point-of-sale devices as of June 4, 2016. Many of these appear to be at Cici’s Pizza locations.
KrebsOnSecurity has not been able to conclusively tie the botnet to CiCi’s. Neither CiCi’s nor its outside public relations firm have responded to multiple requests for comment. However, the control panel for this botnet includes the full credit card number and name attached to the card, and several individuals whose names appeared in the botnet control panel confirmed having eaten at CiCi’s Pizza locations on the same date that their credit card data was siphoned by this botnet.

Among those was Richard Higgins of Prattville, Ala., whose card data was recorded in the botnet logs on June 4, 2016. Reached via phone, Higgins confirmed that he used his debit card to pay for a meal he and his family enjoyed at a CiCi’s location in Prattville on that same date.

An analysis of the botnet data reveals more than 100 distinct infected systems scattered across the country. However, the panel only displayed hacked systems that were presently reachable online, so the actual number of infected systems may be larger.

Most of the hacked cash registers map back to dynamic Internet addresses assigned by broadband Internet service providers, and those addresses provide little useful information about the owners of the infected systems — other than offering a general idea of the city and state tied to each address.

For example, the Internet address of the compromised point-of-sale system that stole Mr. Higgins’ card data maps back to an Earthlink system in a pool of IP addresses managed out of Montgomery, Ala.

Many of the botnet logs include brief notes or messages apparently left by CiCi’s employees for other employees. Most of these messages concern banal details about an employee’s shift, or issues that need to be addressed when the next employee shift comes in to work.

In total, there are more than 1.2 million unique credit and debit card numbers recorded in the botnet logs seen by this reporter. However, the total number of card accounts harvested by the cybercrooks in charge of this crime machine is probably far greater. That’s because the botnet logs go back to early April 2016, but it appears that someone reset and/or cleared those records prior to that date.

Only about half of the 1.2 million stolen accounts appear to have been taken from compromised CiCi’s locations. The majority of the other Internet addresses that appear in the bot logs could not be traced back to specific establishments. Others seem to be tied to individual businesses, including a cinema in Wallingford, Ct., a pizza establishment in Chicago (the famous Lou Malnati’s), a hotel in Pennsylvania, and a restaurant at a Holiday Inn hotel in Washington, D.C.

This particular point-of-sale botnet looks to be powered by Punkey, a POS malware strain first detailed last year by researchers at Trustwave Spiderlabs. According to Trustwave, Punkey includes a component that records keystrokes on the infected device, which may explain why short notes left by CiCi’s employees show up frequently in the bot logs alongside credit card data.

Although CiCi’s has remained silent so far, the company’s main point-of-sale service provider — Clearwater, Fla.-based Datapoint POS — told KrebsOnSecurity last week that the hackers behind this botnet used social engineering to trick employees into installing the malware, and that the breach impacted multiple other point-of-sale providers.

“All of these attacks have been traced to social engineering/Team Viewer breaches because stores from SEVERAL POS vendors let supposed techs in to conduct ‘support,'” said Stephen P. Warne, vice president of service and support, in an email to this author. “Nothing to do with any of our support mechanisms which are highly restricted and well within PCI Compliance.”

Point-of-sale based malware has driven most of the credit card breaches over the past two years, including intrusions at Target and Home Depot, as well as breaches at a slew of point-of-sale vendors. The malware usually is installed via hacked remote administration tools. Once the attackers have their malware loaded onto the point-of-sale devices, they can remotely capture data from each card swiped at that cash register.

Thieves can then sell the data to crooks who specialize in encoding the stolen data onto any card with a magnetic stripe, and using the cards to buy gift cards and high-priced goods from big-box stores like Target and Best Buy.

Readers should remember that they’re not liable for fraudulent charges on their credit or debit cards, but they still have to report the phony transactions. There is no substitute for keeping a close eye on your card statements. Also, consider using credit cards instead of debit cards; having your checking account emptied of cash while your bank sorts out the situation can be a hassle and lead to secondary problems (bounced checks, for instance).

PC-BSD @ South East Linux Fest

Postby Josh Smith via Official PC-BSD Blog »

Attention people on the east coast!  Make sure to stop by and see our booth at South East Linux Fest (SELF) June 10th – 12th if you’re going to be near the Charlotte area.  We’ll be handing out some awesome schwag and filling you in on all the awesome new features coming up in PC-BSD and FreeBSD.  Ken Moore and Joshua Smith will be giving BSD-focused talks this year.  Josh’s talk, “The PC-BSD Advantage,” will offer a bird’s-eye view of PC-BSD for new users and Linux users (as well as some really cool new features for long-term users!), and Ken Moore’s talk, “Lumina: Lighting the Way to the Future,” will detail the excellent Lumina desktop environment.  Ken will touch on the new sysadm utility and how we plan on integrating it within Lumina and PC-BSD.  We hope to see you there!

Password Re-user? Get Ready to Get Busy

Postby BrianKrebs via Krebs on Security »

In the wake of megabreaches at some of the Internet’s most-recognized destinations, don’t be surprised if you receive password reset requests from numerous companies that didn’t experience a breach: Some big name companies — including Facebook and Netflix — are in the habit of combing through huge data leak troves for credentials that match those of their customers and then forcing a password reset for those users.

Netflix sent out notices to customers who re-used their Netflix password at other sites that were hacked. This notice was shared by a reader who had re-used his Netflix password at one of the breached companies. Netflix, for example, sent out a notification late last week to users who made the mistake of re-using their Netflix password at Linkedin, Tumblr or MySpace. All three of those breaches are years old, but the scope of the intrusions (more than a half billion usernames and passwords leaked in total) only became apparent recently when the credentials were posted online at various sites and services.

“We believe your Netflix account credentials may have been included in a recent release of email addresses and passwords from an older breach at another company,” the message from Netflix reads. “Just to be safe, we’ve reset your password as a precautionary measure.”

The missive goes on to urge recipients to visit Netflix.com and click the “forgot your email or password” link to reset their passwords.

Netflix is taking this step because it knows from experience that cybercriminals will be using the credentials leaked from Tumblr, MySpace and LinkedIn to see if they work on a variety of third-party sites (including Netflix).

As I wrote last year in the aftermath of the AshleyMadison breach that exposed tens of millions of user credentials, Netflix’s forensics team has been using a tool that the company released in 2014 called Scumblr, which scours high-profile sites for specific terms and data.

“Some Netflix members have received emails encouraging them to change their account passwords as a precautionary measure due to the recent disclosure of additional credentials from an older breach at another internet company,” Netflix said in a statement released to KrebsOnSecurity. “Note that we are always engaged in these types of proactive security measures (leveraging Scumblr in addition to other mechanisms and data sources), not just in the case of major security breaches such as this one.”

Facebook also has been known to mine data leaked in major external password breaches for any signs that its users are re-using their passwords at the hacked entity. After a breach discovered at Adobe in 2013 exposed tens of millions of Adobe customer credentials, Facebook scoured the leaked Adobe password data for credential recycling among its users.

The last time I wrote about this preemptive security measure, many readers seemed to have hastily and erroneously concluded that whichever company is doing the alerting doesn’t properly secure its users’ passwords if it can simply compare them in plain text to leaked passwords that have already been worked out.

What’s going on here is that Facebook, Netflix, or any other company that wants to can take a corpus of leaked passwords that have already been guessed or cracked and simply hash those passwords with whatever one-way hashing mechanism(s) it uses internally. After that, it’s just a matter of finding any overlapping email addresses that use the same password.
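As a toy sketch of that comparison, the Python below hashes each already-cracked leaked password with the stored salt for the matching account and checks it against the stored hash. The salted-PBKDF2 scheme, the account names, and the helper functions here are illustrative assumptions, not any particular company’s implementation:

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # One-way hash; a real site would likely use bcrypt/scrypt/argon2.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def make_user(password: str):
    # Each user gets a random salt; only (salt, hash) is ever stored.
    salt = os.urandom(16)
    return salt, hash_password(password, salt)

# Hypothetical internal user table: email -> (salt, password hash).
users = {
    "alice@example.com": make_user("hunter2"),
    "bob@example.com": make_user("correct horse battery staple"),
}

# Corpus of already-cracked credentials from someone else's breach.
leaked = [
    ("alice@example.com", "hunter2"),
    ("carol@example.com", "letmein"),
]

def find_reusers(users, leaked):
    # Hash each leaked plaintext with the matching user's own salt and
    # compare against the stored hash -- the site never needs to store
    # or recover its own users' plaintext passwords.
    hits = []
    for email, plaintext in leaked:
        if email in users:
            salt, stored = users[email]
            if hash_password(plaintext, salt) == stored:
                hits.append(email)
    return hits

print(find_reusers(users, leaked))  # -> ['alice@example.com']
```

The key point the readers missed: the plaintexts being hashed come from the external leak, not from the company’s own database, so the check works even when the company stores only salted one-way hashes.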

Message that Facebook has used in the past to alert users who have re-used their Facebook passwords at other breached sites.

Gentoo, Openstack and OSIC

Postby Matthew Thode (prometheanfire) via Let's Play a Game »

What to use it for

I recently applied for, and received, an allocation from OSIC to extend support for running Openstack on Gentoo. The end goal of this is to allow Gentoo to become a gated job within the Openstack test infrastructure. To do that, we need to add support for building an image that can be used.


To speed up the work on adding support for generating an Openstack infra Gentoo image, I had already completed work on adding Gentoo to diskimage-builder. You can see images at


The actual work has been slow going, unfortunately; working with upstreams to add Gentoo support has tended to surface other issues that need fixing along the way. The main thing that slowed me down, though, was the Openstack (Newton) summit, which went on at the same time; reviews were delayed by at least a week, usually two.

Since then, though, I've been able to work through some of the issues and have started testing the final image build in diskimage-builder.

More to do

The main things left to do are to add Gentoo support to the bindep element within diskimage-builder and to smooth out any rough edges in other elements (if they exist). After that, Openstack Infra can start caching a Gentoo image and the real work can begin: adding Gentoo support to the Openstack Ansible project to allow for better deployments.

On the Road with the Cloudmaster

Postby via Blog Feed »

No, this has nothing to do with administering data clouds. Rather, it is about a classic of aviation history; more precisely, the Douglas DC-6B, nicknamed the "Cloudmaster", a flying legend from the era of the great propeller airliners with transatlantic range. Powered by four 2,400 hp Pratt & Whitney R-2800 Double Wasp twin-row radial engines with 18 cylinders each, the original delivers an impressive soundscape.

At least, that is how I experienced one of the last DC-6Bs ever built (N996DM of the Flying Bulls) at various airshows. PMDG has now turned her sister ship V5-NCG (in service in Namibia) into a model for X-Plane. For vintage fans it is definitely a treat, with a superb cockpit reproduction; and as the icing on the cake, (almost) all of the circuit breakers actually work. Compared with modern jets, this model still flies very hands-on: the fairly low cruise altitude (max. 25,000 feet) and the very leisurely climb and descent rates (typically around 500-800 feet/min) alone make for a completely different kind of flying.

The aging radial engines also demand a very different kind of attention than modern turbines: are the throttle valves set correctly, is ice forming in the carburetor, or have the thirsty beasts drained the oil sump again? Even so, I find the aircraft less lively than IXEG's Boeing 737-300. Its handling in the air is very smooth, the instruments always stay in sync, and all four engines behave exactly identically. On the ground the aircraft pushes hard over the nose wheel; evidently even PMDG has not fully managed to outsmart X-Plane's ground traction model (one of the simulator's few real weaknesses).

In the end there remains the question many people have been asking for a long time: if PMDG brought a model to market for X-Plane, would it be superior to all the other payware models? I think that, at least as of today, I can answer that question with no. The DC-6 is definitely among the best models for X-Plane I have ever had my hands on. But it does not leave other top add-ons such as the IXEG Boeing 737-300 or the Saab 340A from Leading Edge Simulations far behind; in my eyes, it is roughly on par with them.

Banks: Credit Card Breach at CiCi’s Pizza

Postby BrianKrebs via Krebs on Security »

CiCi’s Pizza, an American fast food business based in Coppell, Texas with more than 500 stores in 35 states, appears to be the latest restaurant chain to struggle with a credit card breach. The data available so far suggests that hackers obtained access to card data at affected restaurants by posing as technical support specialists for the company’s point-of-sale provider, and that multiple other retailers have been targeted by this same cybercrime gang.

Over the past two months, KrebsOnSecurity has received inquiries from fraud fighters at more than a half-dozen financial institutions in the United States — all asking if I had any information about a possible credit card breach at CiCi’s. Every one of these banking industry sources said the same thing: They’d detected a pattern of fraud on cards that had all been used in the last few months at various CiCi’s Pizza locations.

Earlier today, I finally got around to reaching out to the CiCi’s headquarters in Texas and was referred to a third-party restaurant management firm called Champion Management. When I called Champion and told them why I was inquiring, they said “the issue” was being handled by an outside public relations firm called SPM Communications.

I never did get a substantive response from SPM, which according to their email and phone messages closes at 1 pm on Fridays during the summer. So I decided to follow up on a tip I’d received from a fraud fighter at one affected bank who said they’d heard from the U.S. Secret Service that the fraud was related to a breach or security weakness at Datapoint (CiCi’s point-of-sale provider).

Incredibly, I went to look up the contact information for datapoint[dot]com, and found that Google was trying to prevent me from visiting this site: According to the search engine giant, Datapoint’s Web site appears to be compromised! It appears Google has listed the site as hacked and that it was once abused by spammers to promote knockoff male enhancement pills. 

Google thinks Datapoint’s Web site is trying to foist malicious software.
A quick look at Datapoint’s site via a virtual machine-protected Linux browser indicates that CiCi’s Pizza is indeed one of the company’s largest clients. The Secret Service did not immediately respond to requests for comment.

Undeterred, I phoned and emailed Datapoint, and heard back via email from Stephen P. Warne, vice president of service and support for the company. Warne said I was jumping to conclusions and that my “sources” must have had a beef with the company. Here’s his email to me, verbatim:

If you did indeed talk to the Secret Service you would know that the breaches they have investigated involved multiple POS vendors in one particular franchise, including Harbortouch and Granbury Restaurant Systems.

You would also know that not one Agent we spoke and cooperated with came to any conclusion of wrong doing on our part after scans months ago. The SS actually helped point out that these hackers used among Team Viewer, Screen Connect and some others they installed.

All of these attacks have been traced to social engineering/Team Viewer breaches because stores from SEVERAL POS vendors let supposed techs in to conduct ‘support’. Nothing to do with any of our support mechanisms which are highly restricted and well within PCI Compliance.

I won’t say much else on this as this is not a Datapoint breach. We just happened to have by far the most systems in that particular franchise overwhelmingly.

Interestingly, this apparent breach comes to light amid a great deal of speculation on Reddit and other places online about a possible data breach at Teamviewer. The idea that countless credit card terminals or cash registers at CiCi’s Pizza establishments and other businesses could have been compromised by cybercriminals who simply phoned up the establishments posing as tech support technicians for various point-of-sale vendors is remarkable (and frankly pretty ingenious).

I’ll no doubt have updates to this story as the weekend progresses. Stay tuned.

Update, June 4, 5:01 p.m. ET: Edited the sentence about Google’s listing the site as hacked.

Dropbox Smeared in Week of Megabreaches

Postby BrianKrebs via Krebs on Security »

Last week, LifeLock and several other identity theft protection firms erroneously alerted their customers to a breach at cloud storage giant Dropbox — an incident that reportedly exposed some 73 million usernames and passwords. The only problem with that notification was that Dropbox didn’t have a breach; the data appears instead to have come from another breach revealed this week at social network Tumblr.

Today’s post examines some of the missteps that preceded this embarrassing and potentially brand-damaging “oops.” We’ll also explore the limits of automated threat intelligence gathering in an era of megabreaches like the ones revealed over the past week that exposed more than a half billion usernames and passwords stolen from Tumblr, MySpace and LinkedIn.

The credentials leaked in connection with breaches at those social networking sites were stolen years ago, but the full extent of the intrusions only became clear recently — when several huge archives of email addresses and hashed passwords from each service were posted to the dark web and to file-sharing sites.

Last week, a reader referred me to a post by a guy named Andrew on the Dropbox help forum. Andrew said he’d just received alerts blasted out by two different credit monitoring firms that his Dropbox credentials had been compromised and were found online (see screenshot below).

A user on the dropbox forum complains of receiving alerts from separate companies warning of a huge password breach at
Here’s what LifeLock sent out on May 23, 2016 to many customers who pay for the company’s credential recovery services:

Alert Date: 05-23-2016
Alert Type: Monitoring
Alert Category: Internet-Black Market Website
**Member has received a File Sharing Network alert Email: *****
Password: ****************************************
Where your data was found: social media
Type of Compromise: breach
Breached Sector: business
Breached Site:
Breached Record Count: 73361477
Password Status: hashed
Severity: red|email,password

LifeLock said it got the alert data via an information sharing agreement with a third party threat intelligence service, but it declined to name the service that sent the false positive alert.

“We can confirm that we recently notified a small segment of LifeLock members that a version of their credentials were detected on the internet,” LifeLock said in a written statement provided to KrebsOnSecurity. “When we are notified about this type of information from a partner, it is usually a “list” that is being given away, traded or sold on the dark web. The safety and security of our members’ data is our highest priority. We are continuing to monitor for any activity within our source network. At this time, we recommend that these LifeLock members change their Dropbox password(s) as a precautionary measure.”

Dropbox says it didn’t have a breach, and if it had the company would be seeing huge amounts of account checking activity and other oddities going on right now. And that’s just not happening, they say.

“We have learned that LifeLock and are reporting that Dropbox account details of some of their customers are potentially compromised,” said Patrick Heim, head of trust and security at Dropbox. “An initial investigation into these reports has found no evidence of Dropbox accounts being impacted. We’re continuing to look into this issue and will update our users if we find evidence that Dropbox accounts have been impacted.”


After some digging, I learned that the bogus attribution of the Tumblr breach to Dropbox came from CSID, an identity monitoring firm that is in the midst of being acquired by credit bureau giant Experian.

Fascinated by anything related to security and false positives, I phoned Bryan Hjelm, vice president of product and marketing for CSID. Hjelm took issue with my classifying this as a threat intel false positive, since from CSID’s perspective the affected individual customers were in fact alerted that their credentials were compromised (just not their Dropbox credentials).

“Our mandate is to alert our client subscribers when we find their information on the darkweb,” Hjelm said. “Regardless of the source, this is compromised data that belongs to them.”

Hjelm acknowledged that CSID was “experiencing some reputational concerns” from Dropbox and others as a result of its breach mis-attribution, but he said the incident was the first time this kind of snafu has occurred for CSID.

I wanted to know exactly how this could have happened, so I asked Hjelm to describe what transpired in more detail. He told me that CSID relies on a number of sources online who have been accurate, early indicators of breaches past. One such actor — a sort of cyber gadfly best known by his hacker alias “w0rm” — had proven correct in previous posts on Twitter about new data breaches, Hjelm said.

In this case, w0rm posted to Twitter a link to download a file containing what he claimed were 100M records stolen from Dropbox. Perhaps one early sign that something didn’t quite add up is that the download he linked to as the Dropbox user file actually only included 73 million usernames and passwords.

In any case, CSID analysts couldn’t determine one way or the other whether it actually was Dropbox’s data. Nonetheless, they sent it out as such anyway, based on little more than w0rm’s say-so.

w0rm’s advertisement of the claimed Dropbox credentials.
Hjelm said his analysts never test the validity of stolen credentials they’re harvesting from the dark web (i.e. they don’t try to log in using those credentials to see if they’re valid). But he said CSID may take steps such as attempting to crack some of the hashed passwords to see whether a preponderance of them point to a certain online merchant or social network.

In the LinkedIn breach involving more than 100 million stolen usernames and passwords, for example, investigators were able to connect a corpus of hashed passwords posted on a password cracking forum to LinkedIn because a large number of users in the hashed password list had a password with some form of “linkedin” in it.
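
A toy sketch of that heuristic in Python. LinkedIn stored unsalted SHA-1 hashes, which is what made dictionary cracking of the dump practical; the sample passwords and wordlist below are made up for illustration:

```python
import hashlib

def sha1_hex(password: str) -> str:
    # Unsalted SHA-1, as used in the actual LinkedIn dump.
    return hashlib.sha1(password.encode()).hexdigest()

# A few hashes as they might appear in a leaked dump (plaintexts unknown).
dump = [sha1_hex(p) for p in
        ["linkedin2012", "mylinkedin", "hunter2", "linkedin!"]]

# Small candidate wordlist for a dictionary attack (illustrative only).
wordlist = ["linkedin2012", "mylinkedin", "linkedin!", "password", "qwerty"]
lookup = {sha1_hex(w): w for w in wordlist}

# Crack what we can, then see what fraction references the suspected site.
cracked = [lookup[h] for h in dump if h in lookup]
share = sum("linkedin" in p.lower() for p in cracked) / max(len(cracked), 1)
print(cracked, share)
```

If a large share of the recoverable plaintexts contain the site's name, that is strong circumstantial evidence of where the dump came from, even without the breached company confirming anything.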

I asked CSID whether its researchers took the basic step of attempting to register accounts at the suspected breached service using the email addresses included in the supposed user data dump. As I discussed in the post How to Tell Data Leaks from Publicity Stunts, most online services do not allow two different user accounts to have the same email address, so attempting to sign up for an account using an email address in the claimed leak data is an effective way to test leak claims. If a large number of email addresses in the claimed leak list do not already have accounts associated with them at the allegedly breached Web site, the claim is almost certainly bogus.
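
A rough sketch of how that registration test might be scored; `check_email_taken` below is a hypothetical probe standing in for the manual "try to sign up with this address" step, not a real API:

```python
def estimate_leak_validity(leaked_emails, check_email_taken):
    """Fraction of leaked addresses that already have an account at the
    allegedly breached service. A low fraction suggests a bogus claim."""
    taken = sum(1 for e in leaked_emails if check_email_taken(e))
    return taken / len(leaked_emails)

# Toy stand-in probe: pretend only example.com addresses have accounts.
fake_probe = lambda email: email.endswith("@example.com")

emails = ["a@example.com", "b@example.com", "c@other.net", "d@other.net"]
score = estimate_leak_validity(emails, fake_probe)
print(score)  # 0.5 -> half the addresses have no account; claim looks dubious
```

Since most services refuse to register a second account under an existing address, the probe works without ever touching the accounts themselves.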

Hjelm said CSID doesn’t currently use this rather manual technique, but that the company is open to suggestions about how to improve the accuracy of their breach victim attribution. He said CSID only started providing attribution information about a year ago because clients were demanding it.

Allison Nixon, a cybercrime researcher and director of security research at dark web monitoring firm Flashpoint, was the genesis of that aforementioned story about data leaks vs. publicity stunts. She’s done more research than anyone I know to date on ways to quickly tell whether a claimed breach is real, and how to source it. Nixon said automating threat intel only goes so far.

“In general, the skill of human skepticism performed today by threat intelligence experts is extremely difficult to automate,” Nixon said. “Even with advancements in cognitive and artificial intelligence technologies, humans will still and always be needed to validate the nuances associated with accurate intelligence. Security experts must be intimately involved in the fact checking process of threat intelligence, or otherwise, will run the risk of losing valuable time, resources and possibly even more, by validating false information perceived as accurate by automated technologies.”

Flashpoint found that, on closer examination, the file w0rm leaked maps back to a recycled 2013 breach at Tumblr.

There is no question w0rm has a history of sharing real dumps. But according to Flashpoint that reputation must be taken with a grain of salt because even though the dumps are real, they are usually publicly available yet are portrayed by w0rm as evidence of his hacking proficiency.

In short: The intended victim of guys like w0rm is probably other cybercriminals, but threat intel companies can get caught up in this as well.

Many readers have asked me to weigh in on reports of a possible breach at Teamviewer, a service that lets users share their desktops, audio chat and other applications with friends and contacts online. Teamviewer has so far denied experiencing a breach.

My guess is that a large number of Teamviewer users either re-used passwords at some of the social networking services whose usernames and hashed passwords were posted online this week, or they are Teamviewer users who unfortunately were caught up in the day-to-day churn of systems compromised through other malware. In any case, there is a lengthy thread on Reddit populated by Teamviewer users who mostly claim they didn’t re-use their Teamviewer password anywhere else.

It’s interesting to note that early versions of remote access Trojans like Zeus contained a Teamviewer-like component called “backconnect” that let the attackers use the systems much like Teamviewer enables its users. These days, however, cybercriminals often forgo that homegrown backconnect feature and rely instead on either equipping the victim with a Teamviewer account and/or hijacking the victim’s existing Teamviewer account credentials, and then exfiltrating stolen credentials and other data through a Teamviewer installation. Hence, a compromise of one’s Teamviewer account may indicate that the victim’s system already is compromised by sophisticated Windows-based malware.

For its part, Dropbox is using this opportunity to encourage users to beef up the security of their accounts. According to Dropbox’s Patrick Heim, less than one percent of the Dropbox user base is taking advantage of the company’s two-factor authentication feature, which makes it much harder for thieves and other ne’er-do-wells to use stolen passwords.

“In matters of security, we always suggest users take an abundance of caution and reset their passwords if they receive any notification of a potential compromise,” Heim said. “Dropbox strongly encourages individuals use strong and unique passwords for each service.  We also encourage Dropbox users to enable two-factor authentication to further protect their account.”

I hope it goes without saying that re-using passwords across multiple sites that may hold personal information about you is an extremely bad idea. If you’re guilty of this apparently common practice, please change that. If you need some inspiration on this front, check out this post.

Mir Islam – the Guy the Govt Says Swatted My Home – to be Sentenced June 22

Postby BrianKrebs via Krebs on Security »

On March 14, 2013 our humble home in Annandale, Va. was “swatted” — that is to say, surrounded by a heavily-armed police force that was responding to fraudulent reports of a hostage situation at our residence. Later this month the government will sentence a 21-year-old hacker named Mir Islam for that stunt and for leading a criminal conspiracy allegedly engaged in a pattern of swatting, identity theft and wire fraud.

Mir Islam
Mir Islam briefly rose to Internet infamy as one of the core members of UGNazi, an online mischief-making group that claimed credit for hacking and attacking a number of high-profile Web sites.

On June 25, 2012, Islam and nearly two-dozen others were caught up in an FBI dragnet dubbed Operation Card Shop. The government accused Islam of being a founding member of carders[dot]org — a credit card fraud forum — trafficking in stolen credit card information, and possessing information for more than 50,000 credit cards.

Most importantly for the government, however, Islam was active on CarderProfit, a carding forum created and run by FBI agents.

Islam ultimately pleaded guilty to aggravated identity theft and conspiracy to commit computer hacking, among other offenses tied to his activities on CarderProfit. In March 2016 a judge for the Southern District of New York sentenced (PDF) Islam to just one day in jail, a $500 fine, and three years of probation.

Not long after Islam’s plea in New York, I heard from the U.S. Justice Department. The DOJ told me that I was one of several swatting victims of Mir Islam, who was awaiting sentencing after pleading guilty to leading a cybercrime conspiracy. Although that case remains sealed — i.e. there are no documents available to the press or the public about the case — the government granted a waiver that allows the Justice Department to contact victims of the accused and to provide them with an opportunity to attend Islam’s sentencing hearing — and even to address the court.

Corbin Weiss, an assistant US attorney and a cybercrime coordinator with the Department of Justice, said Islam pleaded guilty to one count of conspiracy, and that the objects of that conspiracy were seven:

-identity theft;
-misuse of access devices;
-misuse of Social Security numbers;
-computer fraud;
-wire fraud;
-attempts to interfere with federal officials;
-interstate transmission of threats.

Weiss said my 2013 blog post about my swatting incident — The World Has No Room for Cowards — was part of the government’s “statement of offense” or argument before the court as to why a given suspect should be arrested and charged with a violation of law.

“Your swatting is definitely one of the incidents specifically brought to the attention of the court in this case,” Weiss said. “In part because we didn’t have that many swat victims who were able to describe to us the entire process of their victimization. Your particular swat doesn’t fit neatly within any of those charges, but it was part of the conspiracy to engage in swats and some of the swats are covered by those charges.”

Fairfax County Police outside my home on 3/14/13
Weiss said while the Justice Department prosecutors couldn’t stop me from writing about the case before Islam’s sentencing (and the subsequent unsealing of the case), the government would almost certainly prefer it that way. I thanked him and said that while I might be a victim in this case, I’m a journalist first.

I’m gratified to see the wheels of justice turning, and that swatting is being creatively addressed with federal felony charges in the absence of a federal anti-swatting law.

The Interstate Swatting Hoax Act of 2015, introduced by Rep. Katherine Clark (D-Mass.) and Rep. Patrick Meehan (R-PA), was passed by the House Energy & Commerce Committee in April 2016. It would impose up to a 20-year prison sentence and heavy fines for swatting. According to the FBI, swatting incidents cost local first responders $10,000 on average and divert important services away from real emergencies.

The Swatting Hoax Act targets what proponents call a loophole in current law. “While federal law prohibits using the telecommunications system to falsely report a bomb threat hoax or terrorist attack, falsely reporting other emergency situations is not currently prohibited,” reads a statement by the House co-sponsors.

To address this shortcoming, the bill “would close this loophole by prohibiting the use of the internet telecommunications system to knowingly transmit false information with the intent to cause an emergency law enforcement response.”

Explicitly making swatting a federal crime is a good first step, but unfortunately a great many people launching swatting attacks are minors, and the federal law enforcement system is simply not built to handle minors (with few exceptions).

By way of example, one of Islam’s (a.k.a. “Josh the God”) best buddies — a then-16-year-old hacker named Cosmo the God — also was involved in my swatting as well as the CarderProfit sting. But it’s unclear whether he is tied to the Islam conspiracy. The DOJ’s Weiss said he couldn’t talk about any others associated with the case who were minors.

“Other individuals who may have been involved were juveniles when they committed the offenses, and those [cases] are going to remain under seal,” he said. “Victims have far fewer rights with respect to juveniles.”

Mir Islam is slated to be sentenced in Washington, D.C. on June 22. Weiss said the judge presiding over the case can sentence him to a maximum of five years in prison.

This summer promises to be a good one for closure. Sergey Vovnenko, another convicted cybercriminal who sought to cause trouble for this author (by trying to frame me for heroin possession) is slated to be sentenced in New Jersey in August on unrelated cybercrime charges.

bsdtalk265 - Sunset on BSD

Postby Mr via bsdtalk »

A brief description of playing around with SunOS 4.1.4, which was the last version of SunOS to be based on BSD.

File Info: 17Min, 8Mb

Ogg Link:

Got $90,000? A Windows 0-Day Could Be Yours

Postby BrianKrebs via Krebs on Security »

How much would a cybercriminal, nation state or organized crime group pay for blueprints on how to exploit a serious, currently undocumented, unpatched vulnerability in all versions of Microsoft Windows? That price probably depends on the power of the exploit and what the market will bear at the time, but here’s a look at one convincing recent exploit sales thread from the cybercrime underworld where the current asking price for a Windows-wide bug that allegedly defeats all of Microsoft’s current security defenses is USD $90,000.

So-called “zero-day” vulnerabilities are flaws in software and hardware that even the makers of the product in question do not know about. Zero-days can be used by attackers to remotely and completely compromise a target — such as with a zero-day vulnerability in a browser plugin component like Adobe Flash or Oracle’s Java. These flaws are coveted, prized, and in some cases stockpiled by cybercriminals and nation states alike because they enable very stealthy and targeted attacks.

The $90,000 Windows bug that went on sale at the semi-exclusive Russian language cybercrime forum exploit[dot]in earlier this month is in a slightly less serious class of software vulnerability called a “local privilege escalation” (LPE) bug. This type of flaw is always going to be used in tandem with another vulnerability to successfully deliver and run the attacker’s malicious code.

LPE bugs can help amplify the impact of other exploits. One core tenet of security is limiting the rights or privileges of certain programs so that they run with the rights of a normal user — and not under the all-powerful administrator or “system” user accounts that can delete, modify or read any file on the computer. That way, if a security hole is found in one of these programs, that hole can’t be exploited to worm into files and folders that belong only to the administrator of the system.

This is where a privilege escalation bug can come in handy. An attacker may already have a reliable exploit that works remotely — but the trouble is his exploit only succeeds if the current user is running Windows as an administrator. No problem: Chain that remote exploit with a local privilege escalation bug that can bump up the target’s account privileges to that of an admin, and your remote exploit can work its magic without hindrance.

The seller of this supposed zero-day — someone using the nickname “BuggiCorp” — claims his exploit works on every version of Windows from Windows 2000 on up to Microsoft’s flagship Windows 10 operating system. To support his claims, the seller includes two videos of the exploit in action on what appears to be a system that was patched all the way up through this month’s (May 2016) batch of patches from Microsoft (it’s probably no accident that the video was created on May 10, the same day as Patch Tuesday this month).

A second video (above) appears to show the exploit working even though the test machine in the video is running Microsoft’s Enhanced Mitigation Experience Toolkit (EMET), a free software framework designed to help block or blunt exploits against known and unknown Windows vulnerabilities and flaws in third-party applications that run on top of Windows.

The sales thread on exploit[dot]in.
Jeff Jones, a cybersecurity strategist with Microsoft, said the company was aware of the exploit sales thread, but stressed that the claims were still unverified. Asked whether Microsoft would ever consider paying for information about the zero-day vulnerability, Jones pointed to the company’s bug bounty program that rewards security researchers for reporting vulnerabilities. According to Microsoft, the program to date has paid out more than $500,000 in bounties.

Microsoft heavily restricts the types of vulnerabilities that qualify for bounty rewards, but a bug like the one on sale for $90,000 would in fact qualify for a substantial bounty reward. Last summer, Microsoft raised its reward for information about a vulnerability that can fully bypass EMET from $50,000 to $100,000. Incidentally, Microsoft said any researcher with a vulnerability or who has questions can reach out to the Microsoft Security Response Center to learn more about the program and process.


It’s interesting that this exploit’s seller could potentially make more money by peddling his find to Microsoft than to the cybercriminal community. Of course, the videos and the whole thing could be a sham, but that’s probably unlikely in this case. For one thing, a scammer seeking to scam other thieves would not insist on using the cybercrime forum’s escrow service to consummate the transaction, as this vendor has.

As I noted in my book Spam Nation, cybercrime forums run on reputation-based systems similar to eBay’s “feedback” mechanism — in the form of reputation points granted or revoked by established members. Rookie and established members alike are all encouraged to use the forum’s “escrow” system to ensure transactions are completed honorably among thieves.

The escrow service can act as a sort of proxy for reputation. The forum administrators hold the buyer’s money in escrow until the seller can demonstrate he has held up his end of the bargain, be it delivering the promised goods, services or crypto-currency. The forum admins keep a small percentage of the overall transaction amount (usually in Bitcoins) for acting as the broker and insurer of the transaction.

Thus, if a member states up front that he’ll only work through a crime forum’s escrow service, that member’s cybercriminal pitches are far more likely to be taken seriously by others on the forum.

Security researchers at Trustwave first pointed my attention to the exploit[dot]in zero-day sales thread last week. Ziv Mador, vice president of security research at Trustwave, said he believes the exploit is legitimate.

“It seems the seller has put in the effort to present himself/herself as a trustworthy seller with a valid offering,” he said. Mador noted Trustwave can’t be 100% certain of the details without the vulnerability in their possession, but that the videos and translation provide further evidence. The company has published more detail on the sales thread and the claimed capabilities of the exploit.

Is $90,000 the right price for this vulnerability? Depends on whom you ask. For starters, not everyone values the same types of exploits similarly. For example, the vulnerability prices listed by exploit broker Zerodium indicate that the company places a far lesser value on exploits in the Windows operating system and far more on vulnerabilities in mobile systems and Web browser components. Zerodium says the price it might be willing to pay for a similar Windows exploit is about $30,000, whereas a critical bug in Apple’s iOS mobile operating system could fetch up to $100,000.

Vlad Tsyrklevich, a researcher who’s published quite a bit about the shadowy market for zero-day exploits, says price comparisons for different exploits should be taken with a grain of salt. In his analysis, Tsyrklevich points to a product catalog from exploit vendor Netragard, which in 2014 priced a non-exclusive Windows LPE vulnerability at $90,000.

“Exploit developers have an incentive to state high prices and brokers offer to sell both low-quality and high-quality exploits,” Tsyrklevich wrote. “If a buyer negotiates poorly or chooses a shoddy exploit, the vendor still benefits. Moreover, it’s difficult to compare the reliability and projected longevity of vulnerabilities or exploits offered by different developers. Many of the exploits offered by exploit brokers are not sold.”

BuggiCorp, the seller of the Windows LPE zero-day flaw, was asked by several forum members whether his zero-day was related to a vulnerability that Microsoft patched on April 12, 2016. BuggiCorp responded that his is different. But as documented by security vendor FireEye, that flaw was a similar LPE vulnerability that FireEye said was featured in a series of spear phishing attacks aimed at gaining access to point-of-sale systems in targeted retail, restaurant and hospitality industries. FireEye called the downloader used in those attacks “Punchbuggy,” but it did not specify why it chose that name.

If nothing else, this zero-day thread is an unusual sight on such an open cybercrime forum, Trustwave’s Mador said.

“Finding a zero day listed in between these fairly common offerings is definitely an anomaly,” he said. “It goes to show that zero days are coming out of the shadows and are fast becoming a commodity for the masses, a worrying trend indeed.”

Updating the broadcom driver part #2

Postby Adrian via Adrian Chadd's Ramblings »

In Part 1, I described updating the FreeBSD bwn(4) driver and adding some support for the PHY-N driver from b43. It's GPL, but it works, and it gets me over the initial hump of getting support for updated NICs and initial 5GHz operation.

In this part, I'll describe what I did to tidy up RSSI handling and bring up the BCM4322 support.

To recap - I ported over PHY-N support from b43, updated the SPROM handling in the bus glue (siba(4)), and made 11a OFDM transmission work. I was lucky - I chose the first 11n, non-MIMO NIC that Broadcom made which behaved sufficiently similarly to the previous 11abg generation. It was non-MIMO and I could run non-MIMO microcode, which already shipped with the existing FreeBSD firmware builds. But, the BCM4322 is a 2x2 MIMO device, and requires updated firmware, which brought over a whole new firmware API.

Now, bwn(4) handles the earlier two firmware interfaces, but not the newer one that b43 also supports. I chose the BCM4321 because it didn't require firmware API changes or any changes in the Broadcom siba(4) bus layer, so I could focus on porting the PHY-N code and updating the MAC driver to work. This neatly compartmentalised the problem so I wouldn't be trying to make a completely changed thing work and spending days chasing down obscure bugs.

The BCM4322 is a bit of a different beast. It uses PHY-N, which is good. It requires that the transmit path set up the PLCP header bits for OFDM (ie, 11a, 11g) to work, which I had already done for the BCM4321, so that's good. But, it required firmware API changes, and it required siba(4) changes. I decided to tackle the firmware changes first, so I could at least get the NIC loaded and ready.

So, I first fixed up the RX descriptor handling, and found that we were missing a whole lot of RSSI calculation math. I dutifully wrote it down on paper and reimplemented it from b43. That provided some much better looking RSSI values, which made the NIC behave much better. The existing bwn(4) driver just didn't decode the RSSI values in any sensible way and so some Very Poor Decisions were made about which AP to associate to.

Next up, the firmware API. I finished adding the new structure definitions and updating the descriptor sizes/offsets. There were a couple of new things I had to handle for later chip revision devices, and the transmit/receive descriptor layout changed. That took most of a weekend in Palm Springs (my first non-working holiday in .. well, since Atheros, really) and I had the thing up and doing DMA. But, I wasn't seeing any packets.

So, I next decided to finish implementing the siba(4) bus pieces. The 4322 uses a newer generation power management unit (PMU) with some changes in how clocking is configured. I did that, verified I was mostly doing the right thing, and fired that up - but it didn't show anything in the scan list. Now, I was wondering whether the PMU/clock configuration was wrong and not enabling the PHY, so I found some PHY reset code that bwn(4) was doing wrong, and I fixed that. Nope, still no scan results. I wondered if the thing was set up to clock right (since if we fed the PHY the wrong clock, I bet it wouldn't configure the radio with the right clock, and we'd tune to the wrong frequency) which was complete conjecture on my part - but, I couldn't see anything there I was missing.

Next up, I decided to debug the PHY-N code. It's a different PHY revision and chip revision - and the PHY code does check these to do different things. I first found that some of the PHY table programming was completely wrong, so after some digging I found I had used the wrong SPROM offsets in the siba(4) code I had added. It didn't matter for the BCM4321 because the PHY-N revision was early enough that these SPROM values weren't used. But they were used on the BCM4322. But, it didn't come up.

Then I decided to check the init path in more detail. I added some debug prints to the various radio programming functions to see what's being called in what order, and I found that none of them were being called. That sounded a bit odd, so I went digging to see what was supposed to call them.

The first thing the channel change path does is call the rfkill method with the "on" flag set, so it should program the RF side of things. It turns out that, hilariously, the BCM4322 PHY revision has a slightly different code path, which checks the value of 'rfon' in the driver state. And, for reasons I don't yet understand, it's set to '1' in the PHY init path and never set to '0' before we start calling PHY code. So, the PHY-N code thought the radio was already up and didn't need reprogramming.


I commented out that check, and just had it program the radio each time. Voila! It came up.

So, next on the list (as I do it) is adding PHY-HT support, and starting the path of supporting the newer bus (bhnd(4)) NICs. Landon Fuller is writing the bhnd(4) support and we're targeting the BCM943225 as the first bcma bus device. I'll write something once that's up and working!

Did the Clinton Email Server Have an Internet-Based Printer?

Postby BrianKrebs via Krebs on Security »

The Associated Press today points to a remarkable footnote in a recent State Department inspector general report on the Hillary Clinton email scandal: The mail was managed from the vanity domain “” But here’s a potentially more explosive finding: A review of the historic domain registration records for that domain indicates that whoever built the private email server for the Clintons also had the not-so-bright idea of connecting it to an Internet-based printer.

According to historic Internet address maps stored by San Mateo, Calif. based Farsight Security, among the handful of Internet addresses historically assigned to the domain “” was the numeric address The subdomain attached to that Internet address was….wait for it…. ““.

Interestingly, that domain was first noticed by Farsight in March 2015, the same month the scandal broke that during her tenure as United States Secretary of State Mrs. Clinton exclusively used her family’s private email server for official communications.

Farsight’s record for, the Internet address which once mapped to “”.
I should emphasize here that it’s unclear whether an Internet-capable printer was ever connected to Nevertheless, it appears someone set it up to work that way.

Ronald Guilmette, a private security researcher in California who prompted me to look up this information, said printing things to an Internet-based printer set up this way might have made the printer data vulnerable to eavesdropping.

“Whoever set up their home network like that was a security idiot, and it’s a dumb thing to do,” Guilmette said. “Not just because any idiot on the Internet can just waste all your toner. Some of these printers have simple vulnerabilities that leave them easy to be hacked into.”

More importantly, any emails or other documents that the Clintons decided to print would be sent out over the Internet — however briefly — before going back to the printer. And that data may have been sniffable by other customers of the same ISP, Guilmette said.

“People are getting all upset saying hackers could have broken into her server, but what I’m saying is that people could have gotten confidential documents easily without breaking into anything,” Guilmette said. “So Mrs. Clinton is sitting there, tap-tap-tapping on her computer and decides to print something out. A clever Chinese hacker could have figured out, ‘Hey, I should get my own Internet address on the same block as the Clinton’s server and just sniff the local network traffic for printer files.'”

I should note that it’s possible the Clintons were encrypting all of their private mail communications with a “virtual private network” (VPN). Other historical “passive DNS” records indicate there were additional, possibly interesting and related subdomains once directly adjacent to the aforementioned Internet address

Akonadi for e-mail needs to die

Postby Andreas via the dilfridge blog »

So, I'm officially giving up on kmail2 (i.e., the Akonadi-based version of kmail) on the last one of my PCs now. I have tried hard and put in a lot of effort to get it working. However, it costs me a significant amount of time and effort just to be able to receive and read e-mail - meaning hanging IMAP resources every few minutes, the feared "Multiple merge candidates" bug popping up again and again, and other surprise events. That is plainly not acceptable in the workplace, where I need to rely on e-mail as a means of communication. By leaving kmail2 I seem to be following many, many other people... Even dedicated KDE enthusiasts that I know have by now migrated to Trojita or Thunderbird.

My conclusion after all these years, based on my personal experience, is that the usage of Akonadi for e-mail is a failed experiment. It was a nice idea in theory, and may work fine for some people. I am certain that a lot of effort has been put into improving it, I applaud the developers of both kmail and Akonadi for their tenaciousness and vision and definitely thank them for their work. Sadly, however, if something doesn't become robust and error-tolerant after over 5 (five) years of continuous development effort, the question pops up whether the initial architectural idea wasn't a bad one in the first place - in particular in terms of unhandleable complexity.

I am not sure why precisely in my case things turn out so badly. One possible candidate is the university mail server that I'm stuck with, running Novell Groupwise. I've seen rather odd behaviour in the IMAP replies in the past there. That said, there's the robustness principle for software to consider, and even if Groupwise were to do silly things, other IMAP clients seem to get along with it fine.

Recently I've heard some rumors about a new framework called Sink (or Akonadi-Next), which seems to be currently under development... I hope it'll be less fragile and less overcomplexified. The choice of name is not really that convincing though (where did my e-mails go again?).

Now for the question and answer session...

Question: Why do you post such negative stuff? You are only discouraging our volunteers.
Answer: Because the motto of the drowned god doesn't apply to software. What is dead should better remain dead, and not suffer continuous revival efforts while users run away and the brand is damaged. Also, I'm a volunteer myself and invest a lot of time and effort into Linux. I've been seeing the resulting fallout. It likely scared off other prospective help.

Question: Have you tried restarting Akonadi? Have you tried clearing the Akonadi cache? Have you tried starting with a fresh database?
Answer: Yes. Yes. Yes. Many times. And yes to many more things. Did I mention that I spent a lot of time with that? I'll miss the akonadiconsole window. Or maybe not.

Question: Do you think kmail2 (the Akonadi-based kmail) can be saved somehow?
Answer: Maybe. One could suggest an additional agent as a replacement for the usual IMAP module. Let's call it IMAP-stupid, and mandate that it uses only a bare minimum of server features and always runs in disconnected mode... Then again, I don't know the code, and don't know if that is feasible. Also, for some people kmail2 seems to work perfectly fine.

Question: So what e-mail program will you use now?
Answer: I will use kmail. I love kmail. Specifically, I will use Pali Rohar's noakonadi fork, which is based on kdepim 4.4. It is neither perfect nor bug-free, but it accesses all my e-mail accounts reliably. This is what I've been using on my home desktop all the time (never upgraded) and what I downgraded my laptop to some time ago after losing many mails.

Question: So can you recommend running this ages-old kmail1 variant?
Answer: Yes and no. Yes, because (at least in my case) it seems to get the basic job done much more reliably. Yes, because it feels a lot snappier and produces far less random surprises. No, because it is essentially unmaintained, has some bugs, and is written for KDE 4, which is slowly going away. No, because Qt5-based kmail2 has more features and does look sexier. No, because you lose the useful Akonadi integration of addressbook and calendar.
That said, here are the two bugs of kmail1 that I find most annoying right now: 1) PGP/MIME cleartext signature is broken (at random some signatures are not verified correctly and/or bad signatures are produced), and 2), only in a Qt5 / Plasma environment, attachments don't open on click anymore, but can only be saved. (Which is odd since e.g. Okular as viewer is launched but never appears on screen, and the temporary file is written but immediately disappears... need to investigate.)

Question: I have bugfixes / patches for kmail1. What should I do?
Answer: Send them!!! I'll be happy to test and forward.

Question: What will you do when Qt4 / kdelibs goes away?
Answer: Dunno. Luckily I'm involved in packaging myself. :)


Skimmers Found at Walmart: A Closer Look

Postby BrianKrebs via Krebs on Security »

Recent local news stories about credit card skimmers found in self-checkout lanes at some Walmart locations remind me of a criminal sales pitch I saw recently for overlay skimmers made specifically for the very same card terminals.

Much like the skimmers found at some Safeway locations earlier this year, the skimming device pictured below was designed to be installed in the blink of an eye at self-checkout lanes — as in recent incidents at Walmart stores in Fredericksburg, Va. and Fort Wright, Ky. In these attacks, the skimmers were made to piggyback on card readers sold by payment solutions company Ingenico.

A skimmer made to be fitted to an Ingenico credit card terminal of the kind used at Walmart stores across the country. Image: Hold Security.
This Ingenico “overlay” skimmer has a PIN pad overlay to capture the user’s PIN, and a mechanism for recording the data stored on a card’s magnetic stripe when customers swipe their cards at self-checkout aisles. The wire pictured at the bottom is for offloading the data from the card skimmers once thieves have retrieved the devices from compromised checkout lanes.

This particular skimmer retails for between $200 and $300, but that price doesn’t include the electronics that power the device and store the stolen card data.

Here’s how this skimmer looks when it’s attached. Think you’d be able to spot it?

Image credit: Hold Security.
Walmart last year began asking customers with more secure chip-enabled cards to dip the chip instead of swipe the stripe. Chip-based cards are more expensive and difficult for thieves to counterfeit, and they can help mitigate the threat from most modern card-skimming methods that read the cardholder data in plain text from the card’s magnetic stripe. Those include malicious software at the point-of-sale terminal, as well as physical skimmers placed over card readers at self-checkout lanes.

In a recent column – The Great EMV Fake-Out: No Chip for You! – I explored why so few retailers currently allow or require chip transactions, even though many of them already have all the hardware in place to accept chip transactions.

For its part, Walmart has deployed chip-enabled readers, and last year began requiring customers with chip cards to use them as such. Indeed, it’s interesting to note that the Ingenico overlay skimmer pictured above also includes the slot at the bottom center of the device where customers can insert a chip card, although in these recent skimming incidents at Walmart the thieves were no doubt hoping more customers would simply swipe.

The Mercator Advisory Group notes that only 60 percent of all credit cards in the United States have been updated with chip cards, with debit cards lagging further behind. Even so, only 20 percent of card terminals in the U.S. have been activated for chip use as of April 2016, Mercator found.

The United States is the last of the G20 nations to move to chip-based cards — much to the delight of fraudsters and organized cybercrime gangs that have siphoned tens of millions of credit and debit cards in major data breaches at retailers these past few years. Financial industry consultant Aite Group predicts that credit card fraud stemming from hacking will reach a record level in 2016 — $4 billion. Aite Group says fraudsters are busy milking this cash cow for all it’s worth as U.S. merchants start to pivot toward chip-card transactions.

Footage of crooks installing the card skimmers at a Walmart self-checkout terminal in Kentucky this month. Source: WLWT.
Update, 12:41 p.m. ET: Corrected location of Kentucky Walmart.

Graduating Class – The New Yorker – 30 May 2016

Postby Zach via The Z-Issue »

Today I saw the cover of the 30 May 2016 edition of The New Yorker, which was designed by artist R. Kikuo Johnson, and it really hit home for me. The illustration depicts the graduating class of 2016 walking out of their commencement ceremony whilst a member of the 2015 graduating class is working as a groundskeeper:

Click for full quality
I won’t go into a full tirade here about my thoughts of higher education within the United States throughout recent years, but I do think that this image sums up a few key points nicely:

  • Many graduates (either from baccalaureate or higher-level programmes) are not working in their respective fields of study
  • A vast majority of students have accrued a nearly insurmountable amount of debt
  • Those two points may be inextricably linked to one another
I know that, for me, I am not able to work in my field of study (child and adolescent development / elementary education) for those very reasons—the corresponding jobs (which I find incredibly rewarding), unfortunately, do not yield high enough salaries for me to even make ends meet. Though the cover artwork doesn’t necessarily offer any suggestion as to a solution to the problem, I think that it very poignantly brings further attention to it.


kmail 16.04.1 and Novell Groupwise 2014 IMAP server - anyone?

Postby Andreas via the dilfridge blog »

Here's a brief call for help.

Is there anyone out there who uses a recent kmail (I'm running 16.04.1 since yesterday, before that it was the latest KDE4 release) with a Novell Groupwise IMAP server?

I'm trying hard, I really like kmail and would like to keep using it, but for me right now it's extremely unstable (to the point of being unusable) - and I suspect by now that the server IMAP implementation is at least partially to blame. In the past I've seen definitively broken server behaviour (like negative IMAP uids), the feared "Multiple merge candidates" keeps popping up again and again, and the IMAP resource becomes unresponsive every few minutes...

So any data points from other kmail-plus-Groupwise IMAP users would be very much appreciated.

For reference, the server here is Novell Groupwise 2014 R2, version 14.2.0 11.3.2016, build number 123013.


GStreamer and Meson: A New Hope

Postby Nirbheek via Nirbheek’s Rantings »

Anyone who has written a non-trivial project using Autotools has realized that (and wondered why) it requires you to be aware of 5 different languages. Once you spend enough time with the innards of the system, you begin to realize that it is nothing short of an astonishing feat of engineering. Engineering that belongs in a museum. Not as part of critical infrastructure.

Autotools was created in the 1980s and caters to the needs of an entirely different world of software from what we have at present. Worse yet, it carries over accumulated cruft from the past 40 years — ostensibly for better “cross-platform support” but that “support” is mostly for extinct platforms that five people in the whole world remember.

We've learned how to make it work for most cases that concern FOSS developers on Linux, and it can be made to limp along on other platforms that the majority of people use, but it does not inspire confidence or really anything except frustration. People will not like your project or contribute to it if the build system takes 10x longer to compile on their platform of choice, does not integrate with the preferred IDE, and requires knowledge arcane enough to be indistinguishable from cargo-cult programming.

As a result there have been several (terrible) efforts at replacing it, and each has been either incomplete, short-sighted, slow, or just plain ugly. During my time as a Gentoo developer in another life, I came into close contact with and developed a keen hatred for each of these alternative build systems. And so I mutely went back to Autotools and learned that I hated it the least of them all.

Sometime last year, Tim heard about this new build system called ‘Meson’ whose author had created an experimental port of GStreamer that built it in record time.

Intrigued, he tried it out and found that it finished suspiciously quickly. His first instinct was that it was broken and hadn’t actually built everything! Turns out this build system written in Python 3 with Ninja as the backend actually was that fast. About 2.5x faster on Linux and 10x faster on Windows for building the core GStreamer repository.

Upon further investigation, Tim and I found that Meson also has really clean generic cross-compilation support (including iOS and Android), runs natively (and just as quickly) on OS X and Windows, supports GNU, Clang, and MSVC toolchains, and can even (configure and) generate XCode and Visual Studio project files!

But the critical thing that convinced me was that the creator Jussi Pakkanen was genuinely interested in the use-cases of widely-used software such as Qt, GNOME, and GStreamer and had already added support for several tools and idioms that we use — pkg-config, gtk-doc, gobject-introspection, gdbus-codegen, and so on. The project places strong emphasis on both speed and ease of use and is quite friendly to contributions.

Over the past few months, Tim and I at Centricular have been working on creating Meson ports for most of the GStreamer repositories and the fundamental dependencies (libffi, glib, orc) and improving the MSVC toolchain support in Meson.

We are proud to report that you can now build GStreamer on Linux using the GNU toolchain and on Windows with either MinGW or MSVC 2015 using Meson build files that ship with the source (building upon Jussi's initial ports).

Other toolchain/platform combinations haven't been tested yet, but they should work in theory (minus bugs!), and we intend to test and bugfix all the configurations supported by GStreamer (Linux, OS X, Windows, iOS, Android) before proposing it for inclusion as an alternative build system for the GStreamer project.

You can either grab the source yourself and build everything, or use our (with luck, temporary) fork of GStreamer's cross-platform build aggregator Cerbero.

Personally, I really hope that Meson gains widespread adoption. Calling Autotools the Xorg of build systems is flattery. It really is just a terrible system. We really need to invest in something that works for us rather than against us.

PS: If you just want a quick look at what the build system syntax looks like, take a look at this or the basic tutorial.
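For those who don't want to click through, here is what a minimal meson.build might look like (a hypothetical hello-world with a GLib dependency, written for illustration, not taken from GStreamer):

```meson
# Project name and the language(s) it uses
project('hello', 'c')

# Find an external dependency via pkg-config
glib_dep = dependency('glib-2.0')

# Build one binary from one source file
executable('hello', 'hello.c',
  dependencies : glib_dep)
```

The whole definition is declarative, which is a large part of why configuration and builds are so fast compared to Autotools.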

balde internals, part 1: Foundations

Postby Rafael G. Martins via Rafael Martins »

For those of you that don't know, as I never actually announced the project here, I've been working on a microframework for developing web applications in C since 2013. It is called balde, and I consider its code ready for a formal release now, despite not having all the features I planned for it. Unfortunately its documentation is not good enough yet.

I haven't worked on it for quite some time, so I don't remember how everything works and can't write proper documentation. To make this easier, I'm starting a series of posts here in this blog, describing the internals of the framework and the design decisions I made when creating it, so I can gradually remember how it works. Hopefully at the end of the series I'll be able to integrate the posts with the official documentation of the project and release it! \o/

Before the release, users willing to try balde must install it manually from GitHub or using my Gentoo overlay (the package is called net-libs/balde there). The previously released versions are very old and deprecated at this point.

So, I'll start talking about the foundations of the framework. It is based on GLib, which is the base library used by Gtk+ and GNOME applications. balde uses it as a utility library, without implementing classes or relying on advanced features of the library. That's because I plan to migrate away from GLib in the future, reimplementing the required functionality in a BSD-licensed library. I have a list of functions that must be implemented to achieve this objective in the wiki, but this is not something with high priority for now.

Another important foundation of the framework is the template engine. Instead of parsing templates at run time, balde parses them at build time, generating C code that is compiled into the application binary. The template engine is based on a recursive-descent parser, built with a parsing expression grammar. The grammar is simple enough to be easily extended, and implements most of the features needed by a basic template engine. The template engine is implemented as a binary that reads the templates and generates the C source files. It is called balde-template-gen and will be the subject of a dedicated post in this series.

A notable deficiency of the template engine is the lack of iterators, like for and while loops. This is a side effect of another basic characteristic of balde: all the data parsed from requests and sent to responses is stored as strings in the internal structures, and all the public interfaces follow the same principle. That means the current architecture does not allow passing a list of items to a template. It also means that users must handle type conversions from and to strings, as needed by their applications.
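To make concrete what that pushes onto the application: since request values arrive as strings, the caller does the number parsing itself. A hedged C sketch of the usual strtol idiom (the helper name is mine, not a balde API):

```c
#include <errno.h>
#include <stdlib.h>

/* Hypothetical helper, not part of balde's API: parse a request value
 * that arrives as a string into a long, returning a fallback on any
 * failure (NULL, empty string, overflow, or trailing junk). */
long param_to_long(const char *value, long fallback)
{
    char *end;
    long n;

    if (value == NULL || *value == '\0')
        return fallback;
    errno = 0;
    n = strtol(value, &end, 10);
    if (errno != 0 || *end != '\0')
        return fallback;   /* overflow/underflow or trailing junk */
    return n;
}
```

Going the other way (number to string) is typically a g_strdup_printf() call while balde still sits on GLib.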

Static files are also converted to C code and compiled into the application binary, but here balde just relies on GLib's GResource infrastructure. This is something that should be reworked in the future too. Integrating templates and static resources and implementing a concept of themes is something that I want to do as soon as possible.

To make it easier for newcomers to get started with balde, it comes with a binary that can create a skeleton project using GNU Autotools, and with basic unit test infrastructure. The binary is called balde-quickstart and will be the subject of a dedicated post here as well.

That's all for now.

In the next post I'll talk about how URL routing works.

PGCon 2016 charity auction

Postby Dan Langille via Dan Langille's Other Diary »

Every year PGCon holds a charity auction as part of the closing session. All proceeds go to The Ottawa Mission, a local group. The auction includes items you would keep as art, and some you would consume before you left town. Others, such as empty paper bags or cardboard boxes are left in the recycling [...]

gpiokeys support committed

Postby gonzo via FreeBSD developer's notebook | All Things FreeBSD »

To those who do not track FreeBSD commit messages: I committed gpiokeys driver to -CURRENT as r299475. The driver is not enabled in any of the kernels but can be built as a loadable module.

For now it stays disconnected from the main build because it breaks some MIPS kernel configs. The configs in question include “modules/gpio” in their MODULES_OVERRIDE variable, and since gpiokeys can be built only with an FDT-enabled kernel, the build fails.
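To make the failure mode concrete, the problematic configs look roughly like this (a hypothetical excerpt, not an in-tree file): restricting the module build to the gpio directory makes the build descend into the gpiokeys subdirectory, which then fails on a kernel built without FDT support.

```
# hypothetical excerpt from a MIPS kernel config without FDT support
makeoptions     MODULES_OVERRIDE="gpio"
```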

gpiokeys can be used as a base for more input device drivers: “gpio-keys-polled” and “gpio-matrix-keypad”. I do not have hardware to test this at the moment. If you do, and you’re looking for a small FreeBSD project to work on – here you go.

Next step on my ToDo list is to try tricking people into committing the evdev patch, which at the moment is the only requirement for unlocking touchscreen support.

FLAC encoding of WAV fails with error of unsupported format type 3

Postby Zach via The Z-Issue »


My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!
Whenever I purchase music from Bandcamp, SoundCloud, or other music sites that offer uncompressed, full-quality downloads (I can’t bring myself to download anything but original or lossless music files), I will always download the original WAV if it is offered. I prefer to keep the original copy around just in case, but usually I will put that on external storage, and use FLAC compression for actual listening (see my post comparing FLAC compression levels, if you’re interested).

Typically, my workflow for getting songs/albums ready is:

  • Purchase and download the full-quality WAV files
  • Rename them all according to my naming conventions
  • Batch convert them to FLAC using the command line (below)
  • Add in all the tags using EasyTag
  • Standardise the album artwork
  • Add the FLAC files to my playlists on my computers and audio server
  • Batch convert the FLACs to OGG Vorbis using a flac2all command (below) for my mobile and other devices with limited storage
It takes some time, but it’s something that I only have to do once per album, and it’s worth it for someone like me (read “OCD”). For good measure, here are the commands that I run:

Batch converting files from WAV to FLAC:
find music/wavs/$ARTIST/$ALBUM/ -iname '*.wav' -exec flac -3 {} \;
obviously replacing $ARTIST and $ALBUM with the name of the artist and album, respectively.

Batch converting files from FLAC to OGG using flac2all:
python2 flac2all.py vorbis ./music/flac/ -v 'quality=7' -c -o ./music/ogg/
By the way, flac2all is awesome because it copies the tags and the album art as well. That’s a huge time saver for me.

Normally this process goes smoothly, and I’m on my way to enjoying my new music rather quickly. However, I recently downloaded some WAVs from SoundCloud and couldn’t figure out why I was coming up with fewer FLACs than WAVs after converting. I looked back through the output from the command, and saw the following error message on some of the track conversions:

05-Time Goes By.wav: ERROR: unsupported format type 3
That was a rather nebulous and obtuse error message, so I decided to investigate a file that worked versus these ones that didn’t:

File that failed:
$ file vexento/02-inspiration/05-Time\ Goes\ By.wav
RIFF (little-endian) data, WAVE audio, stereo 44100 Hz

File that succeeded:
$ file vexento/02-inspiration/04-Riot.wav
RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 44100 Hz
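The “format type 3” in flac’s error is the audio format code from the WAV file’s fmt chunk: 1 means integer PCM (which flac accepts), 3 means IEEE float. If you want to check a whole batch of files, a short Python sketch (my addition, assuming the standard RIFF chunk layout) can report the code directly:

```python
import struct

def wav_format_code(path):
    """Return the audio format code from a WAV file's fmt chunk.

    1 = integer PCM (what flac accepts), 3 = IEEE float
    (the "unsupported format type 3" from the error above).
    """
    with open(path, "rb") as f:
        riff, _, wav = struct.unpack("<4sI4s", f.read(12))
        assert riff == b"RIFF" and wav == b"WAVE", "not a RIFF/WAVE file"
        while True:
            chunk_id, size = struct.unpack("<4sI", f.read(8))
            if chunk_id == b"fmt ":
                # First field of the fmt chunk is the format code.
                (fmt,) = struct.unpack("<H", f.read(2))
                return fmt
            f.seek(size + (size & 1), 1)  # chunks are word-aligned
```

Run it over a directory of WAVs and anything that prints 3 is a file flac will refuse.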

The differences are that the working files indicated “Microsoft PCM” and “16 bit.” The fix for the problem was rather simple, actually. I used Audacity (which is a fantastic piece of cross-platform, open-source software for audio editing), and just re-exported the tracks that were failing. Basically, open the file in Audacity, make no edits, and just go to File –> Export –> “Wav (Microsoft) signed 16 bit PCM”, which you can see in the screenshot below:

Just like that, the problem was gone! Also, I noticed that the file size changed substantially. I’m used to a WAV being about 10MiB for every minute of audio. Before re-exporting these files, they were approximately 20MiB for every minute. So, this track went from ~80MiB to ~40MiB.

Hope that helps!


P.S. By the way, Vexento (the artist who released the tracks mentioned here) is amazingly fun, and I recommend that everyone give him a chance. He’s a young Norwegian guy (actually named Alexander Hansen) who creates a wide array of electronic music. Two tracks (that are very different from one another) that I completely adore are Trippy Love (upbeat and fun), and Peace (calming yet cinematic).

Updating the broadcom softmac driver (bwn), or "damnit, I said I'd never do this!"

Postby Adrian via Adrian Chadd's Ramblings »

If you're watching the FreeBSD commit lists, you may have noticed that I .. kinda sprayed a lot of changes into the broadcom softmac driver.

Firstly, I swore I'd never touch this stuff. But, we use Broadcom (fullmac!) parts at work, so in order to get a peek under the hood to see how they work, I decided fixing up bwn(4) was vaguely work related. Yes, I did the work outside of work; no, it's not sponsored by my employer.

I found a small cache of broadcom 43xx cards that I have and I plugged one in. Nope, didn't work. Tried another. Nope, didn't work. Oh wait - I need to make sure the right firmware module is loaded for it to continue. That was the first hiccup.

Then I set up the interface and connected it to my home AP. It worked .. for about 30 seconds. Then, 100% packet loss. It only worked when I was right up against my AP. I could receive packets fine, but transmits were failing. So, off I went to read the transmit completion path code.

Here's the first fun bit - there's no TX completion descriptor that's checked. There is in the v3 firmware driver (bwi), but not in the v4 firmware. Instead, it reads a pair of shared memory registers to get completion status for each packet. This is where I learnt my first fun bits about the hardware API - it's a mix of PIO/DMA, firmware, descriptors and shared memory mailboxes. Those completion registers? Reading them advances the internal firmware state to the next descriptor completion. You can't just read them for fun, or you'll miss transmit completions.

So, yes, we were transmitting, and the transmits were failing. The retry count was 7, and the ACK bit was 0. Ok, so it failed. It's using the net80211 rate control code, so I turned on rate control debugging (wlandebug +rate) and watched the hilarity.

The rate control code was never seeing any failures, so it just thought everything was hunky dory and kept pushing the rate up to 54mbit. Which was the exact wrong thing to do. It turns out the rate control code was only called if ack=1, which meant it was only notified if packets succeeded. I fixed up (through some revisions) the rate control notification path to be called always, error and success, and it began behaving better.

Now, bwn(4) was useful. But, it needs updating to support any of the 11n chipsets, and it certainly didn't do 5GHz operation on anything. So, off I went to investigate that.

There are, thankfully, three major sources of broadcom softmac information:
  • Linux b43
  • Linux brcmsmac
  • The reverse-engineered specification documents generated from the binary wl driver
The linux folk did a huge reverse engineering effort on the binary broadcom driver (wl) over many years, and generated a specification document with which they implemented b43 (and bcm-v3 for b43legacy.) It's .. pretty amazing, to be honest. So, armed with that, I went off to attempt to implement support for the first 11n chip, the BCM4321.

Now, there's some architectural things to know about these chips. Firstly, the broadcom hardware is structured (like all chips, really) with a bunch of cores on-die with an interconnect, and then some host bus glue. So, the hardware design can just reuse the same internals but a different host bus (USB, PCI, SDIO, etc) and reuse 90% of the chip design. That's a huge win. But, most of the other chips out there lie to you about the internal layout so you don't care - they map the internal cores into one big register window space so it looks like one device.

The broadcom parts don't. They expose each of the cores internally on a bus, and then you need to switch the cores on/off and also map them into the MMIO register window to access them.

Yes, that's right. There's not one big register window that it maps things to, PCI style. If you want to speak to a core, you have to unmap the existing core, map in the core you want, and do register access.

Secondly, the 802.11 core exposes MAC and PHY registers, but you can't have them both on at once. You switch on/off the MAC register window before you poke at the PHY.

Armed with this, I now understand why you need 'sys/dev/siba' (siba(4)) before you can use bwn(4). The siba driver provides the interface to PCI (and MIPS for an older Broadcom part) to probe/attach a SIBA bus, then enumerate all of the cores, then attach drivers to each. There's typically a PCI/PCIe core, then an 802.11 core, then a chipcommon core for the clock/power management, and then other things as needed (memory, USB, PCMCIA, etc.) bwn(4) doesn't attach to the PCI device, it sits on the siba bus as a child device.

So, to add support for a new chip, I needed to do a few things.

  • The device needs to probe/attach to siba(4);
  • The SPROM parsing is done by siba(4), so new fields have to be added there;
  • The 802.11 core revision is what's probe/attached by bwn(4), so add it there;
  • Then I needed to ensure the right microcode and radio initvals are added in bwn(4);
  • Then, new PHY code is needed. For the BCM4321, it's PHY-N.
There are two open PHY-N implementations - brcmsmac is BSD licenced, and b43's is GPL licenced. I looked at the brcmsmac one, which includes full 11n support, but I decided the interface was too different for me to do a first port with. The b43 PHY-N code is smaller, simpler and the API matched what was in the bcm-v4 specification. And, importantly, bwn(4) was written from the same specification, so it's naturally in alignment.

This meant that I would be adding GPLv2'ed code to bwn(4). So, I decided to dump it in sys/gnu/dev/bwn so it's away from the main driver, and make compiling it in non-standard. At some point yes, I'd like to port the brcmsmac PHYs to FreeBSD, but I wanted to get familiar with the chips and make sure the driver worked fine. Debugging /all/ broken and new pieces didn't sound like fun to me.

So after a few days, I got PHY-N compiling and I fired it up. I needed to add SPROM field parsing too, so I did that too. Then, the moment of truth - I fired it up, and it connected. It scanned on both 2G and 5G, and it worked almost first time! But, two things were broken:
  • 5GHz operation just failed entirely for transmit, and
  • 2GHz operation failed transmitting all OFDM frames, but CCK was fine.
Since probing, association and authentication in 2GHz did it at the lowest rate (CCK), this worked fine. Data packets at OFDM rates failed with a PHY error of 0x80 (which isn't documented anywhere, so god knows what that means!) but CCK was fine. So, off I went to b43 and the brcmfmac driver to see what the missing pieces were.

There were two. Well, three, but two that broke everything.

Firstly, there's a "I'm 5GHz!" flag in the tx descriptor. I set that for 5GHz operation - but nothing.

Secondly, the driver tries a fallback rate if the primary rate fails. Those are hardcoded, same as the RTS/CTS rates. It turns out the fallback rate for 6MB OFDM is 11MB CCK, which is invalid for 5GHz. I fixed that, but I haven't yet fixed the 1MB CCK RTS/CTS rates. I'll go do that soon. (I also submitted a patch to Linux b43 to fix that!)

Thirdly, and this was the kicker - the PHY-N and later PHYs require more detailed TX setup. We were completely missing initializing some descriptor fields. It turns out it's also required for PHY-LP (which we support) but somehow the PHY was okay with that. Once I added those fields in, OFDM transmit worked fine.

So, a week after I started, I had a stable driver on 11bg chips, as well as 5GHz operation on the PHY-N BCM4321 NIC. No 11n yet, obviously, that'll have to wait.

In the next post I'll cover fixing up the RX RSSI calculations and then what I needed to do for the BCM94322MC, which is also a PHY-N chip, but is a much later core, and required new microcode with a new descriptor interface.

Noodles & Company Probes Breach Claims

Postby BrianKrebs via Krebs on Security »

Noodles & Company [NASDAQ: NDLS], a fast-casual restaurant chain with more than 500 stores in 35 U.S. states, says it has hired outside investigators to probe reports of a credit card breach at some locations.

Over the past weekend, KrebsOnSecurity began hearing from sources at multiple financial institutions who said they’d detected a pattern of fraudulent charges on customer cards that were used at various Noodles & Company locations between January 2016 and the present.

Asked to comment on the reports, Broomfield, Colo.-based Noodles & Company issued the following statement:

“We are currently investigating some unusual activity reported to us Tuesday, May 16, 2016 by our credit card processor. Once we received this report, we alerted law enforcement officials and we are working with third party forensic experts. Our investigation is ongoing and we will continue to share information.”

The investigation comes amid a fairly constant drip of card breaches at main street retailers, restaurant chains and hospitality firms. Wendy’s reported last week that a credit card breach that began in the autumn of 2015 impacted 300 of its 5,500 locations.

Cyber thieves responsible for these attacks use security weaknesses or social engineering to remotely install malicious software on retail point-of-sale systems. This allows the crooks to read account data off a credit or debit card’s magnetic stripe in real time as customers are swiping them at the register.

U.S. banks have been transitioning to providing customers more secure chip-based credit and debit cards, and a greater number of retailers are installing checkout systems that can read customer card data off the chip. The chip encrypts the card data and makes it much more difficult and expensive for thieves to counterfeit cards.

However, most of these chip cards will still hold customer data in plain text on the card’s magnetic stripe, and U.S. merchants that continue to allow customers to swipe the stripe or who do not have chip card readers in place face shouldering all of the liability for any transactions later determined to be fraudulent.

While a great many U.S. retail establishments have already deployed chip-card readers at their checkout lines, relatively few have enabled those readers, and are still asking customers to swipe the stripe. For its part, Noodles & Company says it’s in the process of testing and implementing chip-based readers.

“The ongoing program we have in place to aggressively test and implement chip-based systems across our network is moving forward,” the company said in a statement. “We are actively working with our key business partners to deploy this system as soon as they are ready.”


Postby bed via Zockertown: Nerten News »

I am a big fan of the predecessor, XCOM: Enemy Within, and by now I have played about 100 hours of XCOM 2.

It took me 25 hours to play through the game for the first time on easy.

It takes a certain amount of time to get used to the game mechanics, so the "Rookie" difficulty was just right, even if it eventually became a bit too easy. I was not really able to fully use or exploit the many new gameplay elements, so I started a second run on the "normal" difficulty. This time I did not let the constant prompts to use the Skulljack throw me off. That gives me more time to explore the individual aspects and enjoy them in the game. The fact that the story is now something entirely different from the predecessor makes things very interesting. It is a nice change to climb around on rooftops in futuristic cities and be able to deliberately blow up fuel barrels.

The new aliens are simply a feast for the eyes; in general, a lot of attention seems to have been paid to the visuals. Still, the new tactical elements such as weapon modification are not neglected. Finally you can collect weapon modifications and personal upgrades and hand them out to your soldiers for the following mission.

What particularly impressed me are the very coherent, professional German translation and the fairly extensive dialogue, which let me immerse myself in the story and give me the feeling of being right in the middle of it. The classes, such as the new Specialist with his Gremlin, are simply great. There are additional game options hidden here that you really should try out. Among other things, you can hack remotely and obtain quite different bonuses. The options even change during a mission; with some luck you can take control of an enemy turret, or replenish all of your soldiers' moves, which can be decisive later in the game.

Incidentally, you can not only knock out the VIPs and carry them off on your shoulder, but also carry a stunned comrade to a safe place the same way, or make an orderly retreat if you cannot get any further.

Especially in the beginning the enemies are vastly superior, until you catch up a bit through research and promotions.

I do have some points of criticism, though.
Arranging the rooms in your own base hardly allows for synergies any more. It is a pity that this was not carried over from the predecessor.
The PSI class unfortunately only enters the game rather late and therefore gets a bit lost; these soldiers also cannot gain any experience during missions, which makes the whole concept of this class seem a little odd, at least to me.

The final mission had a color/texture bug at the end for me; I had to finish off the last enemies more or less by guessing and by trusting the display of the red alien heads.

The navigation from the world map to the Avenger view is nicely animated, but can get a bit annoying after a while. Returning after a raid also takes a long time before you can finally click "continue". I have no idea whether loading the base level really takes that long; at any rate, it does not seem to be my SSD's fault.

I do not have the performance problems that many have complained about. The only annoying bug I noticed was with one soldier in the ability modification screen (...): you cannot use the last move, and you cannot get past it with the TAB key; you first have to click an action, cancel it with a right click, and only then can you select the next soldier with TAB.

The attack of an Archon. The Archon is very mobile, and beware of the Blazing Pinions, an attack on three soldiers from high above; do not stand still, or it ends badly.

But the Faceless is no slouch either.

P.S.: If you cannot resist cheating, you will find what you need here as well. I very often healed my wounded soldiers this way, which makes it possible to field them again in the next mission; however, they suffer a sharp drop in Will, the stat that matters in PSI attacks, which the Sectoids like to carry out.

As Scope of 2012 Breach Expands, LinkedIn to Again Reset Passwords for Some Users

Postby BrianKrebs via Krebs on Security »

A 2012 data breach that was thought to have exposed 6.5 million hashed passwords for LinkedIn users instead likely impacted more than 117 million accounts, the company now says. In response, the business networking giant said today that it would once again force a password reset for individual users thought to be impacted in the expanded breach.

The 2012 breach was first exposed when a hacker posted a list of some 6.5 million unique passwords to a popular forum where members volunteer or can be hired to hack complex passwords. Forum members managed to crack some of the passwords, and eventually noticed that an inordinate number of the passwords they were able to crack contained some variation of “linkedin” in them.

LinkedIn responded by forcing a password reset on all 6.5 million of the impacted accounts, but it stopped there. But earlier today, reports surfaced about a sales thread on an online cybercrime bazaar in which the seller offered to sell 117 million records stolen in the 2012 breach. In addition, the paid hacked data search engine LeakedSource claims to have a searchable copy of the 117 million record database (this service said it found my LinkedIn email address in the data cache, but it asked me to pay $4.00 for a one-day trial membership in order to view the data; I declined).

Inexplicably, LinkedIn’s response to the most recent breach is to repeat the mistake it made with the original breach, by once again forcing a password reset for only a subset of its users.

“Yesterday, we became aware of an additional set of data that had just been released that claims to be email and hashed password combinations of more than 100 million LinkedIn members from that same theft in 2012,” wrote Cory Scott, in a post on the company’s blog. “We are taking immediate steps to invalidate the passwords of the accounts impacted, and we will contact those members to reset their passwords. We have no indication that this is as a result of a new security breach.”

LinkedIn spokesman Hani Durzy said the company has obtained a copy of the 117 million record database, and that LinkedIn believes it to be real.

“We believe it is from the 2012 breach,” Durzy said in an email to KrebsOnSecurity. “How many of those 117m are active and current is still being investigated.”

Regarding the decision not to force a password reset across the board back in 2012, Durzy said “We did at the time what we thought was in the best interest of our member base as a whole, trying to balance security for those with passwords that were compromised while not disrupting the LinkedIn experience for those who didn’t appear impacted.”

The 117 million figure makes sense: LinkedIn says it has more than 400 million users, but reports suggest only about 25 percent of those accounts are used monthly.

Alex Holden, co-founder of security consultancy Hold Security, was among the first to discover the original cache of 6.5 million back in 2012 — shortly after it was posted to the password cracking forum InsidePro. Holden said the 6.5 million encrypted passwords were all unique, and did not include any passwords that were simple to crack with rudimentary tools or resources [full disclosure: Holden’s site lists this author as an adviser, however I receive no compensation for that role].

“These were just the ones that the guy who posted it couldn’t crack,” Holden said. “I always thought that the hacker simply didn’t post to the forum all of the easy passwords that he could crack himself.”

The top 20 most commonly used LinkedIn account passwords, according to LeakedSource.
According to LeakedSource, just 50 easily guessed passwords made up more than 2.2 million of the 117 million encrypted passwords exposed in the breach.

“Passwords were stored in SHA1 with no salting,” the password-selling site claims. “This is not what internet standards propose. Only 117m accounts have passwords and we suspect the remaining users registered using FaceBook or some similarity.”

SHA1 is one of several different methods for “hashing” — that is, obfuscating and storing — plain text passwords. Passwords are “hashed” by taking the plain text password and running it against a theoretically one-way mathematical algorithm that turns the user’s password into a string of gibberish numbers and letters that is supposed to be challenging to reverse. 

The weakness of this approach is that hashes by themselves are static, meaning that the password “123456,” for example, will always compute to the same password hash. To make matters worse, there are plenty of tools capable of very rapidly mapping these hashes to common dictionary words, names and phrases, which essentially negates the effectiveness of hashing. These days, computer hardware has gotten so cheap that attackers can easily and very cheaply build machines capable of computing tens of millions of possible password hashes per second for each corresponding username or email address.

But by adding a unique element, or “salt,” to each user password, database administrators can massively complicate things for attackers who may have stolen the user database and rely upon automated tools to crack user passwords.
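To make the difference concrete, here is a small illustrative Python sketch (this is not LinkedIn's actual code; the function names are mine, and a real system should use a slow, purpose-built scheme such as bcrypt rather than a single fast SHA-1 round):

```python
import hashlib
import os

def sha1_unsalted(password):
    # Static hash: "123456" always maps to the same digest, so one
    # precomputed lookup table cracks every account that uses it.
    return hashlib.sha1(password.encode()).hexdigest()

def sha1_salted(password, salt=None):
    # Per-user random salt: identical passwords now yield different
    # hashes, so precomputed tables no longer help the attacker.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.sha1(salt + password.encode()).hexdigest()
    return salt.hex() + "$" + digest  # the salt is stored alongside

print(sha1_unsalted("123456"))
# always 7c4a8d09ca3762af61e59520943dc26494f8941b
print(sha1_salted("123456"))  # different on every run
```

Verifying a login with the salted scheme just means splitting the stored salt back out and re-hashing the candidate password with it.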

LinkedIn said it added salt to its password hashing function following the 2012 breach. But if you’re a LinkedIn user and haven’t changed your LinkedIn password since 2012, your password may not be protected with the added salting capabilities. At least, that’s my reading of the situation from LinkedIn’s 2012 post about the breach.

If you haven’t changed your LinkedIn password in a while, that would probably be a good idea. Most importantly, if you use your LinkedIn password at other sites, change those passwords to unique passwords. As this breach reminds us, re-using passwords at multiple sites that hold personal and/or financial information about you is a less-than-stellar idea.

Microsoft Disables Wi-Fi Sense on Windows 10

Postby BrianKrebs via Krebs on Security »

Microsoft has disabled its controversial Wi-Fi Sense feature, a component embedded in Windows 10 devices that shares access to the WiFi networks you connect to with any contacts you may have listed in Outlook and Skype — and, with an opt-in, your Facebook friends.

Redmond made the announcement almost as a footnote in its Windows 10 Experience blog, but the feature caused quite a stir when the company’s flagship operating system first debuted last summer.

Microsoft didn’t mention the privacy and security concerns raised by Wi-Fi Sense, saying only that the feature was being removed because it was expensive to maintain and that few Windows 10 users were taking advantage of it.

“We have removed the Wi-Fi Sense feature that allows you to share Wi-Fi networks with your contacts and to be automatically connected to networks shared by your contacts,” wrote Gabe Aul, corporate vice president of Microsoft’s engineering systems team. “The cost of updating the code to keep this feature working combined with low usage and low demand made this not worth further investment. Wi-Fi Sense, if enabled, will continue to get you connected to open Wi-Fi hotspots that it knows about through crowdsourcing.”

Wi-Fi Sense doesn’t share your WiFi network password per se — it shares an encrypted version of that password. But it does allow anyone in your Skype or Outlook or Hotmail contacts lists to waltz onto your Wi-Fi network — should they ever wander within range of it or visit your home (or hop onto it secretly from hundreds of yards away with a good ‘ole cantenna!).

When the feature first launched, Microsoft sought to reassure would-be Windows 10 users that their Wi-Fi password would be sent encrypted and stored encrypted — on a Microsoft server. The company also pointed out that Windows 10 users had to initially agree to share their network during the Windows 10 installation process before the feature would be turned on.

But these assurances rang hollow for many Windows users already suspicious about a feature that could share access to a user’s wireless network even after that user changed their Wi-Fi network password.

“Annoyingly, because they didn’t have your actual password, just authorization to ask the Wi-Fi Sense service to supply it on their behalf, changing your password down the line wouldn’t keep them out – Wi-Fi Sense would learn the new password directly from you and supply it for them in future,” John Zorabedian wrote for security firm Sophos.

Microsoft’s solution for those concerned required users to change the name (a.k.a. “SSID“) of their Wi-Fi network to include the text “_optout” somewhere in the network name (for example, “oldnetworknamehere_optout”).

I commend Microsoft for taking this step, albeit belatedly. Much security is undone by ill-advised features in software and hardware that are unnecessarily enabled by default.

Saving a non-booting Asus UX31A laptop

Postby Flameeyes via Flameeyes's Weblog »

I have just come back from a long(ish) trip through the UK and US, and decided it's time for me to go back to some simple OSS tasks, while I finally convince myself to talk about the doubts I'm having lately.

To start on that, I tried to turn on my laptop, the Asus UX31A I got three and a half years ago. It didn't turn on. This happened before, so I just left it to charge and tried again. No luck.

Googling around I found a number of people with all kinds of problems with it, one of them being something getting stuck at the firmware level. Given that my previous laptop had a random problem with PCIe settings (it would reboot every time I turned it off, but only if the power was still plugged in), I was not completely surprised. Unfortunately, following the advice I read (take out the battery and power over AC) didn't help.

I knew it was not the (otherwise common) problem with the power plug, because when I plugged the cable in, the Yubikey Neo-n would turn on, which means power arrived to the board fine.

Then I remembered two things: one of the suggestions was about the keyboard, and the keyboard itself has had problems before (the control key would sometimes stop working for half an hour at a time.) Indeed, once I re-seated the keyboard's ribbon cable, it turned on again, yay!

But here's the other problem: the laptop would turn on, light the caps-lock LED, and stay there. And even letting the main battery run out was not enough to return it to working condition. What to do? Well, I had a hunch, and it turned out to be right.

One of the things I had tried before was removing the CMOS battery. Either I didn't keep it out long enough to properly clear, or something else went wrong, but it turned out that removing the CMOS battery allowed the system to start up. That, however, would mean no RTC, which is not great if you start the laptop without an Internet connection.

The way I solved it was as follows:

  • disconnect the CMOS battery;
  • start up the laptop;
  • enter "BIOS" (EFI) setup;
  • make any needed change (such as time);
  • "Save and exit";
  • let the laptop boot up;
  • connect the CMOS battery.
Yes, this does involve running the laptop without the lower plate for a while, so be careful about it; but on the other hand, it did save my laptop from being stomped into the ground out of sheer rage.

How LINGUAS are thrice wrong!

Postby Michał Górny via Michał Górny »

The LINGUAS environment variable serves two purposes in Gentoo. On one hand, it's the USE_EXPAND flag group for USE flags controlling the installation of localizations. On the other, it's a gettext-specific environment variable controlling the installation of localizations in some of the build systems supporting gettext. Fun fact: both uses are simply wrong.

Why LINGUAS as an environment variable is wrong?

Let’s start with the upstream-blessed LINGUAS environment variable. If set, it limits localization files installed by autotools+gettext-based build systems (and some more) to the subset matching specified locales. At first, it may sound like a useful feature. However, it is an implicit feature, and therefore one causing a lot of confusion for the package manager.

Long story short, in this context the package manager does not know anything about LINGUAS. It's just a random environment variable that has some value and possibly may be saved somewhere in package metadata. However, this value can actually affect the installed files in a hardly predictable way. So, even if package managers added some special meaning to LINGUAS (which would be non-PMS compliant), it still would not be good enough.

What does this mean in practice? It means that if I set LINGUAS to some value on my system, then most of the binary packages produced by it suddenly have localization files stripped, compared to non-LINGUAS builds. If I installed such a binary package on some other system, it would match the LINGUAS of the build host rather than the install host. And this means the binary packages are simply incorrect.

Even worse, any change to LINGUAS cannot be propagated correctly. Even if the package manager decided to rebuild packages based on changes in LINGUAS, it has no way of knowing which locales are supported by a package, or whether LINGUAS was used at all. So you end up rebuilding all installed packages, just in case.

Why LINGUAS USE flags are wrong?

So, how do we solve all those problems? Of course, we introduce explicit LINGUAS flags. This way, the developer is expected to list all supported locales in IUSE, and the package manager can determine the enabled localizations and match binary packages correctly. All seems fine. Except that there are two problems.

The first problem is that it is cumbersome. Figuring out supported localizations and adding a dozen flags on a number of packages is time-consuming. What’s even worse, those flags need to be maintained once added. Which means you have to check supported localizations for changes on every version bump. Not all developers do that.

The second problem is that it is… a QA violation, most of the time. We already have quite a clear policy that USE flags are not supposed to control installation of small files with no explicit dependencies — and most of the localization files are exactly that!

Let me remind you why we have that policy. There are two reasons: rebuilds and binary packages.

Rebuilds are bad because every time you change LINGUAS, you end up rebuilding relevant packages, and those can be huge. You may think it uncommon — but just imagine you’ve finally finished building your new shiny Gentoo install, and noticed that you forgot to enable the localization. And guess what! You have to build a number of packages, again.

Binary packages are even worse, since they are tied to a specific USE flag combination. If you build a binary package with specific LINGUAS, it can only be installed on hosts with exactly the same LINGUAS. While it would be trivial to strip localizations from an installed binary package, you have to build a fresh one instead. And with a dozen lingua-flags… you end up having thousands of possible binary package variants, if not more.

Why EAPI 5 makes things worse… or better?

Reusing the LINGUAS name for the USE_EXPAND group looked like a good idea. After all, the value would end up in ebuild environment for use by the build system, and in most of the affected packages, LINGUAS worked out of the box with no ebuild changes! Except that… it wasn’t really guaranteed to before EAPI 5.

In earlier EAPIs, LINGUAS could contain pretty much anything, since no special behavior was reserved for it. However, starting with EAPI 5 the package manager guarantees that it will only contain those values that correspond to enabled flags. This is a good thing, after all, since it finally makes LINGUAS work reliably. It has one side effect though.

Since LINGUAS is reduced to enabled USE flags, and enabled USE flags can only contain defined USE flags… it means that in any ebuild missing LINGUAS flags, LINGUAS should be effectively empty (yes, I know Portage does not do that currently, and it is a bug in Portage). To make things worse, this means set to an empty value rather than unset. In other words, disabling localization completely.

This way, a small QA issue of implicitly affecting installed localization files turned into a bigger issue of suddenly ceasing to install localizations at all. Which in turn can’t be fixed without introducing a proper set of LINGUAS flags everywhere, causing other kinds of QA issues and an additional maintenance burden.

What would be the good solution, again?

First of all, kill LINGUAS. Either make it unset for good (and this won’t be easy since PMS kinda implies making all PM-defined variables read-only), or disable any special behavior associated with it. Make the build system compile and install all localizations.

Then, use INSTALL_MASK. It’s designed to handle this. It strips files from installed systems while preserving them in binary packages. Which means your binary packages are now more portable, and every system you install them to will get correct localizations stripped. Isn’t that way better than rebuilding things?
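As a sketch (the path choice is up to the admin, and this is the bluntest possible mask), dropping all gettext message catalogs from the live filesystem takes one line in make.conf, while binary packages built on the same host keep every localization:

```
# /etc/portage/make.conf
# Files under this path are skipped at merge time; binary packages
# are assembled before the mask applies, so they stay complete.
INSTALL_MASK="/usr/share/locale"
```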

Now, is that going to happen? I doubt it. People are rather going to focus on claiming that buggy Portage behavior was good, that QA issues are fine as long as I can strip some small files from my system in the ‘obvious’ way, that the specification should be changed to allow a corner case…

PMDG and X-Plane

Postby via Blog Feed »

With the DC-6 Cloudmaster, PMDG is currently developing its first add-on for X-Plane. This has been known for a while, and the beta phase now seems to be nearing its end (naturally, I am looking forward to the release, since for me personally the DC-6 is the most elegant and beautiful of the aircraft powered by four radial engines).

What makes this interesting, however, is a statement by Robert S. Randazzo of PMDG, who recently announced the following on AVSim:

From PMDG's perspective, it doesn't matter whether you like FSX, FSX-SE, Prepar3D or X-Plane.
In the not-too-distant future you will have access to our products across all of these platforms- and that gives you the choice to go with the platform that suits you best.
This raises hopes that PMDG’s Boeing models, the 737NG and the 777, will eventually be available for X-Plane as well. Whether and when the Jetstream 4100 (so far available only for FSX) and the Boeing 747-400 “Queen of the Skies II” currently in development will also be ported to X-Plane remains to be seen.

My guess: after the release of the DC-6, PMDG will first wait and see for a while, in order to gain experience with the maintenance of X-Plane products, and surely also to be able to judge the commercial success of an X-Plane add-on in their price range. After that, we shall see…

NVIDIA Linux drivers, PowerMizer, Coolbits, Performance Levels and GPU fan settings

Postby Zach via The Z-Issue »


My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!
Whew, I know that’s a long title for a post, but I wanted to make sure that I mentioned every term so that people having the same problem could readily find the post that explains what solved it for me. For some time now (ever since the 346.x series [340.76, which was the last driver that worked for me, was released on 27 January 2015]), I have had a problem with the NVIDIA Linux Display Drivers (known as nvidia-drivers in Gentoo Linux). The problem that I’ve experienced is that the newer drivers would, upon starting an X session, immediately clock up to Performance Level 2 or 3 within PowerMizer.

Before using these newer drivers, the Performance Level would only increase when it was really required (3D rendering, HD video playback, et cetera). I probably wouldn’t have even noticed that the Performance Level was changing, except that it would cause the GPU fan to spin faster, which was noticeably louder in my office.

After scouring the interwebs, I found that I was not the only person to have this problem. For reference, see this article, and this one about locking to certain Performance Levels. However, I wasn’t able to find a solution for the exact problem that I was having. If you look at the screenshot below, you’ll see that the Performance Level is set at 2 which was causing the card to run quite hot (79°C) even when it wasn’t being pushed.

Click to enlarge
It turns out that I needed to add some options to my X Server Configuration. Unfortunately, I was originally making changes in /etc/X11/xorg.conf, but they weren’t being honoured. I added the following lines to /etc/X11/xorg.conf.d/20-nvidia.conf, and the changes took effect:

Section "Device"
     Identifier    "Device 0"
     Driver        "nvidia"
     VendorName    "NVIDIA Corporation"
     BoardName     "GeForce GTX 470"
     Option        "RegistryDwords" "PowerMizerEnable=0x1; PowerMizerDefaultAC=0x3;"
EndSection

The RegistryDwords option was what ultimately fixed the problem for me. More information about the NVIDIA drivers can be found in their README and Installation Guide, and in particular, these settings are described on the X configuration options page. The PowerMizerDefaultAC setting may seem like it is for laptops that are plugged in to AC power, but as this system was a desktop, I found that it was always seen as being “plugged in to AC power.”

As you can see from the screenshots below, these settings did indeed fix the PowerMizer Performance Levels and subsequent temperatures for me:

Click to enlarge
Whilst I was adding X configuration options, I also noticed that Coolbits (search for “Coolbits” on that page) is supported by the Linux driver. Here’s the excerpt about Coolbits for version 364.19 of the NVIDIA Linux driver:

Option “Coolbits” “integer”
Enables various unsupported features, such as support for GPU clock manipulation in the NV-CONTROL X extension. This option accepts a bit mask of features to enable.

WARNING: this may cause system damage and void warranties. This utility can run your computer system out of the manufacturer’s design specifications, including, but not limited to: higher system voltages, above normal temperatures, excessive frequencies, and changes to BIOS that may corrupt the BIOS. Your computer’s operating system may hang and result in data loss or corrupted images. Depending on the manufacturer of your computer system, the computer system, hardware and software warranties may be voided, and you may not receive any further manufacturer support. NVIDIA does not provide customer service support for the Coolbits option. It is for these reasons that absolutely no warranty or guarantee is either express or implied. Before enabling and using, you should determine the suitability of the utility for your intended use, and you shall assume all responsibility in connection therewith.

When “2” (Bit 1) is set in the “Coolbits” option value, the NVIDIA driver will attempt to initialize SLI when using GPUs with different amounts of video memory.

When “4” (Bit 2) is set in the “Coolbits” option value, the nvidia-settings Thermal Monitor page will allow configuration of GPU fan speed, on graphics boards with programmable fan capability.

When “8” (Bit 3) is set in the “Coolbits” option value, the PowerMizer page in the nvidia-settings control panel will display a table that allows setting per-clock domain and per-performance level offsets to apply to clock values. This is allowed on certain GeForce GPUs. Not all clock domains or performance levels may be modified.

When “16” (Bit 4) is set in the “Coolbits” option value, the nvidia-settings command line interface allows setting GPU overvoltage. This is allowed on certain GeForce GPUs.

When this option is set for an X screen, it will be applied to all X screens running on the same GPU.

The default for this option is 0 (unsupported features are disabled).
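Since each Coolbits feature occupies its own bit, combining values is a bitwise OR, which for distinct powers of two is the same as ordinary addition; a quick shell check confirms it:

```shell
# Fan-speed control ("4", bit 2) and clock offsets ("8", bit 3)
# live in different bits, so OR-ing equals adding:
echo $((4 | 8))   # prints 12
echo $((4 + 8))   # prints 12
```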

I found that I would personally like to have the options enabled by “4” and “8”, and that one can combine Coolbits by simply adding them together. For instance, the ones I wanted (“4” and “8”) added up to “12”, so that’s what I put in my configuration:

Section "Device"
     Identifier    "Device 0"
     Driver        "nvidia"
     VendorName    "NVIDIA Corporation"
     BoardName     "GeForce GTX 470"
     Option        "Coolbits" "12"
     Option        "RegistryDwords" "PowerMizerEnable=0x1; PowerMizerDefaultAC=0x3;"
EndSection

and that resulted in the following options being available within the nvidia-settings utility:

Click to enlarge
Though the Coolbits portions aren’t required to fix the problems that I was having, I find them to be helpful for maintenance tasks and configurations. I hope, if you’re having problems with the NVIDIA drivers, that these instructions help give you a better understanding of how to work around any issues you may face. Feel free to comment if you have any questions, and we’ll see if we can work through them.


Carding Sites Turn to the ‘Dark Cloud’

Postby BrianKrebs via Krebs on Security »

Crooks who peddle stolen credit cards on the Internet face a constant challenge: Keeping their shops online and reachable in the face of meddling from law enforcement officials, security firms, researchers and vigilantes. In this post, we’ll examine a large collection of hacked computers around the world that currently serves as a criminal cloud hosting environment for a variety of cybercrime operations, from sending spam to hosting malicious software and stolen credit card shops.

I first became aware of this botnet, which I’ve been referring to as the “Dark Cloud” for want of a better term, after hearing from Noah Dunker, director of security labs at Kansas City-based vendor RiskAnalytics. Dunker reached out after watching a Youtube video I posted that featured some existing and historic credit card fraud sites. He asked what I knew about one of the carding sites in the video: A fraud shop called “Uncle Sam,” whose home page pictures a pointing Uncle Sam saying “I want YOU to swipe.”

The “Uncle Sam” carding shop is one of a half-dozen that reside on a Dark Cloud criminal hosting environment.
I confessed that I knew little of this shop other than its existence, and asked why he was so interested in this particular crime store. Dunker showed me how the Uncle Sam card shop and at least four others were hosted by the same Dark Cloud, and how the system changed the Internet address of each Web site roughly every three minutes. The entire robot network, or “botnet,” consisted of thousands of hacked home computers spread across virtually every time zone in the world, he said.

Dunker urged me not to take his word for it, but to check for myself the domain name server (DNS) settings of the Uncle Sam shop every few minutes. DNS acts as a kind of Internet white pages, by translating Web site names to numeric addresses that are easier for computers to navigate. The way this so-called “fast-flux” botnet works is that it automatically updates the DNS records of each site hosted in the Dark Cloud every few minutes, randomly shuffling the Internet address of every site on the network from one compromised machine to another in a bid to frustrate those who might try to take the sites offline.

Sure enough, a simple script was all it took to find a few dozen Internet addresses assigned to the Uncle Sam shop over just 20 minutes of running the script. When I let the DNS lookup script run overnight, it came back with more than 1,000 unique addresses to which the site had been moved during the 12 or so hours I let it run. According to Dunker, the vast majority of those Internet addresses (> 80 percent) tie back to home Internet connections in Ukraine, with the rest in Russia and Romania.
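A lookup loop of the kind described above can be sketched in a few lines of shell. This is a hypothetical illustration: the query is stubbed out with canned TEST-NET addresses so the flow is visible without touching the network; a real run would replace `resolve` with an actual DNS lookup such as `dig +short A <domain>` and let the loop run for hours.

```shell
#!/bin/sh
# Stand-in for a real DNS query; a fast-flux network would hand back
# a different compromised host's address on nearly every lookup.
resolve() {
    case "$1" in
        1) echo "198.51.100.7"  ;;
        2) echo "198.51.100.42" ;;
        *) echo "198.51.100.7"  ;;   # addresses do repeat eventually
    esac
}

out=$(mktemp)
for i in 1 2 3; do
    resolve "$i" >> "$out"
    # sleep 180   # in a real run: the records rotated every ~3 minutes
done

# How many distinct addresses the rotation exposed (2 in this toy run):
sort -u "$out" | wc -l
rm -f "$out"
```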

‘Mr. Bin,’ another carding shop hosted on the dark cloud service. A ‘bin’ is the “bank identification number,” or the first six digits on a card, and it’s mainly how fraudsters search for stolen cards.
“Right now there’s probably over 2,000 infected endpoints that are mostly broadband subscribers in Eastern Europe,” enslaved as part of this botnet, Dunker said. “It’s a highly functional network, and it feels kind of like a black market version of Amazon Web Services. Some of the systems appear to be used for sending spam and some are for big dynamic scaled content delivery.”

Dunker said that historic DNS records indicate that this botnet has been in operation for at least the past year, but that there are signs it was up and running as early as Summer 2014.

Wayne Crowder, director of threat intelligence for RiskAnalytics, said the botnet appears to be a network structure set up to push different crimeware, including ransomware, click fraud tools, banking Trojans and spam.

Crowder said the Windows-based malware that powers the botnet assigns infected hosts different roles, depending on the victim machine’s strengths or weaknesses: More powerful systems might be used as DNS servers, while infected systems behind home routers may be infected with a “reverse proxy,” which lets the attackers control the system remotely.

“Once it’s infected, it phones home and gets a role assigned to it,” Crowder said. “That may be to continue sending spam, host a reverse proxy, or run a DNS server. It kind of depends on what capabilities it has.”

“Popeye,” another carding site hosted on the criminal cloud network.
Indeed, this network does feel rather spammy. In my book Spam Nation, I detailed how the largest spam affiliate program on the planet at the time used a similar fast-flux network of compromised systems to host its network of pill sites that were being promoted in the junk email. Many of the domains used in those spam campaigns were two- and three-word domains that appeared to be randomly created for use in malware and spam distribution.

“We’re seeing two English words separated by a dash,” Dunker said of the hundreds of hostnames found on the dark cloud network that do not appear to be used for carding shops. “It’s a very spammy naming convention.”

It’s unclear whether this botnet is being used by more than one individual or group. The variety of crimeware campaigns that RiskAnalytics has tracked operating through the network suggests that it may be rented out to multiple different cybercrooks. Still, other clues suggest the whole thing may have been orchestrated by the same gang.

For example, nearly all of the carding sites hosted on the dark cloud network — including Uncle Sam, Scrooge McDuck, Mr. Bin, Try2Swipe, Popeye, and Royaldumps — share the same or very similar site designs. All of them say that customers can look up available cards for sale at the site, but that purchasing the cards requires first contacting the proprietor of the shops directly via instant message.

All six of these shops — and only these six — are advertised prominently on the cybercrime forum prvtzone[dot]su. It is unclear whether this forum is run or frequented by the people who run this botnet, but the forum does heavily steer members interested in carding toward these six carding services. It’s unclear why, but Prvtzone has a Google Analytics tracking ID (UA-65055767) embedded in the HTML source of its page that may hold clues about the proprietors of this crime forum.

The “dumps” section of the cybercrime forum Prvtzone advertises all six of the carding domains found on the fast-flux network.
Dunker says he’s convinced it’s one group that occasionally rents out the infrastructure to other criminals.

“At this point, I’m positive that there’s one overarching organized crime operation driving this whole thing,” Dunker said. “But they do appear to be leasing parts of it out to others.”

Dunker and Crowder say they hope to release an initial report on their findings about the botnet sometime next week, but that for now the rabbit hole appears to go quite deep with this crime machine. For instance, there are several sites hosted on the network that appear to be clones of real businesses selling expensive farm equipment in Europe, and multiple sites report that these are fake companies looking to scam the unwary.

“There are a lot of questions that this research poses that we’d like to be able to answer,” Crowder said.

For now, I’d invite anyone interested to feel free to contribute to the research. This text file contains a historic record of domains I found that are or were at one time tied to the 40 or so Internet addresses I found in my initial, brief DNS scans of this network. Here’s a larger list of some 1,024 addresses that came up when I ran the scan for about 12 hours.

If you liked this story, check out this piece about another carding forum called Joker’s Stash, which also uses a unique communications system to keep itself online and reachable to all comers.

ZFSv28 Ready for Testing on FreeBSD

Postby via A Year in the Life of a BSD Guru »

From yesterday's announcement:

Wendy’s: Breach Affected 5% of Restaurants

Postby BrianKrebs via Krebs on Security »

Wendy’s said today that an investigation into a credit card breach at the nationwide fast-food chain uncovered malicious software on point-of-sale systems at fewer than 300 of the company’s 5,500 franchised stores. The company says the investigation into the breach is continuing, but that the malware has been removed from all affected locations.

“Based on the preliminary findings of the investigation and other information, the Company believes that malware, installed through the use of compromised third-party vendor credentials, affected one particular point of sale system at fewer than 300 of approximately 5,500 franchised North America Wendy’s restaurants, starting in the fall of 2015,” Wendy’s disclosed in their first quarter financial statement today. The statement continues:

“These findings also indicate that the Aloha point of sale system has not been impacted by this activity. The Aloha system is already installed at all Company-operated restaurants and in a majority of franchise-operated restaurants, with implementation throughout the North America system targeted by year-end 2016. The Company expects that it will receive a final report from its investigator in the near future.”

“The Company has worked aggressively with its investigator to identify the source of the malware and quantify the extent of the malicious cyber-attacks, and has disabled and eradicated the malware in affected restaurants. The Company continues to work through a defined process with the payment card brands, its investigator and federal law enforcement authorities to complete the investigation.”

“Based upon the investigation to date, approximately 50 franchise restaurants are suspected of experiencing, or have been found to have, unrelated cybersecurity issues. The Company and affected franchisees are working to verify and resolve these issues.”

The findings come as many banks and credit unions feeling card fraud pain because of the breach have been grumbling about the extent and duration of the breach. Sources at multiple financial institutions say their data indicates that some of the breached Wendy’s locations were still leaking customer card data as late as the end of March 2016 and into early April. The breach was first disclosed on this blog on January 27, 2016.

“Our ongoing investigation into unusual payment card activity at some Wendy’s restaurants is being led by a third party PFI and is proceeding as expeditiously as possible,” Wendy’s spokesman Bob Bertini said in response to questions about the duration of the breach at some stores. “As you are aware, our investigator is required to follow certain protocols in this type of comprehensive investigation and this takes time. Adding to the complexity is the fact that most Wendy’s restaurants are owned and operated by independent franchisees.”

Adobe, Microsoft Push Critical Updates

Postby BrianKrebs via Krebs on Security »

Adobe has issued security updates to fix weaknesses in its PDF Reader and Cold Fusion products, while pointing to an update to be released later this week for its ubiquitous Flash Player browser plugin. Microsoft meanwhile today released 16 update bundles to address dozens of security flaws in Windows, Internet Explorer and related software.

Microsoft’s patch batch includes updates for “zero-day” vulnerabilities (flaws that attackers figure out how to exploit before the software maker does) in Internet Explorer (IE) and in Windows. Half of the 16 patches that Redmond issued today earned its “critical” rating, meaning the vulnerabilities could be exploited remotely with little or no help from the user, save for perhaps clicking a link, opening a file or visiting a hacked or malicious Web site.

According to security firm Shavlik, two of the Microsoft patches tackle issues that were publicly disclosed prior to today’s updates, including bugs in IE and the Microsoft .NET Framework.

Any time there’s a .NET Framework update available, I uncheck it, install and reboot with the rest of the patches, and only then install the .NET update on its own; I’ve had too many .NET update failures muddy the process of figuring out which update borked a Windows machine after a batch of patches to do otherwise, but your mileage may vary.

On the Adobe side, the pending Flash update fixes a single vulnerability that apparently is already being exploited in active attacks online. However, Shavlik says there appears to be some confusion about how many bugs are fixed in the Flash update.

“If information gleaned from [Microsoft’s account of the Flash Player update] MS16-064 is accurate, this Zero Day will be accompanied by 23 additional CVEs, with the release expected on May 12th,” Shavlik wrote. “With this in mind, the recommendation is to roll this update out immediately.”

Adobe says the vulnerability is included in Adobe Flash Player and earlier versions for Windows, Macintosh, Linux, and Chrome OS, and that the flaw will be fixed in a version of Flash to be released May 12.

As far as Flash is concerned, the smartest option is probably to hobble or ditch the program once and for all, and significantly increase the security of your system in the process. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you use Adobe Reader to display PDF documents, you’ll need to update that, too. Alternatively, consider switching to another reader that is perhaps less targeted. Adobe Reader comes bundled with a number of third-party software products, but many Windows users may not realize there are alternatives, including some good free ones. For a time I used Foxit Reader, but that program seems to have grown more bloated with each release. My current preference is Sumatra PDF; it is lightweight (about 40 times smaller than Adobe Reader) and quite fast.

Finally, if you run a Web site that in any way relies on Adobe’s Cold Fusion technology, please update your software soon. Cold Fusion vulnerabilities have traditionally been targeted by cyber thieves to compromise countless online shops.

New Company: Edge Security

Postby via Nerdling Sapple »

I've just launched a website for my new information security consulting company, Edge Security. We're expert hackers, with a fairly diverse skill set and a lot of experience. I mention this here because in a few months we plan to release an open-source kernel module for Linux called WireGuard. No details yet, but keep your eyes open in this space.

EuroBSDCon 2010

Postby via A Year in the Life of a BSD Guru »

I'll be heading to the airport later this afternoon, enroute to Karlsruhe for this year's EuroBSDCon. Here are my activities for the conference:

Indymedia article about Legida counter-protest alliances

Postby feltel via Sebastians Blog »

Normally, posts like this one are not my thing, but what I read on today makes me downright furious; it is like the proverbial slap in the face. Not only mine, but that of everyone who for a year and a half now has actively stood against Legida, Pegida, AfD, OfD and so on, and thereby […]

Crooks Grab W-2s from Credit Bureau Equifax

Postby BrianKrebs via Krebs on Security »

Identity thieves stole tax and salary data from big-three credit bureau Equifax Inc., according to a letter that grocery giant Kroger sent to all current and some former employees on Thursday. The nation’s largest grocery chain by revenue appears to be one of several Equifax customers that were similarly victimized this year.

Atlanta-based Equifax’s W-2Express site makes electronic W-2 forms accessible for download for many companies, including Kroger — which employs more than 431,000 people. According to a letter Kroger sent to employees dated May 5, thieves were able to access W-2 data merely by entering at Equifax’s portal the employee’s default PIN code, which was nothing more than the last four digits of the employee’s Social Security number and their four-digit birth year.
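To underline how weak that default is, here is a minimal sketch with entirely made-up values (the SSN below is a famous decommissioned sample number, not a real person’s) showing that the “PIN” is a pure function of data routinely exposed in other breaches:

```shell
# Made-up inputs; both fields commonly circulate in breach dumps.
ssn="078-05-1120"
birth_year="1984"

# Default W-2Express PIN as described: last four SSN digits + birth year.
last4=$(printf '%s' "$ssn" | tr -d '-' | tail -c 4)
pin="${last4}${birth_year}"
echo "$pin"   # 11201984: no secret input anywhere in the derivation
```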

“It appears that unknown individuals have accessed [Equifax’s] W2Express website using default log-in information based on Social Security numbers (SSN) and dates of birth, which we believe were obtained from some other source, such as a prior data breach at other institutions,” Kroger wrote in a FAQ about the incident that was included with the letter sent to employees. “We have no indication that Kroger’s systems have been compromised.”

The FAQ continued:

“At this time, we have no indication that associates who had created a new password (did not use the default PIN) were affected, and we are still identifying which associates still using the default PIN may have been affected. We believe individuals gained access to some Kroger associates’ electronic W-2 forms and may have used the information to file tax returns in their names in an effort to claim a fraudulent refund.”

“Kroger is working with Equifax and the authorities to determine who is affected and restore secure access to W-2Express. At this time, we believe you are among our current and former Kroger associates using the default PIN in the W-2Express system. This does not necessarily mean your W-2 was accessed as part of this security incident. We are still working to identify which individuals’ information was accessed.”

Kroger said it doesn’t yet know how many of its employees may have been affected.

The incident comes amid news first reported on this blog earlier this week that tax fraudsters similarly targeted employees of companies that used payroll giant ADP to give employees access to their W-2 data. ADP acknowledged that the incident affected employees at U.S. Bank and at least 11 other companies.

Equifax did not respond to requests for comment about how many other customer companies may have been affected by the same default (in)security. But Kroger spokesman Keith Dailey said other companies that relied on Equifax for W-2 data also relied on the last four of the SSN and 4-digit birth year as authenticators.

“As far as I know, it’s the standard Equifax setup,” Dailey said.

Last month, Stanford University alerted 600 current and former employees that their data was similarly accessed by ID thieves via Equifax’s W-2Express portal. Northwestern University also just alerted 150 employees that their salary and tax data was stolen via Equifax this year.

In a statement released to KrebsOnSecurity, Equifax spokeswoman Dianne Bernez confirmed that the company had been made aware of suspected fraudulent access to payroll information through its W-2Express service by Kroger.

“The information in question was accessed by unauthorized individuals who were able to gain access by using users’ personally identifiable information,” the statement reads. “We have no reason to believe the personally identifiable information was attained through Equifax systems. Unfortunately, as individuals’ personally identifiable information has become more publicly available, these types of online fraud incidents have escalated. As a result, it is critical for consumers and businesses to take steps to protect consumers’ personally identifiable information including the use of strong passwords and PIN codes. We are working closely with Kroger to assess and monitor the situation.”

ID thieves go after W-2 data because it contains much of the information needed to fraudulently request a large tax refund from the IRS in someone else’s name. Kroger told employees they would know they were victims in this breach if they received a notice from the IRS about a fraudulent refund request filed in their name.

However, most victims first learn of the crime after having their returns rejected by the IRS because the scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

Kroger said it would offer free credit monitoring services to employees affected by the breach. Kroger spokesman Dailey declined to say which company would be providing that monitoring, but he did confirm that it would not be Equifax.

Update, May 7, 9:44 a.m.: Added mention of the Northwestern University incident involving Equifax’s W-2 portal.