Planet

Got $90,000? A Windows 0-Day Could Be Yours

Post by BrianKrebs via Krebs on Security »

How much would a cybercriminal, nation state or organized crime group pay for blueprints on how to exploit a serious, currently undocumented, unpatched vulnerability in all versions of Microsoft Windows? That price probably depends on the power of the exploit and what the market will bear at the time, but here’s a look at one convincing recent exploit sales thread from the cybercrime underworld where the current asking price for a Windows-wide bug that allegedly defeats all of Microsoft’s current security defenses is USD $90,000.

So-called “zero-day” vulnerabilities are flaws in software and hardware that even the makers of the product in question do not know about. Zero-days can be used by attackers to remotely and completely compromise a target — such as with a zero-day vulnerability in a browser plugin component like Adobe Flash or Oracle’s Java. These flaws are coveted, prized, and in some cases stockpiled by cybercriminals and nation states alike because they enable very stealthy and targeted attacks.

The $90,000 Windows bug that went on sale at the semi-exclusive Russian language cybercrime forum exploit[dot]in earlier this month is in a slightly less serious class of software vulnerability called a “local privilege escalation” (LPE) bug. This type of flaw is always going to be used in tandem with another vulnerability to successfully deliver and run the attacker’s malicious code.

LPE bugs can help amplify the impact of other exploits. One core tenet of security is limiting the rights or privileges of certain programs so that they run with the rights of a normal user — and not under the all-powerful administrator or “system” user accounts that can delete, modify or read any file on the computer. That way, if a security hole is found in one of these programs, that hole can’t be exploited to worm into files and folders that belong only to the administrator of the system.

This is where a privilege escalation bug can come in handy. An attacker may already have a reliable exploit that works remotely — but the trouble is his exploit only succeeds if the current user is running Windows as an administrator. No problem: Chain that remote exploit with a local privilege escalation bug that can bump up the target’s account privileges to that of an admin, and your remote exploit can work its magic without hindrance.

The seller of this supposed zero-day — someone using the nickname “BuggiCorp” — claims his exploit works on every version of Windows from Windows 2000 on up to Microsoft’s flagship Windows 10 operating system. To support his claims, the seller includes two videos of the exploit in action on what appears to be a system that was patched all the way up through this month’s (May 2016) batch of patches from Microsoft (it’s probably no accident that the video was created on May 10, the same day as Patch Tuesday this month).

A second video appears to show the exploit working even though the test machine in the video is running Microsoft’s Enhanced Mitigation Experience Toolkit (EMET), a free software framework designed to help block or blunt exploits against known and unknown Windows vulnerabilities and flaws in third-party applications that run on top of Windows.

The sales thread on exploit[dot]in.
Jeff Jones, a cybersecurity strategist with Microsoft, said the company was aware of the exploit sales thread, but stressed that the claims were still unverified. Asked whether Microsoft would ever consider paying for information about the zero-day vulnerability, Jones pointed to the company’s bug bounty program that rewards security researchers for reporting vulnerabilities. According to Microsoft, the program to date has paid out more than $500,000 in bounties.

Microsoft heavily restricts the types of vulnerabilities that qualify for bounty rewards, but a bug like the one on sale for $90,000 would in fact qualify for a substantial bounty reward. Last summer, Microsoft raised its reward for information about a vulnerability that can fully bypass EMET from $50,000 to $100,000. Incidentally, Microsoft said any researcher with a vulnerability or who has questions can reach out to the Microsoft Security Response Center to learn more about the program and process.

ANALYSIS

It’s interesting that this exploit’s seller could potentially make more money by peddling his find to Microsoft than to the cybercriminal community. Of course, the videos and the whole thing could be a sham, but that seems unlikely in this case. For one thing, a scammer seeking to scam other thieves would not insist on using the cybercrime forum’s escrow service to consummate the transaction, as this vendor has.

As I noted in my book Spam Nation, cybercrime forums run on reputation-based systems similar to eBay’s “feedback” mechanism — in the form of reputation points granted or revoked by established members. Rookie and established members alike are all encouraged to use the forum’s “escrow” system to ensure transactions are completed honorably among thieves.

The escrow service can act as a sort of proxy for reputation. The forum administrators hold the buyer’s money in escrow until the seller can demonstrate he has held up his end of the bargain, be it delivering the promised goods, services or crypto-currency. The forum admins keep a small percentage of the overall transaction amount (usually in Bitcoins) for acting as the broker and insurer of the transaction.

Thus, if a member states up front that he’ll only work through a crime forum’s escrow service, that member’s cybercriminal pitches are far more likely to be taken seriously by others on the forum.

Security researchers at Trustwave first drew my attention to the exploit[dot]in zero-day sales thread last week. Ziv Mador, vice president of security research at Trustwave, said he believes the exploit is legitimate.

“It seems the seller has put in the effort to present himself/herself as a trustworthy seller with a valid offering,” he said. Mador noted Trustwave can’t be 100% certain of the details without the vulnerability in their possession, but that the videos and translation provide further evidence. The company has published more detail on the sales thread and the claimed capabilities of the exploit.

Is $90,000 the right price for this vulnerability? Depends on whom you ask. For starters, not everyone values the same types of exploits similarly. For example, the vulnerability prices listed by exploit broker Zerodium indicate that the company places a far lesser value on exploits in the Windows operating system and far more on vulnerabilities in mobile systems and Web browser components. Zerodium says the price it might be willing to pay for a similar Windows exploit is about $30,000, whereas a critical bug in Apple’s iOS mobile operating system could fetch up to $100,000.

Image: Zerodium.com
Vlad Tsyrklevich, a researcher who’s published quite a bit about the shadowy market for zero-day exploits, says price comparisons for different exploits should be taken with a grain of salt. In his analysis, Tsyrklevich points to a product catalog from exploit vendor Netragard, which in 2014 priced a non-exclusive Windows LPE vulnerability at $90,000.

“Exploit developers have an incentive to state high prices and brokers offer to sell both low-quality and high-quality exploits,” Tsyrklevich wrote. “If a buyer negotiates poorly or chooses a shoddy exploit, the vendor still benefits. Moreover, it’s difficult to compare the reliability and projected longevity of vulnerabilities or exploits offered by different developers. Many of the exploits offered by exploit brokers are not sold.”

BuggiCorp, the seller of the Windows LPE zero-day flaw, was asked by several forum members whether his zero-day was related to a vulnerability that Microsoft patched on April 12, 2016. BuggiCorp responded that his is different. But as documented by security vendor FireEye, that flaw was a similar LPE vulnerability that FireEye said was featured in a series of spear-phishing attacks aimed at gaining access to point-of-sale systems at targeted retail, restaurant and hospitality organizations. FireEye called the downloader used in those attacks “Punchbuggy,” but it did not specify why it chose that name.

If nothing else, this zero-day thread is an unusual sight on such an open cybercrime forum, Trustwave’s Mador said.

“Finding a zero day listed in between these fairly common offerings is definitely an anomaly,” he said. “It goes to show that zero days are coming out of the shadows and are fast becoming a commodity for the masses, a worrying trend indeed.”

Updating the broadcom driver part #2

Post by Adrian via Adrian Chadd's Ramblings »

In Part 1, I described updating the FreeBSD bwn(4) driver and adding some support for the PHY-N driver from b43. It's GPL, but it works, and it gets me over the initial hump of getting support for updated NICs and initial 5GHz operation.

In this part, I'll describe what I did to tidy up RSSI handling and bring up the BCM4322 support.

To recap - I ported over PHY-N support from b43, updated the SPROM handling in the bus glue (siba(4)), and made 11a OFDM transmission work. I was lucky - I chose the first 11n, non-MIMO NIC that Broadcom made which behaved sufficiently similarly to the previous 11abg generation. It was non-MIMO and I could run non-MIMO microcode, which already shipped with the existing firmware FreeBSD builds. But, the BCM4322 is a 2x2 MIMO device, and requires updated firmware, which brought over a whole new firmware API.

Now, bwn(4) handles the earlier two firmware interfaces, but not the newer one that b43 also supports. I chose the BCM4321 because it required neither firmware API changes nor any changes in the Broadcom siba(4) bus layer, so I could focus on porting the PHY-N code and updating the MAC driver to work. This neatly compartmentalised the problem so I wouldn't be trying to make a completely changed thing work and spending days chasing down obscure bugs.

The BCM4322 is a bit of a different beast. It uses PHY-N, which is good. It requires the transmit path to set up the PLCP header bits for OFDM to work (i.e., 11a, 11g), which I had already done for the BCM4321, so that's good. But, it required firmware API changes, and it required siba(4) changes. I decided to tackle the firmware changes first, so I could at least get the NIC loaded and ready.

So, I first fixed up the RX descriptor handling, and found that we were missing a whole lot of RSSI calculation math. I dutifully wrote it down on paper and reimplemented it from b43. That provided some much better looking RSSI values, which made the NIC behave much better. The existing bwn(4) driver just didn't decode the RSSI values in any sensible way and so some Very Poor Decisions were made about which AP to associate to.

Next up, the firmware API. I finished adding the new structure definitions and updating the descriptor sizes/offsets. There were a couple of new things I had to handle for later chip revision devices, and the transmit/receive descriptor layout changed. That took most of a weekend in Palm Springs (my first non-working holiday in .. well, since Atheros, really) and I had the thing up and doing DMA. But, I wasn't seeing any packets.

So, I next decided to finish implementing the siba(4) bus pieces. The 4322 uses a newer generation power management unit (PMU) with some changes in how clocking is configured. I did that, verified I was mostly doing the right thing, and fired that up - but it didn't show anything in the scan list. Now, I was wondering whether the PMU/clock configuration was wrong and not enabling the PHY, so I found some PHY reset code that bwn(4) was doing wrong, and I fixed that. Nope, still no scan results. I wondered if the thing was set up to clock right (since if we fed the PHY the wrong clock, I bet it wouldn't configure the radio with the right clock, and we'd tune to the wrong frequency) which was complete conjecture on my part - but, I couldn't see anything there I was missing.

Next up, I decided to debug the PHY-N code. It's a different PHY revision and chip revision - and the PHY code does check these to do different things. I first found that some of the PHY table programming was completely wrong, so after some digging I found I had used the wrong SPROM offsets in the siba(4) code I had added. It didn't matter for the BCM4321 because its PHY-N revision was early enough that those SPROM values weren't used, but they were used on the BCM4322. Still, the NIC didn't come up.

Then I decided to check the init path in more detail. I added some debug prints to the various radio programming functions to see what's being called in what order, and I found that none of them were being called. That sounded a bit odd, so I went digging to see what was supposed to call them.

The first thing the driver does when it changes channel is call the rfkill method with the "on" flag set, so it should program the RF side of things. It turns out that, hilariously, the BCM4322 PHY revision has a slightly different code path, which checks the value of 'rfon' in the driver state. And, for reasons I don't yet understand, it's set to '1' in the PHY init path and never set to '0' before we start calling PHY code. So, the PHY-N code thought the radio was already up and didn't need reprogramming.

Oops.

I commented out that check, and just had it program the radio each time. Voila! It came up.

So, next on the list (as I do it) is adding PHY-HT support, and starting the path of supporting the newer bus (bhnd(4)) NICs. Landon Fuller is writing the bhnd(4) support and we're targeting the BCM943225 as the first bcma bus device. I'll write something once that's up and working!

Did the Clinton Email Server Have an Internet-Based Printer?

Post by BrianKrebs via Krebs on Security »

The Associated Press today points to a remarkable footnote in a recent State Department inspector general report on the Hillary Clinton email scandal: The mail was managed from the vanity domain “clintonemail.com.” But here’s a potentially more explosive finding: A review of the historic domain registration records for that domain indicates that whoever built the private email server for the Clintons also had the not-so-bright idea of connecting it to an Internet-based printer.

According to historic Internet address maps stored by San Mateo, Calif. based Farsight Security, among the handful of Internet addresses historically assigned to the domain “clintonemail.com” was the numeric address 24.187.234.188. The subdomain attached to that Internet address was….wait for it…. “printer.clintonemail.com“.

Interestingly, that domain was first noticed by Farsight in March 2015, the same month the scandal broke that during her tenure as United States Secretary of State Mrs. Clinton exclusively used her family’s private email server for official communications.

Farsight’s record for 24.187.234.188, the Internet address which once mapped to “printer.clintonemail.com”.
I should emphasize here that it’s unclear whether an Internet-capable printer was ever connected to printer.clintonemail.com. Nevertheless, it appears someone set it up to work that way.

Ronald Guilmette, a private security researcher in California who prompted me to look up this information, said printing things to an Internet-based printer set up this way might have made the printer data vulnerable to eavesdropping.

“Whoever set up their home network like that was a security idiot, and it’s a dumb thing to do,” Guilmette said. “Not just because any idiot on the Internet can just waste all your toner. Some of these printers have simple vulnerabilities that leave them easy to be hacked into.”

More importantly, any emails or other documents that the Clintons decided to print would be sent out over the Internet — however briefly — before going back to the printer. And that data may have been sniffable by other customers of the same ISP, Guilmette said.

“People are getting all upset saying hackers could have broken into her server, but what I’m saying is that people could have gotten confidential documents easily without breaking into anything,” Guilmette said. “So Mrs. Clinton is sitting there, tap-tap-tapping on her computer and decides to print something out. A clever Chinese hacker could have figured out, ‘Hey, I should get my own Internet address on the same block as the Clinton’s server and just sniff the local network traffic for printer files.'”

I should note that it’s possible the Clintons were encrypting all of their private mail communications with a “virtual private network” (VPN). Other historical “passive DNS” records indicate there were additional, possibly interesting and related subdomains once directly adjacent to the aforementioned Internet address 24.187.234.188:

24.187.234.186 rosencrans.dyndns.ws
24.187.234.187 wjcoffice.com
24.187.234.187 mail.clintonemail.com
24.187.234.187 mail.presidentclinton.com
24.187.234.188 printer.clintonemail.com
24.187.234.188 printer.presidentclinton.com
24.187.234.190 sslvpn.clintonemail.com

Akonadi for e-mail needs to die

Post by Andreas via the dilfridge blog »

So, I'm officially giving up on kmail2 (i.e., the Akonadi-based version of kmail) on the last one of my PCs now. I have tried hard and put in a lot of effort to get it working. However, it costs me a significant amount of time and effort just to be able to receive and read e-mail - hanging IMAP resources every few minutes, the feared "Multiple merge candidates" bug popping up again and again, and other surprise events. That is plainly not acceptable in the workplace, where I need to rely on e-mail as a means of communication. By leaving kmail2 I seem to be following many, many other people... Even dedicated KDE enthusiasts that I know have by now migrated to Trojita or Thunderbird.

My conclusion after all these years, based on my personal experience, is that the usage of Akonadi for e-mail is a failed experiment. It was a nice idea in theory, and may work fine for some people. I am certain that a lot of effort has been put into improving it, I applaud the developers of both kmail and Akonadi for their tenaciousness and vision and definitely thank them for their work. Sadly, however, if something doesn't become robust and error-tolerant after over 5 (five) years of continuous development effort, the question pops up whether the initial architectural idea wasn't a bad one in the first place - in particular in terms of unhandleable complexity.

I am not sure why precisely in my case things turn out so badly. One possible candidate is the university mail server that I'm stuck with, running Novell Groupwise. I've seen rather odd behaviour in the IMAP replies in the past there. That said, there's the robustness principle for software to consider, and even if Groupwise were to do silly things, other IMAP clients seem to get along with it fine.

Recently I've heard some rumors about a new framework called Sink (or Akonadi-Next), which seems to be currently under development... I hope it'll be less fragile, and less overcomplexified. The choice of name is not really that convincing though (where did my e-mails go again?).

Now for the question and answer session...

Question: Why do you post such negative stuff? You are only discouraging our volunteers.
Answer: Because the motto of the drowned god doesn't apply to software. What is dead should better remain dead, and not suffer continuous revival efforts while users run away and the brand is damaged. Also, I'm a volunteer myself and invest a lot of time and effort into Linux. I've been seeing the resulting fallout. It likely scared off other prospective help.

Question: Have you tried restarting Akonadi? Have you tried clearing the Akonadi cache? Have you tried starting with a fresh database?
Answer: Yes. Yes. Yes. Many times. And yes to many more things. Did I mention that I spent a lot of time with that? I'll miss the akonadiconsole window. Or maybe not.

Question: Do you think kmail2 (the Akonadi-based kmail) can be saved somehow?
Answer: Maybe. One could suggest an additional agent as replacement to the usual IMAP module. Let's call it IMAP-stupid, and mandate that it uses only a bare minimum of server features and always runs in disconnected mode... Then again, I don't know the code, and don't know if that is feasible. Also, for some people kmail2 seems to work perfectly fine.

Question: So what e-mail program will you use now?
Answer: I will use kmail. I love kmail. Precisely, I will use Pali Rohar's noakonadi fork, which is based on kdepim 4.4. It is neither perfect nor bug-free, but accesses all my e-mail accounts reliably. This is what I've been using on my home desktop all the time (never upgraded) and what I downgraded my laptop to some time ago after losing many mails.

Question: So can you recommend running this ages-old kmail1 variant?
Answer: Yes and no. Yes, because (at least in my case) it seems to get the basic job done much more reliably. Yes, because it feels a lot snappier and produces far less random surprises. No, because it is essentially unmaintained, has some bugs, and is written for KDE 4, which is slowly going away. No, because Qt5-based kmail2 has more features and does look sexier. No, because you lose the useful Akonadi integration of addressbook and calendar.
That said, here are the two bugs of kmail1 that I find most annoying right now: 1) PGP/MIME cleartext signature is broken (at random some signatures are not verified correctly and/or bad signatures are produced), and 2), only in a Qt5 / Plasma environment, attachments don't open on click anymore, but can only be saved. (Which is odd since e.g. Okular as viewer is launched but never appears on screen, and the temporary file is written but immediately disappears... need to investigate.)

Question: I have bugfixes / patches for kmail1. What should I do?
Answer: Send them!!! I'll be happy to test and forward.

Question: What will you do when Qt4 / kdelibs goes away?
Answer: Dunno. Luckily I'm involved in packaging myself. :)



Skimmers Found at Walmart: A Closer Look

Post by BrianKrebs via Krebs on Security »

Recent local news stories about credit card skimmers found in self-checkout lanes at some Walmart locations remind me of a criminal sales pitch I saw not long ago for overlay skimmers made specifically for the very same card terminals.

Much like the skimmers found at some Safeway locations earlier this year, the skimming device pictured below was designed to be installed in the blink of an eye at self-checkout lanes — as in recent incidents at Walmart stores in Fredericksburg, Va. and Fort Wright, Ky. In these attacks, the skimmers were made to piggyback on card readers sold by payment solutions company Ingenico.

A skimmer made to be fitted to an Ingenico credit card terminal of the kind used at Walmart stores across the country. Image: Hold Security.
This Ingenico “overlay” skimmer has a PIN pad overlay to capture the user’s PIN, and a mechanism for recording the data stored on a card’s magnetic stripe when customers swipe their cards at self-checkout aisles. The wire pictured at the bottom is for offloading the data from the card skimmers once thieves have retrieved the devices from compromised checkout lanes.

This particular skimmer retails for between $200 and $300, but that price doesn’t include the electronics that power the device and store the stolen card data.

Here’s how this skimmer looks when it’s attached. Think you’d be able to spot it?

Image credit: Hold Security.
Walmart last year began asking customers with more secure chip-enabled cards to dip the chip instead of swipe the stripe. Chip-based cards are more expensive and difficult for thieves to counterfeit, and they can help mitigate the threat from most modern card-skimming methods that read the cardholder data in plain text from the card’s magnetic stripe. Those include malicious software at the point-of-sale terminal, as well as physical skimmers placed over card readers at self-checkout lanes.

In a recent column – The Great EMV Fake-Out: No Chip for You! – I explored why so few retailers currently allow or require chip transactions, even though many of them already have all the hardware in place to accept chip transactions.

For its part, Walmart has deployed chip-enabled readers, and last year began requiring customers with chip cards to use them as such. Indeed, it’s interesting to note that the Ingenico overlay skimmer pictured above also includes the slot at the bottom center of the device where customers can insert a chip card, although in these recent skimming incidents at Walmart the thieves were no doubt hoping more customers would simply swipe.

The Mercator Advisory Group notes that only 60 percent of all credit cards in the United States have been updated with chips, with debit cards lagging further behind. Even so, only 20 percent of card terminals in the U.S. had been activated for chip use as of April 2016, Mercator found.

The United States is the last of the G20 nations to move to chip-based cards — much to the delight of fraudsters and organized cybercrime gangs that have siphoned tens of millions of credit and debit cards in major data breaches at retailers these past few years. Financial industry consultant Aite Group predicts that credit card fraud stemming from hacking will reach a record level in 2016 — $4 billion. Aite Group says fraudsters are busy milking this cash cow for all it’s worth as U.S. merchants start to pivot toward chip-card transactions.

Footage of crooks installing the card skimmers at a Walmart self-checkout terminal in Kentucky this month. Source: WLWT.
Update, 12:41 p.m. ET: Corrected location of Kentucky Walmart.

Graduating Class – The New Yorker – 30 May 2016

Post by Zach via The Z-Issue »

Today I saw the cover of the 30 May 2016 edition of The New Yorker, which was designed by artist R. Kikuo Johnson, and it really hit home for me. The illustration depicts the graduating class of 2016 walking out of their commencement ceremony whilst a member of the 2015 graduating class is working as a groundskeeper:


Click for full quality
I won’t go into a full tirade here about my thoughts on higher education in the United States in recent years, but I do think that this image sums up a few key points nicely:

  • Many graduates (either from baccalaureate or higher-level programmes) are not working in their respective fields of study
  • A vast majority of students have accrued a nearly insurmountable amount of debt
  • Those two points may be inextricably linked to one another
For my part, I am not able to work in my field of study (child and adolescent development / elementary education) for those very reasons: the corresponding jobs (which I find incredibly rewarding), unfortunately, do not yield high enough salaries for me to even make ends meet. Though the cover artwork doesn’t necessarily offer any suggestion as to a solution to the problem, I think that it very poignantly brings further attention to it.

Cheers,
Zach

kmail 16.04.1 and Novell Groupwise 2014 IMAP server - anyone?

Post by Andreas via the dilfridge blog »

Here's a brief call for help.

Is there anyone out there who uses a recent kmail (I'm running 16.04.1 since yesterday, before that it was the latest KDE4 release) with a Novell Groupwise IMAP server?

I'm trying hard, I really like kmail and would like to keep using it, but for me right now it's extremely unstable (to the point of being unusable) - and I suspect by now that the server IMAP implementation is at least partially to blame. In the past I've seen definitive broken server behaviour (like negative IMAP uids), the feared "Multiple merge candidates" keeps popping up again and again, and the IMAP resource becomes unresponsive every few minutes...

So any datapoints of other kmail plus Groupwise imap users would be very much appreciated.

For reference, the server here is Novell Groupwise 2014 R2, version 14.2.0 11.3.2016, build number 123013.

Thanks!!!

GStreamer and Meson: A New Hope

Post by Nirbheek via Nirbheek’s Rantings »

Anyone who has written a non-trivial project using Autotools has realized that (and wondered why) it requires you to be aware of 5 different languages. Once you spend enough time with the innards of the system, you begin to realize that it is nothing short of an astonishing feat of engineering. Engineering that belongs in a museum. Not as part of critical infrastructure.

Autotools was created in the 1980s and caters to the needs of an entirely different world of software from what we have at present. Worse yet, it carries over accumulated cruft from the past 40 years — ostensibly for better “cross-platform support” but that “support” is mostly for extinct platforms that five people in the whole world remember.

We've learned how to make it work for most cases that concern FOSS developers on Linux, and it can be made to limp along on other platforms that the majority of people use, but it does not inspire confidence or really anything except frustration. People will not like your project or contribute to it if the build system takes 10x longer to compile on their platform of choice, does not integrate with the preferred IDE, and requires knowledge arcane enough to be indistinguishable from cargo-cult programming.

As a result there have been several (terrible) efforts at replacing it and each has been either incomplete, short-sighted, slow, or just plain ugly. During my time as a Gentoo developer in another life, I came in close contact with and developed a keen hatred for each of these alternative build systems. And so I mutely went back to Autotools and learned that I hated it the least of them all.

Sometime last year, Tim heard about this new build system called ‘Meson’ whose author had created an experimental port of GStreamer that built it in record time.

Intrigued, he tried it out and found that it finished suspiciously quickly. His first instinct was that it was broken and hadn’t actually built everything! Turns out this build system written in Python 3 with Ninja as the backend actually was that fast. About 2.5x faster on Linux and 10x faster on Windows for building the core GStreamer repository.

Upon further investigation, Tim and I found that Meson also has really clean generic cross-compilation support (including iOS and Android), runs natively (and just as quickly) on OS X and Windows, supports GNU, Clang, and MSVC toolchains, and can even (configure and) generate Xcode and Visual Studio project files!

But the critical thing that convinced me was that the creator Jussi Pakkanen was genuinely interested in the use-cases of widely-used software such as Qt, GNOME, and GStreamer and had already added support for several tools and idioms that we use — pkg-config, gtk-doc, gobject-introspection, gdbus-codegen, and so on. The project places strong emphasis on both speed and ease of use and is quite friendly to contributions.

Over the past few months, Tim and I at Centricular have been working on creating Meson ports for most of the GStreamer repositories and the fundamental dependencies (libffi, glib, orc) and improving the MSVC toolchain support in Meson.

We are proud to report that you can now build GStreamer on Linux using the GNU toolchain and on Windows with either MinGW or MSVC 2015 using Meson build files that ship with the source (building upon Jussi's initial ports).

Other toolchain/platform combinations haven't been tested yet, but they should work in theory (minus bugs!), and we intend to test and bugfix all the configurations supported by GStreamer (Linux, OS X, Windows, iOS, Android) before proposing it for inclusion as an alternative build system for the GStreamer project.

You can either grab the source yourself and build everything, or use our (with luck, temporary) fork of GStreamer's cross-platform build aggregator Cerbero.

Personally, I really hope that Meson gains widespread adoption. Calling Autotools the Xorg of build systems is flattery. It really is just a terrible system. We really need to invest in something that works for us rather than against us.

PS: If you just want a quick look at what the build system syntax looks like, take a look at this or the basic tutorial.
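For a quick taste without clicking through, here is roughly what a minimal Meson build definition looks like. This is a hypothetical hello-world project with a pkg-config dependency, not a file from GStreamer itself:

```meson
# hypothetical hello-world meson.build
project('hello', 'c')

# found via pkg-config, one of the idioms mentioned above
glib_dep = dependency('glib-2.0')

executable('hello', 'hello.c', dependencies : glib_dep)
```

Compare that with the Autotools equivalent (configure.ac plus Makefile.am plus the generated machinery) and the appeal is fairly obvious.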
Top

balde internals, part 1: Foundations

Postby Rafael G. Martins via Rafael Martins »

For those of you who don't know (as I never actually announced the project here): I have been working on a microframework for developing web applications in C since 2013. It is called balde, and I consider its code ready for a formal release now, despite it not having all the features I planned. Unfortunately, its documentation is not good enough yet.

I haven't worked on it for quite some time, so I don't remember how everything works anymore and can't write proper documentation. To make this easier, I'm starting a series of posts on this blog describing the internals of the framework and the design decisions I made when creating it, so I can gradually remember how it works. Hopefully, at the end of the series I'll be able to integrate the posts into the official documentation of the project and release it! \o/

Until the release, users who want to try balde must install it manually from GitHub or using my Gentoo overlay (the package is called net-libs/balde there). The previously released versions are very old and deprecated at this point.

So, I'll start by talking about the foundations of the framework. It is based on GLib, which is the base library used by Gtk+ and GNOME applications. balde uses it as a utility library, without implementing classes or relying on advanced features of the library. That's because I plan to migrate away from GLib in the future, reimplementing the required functionality in a BSD-licensed library. I have a list of functions that must be implemented to achieve this objective in the wiki, but this is not a high priority for now.

Another important foundation of the framework is the template engine. Instead of parsing templates at runtime, balde parses templates at build time, generating C code that is compiled into the application binary. The template engine is based on a recursive-descent parser, built with a parsing expression grammar. The grammar is simple enough to be easily extended, and implements most of the features needed by a basic template engine. The template engine is implemented as a binary that reads the templates and generates the C source files. It is called balde-template-gen and will be the subject of a dedicated post in this series.

A notable deficiency of the template engine is the lack of iterators, like for and while loops. This is a side effect of another basic characteristic of balde: all the data parsed from requests and sent to responses is stored as strings in the internal structures, and all the public interfaces follow the same principle. That means that the current architecture does not allow passing a list of items to a template. It also means that users must handle type conversions from and to strings, as needed by their applications.

Static files are also converted to C code and compiled into the application binary, but here balde just relies on GLib's GResource infrastructure. This is something that should be reworked in the future too. Integrating templates and static resources, and implementing a concept of themes, is something that I want to do as soon as possible.

To make it easier for newcomers to get started with balde, it comes with a binary that can create a skeleton project using GNU Autotools, including basic unit test infrastructure. The binary is called balde-quickstart and will be the subject of a dedicated post here as well.

That's all for now.

In the next post I'll talk about how URL routing works.
Top

PGCon 2016 charity auction

Postby Dan Langille via Dan Langille's Other Diary »

Every year PGCon holds a charity auction as part of the closing session. All proceeds go to The Ottawa Mission, a local group. The auction includes items you would keep as art, and some you would consume before you left town. Others, such as empty paper bags or cardboard boxes are left in the recycling [...]
Top

gpiokeys support committed

Postby gonzo via FreeBSD developer's notebook | All Things FreeBSD »

To those who do not track FreeBSD commit messages: I committed the gpiokeys driver to -CURRENT as r299475. The driver is not enabled in any of the kernel configs, but it can be built as a loadable module.

For now it stays disconnected from the main build because it breaks some MIPS kernel configs. The configs in question include “modules/gpio” in their MODULES_OVERRIDE variable, and since gpiokeys can be built only with an FDT-enabled kernel, the build fails.

gpiokeys can be used as a base for more input device drivers: “gpio-keys-polled” and “gpio-matrix-keypad”. I do not have hardware to test this at the moment. If you do, and you’re looking for a small FreeBSD project to work on – here you go.

The next step on my ToDo list is to try tricking people into committing the evdev patch, which at the moment is the only requirement for unlocking touchscreen support.
Top

FLAC encoding of WAV fails with error of unsupported format type 3

Postby Zach via The Z-Issue »

Important!

My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!
Whenever I purchase music from Bandcamp, SoundCloud, or other music sites that offer uncompressed, full-quality downloads (I can’t bring myself to download anything but original or lossless music files), I will always download the original WAV if it is offered. I prefer to keep the original copy around just in case, but usually I will put that on external storage, and use FLAC compression for actual listening (see my post comparing FLAC compression levels, if you’re interested).

Typically, my workflow for getting songs/albums ready is:

  • Purchase and download the full-quality WAV files
  • Rename them all according to my naming conventions
  • Batch convert them to FLAC using the command line (below)
  • Add in all the tags using EasyTag
  • Standardise the album artwork
  • Add the FLAC files to my playlists on my computers and audio server
  • Batch convert the FLACs to OGG Vorbis using a flac2all command (below) for my mobile and other devices with limited storage
It takes some time, but it’s something that I only have to do once per album, and it’s worth it for someone like me (read “OCD”). For good measure, here are the commands that I run:

Batch converting files from WAV to FLAC:
find music/wavs/$ARTIST/$ALBUM/ -iname '*.wav' -exec flac -3 {} \;
obviously replacing $ARTIST and $ALBUM with the name of the artist and album, respectively.

Batch converting files from FLAC to OGG using flac2all:
python2 flac2all_v3.38.py vorbis ./music/flac/ -v 'quality=7' -c -o ./music/ogg/
By the way, flac2all is awesome because it copies the tags and the album art as well. That’s a huge time saver for me.

Normally this process goes smoothly, and I’m on my way to enjoying my new music rather quickly. However, I recently downloaded some WAVs from SoundCloud and couldn’t figure out why I was coming up with fewer FLACs than WAVs after converting. I looked back through the output from the command, and saw the following error message on some of the track conversions:

05-Time Goes By.wav: ERROR: unsupported format type 3
That was a rather nebulous and obtuse error message, so I decided to investigate a file that worked versus these ones that didn’t:

File that failed:
$ file vexento/02-inspiration/05-Time\ Goes\ By.wav
RIFF (little-endian) data, WAVE audio, stereo 44100 Hz

File that succeeded:
$ file vexento/02-inspiration/04-Riot.wav
RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 44100 Hz

The differences are that the working files indicated “Microsoft PCM” and “16 bit.” The fix for the problem was rather simple, actually. I used Audacity (which is a fantastic piece of cross-platform, open-source software for audio editing), and just re-exported the tracks that were failing. Basically, open the file in Audacity, make no edits, and just go to File –> Export –> “Wav (Microsoft) signed 16 bit PCM”, which you can see in the screenshot below:


Just like that, the problem was gone! Also, I noticed that the file size changed substantially. I’m used to a WAV being about 10MiB for every minute of audio. Before re-exporting these files, they were approximately 20MiB for every minute. So, this track went from ~80MiB to ~40MiB. That also explains the error: “format type 3” is WAVE_FORMAT_IEEE_FLOAT, i.e. 32-bit floating-point samples, which take twice the space of 16-bit PCM and which the flac encoder refuses to accept.
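If you'd rather check a WAV's format tag directly than rely on the output of file(1), a short Python sketch can read it straight from the RIFF header (tag 1 is plain PCM; tag 3 is the IEEE-float type that flac rejects). The function name here is my own, just for illustration:

```python
import struct
import sys

def wav_format_tag(path):
    """Return the audio format tag from a WAV file's 'fmt ' chunk.
    1 = PCM (what flac accepts), 3 = IEEE float ("format type 3")."""
    with open(path, 'rb') as f:
        riff, _size, wave_id = struct.unpack('<4sI4s', f.read(12))
        if riff != b'RIFF' or wave_id != b'WAVE':
            raise ValueError('not a RIFF/WAVE file')
        while True:
            hdr = f.read(8)
            if len(hdr) < 8:
                raise ValueError("no 'fmt ' chunk found")
            chunk_id, chunk_size = struct.unpack('<4sI', hdr)
            if chunk_id == b'fmt ':
                (fmt_tag,) = struct.unpack('<H', f.read(2))
                return fmt_tag
            # skip this chunk; chunks are word-aligned
            f.seek(chunk_size + (chunk_size & 1), 1)

if __name__ == '__main__':
    for p in sys.argv[1:]:
        print(p, wav_format_tag(p))
```

Running it over a directory of downloads tells you up front which files will need re-exporting before the batch flac run.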

Hope that helps!

Cheers,
Zach

P.S. By the way, Vexento (the artist who released the tracks mentioned here) is amazingly fun, and I recommend that everyone give him a chance. He’s a young Norwegian guy (actually named Alexander Hansen) who creates a wide array of electronic music. Two tracks (that are very different from one another) that I completely adore are Trippy Love (upbeat and fun), and Peace (calming yet cinematic).
Top

Updating the broadcom softmac driver (bwn), or "damnit, I said I'd never do this!"

Postby Adrian via Adrian Chadd's Ramblings »

If you're watching the FreeBSD commit lists, you may have noticed that I .. kinda sprayed a lot of changes into the broadcom softmac driver.

Firstly, I swore I'd never touch this stuff. But, we use Broadcom (fullmac!) parts at work, so in order to get a peek under the hood to see how they work, I decided fixing up bwn(4) was vaguely work related. Yes, I did the work outside of work; no, it's not sponsored by my employer.

I found a small cache of Broadcom 43xx cards I had lying around and plugged one in. Nope, didn't work. Tried another. Nope, didn't work. Oh wait - I need to make sure the right firmware module is loaded for it to continue. That was the first hiccup.

Then I set up the interface and connected it to my home AP. It worked .. for about 30 seconds. Then, 100% packet loss. It only worked when I was right up against my AP. I could receive packets fine, but transmits were failing. So, off I went to read the transmit completion path code.

Here's the first fun bit - there's no TX completion descriptor that's checked. There is in the v3 firmware driver (bwi), but not in the v4 firmware. Instead, it reads a pair of shared-memory registers to get completion status for each packet. This is where I learnt my first fun bits about the hardware API - it's a mix of PIO/DMA, firmware, descriptors and shared-memory mailboxes. Those completion registers? Reading them advances the internal firmware state to the next descriptor completion. You can't just read them for fun, or you'll miss transmit completions.

So, yes, we were transmitting, and we were failing them. The retry count was 7, and the ACK bit was 0. Ok, so it failed. It's using the net80211 rate control code, so I turned on rate control debugging (wlandebug +rate) and watched the hilarity.

The rate control code was never seeing any failures, so it thought everything was hunky dory and kept pushing the rate up to 54mbit - exactly the wrong thing to do. It turns out the rate control code was only called if ack=1, which meant it was only notified when packets succeeded. I fixed up (through some revisions) the rate control notification path to be called always, on error and success, and it began behaving better.

Now, bwn(4) was useful. But, it needs updating to support any of the 11n chipsets, and it certainly didn't do 5GHz operation on anything. So, off I went to investigate that.

There are, thankfully, three major sources of broadcom softmac information:
  • Linux b43
  • Linux brcmsmac
  • http://bcm-v4.sipsolutions.net/
The linux folk did a huge reverse engineering effort on the binary broadcom driver (wl) over many years, and generated a specification document with which they implemented b43 (and bcm-v3 for b43legacy.) It's .. pretty amazing, to be honest. So, armed with that, I went off to attempt to implement support for the first 11n chip, the BCM4321.

Now, there's some architectural things to know about these chips. Firstly, the broadcom hardware is structured (like all chips, really) with a bunch of cores on-die with an interconnect, and then some host bus glue. So, the hardware design can just reuse the same internals but a different host bus (USB, PCI, SDIO, etc) and reuse 90% of the chip design. That's a huge win. But, most of the other chips out there lie to you about the internal layout so you don't care - they map the internal cores into one big register window space so it looks like one device.

The broadcom parts don't. They expose each of the cores internally on a bus, and then you need to switch the cores on/off and also map them into the MMIO register window to access them.

Yes, that's right. There's not one big register window that it maps things to, PCI style. If you want to speak to a core, you have to unmap the existing core, map in the core you want, and do register access.

Secondly, the 802.11 core exposes MAC and PHY registers, but you can't have them both on at once. You switch on/off the MAC register window before you poke at the PHY.

Armed with this, I now understand why you need 'sys/dev/siba' (siba(4)) before you can use bwn(4). The siba driver provides the interface to PCI (and MIPS for an older Broadcom part) to probe/attach a SIBA bus, then enumerate all of the cores, then attach drivers to each. There's typically a PCI/PCIe core, then an 802.11 core, then a chipcommon core for the clock/power management, and then other things as needed (memory, USB, PCMCIA, etc.) bwn(4) doesn't attach to the PCI device, it sits on the siba bus as a child device.

So, to add support for a new chip, I needed to do a few things.

  • The device needs to probe/attach to siba(4);
  • The SPROM parsing is done by siba(4), so new fields have to be added there;
  • The 802.11 core revision is what's probe/attached by bwn(4), so add it there;
  • Then I needed to ensure the right microcode and radio initvals are added in bwn(4);
  • Then, new PHY code is needed. For the BCM4321, it's PHY-N.
There are two open PHY-N implementations - brcmsmac's is BSD-licensed, and b43's is GPL-licensed. I looked at the brcmsmac one, which includes full 11n support, but I decided the interface was too different for me to do a first port with. The b43 PHY-N code is smaller, simpler, and its API matched what was in the bcm-v4 specification. And, importantly, bwn(4) was written from the same specification, so it's naturally in alignment.

This meant that I would be adding GPLv2'ed code to bwn(4). So, I decided to put it in sys/gnu/dev/bwn so it's away from the main driver, and to make compiling it in non-standard. At some point, yes, I'd like to port the brcmsmac PHYs to FreeBSD, but first I wanted to get familiar with the chips and make sure the driver worked fine. Debugging /all/ broken and new pieces didn't sound like fun to me.

So after a few days, I got PHY-N compiling and I fired it up. I needed to add SPROM field parsing too, so I did that too. Then, the moment of truth - I fired it up, and it connected. It scanned on both 2G and 5G, and it worked almost first time! But, two things were broken:
  • 5GHz operation just failed entirely for transmit, and
  • 2GHz operation failed transmitting all OFDM frames, but CCK was fine.
Since probing, association and authentication in 2GHz did it at the lowest rate (CCK), this worked fine. Data packets at OFDM rates failed with a PHY error of 0x80 (which isn't documented anywhere, so god knows what that means!) but CCK was fine. So, off I went to b43 and the brcmfmac driver to see what the missing pieces were.

There were two. Well, three, but two that broke everything.

Firstly, there's a "I'm 5GHz!" flag in the tx descriptor. I set that for 5GHz operation - but nothing.

Secondly, the driver tries a fallback rate if the primary rate fails. Those are hardcoded, same as the RTS/CTS rates. It turns out the fallback rate for 6MB OFDM is 11MB CCK, which is invalid for 5GHz. I fixed that, but I haven't yet fixed the 1MB CCK RTS/CTS rates. I'll go do that soon. (I also submitted a patch to Linux b43 to fix that!)

Thirdly, and this was the kicker - the PHY-N and later PHYs require more detailed TX setup. We were completely missing initializing some descriptor fields. It turns out it's also required for PHY-LP (which we support) but somehow the PHY was okay with that. Once I added those fields in, OFDM transmit worked fine.

So, a week after I started, I had a stable driver on 11bg chips, as well as 5GHz operation on the PHY-N BCM4321 NIC. No 11n yet, obviously, that'll have to wait.

In the next post I'll cover fixing up the RX RSSI calculations and then what I needed to do for the BCM94322MC, which is also a PHY-N chip, but is a much later core, and required new microcode with a new descriptor interface.
Top

Noodles & Company Probes Breach Claims

Postby BrianKrebs via Krebs on Security »

Noodles & Company [NASDAQ: NDLS], a fast-casual restaurant chain with more than 500 stores in 35 U.S. states, says it has hired outside investigators to probe reports of a credit card breach at some locations.

Over the past weekend, KrebsOnSecurity began hearing from sources at multiple financial institutions who said they’d detected a pattern of fraudulent charges on customer cards that were used at various Noodles & Company locations between January 2016 and the present.

Asked to comment on the reports, Broomfield, Colo.-based Noodles & Company issued the following statement:

“We are currently investigating some unusual activity reported to us Tuesday, May 16, 2016 by our credit card processor. Once we received this report, we alerted law enforcement officials and we are working with third party forensic experts. Our investigation is ongoing and we will continue to share information.”

The investigation comes amid a fairly constant drip of card breaches at main street retailers, restaurant chains and hospitality firms. Wendy’s reported last week that a credit card breach that began in the autumn of 2015 impacted 300 of its 5,500 locations.

Cyber thieves responsible for these attacks use security weaknesses or social engineering to remotely install malicious software on retail point-of-sale systems. This allows the crooks to read account data off a credit or debit card’s magnetic stripe in real time as customers are swiping them at the register.

U.S. banks have been transitioning to providing customers more secure chip-based credit and debit cards, and a greater number of retailers are installing checkout systems that can read customer card data off the chip. The chip encrypts the card data and makes it much more difficult and expensive for thieves to counterfeit cards.

However, most of these chip cards will still hold customer data in plain text on the card’s magnetic stripe, and U.S. merchants that continue to allow customers to swipe the stripe or who do not have chip card readers in place face shouldering all of the liability for any transactions later determined to be fraudulent.

While a great many U.S. retail establishments have already deployed chip-card readers at their checkout lines, relatively few have enabled those readers, and are still asking customers to swipe the stripe. For its part, Noodles & Company says it’s in the process of testing and implementing chip-based readers.

“The ongoing program we have in place to aggressively test and implement chip-based systems across our network is moving forward,” the company said in a statement. “We are actively working with our key business partners to deploy this system as soon as they are ready.”
Top

XCOM2

Postby bed via Zockertown: Nerten News »

I'm a big fan of the predecessor, XCOM: Enemy Within, and by now I've played about 100 hours of XCOM2.

It took me 25 hours to finish the game for the first time on easy.

It takes a while to get used to the game mechanics, so the "Rookie" difficulty was just right, even if it did eventually become a bit too easy. I wasn't really able to make full use of the many new gameplay elements, so I started a second run on the "normal" difficulty. This time I didn't let the constant prompts to use the Skulljack rattle me. That gives me more time to explore the individual aspects and enjoy them in the game. That the story is now something completely different from the predecessor's makes things very interesting. It's a nice change to climb around on rooftops in futuristic cities and blow up fuel barrels with targeted shots.

The new aliens are simply a feast for the eyes; in general, a lot of attention seems to have been paid to the visuals. Still, the new tactical elements such as weapon modification don't get short shrift. Finally you can collect weapon mods and personal upgrades and distribute them to your soldiers for the next mission.

What impressed me most is the very coherent, professional German translation and the fairly extensive dialogue, which let me immerse myself in the story and give me the feeling of being right in the middle of it. The classes, such as the new Specialist with his Gremlin, are simply great. There are additional gameplay options hidden here that you really should try out. Among other things, you can hack remotely and gain a wide variety of extras. The options even change during a mission; with some luck you can take control of an enemy turret, or refill all of your soldiers' moves, which later in the game can absolutely decide the outcome.

By the way, you can not only knock out VIPs and carry them out on your shoulder, but also carry a stunned comrade to a safe spot the same way, or make an orderly retreat if you can't get any further.

Especially at the beginning the enemies are far superior, until you catch up a bit through research and promotions.

I do have some criticisms, though.
Arranging the rooms in your own base hardly allows for any synergies anymore. It's a pity this wasn't carried over from the predecessor.
The PSI class unfortunately only enters the game rather late and therefore gets a bit lost - these soldiers also can't gain any experience during missions, which makes the whole concept of this class seem a bit odd, at least to me.

The last mission had a color/texture bug at the end for me; I had to finish off the last enemies more or less by guessing and by trusting the indicators of the red alien heads.

The navigation from the world map to the Avenger view is nicely animated, but gets a bit annoying after a while. Returning from a mission also takes a long time before you can finally click "continue". No idea whether loading the base level really takes that long; at least my SSD doesn't seem to be at fault.

I don't have the performance problems many people have complained about. The only annoying bug I've noticed was with one soldier in the ability modification (...) screen: you can't use the last move, and you can't get past it with the TAB key; instead you first have to click an action, cancel it with a right click, and only then can you select the next soldier with TAB.

The attack of an Archon. The Archon is very mobile, and beware of the Blazing Pinions, an attack on 3 soldiers from a great height - don't stand still, or it ends badly.

But the Faceless is no slouch either.

PS: if you can't resist cheating, you'll find what you need here as well. I've healed my wounded soldiers this way quite often, which makes it possible to field them again in the next mission, but they suffer a sharp drop in Will, the stat that matters against the PSI attacks the Sectoids like to use.
Top

As Scope of 2012 Breach Expands, LinkedIn to Again Reset Passwords for Some Users

Postby BrianKrebs via Krebs on Security »

A 2012 data breach that was thought to have exposed 6.5 million hashed passwords for LinkedIn users instead likely impacted more than 117 million accounts, the company now says. In response, the business networking giant said today that it would once again force a password reset for individual users thought to be impacted in the expanded breach.

The 2012 breach was first exposed when a hacker posted a list of some 6.5 million unique passwords to a popular forum where members volunteer or can be hired to hack complex passwords. Forum members managed to crack some of the passwords, and eventually noticed that an inordinate number of the passwords they were able to crack contained some variation of “linkedin” in them.

LinkedIn responded by forcing a password reset on all 6.5 million of the impacted accounts, but it stopped there. But earlier today, reports surfaced about a sales thread on an online cybercrime bazaar in which the seller offered to sell 117 million records stolen in the 2012 breach. In addition, the paid hacked data search engine LeakedSource claims to have a searchable copy of the 117 million record database (this service said it found my LinkedIn email address in the data cache, but it asked me to pay $4.00 for a one-day trial membership in order to view the data; I declined).

Inexplicably, LinkedIn’s response to the most recent breach is to repeat the mistake it made with the original breach, by once again forcing a password reset for only a subset of its users.

“Yesterday, we became aware of an additional set of data that had just been released that claims to be email and hashed password combinations of more than 100 million LinkedIn members from that same theft in 2012,” wrote Cory Scott, in a post on the company’s blog. “We are taking immediate steps to invalidate the passwords of the accounts impacted, and we will contact those members to reset their passwords. We have no indication that this is as a result of a new security breach.”

LinkedIn spokesman Hani Durzy said the company has obtained a copy of the 117 million record database, and that LinkedIn believes it to be real.

“We believe it is from the 2012 breach,” Durzy said in an email to KrebsOnSecurity. “How many of those 117m are active and current is still being investigated.”

Regarding the decision not to force a password reset across the board back in 2012, Durzy said “We did at the time what we thought was in the best interest of our member base as a whole, trying to balance security for those with passwords that were compromised while not disrupting the LinkedIn experience for those who didn’t appear impacted.”

The 117 million figure makes sense: LinkedIn says it has more than 400 million users, but reports suggest only about 25 percent of those accounts are used monthly.

Alex Holden, co-founder of security consultancy Hold Security, was among the first to discover the original cache of 6.5 million back in 2012 — shortly after it was posted to the password cracking forum InsidePro. Holden said the 6.5 million encrypted passwords were all unique, and did not include any passwords that were simple to crack with rudimentary tools or resources [full disclosure: Holden’s site lists this author as an adviser, however I receive no compensation for that role].

“These were just the ones that the guy who posted it couldn’t crack,” Holden said. “I always thought that the hacker simply didn’t post to the forum all of the easy passwords that he could crack himself.”

The top 20 most commonly used LinkedIn account passwords, according to LeakedSource.
According to LeakedSource, just 50 easily guessed passwords made up more than 2.2 million of the 117 million encrypted passwords exposed in the breach.

“Passwords were stored in SHA1 with no salting,” the password-selling site claims. “This is not what internet standards propose. Only 117m accounts have passwords and we suspect the remaining users registered using FaceBook or some similarity.”

SHA1 is one of several different methods for “hashing” — that is, obfuscating and storing — plain text passwords. Passwords are “hashed” by taking the plain text password and running it against a theoretically one-way mathematical algorithm that turns the user’s password into a string of gibberish numbers and letters that is supposed to be challenging to reverse. 

The weakness of this approach is that hashes by themselves are static, meaning that the password “123456,” for example, will always compute to the same password hash. To make matters worse, there are plenty of tools capable of very rapidly mapping these hashes to common dictionary words, names and phrases, which essentially negates the effectiveness of hashing. These days, computer hardware has gotten so cheap that attackers can easily and very cheaply build machines capable of computing tens of millions of possible password hashes per second for each corresponding username or email address.

But by adding a unique element, or “salt,” to each user password, database administrators can massively complicate things for attackers who may have stolen the user database and rely upon automated tools to crack user passwords.
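The difference salting makes can be sketched in a few lines of Python. The function names here are just an illustration of the principle, not LinkedIn's actual scheme:

```python
# Why unsalted SHA-1 is weak: identical passwords always hash to the same
# value, so one precomputed table cracks every account at once.
import hashlib
import os

def sha1_unsalted(password: str) -> str:
    return hashlib.sha1(password.encode()).hexdigest()

def sha1_salted(password: str, salt: bytes) -> str:
    # prepend a per-user random salt before hashing
    return hashlib.sha1(salt + password.encode()).hexdigest()

# Two users with the same password: unsalted hashes are identical...
assert sha1_unsalted("123456") == sha1_unsalted("123456")

# ...but with per-user salts the stored hashes differ, so an attacker
# must crack each account separately instead of reusing one table.
salt_a, salt_b = os.urandom(16), os.urandom(16)
assert sha1_salted("123456", salt_a) != sha1_salted("123456", salt_b)
```

In practice one would reach for a dedicated password-hashing function (bcrypt, scrypt, or PBKDF2) rather than a single SHA-1 round, but the salting principle is the same.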

LinkedIn said it added salt to its password hashing function following the 2012 breach. But if you’re a LinkedIn user and haven’t changed your LinkedIn password since 2012, your password may not be protected with the added salting capabilities. At least, that’s my reading of the situation from LinkedIn’s 2012 post about the breach.

If you haven’t changed your LinkedIn password in a while, that would probably be a good idea. Most importantly, if you use your LinkedIn password at other sites, change those passwords to unique passwords. As this breach reminds us, re-using passwords at multiple sites that hold personal and/or financial information about you is a less-than-stellar idea.
Top

Microsoft Disables Wi-Fi Sense on Windows 10

Postby BrianKrebs via Krebs on Security »

Microsoft has disabled its controversial Wi-Fi Sense feature, a component embedded in Windows 10 devices that shares access to WiFi networks to which you connect with any contacts you may have listed in Outlook and Skype — and, with an opt-in — your Facebook friends.

Redmond made the announcement almost as a footnote in its Windows 10 Experience blog, but the feature caused quite a stir when the company’s flagship operating system first debuted last summer.

Microsoft didn’t mention the privacy and security concerns raised by Wi-Fi Sense, saying only that the feature was being removed because it was expensive to maintain and that few Windows 10 users were taking advantage of it.

“We have removed the Wi-Fi Sense feature that allows you to share Wi-Fi networks with your contacts and to be automatically connected to networks shared by your contacts,” wrote Gabe Aul, corporate vice president of Microsoft’s engineering systems team. “The cost of updating the code to keep this feature working combined with low usage and low demand made this not worth further investment. Wi-Fi Sense, if enabled, will continue to get you connected to open Wi-Fi hotspots that it knows about through crowdsourcing.”

Wi-Fi Sense doesn’t share your WiFi network password per se — it shares an encrypted version of that password. But it does allow anyone in your Skype or Outlook or Hotmail contacts lists to waltz onto your Wi-Fi network — should they ever wander within range of it or visit your home (or hop onto it secretly from hundreds of yards away with a good ‘ole cantenna!).

When the feature first launched, Microsoft sought to reassure would-be Windows 10 users that their Wi-Fi password would be sent encrypted and stored encrypted — on a Microsoft server. The company also pointed out that Windows 10 users had to initially agree to share their network during the Windows 10 installation process before the feature would be turned on.

But these assurances rang hollow for many Windows users already suspicious about a feature that could share access to a user’s wireless network even after that user changed their Wi-Fi network password.

“Annoyingly, because they didn’t have your actual password, just authorization to ask the Wi-Fi Sense service to supply it on their behalf, changing your password down the line wouldn’t keep them out – Wi-Fi Sense would learn the new password directly from you and supply it for them in future,” John Zorabedian wrote for security firm Sophos.

Microsoft’s solution for those concerned required users to change the name (a.k.a. “SSID”) of their Wi-Fi network to include the text “_optout” somewhere in the network name (for example, “oldnetworknamehere_optout”).

I commend Microsoft for taking this step, albeit belatedly. Much security is undone by ill-advised features in software and hardware that are unnecessarily enabled by default.
Top

Saving a non-booting Asus UX31A laptop

Postby Flameeyes via Flameeyes's Weblog »

I have just come back from a long(ish) trip through the UK and US, and decided it's time for me to go back to some simple OSS tasks, while I finally convince myself to talk about the doubts I've been having lately.

To start on that, I tried to turn on my laptop, the Asus UX31A I got three and a half years ago. It didn't turn on. This happened before, so I just left it to charge and tried again. No luck.

Googling around, I found a number of people with all kinds of problems with this model, one of which is something getting stuck at the firmware level. Given that my previous laptop had a random problem with PCIe settings (it would reboot every time I turned it off, but only if the power was still plugged in), I was not completely surprised. Unfortunately, following the advice I read (take out the battery and run off AC power) didn't help.

I knew it was not the (otherwise common) problem with the power plug, because when I plugged the cable in, the YubiKey NEO-n would turn on, which meant power was reaching the board fine.

Then I remembered two things: one of the suggestions was about the keyboard, and the keyboard itself had had problems before (the Control key would sometimes stop working for half an hour at a time). Indeed, once I re-seated the keyboard's ribbon cable, it turned on again, yay!

But here's the other problem: the laptop would turn on, the caps-lock LED on and stay there. And even letting the main battery run out would not be enough to return it to working conditions. What to do? Well, I got a hunch, and turned out to be right.

One of the things I had tried before was removing the CMOS battery. Either I didn't keep it out long enough to properly clear the settings, or something else went wrong at the time; in any case, it turned out that removing the CMOS battery allowed the system to start up. But that would mean no RTC, which is not great if you start the laptop without an Internet connection.

The way I solved it was as follows:

  • disconnect the CMOS battery;
  • start up the laptop;
  • enter "BIOS" (EFI) setup;
  • make any needed change (such as time);
  • "Save and exit";
  • let the laptop boot up;
  • connect the CMOS battery.
Yes, this does involve running the laptop without the bottom plate for a while, so be careful about it. On the other hand, it did save my laptop from being stomped into the ground out of sheer rage.
Top

How LINGUAS are thrice wrong!

Postby Michał Górny via Michał Górny »

The LINGUAS environment variable serves two purposes in Gentoo. On one hand, it’s the USE_EXPAND flag group for USE flags controlling installation of localizations. On the other, it’s a gettext-specific environment variable controlling installation of localizations in some of the build systems supporting gettext. The fun fact is, both uses are simply wrong.



Why is LINGUAS as an environment variable wrong?

Let’s start with the upstream-blessed LINGUAS environment variable. If set, it limits localization files installed by autotools+gettext-based build systems (and some more) to the subset matching specified locales. At first, it may sound like a useful feature. However, it is an implicit feature, and therefore one causing a lot of confusion for the package manager.

Long story short, in this context the package manager does not know anything about LINGUAS. It’s just a random environment variable that has some value and possibly may be saved somewhere in package metadata. However, this value can actually affect the installed files in a hard-to-predict way. So, even if package managers added some special meaning to LINGUAS (which would be non-PMS compliant), it still would not be good enough.

What does this mean in practice? It means that if I set LINGUAS to some value on my system, then most of the binary packages produced on it suddenly have files stripped, compared to non-LINGUAS builds. If I installed such a binary package on some other system, it would match the LINGUAS of the build host rather than the install host. And this means the binary packages are simply incorrect.

Even worse, a change to LINGUAS cannot be propagated correctly. Even if the package manager decided to rebuild packages based on changes in LINGUAS, it has no way of knowing which locales were supported by a package, or whether LINGUAS was used at all. So you end up rebuilding all installed packages, just in case.

Why are LINGUAS USE flags wrong?

So, how do we solve all those problems? Of course, we introduce explicit LINGUAS flags. This way, the developer is expected to list all supported locales in IUSE, the package manager can determine the enabled localizations and match binary packages correctly. All seems fine. Except, there are two problems.

The first problem is that it is cumbersome. Figuring out supported localizations and adding a dozen flags on a number of packages is time-consuming. What’s even worse, those flags need to be maintained once added. Which means you have to check supported localizations for changes on every version bump. Not all developers do that.

The second problem is that it is… a QA violation, most of the time. We already have quite a clear policy that USE flags are not supposed to control installation of small files with no explicit dependencies — and most of the localization files are exactly that!

Let me remind you why we have that policy. There are two reasons: rebuilds and binary packages.

Rebuilds are bad because every time you change LINGUAS, you end up rebuilding relevant packages, and those can be huge. You may think it uncommon — but just imagine you’ve finally finished building your new shiny Gentoo install, and noticed that you forgot to enable the localization. And guess what! You have to build a number of packages, again.

Binary packages are even worse, since they are tied to a specific USE flag combination. If you build a binary package with specific LINGUAS, it can only be installed on hosts with exactly the same LINGUAS. While it would be trivial to strip localizations from an installed binary package, you have to build a fresh one instead. And with a dozen lingua flags… you end up with thousands of possible binary package variants, if not more.

Why EAPI 5 makes things worse… or better?

Reusing the LINGUAS name for the USE_EXPAND group looked like a good idea. After all, the value would end up in the ebuild environment for use by the build system, and in most of the affected packages, LINGUAS worked out of the box with no ebuild changes! Except that… it wasn’t really guaranteed to work before EAPI 5.

In earlier EAPIs, LINGUAS could contain pretty much anything, since no special behavior was reserved for it. However, starting with EAPI 5 the package manager guarantees that it will only contain those values that correspond to enabled flags. This is a good thing, after all, since it finally makes LINGUAS work reliably. It has one side effect though.

Since LINGUAS is reduced to enabled USE flags, and enabled USE flags can only contain defined USE flags… it means that in any ebuild missing LINGUAS flags, LINGUAS should be effectively empty (yes, I know Portage does not do that currently, and it is a bug in Portage). To make things worse, this means set to an empty value rather than unset. In other words, disabling localization completely.

This way, a small QA issue of implicitly affecting installed localization files turned into a bigger issue of localizations suddenly no longer being installed. Which in turn can’t be fixed without introducing a proper set of LINGUAS flags everywhere, causing other kinds of QA issues and additional maintenance burden.
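The unset-versus-empty distinction gettext draws can be seen in plain shell; a minimal sketch (the comments describe how gettext-based build systems treat each state):

```shell
# Sketch: how gettext-based build systems interpret LINGUAS.
unset LINGUAS
echo "${LINGUAS-<unset>}"    # <unset> : install all localizations
export LINGUAS=""
echo "${LINGUAS-<unset>}"    # (empty) : install no localizations
export LINGUAS="de pl"
echo "${LINGUAS-<unset>}"    # de pl   : install only de and pl
```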

What would be the good solution, again?

First of all, kill LINGUAS. Either make it unset for good (and this won’t be easy since PMS kinda implies making all PM-defined variables read-only), or disable any special behavior associated with it. Make the build system compile and install all localizations.

Then, use INSTALL_MASK. It’s designed to handle this. It strips files from installed systems while preserving them in binary packages. Which means your binary packages are now more portable, and every system you install them to will get correct localizations stripped. Isn’t that way better than rebuilding things?
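As a concrete (and purely illustrative) sketch of the idea, a make.conf entry like this strips gettext catalogs from the live filesystem at merge time while leaving them intact inside the binary packages:

```shell
# /etc/portage/make.conf (illustrative sketch)
# Paths listed here are dropped when merging to the live filesystem,
# but remain inside binary packages, so the packages stay portable.
INSTALL_MASK="/usr/share/locale"
```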

Now, is that going to happen? I doubt it. People are rather going to focus on claiming that buggy Portage behavior was good, that QA issues are fine as long as I can strip some small files from my system in the ‘obvious’ way, that the specification should be changed to allow a corner case…
Top

PMDG and X-Plane

Postby via www.my-universe.com Blog Feed »

PMDG is currently developing its first add-on for X-Plane, the DC-6 Cloudmaster. This has been known for a while, especially since the beta phase now seems to be coming to an end (and of course I am looking forward to the release, since for me personally the DC-6 is the most elegant and beautiful of the machines with four radial engines).

What is interesting in this context, however, is a statement by Robert S. Randazzo of PMDG, who recently had the following to say at AVSim:

From PMDG's perspective, it doesn't matter whether you like FSX, FSX-SE, Prepar3D or X-Plane.
In the not-too-distant future you will have access to our products across all of these platforms- and that gives you the choice to go with the platform that suits you best.
That gives hope that PMDG's Boeing 737NG and 777 models will eventually also be available for X-Plane. Whether and when the Jetstream 4100 (so far only available for FSX) and the Boeing 747-400 "Queen of the Skies II" currently in development will be ported to X-Plane remains to be seen, however.

My guess: after the DC-6 release, PMDG will first wait a while to gather experience with maintaining X-Plane products, and surely also to gauge the commercial success of an X-Plane add-on in their price range. After that, we shall see…
Top

NVIDIA Linux drivers, PowerMizer, Coolbits, Performance Levels and GPU fan settings

Postby Zach via The Z-Issue »

Important!

My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!
Whew, I know that’s a long title for a post, but I wanted to make sure that I mentioned every term so that people having the same problem could readily find the post that explains what solved it for me. For some time now (ever since the 346.x series [340.76, which was the last driver that worked for me, was released on 27 January 2015]), I have had a problem with the NVIDIA Linux Display Drivers (known as nvidia-drivers in Gentoo Linux). The problem that I’ve experienced is that the newer drivers would, upon starting an X session, immediately clock up to Performance Level 2 or 3 within PowerMizer.

Before using these newer drivers, the Performance Level would only increase when it was really required (3D rendering, HD video playback, et cetera). I probably wouldn’t have even noticed that the Performance Level was changing, except that it would cause the GPU fan to spin faster, which was noticeably louder in my office.

After scouring the interwebs, I found that I was not the only person to have this problem. For reference, see this article, and this one about locking to certain Performance Levels. However, I wasn’t able to find a solution for the exact problem that I was having. If you look at the screenshot below, you’ll see that the Performance Level is set at 2 which was causing the card to run quite hot (79°C) even when it wasn’t being pushed.


It turns out that I needed to add some options to my X Server Configuration. Unfortunately, I was originally making changes in /etc/X11/xorg.conf, but they weren’t being honoured. I added the following lines to /etc/X11/xorg.conf.d/20-nvidia.conf, and the changes took effect:

Section "Device"
     Identifier    "Device 0"
     Driver        "nvidia"
     VendorName    "NVIDIA Corporation"
     BoardName     "GeForce GTX 470"
     Option        "RegistryDwords" "PowerMizerEnable=0x1; PowerMizerDefaultAC=0x3;"
EndSection

The RegistryDwords option was what ultimately fixed the problem for me. More information about the NVIDIA drivers can be found in their README and Installation Guide, and in particular, these settings are described on the X configuration options page. The PowerMizerDefaultAC setting may seem like it is only for laptops that are plugged in to AC power, but as this system was a desktop, I found that it was always seen as being “plugged in to AC power.”

As you can see from the screenshots below, these settings did indeed fix the PowerMizer Performance Levels and subsequent temperatures for me:


Whilst I was adding X configuration options, I also noticed that Coolbits (search for “Coolbits” on that page) were supported with the Linux driver. Here’s the excerpt about Coolbits for version 364.19 of the NVIDIA Linux driver:

Option “Coolbits” “integer”
Enables various unsupported features, such as support for GPU clock manipulation in the NV-CONTROL X extension. This option accepts a bit mask of features to enable.

WARNING: this may cause system damage and void warranties. This utility can run your computer system out of the manufacturer’s design specifications, including, but not limited to: higher system voltages, above normal temperatures, excessive frequencies, and changes to BIOS that may corrupt the BIOS. Your computer’s operating system may hang and result in data loss or corrupted images. Depending on the manufacturer of your computer system, the computer system, hardware and software warranties may be voided, and you may not receive any further manufacturer support. NVIDIA does not provide customer service support for the Coolbits option. It is for these reasons that absolutely no warranty or guarantee is either express or implied. Before enabling and using, you should determine the suitability of the utility for your intended use, and you shall assume all responsibility in connection therewith.

When “2” (Bit 1) is set in the “Coolbits” option value, the NVIDIA driver will attempt to initialize SLI when using GPUs with different amounts of video memory.

When “4” (Bit 2) is set in the “Coolbits” option value, the nvidia-settings Thermal Monitor page will allow configuration of GPU fan speed, on graphics boards with programmable fan capability.

When “8” (Bit 3) is set in the “Coolbits” option value, the PowerMizer page in the nvidia-settings control panel will display a table that allows setting per-clock domain and per-performance level offsets to apply to clock values. This is allowed on certain GeForce GPUs. Not all clock domains or performance levels may be modified.

When “16” (Bit 4) is set in the “Coolbits” option value, the nvidia-settings command line interface allows setting GPU overvoltage. This is allowed on certain GeForce GPUs.

When this option is set for an X screen, it will be applied to all X screens running on the same GPU.

The default for this option is 0 (unsupported features are disabled).
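The bitmask arithmetic the README describes is just OR-ing the documented bit values together; a quick sanity check in Python (the constant names are mine, not NVIDIA's):

```python
# Each Coolbits feature from the driver README is one bit; enabling
# several features means OR-ing (for distinct bits, simply summing)
# their values.
SLI_MISMATCHED_VRAM = 2  # Bit 1: SLI with mismatched video memory
FAN_CONTROL = 4          # Bit 2: GPU fan speed control in nvidia-settings
CLOCK_OFFSETS = 8        # Bit 3: per-performance-level clock offsets
OVERVOLTAGE = 16         # Bit 4: GPU overvoltage via the CLI

coolbits = FAN_CONTROL | CLOCK_OFFSETS
print(coolbits)  # -> 12, the value for: Option "Coolbits" "12"
```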

I found that I would personally like to have the options enabled by “4” and “8”, and that one can combine Coolbits by simply adding them together. For instance, the ones I wanted (“4” and “8”) added up to “12”, so that’s what I put in my configuration:

Section "Device"
     Identifier    "Device 0"
     Driver        "nvidia"
     VendorName    "NVIDIA Corporation"
     BoardName     "GeForce GTX 470"
     Option        "Coolbits" "12"
     Option        "RegistryDwords" "PowerMizerEnable=0x1; PowerMizerDefaultAC=0x3;"
EndSection

and that resulted in the following options being available within the nvidia-settings utility:


Though the Coolbits portions aren’t required to fix the problems that I was having, I find them to be helpful for maintenance tasks and configurations. I hope, if you’re having problems with the NVIDIA drivers, that these instructions help give you a better understanding of how to work around any issues you may face. Feel free to comment if you have any questions, and we’ll see if we can work through them.

Cheers,
Zach
Top

Carding Sites Turn to the ‘Dark Cloud’

Postby BrianKrebs via Krebs on Security »

Crooks who peddle stolen credit cards on the Internet face a constant challenge: Keeping their shops online and reachable in the face of meddling from law enforcement officials, security firms, researchers and vigilantes. In this post, we’ll examine a large collection of hacked computers around the world that currently serves as a criminal cloud hosting environment for a variety of cybercrime operations, from sending spam to hosting malicious software and stolen credit card shops.

I first became aware of this botnet, which I’ve been referring to as the “Dark Cloud” for want of a better term, after hearing from Noah Dunker, director of security labs at Kansas City-based vendor RiskAnalytics. Dunker reached out after watching a YouTube video I posted that featured some existing and historic credit card fraud sites. He asked what I knew about one of the carding sites in the video: a fraud shop called “Uncle Sam,” whose home page pictures a pointing Uncle Sam saying “I want YOU to swipe.”

The “Uncle Sam” carding shop is one of a half-dozen that reside on a Dark Cloud criminal hosting environment.
I confessed that I knew little of this shop other than its existence, and asked why he was so interested in this particular crime store. Dunker showed me how the Uncle Sam card shop and at least four others were hosted by the same Dark Cloud, and how the system changed the Internet address of each Web site roughly every three minutes. The entire robot network, or “botnet,” consisted of thousands of hacked home computers spread across virtually every time zone in the world, he said.

Dunker urged me not to take his word for it, but to check for myself the domain name server (DNS) settings of the Uncle Sam shop every few minutes. DNS acts as a kind of Internet white pages, by translating Web site names to numeric addresses that are easier for computers to navigate. The way this so-called “fast-flux” botnet works is that it automatically updates the DNS records of each site hosted in the Dark Cloud every few minutes, randomly shuffling the Internet address of every site on the network from one compromised machine to another in a bid to frustrate those who might try to take the sites offline.

Sure enough, a simple script was all it took to find a few dozen Internet addresses assigned to the Uncle Sam shop over just 20 minutes of running the script. When I let the DNS lookup script run overnight, it came back with more than 1,000 unique addresses to which the site had been moved during the 12 or so hours I let it run. According to Dunker, the vast majority of those Internet addresses (> 80 percent) tie back to home Internet connections in Ukraine, with the rest in Russia and Romania.
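A script along these lines takes only a few lines of Python. Here is my own hedged sketch (the domain in the usage comment is a placeholder, and the resolver is injectable so the sampling logic can be exercised offline):

```python
import socket
import time

def default_resolver(hostname):
    """Return the set of IP addresses currently in DNS for hostname."""
    return {info[4][0] for info in socket.getaddrinfo(hostname, None)}

def collect_flux_ips(hostname, rounds, interval_s=180, resolve=default_resolver):
    """Sample DNS every interval_s seconds and accumulate unique addresses.

    A stable site converges on a handful of addresses; a fast-flux host
    keeps producing new ones for as long as you let this run.
    """
    seen = set()
    for _ in range(rounds):
        seen |= resolve(hostname)
        if interval_s:
            time.sleep(interval_s)
    return seen

# Usage (hypothetical domain): run overnight against a suspect site.
# ips = collect_flux_ips("suspect-shop.example", rounds=240)
# print(len(ips), "unique addresses observed")
```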

‘Mr. Bin,’ another carding shop hosting on the dark cloud service. A ‘bin’ is the “bank identification number” or the first six digits on a card, and it’s mainly how fraudsters search for stolen cards.
“Right now there’s probably over 2,000 infected endpoints that are mostly broadband subscribers in Eastern Europe,” enslaved as part of this botnet, Dunker said. “It’s a highly functional network, and it feels kind of like a black market version of Amazon Web Services. Some of the systems appear to be used for sending spam and some are for big dynamic scaled content delivery.”

Dunker said that historic DNS records indicate that this botnet has been in operation for at least the past year, but that there are signs it was up and running as early as Summer 2014.

Wayne Crowder, director of threat intelligence for RiskAnalytics, said the botnet appears to be a network structure set up to push different crimeware, including ransomware, click fraud tools, banking Trojans and spam.

Crowder said the Windows-based malware that powers the botnet assigns infected hosts different roles, depending on the victim machine’s strengths or weaknesses: More powerful systems might be used as DNS servers, while infected systems behind home routers may be infected with a “reverse proxy,” which lets the attackers control the system remotely.

“Once it’s infected, it phones home and gets a role assigned to it,” Crowder said. “That may be to continue sending spam, host a reverse proxy, or run a DNS server. It kind of depends on what capabilities it has.”

“Popeye,” another carding site hosted on the criminal cloud network.
Indeed, this network does feel rather spammy. In my book Spam Nation, I detailed how the largest spam affiliate program on the planet at the time used a similar fast-flux network of compromised systems to host its network of pill sites that were being promoted in the junk email. Many of the domains used in those spam campaigns were two- and three-word domains that appeared to be randomly created for use in malware and spam distribution.

“We’re seeing two English words separated by a dash,” Dunker said of the hundreds of hostnames found on the Dark Cloud network that do not appear to be used for carding shops. “It’s a very spammy naming convention.”

It’s unclear whether this botnet is being used by more than one individual or group. The variety of crimeware campaigns that RiskAnalytics has tracked operating through the network suggests that it may be rented out to multiple different cybercrooks. Still, other clues suggest the whole thing may have been orchestrated by the same gang.

For example, nearly all of the carding sites hosted on the dark cloud network — including Uncle Sam, Scrooge McDuck, Mr. Bin, Try2Swipe, Popeye, and Royaldumps — share the same or very similar site designs. All of them say that customers can look up available cards for sale at the site, but that purchasing the cards requires first contacting the proprietor of the shops directly via instant message.

All six of these shops — and only these six — are advertised prominently on the cybercrime forum prvtzone[dot]su. It is unclear whether this forum is run or frequented by the people who run this botnet, but the forum does heavily steer members interested in carding toward these six carding services. It’s unclear why, but Prvtzone has a Google Analytics tracking ID (UA-65055767) embedded in the HTML source of its page that may hold clues about the proprietors of this crime forum.

The “dumps” section of the cybercrime forum Prvtzone advertises all six of the carding domains found on the fast-flux network.
Dunker says he’s convinced it’s one group that occasionally rents out the infrastructure to other criminals.

“At this point, I’m positive that there’s one overarching organized crime operation driving this whole thing,” Dunker said. “But they do appear to be leasing parts of it out to others.”

Dunker and Crowder say they hope to release an initial report on their findings about the botnet sometime next week, but that for now the rabbit hole appears to go quite deep with this crime machine. For instance, there are several sites hosted on the network that appear to be clones of real businesses selling expensive farm equipment in Europe, and multiple sites report that these are fake companies looking to scam the unwary.

“There are a lot of questions that this research poses that we’d like to be able to answer,” Crowder said.

For now, I’d invite anyone interested to feel free to contribute to the research. This text file contains a historic record of domains I found that are or were at one time tied to the 40 or so Internet addresses I found in my initial, brief DNS scans of this network. Here’s a larger list of some 1,024 addresses that came up when I ran the scan for about 12 hours.

If you liked this story, check out this piece about another carding forum called Joker’s Stash, which also uses a unique communications system to keep itself online and reachable to all comers.
Top

ZFSv28 Ready for Testing on FreeBSD

Postby via A Year in the Life of a BSD Guru »

From yesterday's announcement:
Top

Wendy’s: Breach Affected 5% of Restaurants

Postby BrianKrebs via Krebs on Security »

Wendy’s said today that an investigation into a credit card breach at the nationwide fast-food chain uncovered malicious software on point-of-sale systems at fewer than 300 of the company’s 5,500 franchised stores. The company says the investigation into the breach is continuing, but that the malware has been removed from all affected locations.

“Based on the preliminary findings of the investigation and other information, the Company believes that malware, installed through the use of compromised third-party vendor credentials, affected one particular point of sale system at fewer than 300 of approximately 5,500 franchised North America Wendy’s restaurants, starting in the fall of 2015,” Wendy’s disclosed in their first quarter financial statement today. The statement continues:

“These findings also indicate that the Aloha point of sale system has not been impacted by this activity. The Aloha system is already installed at all Company-operated restaurants and in a majority of franchise-operated restaurants, with implementation throughout the North America system targeted by year-end 2016. The Company expects that it will receive a final report from its investigator in the near future.”

“The Company has worked aggressively with its investigator to identify the source of the malware and quantify the extent of the malicious cyber-attacks, and has disabled and eradicated the malware in affected restaurants. The Company continues to work through a defined process with the payment card brands, its investigator and federal law enforcement authorities to complete the investigation.”

“Based upon the investigation to date, approximately 50 franchise restaurants are suspected of experiencing, or have been found to have, unrelated cybersecurity issues. The Company and affected franchisees are working to verify and resolve these issues.”

The findings come as many banks and credit unions that have felt card fraud pain because of the breach grumble about its extent and duration. Sources at multiple financial institutions say their data indicates that some of the breached Wendy’s locations were still leaking customer card data as late as the end of March 2016 and into early April. The breach was first disclosed on this blog on January 27, 2016.

“Our ongoing investigation into unusual payment card activity at some Wendy’s restaurants is being led by a third party PFI and is proceeding as expeditiously as possible,” Wendy’s spokesman Bob Bertini said in response to questions about the duration of the breach at some stores. “As you are aware, our investigator is required to follow certain protocols in this type of comprehensive investigation and this takes time. Adding to the complexity is the fact that most Wendy’s restaurants are owned and operated by independent franchisees.”
Top

Adobe, Microsoft Push Critical Updates

Postby BrianKrebs via Krebs on Security »

Adobe has issued security updates to fix weaknesses in its PDF Reader and Cold Fusion products, while pointing to an update to be released later this week for its ubiquitous Flash Player browser plugin. Microsoft meanwhile today released 16 update bundles to address dozens of security flaws in Windows, Internet Explorer and related software.

Microsoft’s patch batch includes updates for “zero-day” vulnerabilities (flaws that attackers figure out how to exploit before the software maker does) in Internet Explorer (IE) and in Windows. Half of the 16 patches that Redmond issued today earned its “critical” rating, meaning the vulnerabilities could be exploited remotely with little or no help from the user, save for perhaps clicking a link, opening a file or visiting a hacked or malicious Web site.

According to security firm Shavlik, two of the Microsoft patches tackle issues that were publicly disclosed prior to today’s updates, including bugs in IE and the Microsoft .NET Framework.

Anytime there’s a .NET Framework update available, I uncheck those updates, install the rest, reboot, and then install the .NET updates on their own. I’ve had too many .NET update failures muddy the process of figuring out which update borked a Windows machine after a batch of patches to do otherwise, but your mileage may vary.

On the Adobe side, the pending Flash update fixes a single vulnerability that apparently is already being exploited in active attacks online. However, Shavlik says there appears to be some confusion about how many bugs are fixed in the Flash update.

“If information gleaned from [Microsoft’s account of the Flash Player update] MS16-064 is accurate, this Zero Day will be accompanied by 23 additional CVEs, with the release expected on May 12th,” Shavlik wrote. “With this in mind, the recommendation is to roll this update out immediately.”



Adobe says the vulnerability is included in Adobe Flash Player 21.0.0.226 and earlier versions for Windows, Macintosh, Linux, and Chrome OS, and that the flaw will be fixed in a version of Flash to be released May 12.

As far as Flash is concerned, the smartest option is probably to hobble or ditch the program once and for all, significantly increasing the security of your system in the process. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you use Adobe Reader to display PDF documents, you’ll need to update that, too. Alternatively, consider switching to another reader that is perhaps less targeted. Adobe Reader comes bundled with a number of third-party software products, but many Windows users may not realize there are alternatives, including some good free ones. For a time I used Foxit Reader, but that program seems to have grown more bloated with each release. My current preference is Sumatra PDF; it is lightweight (about 40 times smaller than Adobe Reader) and quite fast.

Finally, if you run a Web site that in any way relies on Adobe’s Cold Fusion technology, please update your software soon. Cold Fusion vulnerabilities have traditionally been targeted by cyber thieves to compromise countless online shops.
Top

New Company: Edge Security

Postby via Nerdling Sapple »

I've just launched a website for my new information security consulting company, Edge Security. We're expert hackers, with a fairly diverse skill set and a lot of experience. I mention this here because in a few months we plan to release an open-source kernel module for Linux called WireGuard. No details yet, but keep your eyes open in this space.

EuroBSDCon 2010

Postby via A Year in the Life of a BSD Guru »

I'll be heading to the airport later this afternoon, en route to Karlsruhe for this year's EuroBSDCon. Here are my activities for the conference:

Indymedia Post About Legida Counter-Protest Alliances

Postby feltel via Sebastians Blog »

Normally, postings like this one aren't my thing, but what I read today on linksunten.indymedia.org makes me downright furious, and it feels like the proverbial slap in the face. Not only mine, but that of everyone who for a year and a half now has actively stood against Legida, Pegida, AfD, OfD and so on, and thereby […]


Crooks Grab W-2s from Credit Bureau Equifax

Postby BrianKrebs via Krebs on Security »

Identity thieves stole tax and salary data from big-three credit bureau Equifax Inc., according to a letter that grocery giant Kroger sent to all current and some former employees on Thursday. The nation’s largest grocery chain by revenue appears to be one of several Equifax customers that were similarly victimized this year.

Atlanta-based Equifax’s W-2Express site makes electronic W-2 forms available for download for many companies, including Kroger, which employs more than 431,000 people. According to a letter Kroger sent to employees dated May 5, thieves were able to access W-2 data merely by entering, at Equifax’s portal, the employee’s default PIN code: nothing more than the last four digits of the employee’s Social Security number followed by the employee’s four-digit birth year.
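To see just how little such a default scheme protects, consider a minimal sketch (hypothetical code, not Equifax's): once a fraudster holds an employee's SSN and birth year from a prior breach, the default PIN is not guessed, it is computed.

```python
# Illustrative only: an Equifax-style default PIN (last 4 of SSN plus
# 4-digit birth year) has zero entropy for an attacker who already
# holds breached SSN/DOB data. Names and data here are hypothetical.

def default_pin(ssn: str, birth_year: int) -> str:
    """Derive the default PIN: last four SSN digits + birth year."""
    return ssn[-4:] + str(birth_year)

# The kind of record that trades cheaply underground after breaches.
stolen_record = {"ssn": "123-45-6789", "birth_year": 1975}

pin = default_pin(stolen_record["ssn"].replace("-", ""),
                  stolen_record["birth_year"])
print(pin)  # 67891975, no guessing required
```

An employee who had set a real password was unaffected, which matches Kroger's statement that only default-PIN accounts appear to have been at risk.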

“It appears that unknown individuals have accessed [Equifax’s] W2Express website using default log-in information based on Social Security numbers (SSN) and dates of birth, which we believe were obtained from some other source, such as a prior data breach at other institutions,” Kroger wrote in a FAQ about the incident that was included with the letter sent to employees. “We have no indication that Kroger’s systems have been compromised.”

The FAQ continued:

“At this time, we have no indication that associates who had created a new password (did not use the default PIN) were affected, and we are still identifying which associates still using the default PIN may have been affected. We believe individuals gained access to some Kroger associates’ electronic W-2 forms and may have used the information to file tax returns in their names in an effort to claim a fraudulent refund.”

“Kroger is working with Equifax and the authorities to determine who is affected and restore secure access to W-2Express. At this time, we believe you are among our current and former Kroger associates using the default PIN in the W-2Express system. This does not necessarily mean your W-2 was accessed as part of this security incident. We are still working to identify which individuals’ information was accessed.”

Kroger said it doesn’t yet know how many of its employees may have been affected.

The incident comes amid news first reported on this blog earlier this week that tax fraudsters similarly targeted employees of companies that used payroll giant ADP to give employees access to their W-2 data. ADP acknowledged that the incident affected employees at U.S. Bank and at least 11 other companies.

Equifax did not respond to requests for comment about how many other customer companies may have been affected by the same default (in)security. But Kroger spokesman Keith Dailey said other companies that relied on Equifax for W-2 data also relied on the last four of the SSN and 4-digit birth year as authenticators.

“As far as I know, it’s the standard Equifax setup,” Dailey said.

Last month, Stanford University alerted 600 current and former employees that their data was similarly accessed by ID thieves via Equifax’s W-2Express portal. Northwestern University also just alerted 150 employees that their salary and tax data was stolen via Equifax this year.

In a statement released to KrebsOnSecurity, Equifax spokeswoman Dianne Bernez confirmed that the company had been made aware of suspected fraudulent access to payroll information through its W-2Express service by Kroger.

“The information in question was accessed by unauthorized individuals who were able to gain access by using users’ personally identifiable information,” the statement reads. “We have no reason to believe the personally identifiable information was attained through Equifax systems. Unfortunately, as individuals’ personally identifiable information has become more publicly available, these types of online fraud incidents have escalated. As a result, it is critical for consumers and businesses to take steps to protect consumers’ personally identifiable information including the use of strong passwords and PIN codes. We are working closely with Kroger to assess and monitor the situation.”

ID thieves go after W-2 data because it contains much of the information needed to fraudulently request a large tax refund from the IRS in someone else’s name. Kroger told employees they would know they were victims in this breach if they received a notice from the IRS about a fraudulent refund request filed in their name.

However, most victims first learn of the crime after having their returns rejected by the IRS because the scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

Kroger said it would offer free credit monitoring services to employees affected by the breach. Kroger spokesman Dailey declined to say which company would be providing that monitoring, but he did confirm that it would not be Equifax.

Update, May 7, 9:44 a.m.: Added mention of the Northwestern University incident involving Equifax’s W-2 portal.

Meet Joe Maloney – Lead System Architect for PC-BSD

Postby Josh Smith via Official PC-BSD Blog »

We’d like to take a moment to officially welcome Joe Maloney to the PC-BSD project as our Lead System Architect. Joe has been working as a volunteer developer with the PC-BSD project for several years and began committing to the project in 2013. Joe’s duties will include working on our command line utilities, bug fixing, managing the ports tree, and general creative control over old utilities that need to be revamped. Joe has more than 12 years of experience with FreeBSD, works day to day as a Quality Assurance Engineer for iXsystems, and has been doing excellent work writing new tests to make FreeNAS more resilient going forward. Take a moment and help us welcome Joe to the team!

Crooks Go Deep With ‘Deep Insert’ Skimmers

Postby BrianKrebs via Krebs on Security »

ATM maker NCR Corp. says it is seeing a rapid rise in reports of what it calls “deep insert skimmers,” wafer-thin fraud devices made to be hidden inside of the card acceptance slot on a cash machine.

KrebsOnSecurity’s All About Skimmers series has featured several stories about insert skimmers. But the ATM manufacturer said deep insert skimmers are different from typical insert skimmers because they are placed in various positions within the card reader transport, behind the shutter of a motorized card reader and completely hidden from the consumer at the front of the ATM.

Deep insert skimmers removed from hacked ATMs.
NCR says these deep insert skimming devices — usually made of metal or PCB plastic — are unlikely to be affected by most active anti-skimming jamming solutions, and they are unlikely to be detected by most fraudulent device detection solutions.

“Neither NCR Skimming Protection Solution, nor other anti-skimming devices can prevent skimming with these deep insert skimmers,” NCR wrote in an alert sent to banks and other customers. “This is due to the fact the skimmer sits well inside the card reader, away from the detectors or jammers of [NCR’s skimming protection solution].”

The company said it has received reports of these skimming devices on the machines of all ATM manufacturers in Greece, Ireland, Italy, Switzerland, Sweden, Bulgaria, Turkey, the United Kingdom and the United States.

“This suggests that ‘deep insert skimming’ is becoming more viable for criminals as a tactic to avoid bezel mounted anti-skimming devices,” NCR wrote. The company said it is currently testing a firmware update for NCR machines that should help detect the insertion of deep insert skimmers and send an alert.

A DEEP DIVE ON DEEP INSERT SKIMMERS

Charlie Harrow, solutions manager for global security at NCR, said the early model insert skimmers used a rudimentary wireless transmitter to send card data. But those skimmers were all powered by tiny coin batteries like the kind found in watches, and that dramatically limits the amount of time that the skimmer can transmit card data.

Harrow said NCR suspects that the deep insert skimmer makers are using tiny pinhole cameras hidden above or beside the PIN pad to record customers entering their PINs, and that the hidden camera doubles as a receiver for the stolen card data sent by the skimmer nestled inside the ATM’s card slot. He suspects this because NCR has never actually found a hidden camera along with an insert skimmer. Also, a watch-battery-powered wireless transmitter wouldn’t last long if the signal had to travel very far.

According to Harrow, the early model insert skimmers weren’t really made to be retrieved. Turns out, that may have something to do with the way card readers work on ATMs.

“Usually what happens is the insert skimmer causes a card jam,” at which point the thief calls it quits and retrieves his hidden camera — which has both the card data transmitted from the skimmer and video snippets of unwitting customers entering their PINs, he said. “These skimming devices can usually cope with most cards, but it’s just a matter of time before a customer sticks an ATM card in the machine that is in less-than-perfect condition.”

The latest model deep insert skimmers, Harrow said, include a tiny memory chip that can hold account data skimmed off the cards. Presumably this is preferable to sending the data wirelessly because writing the card data to a memory chip doesn’t drain as much power from the wimpy coin battery that powers the devices.

The deep insert skimmers also are designed to be retrievable:

“The ones I’ve seen will snap into some of the features inside the card reader, which has got various nooks and crannies,” Harrow said. “The latest ones also have magnets in them which are used to hold them down against the card reader.” Harrow says the magnets are on the opposite side of the device from the card reader, so the magnets don’t interfere with the skimmer’s job of reading the data off of the card’s magnetic stripe.

Many readers have asked why the fraudsters would bother skimming cards from ATMs in Europe, which long ago were equipped to read data off the chip embedded in the cards issued by European banks. The trouble is that virtually all chip cards still have the account data encoded in plain text on the magnetic stripe on the back of the card — mainly so that the cards can be used in ATM locations that cannot yet read chip-based cards (i.e., the United States).

When thieves skim data from ATMs in Europe, they generally sell the data to fraudsters who will encode the card data onto counterfeit cards and withdraw cash at ATMs in the United States or in other countries that haven’t yet fully moved to chip-based cards. In response, some European financial institutions have taken to enacting an anti-fraud mechanism called “geo-blocking,” which prevents the cards from being used in certain areas.

“Where geo-blocking has been widely or partially implemented, the international loss profile is very different, with minimal losses reported,” wrote the European ATM Security Team (EAST) in their latest roundup of ATM skimming attacks in 2015 (for more on that, see this story). “From the perspective of European card issuers the USA and the Asia-Pacific region are where the majority of such losses are being reported.”
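Geo-blocking amounts to a simple authorization rule at the issuer: decline magnetic-stripe transactions originating outside the card's home region unless the cardholder has opted in for travel. The sketch below is purely illustrative; the function, regions and policy are hypothetical, and real issuer logic is far more involved.

```python
# Illustrative geo-blocking rule; not any real issuer's implementation.
# A European issuer declines magnetic-stripe withdrawals outside its
# home region unless the cardholder has opted in before traveling.

ALLOWED_REGIONS = {"EU"}  # hypothetical home region for the card

def authorize(entry_mode: str, atm_region: str, travel_unblocked: bool) -> bool:
    if entry_mode == "chip":
        return True               # chip transactions are not the weak point
    if atm_region in ALLOWED_REGIONS:
        return True               # home-region stripe fallback still allowed
    return travel_unblocked       # foreign stripe use requires an opt-in

print(authorize("magstripe", "US", travel_unblocked=False))  # False: blocked
print(authorize("magstripe", "US", travel_unblocked=True))   # True: opted in
```

This is why EAST sees counterfeit-card losses concentrate in regions that still accept stripe-only withdrawals: the cloned stripe works anywhere the rule above is absent.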



Even after most U.S. banks put in place chip-capable ATMs, the magnetic stripe will still be needed because it’s an integral part of the way ATMs work: Most ATMs in use today require a magnetic stripe for the card to be accepted into the machine. The principal reason for this is to ensure that customers are putting the card into the slot correctly, as embossed letters and numbers running across odd spots in the card reader can take their toll on the machines over time.

Improvements to PulseAudio’s Echo Cancellation

Postby Arun via Arun Raghavan »

As we approach the PulseAudio 9.0 release, I thought it would be a good time to talk about one of the things I had a chance to work on, that landed in this cycle.

Old-time readers will remember the work I had done in the past on echo cancellation. If you’re unfamiliar with the concept, imagine a situation where you’re making a call from your phone or laptop. You don’t have a headset, so you use your device’s speaker and microphone. Now when the person on the other end speaks, their voice is played out of your speaker and captured by your mic. This means that they can now also hear what they’re saying, with some lag — this is called echo. If this has happened to you, you know how annoying and disruptive it can be.

Using Acoustic Echo Cancellation (AEC), PulseAudio is able to detect this in the captured input, and remove the audio we recently played back. While doing this, we also run some other algorithms to enhance the captured input, such as noise suppression (great at damping out background and fan noise) and acoustic gain control (AGC), which adjusts the mic volume so you are clearly audible. In addition to voice call use cases, this is also handy to have in other applications such as speech recognition (where you want the device to detect what a user is saying, while possibly playing out other sounds).
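Conceptually, an echo canceller adapts a filter that models the speaker-to-microphone path and subtracts its echo estimate from the captured signal. The toy loop below illustrates that idea with a normalized LMS filter over a white-noise far-end signal; it is a didactic sketch only, not the algorithm PulseAudio or WebRTC actually uses (those add double-talk detection, frequency-domain processing, nonlinear suppression and much more).

```python
# Toy acoustic echo cancellation with a normalized LMS adaptive filter.
# Purely illustrative; real AEC implementations are far more elaborate.
import random

random.seed(0)

TAPS = 8   # length of the adaptive filter modeling the echo path
MU = 0.5   # NLMS step size (stable for 0 < MU < 2)

# Far-end signal (what the loudspeaker plays); white noise stands in
# for speech here so the filter is fully excited.
far_end = [random.gauss(0, 1) for _ in range(2000)]

def mic_sample(n):
    # The mic hears a delayed, attenuated copy of the loudspeaker signal
    # (the near-end talker is silent in this toy example).
    return 0.6 * far_end[n - 3] if n >= 3 else 0.0

weights = [0.0] * TAPS
errors = []
for n in range(TAPS, len(far_end)):
    x = far_end[n - TAPS + 1 : n + 1][::-1]         # recent far-end samples
    est_echo = sum(w * xi for w, xi in zip(weights, x))
    e = mic_sample(n) - est_echo                    # residual after cancellation
    norm = sum(xi * xi for xi in x) + 1e-9
    weights = [w + MU * e * xi / norm for w, xi in zip(weights, x)]
    errors.append(e * e)

early = sum(errors[:100]) / 100    # residual echo energy at the start
late = sum(errors[-100:]) / 100    # residual echo energy after convergence
print(late < early * 0.01)         # True: the echo collapses as it adapts
```

The filter converges because an exact solution exists (one tap of 0.6 at the three-sample delay); in real rooms the path is long, time-varying and nonlinear, which is what makes production-quality AEC hard.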

We don’t implement these algorithms ourselves in PulseAudio. The echo cancellation module — cunningly named module-echo-cancel — provides the infrastructure to plug in different echo canceller implementations. One of these that we support (and recommend) is based on Google’s WebRTC.org implementation, which includes an extremely capable set of voice processing algorithms.

This is a large code-base, intended to support a full real-time communication stack, and we didn’t want to pick up all that code to include in PulseAudio. So what I did was to make a copy of the AudioProcessing module, wrap it in an easy-to-package library, and then use that from PulseAudio. Quite some time passed, and I didn’t get a chance to update that code until last October.

What’s New

The update brought us a number of things since the last one (5 years ago!):

  • The AGC module has essentially been rewritten. In practice, we see that it is slower to change the volume.

  • Voice Activity Detection (VAD) has also been split off into its own module and undergone significant changes.

  • Beamforming has been added, to allow you to use a set of microphones to be able to “point” your microphone array in a specific direction (more on this in a later post).

  • There is now an intelligibility enhancer for applying processing on the stream coming in from the far end (so you can hear the other side better). This feature has not been hooked up in PulseAudio yet.

  • There is a transient suppressor for when you’re on a laptop, and your microphone is picking up keystrokes. This can be important since the sound of the keystroke introduces sharp spikes or “transients” in the audio stream, which can throw off the echo canceller that works best with the frequency range of the human voice. This one seems to be work-in-progress, and not actually used yet.

In addition to this, I’ve also extended support in module-echo-cancel for performing cancellation on multiple channels. So we are now able to deal with hardware that has any number of playback and capture channels (and they don’t even need to be equal), and we no longer have the artificial restriction of having to downmix things to mono.

These changes are in the newly released webrtc-audio-processing v0.2. Unfortunately, we do break API with regards to the previous version. I wrote about this a while back, and hopefully the impact on other users of this library will be minimal.

All this work was made possible thanks to Aldebaran Robotics. A special shout-out to Julien Massot and his excellent team!

These features are already in our master branch, and will be part of the 9.0 release. If you’re using these features, let me know how things work for you, and watch out for a follow up post about beamforming.

If you or your company are looking for help with either PulseAudio or GStreamer, do take a look at the consulting services I currently provide.

New Releases

Postby via www.my-universe.com Blog Feed »

In the last few days there has been a veritable flood of new aircraft models for X-Plane. Most prominently, IXEG's Boeing 737-300, in rather high-profile development for six years now, has finally made it onto the shelves at X-Aviation. Also long in development, EADT's Boeing 737-800 has now been released in version 5, for the first time with a 3D cockpit. Last but not least, the Let L-410 developed by x-plane.hu deserves a mention; it too has landed in a new edition.


Since IXEG was the first to get its release out last Saturday, I spent most of the past week with this model (and of course, after the long development time and Jan's many videos, I was very curious about the bird). Since the release I have logged about 30 flight hours in the B733, and I have to say the model is extremely fascinating and, for me, a worthwhile investment.

So what makes the B733 so special compared with other payware models, and what justifies its comparatively high price? In my view it is the high fidelity of the systems modeling, the excellently tuned flight model, the very good documentation and, not least, the (at least so far) outstanding support from the IXEG team. One thing should perhaps be said up front: IXEG's B733 is a model for serious simulator pilots. Yes, you can simply jump in and fly it; the developers went to great lengths to make the aircraft fit for that as well. In my view, though, that would be a pure waste: anyone unwilling to put plenty of time into learning the proper procedures will not even begin to exploit this model's possibilities.

It should also be said that, as of version 1.0.2, the B733 has the maturity of a good beta. IXEG skipped a public beta phase (there are good arguments both for and against that), and now faces a not exactly small mountain of bug reports (roughly 400 as of today, some of them certainly duplicates; overall I would still expect a three-digit number). Granted, many of these are minor, and quite a few of the bugs were only discovered because people tested things they might never have tried in another model. Still, it is precisely the simpler mistakes that reveal one of the big weaknesses in X-Plane addon development in general: a lack of solid software production practice. But that discussion belongs in a separate blog entry; the topic is too sprawling, and I want to squeeze in another flight…

Fraudsters Steal Tax, Salary Data From ADP

Postby BrianKrebs via Krebs on Security »

Identity thieves stole tax and salary data from payroll giant ADP by registering accounts in the names of employees at more than a dozen customer firms, KrebsOnSecurity has learned. ADP says the incidents occurred because the victim companies all mistakenly published sensitive ADP account information online that made those firms easy targets for tax fraudsters.

Paterson, N.J.-based ADP provides payroll, tax and benefits administration for more than 640,000 companies. Last week, U.S. Bancorp (U.S. Bank) — the nation’s fifth-largest commercial bank — warned some of its employees that their W-2 data had been stolen thanks to a weakness in ADP’s customer portal.

ID thieves are interested in W-2 data because it contains much of the information needed to fraudulently request a large tax refund from the U.S. Internal Revenue Service (IRS) in someone else’s name. A reader who works at U.S. Bank shared a letter received from Jennie Carlson, the financial institution’s executive vice president of human resources.

“Since April 19, 2016, we have been actively investigating a security incident with our W-2 provider, ADP,” Carlson wrote. “During the course of that investigation we have learned that an external W-2 portal, maintained by ADP, may have been utilized by unauthorized individuals to access your W-2, which they may have used to file a fraudulent income tax return under your name.”

The letter continued:

“The incident originated because ADP offered an external online portal that has been exploited. For individuals who had never used the external portal, a registration had never been established. Criminals were able to take advantage of that situation to use confidential personal information from other sources to establish a registration in your name at ADP. Once the fraudulent registration was established, they were able to view or download your W-2.”

U.S. Bank spokesman Dana Ripley said the letter was sent to a “small population” of the bank’s more than 64,000 employees. Asked to comment on the letter from U.S. Bank, ADP confirmed that the fraud visited upon U.S. Bank also hit “a very small subset” of ADP’s total customers this year.

ADP emphasized that the fraudsters needed to have the victim’s personal data — including name, date of birth and Social Security number — to successfully create an account in someone’s name. ADP also stressed that this personal data did not come from its systems, and that thieves appeared to already possess that data when they created the unauthorized accounts at ADP’s portal.

ADP Chief Security Officer Roland Cloutier said customers can choose to create an account at the ADP portal for each employee, or they can defer that process to a later date (but employers do have to choose one or the other, Cloutier said).

According to ADP, new users need to be in possession of two other things (in addition to the victim’s personal data) at a minimum in order to create an account: A custom, company-specific link provided by ADP, and a static code assigned to the customer by ADP.

The problem, Cloutier said, seems to stem from ADP customers that both deferred that signup process for some or all of their employees and at the same time inadvertently published online the link and the company code. As a result, for users who never registered, criminals were able to register as them with fairly basic personal info, and access W-2 data on those individuals.
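The failure mode described above can be modeled in a few lines. Everything here is hypothetical (names, codes, record layout; this is not ADP's code); the point is that once the "secret" signup link and static company code leak, the only check left is static personal data that fraudsters already hold.

```python
# Illustrative model of the registration weakness described above. All
# names, codes and records are hypothetical; this is not ADP's code.

registered = set()  # employees whose portal accounts already exist

def verify_identity(employee_id, ssn_last4, dob):
    # Stand-in lookup against employer records (hypothetical data). These
    # are exactly the static identifiers sold after third-party breaches.
    records = {"emp42": ("6789", "1975-01-31")}
    return records.get(employee_id) == (ssn_last4, dob)

def can_register(company_link, company_code, employee_id, ssn_last4, dob):
    # The company-specific link and static code act as shared secrets,
    # but in the incidents described, some customers posted them online.
    if (company_link, company_code) != ("acme-payroll", "987654"):
        return False
    if employee_id in registered:
        return False  # the fraud only worked for never-registered employees
    return verify_identity(employee_id, ssn_last4, dob)

# With the published link/code plus breached SSN and DOB, an attacker
# registers first and gains access to the victim's W-2.
print(can_register("acme-payroll", "987654", "emp42", "6789", "1975-01-31"))  # True
```

Every input to this check is static and knowable in advance, which is the structural problem the rest of the article dwells on.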

U.S. Bank’s Ripley acknowledged that the bank published the link and company code to an employee resource online, but said the institution never considered that the data itself was privileged.

“We viewed the code as an identification code, not as an authentication code, and we posted it to a Web site for the convenience of our employees so they could access their W-2 information,” Ripley said. “We have discontinued that practice.”

In the meantime, ADP says it has developed systems to monitor the Web for any other customers that may inadvertently publish their signup link and code.

“We’ve now aggressively put in some security intelligence by trying to look for that code and turn off self-service registration access if we find that code” published online, Cloutier said.

ANALYSIS

ADP’s portal, like so many other authentication systems, relies entirely on static data that is available on just about every American for less than $4 in the cybercrime underground (SSN/DOB, address, etc). It’s true that companies should know better than to publish such a crucial link online along with the company’s ADP code, but then again these are pretty weak authenticators.

Cloutier said ADP does offer an additional layer of authentication — a personal identification code (PIC) — basically another static code that can be assigned to each employee. He added that ADP is trialing a service that will ask anyone requesting a new account to successfully answer a series of questions based on information that only the real account holder is supposed to know.

Cloutier declined to say who was providing the verification service, but these so-called knowledge-based authentication (KBA) or “out-of-wallet” questions generally focus on things such as previous address, loan amounts and dates and can be successfully enumerated with random guessing. In many cases, the answers can be found by consulting free online services, such as Zillow and Facebook.

The IRS found this out the hard way, and over the past year has removed two separate authentication systems that placed too much reliance on KBA and static data to authenticate taxpayers. In May 2015, the IRS took down its “Get Transcript” service after tax refund fraudsters began using it to pull W-2 data on more than 724,000 taxpayers. In those cases, the fraudsters also already had the victim’s SSN, DoB and other personal data. In March 2016, the IRS suspended its “Get IP PIN” feature for the same reason.

But somehow, KBA questions are an innovation that’s worth looking forward to at ADP.

“The IRS didn’t have a PIC code or client code,” Cloutier said when I brought up the IRS’s experience. “They didn’t have as many levels and individual authentication components that we provide our clients.”

Cloutier’s words recalled to mind a scene from the movie Office Space, in which Jennifer Aniston’s character is upbraided by her manager for wearing too few “pieces of flair” on her ‘Chotchkie’s’ uniform. His comment also made me think about one of the best scenes from the cult hit “This is Spinal Tap,” in which the character Nigel Tufnel shows off how all the knobs on his amplifier go to “level 11,” while other amps only go to the more boring and standard level 10.

It’s truly a measure of the challenges ahead in improving online authentication that so many organizations are still looking backwards to obsolete and insecure approaches. ADP’s logo includes the clever slogan, “A more human resource.” It’s hard to think of a more apt mission statement for the company. After all, it’s high time we started moving away from asking people to robotically regurgitate the same static identifiers over and over, and shift to a more human approach that focuses on dynamic elements for authentication. But alas, that’s fodder for a future post.

Update 1:59 p.m. ET: Clarified Spinal Tap reference.

Update, 10:07 p.m. ET: It looks like ADP’s stock took a pretty big hit immediately after this story ran today.



The stock later rebounded:


Dru Lavigne Will be Speaking @ KnoxBUG

Postby Josh Smith via Official PC-BSD Blog »

If you missed the inaugural meeting of the Knoxville BSD User Group, you definitely don’t want to miss this one. Dru Lavigne, lead documentation expert and author for the PC-BSD and FreeNAS projects, will be giving a talk: “You Too Can Doc Like an Egyptian”. For more information on meeting times and venue, please visit the Knoxville Tennessee BSD User Group’s web page. We hope to see you there!

http://www.knoxbug.org/content/2016-05-26

How the Pwnedlist Got Pwned

Postby BrianKrebs via Krebs on Security »

Last week, I learned about a vulnerability that exposed all 866 million account credentials harvested by pwnedlist.com, a service designed to help companies track public password breaches that may create security problems for their users. The vulnerability has since been fixed, but this simple security flaw may have inadvertently exacerbated countless breaches by preserving the data lost in them and then providing free access to one of the Internet’s largest collections of compromised credentials.

Pwnedlist is run by Scottsdale, Ariz. based InfoArmor, and is marketed as a repository of usernames and passwords that have been publicly leaked online for any period of time at Pastebin, online chat channels and other free data dump sites.

The service until quite recently was free to all comers, but it makes money by allowing companies to get a live feed of usernames and passwords exposed in third-party breaches which might create security problems going forward for the subscriber organization and its employees.

This 2014 article from the Phoenix Business Journal describes one way InfoArmor markets the Pwnedlist to companies: “InfoArmor’s new Vendor Security Monitoring tool allows businesses to do due diligence and monitor its third-party vendors through real-time safety reports.”

The trouble is, the way Pwnedlist should work is very different from how it does. This became evident after I was contacted by Bob Hodges, a longtime reader and security researcher in Detroit who discovered something peculiar while he was using Pwnedlist: Hodges wanted to add to his watchlist the .edu and .com domains for which he is the administrator, but that feature wasn’t available.

In the first sign that something wasn’t quite right authentication-wise at Pwnedlist, the system didn’t even allow him to validate that he had control of an email address or domain by sending him a verification to said email or domain.

On the other hand, he found he could monitor any email address he wanted. Hodges said this gave him an idea about how to add his domains: Turns out that when any Pwnedlist user requests that a new Web site name be added to his “Watchlist,” the process for approving that request was fundamentally flawed.

That’s because the process of adding a new thing for Pwnedlist to look for — be it a domain, email address, or password hash — was a two-step procedure involving a submit button and confirmation page, and the confirmation page didn’t bother to check whether the thing being added in the first step was the same as the thing approved in the confirmation page. [For the Geek Factor 5 crowd here, this vulnerability type is known as “parameter tampering,” and it involves the ability to modify hidden parameters in POST requests].

“Their system is supposed to compare the data that gets submitted in the second step with what you initially submitted in the first window, but there’s nothing to prevent you from changing that,” Hodges said. “They’re not even checking normal email addresses. For example, when you add an email to your watchlist, that email [account] doesn’t get a message saying they’ve been added. After you add an email you don’t own or control, it gives you the verified check box, but in reality it does no verification. You just typed it in. It’s almost like at some point they just disabled any verification systems they may have had at Pwnedlist.”
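A minimal sketch of that two-step flaw, with entirely hypothetical handlers (this is not Pwnedlist's actual code): the confirmation step trusts a hidden form field instead of re-running the authorization check from step one.

```python
# Minimal sketch of the parameter-tampering flaw described above.
# Handlers and names are hypothetical, not Pwnedlist's actual code.

AUTHORIZED = {"example.com"}       # domains this user may legitimately watch

def step1_submit(domain):
    """Step 1: validate the request and render a confirmation form."""
    if domain not in AUTHORIZED:
        raise PermissionError("not your domain")
    # The validated value is echoed back as a hidden POST parameter...
    return {"hidden_domain": domain}

def step2_confirm_flawed(form, watchlist):
    # ...and step 2 adds whatever the hidden field now contains, without
    # re-checking it. An intercepting proxy can rewrite it to anything.
    watchlist.add(form["hidden_domain"])

watchlist = set()
form = step1_submit("example.com")       # passes the step-one check
form["hidden_domain"] = "apple.com"      # tampered in transit via a proxy
step2_confirm_flawed(form, watchlist)
print("apple.com" in watchlist)          # True: the flawed flow accepts it

def step2_confirm_fixed(form, watchlist):
    # The fix: repeat the authorization check at confirmation time,
    # never trusting client-supplied hidden parameters.
    if form["hidden_domain"] not in AUTHORIZED:
        raise PermissionError("not your domain")
    watchlist.add(form["hidden_domain"])
```

The tampering step is exactly what an intercepting proxy makes trivial, which is how the bug was demonstrated in practice.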

Hodges explained that one could easily circumvent Pwnedlist’s account controls by downloading and running a copy of Kali Linux — a free suite of tools made for finding and exploiting software and network vulnerabilities.

Always the student, I wanted to see this first-hand. I had a Pwnedlist account from way back when it first launched in 2011, so I fired up a downloadable virtual version of Kali on top of the free VirtualBox platform on my Mac. Kali comes with a pretty handy Web security testing tool called Burp Suite, which makes sniffing, snarfing and otherwise tampering with traffic to and from Web sites a fairly straightforward point-and-click exercise.

Indeed, after about a minute of instruction, I was able to replicate Hodges’ findings, successfully adding Apple.com to my watchlist. I also found I could add basically any resource I wanted. Although I verified that I could add top-level domains like “.com” and “.net,” I did not run these queries because I suspected that doing so would crash the database, and in any case might call unwanted attention to my account. (I also resisted the strong temptation to simply shut up about this bug and use it as my own private breach alerting service for the Fortune 500 firms).

Hodges told me that any newly-added domains would take about 24 hours to populate with results. But for some reason my account was taking far longer. Then I noticed that the email address I’d used to sign up for the free account back in 2011 didn’t have any hits in the Pwnedlist, and that was simply not possible if Pwnedlist was doing a halfway decent job tracking breaches. So I pinged InfoArmor and asked them to check my account. Sure enough, they said, it had never been used and was long ago deactivated.

Less than 12 hours after InfoArmor revived my dormant account, I received an automated email alert from the Pwnedlist telling me I had new results for Apple.com. In fact, the report I was then able to download included more than 100,000 usernames and passwords for accounts ending in apple.com. The data was available in plain text, and downloadable as a spreadsheet.

Some of the more than 100,000 credentials that Pwnedlist returned for me in a report on all passwords tied to email addresses that include “apple.com”.
It took a while for the magnitude of what had just happened to sink in. I could now effectively request a report including all 866 million account credentials recorded by the Pwnedlist. In short, the Pwnedlist had been pwned.

At this point, I got back in touch with InfoArmor and told them what Hodges had found and shown me. Their first response was that I’d somehow been given a privileged account on Pwnedlist, and that this is what allowed me to add any domain I chose. After all, I’d added the top 20 companies in the Fortune 500. How had I been able to do that?

“The account type you had had more privileges than an ordinary user would,” insisted Pwnedlist founder Alen Puzic.

After validating the bug, I added some other domains just for giggles. I deleted them all (except the Apple one) before they could generate reports.
I doubted that was true, and I suspected the vulnerability was present across their system regardless of which account type was used. Puzic said the company stopped allowing free account signups about six months ago, but since I had him on the phone I suggested he create a new, free account just for our testing purposes.

He rather gamely agreed. Within 30 seconds after the account was activated, I was able to add “gmail.com” to my Pwnedlist watchlist. Had we given it enough time, that query almost certainly would have caused Pwnedlist to produce a report with tens of millions of compromised credentials involving Gmail accounts.

“Wow, so you really can add whatever domain you want,” Puzic said in amazement as he loaded and viewed my account on his end.

Pwnedlist.com went offline shortly after my phone call with InfoArmor.
It’s a shame that InfoArmor couldn’t design better authorization and authentication systems for Pwnedlist, given that the service itself is a monument to abject failures in that regard. I’m a big believer in companies getting better intelligence about how large-scale everyday password breaches may impact their security, but it helps no one when a service that catalogs breaches has a lame security weakness that potentially prolongs and exacerbates them.

Update, 12:30 p.m. ET: InfoArmor downplayed the problem on Twitter, noting that “The data that was ‘exposed’ has already been ‘compromised’ - there was no loss of PII or subscriber data.” Also, a new notice is up on Pwnedlist.com, stating that the site is being shut down in a few weeks. The pop-up message reads:

“Thank you for being a subscriber and letting us help alert you of any risks related to your personal credentials. PwnedList launched in 2012 and quickly become the leader in open-source compromised data aggregation. In 2013 PwnedList was acquired by InfoArmor, Inc. a provider of enterprise based services. As part of the transition, the PwnedList Website has been scheduled for decommission on May 16, 2016. If you are interested in obtaining our commercial identity protection, please go to infoarmor.com for more information. It has been our pleasure to help you reduce your risk from compromised credentials.”


January-March 2016 Status Report

Postby Webmaster Team via FreeBSD News Flash »

The January to March 2016 Status Report is now available.

bsdtalk264 - Down the Gopher Hole

Postby Mr via bsdtalk »

Playing around with the gopher protocol. Description of gopher from the 1995 book "Student's Guide to the Internet" by David Clark. Also, at the end of the episode is audio from an interview with Mark McCahill and Farhad Anklesaria that can be found at https://www.youtube.com/watch?v=oR76UI7aTvs

Check out http://gopher.floodgap.com/gopher/

File Info: 27 Min, 13 MB.

Ogg Link: https://archive.org/download/bsdtalk264/bsdtalk264.ogg

A Dramatic Rise in ATM Skimming Attacks

Postby BrianKrebs via Krebs on Security »

Skimming attacks on ATMs increased at an alarming rate last year for both American and European banks and their customers, according to recent stats collected by fraud trackers. The trend appears to be continuing into 2016, with outbreaks of skimming activity visiting a much broader swath of the United States than in years past.

Two network cable card skimming devices, as found attached to this ATM.
In a series of recent alerts, the FICO Card Alert Service warned of large and sudden spikes in ATM skimming attacks. On April 8, FICO noted that its fraud-tracking service recorded a 546 percent increase in ATM skimming attacks from 2014 to 2015.

“The number of ATM compromises in 2015 was the highest ever recorded by the FICO Card Alert Service, which monitors hundreds of thousands of ATMs in the US,” the company said. “Criminal activity was highest at non-bank ATMs, such as those in convenience stores, where 10 times as many machines were compromised as in 2014.”

While 2014 saw skimming attacks targeting mainly banks in big cities on the east and west coasts of the United States, last year’s skimming attacks were far more spread out across the country, the FICO report noted.

Earlier this year, I published a post about skimming attacks targeting non-bank ATMs using hidden cameras and skimming devices plugged into the ATM network cables to intercept customer card data. The skimmer pictured in that story was at a 7-Eleven convenience store.

Since that story ran I’ve heard from multiple banking industry sources who said they have seen a spike in ATM fraud targeting cash machines in 7-Elevens and other convenience stores, and that the commonality among the machines is that they are all operated by ATM giant Cardtronics (machines in 7-Eleven locations accounted for 17.5 percent of Cardtronics’ revenue last year, according to this report at ATM Marketplace).

Some financial institutions are taking dramatic steps to head off skimming activity. Trailhead Credit Union in Portland, Ore., for example, has posted a notice to customers atop its Web site, stating:

“ALERT: Until further notice, we have turned off ATM capabilities at all 7-11 ATMs due to recent fraudulent activity. Please use our ATM locator for other locations. We are sorry for the inconvenience.”

Trailhead Credit Union has stopped allowing members to withdraw cash from 7-11 ATMs.
7-Eleven did not respond to requests for comment. Cardtronics said it wasn’t aware of any banks blocking withdrawals across the board at 7-11 stores or at Cardtronics machines.

“While Cardtronics is aware that a single financial institution [Xceed Financial Credit Union] temporarily restricted ATM access late in 2015, it soon thereafter restored full ATM access to its account holders,” the company said in a statement. “As the largest ATM services provider, Cardtronics has a long history of executing a layered security strategy and implementing innovative security enhancements at our ATMs. As criminals modify their attack, Cardtronics always has and always will aggressively respond, reactively and proactively, with innovation to address these instances.”

DRAMA IN DC

A bit closer to home for this author, on April 22 FICO pushed an alert to its customers and partners warning about “a recent and dramatic increase in skimming fraud perpetrated at a chain of discount supercenters point-of-sale (POS) terminals,” in and around the Washington, D.C. area, including Frederick, Ellicott City and Mt. Airy in Maryland, and in Fredericksburg, Va.



“As this fraud activity has appeared and progressed suddenly, it is likely that sites in other cities and other geographic areas will be targeted by organized criminal groups,” the organization cautioned.

EUROPE

Banks in Europe also saw an increase in skimming attacks of all kinds last year. According to statistics shared by the European ATM Security Team (EAST), during 2015 there were 18,738 skimming attacks reported against European ATMs. That’s a 19% increase from the previous year and equates to 51 attacks per 1000 ATMs over the period.

“During 2015 total losses of 327.48 million euros were reported,” EAST wrote. “This is a 17% increase when compared to the total losses of 279.86 million euros reported for 2014 and equates to losses of 884,069 euros per 1000 ATMs over the period.”
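EAST’s quoted figures hang together, as a quick bit of arithmetic shows. This sketch simply re-derives the growth rate and the fleet size implied by the per-1000-ATM loss rate taken from the quote above:

```python
# Quick consistency check of EAST's 2015 loss figures quoted above.
losses_2015 = 327.48e6   # euros
losses_2014 = 279.86e6

growth = (losses_2015 - losses_2014) / losses_2014
print(round(growth * 100))      # 17 -- matches the reported 17% increase

# The 884,069-euros-per-1000-ATMs rate implies the monitored fleet size:
atm_count = losses_2015 / 884_069 * 1000
print(round(atm_count, -3))     # 370000.0 -- roughly 370,000 European ATMs
```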

EAST’s report further breaks down ATM attacks by type. For example, there were at least 2,657 cases in which a thief tried to blow up or otherwise physically force his way into the cash machine. “This total also includes data from solid explosive and explosive gas attacks. This is a 34% increase from 2014 and equates to 7.2 attacks per 1000 ATMs over the period.”

EAST also tracked 15 malware incidents reported against European ATMs in 2015. All of them were ‘cash out’ or ‘jackpotting’ attacks. According to EAST, this is a 71% decrease from 2014.

Source: EAST

PROTECT YOURSELF

As I’ve noted in countless skimmer stories here, the simplest way to protect yourself from ATM skimming is to cover your hand when entering your PIN. That’s because most skimmers rely on hidden cameras to steal the victim’s PIN.

Interestingly, a stat in Verizon‘s new Data Breach Investigations Report released this week bears this out: according to Verizon, more than 90 percent of the skimming-related breaches in last year’s report involved a tiny hidden camera used to steal the PIN.

The Verizon report also offers this advice about ATM safety: Trust your gut. “If you think that something looks odd or out of place, don’t use it. While it is increasingly difficult to find signs of tampering, it is not impossible. If you think a device may have been tampered with, move on to another location, after reporting to the merchant or bank staff.”

For more on ATM skimmers and other skimming devices, check out my series All About Skimmers.

Dental Assn Mails Malware to Members

Postby BrianKrebs via Krebs on Security »

The American Dental Association (ADA) says it may have inadvertently mailed malware-laced USB thumb drives to thousands of dental offices nationwide.

The problem first came to light in a post on the DSL Reports Security Forum. DSLR member “Mike” from Pittsburgh got curious about the integrity of a USB drive that the ADA mailed to members to share updated “dental procedure codes” — codes that dental offices use to track procedures for billing and insurance purposes.

“Oh wow the usually inept ADA just sent me new codes,” Mike wrote. “I bet some marketing genius had this wonderful idea instead of making it downloadable. I can’t wait to plug an unknown USB into my computer that has PHI/HIPAA on it…” [link added].

The ADA says some flash drives mailed to members contained malware. Image: Mike
Sure enough, Mike looked at the code inside one of the files on the flash drive and found it tries to open a Web page that has long been tied to malware distribution. The domain is used by crooks to infect visitors with malware that lets the attackers gain full control of the infected Windows computer.

Reached by KrebsOnSecurity, the ADA said it sent the following email to members who have shared their email address with the organization:

“We have received a handful of reports that malware has been detected on some flash drives included with the 2016 CDT manual,” the ADA said. “The ‘flash drive’ is the credit card sized USB storage device that contains an electronic copy of the CDT 2016 manual. It is located in a pocket on the inside back cover of the manual. Your anti-virus software should detect the malware if it is present. However, if you haven’t used your CDT 2016 flash drive, please throw it away.

To give you access to an electronic version of the 2016 CDT manual, we are offering you the ability to download the PDF version of the 2016 CDT manual that was included on the flash drive.

To download the PDF version of the CDT manual:

1. Click on the link »ebusiness.ada.org/login/ ··· ion.aspx
2. Log in with your ADA.org user ID and password
3. After you log in you will automatically be directed to a page showing CDT 2016 Digital Edition.
4. Click on the “Download” button to save the file to your computer for use.

If you have difficulty accessing or downloading the file, please call 1.800.947.4746 and a Member Service Advisor will be happy to assist you.

Many of the flash drives do not contain the Malware. If you have already used your flash drive and it worked as expected (it displayed a menu linking to chapters of the 2016 CDT manual), you may continue using it.

We apologize if this issue has caused you any inconvenience and thank you for being a valued ADA customer.”

This incident could give new meaning to the term “root canal.” It’s not clear how the ADA could make a statement that anti-virus should detect the malware, since presently only some of the many antivirus tools out there will flag the malware link as malicious.

In response to questions from this author, the ADA said the USB media was manufactured in China by a subcontractor of an ADA vendor, and that some 37,000 of the devices have been distributed. The not-for-profit ADA is the nation’s largest dental association, with more than 159,000 members.

“Upon investigation, the ADA concluded that only a small percentage of the manufactured USB devices were infected,” the organization wrote in an emailed statement. “Of note it is speculated that one of several duplicating machines in use at the manufacturer had become infected during a production run for another customer. That infected machine infected our clean image during one of our three production runs. Our random quality assurance testing did not catch any infected devices. Since this incident, the ADA has begun to review whether to continue to use physical media to distribute products.”

All About Fraud: How Crooks Get the CVV

Postby BrianKrebs via Krebs on Security »

A longtime reader recently asked: “How do online fraudsters get the 3-digit card verification value (CVV or CVV2) code printed on the back of customer cards if merchants are forbidden from storing this information?” The answer: If not via phishing, probably by installing a Web-based keylogger at an online merchant so that all data that customers submit to the site is copied and sent to the attacker’s server.

Kenneth Labelle, a regional director at insurer Burns-Wilcox.com, wrote:

“So, I am trying to figure out how card not present transactions are possible after a breach due to the CVV. If the card information was stolen via the point-of-sale system then the hacker should not have access to the CVV because its not on the magnetic strip. So how in the world are they committing card not present fraud when they don’t have the CVV number? I don’t understand how that is possible with the CVV code being used in online transactions.”

First off, “dumps” — or credit and debit card accounts that are stolen from hacked point of sale systems via skimmers or malware on cash register systems — retail for about $20 apiece on average in the cybercrime underground. Each dump can be used to fabricate a new physical clone of the original card, and thieves typically use these counterfeits to buy goods from big box retailers that they can easily resell, or to extract cash at ATMs.

However, when cyber crooks wish to defraud online stores, they don’t use dumps. That’s mainly because online merchants typically require the CVV, and criminal dump sellers don’t bundle CVVs with their dumps.

Instead, online fraudsters turn to “CVV shops,” shadowy cybercrime stores that sell packages of cardholder data, including customer name, full card number, expiration, CVV2 and ZIP code. These CVV bundles are far cheaper than dumps — typically between $2 and $5 apiece — in part because they are useful mainly for online transactions, but probably also because overall they are more complicated to “cash out,” or make money from.

The vast majority of the time, this CVV data has been stolen by Web-based keyloggers. These are relatively uncomplicated programs that behave much like a banking Trojan does on an infected PC, except they are designed to steal data from Web server applications.

PC Trojans like ZeuS, for example, siphon information using two major techniques: snarfing passwords stored in the browser, and conducting “form grabbing” — capturing any data entered into a form field in the browser before it can be encrypted in the Web session and sent to whatever site the victim is visiting.

Web-based keyloggers also can do form grabbing, ripping out form data submitted by visitors — including names, addresses, phone numbers, credit card numbers and card verification code — as customers are submitting the data during the online checkout process.
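To illustrate how little code such an implant needs, here is a hypothetical sketch (the middleware, the toy checkout handler and all names are invented for illustration; this is not any real malware’s code) of a form grabber wrapped around a WSGI application. On a compromised server, the POST body arrives already decrypted:

```python
# Hypothetical sketch of a server-side "form grabber": a WSGI wrapper
# that copies every POSTed form field before the real app sees it.
import io
from urllib.parse import parse_qs

captured = []  # where the implant would stash the stolen fields

def form_grabber_middleware(app):
    """Wrap a WSGI app and copy every POSTed form field in transit."""
    def wrapper(environ, start_response):
        if environ.get("REQUEST_METHOD") == "POST":
            size = int(environ.get("CONTENT_LENGTH") or 0)
            body = environ["wsgi.input"].read(size)
            captured.append(parse_qs(body.decode()))   # plaintext card data
            environ["wsgi.input"] = io.BytesIO(body)   # hand an intact copy onward
        return app(environ, start_response)
    return wrapper

def checkout_app(environ, start_response):
    """Stand-in for the merchant's real checkout handler."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"order placed"]

# Simulate one checkout POST passing through the wrapped app
app = form_grabber_middleware(checkout_app)
body = b"name=Jane+Doe&card=4111111111111111&cvv=123"
environ = {
    "REQUEST_METHOD": "POST",
    "CONTENT_LENGTH": str(len(body)),
    "wsgi.input": io.BytesIO(body),
}
app(environ, lambda status, headers: None)
print(captured[0]["cvv"])   # ['123'] -- the merchant never had to store it
```

The point of the sketch is the one the paragraphs above make: TLS protects data in transit, but a compromised endpoint sees the form fields, CVV included, in plain text before the application ever processes them.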

These attacks drive home one immutable point about malware’s role in subverting secure connections: Whether resident on a Web server or on an end-user computer, if either endpoint is compromised, it’s ‘game over’ for the security of that Web session. With PC banking trojans, it’s all about surveillance on the client side pre-encryption, whereas what the bad guys are doing with these Web site attacks involves sucking down customer data post- or pre-encryption (depending on whether the data was incoming or outgoing).

If you’re responsible for maintaining or securing Web sites, it might be a good idea to get involved in one or more local groups that seek to help administrators. Professionals and semi-professionals are welcome at local chapter meetings of OWASP, CitySec, ISSA or Security Bsides meetups.

SkyMaxx Pro and Real Weather Connector

Postby via www.my-universe.com Blog Feed »

No computer program currently renders clouds as impressively as nature itself does. With SkyMaxx Pro and the Real Weather Connector, however, there are at least addons that aim to make X-Plane's rather weak weather rendering somewhat more realistic, albeit at the cost of extra computing power and a not exactly small budget: together, the two addons come to around US-$60. So what do you get in exchange for money and FPS?


First, there is SkyMaxx Pro 3 (which I tested in versions 3.0, 3.1, 3.1.1 and 3.1.2), SMP for short. It handles the rendering of clouds but otherwise does not interfere with the weather simulation itself; it simply draws on X-Plane's own weather data. SMP greatly improves the visual appearance of clouds; cumulus clouds in particular look much more realistic, especially when the most performance-intensive setting (“sparse particles”) is chosen to render them. SMP itself is therefore primarily eye candy and does not yet make the weather behavior any more realistic.

Things get really interesting in combination with the Real Weather Connector (RWC). To see why, you first have to understand X-Plane's underlying limitation: out of the box, X-Plane can only display a single weather state at a time, and that state changes abruptly whenever new weather data becomes available (for example, when a fresh METAR file is downloaded). RWC tries to work around this limitation by also rendering the correct weather in neighboring cells. Transitions become less abrupt, and you can see the weather you are about to fly into ahead of time.

So much for the theory; in practice there are a few limitations you should know about before being disappointed by this rather pricey investment. As mentioned, the SMP and RWC duo demands serious graphics power, and the larger the area to be covered, the higher the cost. So what is a sensible setting, and where are the limits? SMP can render an area of at most 22,500 km². That sounds like a lot at first, but let's do a little arithmetic. This area corresponds to a square with an edge length of just 150 km. Sitting at the center of that square, you see realistic weather out to a distance of 75 km, or roughly 40 NM. Flying a jet at higher altitudes, you can usually see weather systems from considerably farther away.
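The viewing-area arithmetic above is easy to check; the 150 NM jet sightline used in the last step of this sketch is my own illustrative assumption, chosen only to show why the renderable area would need to exceed 300,000 km²:

```python
import math

# Back-of-the-envelope check of the SMP viewing-area arithmetic
# (1 NM = 1.852 km).
area_km2 = 22_500                # SMP's maximum renderable area
edge_km = math.sqrt(area_km2)    # edge of the covered square
radius_km = edge_km / 2          # center-to-edge distance
radius_nm = radius_km / 1.852

print(edge_km, radius_km)        # 150.0 75.0
print(round(radius_nm, 1))       # 40.5 NM of realistic weather

# Area needed for an assumed 150 NM sightline in every direction:
needed_km2 = (2 * 150 * 1.852) ** 2
print(round(needed_km2))         # 308691 -- over 300,000 km2, as stated
```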

Things look different, of course, for small, slow aircraft or helicopters operating at much lower altitudes. Viewed from such a cockpit, weather systems rendered with SMP and RWC look quite realistic and visually impressive, unless one of the rendering bugs still present in the latest SMP version (3.1.2) strikes. Now and then, “ghost clouds” appear that cannot possibly belong to the current weather system. These bugs often manifest as an extremely dense, solid cloud layer at low altitude (usually dirty white now, though in older versions it could also be black). For me, these odd clouds mostly appeared when SMP was not allowed to control the full viewing area.

The developers are very active and gladly accept constructive suggestions for improvement. The bugs will surely be ironed out in coming versions, but the restriction on the maximum renderable viewing area will not be so easy to work around. For a realistic viewing distance, the renderable area would have to grow to more than 300,000 km². At the current state of the art, even the most powerful machine fails at that.

My verdict: together, SMP and RWC clearly improve the cloud rendering in X-Plane. It is GA pilots who really benefit, though, while airline jockeys still run up against the limits of what is currently technically feasible.


2016 Superheroes Race for the National Children’s Advocacy Center (NCAC) in Huntsville, AL

Postby Zach via The Z-Issue »

Well, it was that time of year again… time to dress up as one’s favourite Superhero and run for a great cause! The first weekend of April, I made the drive down to the lovely city of Huntsville, Alabama in order to support the National Children’s Advocacy Center by running in the 2016 NCAC Superheros 5K (here’s my post about last year’s race).

This year’s course was the same as last year, so it was a really nice run through parts of the city centre and through more residential areas of Huntsville. Unlike last year, the race started much later in the afternoon, so the temperatures were a lot higher (last year, truthfully, a bit chilly). It was beautifully sunny, and actually bordered on a bit warm, but I would gladly take those conditions over the cold!


Right before the start of the race
Click for full photo
I wasn’t quite sure how this race was going to turn out, seeing as it was my first since the knee injury late last year. I was hopeful that my rehabilitation and training since the injury would help me at least come close to my time last year, but I also doubted that possibility. I came in first place overall with a time of 20:13, which was a little over 30 seconds slower than last year. All things considered, I was pleased with my time. A few other fantastic runners to mention this year were Elliott Kliesner (age 14) who came in about 37 seconds after me, Christian Grant (age 12) with a time of 21:42, and Bud Bettler (age 72) who finished with an outstanding time for his age bracket at 28:16.


5K Results 1st through 5th place
Click for top 45 results
Years ago, I decided that I wouldn’t run in any races unless they benefited a children’s charity, and I can’t think of an organisation whose mission aligns more closely with my goals than the National Children’s Advocacy Center. According to WAFF News in Huntsville, the race raised over $24,000 for the NCAC! That will make a huge difference in the lives of the children that the NCAC serves! Here’s to hoping that next year’s race (the 7th annual) will raise even more. Hope to see you there!


Nathan Zachary’s award (and Superhero cape) acceptance
Click to enlarge
Cheers,
Zach

SpyEye Makers Get 24 Years in Prison

Postby BrianKrebs via Krebs on Security »

Two hackers convicted of making and selling the infamous SpyEye botnet creation kit were sentenced in Georgia today to a combined 24 years in prison for helping to infect hundreds of thousands of computers with malware and stealing millions from unsuspecting victims.

Aleksander Panin developed and sold SpyEye. Image courtesy: RT.
Atlanta Judge Amy Totenberg handed down a sentence of nine years, six months for Aleksandr Andreevich Panin, a 27-year-old Russian national also known by the hacker aliases “Gribodemon” and “Harderman.”

Convicted of conspiracy to commit wire and bank fraud, Panin was the core developer and distributor of SpyEye, a botnet toolkit that made it easy for relatively unsophisticated cyber thieves to steal millions of dollars from victims.

Sentenced to 15 years in jail was Panin’s business partner — 27-year-old Hamza “Bx1” Bendelladj, an Algerian national who pleaded guilty in June 2015 to helping Panin develop and market the SpyEye kit. Bendelladj also admitted to running his own SpyEye botnet of hacked Windows computers, a crime machine that he used to harvest and steal 200,000 credit card numbers. By the government’s math (an assumed $500 loss per card), Bx1 was potentially responsible for $100 million in losses.

“It is difficult to overstate the significance of this case, not only in terms of bringing two prolific computer hackers to justice, but also in disrupting and preventing immeasurable financial losses to individuals and the financial industry around the world,” said John Horn, U.S. Attorney for the Northern District of Georgia.

THE HAPPY HACKER

Bendelladj was arrested in Bangkok in January 2013 while in transit from Malaysia to Egypt. He quickly became known as the “happy hacker” after his arrest, in which he could be seen smiling broadly while in handcuffs and being paraded before the local news media.

Photo: Hamza “Bx1” Bendelladj, Bangkok Post
In its case against the pair of hackers, the government presented chat logs between Bendelladj and Panin and other hackers. The government says the chat logs reveal that although Bendelladj worked with Panin to fuel the rise of SpyEye by vouching for him on cybercrime forums such as “Darkode,” the two had an antagonistic relationship.

Their business partnership imploded after Bx1 announced that he was publicly releasing the source code for SpyEye.

“Indeed, after Bendelladj ‘cracked’ SpyEye and made it available to others without having to purchase it from Panin, the two had a falling out,” reads the government’s sentencing memo (PDF) to the judge in the case.

The government says that while Bendelladj maintained he was little more than a malware analyzer working for a security company, his own chat logs put the lie to that claim, noting in November 2012 Bx1 bluntly said: “if they pay me the whole money of the world . . . I wont work for security.”

Bx1 had a penchant for marketing to other thieves. He shrewdly cast SpyEye as a lower-cost, more powerful alternative to the Zeus botnet creation kit, plastering cybercrime forums with animated ads pimping SpyEye as the “Zeuskiller” (in part because SpyEye was designed to remove Zeus from host computers before infecting them).

Part of a video ad for SpyEye.
In Oct. 2010, KrebsOnSecurity was the first to report on rumors in the underground that the authors of Zeus and SpyEye were ending their rivalry and merging the two crimeware products into one software stack and support structure for existing clients.

“Panin developed SpyEye as a successor to the notorious Zeus malware that had, since 2009, wreaked havoc on financial institutions around the world,” the Justice Department said in its statement today. “In November 2010, Panin allegedly received the source code and rights to sell Zeus from Evginy Bogachev, a/k/a Slavik, and incorporated many components of Zeus into SpyEye.  Bogachev remains at large and is currently the FBI’s most wanted cybercriminal.”

Bogachev, the alleged Zeus Trojan author, in undated photos.
It’s not clear whether Bendelladj had any intention of honoring the sanctity of the merger agreement with the author of the Zeus Trojan. Not long after the supposed merger, copies of the Zeus source code were available for sale online, and the code went fully public and free not long after that. My money is on Bendelladj for that leak as well.

Apparently Bx1 was not a big fan of KrebsOnSecurity, either. According to the government’s sentencing memo:

“At various points, [Bendelladj] has expressed contempt for Brian Krebs, the author of the “Krebs on Security,” and claims that he has credit cards (‘ccs’) of Mr. Krebs’s family and that Bendelladj will be ‘after him until he die.’ He even suggests inflicting a Distributed Denial of Service attack against Mr. Krebs.”

Maybe that antagonism had something to do with this story, in which I repost chat logs from a conversation I had with Bx1 back in January 2012. In it, Bx1 brags about hacking one of his competitors and getting the guy arrested.

Cleared to Land

Postby via www.my-universe.com Blog Feed »

After “only” six years of development, the time has finally come: this coming Saturday, IXEG's Boeing 737-300 will be released. The first order of business will probably be studying the manual; from what has trickled out over the past few years, the model is said to be very close to the original, and in real life nothing happens without a type rating either… But I won't be denied a maiden flight, even if it's just a pattern circuit…


Giant Food Sees Giant Card Fraud Spike

Postby BrianKrebs via Krebs on Security »

Citing a recent and large increase in credit card fraud, Washington, DC-area grocer Giant Food says it will no longer allow customers to use credit cards when purchasing gift cards and reloadable or prepaid debit cards.

A new warning sign at Giant Food checkout counters. Giant says the warning was prompted by a spike in credit card fraud.
I had no idea this was a new thing at Landover, Md.-based Giant, which operates 169 supermarkets in the Washington, D.C. metro area.  That is, until I encountered a couple of large new “attention” stickers in the checkout line at a local Giant in Virginia recently. Next to the credit card terminal were big decals with the warning:

“Attention Gift Card Customers: Effective immediately, all purchases of Visa, MasterCard, American Express Gift Cards and all General Purpose Reloadable or Prepaid Cards may only be made with Cash or Bank Pin-based Debit.”

Asked for comment about the change, Giant Food released a brief statement about the policy change that went into effect in March 2016, but otherwise didn’t respond to requests for more details.

“Giant has recently made a change in procedures for purchasing gift cards because of a large increase of fraudulent gift card purchasing,” the company said. “Giant will now accept only a Bank PIN-based debit card or cash for all VISA, MasterCard, and American Express gift cards, as well as re-loadable and prepaid gift cards. This change has been made in order to mitigate potential fraud risk.”

It’s not clear why Giant is only just now taking this basic anti-fraud step. Card thieves love to pick on grocery and convenience stores. Street gangs involved in card fraud (and they’re all involved in card fraud now) often extract money from grocery, dollar and convenience stores using “runners” — low-level members who are assigned the occasionally risky business of physically “cashing out” counterfeit credit and debit cards.

One of the easiest ways thieves can cash out? Walk into a grocery or retail store and buy prepaid gift cards using stolen credit cards. Such transactions — if successful — effectively launder money by converting the stolen item (counterfeit/stolen card) into a good that is equivalent to cash or can be easily resold for cash (gift cards).

I witnessed this exact crime firsthand at a Giant in Maryland last year. As I noted in a Dec. 2015 post about gift card fraud, the crooks caught in the process of these cashout schemes usually are found with dozens of counterfeit credit cards on their person or in their vehicle. From that post:

“The man in front of me in line looked and smelled homeless. The only items he was trying to buy were several $200 gift cards that Giant had on sale for various retailers. When the first card he swiped was declined, the man fished two more cards out of his wallet. Each was similarly declined, but the man just shrugged and walked out of the store. I asked the cashier if this sort of thing happened often, and he just shook his head and said, ‘Man, you have no idea.'”

Meanwhile, every Giant I visit still asks me to swipe my chip-based card, effectively negating any added security the chip provides. Chip-based cards are far more expensive and difficult for thieves to counterfeit, and they can help mitigate the threat from most modern card-skimming methods that read the cardholder data in plain text from the card’s magnetic stripe. Those include malicious software at the point-of-sale terminal, as well as physical skimmers placed over card readers at self-checkout lanes — like this one found at a Maryland Safeway earlier this year.

In a recent column – The Great EMV Fake-Out: No Chip for You! – I explored why so few retailers currently allow or require chip transactions, even though many of them already have all the hardware in place to accept chip transactions. I suspect also that grocers are reluctant to introduce chip readers at self-checkout lanes, as more supermarket chains seem to be pushing customers in the self-checkout direction.

How to solve every problem in the world

Postby Dag-Erling Smørgrav via May Contain Traces of Bolts »

  1. Identify a complex problem in country A which is deeply rooted in that country’s demography / economy / culture / political system.
  2. Point out that country B, which has a completely different demography / economy / culture / political system, does not have that problem or has found a simple solution to it.
  3. Declare that the problem is trivial and that the people of country A are idiots for having it in the first place.
  4. Job done, have a beer.

Tuxedo XC1506 Battery

Postby bed via Zockertown: Nerten News »

In my first article about the new laptop, I complained about the missing settings for the battery charging strategy. I have now found an FAQ entry that addresses exactly this problem.

On the Tuxedo XC1506, the BIOS function "Flexicharge" takes care of this: FAQ article in Tuxedo's support forum

With that, I am rid of the risky micro charge cycles and the constant full-throttle charging to 100%.

US-CERT to Windows Users: Dump Apple Quicktime

Postby BrianKrebs via Krebs on Security »

Microsoft Windows users who still have Apple Quicktime installed should ditch the program now that Apple has stopped shipping security updates for it, warns the Department of Homeland Security‘s U.S. Computer Emergency Readiness Team (US-CERT). The advice came just as researchers are reporting two new critical security holes in Quicktime that likely won’t be patched.

US-CERT cited an April 14 blog post by Christopher Budd at Trend Micro, which runs a program called Zero Day Initiative (ZDI) that buys security vulnerabilities and helps researchers coordinate fixing the bugs with software vendors. Budd urged Windows users to junk Quicktime, citing two new, unpatched vulnerabilities that ZDI detailed which could be used to remotely compromise Windows computers.

“According to Trend Micro, Apple will no longer be providing security updates for QuickTime for Windows, leaving this software vulnerable to exploitation,” US-CERT wrote. The advisory continued:

“Computers running QuickTime for Windows will continue to work after support ends. However, using unsupported software may increase the risks from viruses and other security threats. Potential negative consequences include loss of confidentiality, integrity, or availability of data, as well as damage to system resources or business assets. The only mitigation available is to uninstall QuickTime for Windows. Users can find instructions for uninstalling QuickTime for Windows on the Apple Uninstall QuickTime page.”

While the recommendations from US-CERT and others apparently came as a surprise to many, Apple has been distancing itself from QuickTime on Windows for some time now. In 2013, the Cupertino, Calif. tech giant deprecated all developer APIs for Quicktime on Windows.

Apple shipped an update to Quicktime in January 2016 that removed the Quicktime browser plugin on Windows systems, meaning the threat from browser-based attacks on Quicktime flaws was largely mitigated over the past few months for Windows users who have been keeping up to date with the latest version. Nevertheless, if you have Quicktime on a Windows box — do yourself a favor and get rid of it.

Update, Apr. 21, 10:00 a.m. ET: Apple has finally posted a support document online that explains QuickTime 7 for Windows is no longer supported by Apple. See the full advisory here.

iwlwifi sometimes acts up after resume (workaround)

Postby bed via Zockertown: Nerten News »

By now my Tuxedo XC1506 is essentially fully set up, and everything is in good order.

One thing remains: Wi-Fi is always in airplane mode after resume, i.e. when the laptop lid is opened again.

That is not a big deal in itself; a press of Fn+F11 turns WLAN back on, it happens quickly, and you are reconnected to your home WLAN.

Recently, however, it has already happened three times that the airplane-mode icon went away but no Wi-Fi icon appeared, not even the three dots that are shown during initialization.

I have not evaluated how this mechanism is currently implemented in the system; surely a single entry would suffice to unload the iwlwifi and/or iwlmvm module and reload it on resume.

As a workaround, until I find the proper way, I do it with the following mini-script:

I have named it /usr/local/bin/wifi-repair.sh



CODE:
sudo rmmod iwlmvm
sudo rmmod iwlwifi
sleep 1
sudo modprobe iwlwifi
sudo modprobe iwlmvm
This way, if need be, I can quickly reactivate Wi-Fi.
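The "proper way" hinted at above could be a systemd sleep hook that reloads the driver automatically around suspend and resume. A minimal sketch, assuming the file is installed as /usr/lib/systemd/system-sleep/iwlwifi (path and behavior are assumptions, not something I have in place yet):

```shell
#!/bin/sh
# Hypothetical systemd sleep hook: systemd calls executable scripts in
# /usr/lib/systemd/system-sleep/ with "pre" before suspend and "post"
# after resume.
case "$1" in
    pre)
        # unload the Intel Wi-Fi modules before suspending
        modprobe -r iwlmvm iwlwifi
        ;;
    post)
        # reload them on resume
        modprobe iwlwifi
        modprobe iwlmvm
        ;;
esac
```

Scripts in that directory must be executable (chmod +x); this does the same thing as the manual repair script above, just triggered automatically.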

Test

Postby blueness via Anthony G. Basile »

test test test … just testing to see if venus is working on planet.gentoo.org

Why automated gentoo-mirror commits are not signed and how to verify them

Postby Michał Górny via Michał Górny »

Those of you who use my Gentoo repository mirrors may have noticed that the repositories are constructed of original repository commits automatically merged with cache updates. While the original commits are signed (at least in the official Gentoo repository), the automated cache updates and merge commits are not. Why?

Actually, I have wondered about signing them more than once, and even discussed it a bit with Kristian. Each time, however, I decided against it. I was seriously concerned that those automatic signatures would not be able to provide a sufficient level of security, and could lead users to believe the commits were authentic even when they were not. I think it would be useful to explain why.



Verifying the original commits

While this may not be entirely clear, by signing the merge commits I would implicitly approve the original commits as well. This might be worked around via some kind of policy requiring the developer to perform additional verification, but such a policy would be impractical and confusing. Therefore, it only seems reasonable to verify the original commits before signing merges.

The problem with that is that we still do not have an official verification tool for repository commits. There’s the whole Gentoo-keys project that aims to eventually solve the problem but it’s not there yet. Maybe this year’s Summer of Code will change that…

Lacking official verification routines, I would have to implement my own. I’m not saying it would be that hard, but it would always be semi-official at best. Of course, I could spend a day or two contributing the needed code to Gentoo-keys and preventing some student from getting the $5500 of Google money… but that would be the non-enterprise way of solving the urgent problem.

Protecting the signing key

The other important point is the security of the key used to sign commits. For the whole effort to make any sense, the key needs to be strongly protected against compromise. Keeping the key (or even a subkey) unencrypted on the server really diminishes the whole effort (I’m not pointing fingers here!).

Basic rules first: the primary key is kept offline and used only to generate a signing subkey. The signing subkey is stored encrypted on the server and used via gpg-agent, so that it is never kept unencrypted outside of memory. All nice and shiny.

The problem is that this means someone needs to type the password in. Which means there needs to be an interactive bootstrap process. Which means that every time the server reboots for some reason, or gpg-agent dies, or whatever, the mirrors stop and wait for me to come and type the password in, hopefully while I’m around some semi-secure device.

Protecting the software

Even with all those points considered and solved satisfactorily, there’s one more issue: the software. I won’t be running all those scripts at home. So it’s not just me you have to trust: you have to trust all the other people with administrative access to the machine that runs the scripts, and you have to trust the employees of the hosting company who have physical access to the machine.

Any one of them could attempt to alter the data somehow. However hard I tried, I wouldn’t be able to protect my scripts from this. In the worst case, they would end up adding a valid, verified signature to data that had been altered externally. What would be the value of that signature?

And this is the exact reason why I don’t do automatic signatures.

How to verify the mirrors then?

So if automatic signatures are not the way, how can you verify the commits on repository mirrors? The answer is not that complex.

As I’ve mentioned, the mirrors use merge commits to combine metadata updates with original repository commits. What’s important is that this preserves the original commits, along with their valid signatures and therefore provides a way to verify them. What’s the use of that?

Well, you can look for the last merge commit to find the matching upstream commit. Then you can use the usual procedure to verify the upstream commit. And then, you can diff it against the mirror HEAD to see that only caches and other metadata have been altered. While this doesn’t guarantee that the alterations are genuine, the danger coming from them is rather small (if any).
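The recipe above can be sketched with plain git. Since the real mirror repositories and developer keys aren't available here, the following self-contained toy (all names illustrative) mimics the layout — an upstream branch merged into a mirror branch that adds metadata — and walks the same three steps; in real use you would run the steps inside a clone of the mirror and actually execute `git verify-commit`:

```shell
# Toy demonstration of the verification recipe, in a throwaway repository.
set -e
repo=$(mktemp -d)
g() { git -C "$repo" -c user.name=demo -c user.email=demo@example.com "$@"; }

g init -q
g checkout -q -b upstream
g commit -q --allow-empty -m "upstream commit 1 (would be GPG-signed)"
g checkout -q -b mirror
echo cache-data > "$repo/md5-cache"
g add md5-cache
g commit -q -m "cache regeneration"
g checkout -q upstream
g commit -q --allow-empty -m "upstream commit 2 (would be GPG-signed)"
g checkout -q mirror
g merge -q --no-ff -m "merge upstream commits into mirror" upstream

# Step 1: the merge's second parent is the matching upstream commit.
upstream_commit=$(g rev-parse HEAD^2)

# Step 2: verify its signature -- in real use:
#   git verify-commit "$upstream_commit"
# (skipped here; the toy commits are unsigned)

# Step 3: diff upstream against mirror HEAD; only metadata should differ.
g diff --name-only "$upstream_commit" HEAD
```

The diff in step 3 lists only the cache file, which is exactly the property described above: the merge preserves the signed upstream commit untouched, and everything the mirror adds on top is visible metadata.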

‘Blackhole’ Exploit Kit Author Gets 7 Years

Postby BrianKrebs via Krebs on Security »

A Moscow court this week convicted and sentenced seven hackers for breaking into countless online bank accounts — including “Paunch,” the nickname used by the author of the infamous “Blackhole” exploit kit.  Once an extremely popular crimeware-as-a-service offering, Blackhole was for several years responsible for a large percentage of malware infections and stolen banking credentials, and likely contributed to tens of millions of dollars stolen from small to mid-sized businesses over several years.

Fedotov, the convicted creator of the Blackhole Exploit Kit, stands in front of his Porsche Cayenne in an undated photo.
According to Russia’s ITAR-TASS news network, Dmitry “Paunch” Fedotov was sentenced on April 12 to seven years in a Russian penal colony. In October 2013, the then 27-year-old Fedotov was arrested along with an entire team of other cybercriminals who worked to sell, develop and profit from Blackhole.

According to Russian security firm Group-IB, Paunch had more than 1,000 customers and was earning $50,000 per month from his illegal activity. The image at right shows Paunch standing in front of his personal car, a Porsche Cayenne.

First spotted in 2010, BlackHole is commercial crimeware designed to be stitched into hacked or malicious sites and exploit a variety of Web-browser vulnerabilities for the purposes of installing malware of the customer’s choosing.

The price of renting the kit ran from $500 to $700 per month. For an extra $50 a month, Paunch also rented out “crypting” services to customers; cryptors are designed to obfuscate malicious software so that it remains undetectable by antivirus software.

Paunch worked with several other cybercriminals to purchase new exploits and security vulnerabilities that could be rolled into Blackhole and help increase the success of the software. He eventually sought to buy the exploits from other cybercrooks directly to fund a pricier ($10,000/month) and more exclusive exploit pack called “Cool Exploit Kit.”

The main page of the Blackhole exploit kit Web interface.
As documented on this blog in January 2013 (see Crimeware Author Funds Exploit Buying Spree), Paunch contracted with a third-party exploit broker who announced that he had a $100,000 budget for buying new, previously undocumented “zero-day” vulnerabilities.

Not long after that story, the individual with whom Paunch worked to purchase those exclusive exploits — a miscreant who uses the nickname “J.P. Morgan” — posted a message to the Darkode[dot]com crime forum, stating that he was doubling his exploit-buying budget to $200,000.

In October 2013, shortly after news of Paunch’s arrest leaked to the media, J.P. Morgan posted to Darkode again, this time more than doubling his previous budget — to $450,000.

“Dear ladies and gentlemen! In light of recent events, we look to build a new exploit kit framework. We have budgeted $450,000 to buy vulnerabilities of a browser and its plugins, which will be used only by us afterwards! ”

J.P. Morgan alludes to his former partner’s arrest, and ups his monthly exploit buying budget to $450,000.
The Russian Interior Ministry (MVD) estimates that Paunch and his gang earned more than 70 million rubles, or roughly USD $2.3 million. But this estimate is misleading because Blackhole was used as a means to perpetrate a vast array of cybercrimes. I would argue that Blackhole was perhaps the most important driving force behind an explosion of cyber fraud over the past three years. A majority of Paunch’s customers were using the kit to grow botnets powered by Zeus and Citadel, banking Trojans that are typically used in cyberheists targeting consumers and small businesses.

For more about Paunch, check out Who is Paunch?, a profile I ran in 2013 shortly after Fedotov’s arrest that examines some of the clues that connected his online criminal persona with his personal social networking profiles.

Update, 1:42: Corrected headline.

pfsense 2.3, now on FreeBSD 10.3 with pkg

Postby Dan Langille via Dan Langille's Other Diary »

I upgraded my pfSense box to 2.3 last night. Here is what I got:

uname -a
FreeBSD bast.int.unixathome.org 10.3-RELEASE FreeBSD 10.3-RELEASE #4 05adf0a(RELENG_2_3_0): Mon Apr 11 19:09:19 CDT 2016 root@factory23-amd64-builder:/builder/factory-230/tmp/obj/builder/factory-230/tmp/FreeBSD-src/sys/pfSense amd64

These are the package repos they are using (as taken from pkg -vv):

Repositories:
  pfSense-core: {
    url : "pkg+http://firmware.netgate.com/pkg/pfSense_factory-v2_3_0_amd64-core",
    enabled : yes,
    priority : [...]

‘Badlock’ Bug Tops Microsoft Patch Batch

Postby BrianKrebs via Krebs on Security »

Microsoft released fixes on Tuesday to plug critical security holes in Windows and other software. The company issued 13 patches to tackle dozens of vulnerabilities, including a much-hyped “Badlock” file-sharing bug that appears ripe for exploitation. Also, Adobe updated its Flash Player release to address at least two-dozen flaws — in addition to the zero-day vulnerability Adobe patched last week.

Source: badlock.org
The Windows patch that seems to be getting the most attention this month remedies seven vulnerabilities in Samba, a service used to manage file and print services across networks and multiple operating systems. This may sound innocuous enough, but attackers who gain access to a private or corporate network could use these flaws to intercept traffic, view or modify user passwords, or shut down critical services.

According to badlock.org, a Web site set up to disseminate information about the widespread nature of the threat that this vulnerability poses, we are likely to see active exploitation of the Samba vulnerabilities soon.

Two of the Microsoft patches address flaws that were disclosed prior to Patch Tuesday. One of them is included in a bundle of fixes for Internet Explorer. A critical update for the Microsoft Graphics Component targets four vulnerabilities, two of which have been detected already in exploits in the wild, according to Chris Goettl at security vendor Shavlik.

Just a reminder: If you use Windows and haven’t yet taken advantage of the Enhanced Mitigation Experience Toolkit, a.k.a. “EMET,” you should definitely consider it. I describe the basic features and benefits of running EMET in this blog post from 2014 (yes, it’s time to revisit EMET in a future post), but the gist of it is that EMET helps block or blunt exploits against known and unknown Windows vulnerabilities and flaws in third-party applications that run on top of Windows. The latest version, v. 5.5, is available here.

On Friday, Adobe released an emergency update for Flash Player to fix a vulnerability that is being actively exploited in the wild and used to foist malware (such as ransomware). Adobe updated its advisory for that release to include fixes for 23 additional flaws.

As I noted in last week’s piece on the emergency Flash Patch, most users are better off hobbling or removing Flash altogether. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent version for Mac and Windows users is 21.0.0.213, and should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). Chrome and IE should auto-install the latest Flash version on browser restart (I had to manually restart Chrome to get the latest Flash version).

Fly Hawaii

Postby via www.my-universe.com Blog Feed »

At the moment I am amusing myself with short flights between the Hawaiian islands, an ideal playground for turboprop pilots. Here you can fly VFR quite well even with larger aircraft and leave the GPS switched off for a change. The islands offer great scenic variety and thus good visual landmarks. Many airfields lie on the coast, so they are relatively easy to find. The weather conditions on Hawaii also usually permit visual navigation.


X-Plane pilots take note: Hawaii offers a whole range of free scenery add-ons. Particularly noteworthy, in my view, are the airport sceneries by NAPS (Freddy De Pues, Hans H. Gindra & Marc Leydecker), which cover eight airfields spread across most of the (inhabited) islands: Kona and Hilo on the main island of Hawaii; Kahului, Lahaina and Hana on Maui; Honolulu and Dillingham on Oahu; and Lihue on Kauai. These scenery add-ons can be downloaded here. Further airfields on Lanai and Molokai can also be found on x-plane.org.

But the airports and airfields themselves contribute only little to the islands' scenic charm. If you want to enjoy the landscape at a leisurely pace from a propeller aircraft, I can warmly recommend the scenery add-ons from Hawaii Photoreal. For X-Plane, three of the eight islands have been completed so far (I'm not counting Molokini as an island, even though you can make a nice stopover there with a LISA Akoya…). The results so far are promising, and donors are to receive access to a Plus version with season-dependent textures once it is finished.

Last but not least, here are a few screenshots as appetizers, taken with the scenery add-ons described above as well as the payware add-ons SkyMaxx Pro 3.1, Leading Edge Simulations Saab 340A (version 1.3), RWDesigns DHC-6 Series 300 (version 1.2) and JARDesign Airbus A330-200 (version 1.2r3).


Update

Thu 14 Apr 2016
Hawaii Photoreal has followed up: the island of Molokai is now also available for X-Plane. However, the download is not yet on the project website; for now it can only be found at X-Pilot.


Bash: Souping Up the Command History

Postby bed via Zockertown: Nerten News »

(Original article from 2005) The things you discover when you finally have some time again... Bash is so powerful and convenient that most of us use only a small percentage of it. Searching the web for descriptions of the file /etc/inputrc, for example, turns up all kinds of pages that have obviously all copied from one another. A short article on www.allweil.net/blog/ got me to dig into it a bit. The two entries
#/etc/inputrc:
# alternate mappings for "page up" and "page down" to search the history
"\e[5~": history-search-backward
"\e[6~": history-search-forward
make it possible to search the command history with the Page Up and Page Down keys. That is, if you have typed e.g. ssh, pressing Page Up jumps to the matching entries in your input history (provided, of course, that you have entered ssh commands before). This is considerably faster than searching with Ctrl-R: Ctrl-R searches the entire input, while Page Up/Page Down shows only the matches that begin with the search term.

The interesting thing for me was that this functionality is already enabled in e.g. SUSE, but not in Kanotix, and I am sure that very few SUSE users know about it. But back to the topic: my point is that these wonderful features are simply far too little known, and a comment like 'alternate mappings for "page up" and "page down" to search the history' does not exactly rub your nose in them. IMO we should write a more detailed description of these options, with examples, so that these nice gimmicks do not simply lie fallow. More info at Robertkehl.de. I originally stumbled across this via www.allweil.net/blog/.

[Update 02.10.2008] I just spent ages searching my own blog for precisely this post; somehow the right search terms were still missing here, so let's add some: completing a started command line, profile, bash tip, searching the history, completing a command with Page Up/Page Down.

Thematically related: saving the bash history instead of overwriting it. Well, let's see whether I find this again when I need it in three years. Edit: enabled by default in Lenny. Not enabled in Elive 2.0.

[Update 30.01.2013] If you do not want a system-wide setting, it also works with ~/.inputrc, and thus also on a system where you have no root rights.

[Update 13.04.2016] You can also change the mapping of the cursor keys:

## arrow up
"\e[A":history-search-backward
## arrow down
"\e[B":history-search-forward
This has its own appeal, and it works because, when no character has been typed yet, it simply moves to the next or previous entry, matching the original behavior of the cursor keys.
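Combining the snippets above, a user-level file along these lines enables prefix search without root rights (a sketch; keep whichever pair of bindings you prefer):

```
# ~/.inputrc -- prefix search in the bash history, no root needed
# PageUp / PageDown: jump to history entries starting with what you typed
"\e[5~": history-search-backward
"\e[6~": history-search-forward
# the same for the arrow keys, replacing their default behavior
"\e[A": history-search-backward
"\e[B": history-search-forward
```

Changes take effect in new shells, or in the current one after pressing Ctrl-X Ctrl-R to re-read the init file.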


New Threat Can Auto-Brick Apple Devices

Postby BrianKrebs via Krebs on Security »

If you use an Apple iPhone, iPad or other iDevice, now would be an excellent time to ensure that the machine is running the latest version of Apple’s mobile operating system — version 9.3.1. Failing to do so could expose your devices to automated threats capable of rendering them unresponsive and perhaps forever useless.

Zach Straley demonstrating the fatal Jan. 1, 1970 bug. Don’t try this at home!
On Feb. 11, 2016, researcher Zach Straley posted a Youtube video exposing his startling and bizarrely simple discovery: Manually setting the date of your iPhone or iPad all the way back to Jan. 1, 1970 will permanently brick the device (don’t try this at home, or against frenemies!).

Now that Apple has patched the flaw that Straley exploited with his fingers, researchers say they’ve proven how easy it would be to automate the attack over a network, so that potential victims would need only to wander within range of a hostile wireless network to have their pricey Apple devices turned into useless bricks.

Not long after Straley’s video began pulling in millions of views, security researchers Patrick Kelley and Matt Harrigan wondered: Could they automate the exploitation of this oddly severe and destructive date bug? The researchers discovered that indeed they could, armed with only $120 of electronics (not counting the cost of the bricked iDevices), a basic understanding of networking, and a familiarity with the way Apple devices connect to wireless networks.

Apple products like the iPad (and virtually all mass-market wireless devices) are designed to automatically connect to wireless networks they have seen before. They do this with a relatively weak level of authentication: If you connect to a network named “Hotspot” once, going forward your device may automatically connect to any open network that also happens to be called “Hotspot.”

For example, to use Starbucks’ free Wi-Fi service, you’ll have to connect to a network called “attwifi”. But once you’ve done that, you won’t ever have to manually connect to a network called “attwifi” again. The next time you visit a Starbucks, just pull out your iPad and the device automagically connects.

From an attacker’s perspective, this is a golden opportunity. Why? He only needs to advertise a fake open network called “attwifi” at a spot where large numbers of computer users are known to congregate. Using specialized hardware to amplify his Wi-Fi signal, he can force many users to connect to his (evil) “attwifi” hotspot. From there, he can attempt to inspect, modify or redirect any network traffic for any iPads or other devices that unwittingly connect to his evil network.
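As an illustration of how little an "evil twin" requires, an open-network hostapd configuration along these lines would suffice (a hypothetical sketch; the interface name and SSID are assumptions, shown only to make the point that no credentials are involved):

```
# hostapd.conf sketch: an open access point impersonating a known SSID
interface=wlan0
driver=nl80211
ssid=attwifi
hw_mode=g
channel=6
# no wpa= / wpa_passphrase= lines: an open network is exactly what
# auto-join logic will reconnect to without asking
```

hostapd is only one of several ways to stand up such an access point; the point is that the SSID string is the entire "credential" the client checks.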

TIME TO DIE

And this is exactly what Kelley and Harrigan say they have done in real-life tests. They realized that iPads and other iDevices constantly check various “network time protocol” (NTP) servers around the globe to sync their internal date and time clocks.

The researchers said they discovered they could build a hostile Wi-Fi network that would force Apple devices to download time and date updates from their own (evil) NTP time server: And to set their internal clocks to one infernal date and time in particular: January 1, 1970.
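The researchers' rogue NTP server is not public, but the arithmetic behind the poisoned timestamp is easy to show. NTP counts seconds from Jan. 1, 1900, while Unix time starts at Jan. 1, 1970; the sketch below (illustrative only, not the researchers' code) computes the NTP seconds value a rogue server would send to mean "Jan. 1, 1970":

```shell
# NTP's epoch is 1900-01-01; Unix's is 1970-01-01. The fixed offset between
# the two is 2208988800 seconds (70 years of 365 days, plus 17 leap days).
NTP_UNIX_DELTA=2208988800

# Convert a Unix timestamp (in seconds) to the NTP seconds field.
unix_to_ntp() {
    echo $(( $1 + NTP_UNIX_DELTA ))
}

# A server answering "Jan. 1, 1970" puts exactly the delta on the wire:
unix_to_ntp 0
```

In other words, the attack does not need any exotic payload; a well-formed NTP response carrying this perfectly legal timestamp is enough.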

Harrigan and Kelley named their destructive Wi-Fi test network “Phonebreaker.”
The result? The iPads that were brought within range of the test (evil) network rebooted, and began to slowly self-destruct. It’s not clear why they do this, but here’s one possible explanation: Most applications on an iPad are configured to use security certificates that encrypt data transmitted to and from the user’s device. Those encryption certificates stop working correctly if the system time and date on the user’s mobile is set to a year that predates the certificate’s issuance.

Harrigan and Kelley said this apparently creates havoc with most of the applications built into the iPad and iPhone, and that the ensuing bedlam as applications on the device compete for resources quickly overwhelms the iPad’s computer processing power. So much so that within minutes, they found their test iPad had reached 130 degrees Fahrenheit (54 Celsius), as the date and clock settings on the affected devices inexplicably and eerily began counting backwards.

 



Harrigan, president and CEO of San Diego-based security firm PacketSled, described the meltdown thusly:

“One thing we noticed was when we set the date on the iPad to 1970, the iPad display clock started counting backwards. While we were plugging in the second test iPad 15 minutes later, the first iPad said it was Dec. 15, 1968. I looked at Patrick and was like, ‘Did you mess with that thing?’ He hadn’t. It finally stopped at 1965, and by that time [the iPad] was about the temperature I like my steak served at.”

Kelley, a senior penetration tester with CriticalAssets.com, said he and Harrigan worked with Apple to coordinate the release of their findings, ensuring the disclosure did not come before Apple had issued a fix for the vulnerability. The flaw is present in all Apple devices running anything lower than iOS 9.3.1.

Apple did not respond to requests for comment. But an email shared by the researchers apparently sent by Apple’s product security team suggests the company’s researchers were unable to force an affected device to heat to more than 45.8 degrees Celsius (~114 degrees Fahrenheit). The note read:

“1) We confirmed that iOS 9.3 addresses the issue that left a device unresponsive when the date is set to 1/1/1970.

2) A device affected by this issue can be restored to iOS 9.3 or later. iTunes restored the iPad Air you provided to us for inspection.”

3) By examining the device, we determined that the battery temperature did not exceed 45.8 degrees centigrade.”

EVIL HARDWARE

According to Harrigan and Kelley, the hardware needed to execute this attack is little more than a common Raspberry Pi device with some custom software.

“By spoofing time.apple.com, we were able to roll back the time and have it hand out to all Apple clients on the network,” the researchers wrote in a paper shared with KrebsOnSecurity. “All test devices took the update without question and rolled back to 1970.”

The hardware used to automate an attack against the 1970 bug, including a Raspberry Pi and an Alfa antenna.
The researchers continued: “An interesting side effect was that this caused almost all web browsing traffic to cease working due to time mismatch. Typically, this would prompt a typical user to reboot their device. So, we did that. At this point, we could confirm that the reboot caused all iPads in test to degrade gradually, beginning with the inability to unlock, and ultimately ending with the device overheating and not booting at all. Apple has confirmed this vulnerability to be present in 64 bit devices that are running any version less than 9.3.1.”

Harrigan and Kelley say exploiting this bug on an Apple iPhone device is slightly trickier because iPhones get their network time updates via GSM, the communications standard the devices use to receive and transmit cell phone signals. But they said it may be possible to poison the date and time on iPhones using updates fed to the devices via GSM.

They pointed to research by Brandon Creighton, a research architect at software testing firm Veracode who is perhaps best known for setting up the NinjaTel GSM mobile network at the massive DefCon security conference in 2012. Creighton’s network relied on a technology called OpenBTS — a software based GSM access point. Harrigan and Kelley say an attacker could set up his own mobile (evil) network and push date and time updates to any phones that ping the evil tower.

“It is completely plausible that this vulnerability is exploitable over GSM using OpenBTS or OpenBSC to set the time,” Kelley said.

Creighton agreed, saying that his own experience testing and running the NinjaTel network shows that it’s theoretically possible, although he allows that he’s never tried it.

“Just from my experimentation, theoretically from a protocol level you can do it,” Creighton wrote in a note to KrebsOnSecurity. “But there are lots of factors (the carrier; the parameters on the SIM card; the phone’s locked status; the kind of phone; the baseband version; previously joined networks; neighboring towers; RF signal strength; and more).  If you’re just trying to cause general chaos, you don’t need to work very hard. But if, say, you were trying to target an individual device, it would require an additional amount of prep time/recon.”

Whether or not this attack could be used to remotely ruin iPhones or turn iPads into expensive skillets, it seems clear that failing to update to the latest version of Apple iOS is a less-than-stellar idea. iPad users who have not updated their OS need to be extremely cautious with respect to joining networks that they don’t know or trust.

iOS and Mac OS X include an “Ask to Join Networks” setting that stops a device from automatically joining wireless networks it has never seen before. The side effect is that the device may frequently prompt you to join one of several available wireless networks. More importantly, enabling it doesn’t prevent the device from connecting to, say, “attwifi” if it has previously connected to a network of that name.

The researchers have posted a video on YouTube that explains their work in greater detail.

Update, 1:08 p.m. ET: Added link to video and clarified how Apple’s “ask to join networks” feature works.
Top

Linux firmware for iwlwifi ucode failed with error -2

Postby Zach via The Z-Issue »

Important!

My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!
A couple of weeks ago, I decided to update my primary laptop’s kernel from 4.0 to 4.5. Everything went smoothly with the exception of my wireless networking. This particular laptop uses a wifi chipset that is controlled by the Intel Wireless DVM Firmware:

# lspci | grep 'Network controller'
03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 34)

According to Intel’s Linux support page for wireless networking, I needed kernel support for the ‘iwlwifi’ driver. I remembered this requirement from building the previous kernel, so I included it in the new 4.5 kernel. The new kernel had some additional options, though:

[*] Intel devices
...
< > Intel Wireless WiFi Next Gen AGN - Wireless-N/Advanced-N/Ultimate-N (iwlwifi)
< > Intel Wireless WiFi DVM Firmware support
< > Intel Wireless WiFi MVM Firmware support
Debugging Options --->

As previously mentioned, the Kernel page for iwlwifi indicates that I need the DVM module for my particular chipset, so I selected it. Previously, I chose to build support for the driver into the kernel, and then use the firmware for the device. However, this time, I noticed that it wasn’t loading:

[ 3.962521] iwlwifi 0000:03:00.0: can't disable ASPM; OS doesn't have ASPM control
[ 3.970843] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-6000g2a-6.ucode failed with error -2
[ 3.976457] iwlwifi 0000:03:00.0: loaded firmware version 18.168.6.1 op_mode iwldvm
[ 3.996628] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUG enabled
[ 3.996640] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUGFS disabled
[ 3.996647] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TRACING enabled
[ 3.996656] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Advanced-N 6205 AGN, REV=0xB0
[ 3.996828] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 4.306206] iwlwifi 0000:03:00.0 wlp3s0: renamed from wlan0
[ 9.632778] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.633025] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.633133] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 9.898531] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.898803] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.898906] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 20.605734] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.605983] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.606082] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 20.873465] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.873831] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.873971] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0

The strange thing, though, is that the firmware was right where it should be:

# ls -lh /lib/firmware/
total 664K
-rw-r--r-- 1 root root 662K Mar 26 13:30 iwlwifi-6000g2a-6.ucode

After digging around for a while, I finally figured out the problem. The kernel was trying to load the firmware for this device/driver before it was actually available. There are definitely ways to build the firmware into the kernel image as well, but instead of going that route, I just chose to rebuild my kernel with this driver as a module (which is actually the recommended method anyway):

[*] Intel devices
...
<M> Intel Wireless WiFi Next Gen AGN - Wireless-N/Advanced-N/Ultimate-N (iwlwifi)
<M>   Intel Wireless WiFi DVM Firmware support
< > Intel Wireless WiFi MVM Firmware support
Debugging Options --->
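In .config terms, the module selection above corresponds to lines like these (assuming the standard Kconfig symbol names for this driver family):

```
CONFIG_IWLWIFI=m
CONFIG_IWLDVM=m
# CONFIG_IWLMVM is not set
```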

If I had fully read the page instead of just skimming it, I could have saved myself a lot of time. Hopefully this post will help anyone getting the “Direct firmware load for iwlwifi-6000g2a-6.ucode failed with error -2” error message.

Cheers,
Zach
Top

Adobe Patches Flash Player Zero-Day Threat

Postby BrianKrebs via Krebs on Security »

Adobe Systems this week rushed out an emergency patch to plug a security hole in its widely-installed Flash Player software, warning that the vulnerability is already being exploited in active attacks.

Adobe said a “critical” bug exists in all versions of Flash including Flash versions 21.0.0.197 and lower (older) across a broad range of systems, including Windows, Mac, Linux and Chrome OS. Find out if you have Flash and if so what version by visiting this link.

In a security advisory, the software maker said it is aware of reports that the vulnerability is being actively exploited on systems running Windows 7 and Windows XP with Flash Player version 20.0.0.306 and earlier. 

Adobe said additional security protections built into all versions of Flash including 21.0.0.182 and newer should block this flaw from being exploited. But even if you’re running one of the newer versions of Flash with the additional protections, you should update, hobble or remove Flash as soon as possible.

The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent versions of Flash should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). Chrome and IE should auto-install the latest Flash version on browser restart (I had to manually restart Chrome to get the latest Flash version).

By the way, I’m not the only one trying to make it easier for people to put a lasso on Flash: In a blog post today, Microsoft said Microsoft Edge users on Windows 10 will auto-pause Flash content that is not central to the Web page. The new feature will be available in Windows 10 build 14316.

“Peripheral content like animations or advertisements built with Flash will be displayed in a paused state unless the user explicitly clicks to play that content,” wrote the Microsoft Edge team. “This significantly reduces power consumption and improves performance while preserving the full fidelity of the page. Flash content that is central to the page, like video and games, will not be paused. We are planning for and look forward to a future where Flash is no longer necessary as a default experience in Microsoft Edge.”

Additional reading on this vulnerability:

Kafeine‘s Malware Don’t Need Coffee Blog on active exploitation of the bug.

Trend Micro’s take on evidence that thieves have been using this flaw in automated attacks since at least March 31, 2016.
Top

tuxedo XC1506

Postby bed via Zockertown: Nerten News »

2016-03-21: The day finally came. The new laptop is here.

I ordered the OCZ Trion 150 240GB SSD for the operating system from Reichelt; the second drive is a WD Blue Mobile 1TB (7mm).


TUXEDO Book XC1506 - 15.6" matte Full-HD IPS display; RAM (DDR4 SO-DIMM): 16 GB (2x 8GB) 2400MHz Kingston; graphics card: NVIDIA GeForce GTX 970M 3GB; CPU: Intel Core i7-6700HQ
For the installation I used this netinst ISO (14 March 2016)

cdimage.debian.org/cdimage/unofficial/non-free/cd-including-firmware/weekly-builds/amd64/iso-cd/

Unfortunately, the nouveau driver led to freezes when logging in.

Booting with the parameter ACPI=NO worked around that, though. With the stock Nvidia driver the problem is history.

To have a look at Tuxedo’s customizations, I thought it was a good idea to simply install Ubuntu 16.04. Since the system runs smoothly, I have now just installed gnome-shell on top of it to get rid of the unfamiliar Unity GUI.

The customizations for the special keys are documented here: www.linux-onlineshop.de/forum/index.php?page=Thread&threadID=41

After 14 days of use I am very happy with my new laptop. There are only two shortcomings that bother me a bit.

One is the fingerprint reader in place of a middle mouse button. Sure, you can live with the emulation of the third mouse button, but the hit rate is only close to 100%; a physical button would simply be better.

Maybe I will still find a way to repurpose the fingerprint reader for that.

The second shortcoming is the lack of adjustable charge thresholds, above all a charge limit for the internal battery.

I can only hope that the manufacturer has done its homework and the battery is not really always charged to 100%, even if the indicator suggests that it is.

In that respect I prefer Lenovo’s proprietary approach (keyword: TLP).



PS: I installed XCOM: Enemy Within. It starts insanely fast, with a frame rate between 55 and over 100 in a mission.
Top

FBI: $2.3 Billion Lost to CEO Email Scams

Postby BrianKrebs via Krebs on Security »

The U.S. Federal Bureau of Investigation (FBI) this week warned about a “dramatic” increase in so-called “CEO fraud,” e-mail scams in which the attacker spoofs a message from the boss and tricks someone at the organization into wiring funds to the fraudsters. The FBI estimates these scams have cost organizations more than $2.3 billion in losses over the past three years.

In an alert posted to its site, the FBI said that since January 2015, the agency has seen a 270 percent increase in identified victims and exposed losses from CEO scams. The alert noted that law enforcement globally has received complaints from victims in every U.S. state, and in at least 79 countries.

A typical CEO fraud attack. Image: Phishme
CEO fraud usually begins with the thieves either phishing an executive and gaining access to that individual’s inbox, or emailing employees from a look-alike domain name that is one or two letters off from the target company’s true domain name. For example, if the target company’s domain was “example.com” the thieves might register “examp1e.com” (substituting the letter “L” for the numeral 1) or “example.co,” and send messages from that domain.
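Defenders can flag such look-alike sender domains mechanically. A rough, illustrative sketch (my own code, not any particular product) that flags inbound sender domains within a small edit distance of the company's real domain:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def is_lookalike(sender_domain: str, our_domain: str) -> bool:
    """Flag domains one or two edits away from ours, but not ours exactly."""
    d = edit_distance(sender_domain.lower(), our_domain.lower())
    return 0 < d <= 2

# The two tricks described above:
print(is_lookalike("examp1e.com", "example.com"))  # → True ("1" for "l")
print(is_lookalike("example.co", "example.com"))   # → True (dropped letter)
```

A real filter would also check homoglyphs and recently registered domains, but even this crude distance check catches the two substitution tricks described above.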

Unlike traditional phishing scams, spoofed emails used in CEO fraud schemes rarely set off spam traps because these are targeted phishing scams that are not mass e-mailed. Also, the crooks behind them take the time to understand the target organization’s relationships, activities, interests and travel and/or purchasing plans.

They do this by scraping employee email addresses and other information from the target’s Web site to help make the missives more convincing. In the case where executives or employees have their inboxes compromised by the thieves, the crooks will scour the victim’s email correspondence for certain words that might reveal whether the company routinely deals with wire transfers — searching for messages with key words like “invoice,” “deposit” and “president.”

On the surface, business email compromise scams may seem unsophisticated relative to moneymaking schemes that involve complex malicious software, such as Dyre and ZeuS. But in many ways, CEO fraud is more versatile and adept at sidestepping basic security strategies used by banks and their customers to minimize risks associated with account takeovers. In traditional phishing scams, the attackers interact with the victim’s bank directly, but in the CEO scam the crooks trick the victim into doing that for them.

The FBI estimates that organizations victimized by CEO fraud attacks lose on average between $25,000 and $75,000. But some CEO fraud incidents over the past year have cost victim companies millions — if not tens of millions — of dollars. 

Last month, the Associated Press wrote that toy maker Mattel lost $3 million in 2015 thanks to a CEO fraud phishing scam. In 2015, tech firm Ubiquiti disclosed in a quarterly financial report that it suffered a whopping $46.7 million hit because of a CEO fraud scam. In February 2015, email con artists made off with $17.2 million from The Scoular Co., an employee-owned commodities trader. More recently, I wrote about a slightly more complex CEO fraud scheme that incorporated a phony phone call from a phisher posing as an accountant at KPMG.

The FBI urges businesses to adopt two-step or two-factor authentication for email, where available, and to establish other communication channels — such as telephone calls — to verify significant transactions. Businesses are also advised to exercise restraint when publishing information about employee activities on their Web sites or through social media, as attackers perpetrating these schemes often will try to discover information about when executives at the targeted organization will be traveling or otherwise out of the office.

For an example of what some of these CEO fraud scams look like, check out this post from security education and awareness firm Phishme about scam artists trying to target the company’s leadership.

I’m always amazed when I hear security professionals I know and respect make comments suggesting that phishing and spam are solved problems. The right mix of blacklisting and email validation regimes like DKIM and SPF can block the vast majority of this junk, these experts argue.

But CEO fraud attacks succeed because they rely almost entirely on tricking employees into ignoring or sidestepping some very basic security precautions. Educating employees so that they are less likely to fall for these scams won’t block all social engineering attacks, but it should help. Remember, the attackers are constantly testing users’ security awareness. Organizations might as well be doing the same, using periodic tests to identify problematic users and to place additional security controls on those individuals.
Top

After Tax Fraud Spike, Payroll Firm Greenshades Ditches SSN/DOB Logins

Postby BrianKrebs via Krebs on Security »

Online payroll management firm Greenshades.com is an object lesson in how not to do authentication. Until very recently, the company allowed corporate payroll administrators to access employee payroll data online using nothing more than an employee’s date of birth and Social Security number. That is, until criminals discovered this and began mass-filing fraudulent tax refund requests with the IRS for large swaths of employees at firms that use the company’s services.

A notice on the Greenshades Web site.
Jacksonville, Fla.-based Greenshades posted an alert on its homepage stating that the company “has seen an abnormal increase in identity thieves using personal information to fraudulently log into the company’s system to access personal tax information.”

Many online services blame these sorts of attacks on customers re-using the same password at multiple sites, but Greenshades set customers up for this by allowing access to payroll records just by supplying the employee’s Social Security number and date of birth.

As this author has sought repeatedly to demonstrate, SSN/DOB information is extremely easy and cheap to obtain via multiple criminal-run Web sites: SSN/DOB data is reliably available for purchase from underground online crime shops for less than $4 per person (payable in Bitcoin only).

The spike in tax fraud against employees of companies that use Greenshades came to light earlier this month in various media stories. A number of employees at public high schools in Chicago discovered that crooks beat them to the punch on filing tax returns. An investigation into that incident suggested security weaknesses at Greenshades were to blame.

The Milwaukee Journal Sentinel wrote last month about tax fraud perpetrated against local county workers, fraud that also was linked to compromised Greenshades accounts. In Nebraska, the Lower Platte North Natural Resources District and Fremont Health hospital had a number of employees with tax fraud linked to compromised Greenshades accounts, according to a report in the Fremont Tribune.

Greenshades co-CEO Matthew Kane said the company allowed payroll administrators to access W2 information with nothing more than SSN and DOB for one simple reason: Many customers demanded it.

“There’s a valid reason to have what I call weak login credentials,” Kane told KrebsOnSecurity. “Some of our clients clamor for weaker login credentials, such as companies that have a large staff of temporary workers.”

Kane said customers have a “wide range of options” to select from in choosing how they will authenticate to Greenshades.com, but that the most secure option currently offered is a simple username and password.

When asked whether the company offers any sort of two-step or two-factor authentication, Kane argued that corporate email addresses assigned to company employees serve as a kind of second factor.

“In this case, the second factor would be having access to that corporate inbox,” Kane reasoned. He added that Greenshades is working on rolling out a 2-factor authentication feature that may not be optional going forward.
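Greenshades has not described how its planned feature will work, but a conventional second factor is a time-based one-time password per RFC 6238. A generic sketch of how such codes are generated (illustrative only, not Greenshades' implementation):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: at Unix time 59 the 8-digit code is 94287082.
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because the code changes every 30 seconds and is derived from a shared secret rather than static data like an SSN or birthday, stolen personal information alone is not enough to log in.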

Kane said that although Greenshades heard from a “significant number” of its customers about unauthorized access to employee records, the company believes the overall percentage of affected employees at individual customer organizations was low.

However, in at least some of the reported incidents tied to this mess at Greenshades, the overall percentage has been quite high. In the case of the Lower Platte North NRD, for example, 90 percent of employees had their taxes filed fraudulently this year.

It’s remarkable that a company which specializes in helping firms manage sensitive tax and payroll data could be so lax with authentication. Unfortunately, shoddy authentication is still quite common — even among banks. In February, Pittsburgh, Pa.-based First National Bank alerted customers gained through a recent merger with Metro Bank that they could access the company’s bill pay and electronic banking portal by supplying their Metro Bank username and the last four digits of their Social Security number.

A letter from First National Bank to its customers.
Relying on static data elements like SSNs and birthdays for authentication is a horrible idea all around. These data points are no longer secret because they are broadly available for sale on most Americans, and companies have no business using them for authentication.
Top

Sources: Trump Hotels Breached Again

Postby BrianKrebs via Krebs on Security »

Banking industry sources tell KrebsOnSecurity that the Trump Hotel Collection — a string of luxury properties tied to business magnate and Republican presidential candidate Donald Trump — appears to be dealing with another breach of its credit card systems. If confirmed, this would be the second such breach at the Trump properties in less than a year.

Trump International Hotel in New York.
A representative from Trump Hotels said the organization was investigating the claims.

“We are in the midst of a thorough investigation on this matter,” the company said in a written statement. “We are committed to safeguarding all guests’ personal information and will continue to do so vigilantly.”

KrebsOnSecurity reached out to the Trump organization after hearing from three sources in the financial sector who said they’ve noticed a pattern of fraud on customer credit cards which suggests that hackers have breached credit card systems at some — if not all — of the Trump Hotel Collection properties.

On July 1, 2015, this publication was the first to report that banks suspected a breach at Trump properties. After that story ran, Trump Hotel Collection acknowledged being alerted about suspicious activity tied to accounts that were recently used at its hotels. But it didn’t officially confirm that its payment systems had been infected with card-stealing malware until October 2015.

The Trump Hotel Collection includes more than a dozen properties globally. Sources said they noticed a pattern of fraud on cards that were all used at multiple Trump hotel locations in the past two to three months, including at Trump International Hotel New York, Trump Hotel Waikiki in Honolulu, and the Trump International Hotel & Tower in Toronto.

The hospitality industry has been hit hard by card breaches over the past two years. In April 2014, hotel franchising firm White Lodging confirmed its second card breach in a year. Card thieves also have hit Hilton, Hyatt, and Starwood properties. In many of those breaches, the hacked systems were located inside of hotel restaurants and gift shops.

Like most other current presidential candidates, Mr. Trump has offered little in the way of a policy playbook on cybersecurity. But in statements last month, Trump bashed the United States as “obsolete” on cybersecurity, and suggested the country is being “toyed with” by adversaries from China, Russia and elsewhere.

“We’re so obsolete in cyber,” Trump told The New York Times. “We’re the ones that sort of were very much involved with the creation, but we’re so obsolete.” Trump was critical of the US military’s cyber prowess, charging the Defense Department and the military are “going backwards” in cyber while “other countries are moving forward at a much more rapid pace.”

“We are frankly not being led very well in terms of the protection of this country,” Trump said.
Top

Pwncloud – bad crypto in the Owncloud encryption module

Postby Hanno Böck via Hanno's blog »

The Owncloud web application has an encryption module. I first became aware of it when a press release advertising the module was published, containing this:

“Imagine you are an IT organization using industry standard AES 256 encryption keys. Let’s say that a vulnerability is found in the algorithm, and you now need to improve your overall security by switching over to RSA-2048, a completely different algorithm and key set. Now, with ownCloud’s modular encryption approach, you can swap out the existing AES 256 encryption with the new RSA algorithm, giving you added security while still enabling seamless access to enterprise-class file sharing and collaboration for all of your end-users.”

To anyone who knows anything about crypto this sounds quite weird. AES and RSA are very different algorithms – AES is a symmetric algorithm and RSA is a public-key algorithm – and it makes no sense to replace one with the other. Also, RSA is much older than AES. This press release has since been removed from the Owncloud webpage, but its content can still be found in this Reuters news article. This and some conversations with Owncloud developers caused me to have a look at this encryption module.

First it is important to understand what this encryption module is actually supposed to do and understand the threat scenario. The encryption provides no security against a malicious server operator, because the encryption happens on the server. The only scenario where this encryption helps is if one has a trusted server that is using an untrusted storage space.

When one uploads a file with the encryption module enabled it ends up under the same filename in the user's directory on the file storage. Now here's a first, quite obvious problem: The filename itself is not protected, so an attacker that is assumed to be able to see the storage space can already learn something about the supposedly encrypted data.

The content of the file starts with this:
BEGIN:oc_encryption_module:OC_DEFAULT_MODULE:cipher:AES-256-CFB:HEND----

It is then padded with further dashes until position 0x2000, and then the encrypted content follows, Base64-encoded in blocks of 8192 bytes. The header tells us which encryption algorithm and mode is used: AES-256 in CFB mode. CFB stands for Cipher Feedback.

Authenticated and unauthenticated encryption modes

In order to proceed we need some basic understanding of encryption modes. AES is a block cipher with a block size of 128 bit. That means we cannot just encrypt arbitrary input with it, the algorithm itself only encrypts blocks of 128 bit (or 16 byte) at a time. The naive way to encrypt more data is to split it into 16 byte blocks and encrypt every block. This is called Electronic Codebook mode or ECB and it should never be used, because it is completely insecure.

Common modes for encryption are Cipher Block Chaining (CBC) and Counter mode (CTR). These modes are unauthenticated and have a property called malleability. This means an attacker who is able to manipulate encrypted data can manipulate it in a way that causes a defined change in the output. Often this simply means an attacker can flip bits in the ciphertext and the same bits will be flipped in the decrypted data.

To counter this, these modes are usually combined with some authentication mechanism, a common one being HMAC. However, experience has shown that combining encryption and authentication can go wrong. Many vulnerabilities in both TLS and SSH were due to bad combinations of these two mechanisms. Therefore, modern protocols usually use dedicated authenticated encryption modes (AEADs); popular ones include Galois/Counter Mode (GCM), Poly1305 and OCB.
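The encrypt-then-MAC construction itself is conceptually simple; the hard part is getting every detail right. A minimal sketch using Python's standard library (illustrative only; it assumes a dedicated MAC key and authenticates only the ciphertext, ignoring headers, nonces and key separation):

```python
import hashlib
import hmac

TAG_LEN = 32  # HMAC-SHA256 output size

def protect(mac_key: bytes, ciphertext: bytes) -> bytes:
    """Encrypt-then-MAC: append an HMAC-SHA256 tag over the ciphertext."""
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def verify(mac_key: bytes, blob: bytes) -> bytes:
    """Recompute the tag and reject any tampered ciphertext."""
    ciphertext, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("ciphertext was tampered with")
    return ciphertext
```

With such a tag in place, the bit-flipping attacks described below would be detected at download time instead of silently producing attacker-controlled plaintext.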

Cipher Feedback (CFB) mode is a self-correcting mode. When an error happens, which can be a simple data transmission error or a hard disk failure, the decryption will be correct again two blocks later. This also allows decrypting parts of an encrypted data stream. But the crucial thing for our attack is that CFB is unauthenticated and malleable. And Owncloud didn't use any authentication mechanism at all.

Therefore the data is encrypted and an attacker cannot see the content of a file (though he learns some metadata: the size and the filename), but an Owncloud user cannot be sure that the downloaded data is really the data that was uploaded in the first place. The malleability of CFB mode works like this: an attacker can flip arbitrary bits in the ciphertext, and the same bits will be flipped in the decrypted data. However, if he flips a bit in any block, then the following block will contain unpredictable garbage.
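The bit-flipping property is easy to demonstrate with a toy model. In CTR mode (and, for the bits of a block whose successor you are willing to garble, in CFB too) the ciphertext is effectively plaintext XORed with a keystream, so flipping a ciphertext bit flips the same plaintext bit. A sketch, with random bytes standing in for the AES keystream:

```python
import os

def xor_stream(data: bytes, keystream: bytes) -> bytes:
    """Model a stream mode: ciphertext = plaintext XOR keystream."""
    return bytes(d ^ k for d, k in zip(data, keystream))

keystream = os.urandom(16)           # stands in for AES output; the attacker never sees it
plaintext = b"PAY 0000100 USD."
ciphertext = xor_stream(plaintext, keystream)

# Attacker flips bits in the ciphertext without knowing the key:
# turn the '1' (0x31) at offset 8 into a '9' (0x39) by XORing with 0x08.
tampered = bytearray(ciphertext)
tampered[8] ^= 0x31 ^ 0x39
print(xor_stream(bytes(tampered), keystream))  # b'PAY 0000900 USD.'
```

The attacker changed a known position in the plaintext to a chosen value while never learning the key, which is exactly what "malleable" means.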

Backdooring an EXE file

How does that matter in practice? Let's assume we have a group of people that share a software package over Owncloud. One user uploads a Windows EXE installer and the others download it from there and install it. Let's further assume that the attacker doesn't know the content of the EXE file (this is a generous assumption, in many cases he will know, as he knows the filename).

EXE files start with a so-called MZ header, which is the old DOS EXE header that usually gets ignored. At a certain offset (0x3C), which is at the end of the fourth 16-byte block, there is the address of the PE header, which on Windows systems is the real EXE header. Even in modern executables, a small DOS program still follows the MZ header, starting with the fifth 16-byte block. This DOS program usually only shows the message “This program cannot be run in DOS mode”, and this DOS stub program is almost always exactly the same.



Therefore our attacker can do the following: First flip any non-relevant bit in the third 16 byte block. This will cause the fourth block to contain garbage. The fourth block contains the offset of the PE header. As this is now garbled Windows will no longer consider this executable to be a Windows application and will therefore execute the DOS stub.

The attacker can then XOR 16 bytes of his own code with the first 16 bytes of the standard DOS stub code. He then XORs the result with the fifth block of the EXE file where he expects the DOS stub to be. Voila: The resulting decrypted EXE file will contain 16 bytes of code controlled by the attacker.
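The XOR arithmetic in that step can be written down directly: if an attacker knows a block's plaintext, XORing the ciphertext with (known plaintext XOR desired plaintext) makes that block decrypt to bytes of his choosing. An illustrative sketch (my own toy model, not the author's proof of concept; the garbling of the following CFB block is not modeled):

```python
def splice_block(cipher_block: bytes, known_plain: bytes, new_plain: bytes) -> bytes:
    """Known-plaintext malleability: make cipher_block decrypt to new_plain
    instead of known_plain, without knowing the key."""
    return bytes(c ^ k ^ n for c, k, n in zip(cipher_block, known_plain, new_plain))

# Toy demonstration with an XOR keystream standing in for the cipher:
keystream = bytes(range(16))
dos_stub = b"OLD DOS STUB !!!"          # stands in for the known 16-byte DOS stub
attacker_code = b"ATTACKER CODE!!!"     # 16 bytes the attacker wants instead
ciphertext = bytes(p ^ k for p, k in zip(dos_stub, keystream))

spliced = splice_block(ciphertext, dos_stub, attacker_code)
decrypted = bytes(c ^ k for c, k in zip(spliced, keystream))
print(decrypted)  # b'ATTACKER CODE!!!'
```

The attacker needs the key for nothing here; knowing the standard DOS stub bytes is enough to control what that block decrypts to.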

I created a proof of concept of this attack. This isn't enough to launch a real attack, because an attacker only has 16 bytes of DOS assembler code, which is very little. For a real attack an attacker would have to identify further pieces of the executable that are predictable and jump through the code segments.

The first fix

I reported this to Owncloud via HackerOne in January. The first fix they proposed was a change where they used Counter mode (CTR) in combination with HMAC. They still encrypt the file in blocks of 8192 bytes. While this is certainly less problematic than the original construction, it still had an obvious problem: all of the 8192-byte file blocks were encrypted the same way. Therefore an attacker can swap or remove chunks of a file. The encryption is still malleable.

The second fix then included a file counter and also prevented attacks where an attacker reverts a file to an earlier version. This solution is shipped in Owncloud 9.0, which was recently released.

Is this new construction secure? I honestly don't know. It is secure enough that I didn't find another obvious flaw in it, but that doesn't mean a whole lot.

You may wonder at this point why they didn't switch to an authenticated encryption mode like GCM. The reason for that is that PHP doesn't support any authenticated encryption modes. There is a proposal and most likely support for authenticated encryption will land in PHP 7.1. However given that using outdated PHP versions is a very widespread practice it will probably take another decade till anyone can use that in mainstream web applications.

Don't invent your own crypto protocols

The practical relevance of this vulnerability is probably limited, because the scenario that it protects from is relatively obscure. But I think there is a lesson to learn here. When people without a strong cryptographic background create ad-hoc designs of cryptographic protocols it will almost always go wrong.

It is widely known that designing your own crypto algorithms is a bad idea and that you should use standardized and well-tested algorithms like AES. But using secure algorithms doesn't automatically create a secure protocol. One has to know the interactions and limitations of crypto primitives, and this is far from trivial. There is a worrying trend – especially since the Snowden revelations – that new crypto products that never saw any professional review get developed and advertised en masse. A lot of these products are probably extremely insecure and shouldn't be trusted at all.

If you do crypto you should either do it right (which may mean paying someone to review your design or to create it in the first place) or not do it at all. People trust your crypto, and if that trust isn't justified you shouldn't ship a product that creates the impression it contains secure cryptography.

There's another thing that bothers me about this. Although this seems to be a pretty standard use case of crypto (you have a symmetric key and you want to encrypt some data), there is no straightforward and widely available standard solution for it. Using authenticated encryption solves a number of issues, but not all of them (this talk by Adam Langley covers some interesting issues and caveats with authenticated encryption).

The proof of concept can be found on GitHub. I presented this vulnerability in a talk at the Easterhegg conference; a video recording is available.
Top

FreeBSD 10.3-RELEASE Available

Postby Webmaster Team via FreeBSD News Flash »

FreeBSD 10.3-RELEASE is now available. Please be sure to check the Release Notes and Release Errata before installation for any late-breaking news and/or issues with 10.3. More information about FreeBSD releases can be found on the Release Information page.
Top

Turris Omnia and openSUSE

Postby Michal Hrušecký via Michal Hrušecký »

About two weeks ago I was at the annual openSUSE board face-to-face meeting. It was great, and you can read reports of what went on there on the openSUSE project mailing list. In this post I would like to focus on the other agenda I had while coming to Nuremberg. Nuremberg is, among other things, home to SUSE HQ, so there is a high concentration of skilled engineers there, and I wanted to take advantage of that…

A little bit of personal history: I recently joined the Turris team at CZ.NIC, partly because Omnia is so cool and I wanted to help make it happen. Being a long-term openSUSE contributor, I really wanted to find some way to help both projects. I discussed it with my bosses at CZ.NIC and got in contact with Andreas Färber, whom you might know as one of the people playing with ARM within the openSUSE project. The result was that I got approval to bring an Omnia prototype to him for the weekend and let him play with it.

My point was to give him a head start, so that when Omnias start shipping there will already be some research done, and maybe even a howto for openSUSE, so you could replace OpenWRT with openSUSE if you wanted. On the other hand, we also get some preliminary feedback we can still try to incorporate.



Andreas Färber with Omnia

Why test whether you can install openSUSE on Omnia? And do you want to do that? As a typical end user, probably not. Here are a few arguments against it. OpenWRT is great for routers: it has a nice interface, and anything you want to do regarding network setup is really easy. You can set up even a complicated network using a simple web UI. Apart from that, by throwing away OpenWRT you would throw away quite a few of the perks of Omnia, like parental control or the mobile application. You might think it is worth sacrificing those to get a full-fledged server OS you are familiar with, where you can install everything in a non-stripped-down version. Actually, you don't have to sacrifice anything: OpenWRT on Omnia will support LXC, so you can install your OS of choice inside an LXC container and have both an easily manageable router with all the bells and whistles and a virtual server with very little overhead doing the complicated stuff. Or even two or three of them. So most probably you want to keep OpenWRT and install openSUSE or some other Linux distribution inside a container.

But if you still do want to replace OpenWRT, can you? And how difficult is it? Long story short, the answer is yes. Andreas was able to get openSUSE running on Omnia and even wrote instructions on how to do it! One little caveat: Turris Omnia is still under heavy development. What Andreas played with was one of the prototypes we have. The software is still being worked on, and even the hardware gets polished a little from time to time. Still, the hardware will not change drastically, so the howto probably won't change much either. It is nice to see that installing your average Linux distribution is possible and quite easy.

Why is having this option so important, given all the arguments I listed against using it? Because of freedom. I consider it a great advantage, when buying a piece of hardware, to know that I can do whatever I want with it and am not locked in and dependent on the vendor for everything. Being able to install openSUSE on Omnia basically proves that Omnia is really open, and even in the unlikely situation in which hell freezes over and CZ.NIC disappears or turns evil, you will still be able to install the latest kernel 66.6 and continue to do whatever you want with your router.

This post was originally posted on CZ.NIC blog, re-posted here to make it available on Planet openSUSE.

Top

Shell calendar generator

Postby Michal Hrušecký via Michal Hrušecký »

Some people still use paper calendars: the kind where you have a picture for the month and all the days of the month listed. I have some relatives who use those. On a loosely related topic, I like to travel and I like to take pictures in foreign lands. So combining both is an obvious idea: create a calendar where the pictures of the month were taken by me. I searched for a ready-to-use solution but didn't find anything. So I decided to create my own simple tool, and this post is about creating that tool.

I know time and date handling is complicated, and I wasn't really looking to learn all the rules regarding dates and times and program them myself. There had to be a simple way to use some of the tools that are already implemented. An obvious option would be to use one of the date manipulation functions like mktime and write the tool in C, but that sounded quite heavyweight for such a simple tool. Using Ruby would be an option, but still somewhat too much, and I'm not a fluent Rubyist; my Python and Perl are even rustier. I was also thinking about which output format would be easy to print. As I was targeting pretty printed paper, LaTeX sounded like a good choice, and in theory it could be used to implement the whole thing. I even found somebody who did that, but I didn't manage to comprehend how it worked, how to modify it, or even how to compile it. Turns out my LaTeX is rusty as well.

So I decided to use shell and the powerful date command to generate the content. I started by generating LaTeX code, since I still want it on paper in the end, right? Trouble is, LaTeX makes great papers if you want to look serious and do some serious typography. For a calendar on the wall, you probably want to make it fancy and forget typography. I was trying to make it do what I wanted, but it was hard. So hard I gave up. I ended up with the winning combo: shell and HTML. HTML is easy to view and print, and CSS supports a variety of options, including different styles for screen and print media.

HTML and CSS made the whole exercise really easy, and I now have something working on GitHub in 150 lines of code, half of which is CSS. It's not perfect and there is plenty of room for optimization, but it is really simple and fast enough. Are you interested? Give it a try, and if it doesn't work well for you, pull requests are welcome.
Top

Upgraded

Postby via www.my-universe.com Blog Feed »

I'm one of those people who, old-fashioned as it may be, still hold on to a desktop computer. It simply packs quite a bit more punch than a notebook, let alone a tablet. Especially when running a flight simulator at high resolution, the frame rate tends to drop quickly. That was the case with my old workstation (dating from 2011, after all), which, thanks to two Xeon hexacore CPUs and plenty of RAM, still holds its own for many tasks, but is clearly showing its limits with 3D graphics (and, on top of that, produces the noise level of a turbofan engine at takeoff).


So I invested in a new machine, which has now gone into service alongside the old workstation. Since I put it together specifically for use with X-Plane, I skipped the second CPU this time (X-Plane only uses a few cores anyway, but benefits from a faster CPU). Instead, it got the fastest Skylake CPU currently available (Core i7-6700K), combined with an NVIDIA GeForce GTX Titan X. The former is kept happy by water cooling, which keeps the noise level of the whole system pleasantly low.

Storage also took a big step forward: while the old workstation came with four mechanical 320 GB SATA disks in removable trays, the new machine uses a 512 GB V-NAND drive attached via PCIe, which is noticeably faster again than a SATA-attached SSD. That capacity is a bit too small for X-Plane, though (especially once Andras Fabian's HD and UHD meshes come into play), so there are also two 1 TB Samsung SSD 850 Pros in the machine, reserved for X-Plane as a roughly 2 TB stripe set, and at SATA 6 Gb/s they are not exactly slow either.

That left the question of the monitor: my old workstation talks through a 24″ Samsung SyncMaster 245B (inherited from its predecessor). Since the workstation stays in service, a new monitor was needed. I finally settled on a 32″ Samsung S32D850T with WQHD resolution (2560×1440) and LED backlight. This monitor is really™ big, I mean properly big; I have owned TVs that were considerably smaller… Certainly not a classic gaming monitor (it has an MVA panel), but it offers excellent contrast and wide viewing angles. As a virtual pilot, you get properly immersed. Only my old wallpaper collection has become useless…

For the rest of the peripherals I stayed conservative. My Thrustmaster HOTAS Warthog flight controls moved over, of course, as did my Logitech Z4 speakers (subwoofer and stereo satellites). For keyboard and mouse I went with old, well-proven favorites (Cherry G80-3000 keyboard and Logitech M500 corded mouse). Both already serve me well on the old workstation, but could not move over, since they are still needed there.

Well, enough time spent in the web browser; the cockpit is calling…
Top

Last words on diabetes and software

Postby Flameeyes via Flameeyes's Weblog »

This started as a rant on G+, then became too long and better suited for a blog.

I do not understand why we can easily get people together with something like VideoLAN, but the moment when health is involved, the results are just horrible.

Projects either end up as "startuppy" ventures that want to keep things for themselves and by themselves, or we end up fragmented into tiny one-person projects, because every single glucometer is a different beast and nobody wants to talk with others.

Tonight I ended up in a half-fight with a project I approached saying "I've started drafting an exchange format, because nobody has written one down, and the format I've seen you use is just terrible; when I told you, you didn't reply", and the answer was "we're working on something we want to standardize by talking with manufacturers."

Their "we talk with these" projects are also something insane — one seem to be more like the idea of building a new device from scratch (great long term solution, terrible usefulness for people) and the other one is yet-another-build-your-own-cloud kind of solution that tells you to get Heroku or Azure with MongoDB to store your data. It also tells you to use a non-manufacturer-approved scanner for the sensors, which the comments point out can fry those sensors to begin with. (I should check whether that's actually within ToS for Play Store.)

So you know what? I'm losing hope in FLOSS once again. Maybe I should just stop caring, give up this laptop for a new Microsoft Surface Pro, and keep my head away from FLOSS until I am ready for retirement, at which point I can probably just go and keep up with the reading.

I have tried reaching out to the people who have written other tools, as I posted before, but it looks like people are just not interested in discussing this. I did talk with a few people over email about some of the glucometers I dealt with, but that led to one person creating yet another project wanting to become a business, and two figuring out which original proprietary tools to use, because those do actually work.

So I guess you won't be reading much about diabetes on my blog in the future, because I don't particularly enjoy writing this for my sole use, and clearly that's the only kind of usage these projects will ever get. Sharing seems to be considered deprecated.
Top

Testing one two three: planet venus is dead upstream!

Postby blueness via Anthony G. Basile »

So my blog posts are still not appearing on Planet Gentoo or universe. The aggregation software used on the server is Planet Venus, which has not been maintained in five years. It's an encoding error: line 2222 of feedparser.py needs to be changed from join(arLines) to join(arLines).decode('UTF-8').

So this is another test post.  I’ll probably just delete it if it doesn’t show up again.
Top

AVScale – part1

Postby lu_zero via Luca Barbato »

swscale is one of the most annoying parts of Libav; a couple of years after the initial blueprint, we have something almost functional you can play with.

Colorspace conversion and Scaling

Before delving into the library architecture and the outer API, it is probably good to give an extra-quick summary of what this library is about.

Most multimedia concepts are more or less intuitive:
  • encoding is taking some data (e.g. video frames, audio samples) and compressing it by leaving out unimportant details
  • muxing is the act of storing such compressed data and timestamps so that audio and video can play back in sync
  • demuxing is getting back the compressed data with the timing information stored in the container format
  • decoding inflates the data somehow so that video frames can be rendered on screen and the audio played on the speakers

After the decoding step it would seem that all the hard work is done, but since there isn't a single way to store video pixels or audio samples, you need to process them so they work with your output devices.

That process is usually called resampling for audio; for video we have colorspace conversion, to change the pixel information, and scaling, to change the number of pixels in the image.

Today I’ll introduce you to the new library for colorspace conversion and scaling we are working on.

AVScale

The library aims to be as simple as possible and to hide all the gory details from the user; you won't need to make heads or tails of functions with a large number of arguments, nor of special-purpose functions.

The API itself is modelled after avresample and approaches the problem of conversion and scaling in a way quite different from swscale, following the same design as NAScale.

Everything is a Kernel

One of the key concepts of AVScale is that the conversion chain is assembled out of different components, separating the concerns.

Those components are called kernels.

The kernels can be conceptually divided into two kinds:
  • Conversion kernels, taking an input in a certain format and providing an output in another (e.g. rgb2yuv) without changing any other property.
  • Process kernels, modifying the data while keeping the format itself unchanged (e.g. scale).

This pipeline approach provides great flexibility and helps code reuse.

The most common use cases (such as scaling without conversion, or conversion without scaling) can be faster than solutions that try to merge scaling and conversion into a single step.

API

AVScale works with two kinds of structures:
  • AVPixelFormaton: a full description of the pixel format
  • AVFrame: the frame data, its dimensions and a reference to its format details (aka AVPixelFormaton)

The library will have an AVOption-based system to tune specific options (e.g. selecting the scaling algorithm).

For now only avscale_config and avscale_convert_frame are implemented.

So if the input and output are pre-determined the context can be configured like this:

AVScaleContext *ctx = avscale_alloc_context();

if (!ctx)
    ...

ret = avscale_config(ctx, out, in);
if (ret < 0)
    ...
But you can skip that and scale and/or convert from an input to an output like this:

AVScaleContext *ctx = avscale_alloc_context();

if (!ctx)
    ...

ret = avscale_convert_frame(ctx, out, in);
if (ret < 0)
    ...

avscale_free(&ctx);
The context gets lazily configured on the first call.

Notice that avscale_free() takes a pointer to a pointer, to make sure the context pointer does not stay dangling.

As said the API is really simple and essential.

Help welcome!

Kostya kindly provided an initial proof of concept, and Vittorio, Anton and I prepared this preview in our spare time. There is plenty left to do; if you like the idea (many people keep telling us they would love a swscale replacement), we even have a fundraiser.
Top

Why macros like __GLIBC__ and __UCLIBC__ are bad.

Postby blueness via Anthony G. Basile »

I’ll be honest, this is a short post because the aggregation on planet.gentoo.org is failing for my account!  So, Jorge (jmbsvicetto) is debugging it, and I need to push out another blog entry to trigger venus, the aggregation program.  Since I don’t like writing trivial stuff, I’m going to write something short but hopefully important.

C standard libraries, like glibc, uClibc, musl and the like, were born out of a world in which every UNIX vendor had their own set of useful C functions.  Code portability put pressure on the various libcs to incorporate these functions from one another, first leading to a mess and then to standards like POSIX, XOPEN, SUSv4 and so on.  Chapter 1 of Kerrisk’s The Linux Programming Interface has a nice write-up on this history.

We still live in the shadow of that world today.  If you look through the code base of uClibc you’ll see lots of macros like __GLIBC__, __UCLIBC__, __USE_BSD and __USE_GNU.  These are used in #ifdef … #endif blocks meant to shield features unless you explicitly want a glibc- or uClibc-only feature.

musl has stubbornly, and correctly, refused to include a __MUSL__ macro.  Consider the approach to portability taken by GNU autotools.  Macros such as AC_CHECK_LIBS(), AC_CHECK_FUNC() or AC_CHECK_HEADERS() unambiguously target the feature in question without making use of __GLIBC__ or __UCLIBC__.  Whereas the previous approach lumps functions together into sets, the latter simply asks: do you have this function or not?

Now consider how uClibc makes use of both __GLIBC__ and __UCLIBC__.  If a function is provided by the former but not by the latter, then it expects a program to use

#if defined(__GLIBC__) && !defined(__UCLIBC__)
This is getting a bit ugly and syntactically ambiguous.  Someone not familiar with this could easily misinterpret it, or reject it.

So I’ve hit bugs like these.  I hit one in gdk-pixbuf, and I was not able to convince upstream to consistently use __GLIBC__ and __UCLIBC__.  I also hit this in geocode-glib and geoclue, and they did accept my fix.  I went with the wrong-minded approach because that’s what was already there, and I didn’t feel like sifting through their code base and revamping their build system.  This isn’t just laziness; it’s historical weight.

So kudos to musl.  And for all the faults of GNU autotools, at least its approach to portability is correct.

Top

Travel cards collection

Postby Flameeyes via Flameeyes's Weblog »

As some of you might have noticed, for example by following me on Twitter, I have been traveling a significant amount over the past four years. Part of it has been for work, part for my involvement with VideoLAN, and part for personal reasons (i.e. vacation).

When I travel, I don't rent a car. The main reason is that I (still) don't have a driving license, so particularly when I travel for leisure I tend to go where there is at least some form of public transport, and even better if there is a good one. This matched perfectly with my hopes of visiting Japan (which I did last year), and it usually works out relatively well with conference venues, so I have not had much trouble with it in the past few years.

One thing that is getting a bit out of hand for me, though, is the number of travel cards I have by now. With the exception of Japan, nearly every city here has a different travel card. While London appears to have solved that, at least for tourists and casual passengers, by accepting contactless bank cards as if they were its local travel card (Oyster), nobody else seems to have followed suit, as far as I can see.

Indeed I have at this point at home:

  • Clipper for San Francisco and Bay Area; prepaid, I actually have not used it in a while so I have some money "stuck" on it.
  • SmarTrip for Washington DC; also prepaid, but at least I managed to only keep very little on it.
  • Metro dayLink for Belfast; prepaid by tickets.
  • Ridacard for Edinburgh and the Lothian region; this one has my photo on it, and I paid for a weekly ticket when I used it.
  • imob.venezia, now discontinued, which I used when I lived in Venice; it's just terrible.
  • Suica, for Japan, which is a stored-value card that can be used for payments as well as travel, so it comes the closest to London's use of contactless.
  • Leap which is the local Dublin transports card, also prepaid.
  • Navigo for Paris, but I only used it once because you can only store Monday-to-Sunday tickets on it.
I might add a few more this year, as I'm hitting a few new places. On the other hand, while in London yesterday, I realized how nice and handy it is to just use my bank card for popping in and out of the Tube. And I've been wondering how we got to this system of incompatible cards.

In the list above, most of the cities are one per state or country, which might suggest cards work better within a country, but that's definitely not the case. I have been told that Nottingham recently moved to a consolidated travel card which is not compatible with Oyster either, and both cities are in England.

Suica is the exception. The IC system used in Japan is a stored-value system which can be used both for travel and for general payments, in stores and cafes and so on. This is not "limited" to Tokyo (though limited might be the wrong word there), but rather works in most of the cities I've visited; one exception being buses in Hiroshima, though it worked fine for trams and trains there. It is essentially an upside-down version of what happens in London: as if, instead of using your payment card to travel, you used your travel card for in-store purchases.

The convenience of using a payment card, by the way, lies for me mostly in being able to use (one of) my bank accounts to pay without having to "earmark" money the way I did for Clipper, which is now going to sit unused until the next time I take public transport in SF, and I'm not sure when that will be!

At the same time, I can think of two big obstacles to replacing travel cards with contactless payment: contracts and incentives. On the first, I'm sure there is some weight that TfL (Transport for London) can pull that your average small town can't. On the second, it's a matter for finance experts, about which I can only guess: there is value for the travel companies in receiving money before you travel; Clipper has had my money in their coffers since I topped it up, though I have not used it.

While customers' topped-up credit is essentially a liability for the companies, it also increases their liquidity. So there is little incentive for them, particularly the smaller ones. Indeed, moving to a payment system where the companies get their money mostly from banks rather than through cash is likely to be a problem for them. And we're back to the first matter: contracts. I'm sure TfL can get better deals from banks and credit card companies than most.

There is also the matter of the tech behind all of this. TfL has definitely done a good job of keeping its systems compatible: the Oyster I got in 2009, the first time I boarded a plane, still works. In the same seven years, Venice changed its system twice: once keeping the same name and brand but with different protocols on the card (making it compatible with more NFC systems), and once replacing the previous brand entirely. I assume they kept some compatibility on the cards, but since I no longer live there I have not investigated.

I'm definitely not one of those people who insist that opensource is the solution to everything, and that just by being open, things become better for society. On the other hand, I do wonder whether it would make sense for the opensource community to engage with public services like this, to provide a solution that can be more easily adopted by smaller towns which would not otherwise be able to afford such a system themselves.

On the other hand, this would most likely require compromises. The contracts with service providers would likely include a number of NDA-like provisions, and at the same time the hardware would not be available off the shelf.

This post does not provide much practical information, I'm afraid; it's just a piece of a bigger opinion I have about opensource nowadays, and particularly about how many people limit their idea of "public interest" to "privacy" and cryptography.
Top

New AVCodec API

Postby lu_zero via Luca Barbato »

Another week, another API landed in the tree, and since I spent some time drafting it, I guess I should describe how to use what is now implemented. This is part I.

What is here now

Between theory and practice there is a bit of discussion, and obviously the (lack of) time to implement, so here is what differs from what I drafted originally:

  • Function Names: push got renamed to send and pull got renamed to receive.
  • No separate function to probe the process state: need_data and have_data are not here.
  • No codecs have been ported to the new API yet, so no actual asynchronicity for now.
  • Subtitles aren’t supported yet.

New API

There are just 4 new functions replacing both audio-specific and video-specific ones:

// Decode
int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt);
int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame);

// Encode
int avcodec_send_frame(AVCodecContext *avctx, const AVFrame *frame);
int avcodec_receive_packet(AVCodecContext *avctx, AVPacket *avpkt);
The workflow is fairly simple:
– You set up the decoder or the encoder as usual.
– You feed data using the avcodec_send_* functions until you get AVERROR(EAGAIN), which signals that the internal input buffer is full.
– You get the data back using the matching avcodec_receive_* function until you get AVERROR(EAGAIN), signalling that the internal output buffer is empty.
– Once you are done feeding data, you pass a NULL to signal the end of the stream.
– You keep calling the avcodec_receive_* function until you get AVERROR_EOF.
– You free the contexts as usual.

Decoding examples

Setup

The setup uses the usual avcodec_open2.

    ...

    c = avcodec_alloc_context3(codec);

    ret = avcodec_open2(c, codec, &opts);
    if (ret < 0)
        ...

Simple decoding loop

People using the old API usually have some kind of simple loop like

while (get_packet(pkt)) {
    ret = avcodec_decode_video2(c, picture, &got_picture, pkt);
    if (ret < 0) {
        ...
    }
    if (got_picture) {
        ...
    }
}
The old functions can be replaced by calling something like the following.

// The flush packet is a non-NULL packet with size 0 and data NULL
int decode(AVCodecContext *avctx, AVFrame *frame, int *got_frame, AVPacket *pkt)
{
    int ret;

    *got_frame = 0;

    if (pkt) {
        ret = avcodec_send_packet(avctx, pkt);
        // In particular, we don't expect AVERROR(EAGAIN), because we read all
        // decoded frames with avcodec_receive_frame() until done.
        if (ret < 0)
            return ret == AVERROR_EOF ? 0 : ret;
    }

    ret = avcodec_receive_frame(avctx, frame);
    if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
        return ret;
    if (ret >= 0)
        *got_frame = 1;

    return 0;
}

Callback approach

Since the new API can output multiple frames for a single input in certain situations, it is better to process them as they are produced.

// return 0 on success, negative on error
typedef int (*process_frame_cb)(void *ctx, AVFrame *frame);

int decode(AVCodecContext *avctx, AVFrame *pkt,
           process_frame_cb cb, void *priv)
{
    AVFrame *frame = av_frame_alloc();
    int ret;

    ret = avcodec_send_packet(avctx, pkt);
    // Again EAGAIN is not expected
    if (ret < 0)
        goto out;

    while (!ret) {
        ret = avcodec_receive_frame(avctx, frame);
        if (!ret)
            ret = cb(priv, frame);
    }

out:
    av_frame_free(&frame);
    if (ret == AVERROR(EAGAIN))
        return 0;
    return ret;
}

Separated threads

The new API makes it fairly easy to split the workload across two separate threads.

// Assume we have context with a mutex, a condition variable and the AVCodecContext


// Feeding loop
{
    AVPacket *pkt = NULL;

    while ((ret = get_packet(ctx, pkt)) >= 0) {
        pthread_mutex_lock(&ctx->lock);

        ret = avcodec_send_packet(avctx, pkt);
        if (!ret) {
            pthread_cond_signal(&ctx->cond);
        } else if (ret == AVERROR(EAGAIN)) {
            // Signal the draining loop
            pthread_cond_signal(&ctx->cond);
            // Wait here
            pthread_cond_wait(&ctx->cond, &ctx->lock);
        } else if (ret < 0)
            goto out;

        pthread_mutex_unlock(&ctx->lock);
    }

    pthread_mutex_lock(&ctx->lock);
    ret = avcodec_send_packet(avctx, NULL);

    pthread_cond_signal(&ctx->cond);

out:
    pthread_mutex_unlock(&ctx->lock);
    return ret;
}

// Draining loop
{
    AVFrame *frame = av_frame_alloc();

    while (!done) {
        pthread_mutex_lock(&ctx->lock);

        ret = avcodec_receive_frame(avctx, frame);
        if (!ret) {
            pthread_cond_signal(&ctx->cond);
        } else if (ret == AVERROR(EAGAIN)) {
            // Signal the feeding loop
            pthread_cond_signal(&ctx->cond);
            // Wait
            pthread_cond_wait(&ctx->cond, &ctx->lock);
        } else if (ret < 0)
            goto out;

        pthread_mutex_unlock(&ctx->lock);

        if (!ret) {
            do_something(frame);
        }
    }

out:
    pthread_mutex_unlock(&ctx->lock);
    return ret;
}
It isn’t as neat as having all this abstracted away, but it is mostly workable.

Encoding Examples

Simple encoding loop

Some compatibility with the old API can be achieved using something along the lines of:

int encode(AVCodecContext *avctx, AVPacket *pkt, int *got_packet, AVFrame *frame)
{
    int ret;

    *got_packet = 0;

    ret = avcodec_send_frame(avctx, frame);
    if (ret < 0)
        return ret;

    ret = avcodec_receive_packet(avctx, pkt);
    if (!ret)
        *got_packet = 1;
    if (ret == AVERROR(EAGAIN))
        return 0;

    return ret;
}

Callback approach

Since each input can produce multiple outputs, it is better to loop over the output as soon as possible.

// return 0 on success, negative on error
typedef int (*process_packet_cb)(void *ctx, AVPacket *pkt);

int encode(AVCodecContext *avctx, AVFrame *frame,
           process_packet_cb cb, void *priv)
{
    AVPacket *pkt = av_packet_alloc();
    int ret;

    if (!pkt)
        return AVERROR(ENOMEM);

    ret = avcodec_send_frame(avctx, frame);
    if (ret < 0)
        goto out;

    while (!ret) {
        ret = avcodec_receive_packet(avctx, pkt);
        if (!ret)
            ret = cb(priv, pkt);
    }

out:
    av_packet_free(&pkt);
    if (ret == AVERROR(EAGAIN))
        return 0;
    return ret;
}
When possible, the I/O should happen in a different thread, so the callback should just enqueue the packets.
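A minimal sketch of such an enqueue-only callback (the queue types and names here are hypothetical, not part of libavcodec; a real callback would store a reference taken with av_packet_ref() and protect the queue with a mutex, while a consumer thread drains the queue and does the I/O):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical queue types -- NOT part of libavcodec. */
typedef struct PktNode {
    void *pkt;                 /* stands in for a referenced AVPacket */
    struct PktNode *next;
} PktNode;

typedef struct PktQueue {
    PktNode *head, *tail;
    int len;
} PktQueue;

/* Matches the process_packet_cb shape: do no I/O here, just enqueue
 * and return 0 on success, negative on error. */
static int enqueue_packet_cb(void *priv, void *pkt)
{
    PktQueue *q = priv;
    PktNode *n = malloc(sizeof(*n));

    if (!n)
        return -1;             /* AVERROR(ENOMEM) in real code */
    n->pkt  = pkt;
    n->next = NULL;
    if (q->tail)
        q->tail->next = n;
    else
        q->head = n;
    q->tail = n;
    q->len++;
    return 0;
}
```

With something like this, the encode() above would be called as encode(avctx, frame, enqueue_packet_cb, &queue), and a separate writer thread would pop nodes off the queue and write them out.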

Coming Next

This post is long enough so the next one might involve converting a codec to the new API.

hardened-sources Role-based Access Control (RBAC): how to write mostly permissive policies.

Postby blueness via Anthony G. Basile »

RBAC is a security feature of the hardened-sources kernels.  As its name suggests, it’s a role-based access control system which allows you to define policies for restricting access to files, sockets and other system resources.   Even root is restricted, so attacks that escalate privilege are not going to get far even if they do obtain root.  In fact, you should be able to give out remote root access to anyone on a well configured system running RBAC and still remain confident that you are not going to be owned!  I wouldn’t recommend it just in case, but it should be possible.

It is important to understand what RBAC will give you and what it will not.  RBAC has to be part of a more comprehensive security plan and is not a single security solution.  In particular, if one can compromise the kernel, then one can proceed to compromise the RBAC system itself and undermine whatever security it offers.  Or put another way, protecting root is pretty much a moot point if an attacker is able to get ring 0 privileges.  So, you need to start with an already hardened kernel, that is a kernel which is able to protect itself.  In practice, this means configuring most of the GRKERNSEC_* and PAX_* features of a hardened-sources kernel.  Of course, if you’re planning on running RBAC, you need to have that option on too.

Once you have a system up and running with a properly configured kernel, the next step is to set up the policy file which lives at /etc/grsec/policy.  This is where the fun begins because you need to ask yourself what kind of a system you’re going to be running and decide on the policies you’re going to implement.  Most of the existing literature is about setting up a minimum privilege system for a server which runs only a few simple processes, something like a LAMP stack.  I did this for years when I ran a moodle server for D’Youville College.  For a minimum privilege system, you want to deny-by-default and only allow certain processes to have access to certain resources as explicitly stated in the policy file.  RBAC is ideally suited for this.  Recently, however, I was asked to set up a system where the opposite was the case, so this article is going to explore the situation where you want to allow-by-default; however, for completeness let me briefly cover deny-by-default first.

The easiest way to proceed is to get all your services running as they should and then turn on learning mode for about a week, or at least until you have one cycle of, say, log rotations and other cron based jobs.  Basically your services should have attempted to access each resource at least once so the event gets logged.  You then distill those logs into a policy file describing only what should be permitted, and tweak as needed.  You proceed as follows:

1. gradm -P  # Create a password to enable/disable the entire RBAC system
2. gradm -P admin  # Create a password to authenticate to the admin role
3. gradm -F -L /etc/grsec/learning.log # Turn on system wide learning
4. # Wait a week.  Don't do anything you don't want to learn.
5. gradm -F -L /etc/grsec/learning.log -O /etc/grsec/policy  # Generate the policy
6. gradm -E # Enable RBAC system wide
7. # Look for denials.
8. gradm -a admin  # Authenticate to admin to do extraordinary things, like tweak the policy file
9. gradm -R # reload the policy file
10. gradm -u # Drop those privileges to do ordinary things
11. gradm -D # Disable RBAC system wide if you have to
Easy, right?  This will get you pretty far, but you’ll probably discover that some things you want to work are still being denied because those particular events never occurred during the learning.  A typical example here is that you might have ssh’ed in from one IP, but now you’re ssh-ing in from a different IP and you’re getting denied.  To tweak your policy, you first have to escape the restrictions placed on root by transitioning to the admin role.  Then, using dmesg, you can see what was denied, for example:

[14898.986295] grsec: From 192.168.5.2: (root:U:/) denied access to hidden file / by /bin/ls[ls:4751] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:4327] uid/euid:0/0 gid/egid:0/0
This tells you that root, logged in via ssh from 192.168.5.2, tried to ls / but was denied.  As we’ll see below, this is a one line fix, but if there are a cluster of denials to /bin/ls, you may want to turn on learning on just that one subject for root.  To do this you edit the policy file and look for subject /bin/ls under role root.  You then add an ‘l’ to the subject line to enable learning for just that subject.

role root uG
…
# Role: root
subject /bin/ls ol {  # Note the ‘l’
You restart RBAC using gradm -E -L /etc/grsec/partial-learning.log and obtain the new policy for just that subject by running gradm -L /etc/grsec/partial-learning.log -O /etc/grsec/partial-learning.policy.  That single subject block can then be spliced into the full policy file to change the restrictions on /bin/ls when run by root.

It’s pretty obvious that RBAC is designed to deny-by-default. If access is not explicitly granted to a subject (an executable) to access some object (some system resource) when it’s running in some role (as some user), then access is denied.  But what if you want to create a policy which is mostly allow-by-default and then you just add a few denials here and there?  While RBAC is more suited for the opposite case, we can do something like this on a per account basis.

Let’s start with a fairly permissive policy file for root:

role admin sA
subject / rvka {
	/			rwcdmlxi
}

role default
subject / {
	/			h
	-CAP_ALL
	connect disabled
	bind    disabled
}

role root uG
role_transitions admin
role_allow_ip 0.0.0.0/0
subject /  {
	/			r
	/boot			h
#
	/bin			rx
	/sbin			rx
	/usr/bin		rx
	/usr/libexec		rx
	/usr/sbin		rx
	/usr/local/bin		rx
	/usr/local/sbin		rx
	/lib32			rx
	/lib64			rx
	/lib64/modules		h
	/usr/lib32		rx
	/usr/lib64		rx
	/usr/local/lib32	rx
	/usr/local/lib64	rx
#
	/dev			hx
	/dev/log		r
	/dev/urandom		r
	/dev/null		rw
	/dev/tty		rw
	/dev/ptmx		rw
	/dev/pts		rw
	/dev/initctl		rw
#
	/etc/grsec		h
#
	/home			rwcdl
	/root			rcdl
#
	/proc/slabinfo		h
	/proc/modules		h
	/proc/kallsyms		h
#
	/run/lock		rwcdl
	/sys			h
	/tmp			rwcdl
	/var			rwcdl
#
	+CAP_ALL
	-CAP_MKNOD
	-CAP_NET_ADMIN
	-CAP_NET_BIND_SERVICE
	-CAP_SETFCAP
	-CAP_SYS_ADMIN
	-CAP_SYS_BOOT
	-CAP_SYS_MODULE
	-CAP_SYS_RAWIO
	-CAP_SYS_TTY_CONFIG
	-CAP_SYSLOG
#
	bind 0.0.0.0/0:0-32767 stream dgram tcp udp igmp
	connect 0.0.0.0/0:0-65535 stream dgram tcp udp icmp igmp raw_sock raw_proto
	sock_allow_family all
}
The syntax is pretty intuitive. The only thing not illustrated here is that a role can, and usually does, have multiple subject blocks which follow it. Those subject blocks belong only to the role that they are under, and not another.

The notion of a role is critical to understanding RBAC. Roles are like UNIX users and groups but within the RBAC system. The first role above is the admin role. It is ‘special’ meaning that it doesn’t correspond to any UNIX user or group, but is only defined within the RBAC system. A user will operate under some role but may transition to another role if the policy allows it. Transitioning to the admin role is reserved only for root above; but in general, any user can transition to any special role provided it is explicitly specified in the policy. No matter what role the user is in, he only has the UNIX privileges for his account. Those are not elevated by transitioning, but the restrictions applied to his account might change. Thus transitioning to a special role can allow a user to relax some restrictions for some special reason. This transitioning is done via gradm -a somerole and can be password protected using gradm -P somerole.

The second role above is the default role. When a user logs in, RBAC determines the role he will be in by first trying to match the user name to a role name. Failing that, it will try to match the group name to a role name and failing that it will assign the user the default role.

The third role above is the root role and it will be the main focus of our attention below.

The flags following the role name specify the role’s behavior. The ‘s’ and ‘A’ in the admin role line say, respectively, that it is a special role (ie, one not to be matched by a user or group name) and that it has extra powers that a normal role doesn’t have (eg, it is not subject to ptrace restrictions). It’s good to have the ‘A’ flag in there, but it’s not essential for most uses of this role. It’s really its subject block which makes it useful for administration. Of course, you can change the name if you want to practice a little bit of security by obfuscation. As long as you leave the rest alone, it’ll still function the same way.

The root role has the ‘u’ and the ‘G’ flags. The ‘u’ flag says that this role is to match a user by the same name, obviously root in this case. Alternatively, you can have the ‘g’ flag instead, which says to match a group by the same name. The ‘G’ flag gives this role permission to authenticate to the kernel, ie, to use gradm. Policy information is automatically added that allows gradm to access /dev/grsec so you don’t need to add those permissions yourself. Finally, the default role doesn’t and shouldn’t have any flags. If it’s not a ‘u’ or ‘g’ or ‘s’ role, then it’s a default role.

Before we jump into the subject blocks, you’ll notice a couple of lines after the root role. The first says ‘role_transitions admin’ and permits the root role to transition to the admin role. Any special roles you want this role to transition to can be listed on this line, space delimited. The second line says ‘role_allow_ip 0.0.0.0/0’. So when root logs in remotely, it will be assigned the root role provided the login is from an IP address matching 0.0.0.0/0. In this example, this means any IP is allowed. But if you had something like 192.168.3.0/24 then only root logins from the 192.168.3.0 network would get user root assigned role root. Otherwise RBAC would fall back on the default role. If you don’t have the line in there, get used to logging on at the console, because you’ll cut yourself off!
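For instance, restricting the root role to logins from that trusted network would make the role header read (a sketch based on the policy shown above; the subnet is an example):

```
role root uG
role_transitions admin
role_allow_ip 192.168.3.0/24
```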

Now we can look at the subject blocks. These define the access controls restricting processes running in the role to which those subjects belong. The name following the ‘subject’ keyword is either a path to a directory containing executables or to an executable itself. When a process is started from an executable in that directory, or from the named executable itself, then the access controls defined in that subject block are enforced. Since all roles must have the ‘/’ subject, all processes started in a given role will at least match this subject. You can think of this as the default if no other subject matches. However, additional subject blocks can be defined which further modify restrictions for particular processes. We’ll see this towards the end of the article.

Let’s start by looking at the ‘/’ subject for the default role since this is the most restrictive set of access controls possible. The block following the subject line lists the objects that the subject can act on and what kind of access is allowed. Here we have ‘/ h’ which says that every file in the file system starting from ‘/’ downwards is hidden from the subject. This includes read/write/execute/create/delete/hard link access to regular files, directories, devices, sockets, pipes, etc. Since pretty much everything is forbidden, no process running in the default role can look at or touch the file system in any way. Don’t forget that, since the only role that has a corresponding UNIX user or group is the root role, this means that every other account is simply locked out. However the file system isn’t the only thing that needs protecting since it is possible to run, say, a malicious proxy which simply bounces evil network traffic without ever touching the filesystem. To control network access, there are the ‘connect’ and ‘bind’ lines that define what remote addresses/ports the subject can connect to as a client, or what local addresses/ports it can listen on as a server. Here ‘disabled’ means no connections or bindings are allowed. Finally, we can control what Linux capabilities the subject can assume, and -CAP_ALL means they are all forbidden.

Next, let’s look at the ‘/’ subject for the admin role. This, in contrast to the default role, is about as permissive as you can get. First thing we notice is the subject line has some additional flags ‘rvka’. Here ‘r’ means that we relax ptrace restrictions for this subject, ‘a’ means we do not hide access to /dev/grsec, ‘k’ means we allow this subject to kill protected processes and ‘v’ means we allow this subject to view hidden processes. So ‘k’ and ‘v’ are interesting and have counterparts ‘p’ and ‘h’ respectively. If a subject is flagged as ‘p’ it means its processes are protected by RBAC and can only be killed by processes belonging to a subject flagged with ‘k’. Similarly processes belonging to a subject marked ‘h’ can only be viewed by processes belonging to a subject marked ‘v’. Nifty, eh? The only object line in this subject block is ‘/ rwcdmlxi’. This says that this subject can ‘r’ead, ‘w’rite, ‘c’reate, ‘d’elete, ‘m’ark as setuid/setgid, hard ‘l’ink to, e’x’ecute, and ‘i’nherit the ACLs of the subject which contains the object. In other words, this subject can do pretty much anything to the file system.

Finally, let’s look at the ‘/’ subject for the root role. It is fairly permissive, but not quite as permissive as the previous subject. It is also more complicated and many of the object lines are there because gradm does a sanity check on policy files to help make sure you don’t open any security holes. Notice that here we have ‘+CAP_ALL’ followed by a series of ‘-CAP_*’. Each of these were included otherwise gradm would complain. For example, if ‘CAP_SYS_ADMIN’ is not removed, an attacker can mount filesystems to bypass your policies.

So I won’t go through this entire subject block in detail, but let me highlight a few points. First consider these lines

	/			r
	/boot			h
	/etc/grsec		h
	/proc/slabinfo		h
	/proc/modules		h
	/proc/kallsyms		h
	/sys			h
The first line gives ‘r’ead access to the entire file system but this is too permissive and opens up security holes, so we negate that for particular files and directories by ‘h’iding them. With these access controls, if the root user in the root role does ls /sys you get

# ls /sys
ls: cannot access /sys: No such file or directory
but if the root user transitions to the admin role using gradm -a admin, then you get

# ls /sys/
block  bus  class  dev  devices  firmware  fs  kernel  module
Next consider these lines:

	/bin			rx
	/sbin			rx
	...
	/lib32			rx
	/lib64			rx
	/lib64/modules		h
Since the ‘x’ flag is inherited by all the files under those directories, this allows processes like your shell to execute, for example, /bin/ls or /lib64/ld-2.21.so. The ‘r’ flag further allows processes to read the contents of those files, so one could do hexdump /bin/ls or hexdump /lib64/ld-2.21.so. Dropping the ‘r’ flag on /bin would stop you from hexdumping the contents, but it would not prevent execution nor would it stop you from listing the contents of /bin. If we wanted to make this subject a bit more secure, we could drop ‘r’ on /bin and not break our system. This, however, is not the case with the library directories. Dropping ‘r’ on them would break the system since library files need to have readable contents to be loaded, as well as be executable.

Now consider these lines:

        /dev                    hx
        /dev/log                r
        /dev/urandom            r
        /dev/null               rw
        /dev/tty                rw
        /dev/ptmx               rw
        /dev/pts                rw
        /dev/initctl            rw
The ‘h’ flag will hide /dev and its contents, but the ‘x’ flag will still allow processes to enter into that directory and access /dev/log for reading, /dev/null for reading and writing, etc. The ‘h’ is required to hide the directory and its contents because, as we saw above, ‘x’ is sufficient to allow processes to list the contents of the directory. As written, the above policy yields the following result in the root role

# ls /dev
ls: cannot access /dev: No such file or directory
# ls /dev/tty0
ls: cannot access /dev/tty0: No such file or directory
# ls /dev/log
/dev/log
In the admin role, all those files are visible.

Let’s end our study of this subject by looking at the ‘bind’, ‘connect’ and ‘sock_allow_family’ lines. Note that the addresses/ports include a list of allowed transport protocols from /etc/protocols. One gotcha here is to make sure you include port 0 for icmp! The ‘sock_allow_family’ allows all socket families, including unix, inet, inet6 and netlink.

Now that we understand this policy, we can proceed to add isolated restrictions to our mostly permissive root role. Remember that the system is totally restricted for all UNIX users except root, so if you want to allow some ordinary user access, you can simply copy the entire role, including the subject blocks, and just rename ‘role root’ to ‘role myusername’. You’ll probably want to remove the ‘role_transitions’ line since an ordinary user should not be able to transition to the admin role. Now, suppose for whatever reason you don’t want this user to be able to list any files or directories. You can simply add a line to his ‘/’ subject block which reads ‘/bin/ls h’ and ls becomes completely unavailable for him! This particular example might not be that useful in practice, but you can use this technique, for example, if you want to restrict access to your compiler suite. Just ‘h’ all the directories and files that make up your suite and it becomes unavailable.

A more complicated and useful example might be to restrict a user’s listing of a directory to just his home. To do this, we’ll have to add a new subject block for /bin/ls. If you’re not sure where to start, you can always begin with an extremely restrictive subject block, tack it at the end of the subjects for the role you want to modify, and then progressively relax it until it works. Alternatively, you can do partial learning on this subject as described above. Let’s proceed manually and add the following:

subject /bin/ls o {
        /  h
        -CAP_ALL
        connect disabled
        bind    disabled
}
Note that this is identical to the extremely restrictive ‘/’ subject for the default role except that the subject is ‘/bin/ls’ not ‘/’. There is also a subject flag ‘o’ which tells RBAC to override the previous policy for /bin/ls. We have to override it because that policy was too permissive. Now, in one terminal execute gradm -R in the admin role, while in another terminal obtain a denial to ls /home/myusername. Checking our dmesgs we see that:

[33878.550658] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /lib64/ld-2.21.so by /bin/ls[bash:7861] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7164] uid/euid:0/0 gid/egid:0/0
Well that makes sense. We’ve started afresh denying everything, but /bin/ls requires access to the dynamic linker/loader, so we’ll restore read access to it by adding a line ‘/lib64/ld-2.21.so r’. Repeating our test, we get a seg fault! Obviously, we don’t just need read access to ld.so, we also need execute privileges. We add ‘x’ and try again. This time the denial is

[34229.335873] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /etc/ld.so.cache by /bin/ls[ls:7917] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0
[34229.335923] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /lib64/libacl.so.1.1.0 by /bin/ls[ls:7917] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0
Of course! We need ‘rx’ for all the libraries that /bin/ls links against, as well as the linker cache file. So we add lines for libc, libattr, libacl and ld.so.cache. Our final denial is

[34481.933845] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /home/myusername by /bin/ls[ls:7982] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0
All we need now is ‘/home/myusername r’ and we’re done! Our final subject block looks like this:

subject /bin/ls o {
        /                         h
        /home/myusername          r
        /etc/ld.so.cache          r
        /lib64/ld-2.21.so         rx
        /lib64/libc-2.21.so       rx
        /lib64/libacl.so.1.1.0    rx
        /lib64/libattr.so.1.1.0   rx
        -CAP_ALL
        connect disabled
        bind    disabled
}
Proceeding in this fashion, we can add isolated restrictions to our mostly permissive policy.

References:

The official documentation is The_RBAC_System.  A good reference for the role, subject and object flags can be found in these  Tables.

Template was specified incorrectly

Postby Sven Vermeulen via Simplicity is a form of art... »

After reorganizing my salt configuration, I received the following error:

[ERROR   ] Template was specified incorrectly: False
Enabling some debugging on the command gave me a slight pointer why this occurred:

[DEBUG   ] Could not find file from saltenv 'testing', u'salt://top.sls'
[DEBUG   ] No contents loaded for env: testing
[DEBUG   ] compile template: False
[ERROR   ] Template was specified incorrectly: False
I was using a single top file as recommended by Salt, but apparently it was still looking for top files in the other environments.

Yet, if I split the top files across the environments, I got the following warning:

[WARNING ] Top file merge strategy set to 'merge' and multiple top files found. Top file merging order is undefined; for better results use 'same' option
So what's all this about?

When using a single top file is preferred

If you want to stick with a single top file, then the first error is (or at least was, in my case) caused by my environments not having a fall-back definition.

My /etc/salt/master configuration file had the following file_roots setting:

file_roots:
  base:
    - /srv/salt/base
  testing:
    - /srv/salt/testing
The problem is that Salt expects a top file to be reachable through each environment. What I had to do was to set the fallback directory to the base directory again, like so:

file_roots:
  base:
    - /srv/salt/base
  testing:
    - /srv/salt/testing
    - /srv/salt/base
With this set, the error disappeared and both salt and myself were happy again.

When multiple top files are preferred

If you really want to use multiple top files (which is also a use case in my configuration), then first we need to make sure that the top files of all environments correctly isolate the minion matches. If two environments match the same minion, then this approach becomes more troublesome.

On the one hand, we can just let saltstack merge the top files (the default behavior), but the order of the merging is undefined (and no, you can't set it using env_order), which might result in salt states being executed in an unexpected order. If the definitions are done to such an extent that this is not a problem, then you can just ignore the warning. See also bug 29104 about the warning itself.

But better would be to have the top files of the environment(s) isolated so that each environment top file completely manages the entire environment. When that is the case, then we tell salt that only the top file of the affected environment should be used. This is done using the following setting in /etc/salt/master:

top_file_merging_strategy: same
If this is used, then the env_order setting is used to define in which order the environments are processed.
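Putting those two settings together, the relevant part of /etc/salt/master could look like this (a sketch; the environment names follow the earlier file_roots example):

```
# /etc/salt/master
top_file_merging_strategy: same
env_order:
  - base
  - testing
```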

Oh and if you're using salt-ssh, then be sure to set the environment of the minion in the roster file, as there is no running minion on the target system that informs salt about the environment to use otherwise:

# In /etc/salt/roster
testserver:
  host: testserver.example.com
  minion_opts:
    environment: testing

Using salt-ssh with agent forwarding

Postby Sven Vermeulen via Simplicity is a form of art... »

Part of a system's security is to reduce the attack surface. Following this principle, I want to see if I can switch from using regular salt minions for a saltstack managed system set towards salt-ssh. This would allow some system management to be done over SSH instead of ZeroMQ.

I'm not confident yet that this is a solid approach to take (performance is also important, and it is greatly reduced with salt-ssh), and the exposure of the salt minions over ZeroMQ is not all that insecure to begin with (especially not when a local firewall ensures that only connections from the salt master are allowed). But playing doesn't hurt.

Using SSH agent forwarding

Anyway, I quickly got stuck with accessing minions over the SSH interface as it seemed that salt requires its own SSH keys (I don't enable password-only authentication, most of the systems use the AuthenticationMethods approach to chain both key and passwords). But first things first, the current target uses regular ssh key authentication (no chained approach, that's for later). But I don't want to assign such a powerful key to my salt master (especially not if it would later also document the passwords). I would like to use SSH agent forwarding.

Luckily, salt does support that; it just forgot to document it. Basically, what you need to do is update the roster file with the priv: parameter set to agent-forwarding:

myminion:
  host: myminion.example.com
  priv: agent-forwarding
It will use the known_hosts file of the currently logged on user (the one executing the salt-ssh command) so make sure that the system's key is already known.

~$ salt-ssh myminion test.ping
myminion:
    True

Crooks Steal, Sell Verizon Enterprise Customer Data

Postby BrianKrebs via Krebs on Security »

Verizon Enterprise Solutions, a B2B unit of the telecommunications giant that gets called in to help Fortune 500’s respond to some of the world’s largest data breaches, is reeling from its own data breach involving the theft and resale of customer data, KrebsOnSecurity has learned.

Earlier this week, a prominent member of a closely guarded underground cybercrime forum posted a new thread advertising the sale of a database containing the contact information on some 1.5 million customers of Verizon Enterprise.

The seller priced the entire package at $100,000, but also offered to sell it off in chunks of 100,000 records for $10,000 apiece. Buyers also were offered the option to purchase information about security vulnerabilities in Verizon’s Web site.

Contacted about the posting, Verizon Enterprise told KrebsOnSecurity that the company recently identified a security  flaw in its site that permitted hackers to steal customer contact information, and that it is in the process of alerting affected customers.

“Verizon recently discovered and remediated a security vulnerability on our enterprise client portal,” the company said in an emailed statement. “Our investigation to date found an attacker obtained basic contact information on a number of our enterprise customers. No customer proprietary network information (CPNI) or other data was accessed or accessible.”

The seller of the Verizon Enterprise data offers the database in multiple formats, including the database platform MongoDB, so it seems likely that the attackers somehow forced the MongoDB system to dump its contents. Verizon has not yet responded to questions about how the breach occurred, or exactly how many customers were being notified.

The irony in this breach is that Verizon Enterprise is typically the one telling the rest of the world how these sorts of breaches take place.  I frequently recommend Verizon’s annual Data Breach Investigations Report (DBIR) because each year’s is chock full of interesting case studies from actual breaches, case studies that include hard lessons which mostly age very well (i.e., even a DBIR report from four years ago has a great deal of relevance to today’s security challenges).

According to the 2015 report, for example, Verizon Enterprise found that organized crime groups were the most frequently seen threat actor for Web application attacks of the sort likely exploited in this instance. “Virtually every attack in this data set (98 percent) was opportunistic in nature, all aimed at easy marks,” the company explained.

It’s a fair bet that if cyber thieves buy all or some of the Verizon Enterprise customer database, some of those customers may be easy marks for phishing and other targeted attacks. Even if it is limited to the contact data for technical managers at companies that use Verizon Enterprise Solutions, this is bound to be a target-rich list: According to Verizon’s page at Wikipedia, some 99 percent of Fortune 500 companies are using Verizon Enterprise Solutions.

Phishing Victims Muddle Tax Fraud Fight

Postby BrianKrebs via Krebs on Security »

Many U.S. citizens are bound to experience delays in getting their tax returns processed this year, thanks largely to more stringent controls enacted by Uncle Sam and the states to block fraudulent tax refund requests filed by identity thieves. A steady drip of corporate data breaches involving phished employee W-2 information is adding to the backlog, as is an apparent mass adoption by ID thieves of professional tax services for processing large numbers of phony refund requests.


According to data released this week by anti-fraud company iovation, the Internal Revenue Service is taking up to three times longer to review 2015 tax returns compared to past years.

Julie Magee, commissioner of Alabama’s Department of Revenue,  said much of the delay this year at the state level is likely due to new “fraud filters” the states have put in place with Gentax, a return processing and auditing system used by about half of U.S. state revenue departments. If the states can’t outright deny a suspicious refund request, they’ll very often deny the requested electronic bank deposit and issue a paper check to the taxpayer’s known address instead.

“Many states decided they weren’t going to start paying refunds until March 1, and on our side we’ve been using all our internal fraud resources and tools to analyze the tax return before we even put it in the queue,” Magee said. “That’s delaying refunds nationwide for the IRS and the states, and it’s pretty much going to also mean a helluva lot of paper checks are going out this year.”

The added fraud filters that states are employing take advantage of data elements shared for the first time this tax season by the major online tax preparation firms such as TurboTax. The filters look for patterns known to be associated with phony refund requests, such as how quickly the return was filed, or whether the same Internet address was seen completing multiple returns.

Magee said some of the states have been adding new fraud filters nearly every time they learn of another big breach involving large amounts of stolen or phished employee W2 data, a huge problem this tax season that has forced dozens of companies large and small to disclose data breaches over the past few weeks.

“Every time we turn around we’re getting a phone call about another breach,” Magee said. “Because of all the different breaches, the states and the IRS have been taking extreme measures to filter, filter, filter. And each time we’d get news of an additional breach, we’d start over, reprogram our fraud filters, and re-assess those returns that were not processed fully yet and those waiting to be processed.”

Magee said the Gentax software assigns each tax return a score for “wage confidence” and “identity confidence,” and that fraudulent tax refund requests usually have high wage confidence but low — if any — identity confidence. That’s because the fraudsters are filing refund requests on taxpayers for whom they already have stolen W2 information. The identity confidence in these cases is low often because the fraudsters are asking to have the money electronically deposited into an account that can’t be directly tied to the taxpayer, or they have incorrectly supplied some of the victim’s data.

“I have zero confidence that filings which match this pattern are legitimate,” Magee said. “It’s early still, but our new filtering system seems to be working. But it’s still a big unknown about the percentage of fraudulent refunds we’re not stopping.”
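Gentax's actual scoring is proprietary; as a purely hypothetical sketch of the decision rule Magee describes (names and thresholds invented), the pattern could be expressed as:

```python
def matches_fraud_pattern(wage_confidence, identity_confidence,
                          wage_threshold=0.8, identity_threshold=0.2):
    """High wage confidence (the stolen W-2 data is accurate) combined
    with low identity confidence (e.g. a deposit account that can't be
    tied to the taxpayer) is the pattern described as near-certain fraud."""
    return (wage_confidence >= wage_threshold
            and identity_confidence <= identity_threshold)
```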

MORE W2 PHISHING VICTIMS

Most states didn’t start processing returns until after March 1, which is exactly when a flood of data breaches related to phished employee W2 data began washing up. As KrebsOnSecurity first warned in mid-February, thieves have been sending targeted phishing emails to human resources and finance employees at countless organizations, spoofing a message from the CEO requesting all employee W2’s in PDF format.

In Magee’s own state, W2 phishers hauled in tax data on an estimated 180 employees of ISCO Industries in Huntsville, and some 425 employees at the EWTN Global Catholic Network in Irondale, Ala. But those are just the ones that have been made public. Magee’s office only learned of those breaches after employees at the affected organizations reached out to journalists who then wrote about the compromises.

Over the past week, KrebsOnSecurity similarly has heard from employees at a broad range of organizations that appear to have fallen victim to W2 phishing scams, including some 28,000 employees of the market research giant Kantar Group; 17,000+ employees of Sprouts Farmer’s Market; call center software provider Aspect; computer backup software maker Acronis; Kids Dental Kare in Los Angeles; Century Fence, a fencing company in Wisconsin; Nation’s Lending Corporation, a mortgage lending firm in Independence, Ohio; QTI Group, a Wisconsin-based human resources consulting company; and the jousting-and-feasting entertainment company Medieval Times.

TAX FRAUDSTERS GOING PRO?

Magee said Alabama and other states are dealing with a huge spike this year in fraudulent refund requests filed by criminals who use online software firms that specialize in selling e-filing services to tax professionals.

According to Magee, crooks first register with the IRS as “electronic return originators.” EROs are typically accountants or tax preparation firms authorized by the IRS to prepare and transmit tax returns for people and companies electronically.  Magee said thieves have been registering as EROs and then buying tax preparation software and services from firms like PETZ Enterprises to push through large numbers of phony refund requests.

“The biggest move [in refund fraud] this year is in the so-called ‘professional services applications,’ which are being flagged in high rates this year for fraud,” Magee said. “And that’s not just Alabama. A great number of other states are seeing the same thing. We have always had fraud in that area, but we’re seeing significantly higher rates of fraud there now.”

Magee said tax software prep firms should be required to conduct more due diligence on their clients.

“In the state of Alabama, you need a license to cut someone’s hair, to be a barber or a cosmetologist, but anyone can become a tax preparation professional with no certification at all,” Magee said. “The software firms are where all the fraud is going now. The criminal becomes an ERO, and then he can just sit there all day and file an unlimited number of fraudulent returns.”

PETZ did not respond to requests for comment. But Stephen Ryan, a lobbyist for the industry group American Coalition for Taxpayer Rights, said states are free to regulate tax providers as they see fit.

“If there are facts that demonstrate there is a problem such as is being alleged about unscrupulous local preparers using professional software they license, the state certainly has the sovereign authority to prosecute or regulate this,” Ryan said. “If a specific source of fraud or crimes is being locally committed, that’s a pretty easy enforcement target to focus upon. And in the unlikely case a state doesn’t have that authority, they can seek it from their legislature.”

Look for additional stories in the coming days as part of a series on tax refund fraud in 2016. Next week, I’ll take a closer look at how thieves are exploiting know-your-customer weaknesses in the prepaid card industry to launder the proceeds from refund fraud and other schemes.

bsdtalk263 - joshua stein and Brandon Mercer

Postby Mr via bsdtalk »

This episode is brought to you by ftp, the Internet file transfer program, which first appeared in 4.2BSD.

An interview with the hosts of the Garbage Podcast, joshua stein and Brandon Mercer. You can find their podcast at http://garbage.fm/

File Info: 17Min, 8MB.

Ogg Link: https://archive.org/download/bsdtalk263/bsdtalk263.ogg

Of OpenStack and uwsgi

Postby Matthew Thode (prometheanfire) via Let's Play a Game »

Why use uwsgi

Not all OpenStack services support uwsgi. However, in the Liberty timeframe it is supported as the primary way to run the Keystone API services and the recommended way of running Horizon (if you use it). Going forward, other OpenStack services will be moving to support it as well; for instance, I know that Neutron is working on it or has it completed for the Mitaka release.

Basic Setup

  • Install >=www-servers/uwsgi-2.0.11.2-r1 with the python use flag as it has an updated init script.
  • Make sure you note the group you want the webserver to use when accessing the uwsgi sockets; I chose nginx.

Configs and permissions

When defaults are available I will only note what needs to change.

uwsgi configs

/etc/conf.d/uwsgi
UWSGI_EMPEROR_PATH="/etc/uwsgi.d/"
UWSGI_EMPEROR_GROUP=nginx
UWSGI_EXTRA_OPTIONS='--need-plugins python27'
/etc/uwsgi.d/keystone-admin.ini
[uwsgi]
master = true
plugins = python27
processes = 10
threads = 2
chmod-socket = 660

socket = /run/uwsgi/keystone_admin.socket
pidfile = /run/uwsgi/keystone_admin.pid
logger = file:/var/log/keystone/uwsgi-admin.log

name = keystone
uid = keystone
gid = nginx

chdir = /var/www/keystone/
wsgi-file = /var/www/keystone/admin
/etc/uwsgi.d/keystone-main.ini
[uwsgi]
master = true
plugins = python27
processes = 4
threads = 2
chmod-socket = 660

socket = /run/uwsgi/keystone_main.socket
pidfile = /run/uwsgi/keystone_main.pid
logger = file:/var/log/keystone/uwsgi-main.log

name = keystone
uid = keystone
gid = nginx

chdir = /var/www/keystone/
wsgi-file = /var/www/keystone/main
I have Horizon running from a virtual environment, so I enabled vacuum in this config.

/etc/uwsgi.d/horizon.ini
[uwsgi]
master = true  
plugins = python27
processes = 10  
threads = 2  
chmod-socket = 660
vacuum = true

socket = /run/uwsgi/horizon.sock  
pidfile = /run/uwsgi/horizon.pid  
logger = file:/var/log/horizon/horizon.log

name = horizon
uid = horizon
gid = nginx

chdir = /var/www/horizon/
wsgi-file = /var/www/horizon/horizon.wsgi

wsgi scripts

The directories are owned by the service they contain, keystone:keystone or horizon:horizon.

/var/www/keystone/admin perms are 0750 keystone:keystone
# Copyright 2013 OpenStack Foundation
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import os

from keystone.server import wsgi as wsgi_server


name = os.path.basename(__file__)

# NOTE(ldbragst): 'application' is required in this context by WSGI spec.
# The following is a reference to Python Paste Deploy documentation
# http://pythonpaste.org/deploy/
application = wsgi_server.initialize_application(name)
/var/www/keystone/main perms are 0750 keystone:keystone
# Copyright 2013 OpenStack Foundation
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import os

from keystone.server import wsgi as wsgi_server


name = os.path.basename(__file__)

# NOTE(ldbragst): 'application' is required in this context by WSGI spec.
# The following is a reference to Python Paste Deploy documentation
# http://pythonpaste.org/deploy/
application = wsgi_server.initialize_application(name)
Note that this has paths to where I have my horizon virtual environment.

/var/www/horizon/horizon.wsgi perms are 0750 horizon:horizon
#!/usr/bin/env python
import os
import sys


activate_this = '/home/horizon/horizon/.venv/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

sys.path.insert(0, '/home/horizon/horizon')
os.environ['DJANGO_SETTINGS_MODULE'] = 'openstack_dashboard.settings'

import django.core.wsgi
application = django.core.wsgi.get_wsgi_application()
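The configs above expose uwsgi sockets for the nginx group to consume, but the post does not show the web-server side. Assuming nginx, a minimal server block pointing at the keystone_main socket might look like the following sketch (the listen port is illustrative; adjust to your deployment):

```nginx
# Hypothetical nginx frontend for the keystone main API socket above.
server {
    listen 5000;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/keystone_main.socket;
    }
}
```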

Implementing OpenPGP and S/MIME Cryptography in Trojita

Postby via jkt's blog »

Are you interested in cryptography, either as a user or as a developer? Read on -- this blogpost talks about some of the UI choices we made, as well as about the technical challenges of working with the existing crypto libraries.

The next version of Trojitá, a fast e-mail client, will support working with encrypted and signed messages. Thanks to Stephan Platz for implementing this during the Google Summer of Code project. If you are impatient, just install the trojita-nightly package and check it out today.

Here's what a signed message looks like in a typical scenario:

Some other e-mail clients show a yellow semi-warning icon when showing a message with an unknown or unrecognized key. In my opinion, that isn't a great design choice. If I as an attacker wanted to get rid of the warning, I could just as well send a faked but unsigned e-mail message. This message is signed by something, so we should probably not make this situation appear less secure than if the e-mail was not signed at all.

(Careful readers might start thinking about maintaining a persistent key association database based on the observed traffic patterns. We are aware of the upstream initiative within the GnuPG project, especially TOFU, the Trust On First Use trust model. It is pretty fresh code not available in major distributions yet, but it's definitely something to watch and evaluate in the future.)

Key management, assigning trust etc. is something which is outside of scope for an e-mail client like Trojitá. We might add some buttons for key retrieval and launching a key management application of your choice, such as Kleopatra, but we are definitely not in the business of "real" key management, cross-signatures, defining trust, etc. What we do instead is work with your system's configuration and show the results based on whether GnuPG thinks that you trust this signature. That's when we are happy to show a nice green padlock to you:

We also make a bunch of sanity checks when it comes to signatures. For example, it is important to verify that the sender of an e-mail which you are reading has an e-mail address which matches the identity of the key holder -- in other words, is the guy who sent the e-mail and the one who made the signature the same person?

If not, it would be possible for your co-worker (who you already trust) to write an e-mail message to you with a faked From header pretending to be your boss. The body of a message is signed by your colleague with his valid key, so if you forget to check the e-mail addresses, you are screwed -- and that's why Trojitá handles this for you:
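Trojitá's actual check lives in its C++ sources; purely to illustrate the rule just described, here is a hypothetical sketch in Python (names invented):

```python
def signer_matches_sender(from_address, signer_emails):
    """The address in the From header must match one of the e-mail
    addresses attached to the signing key; otherwise the signature was
    made by someone other than the apparent sender."""
    def normalize(addr):
        return addr.strip().lower()
    return normalize(from_address) in {normalize(e) for e in signer_emails}
```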

In some environments, S/MIME signatures using traditional X.509 certificates are more common than the OpenPGP (aka PGP, aka GPG). Trojitá supports them all just as easily. Here is what happens when we are curious and decide to drill down to details about the certificate chain:

Encrypted messages are of course supported, too:

We had to start somewhere, so right now, Trojitá supports only read-only operations such as signature verification and decrypting of messages. It is not yet possible to sign and encrypt new messages; that's something which will be implemented in the near future (and patches are welcome for sure).

Technical details

Originally, we were planning to use the QCA2 library because it provides a stand-alone Qt wrapper over a pluggable set of cryptography backends. The API interface was very convenient for a Qt application such as Trojitá, with native support for Qt's signals/slots and asynchronous operation implemented in a background thread. However, it turned out that its support for GnuPG, a free-software implementation of the OpenPGP protocol, leaves much to be desired. It does not really support the concept of PGP's Web of Trust, and therefore it doesn't report back how trustworthy the sender is. This means that there wouldn't be any green padlock with QCA. The library was also really slow during certain operations -- including retrieval of a single key from a keystore. It just isn't acceptable to wait 16 seconds when verifying a signature, so we had to go looking for something else.

Compared to the QCA, the GpgME++ library lives on a lower level. Its Qt integration is limited to working with QByteArray classes as buffers for gpgme's operation. There is some support for integrating with Qt's event loop, but we were warned not to use it because it's apparently deprecated code which will be removed soon.

The gpgme library supports some level of asynchronous operation, but it is a bit limited. Ultimately, someone has to do the work and consume the CPU cycles for all the crypto operations and/or at least communication to the GPG Agent in the background. These operations can take a substantial amount of time, so we cannot do that in the GUI thread (unless we wanted to reuse that discouraged event loop integration). We could use the asynchronous operations along with a call to gpgme_wait in a single background thread, but that would require maintaining our own dedicated crypto thread and coming up with a way to dispatch the results of each operation to the original requester. That is certainly doable, but in the end, it was a bit more straightforward to look into the C++11's toolset, and reuse the std::async infrastructure for launching background tasks along with a std::future for synchronization. You can take a look at the resulting code in the src/Cryptography/GpgMe++.cpp. Who can dislike lines like task.wait_for(std::chrono::duration_values::zero()) == std::future_status::timeout? :)

Finally, let me provide credit where credit is due. Stephan Platz worked on this feature during his GSoC term, and he implemented the core infrastructure around which the whole feature is built. That was the crucial point and his initial design has survived into the current implementation despite the fact that the crypto backend has changed and a lot of code was refactored.

Another big thank you goes to the GnuPG and GpgME developers who provide a nice library which works not just with OpenPGP, but also with the traditional X.509 (S/MIME) certificates. The same has to be said about the developers behind the GpgME++ library which is a C++ wrapper around GpgME with roots in the KDEPIM software stack, and also something which will one day probably move to GpgME proper. The KDE ties are still visible, and Andre Heinecke was kind enough to review our implementation for obvious screwups in how we use it. Thanks!

Hospital Declares ‘Internal State of Emergency’ After Ransomware Infection

Postby BrianKrebs via Krebs on Security »

A Kentucky hospital says it is operating in an “internal state of emergency” after a ransomware attack rattled around inside its networks, encrypting files on computer systems and holding the data on them hostage unless and until the hospital pays up.

A streaming red banner on Methodisthospital.net warns that a computer virus infection has limited the hospital’s use of electronic web-based services. Click to enlarge.
Henderson, Ky.-based Methodist Hospital placed a scrolling red alert on its homepage this week, stating that “Methodist Hospital is currently working in an Internal State of Emergency due to a Computer Virus that has limited our use of electronic web based services.  We are currently working to resolve this issue, until then we will have limited access to web based services and electronic communications.”

Jamie Reid, information systems director at the hospital, said the malware involved is known as the “Locky” strain of ransomware, a contagion that encrypts all of the important files, documents and images on an infected host, and then deletes the originals. Victims can regain access to their files only by paying the ransom, or by restoring from a backup that is hopefully not on a network which is freely accessible to the compromised computer.

In the case of Methodist Hospital, the ransomware tried to spread from the initial infection to the entire internal network, and succeeded in compromising several other systems, Reid said. That prompted the hospital to shut down all of the hospital’s desktop computers, bringing systems back online one by one only after scanning each for signs of the infection.

“We have a pretty robust emergency response system that we developed quite a few years ago, and it struck us that as everyone’s talking about the computer problem at the hospital maybe we ought to just treat this like a tornado hit, because we essentially shut our system down and reopened on a computer-by-computer basis,” said David Park, an attorney for the Kentucky healthcare center.

The attackers are demanding a mere four bitcoins in exchange for a key to unlock the encrypted files; that’s a little more than USD $1,600 at today’s exchange rate.

Park said the administration hasn’t ruled out paying the ransom.

“We haven’t yet made decision on that, we’re working through the process,” with the FBI, he said. “I think it’s our position that we’re not going to pay it unless we absolutely have to.”

The attack on Methodist comes just weeks after it was revealed that a California hospital that was similarly besieged with ransomware paid a $17,000 ransom to get its files back.

Park said the main effect of the infection has been downtime, which forced the hospital to process everything by hand on paper. He declined to say which systems were infected, but said no patient data was impacted.

“We have downtime procedures for going to a paper system anyway, so we went to that paper system,” he said. “But we don’t feel like it negatively impacted patient care. They didn’t get any patient information.”

Ransomware infections are largely opportunistic attacks that mainly prey on people who browse the Web with outdated Web browsers and/or browser plugins like Java and Adobe Flash and Reader. Most ransomware attacks take advantage of exploit kits, malicious code that, when stitched into a hacked site, probes visiting browsers for the presence of these vulnerabilities.

The attack on Methodist Hospital was another form of opportunistic attack that came in via spam email, in messages stating something about invoices and that recipients needed to open an attached (booby-trapped) file.

It’s a fair bet that as ransomware attacks and attackers mature, these schemes will slowly become more targeted. I also worry that these more deliberate attackers will take a bit more time to discern how much the data they’ve encrypted is really worth, and precisely how much the victim might be willing to pay to get it back.

Bitstream Filtering

Postby lu_zero via Luca Barbato »

Last weekend, after a few months of work, the new bitstream filter API eventually landed.

Bitstream filters

In Libav it is possible to manipulate raw and encoded data in many ways, the most common being

  • Demuxing: extracting single data packets and their timing information
  • Decoding: converting the compressed data packets into raw video or audio frames
  • Encoding: converting the raw multimedia information into a compressed form
  • Muxing: storing the compressed information along with timing information and additional information.
Bitstream filtering is somewhat less well known, even though bitstream filters are widely used under the hood to demux and mux many popular formats.

It can be considered an optional final demuxing or muxing step, since it works on encoded data and its main purpose is to reformat the data so that it can be accepted by decoders that consume only one specific serialization of the many supported (e.g. the HEVC QSV decoder), or so that it can be correctly muxed into a container format that stores only a specific kind.

In Libav this kind of reformatting normally happens automatically, with the annoying exception of MPEG-TS muxing.

New API

The new API is modeled on the pull/push paradigm I described for AVCodec before. It works on AVPackets and has the following concrete implementation:

// Query
const AVBitStreamFilter *av_bsf_next(void **opaque);
const AVBitStreamFilter *av_bsf_get_by_name(const char *name);

// Setup
int av_bsf_alloc(const AVBitStreamFilter *filter, AVBSFContext **ctx);
int av_bsf_init(AVBSFContext *ctx);

// Usage
int av_bsf_send_packet(AVBSFContext *ctx, AVPacket *pkt);
int av_bsf_receive_packet(AVBSFContext *ctx, AVPacket *pkt);

// Cleanup
void av_bsf_free(AVBSFContext **ctx);
In order to use a bsf you need to:

  • Look up its definition AVBitStreamFilter using a query function.
  • Set up a specific context AVBSFContext, by allocating, configuring and then initializing it.
  • Feed the input using the av_bsf_send_packet function and get the processed output, once it is ready, using av_bsf_receive_packet.
  • Once you are done, av_bsf_free cleans up the memory used for the context and the internal buffers.

Query

You can enumerate the available filters

void *state = NULL;

const AVBitStreamFilter *bsf;

while ((bsf = av_bsf_next(&state))) {
    av_log(NULL, AV_LOG_INFO, "%s\n", bsf->name);
}
or directly pick the one you need by name:

const AVBitStreamFilter *bsf = av_bsf_get_by_name("hevc_mp4toannexb");

Setup

A bsf may use some codec parameters and time_base and provide updated ones.

AVBSFContext *ctx;

ret = av_bsf_alloc(bsf, &ctx);
if (ret < 0)
    return ret;

ret = avcodec_parameters_copy(ctx->par_in, in->codecpar);
if (ret < 0)
    goto fail;

ctx->time_base_in = in->time_base;

ret = av_bsf_init(ctx);
if (ret < 0)
    goto fail;

ret = avcodec_parameters_copy(out->codecpar, ctx->par_out);
if (ret < 0)
    goto fail;

out->time_base = ctx->time_base_out;

Usage

Multiple AVPackets may be consumed before an AVPacket is emitted or multiple AVPackets may be produced out of a single input one.

AVPacket *pkt;

while (got_new_packet(&pkt)) {
    ret = av_bsf_send_packet(ctx, pkt);
    if (ret < 0)
        goto fail;

    while ((ret = av_bsf_receive_packet(ctx, pkt)) == 0) {
        yield_packet(pkt);
    }

    if (ret == AVERROR(EAGAIN))
        continue;
    if (ret == AVERROR_EOF)
        goto end;
    if (ret < 0)
        goto fail;
}

// Flush
ret = av_bsf_send_packet(ctx, NULL);
if (ret < 0)
    goto fail;

while ((ret = av_bsf_receive_packet(ctx, pkt)) == 0) {
    yield_packet(pkt);
}

if (ret != AVERROR_EOF)
    goto fail;
In order to signal the end of stream a NULL pkt should be fed to send_packet.

Cleanup

The cleanup function matches the av_freep signature so it takes the address of the AVBSFContext pointer.

    av_bsf_free(&ctx);
All the memory is freed and the ctx pointer is set to NULL.

Coming Soon

Hopefully next I’ll document the new HWAccel layer that already landed, and some other APIs that I discussed with Kostya before.
Sadly, my blog time (and spare time in general) has shrunk a lot in the past months, so he rightfully blamed me a lot.

Carders Park Piles of Cash at Joker’s Stash

Postby BrianKrebs via Krebs on Security »

A steady stream of card breaches at retailers, restaurants and hotels has flooded underground markets with a historic glut of stolen debit and credit card data. Today there are at least hundreds of sites online selling stolen account data, yet only a handful of them actively court bulk buyers and organized crime rings. Faced with a buyer’s market, these elite shops set themselves apart by focusing on loyalty programs, frequent-buyer discounts, money-back guarantees and just plain old good customer service.

An ad for new stolen cards on Joker’s Stash.
Today’s post examines the complex networking and marketing apparatus behind “Joker’s Stash,” a sprawling virtual hub of stolen card data that has served as the distribution point for accounts compromised in many of the retail card breaches first disclosed by KrebsOnSecurity over the past two years, including Hilton Hotels and Bebe Stores.

Since opening for business in early October 2014, Joker’s Stash has attracted dozens of customers who’ve spent five- and six-figure sums at the carding store. All customers are buying card data that will be turned into counterfeit cards and used to fraudulently purchase gift cards, electronics and other goods at big-box retailers like Target and Wal-Mart.

Unlike so many carding sites that mainly resell cards stolen by other hackers, Joker’s Stash claims that all of its cards are “exclusive, self-hacked dumps.”

“This mean – in our shop you can buy only our own stuff, and our stuff you can buy only in our shop – nowhere else,” Joker’s Stash explained on an introductory post on a carding forum in October 2014.

“Just don’t wanna provide the name of victim right here, and bro, this is only the begin[ning], we already made several other big breaches – a lot of stuff is coming, stay tuned, check the news!” the Joker went on, in response to established forum members who were hazing the new guy. He continued:

“I promise u – in few days u will completely change your mind and will buy only from me. I will add another one absolute virgin fresh new zero-day db with 100%+1 valid rate. Read latest news on http://krebsonsecurity.com/ – this new huge base will be available in few days only at Joker’s Stash.”

As a business, Joker’s Stash made good on its promise. It’s now one of the most bustling carding stores on the Internet, often adding hundreds of thousands of freshly stolen cards for sale each week.

A true offshore pirate’s haven, its home base is a domain name ending in “.sh”. Dot-sh is the country code top level domain (ccTLD) assigned to the tiny volcanic, tropical island of Saint Helena, but anyone can register a domain ending in dot-sh. St. Helena is on Greenwich Mean Time (GMT) — the same time zone used by this carding Web site. However, it’s highly unlikely that any part of this fraud operation is in Saint Helena, a remote British territory in the South Atlantic Ocean that has a population of just over 4,000 inhabitants.

This fraud shop includes a built-in discount system for larger orders: 5 percent for customers who spend between $300-$500; 15 percent off for fraudsters spending between $1,000 and $2,500; and 30 percent off for customers who top up their bitcoin balances to the equivalent of $10,000 or more.
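The quoted tiers can be written out as a small function (a sketch; the behavior for amounts between the quoted ranges, and the treatment of the top tier as a balance top-up rather than a single order, are assumptions):

```python
def bulk_discount(amount_usd):
    """Discount tiers as described in the article: 5% for $300-$500,
    15% for $1,000-$2,500, 30% for balances of $10,000 or more."""
    if amount_usd >= 10_000:
        return 0.30
    if 1_000 <= amount_usd <= 2_500:
        return 0.15
    if 300 <= amount_usd <= 500:
        return 0.05
    return 0.0
```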

For its big-spender “partner” clients, Joker’s Stash assigns three custom domain names to each partner. After those partners log in, the different 3-word domains are displayed at the top of their site dashboard, and the user is encouraged to use only those three custom domains to access the carding shop in the future (see screenshot below). More on these three domains in a moment.

The dashboard for a Joker’s Stash customer who has spent over $10,000 buying stolen credit cards from the site. Click image to enlarge.

REFUNDS AND CUSTOMER LOYALTY BONUSES

Customers pay for stolen cards using Bitcoin, a virtual currency. All sales are final, although some batches of stolen cards for sale at Joker’s Stash come with a replacement policy — a short window of time from minutes to a few hours, generally — in which buyers can request replacement cards for any that come back as declined during that replacement timeframe.

Like many other carding shops, Joker’s Stash also offers an a-la-carte card-checking option that customers can use as an insurance policy when purchasing stolen cards. Such checking services usually rely on multiple legitimate, compromised credit card merchant accounts that can be used to round-robin process a small charge against each card the customer wishes to purchase to test whether the card is still valid. Customers receive an automatic credit to their shopping cart balances for any purchased cards that come back as declined when run through the site’s checking service.

This carding site also employs a unique rating system for clients, supposedly to prevent abuse of the service and to provide what the proprietors of this store call “a loyalty program for honest partners with proven partner’s record.”

According to Joker’s Stash administrators, customers with higher ratings get advance notice of new batches of stolen cards coming up for sale, prioritized support requests, as well as additional time to get refunds on cards that came back as “declined” or closed by the issuing bank shortly after purchase.

To determine a customer’s loyalty rating, the system calculates the sum of all customer deposits minus the total refunds requested by the customer.

“So if you have deposited $10,000 USD and refunded items for $3,000 USD then your rating is: 10,000 – 3,000 = 7,000 = 7k [Gold rating – you are the king],” Joker’s Stash explains. “If this is the case then new bases will become available for your purchase earlier than for others thanks to your high rating. It gives you ability to see and buy new updates before other people can do that, as well as some other privileges like prioritized support.”
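The arithmetic is simple enough to state directly; this sketch just restates the shop's own formula and example:

```python
def loyalty_rating(total_deposits_usd, total_refunds_usd):
    """Rating = lifetime deposits minus lifetime refund requests,
    per the shop's own description."""
    return total_deposits_usd - total_refunds_usd

# The shop's example: $10,000 deposited, $3,000 refunded -> 7,000 ("Gold").
```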

This user has a stellar 16,000+ rating, because he’s deposited more than $20,000 and only requested refunds on $3,500 worth of stolen cards. Click image to enlarge.

HIGH ROLLERS

It would appear that Joker’s Stash has attracted a large number of high-dollar customers, and a good many of them qualify for the elite, “full stash” category reserved for clients who’ve deposited more than $10,000 and haven’t asked for more than about 30 percent of those cards to be refunded or replaced. KrebsOnSecurity has identified hundreds of these three-word domains that the card site has assigned to customers. They were mostly registered across an array of domain registrars over the past year, and nearly all are (ab)using services from a New Jersey-based cloud hosting firm called Vultr Holdings.

All customers — be they high-roller partners or one-card-at-a-time street thugs — are instructed on how to log in to the site with software that links users to the Tor network. Tor is a free anonymity network that routes its users’ encrypted traffic between multiple hops around the globe to obscure their real location online.

The site’s administrators no doubt very much want all customers to use the Tor version of the site as opposed to domains reachable on the open Internet. Carding site domain names get seized all the time, but it is far harder to discover and seize a site or link hosted on Tor.

What’s more, switching domain names all the time puts carding shop customers in the crosshairs of phishers and other scam artists. While customers are frantically searching for the shop’s updated domain name, fraudsters step in to take advantage of the confusion and to promote counterfeit versions of the site that phish account credentials from unwary criminals.

Nicholas Weaver, a senior researcher in networking and security for the International Computer Science Institute (ICSI), said it looks like the traffic from the three-word domains that Joker’s Stash assigns to each user gets routed through the same Tor hidden servers.

“What he appears to be doing is throwing up an Nginx proxy on each Internet address he’s using to host the domain sets given to users,” Weaver said. “This communicates with his back end server, which is also reachable as one of two Tor hidden services. And both are the same server: If you add to your shopping cart in Tor, it shows up instantly in the clearnet version of the site, and the same with removing cards. So my conclusion is both clearnet and Tornet are the same server on the back end.”

By routing all three-word partner domains through a server hidden on Tor, the Joker’s Stash administrators seem to understand that many customers can’t be bothered to run Tor, and if forced to will just go to a competing site that allows direct access via a regular, non-Tor-based Internet connection.

“My guess is [Joker’s Stash] would like everyone to go to Tor, but they know that Tor is a pain, so they’re using the clearnet because that is what customers demand,” Weaver said.

Interestingly, this setup suggests several serious operational security failures by the Joker’s Stash staff. For example, while Tor encrypts data at every hop in the network, none of the partner traffic from any of the custom three-word domains is encrypted by default on its way to the Tor version of the site. To their credit, the site administrators do urge users to change this default setting by replacing http:// with https:// in front of their private domains.

A web page lists the various ways to reach the carding forum on the clearnet or via Tor. The links have been redacted.
I’ll have more on Joker’s Stash in an upcoming post. In the meantime, if you enjoyed this story, check out a deep dive I did last year into “McDumpals,” another credit card fraud bazaar that caters to bulk buyers and focuses heavily on customer service.
Top

Reverse engineering the FreeStyle Libre CGM, chapter 1

Postby Flameeyes via Flameeyes's Weblog »

I have already reviewed the Abbott FreeStyle Libre continuous glucose monitor, and I have hinted that I had already started reverse engineering the protocol it uses to communicate with the (Windows) software. I should also point out that for once the software does provide significant value, as they seem to have spent more effort on the data analysis than on any other part of it.

Please note that this is just the first part for this device. Unlike the previous blog posts, I have not yet managed to get even partial information downloaded with my script as I write and post this. Indeed, if you have any suggestions of things I have not tried yet as you read this, please do let me know.

Since at this point it's getting common, I started up the sniffer and captured everything from the first transaction. As is to be expected, the amount of data in these transactions is significantly higher than that of the other glucometers. Even if you were taking seven blood samples a day for months with one of the other glucometers, it would take a much longer time to accumulate the same number of readings as this sensor, which takes 96 readings a day by itself, plus the spot-checks and the notes and information added to comment on them.

The device presents itself as a standard HID device, which is a welcome change from the craziness of SCSI-based hidden message protocols. The messages within are of course not defined in any standard, so inspecting them becomes interesting.

It took me a while to figure out what the data that the software was already decoding for me meant. At first I thought I would have to use magic constants and libusb to speak raw USB to the device — indeed, a quick glance around Xavier's work showed me that there were plenty of similarities, and he's including quite a few magic constants in that code. Luckily for me, after managing to query the device with python-libusb1, which was quite awkward as I also had to fix it to work, I realized that I was essentially reimplementing hidraw access.

After rewriting the code to use /dev/hidraw1 (which makes it significantly simpler), I also managed to understand that the device uses exactly the same initialization procedure as the FreeStyle InsuLinx that Xavier already implemented, and similar but not identical command handling (some of the commands match, and some even match the Optium, at least in format.)

Indeed, the device seems to respond to two general classes of commands: text commands and binary commands; this is the first device I've reverse engineered with such a hybrid protocol. Text commands also have the same checksumming as both the Optium and Neo protocols.

The messages are always transferred in 64-byte packets, even though the second byte of the message declares the actual significant length, which can even be zero. Neither the software nor the device zeroes out its buffers before writing new command/response packets, so there is lots of noise in those packets.
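A minimal sketch of parsing that framing (Python; the field layout follows the description above, the helper name is mine):

```python
def parse_report(report: bytes):
    """Split a 64-byte HID report into (message type, significant payload).

    Byte 0 is the message type, byte 1 the number of significant bytes
    that follow; anything past that length is stale buffer noise.
    """
    assert len(report) == 64
    msg_type = report[0]
    length = report[1]
    return msg_type, report[2:2 + length]

# Example: a type 0x0B message carrying three significant bytes, the
# rest of the packet being leftover garbage from earlier traffic.
report = bytes([0x0B, 0x03, 0xAA, 0xBB, 0xCC]) + bytes(59)
assert parse_report(report) == (0x0B, b'\xaa\xbb\xcc')
```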

I've decided that the custom message framing and its usage of HID are significant enough to warrant being documented by themselves, so I did that for now, although I have not managed to complete the reverse engineering of the protocol.

The rest of the protocol kept baffling me. Some of the commands appear to include a checksum, and are ignored if they are not sent correctly. Others actually seem to append to an error buffer that you can somehow access (but probably more by mistake than by design), and in at least one case I managed to "crash" the device, which asked me to turn it off and on again. I have thus decided to stop trying to send random messages to it for a while.

I have not been pouring as much time into this as I was considering before, what with coming down with a bad flu, being oncall, and having visitors in town, so I have only been looking at traces from time to time, recording all of them as I downloaded more data. What still confuses me is that the commands sent from the software are not constant across different calls, but I couldn't really make heads or tails of it.

Then yesterday I caught a break — I really wanted to figure out at least if it was encoding or compressing the data, so I started looking for at least a sequence of numbers, by transcribing the device's logbook into hexadecimal and looking in the traces for them.

This is not as easy as it might sound, because I have a British device — in the UK, Ireland and Australia the measure of blood sugar is given in mmol/l rather than the much more common mg/dl. There is a stable conversion between the two units (you multiply the former by 18 to get the latter), but this conversion usually happens on display. All the devices I have used up to now have been storing and sending values in mg/dl over the wire, and only converting when the data is shown, usually by providing some value within the protocol to specify which unit the device is set to use. Because of this conversion issue, and the fact that I only had access to the values in mmol/l, I usually had two different candidates for each of the readings, as I wasn't sure how the rounding happened.
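The rounding ambiguity is easy to see concretely; a sketch of how the candidate values come about (Python, my own helper, nothing Abbott-specific):

```python
# mmol/l to mg/dl: multiply by 18. Since the displayed mmol/l values are
# rounded to one decimal place, a single displayed value can correspond
# to more than one raw mg/dl byte value, hence multiple candidates.
def mgdl_candidates(mmol: float) -> set[int]:
    # every mg/dl value that rounds back to this displayed mmol/l value
    return {v for v in range(0, 600) if round(v / 18, 1) == round(mmol, 1)}

assert 99 in mgdl_candidates(5.5)          # 5.5 mmol/l ~ 99 mg/dl
assert mgdl_candidates(5.6) == {100, 101}  # two possible raw values
```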

The break happened when I was going through the software's interface, trying to get the latest report data to at least match the reading timing difference, so that I could look for what might appear like a timestamp in the transcript. Instead, I found the "Export" function. The exported file is a comma-separated values file, which includes all readings, including those by the sensor, rather than just the spot-checks I could see from the device interface and in the export report. Not only that, but it includes a "reading ID", which was interesting because it started from a value a bit over 32000, and is not always sequential. This was lucky.

I imported the CSV into Google Sheets, then added columns next to the ID and glucose readings. The latter were multiplied by 18 to get the value in mg/dl (yes, the export feature still uses mmol/l; I think it might be some certification requirement), and then the whole lot was converted to hexadecimal (hint: Google Sheets and LibreOffice have a DEC2HEX function that does that for you.) Now I had something interesting to search for: the IDs.

Now, I have to point out that the output I have from USBlyzer is a CSV file that includes the hexdump of the USB packets that are being exchanged. I already started writing a set of utilities (too rough to be published though) to convert those into a set of binary files (easier to bgrep or binwalk them) or hexdump-like transcripts (easier to recognize strings.) I wrote both a general "full USB transcript" script as well as a "Verio-specific USB transcript" while I was working on my OneTouch meter, so I wrote one for the Abbott protocol, too.

Because of the way that works, of course, it is not completely obvious from the text transcript whether any multi-byte value is present, as it might be split across a message boundary. One would think values wouldn't cross boundaries, since that would mean there are odd-sized records, but indeed that is the case for this device at least. Indeed it took me a few tries with IDs found in the CSV file to find one in the USB transcript.

And even after finding one, the question was how to figure out the record format. What I have done in the past when reverse engineering binary formats was to print a dump of the binary I'm looking at on a piece of paper, and start doodling on it, trying to mark similar parts of the message. I don't have a printer in Dublin, so I decided to do a paperless version of the same, by taking a screenshot of a fragment of transcript and loading it into a drawing app on my tablet. It's not quite as easy, but it does make sharing results easier, and thanks to layers it's even easier to try and fail.

I made a mistake with the screenshot by not keeping the command this was a reply to in the picture — this will become more relevant later. Because of the size limit in the HID-based framing protocol Abbott uses, many commands reply with more than one message – although I have not understood yet how it signals a continuation – so in this case the three messages (separated by a white line) are in response to a single command (which, by the way, is neither the first nor the last in a long series.)

The first thing I wanted to identify in the response was all the reading IDs; the one I searched for is marked in black in the screenshot, the others are marked in the same green tone. As you can see they are not (all) sequential; the values are written down as little-endian, by the way. The next step was to figure out the reading values, which are marked in pink in the image. While the image itself has no value higher than 255 (and thus none needing more than one byte to represent it), it "looked fair" to assume little-endian here too. It was also easy to confirm: as noted in my review, I had a flu while wearing the sensor, so by filtering for readings over 14 mmol/l I was able to find an example of a 16-bit reading.
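The little-endian decoding can be sketched as follows (Python; the example byte pairs are taken from the values discussed in this post):

```python
import struct

# Reading IDs and glucose values are little-endian 16-bit words; a value
# over 255 (e.g. a reading during a hyper) spills into a second byte.
def le16(data: bytes, offset: int = 0) -> int:
    return struct.unpack_from('<H', data, offset)[0]

assert le16(bytes([0xE0, 0x8F])) == 0x8FE0  # a record ID from the trace
assert le16(bytes([0x01, 0x01])) == 257     # 257 mg/dl ~ 14.3 mmol/l
```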

The next thing I noted was the "constant" 0C 80, which might include some flags for the reading; I have not decoded it yet, but it's an easy way to find most of the other IDs anyway. Following from that, I needed to find an important value, as it could allow decoding many other record types just by being present: the timestamp of the reading. The good thing about timestamps is that they tend to stay similar for a relatively long time: the two highest bytes are the same for most of a day, and the higher of those is usually the same for a long while. Unfortunately, looking for the hex representation of the Unix timestamp at the time yielded nothing, but that was not so surprising, given how I found usage of a "newer" epoch in the Verio device I looked at earlier.

Now, since I have the exported data, I know not only the reading ID but also the timestamp it reports, which does not include seconds. I also know that since the readings are (usually) taken at 15-minute intervals, if they are expressed in seconds since a given epoch, the numbers should increment by 900 between readings. Knowing this and doing some mental pattern matching, it became easy to see where the timestamps had been hiding; they are marked in blue in the image above. I'll get back to the epoch.
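The heuristic just described is mechanical enough to sketch in a few lines (Python; the example value is the CC B8 DE 05 timestamp that appears later in this post):

```python
import struct

# Sensor readings are taken every 15 minutes, so candidate 32-bit
# little-endian timestamps should appear in the stream 900 seconds apart.
def looks_like_timestamps(stream: bytes, offsets) -> bool:
    values = [struct.unpack_from('<I', stream, o)[0] for o in offsets]
    deltas = [b - a for a, b in zip(values, values[1:])]
    return all(d == 900 for d in deltas)

# Two consecutive (hypothetical) readings 900 seconds apart:
stream = struct.pack('<I', 0x05DEB8CC) + struct.pack('<I', 0x05DEB8CC + 900)
assert looks_like_timestamps(stream, [0, 4])
```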

At this point, I still have not figured out where the record starts and ends — from the image it might appear that it starts with the record ID, but remember I took this piece of transcript mid-stream. What I can tell is that the length of the record is not only not a multiple of eight (the bytes in hexdump are grouped by eight) but it is odd, which, by itself, is fairly odd (pun intended.) This can be told by noticing how the colouring crosses the mid-row spacing, for 0c 80, for reading values and timestamps alike.

Even more interesting, not only can the records cross the message boundaries (see record 0x8fe0, for which the 0x004b value is in the next message over), but so do the fields. Indeed, you can see on the third message that a timestamp ends abruptly at the end of the message. This wouldn't be much of an issue if it weren't that it provides us with one more piece of information to decode the stream.

As I said earlier, timestamps change progressively, and in particular reading records shouldn't usually be more than 900 seconds apart, which means only the lower two bytes change that often. Since the device uses little-endian to encode the numbers, the higher bytes are at the end of the encoded sequence, which means 4B B5 DE needs to terminate with 05, just like CC B8 DE 05 before it. But the next time we encounter 05 is at position nine of the following message; what gives?

The first two bytes of the message, if you checked the protocol description linked earlier, describe the message type (0B) and the number of significant bytes following (out of the USB packet); in this case 3E means the whole rest of the packet is significant. Following that there are six bytes (highlighted turquoise in the image), and here is where things get a bit more confusing.

You can actually see how discarding those six bytes from each message now gives us a stream of records that are at least fixed-length (except the last one, which is truncated; this means the commands are requesting continuous sequences, rather than blocks of records.) Those six bytes now become interesting, together with the inbound command.
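Putting the pieces together, the reassembly step can be sketched like this (Python; the split of the six bytes into sequence tag and presumed checksum is my reading of the traces, not confirmed):

```python
def reassemble(messages) -> bytes:
    """Concatenate the record stream out of a series of 64-byte replies,
    dropping the framing (type + length byte) and the six undecoded
    bytes (sequence tag plus presumed checksum) from each message."""
    stream = b''
    for msg in messages:
        length = msg[1]              # significant bytes after the header
        payload = msg[2:2 + length]  # the rest is stale buffer noise
        stream += payload[6:]        # skip the six-byte per-message prefix
    return stream

# Two hypothetical replies whose record bytes join across the boundary:
m1 = bytes([0x0B, 10]) + bytes(6) + b'REC1' + bytes(52)
m2 = bytes([0x0B, 10]) + bytes(6) + b'REC2' + bytes(52)
assert reassemble([m1, m2]) == b'REC1REC2'
```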

The command that was sent just before receiving this response was 0D 04 A5 13 00 00. Once again the first two bytes are only partially relevant (message type 0D, followed by four significant bytes.) But A5 13 is interesting, since the first message of the reply starts with 13 A6, and the next three messages increment the second byte each. Indeed, the software follows these with 0D 04 A9 13 00 00, which matches the 13 A9 at the start of the last response message.
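If that reading is right, paging through the records could be sketched as follows (Python; entirely hypothetical, extrapolated only from the two observed commands):

```python
# Hypothetical: a request is message type 0x0D with four significant
# bytes, where the third byte looks like a cursor advanced past the
# sequence tags seen in the previous batch (A5 13 ... then A9 13 ...).
def next_request(cursor: int) -> bytes:
    body = bytes([0x0D, 0x04, cursor & 0xFF, 0x13, 0x00, 0x00])
    return body + bytes(64 - len(body))  # pad to a full 64-byte HID report

assert next_request(0xA5)[:6] == bytes.fromhex('0D04A5130000')
assert next_request(0xA9)[:6] == bytes.fromhex('0D04A9130000')
```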

What the other four bytes mean is still quite the mystery. My assumption right now is that they are some form of checksum. The reason is to be found in a different set of messages:

>>>> 00000000: 0D 04 5F 13 00 00                                 .._...

<<<< 00000000: 0B 3E 10 5F 34 EC 5A 6D  00 00 00 00 00 00 00 00  .>._4.Zm........
<<<< 00000010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000020: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000030: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................

<<<< 00000000: 0B 3E 10 60 34 EC 5A 6D  00 00 00 00 00 00 00 00  .>.`4.Zm........
<<<< 00000010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000020: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000030: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................

<<<< 00000000: 0B 3E 10 61 34 EC 5A 6D  00 00 00 00 00 00 00 00  .>.a4.Zm........
<<<< 00000010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000020: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000030: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................

<<<< 00000000: 0B 3E 10 62 34 EC 5A 6D  00 00 00 00 00 00 00 00  .>.b4.Zm........
<<<< 00000010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000020: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000030: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................

<<<< 00000000: 0B 3E 10 63 E8 B6 84 09  00 00 00 00 00 00 00 00  .>.c............
<<<< 00000010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000020: 00 00 00 00 9A 39 65 70  99 51 09 30 4D 30 30 30  .....9ep.Q.0M000
<<<< 00000030: 30 37 52 4B 35 34 00 00  01 00 02 A0 9F DE 05 FC  07RK54..........
In this set of replies, there are two significant differences compared to the ones with records earlier. The first is that while the command lists 5F 13, the replies start with 10 5F, so that not only does 13 become 10, but 5F is not incremented until the next message, making it unlikely for the two bytes to form a single 16-bit word. The second is that there are at least four messages with identical payloads (fifty-six bytes of value zero). And despite the fourth byte of the message changing progressively, the following four bytes stay the same. This makes me think it's a checksum we're talking about, although I can't for the life of me figure out which one at first sight. It's not CRC32, CRC32c, nor Adler32.
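For what it's worth, ruling out the common candidates over the fifty-six zero bytes is quick to reproduce (Python; reading 34 EC 5A 6D in both byte orders is my assumption):

```python
import zlib

# Four constant bytes over identical zero payloads smell like a payload
# checksum; check the usual suspects against both byte orders.
payload = bytes(56)
candidates = {0x6D5AEC34, 0x34EC5A6D}  # 34 EC 5A 6D, LE and BE

assert zlib.crc32(payload) not in candidates
assert zlib.adler32(payload) not in candidates
```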

By the way, the data in the last message relates to the list of sensors the device has seen — 9ep.Q.0M00007RK54 is the serial number, and A0 9F DE 05 is the timestamp of its initialization.

Going back to the epoch, which is essentially the last thing I can talk about for now. The numbers above clearly sit in a different range than the UNIX timestamp, which would start with 56 rather than 05. So I used the same method I used for the Verio: I picked a fixed, known point in time, got the timestamp from the device, and compared it with its UNIX timestamp. The answer was 1455392700 — which is 2012-12-31T00:17:00+00:00. It would make perfect sense, if it weren't 23 hours and 43 minutes away from a new year…
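Assuming the epoch really does sit 23 hours and 43 minutes before the 2013 new year, converting a device timestamp is straightforward (Python sketch; the epoch value is the assumption here):

```python
from datetime import datetime, timedelta, timezone

# Assumed device epoch, per the comparison above: 2012-12-31T00:17:00Z.
DEVICE_EPOCH = datetime(2012, 12, 31, 0, 17, tzinfo=timezone.utc)

def device_time(raw: int) -> datetime:
    return DEVICE_EPOCH + timedelta(seconds=raw)

# A0 9F DE 05 (the sensor initialization timestamp above), read as
# little-endian 0x05DE9FA0, lands in February 2016, consistent with
# when these traces were captured.
ts = device_time(0x05DE9FA0)
assert (ts.year, ts.month) == (2016, 2)
```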

I guess that is all for now, I'm still trying to figure out how the data is passed around. I'm afraid that what I'm seeing from the software looks like it's sending whole "preferences" structures that change things at once, which makes it significantly more complicated to understand. It's also not so easy to tell how the device and software decide the measure unit as I don't have access to logs of a mg/dl device.
Top

Qt 5.6 is here and it runs on FreeBSD/Pi

Postby gonzo via FreeBSD developer's notebook | All Things FreeBSD »

Qt 5.6 is finally out, so I thought I’d give it a spin on my Raspberry Pi. Previously I used cross-compilation, but this time I thought I’d spend some time trying to create ports for the Qt modules. There is Qt 5.5.1 in ports; it’s nicely split into sub-ports, and most of the gory details are hidden in the bsd.qt.mk library. The problem with it is that it’s highly coupled with Xorg stuff, and I didn’t find an easy way to squeeze non-desktop use cases into the current infrastructure. So I just created a new custom devel/qt56 port.

In order to get it done as fast as possible I took several shortcuts: all the stuff is installed to the /usr/local/qt5 directory, and there is no meta-port yet for submodules to share a common part. Also, besides the base, the only module I packaged (I was particularly interested in it) was QtMultimedia. Should be fairly easy to fix the last two items though.

Qt’s layer for UI providers is called QPA: Qt Platform Abstraction. There are quite a few of them, but I am familiar with and interested in two: plain framebuffer and eglfs. The stock plain-framebuffer QPA plugin is called linuxfb, and naturally we can’t use it on FreeBSD. Luckily there are a lot of similarities between the Linux fb and the syscons (or vt) fb (you can’t get very innovative with a framebuffer), so writing QPA support for scfb was easy. It can be used with any generic SoC with framebuffer support: AM335x (BeagleBone Black), i.MX6 (Wandboard), NVIDIA Tegra, Pi.

eglfs is a full-screen OpenGL mode. The OpenGL implementation depends on the SoC vendor, e.g. the Pi’s library is provided by the raspberrypi-userland port. If we had OpenGL support for the AM335x, it would have been provided by the PowerVR userland libraries. As far as I understand, eglfs can’t be made universal: each implementation has its own quirks, so you have to specify the target OpenGL vendor at build time. So far we support eglfs only for the Raspberry Pi, thanks to Broadcom’s open-sourcing of its kernel drivers and userland libraries.

Then there is the question of input. When you start your application from the console you have several options to get the user’s input: keyboard, mouse, touchscreen. The common way to do this on Linux is through evdev, which is a universal way to access these kinds of devices. There is an effort to get this functionality on FreeBSD, so when it’s there the stock input plugins could be used as-is. Until then there are two non-standard plugins by yours truly: bsdkeyboard and bsdsysmouse.

scfb, bsdsysmouse, and bsdkeyboard are included in the experimental ports as patches.
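With those pieces in place, selecting the platform and input plugins at launch should look something like this (untested sketch; `mydemo` is a placeholder binary, the plugin names are the ones above):

```shell
# Plain-framebuffer QPA with the custom BSD input plugins:
./mydemo -platform scfb -plugin bsdkeyboard -plugin bsdsysmouse

# Full-screen OpenGL through eglfs on the Raspberry Pi:
./mydemo -platform eglfs
```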

All in all, the experience of getting Qt 5.6 running on FreeBSD/Pi was smooth. And to make this post more entertaining, here are two demos running on my Pi2 with the official touchscreen.

Qt demo player with visualizer (src):

OpenGL demo with Qt logo in it (src):

Top

CGM review: Abbott FreeStyle Libre

Postby Flameeyes via Flameeyes's Weblog »

While working on reverse engineering glucometers I decided to give a CGM solution a try. As far as I know, the only solution available in Ireland is Dexcom. A friend of mine already has it, and I've seen it, but it felt a bit too bulky for my taste.

Instead, I found out on Twitter about a new solution from Abbott – the same company I wrote plenty about before while reverse engineering devices – called FreeStyle Libre. When I first got to their website, though, I found out that the description videos themselves were "not available in my country". When I went back to check on it, the whole website was not available at all, and instead redirected me to a general website telling me the device is not available in my country.

I won't spend time here describing how to work around the geolocking; I'm sure you can figure it out or find the instructions on other websites. Once you work around accessing the website, ordering is also limited to UK addresses for both billing and shipping — these are also fairly easy to work around, particularly when you live in Éire. I can't blame Abbott for not selling the device in this country (they are not allowed to by law), but it would be nice if they didn't hide the whole website!

Anyway, I have in some ways (which I won't specify) worked around the website geolocking and ordered one of the starter kits back in February. The kit comes with two sensors (each valid for 14 days) and with a reader device which doubles as a normal glucometer.

The sensors come with an applicator that primes them and attaches them to the arm. The applicator is not too difficult to use even with your weak hand, which is a nice feature given that you should be alternating the arm you attach it to. Once you put the sensor on you do feel quite a bit of discomfort, but you "get used to it" relatively quickly. I would suggest avoiding the outer side of the arm though, particularly if you're clumsy like me and tend to run into walls fairly often — I ended up discarding my second sensor after only a week because I just took it out by virtue of falling.

One of the concerns that I've been warned about by a friend, on CGM sensors, is that while the sensor has no problem reading for the specified amount of time, the adhesive does not last that long. This was referred to another make and model (the Dexcom G4) and does not match my experience with the Libre. It might be because the Libre has a wider adhesive surface area, or because it's smaller and lighter, but I haven't had much problem with it trying to come away before the 14 days, even with showers and sweat. I would still suggest keeping at hand a roll of bandage tape though, just in case.

The reader device, as I said earlier, doubles as a normal glucometer, as it accepts the usual FreeStyle testing strips, both for blood and for ketone reading, although it does not come with sample strips. I did manage to try blood readings by using one of the sample strips I had from the FreeStyle Optium but I guess I should procure a few more just for the sake of it.

The design of the reading device is inspired by the FreeStyle InsuLinx, with a standard micro-USB port for both data access and charging – I was afraid the time would come that they would put non-replaceable batteries on glucometers! – and a strip-port to be used only for testing (I tried plugging in the serial port cable, but the reader errors out.) It comes with a colourful capacitive touch-screen, from which you can change most (but not all) settings. A couple of things, such as the patient name, can only be changed from the software (available for Windows and OSX.)

The sensor takes a measurement every 15 minutes to draw the historical graph, which is stored for up to eight hours, plus a separate, instantaneous reading when you scan it. I really wish they had put a little more memory in it to keep, say, 12 hours on the device. Eight hours is okay during the day if you're home, but it does mean you shouldn't forget the device at home when you go to the office (unless you work part-time), and that you might lose some of the data from just after going to sleep if you manage to sleep more than eight hours at a time — lucky you, by the way! I can't seem to be able to sleep more than six hours.

The scan is at least partially performed over NFC, as my phone can "see" the sensor as a tag, although it doesn't know what to do with it, of course. I'm not sure if the whole data dumping is done over NFC, but it would make it theoretically possible to get rid of the reader in favour of just using a smartphone then… but that's a topic for a different time.

The obvious problem with CGM solutions is their accuracy. Since they don't actually measure blood samples (they do use a needle, but it's a very small one) but rather interstitial fluid, it is often an open question on whether their readings can be trusted, and the suggestion is to keep measuring normal blood sugar once or twice a day. Which is part of the reason why the reader also doubles as a normal glucometer.

Your mileage here may vary widely, among other things because it varies for me as well! Indeed, I've had days in which the Libre sensor and the Accu-Chek Mobile matched perfectly, while in the last couple of days (as I'm writing this) the Libre gave a slightly lower reading, between 1 and 2 mmol/l (yes, this is the measure used in the UK, Ireland and Australia) lower than the Accu-Chek blood sample reading. In the opinion of my doctor, hearing from his colleagues across the water (remember, this device is not available in my country), it is quite accurate and trustworthy. I'll run with his opinion — particularly because, while trying to cross-check the different meters I have here, they all seem to have a quite wider error range than you'd expect, even when working on a blood sample from the same finger (from different fingers it gets complicated even for the same reader.)

I'm not thrilled by the idea of using rechargeable batteries for a glucometer. If I need to take a measurement and my Accu-Chek Mobile doesn't turn on, it takes me just a moment to pick up another pair of AAAs from my supply and put them in — not so on a USB-charged device. But on the other hand, it does make for a relatively small size, given the amount of extra components the device needs, as you can see from the picture. The battery also lasts more than a couple of weeks without charging, and it does charge with the same microUSB standard as most of my other devices (excluding the iPod Touch and the Nexus 5X), so it's not too cumbersome while traveling.

A note on the picture: while the Accu-Chek Mobile has a much smaller and monochromatic non-touch screen, lots of its bulk is taken by the cassette with the tests (as it does not use strips at all), and it includes the lancing devices on its side, making it still quite reasonably sized. See also my review of it.

While the sensors store up to 8 hours of readings, the reader then stores up to three months of that data, including additional notes you can add to it like insulin dosage (similar to the InsuLinx), meals and so on. The way it shows you that data is interesting too: any spot-check (when you scan the sensor yourself) is stored in a logbook, together with the blood sample tests — the logbook also includes a quick evaluation of whether the blood sugar is rising, falling (and greatly so) or staying constant. The automatic sensor readings are kept visible only as a "daily graph" (from midnight to midnight), or through "daily patterns" that graph (for 7, 14, 30 and 90 days) the median glucose within a band of high and low percentiles (the device does not tell you which ones they are; more on that later.)

I find the ability to see this information, particularly after recording notes on the meals, for instance, very useful. It is making me change my approach to many things; in particular I have stopped eating bagels in the morning (but I still eat them in the evenings) since I get hypers if I do — according to my doctor it's not unheard of for insulin resistance to be stronger as you wake up.

I also discovered that other health issues you'd expect not to be related do make a mess of diabetes treatment (and thus why my doctors both insisted I take the flu shot every year). A "simple" flu (well, one that got me to 38.7⁰C, but that's unrelated, no?) caused my blood sugar to rise quite high (over 20 mmol/l), even though I was not eating as much as usual either. I could have noticed with the usual blood checking, but that's not something you look forward to when you're already feverish and unwell. For next time, I should increase insulin in those cases, but it also made me wary of colds and in general gave me a good data point that taking care even of small things is important.

A more sour point is, not unusually, the software. Now, to be fair, as my doctor pointed out, all diabetes management software sucks, because none of it can have a full picture of things, so the data is not as useful, particularly not to the patients. I of course have an interest in the software because of my work on reverse engineering, so I installed the Windows software right away (for once, they also provide OSX software, but since the only Mac I have access to nowadays is a work device, I have not tried it.)

Unlike the previous terrible experience with Abbott software, this time I managed to download it without a glitch, except for the already-noted geolocking of their website. It also installed fine on Windows 10 and works out of the box, among other things because it requires no kernel drivers whatsoever (I'll talk about that later when I go into the reverse engineering bits.)

Another difference between this software and anything else I've seen up to now is that it's completely stateless. It does not download the data off the glucometer to store it locally; it downloads it and runs the reports. But if you don't have the reader with you, there's no data. And since the reader stores up to 90 days' worth of data before discarding it, there are no reports that cross that horizon!

On the other hand, the software does seem to do a good job of generating a wealth of information. Not only does it generate all the daily graphs and document more precisely which percentiles the "patterns" refer to (it also includes two more percentile levels to give a better idea of the actual pattern), but it provides information such as the "expected A1C", which is quite interesting.

At first, I mistakenly thought that the report functionality only worked by printing, similarly to the OneTouch software, but it turns out you can "Save" the report as PDF and that actually works quite well. It also allows you to "Export" the data, which provides you with a comma-separated values file with most of the raw data coming from the device (again, this will become useful in a separate post.)
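Since the exported file is plain comma-separated values, it is also easy to run your own analyses on it. Here is a minimal sketch in Python; note that the column names, record-type codes and units in the sample are my assumptions for illustration, and the real export's headers may differ:

```python
import csv, io, statistics

# Hypothetical sample mimicking the exported CSV; the real file's column
# names, record-type codes and units are assumptions for illustration.
sample = """Time,Record Type,Historic Glucose (mmol/L)
2016-03-01 08:00,0,5.6
2016-03-01 08:15,0,6.1
2016-03-01 08:30,0,7.4
"""

# Keep only the automatic ("historic") readings and average them.
readings = [
    float(row["Historic Glucose (mmol/L)"])
    for row in csv.DictReader(io.StringIO(sample))
    if row["Record Type"] == "0"
]
print(round(statistics.mean(readings), 2))  # mean glucose of the sample
```

Swapping the sample string for the real file would let you build reports that outlive the reader's 90-day window, working around the statelessness noted above.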

That does not mean the software is free of bugs, though. First of all, it does not close: if you click the window's X button, it is merely minimized. There is an "Exit" option in the "File" menu, but more often than not it causes the software to get stuck, and it either gets terminated by Windows or needs to be killed through the Task Manager. It also keeps polling for the device, which ends up using 25% of one core just for the sake of being open.

The funniest bit, though, was when I tried to "print" the report to PDF (which, as I said above, is not really needed since you can export it from the software just fine, but I didn't notice). In this situation, after the print dialog is shown, the software hides every other window of its process behind its main window. I can only assume this is meant to hide some Windows printing dialog they don't want to distract the user with, but it also hides the "Save As" dialog that pops up. You can type the name blindly, assuming you can confirm you're in the right window through Alt-Tab, but you'll also have to deal with the software using its installation directory as the working directory. Luckily Windows 10 is smart enough to warn about not having write access to that directory, and if you "OK" the invisible dialog, it saves the file to your user's home directory instead.

As for final words, I'm certainly hoping the device becomes available in the Republic of Ireland, and I would really like it to be covered by the HSE's Long Term Illness scheme, as the sensors are not cheap at £58 every two weeks (unless you're as clumsy as me and have to replace one sooner.) I originally bought the starter kit to try the device out and evaluate it, but I think it's making enough of a positive impact that (since I can afford it) I'll keep buying the supplies through my current method until it is actually available here (or until they make it too miserable.) I am not going to stop using the Accu-Chek Mobile for blood testing, though: while it would be nice to use a single device, the cassette system used by the Roche meter is just too handy, particularly when out at a restaurant.

I'll provide more information on my efforts to reverse engineer the protocol in a follow-up post, so stay tuned if you're interested.
Top

Linux on the Arty, Part 0: Establishing a build environment

Postby brix via blog.brixandersen.dk »

I have long wanted to explore the endless possibilities of soft core CPUs like the Xilinx Microblaze. A few months back, I stumbled across the announcement of the Arty by Digilent/Avnet/Xilinx – an affordable evaluation/development board clearly built for makers and perfect for getting my feet wet with a few Microblaze-based projects.



I initially wanted to get GNU/Linux up and running on a Microblaze CPU on the Arty to establish a familiar software environment for further tinkering. This proved to be quite a task in itself – a task which required reading through lots of product guides, manuals, blog posts and howtos online. Lots of documentation has been written on the Microblaze subject, but none of it was quite to the point or simple for me to follow.

In the following series of blog posts I will describe how I accomplished this goal: Linux on the Arty, with all on-board peripherals up and running and exposed to userland.

In this first post, post 0, we will establish a build environment for both the hardware and the software. Later posts will span from establishing a bare-minimal, Linux-ready hardware design to getting each on-board peripheral of the Arty included in the hardware design, supported by the Linux kernel and exposed to the GNU/Linux userland.

My main development environment is a MacBook running OS X. I constantly use VMware Fusion for running various other (typically FreeBSD or GNU/Linux) virtual machines to fulfill my needs for other development platforms. For developing the hardware and building the software for the Arty, I have installed CentOS 7 in a VM. I then use ssh to establish a local connection from OS X to the CentOS VM and forward the X display to OS X, thus allowing me to see e.g. the Xilinx Vivado windows on my OS X desktop through XQuartz.

You could of course use any PC capable of running Xilinx Vivado for the hardware part, but note that you will need a GNU/Linux host for building the Linux kernel and GNU/Linux userland. We will be focusing on Vivado 2015.4 and buildroot 2016.02 for this blog series.

If you plan on using a virtual CentOS 7 machine for the build environment, as I do, I recommend going with at least a 50 GB HDD. The Xilinx Vivado installation itself will take up at least 20 GB, and the hardware/software projects will take up another 5 GB. You will need a few packages and customizations on top of the Minimal 64 Bit installation image. Also, make sure you create an administrator user account during the installation.



First off, almost all CentOS command-line tools will complain about missing locale settings. Fix this by logging in and setting the locale, e.g. to U.S. English with UTF-8 encoding, with the following command:
sudo localectl set-locale LANG=en_US.utf-8
Remember to log out and back in for the locale settings to take effect.

Next up, install and start the Open VM Tools (skip this if not running under VMware):
sudo yum -y install open-vm-tools
sudo systemctl enable vmtoolsd
sudo systemctl start vmtoolsd

Before we install the Xilinx Vivado suite, we will need to install a few dependencies and tools:
sudo yum -y update
sudo yum -y groupinstall 'Development Tools'
sudo yum -y groupinstall 'X Window System'
sudo yum -y groupinstall Fonts
sudo yum -y install glibc.i686 libstdc++.i686 fontconfig.i686 libXext.i686 libXrender.i686 glib2.i686 libpng12.i686 libSM.i686
sudo yum -y install evince wget ncurses-devel bc screen
sudo reboot

The *.i686 packages are needed by the Xilinx Vivado Documentation Navigator, which for some reason is still shipped as a 32 bit binary. Evince is a PDF viewer, which we will be using from within the Documentation Navigator.

Download the Vivado HLx Web Install Client for Linux and install it using the following commands:
sudo mkdir /opt/Xilinx
sudo chown $USER:$USER /opt/Xilinx
chmod +x Xilinx_Vivado_SDK_2015.4_1118_2_Lin64.bin
./Xilinx_Vivado_SDK_2015.4_1118_2_Lin64.bin

If you get an error about no X11 DISPLAY variable being set, make sure you’re logged on to the CentOS host using ssh -Y .... The -Y option will enable trusted X11 display forwarding.

Make sure to enable installation of the Software Development Kit, as this will be needed later on. You probably already acquired a license for Xilinx Vivado using the voucher included with the Arty, so you can skip acquiring a license during the installation.



The Xilinx Platform Cable USB Driver and Digilent JTAG Driver are a separate install requiring root access. Install them using the following commands:
cd /opt/Xilinx/Vivado/2015.4/data/xicom/cable_drivers/lin64/install_script/install_drivers
sudo ./install_drivers


We also need to make sure that our user account has access to the USB JTAG and serial port devices exposed by the Arty. These will be owned by the dialout group, so append our user to that group (using -aG so existing group memberships such as wheel are preserved):
sudo usermod -aG dialout $USER
To be able to launch the various Xilinx Vivado tools, we also need to set up our shell accordingly.
echo 'source /opt/Xilinx/Vivado/2015.4/settings64.sh' >> ~/.bashrc
source /opt/Xilinx/Vivado/2015.4/settings64.sh

Finally, install the license for Xilinx Vivado obtained by using the voucher included with the Arty:
mkdir -p ~/.Xilinx
mv Xilinx.lic ~/.Xilinx/Xilinx.lic

Lastly, launch docnav, open the Settings dialog and change the PDF Viewer Path to evince.



This concludes the first part of this blog series. The next part will describe how to create the basic, Linux-ready hardware design.
Top

Decommissioned

Postby via www.my-universe.com Blog Feed »

Today, Atuan*, the first of my two remaining servers, goes offline for good. All services have already been shut down; at the moment the last of the data is still migrating to my NAS at home. After that the disks will get one more thorough scrubbing (they are, after all, still "normal" hard disks that can be overwritten sector by sector), and the machine will then be powered down, ready to be returned to Hetzner. This completes the step I had already announced in January.


This also makes it the right moment to do something I have not yet done in public: thank all those who have helped me with my servers over the past 15 years. So: thank you, Hetzner, for more than 10 years of smooth operation and consistently fast, helpful support. Thank you, Gorden, for the nudge that got me into servers in the first place. A particularly big thank-you goes to the RootForum.org community, above all my fellow admins and moderators Joe User, ddm3ve and Roger Wilco, but of course also the "old guard" (Fritz, Captain Crunch, Floschi, … wherever you are these days): I learned a great deal from you.

This list is of course anything but complete, but before I fill further paragraphs with names, I want to stress one thing in particular: none of this would ever have been possible without open source software. Over time you come to take it for granted that everything is simply there for the taking: operating systems, database servers, web and mail servers, version control, graphical interfaces, office software, games, countless web applications, and so on. Behind all of it stand the talent and skill, the creativity and dedication of countless designers, programmers and coordinators, many of whom work on these projects voluntarily and unpaid.

I have experienced first-hand, in one place or another, what it means to contribute to an open source project. My deepest respect therefore goes to everyone who serves the open source cause, whether as an advocate, as a developer or in any other way. Over many years something great has grown here, something we all benefit from, and something that deserves to be preserved, valued and supported by all of us.

That's enough for now; the next posts will be less sentimental again, I promise…

* My servers are, or rather were, all named after islands from Ursula K. Le Guin's Earthsea novels
Top

FreeBSD 10.3-RC3 Available

Postby Webmaster Team via FreeBSD News Flash »

The third Release Candidate build for the FreeBSD 10.3 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.
Top

Spammers Abusing Trust in US .Gov Domains

Postby BrianKrebs via Krebs on Security »

Spammers are abusing poorly configured U.S. dot-gov domains and link shorteners to promote spammy sites that are hidden behind short links ending in “usa.gov”.

Spam purveyors are taking advantage of so-called “open redirects” on several U.S. state Web sites to hide the true destination to which users will be taken if they click the link.  Open redirects are potentially dangerous because they let spammers abuse the reputation of the site hosting the redirect to get users to visit malicious or spammy sites without realizing it.

For example, South Dakota has an open redirect:

http://dss.sd.gov/scripts/programredirect.asp?url=

…which spammers are abusing to insert the name of their site at the end of the script. Here’s a link that uses this redirect to route you through dss.sd.gov and then on to krebsonsecurity.com. But this same redirect could just as easily be altered to divert anyone clicking the link to a booby-trapped Web site that tries to foist malware.

The federal government’s stamp of approval comes into the picture when spammers take those open redirect links and use bit.ly to shorten them. Bit.ly’s service automatically shortens any US dot-gov or dot-mil (military) site with a “1.usa.gov” shortlink. That allows me to convert the redirect link to krebsonsecurity.com from the ungainly….

http://dss.sd.gov/scripts/programredirect.asp?url=http://krebsonsecurity.com

…into the far less ugly and perhaps even official-looking:

http://1.usa.gov/1pwtneQ.
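The mechanics of such a redirect are trivial: the destination travels in a query parameter, so it can be swapped for anything. A short Python sketch using the endpoint quoted above (the helper names are mine, and the parameter handling is the generic pattern, not the site's actual server-side code):

```python
from urllib.parse import urlencode, urlsplit, parse_qs

# The open-redirect endpoint quoted in the article.
REDIRECT = "http://dss.sd.gov/scripts/programredirect.asp"

def build_redirect(target):
    # A spammer simply appends the destination as a query parameter.
    return REDIRECT + "?" + urlencode({"url": target})

def redirect_target(link):
    # Recover the real destination hidden inside such a link.
    qs = parse_qs(urlsplit(link).query)
    return qs.get("url", [None])[0]

link = build_redirect("http://krebsonsecurity.com")
print(link)
print(redirect_target(link))
```

The same parse_qs trick is a quick way to check where a suspicious redirect link will actually send you before clicking it.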

Helpfully, Uncle Sam makes available a list of all the 1.usa.gov links being clicked at this page. Keep an eye on that and you’re bound to see spammy links going by, as in this screen shot. One of the more recent examples I saw was this link — http:// 1.usa[dot]gov/1P8HfQJ# (please don’t visit this unless you know what you’re doing) — which was advertised via Skype instant message spam, and takes clickers to a fake TMZ story allegedly about “Gwen Stefani Sharing Blake Shelton’s Secret to Rapid Weight Loss.”

Spammers are using open redirects on state sites and bit.ly to make spammy domains like this one look like .gov links.
Unfortunately, a minute or so of research online shows that this exact issue was highlighted almost four years ago by researchers at Symantec. In October 2012, Symantec said it found that about 15 percent of all 1.usa.gov URLs were used to promote spammy messages. I’d be curious to know the current ratio, but I doubt it has changed much.

A story at the time about the Symantec research on Sophos‘s Naked Security blog noted that the curator of usa.gov — the U.S. General Services Administration’s Office of Citizen Services and Innovative Technology — was working with bit.ly to filter out malicious or spammy links, pointing to an interstitial warning that bit.ly pops up when it detects a suspicious link is being shortened.

KrebsOnSecurity requested comment from both bit.ly and the GSA, and will update this post in the event that they respond.

I wanted to get a sense of how well bit.ly’s system would block any .gov redirects that sent users to known malicious Web sites. So I created .gov shortlinks using the South Dakota redirect, bit.ly, and the first page of URLs listed at malwaredomainlist.com — a site that tracks malicious links being used in active attacks.

The result? Bit.ly’s system allowed clicks on all of the shortened malicious links that didn’t end in “.exe,” which was most of them. It’s nice that bit.ly at least tries to filter out malicious links, but perhaps the better solution is for U.S. state and federal government sites to get rid of open redirects altogether.

The warning that bit.ly sometimes pops up if you try to shorten known, malicious links.
I generally don’t trust shortened links, and have long relied on the Unshorten.it extension for Google Chrome, which lets users unshorten any link by right clicking on it and selecting “unshorten this link”. Unshorten.it also pulls reputation data on each URL from Web of Trust (WOT).

Fun fact: Adding a “+” to the end of any link shortened with bit.ly will take you to a page on bit.ly that displays the actual link that was shortened.

How do you respond to shortened links? Sound off in the comments below.

Update, Mar. 22, 6:20 p.m. ET: A GSA spokesperson said that when GSA learns that an open redirector is being used for 1.usa.gov links, “we reach out to the owner and ask that it be shut down. We are also working with Bitly to remove 1.usa.gov links with open redirectors that aren’t shut down at our request. GSA will continue to take the necessary steps to keep .gov domains secure, and we encourage anyone who discovers an open redirector in the .gov space to notify the affected agency so that it can be disabled.”
Top

Anime/Manga Figure Sale

Postby via dietanu »

After a few years, some of the air has gone out of our passion for collecting anime/manga figures. Over the years we have gathered plenty of very beautiful and also rare figures, such as the 20th Anniversary Edition of “Ah! My Goddess”. Photographing all the figures will take quite some time, so check back on my sale every now and then.
Top

Introducing a New Website and Logo for the Foundation

Postby Anne Dickison via FreeBSD Foundation »


The FreeBSD Foundation is pleased to announce the debut of our new logo and website, signaling the ongoing evolution of the Foundation's identity and its ability to better serve the FreeBSD Project. Our new logo was designed not only to reflect the established and professional nature of our organization, but also to represent the link between the Project and the Foundation, and our commitment to community, collaboration, and the advancement of FreeBSD.

We did not make this decision lightly.  We are proud of the Beastie in the Business Suit and the history he encompasses. That is why you’ll still see him make an appearance on occasion. However, as the Foundation’s reach and objectives continue to expand, we must ensure our identity reflects who we are today, and where we are going in the future. From spotlighting companies who support and use FreeBSD, to making it easier to learn how to get involved, spread the word about, and work within the Project, the new site has been designed to better showcase not only how we support the Project, but also the impact FreeBSD has on the world. The launch today marks the end of Phase I of our Website Development Project. Please stay tuned as we continue to add enhancements to the site.

We are also in the process of updating all our collateral, marketing literature, stationery, etc with the new logo. If you have used the FreeBSD Foundation logo in any of your marketing materials, please assist us in updating them. New Logo Guidelines will be available soon. In the meantime, if you are in the process of producing some new literature, and you would like to use the new Foundation logo, please contact our marketing department to get the new artwork.

Please note: we've moved the blog to the new site. See it here.



Top

Thieves Phish Moneytree Employee Tax Data

Postby BrianKrebs via Krebs on Security »

Payday lending firm Moneytree is the latest company to alert current and former employees that their tax data — including Social Security numbers, salary and address information — was accidentally handed over directly to scam artists.

Seattle-based Moneytree sent an email to employees on March 4 stating that “one of our team members fell victim to a phishing scam and revealed payroll information to an external source.”

“Moneytree was apparently targeted by a scam in which the scammer impersonated me and asked for an emailed copy of certain information about the Company’s payroll including Team Member names, home addresses, social security numbers, birthdates and W2 information,” Moneytree co-founder Dennis Bassford wrote to employees.

The message continued:

“Unfortunately, this request was not recognized as a scam, and the information about current and former Team Members who worked in the US at Moneytree in 2015 or were hired in early 2016 was disclosed. The good news is that our servers and security systems were not breached, and our millions of customer records were not affected. The bad news is that our Team Members’ information has been compromised.”

A woman who answered a Moneytree phone number listed in the email confirmed the veracity of the co-founder’s message to employees, but would not say how many employees were notified. According to the company’s profile on Yellowpages.com, Moneytree Inc. maintains a staff of more than 1,200 employees. The company offers check cashing, payday loan, money order, wire transfer, mortgage, lending, prepaid gift cards, and copying and fax services.

Moneytree joins a growing list of companies disclosing to employees that they were duped by W2 phishing scams, which this author first warned about in mid-February.  Earlier this month, data storage giant Seagate acknowledged that a similar phishing scam had compromised the tax and personal data on thousands of current and past employees.

I’m working on a separate piece that examines the breadth of damage done this year by W2 phishing schemes. Just based on the number of emails I’ve been forwarded from readers who say they were similarly notified by current or former employers, I’d estimate there are hundreds — if not thousands — of companies that fell for these phishing scams and exposed their employees to all manner of identity theft.

W2 information is highly prized by fraudsters involved in tax refund fraud, a multi-billion dollar problem in which thieves claim a large refund in the victim’s name, and ask for the funds to be electronically deposited into an account the crooks control.

Tax refund fraud victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS. To learn more about tax refund scams and how best to avoid becoming the next victim, check out this story.

For better or worse, most companies that have notified employees about a W2 phish this year are offering the predictable free credit monitoring, which is of course useless for preventing tax fraud and many other types of identity theft. But in a refreshing departure from that tired playbook, Moneytree says it will be giving employees an extra $50 in their next paycheck to cover the initial cost of placing a credit freeze (for more information on the difference between credit monitoring and a freeze, and why a freeze might be a better idea, check out Credit Monitoring vs. Freeze and How I Learned to Stop Worrying and Embrace the Security Freeze).

“When something like this happens, the right thing to do is to disclose what you know as soon as possible, take care of the people affected, and learn from what went wrong,” Bassford’s email concluded. “To make good on that last point, we will be ramping up our information security efforts company-wide, because we never want to have to write an email like this to you again.”
Top

Call For Artists: New Icon Theme

Postby Josh Smith via Official PC-BSD Blog »

Source: Call For Artists: New Icon Theme

Since the founding of the Lumina desktop project, one of the most common questions I get asked is: “I am not a programmer, but how can I help out?” Well today I would like to open up a new method of contributing for those of you that are graphically-inclined: the creation of a brand new icon theme for the Lumina desktop!

This new icon theme will adhere to the FreeDesktop specifications[1] for registering an icon theme, and the good news is that I have already handled all the administrative setup/framework for you so that all you need to do to contribute is basically just send in icon files!

Here are the highlights for the new theme:

  1. Included within the main Lumina source repository
  2. All icons will be licensed under the Creative Commons Attribution 4.0 International Public License. This is comparable to the 3-clause BSD license, but specifically for static images/files (whereas the BSD license is for source code).
  3. This will be a high-contrast, high-resolution, fully-scalable (SVG) theme.
  4. The general concept is a white foreground, with a black outline/shadow around the icon, and colorized emblems/overlays for distinguishing between similar icons (“folder” vs “folder-network” for instance). We are going for a more professional/simple look to the icons since tiny image details generally do not scale as well across the range we are looking at.
The details on how to contribute an icon to the theme are listed on the repository page as well, but here is the summary:

  1. Icons which are still needed are listed in the TODO.txt files within each directory.
  2. Submit the icon file via git pull request
  3. Add an entry for your icon/submission to the AUTHORS file (to ensure each contributor gets proper credit for their work)
  4. Remove the icon from the appropriate TODO.txt file/list
If you are not familiar with git or how to send git pull requests, feel free to email me the icon file(s) you want to contribute and I can add them to the source tree for you (and update the AUTHORS/TODO files as necessary). Just be sure to include your full name/email so we can give you the proper credit for your work (if you care about that).

 

As an added bonus since we don’t have any actual icons yet (just the general guidelines), the first contributor to send in some icons will get to help decide the overall look-n-feel of the icon theme!

 

Have Fun!

 

Ken Moore

 

[1] FreeDesktop Specifications

  • Theme Registration: https://specifications.freedesktop.org/icon-theme-spec/icon-theme-spec-latest.html
  • Icon Names: https://specifications.freedesktop.org/icon-naming-spec/latest/ar01s04.html
Top

FOSDEM and the unrealistic IPv6-only network

Postby Flameeyes via Flameeyes's Weblog »

Most of you know FOSDEM already, for those who don't, it's the largest Free and Open Source Software focused conference in Europe (if not the world.) If you haven't been to it I definitely suggest it, particularly because it's a free admission conference and it always has something interesting to discuss.

Even though there is no ticket and no badge, the conference does have free WiFi Internet access, which is how the number of attendees is usually estimated. In the past few years, their network has also been pushing the envelope on IPv6 support, first providing a dualstack network when IPv6 was fairly rare, and in the recent (three?) years providing an IPv6-only network as the default.

I can see the reason to do this, in the sense that a lot of Free Software developers are physically at the conference, which means they can see their tools suffer in an IPv6 environment and fix them. But at the same time, this has generated lots of complaints about Android not working in this setup. While part of that noise was useful, I got the impression this year that the complaints are repeated only for the sake of complaining.

Full disclosure, of course: I do happen to work for the company behind Android. On the other hand, I don't work on anything related at all. So this post is as usual my own personal opinion.

The complaints about Android started off quite healthy: devices couldn't actually connect to an IPv6 dual-stack network, and then they couldn't connect to an IPv6-only network. Both are valid complaints to begin with, though there is a bit more to it. This year in particular the complaints were not so healthy, because current versions of Android (6.0) actually do support IPv6-only networks, though most Android devices out there are not running this version, either because their hardware is too old or because the manufacturer has not released a new build yet.

What does tick me off, though, has really nothing to do with Android, but rather with the idea people have that the current IPv6-only setup used by FOSDEM is a realistic approach to IPv6 networking; it really is not. It is a nice setup to test things and stress the need for proper IPv6 support in tools, but it's very unlikely to be used in production by anybody as is.

The technique used (at least this year) by FOSDEM is NAT64, paired with DNS64. To oversimplify how this works, DNS64 rewrites DNS replies when resolving hostnames so that they always provide an IPv6 address, even when a name only has A records (IPv4 addresses). The synthesized IPv6 addresses map back to IPv4, and the edge router then "translates" between the two connections.
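The address-synthesis half of this is mechanical: the 32-bit IPv4 address is embedded in the low bits of an IPv6 prefix, conventionally the well-known 64:ff9b::/96 prefix. A minimal sketch in Python (the function name is mine; real resolvers may use a network-specific prefix instead):

```python
import ipaddress

def synthesize_nat64(ipv4, prefix="64:ff9b::"):
    # Embed the 32-bit IPv4 address in the low bits of the /96 prefix,
    # the way a DNS64 resolver fabricates an AAAA record for an A-only name.
    return ipaddress.IPv6Address(
        int(ipaddress.IPv6Address(prefix)) | int(ipaddress.IPv4Address(ipv4))
    )

print(synthesize_nat64("192.0.2.1"))  # 64:ff9b::c000:201
```

The NAT64 gateway then reverses the mapping, extracting the IPv4 destination from the low 32 bits when it translates the connection.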

Unlike classic NAT, this technique requires user-space components, as the kernel uses separate stacks for IPv4 and IPv6 which do not allow direct message passing between the two. This makes it complicated and significantly slower (you have to copy the data from kernel to userspace and back all the time), unless you use one of the hardware routers designed to deal with this (I know both Juniper and Cisco make those.)

NAT64 is a very useful testbed, if your target is figuring out what in your stack is not ready for IPv6. It is not, though, a realistic approach for consumer networks. If your client application does not have IPv6 support, it'll just fail to connect. If for whatever reason you rely on IPv4 literals, they won't work. Even worse, if the code allows a connection to be established over IPv6, but relies on IPv4 semantics for things like logging, or (worse) access control, then you now have bugs, crashes or worse, vulnerabilities.

And while fuzzing and stress-testing are great for development environments, they are not good for final users. In the same way -Werror is a great tool to fix your code, but uselessly disrupts your users.

In a similar fashion, while IPv6-only datacenters are not that uncommon – Facebook (the company) talked about them two years ago already – they serve a decidedly different purpose from a customer network. You don't want, after all, your database cluster connecting to random external services you don't control; and if you do control the services, you just need to make sure they are all available over IPv6. In such a system, having a single stack to worry about simplifies, rather than complicates, things. I do something similar for the server I divide into containers: some of them, which are only backends, get no IPv4 at all, not even NATed. If they ever have to fetch something from the Internet at large to build, they go through a proxy instead.

I'm not saying that FOSDEM setting up such a network is not useful. It actually hugely is, as it clearly highlights the problems of applications not supporting IPv6 properly. And for Free Software developers setting up a network like this might indeed be too expensive in time or money, so it is a chance to try things out and iron out bugs. But at the same time it does not reflect a realistic environment. Which is why adding more and more rant on the tracking Android bug (which I'm not even going to link here) is not going to be useful — the limitation was known for a while and has been addressed on newer versions, but it would be useless to try backporting it.

For what it's worth, what is more likely to happen as IPv6 adoption progresses is that providers will move towards solutions like DS-Lite (nothing to do with Nintendo), which couples native IPv6 with carrier-grade NAT. While this has limitations, depending on the size of the ISP's address pools, it is still easier to set up than NAT64, and is essentially transparent even for customers whose systems don't support IPv6 at all. My ISP here in Ireland (Virgin Media) already has such a setup.
Top