Planet

Dru Lavigne Will be Speaking @ KnoxBUG

Postby Josh Smith via Official PC-BSD Blog »

If you missed the inaugural meeting of the Knoxville BSD User Group, you definitely don’t want to miss this one. Dru Lavigne, lead documentation expert and author for the PC-BSD and FreeNAS projects, will be giving a talk: “You Too Can Doc Like an Egyptian”. For more information on meeting times and venue, please visit the Knoxville Tennessee BSD User Group’s web page. We hope to see you there!

http://www.knoxbug.org/content/2016-05-26

How the Pwnedlist Got Pwned

Postby BrianKrebs via Krebs on Security »

Last week, I learned about a vulnerability that exposed all 866 million account credentials harvested by pwnedlist.com, a service designed to help companies track public password breaches that may create security problems for their users. The vulnerability has since been fixed, but this simple security flaw may have inadvertently exacerbated countless breaches by preserving the data lost in them and then providing free access to one of the Internet’s largest collections of compromised credentials.

Pwnedlist is run by Scottsdale, Ariz.-based InfoArmor, and is marketed as a repository of usernames and passwords that have been publicly exposed, even briefly, at Pastebin, in online chat channels and on other free data dump sites.

Until quite recently the service was free to all comers; it makes money by allowing companies to get a live feed of usernames and passwords exposed in third-party breaches which might create security problems going forward for the subscriber organization and its employees.

This 2014 article from the Phoenix Business Journal describes one way InfoArmor markets the Pwnedlist to companies: “InfoArmor’s new Vendor Security Monitoring tool allows businesses to do due diligence and monitor its third-party vendors through real-time safety reports.”

The trouble is, the way Pwnedlist should work is very different from how it does. This became evident after I was contacted by Bob Hodges, a longtime reader and security researcher in Detroit who discovered something peculiar while he was using Pwnedlist: Hodges wanted to add to his watchlist the .edu and .com domains for which he is the administrator, but that feature wasn’t available.

In the first sign that something wasn’t quite right authentication-wise at Pwnedlist, the system didn’t even allow him to validate that he had control of an email address or domain by sending a verification message to said email or domain.

On the other hand, he found he could monitor any email address he wanted. Hodges said this gave him an idea about how to add his domains: It turns out that when any Pwnedlist user requested that a new Web site name be added to his “Watchlist,” the process for approving that request was fundamentally flawed.

That’s because the process of adding a new thing for Pwnedlist to look for — be it a domain, email address, or password hash — was a two-step procedure involving a submit button and a confirmation page, and the confirmation page didn’t bother to check whether the thing being added in the first step was the same as the thing approved in the confirmation page. [For the Geek Factor 5 crowd here, this vulnerability type is known as “parameter tampering,” and it involves the ability to modify hidden parameters in POST requests.]

“Their system is supposed to compare the data that gets submitted in the second step with what you initially submitted in the first window, but there’s nothing to prevent you from changing that,” Hodges said. “They’re not even checking normal email addresses. For example, when you add an email to your watchlist, that email [account] doesn’t get a message saying they’ve been added. After you add an email you don’t own or control, it gives you the verified check box, but in reality it does no verification. You just typed it in. It’s almost like at some point they just disabled any verification systems they may have had at Pwnedlist.”
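To make the flaw concrete, here is a rough sketch of what such a parameter-tampering session might look like with curl; the endpoint paths, field names and cookie file are invented for illustration, since the actual Pwnedlist parameters aren’t public:

# Step 1: submit a watchlist request for a domain we actually control
curl -b cookies.txt -d "item=mydomain.example" https://pwnedlist.example/watchlist/add

# Step 2: the confirmation step carries the item in a hidden POST field;
# nothing stops us from swapping in a domain we do NOT control
curl -b cookies.txt -d "item=apple.com&confirm=1" https://pwnedlist.example/watchlist/confirm

Because the second request is never compared against the first, the tampered value is what lands on the watchlist.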

Hodges explained that one could easily circumvent Pwnedlist’s account controls by downloading and running a copy of Kali Linux — a free Linux distribution that bundles tools for finding and exploiting software and network vulnerabilities.

Always the student, I wanted to see this first-hand. I had a Pwnedlist account from way back when the service first launched in 2011, so I fired up a downloadable virtual version of Kali on top of the free VirtualBox platform on my Mac. Kali comes with a pretty handy Web security testing tool called Burp Suite, which makes sniffing, snarfing and otherwise tampering with traffic to and from Web sites a fairly straightforward point-and-click exercise.

Indeed, after about a minute of instruction, I was able to replicate Hodges’ findings, successfully adding Apple.com to my watchlist. I also found I could add basically any resource I wanted. Although I verified that I could add top-level domains like “.com” and “.net,” I did not run these queries because I suspected that doing so would crash the database, and in any case might call unwanted attention to my account. (I also resisted the strong temptation to simply shut up about this bug and use it as my own private breach alerting service for the Fortune 500 firms).

Hodges told me that any newly-added domains would take about 24 hours to populate with results. But for some reason my account was taking far longer. Then I noticed that the email address I’d used to sign up for the free account back in 2011 didn’t have any hits in the Pwnedlist, and that was simply not possible if Pwnedlist was doing a halfway decent job tracking breaches. So I pinged InfoArmor and asked them to check my account. Sure enough, they said, it had never been used and was long ago deactivated.

Less than 12 hours after InfoArmor revived my dormant account, I received an automated email alert from the Pwnedlist telling me I had new results for Apple.com. In fact, the report I was then able to download included more than 100,000 usernames and passwords for accounts ending in apple.com. The data was available in plain text, and downloadable as a spreadsheet.

Some of the more than 100,000 credentials that Pwnedlist returned for me in a report on all passwords tied to email addresses that include “apple.com”.
It took a while for the enormity of what had just happened to sink in. I could now effectively request a report including all 866 million account credentials recorded by the Pwnedlist. In short, the Pwnedlist had been pwned.

At this point, I got back in touch with InfoArmor and told them what Hodges had found and shown me. Their first response was that somehow I’d been given a privileged account on Pwnedlist, and that this was what allowed me to add any domain I chose. After all, I’d added the top 20 companies in the Fortune 500. How had I been able to do that?

“The account type you had had more privileges than an ordinary user would,” insisted Pwnedlist founder Alen Puzic.

After validating the bug, I added some other domains just for giggles. I deleted them all (except the Apple one) before they could generate reports.
I doubted that was true, and I suspected the vulnerability was present across their system regardless of which account type was used. Puzic said the company stopped allowing free account signups about six months ago, but since I had him on the phone I suggested he create a new, free account just for our testing purposes.

He rather gamely agreed. Within 30 seconds after the account was activated, I was able to add “gmail.com” to my Pwnedlist watchlist. Had we given it enough time, that query almost certainly would have caused Pwnedlist to produce a report with tens of millions of compromised credentials involving Gmail accounts.

“Wow, so you really can add whatever domain you want,” Puzic said in amazement as he loaded and viewed my account on his end.

Pwnedlist.com went offline shortly after my phone call with InfoArmor.
It’s a shame that InfoArmor couldn’t design better authorization and authentication systems for Pwnedlist, given that the service itself is a monument to abject failures in that regard. I’m a big believer in companies getting better intelligence about how large-scale everyday password breaches may impact their security, but it helps no one when a service that catalogs breaches has a lame security weakness that potentially prolongs and exacerbates them.

Update, 12:30 p.m. ET: InfoArmor downplayed the problem on Twitter, noting that “The data that was ‘exposed’ has already been ‘compromised’ - there was no loss of PII or subscriber data.” Also, a new notice is up on Pwnedlist.com, stating that the site is being shut down in a few weeks. The pop-up message reads:

“Thank you for being a subscriber and letting us help alert you of any risks related to your personal credentials. PwnedList launched in 2012 and quickly become the leader in open-source compromised data aggregation. In 2013 PwnedList was acquired by InfoArmor, Inc. a provider of enterprise based services. As part of the transition, the PwnedList Website has been scheduled for decommission on May 16, 2016. If you are interested in obtaining our commercial identity protection, please go to infoarmor.com for more information. It has been our pleasure to help you reduce your risk from compromised credentials.”


January-March 2016 Status Report

Postby Webmaster Team via FreeBSD News Flash »

The January to March 2016 Status Report is now available.

bsdtalk264 - Down the Gopher Hole

Postby Mr via bsdtalk »

Playing around with the gopher protocol. Description of gopher from the 1995 book "Student's Guide to the Internet" by David Clark. Also, at the end of the episode is audio from an interview with Mark McCahill and Farhad Anklesaria that can be found at https://www.youtube.com/watch?v=oR76UI7aTvs

Check out http://gopher.floodgap.com/gopher/

File Info: 27 Min, 13 MB.

Ogg Link: https://archive.org/download/bsdtalk264/bsdtalk264.ogg

A Dramatic Rise in ATM Skimming Attacks

Postby BrianKrebs via Krebs on Security »

Skimming attacks on ATMs increased at an alarming rate last year for both American and European banks and their customers, according to recent stats collected by fraud trackers. The trend appears to be continuing into 2016, with outbreaks of skimming activity hitting a much broader swath of the United States than in years past.

Two network cable card skimming devices, as found attached to this ATM.
In a series of recent alerts, the FICO Card Alert Service warned of large and sudden spikes in ATM skimming attacks. On April 8, FICO noted that its fraud-tracking service recorded a 546 percent increase in ATM skimming attacks from 2014 to 2015.

“The number of ATM compromises in 2015 was the highest ever recorded by the FICO Card Alert Service, which monitors hundreds of thousands of ATMs in the US,” the company said. “Criminal activity was highest at non-bank ATMs, such as those in convenience stores, where 10 times as many machines were compromised as in 2014.”

While 2014 saw skimming attacks targeting mainly banks in big cities on the east and west coasts of the United States, last year’s skimming attacks were far more spread out across the country, the FICO report noted.

Earlier this year, I published a post about skimming attacks targeting non-bank ATMs using hidden cameras and skimming devices plugged into the ATM network cables to intercept customer card data. The skimmer pictured in that story was at a 7-Eleven convenience store.

Since that story ran I’ve heard from multiple banking industry sources who said they have seen a spike in ATM fraud targeting cash machines in 7-Elevens and other convenience stores, and that the commonality among the machines is that they are all operated by ATM giant Cardtronics (machines in 7-Eleven locations made up 17.5 percent of Cardtronics’ revenue last year, according to this report at ATM Marketplace).

Some financial institutions are taking dramatic steps to head off skimming activity. Trailhead Credit Union in Portland, Ore., for example, has posted a notice to customers atop its Web site, stating:

“ALERT: Until further notice, we have turned off ATM capabilities at all 7-11 ATMs due to recent fraudulent activity. Please use our ATM locator for other locations. We are sorry for the inconvenience.”

Trailhead Credit Union has stopped allowing members to withdraw cash from 7-11 ATMs.
7-Eleven did not respond to requests for comment. Cardtronics said it wasn’t aware of any banks blocking withdrawals across the board at 7-11 stores or at Cardtronics machines.

“While Cardtronics is aware that a single financial institution [Xceed Financial Credit Union] temporarily restricted ATM access late in 2015, it soon thereafter restored full ATM access to its account holders,” the company said in a statement. “As the largest ATM services provider, Cardtronics has a long history of executing a layered security strategy and implementing innovative security enhancements at our ATMs. As criminals modify their attack, Cardtronics always has and always will aggressively respond, reactively and proactively, with innovation to address these instances.”

DRAMA IN DC

A bit closer to home for this author, on April 22 FICO pushed an alert to its customers and partners warning about “a recent and dramatic increase in skimming fraud perpetrated at a chain of discount supercenters point-of-sale (POS) terminals,” in and around the Washington, D.C. area, including Frederick, Ellicott City and Mt. Airy in Maryland, and Fredericksburg, Va.



“As this fraud activity has appeared and progressed suddenly, it is likely that sites in other cities and other geographic areas will be targeted by organized criminal groups,” the organization cautioned.

EUROPE

Banks in Europe also saw an increase in skimming attacks of all kinds last year. According to statistics shared by the European ATM Security Team (EAST), during 2015 there were 18,738 skimming attacks reported against European ATMs. That’s a 19% increase from the previous year and equates to 51 attacks per 1000 ATMs over the period.

“During 2015 total losses of 327.48 million euros were reported,” EAST wrote. “This is a 17% increase when compared to the total losses of 279.86 million euros reported for 2014 and equates to losses of 884,069 euros per 1000 ATMs over the period.”

EAST’s report further breaks down ATM attack activity by type. For example, there were at least 2,657 cases in which a thief tried to blow up or otherwise physically force his way into the cash machine. “This total also includes data from solid explosive and explosive gas attacks. This is a 34% increase from 2014 and equates to 7.2 attacks per 1000 ATMs over the period.”

EAST also tracked 15 malware incidents reported against European ATMs in 2015.  All of them were ‘cash out’ or ‘jackpotting’ attacks. According to EAST, this is a 71% decrease from 2014.

Source: EAST

PROTECT YOURSELF

As I’ve noted in countless skimmer stories here, the simplest way to protect yourself from ATM skimming is to cover your hand when entering your PIN. That’s because most skimmers rely on hidden cameras to steal the victim’s PIN.

Interestingly, a stat in Verizon‘s new Data Breach Investigations Report released this week bears this out: According to Verizon, over 90 percent of the breaches involving skimmers in last year’s report relied on a tiny hidden camera to steal the PIN.

The Verizon report also offers this advice about ATM safety: Trust your gut. “If you think that something looks odd or out of place, don’t use it. While it is increasingly difficult to find signs of tampering, it is not impossible. If you think a device may have been tampered with, move on to another location, after reporting to the merchant or bank staff.”

For more on ATM skimmers and other skimming devices, check out my series All About Skimmers.

Dental Assn Mails Malware to Members

Postby BrianKrebs via Krebs on Security »

The American Dental Association (ADA) says it may have inadvertently mailed malware-laced USB thumb drives to thousands of dental offices nationwide.

The problem first came to light in a post on the DSL Reports Security Forum. DSLR member “Mike” from Pittsburgh got curious about the integrity of a USB drive that the ADA mailed to members to share updated “dental procedure codes” — codes that dental offices use to track procedures for billing and insurance purposes.

“Oh wow the usually inept ADA just sent me new codes,” Mike wrote. “I bet some marketing genius had this wonderful idea instead of making it downloadable. I can’t wait to plug an unknown USB into my computer that has PHI/HIPAA on it…” [link added].

The ADA says some flash drives mailed to members contained malware. Image: Mike
Sure enough, Mike looked at the code inside one of the files on the flash drive and found that it tries to open a Web page that has long been tied to malware distribution. The domain is used by crooks to infect visitors with malware that lets the attackers gain full control of the infected Windows computer.

Reached by KrebsOnSecurity, the ADA said it sent the following email to members who have shared their email address with the organization:

“We have received a handful of reports that malware has been detected on some flash drives included with the 2016 CDT manual,” the ADA said. “The ‘flash drive’ is the credit card sized USB storage device that contains an electronic copy of the CDT 2016 manual. It is located in a pocket on the inside back cover of the manual. Your anti-virus software should detect the malware if it is present. However, if you haven’t used your CDT 2016 flash drive, please throw it away.

To give you access to an electronic version of the 2016 CDT manual, we are offering you the ability to download the PDF version of the 2016 CDT manual that was included on the flash drive.

To download the PDF version of the CDT manual:

1. Click on the link ebusiness.ada.org/login/ ··· ion.aspx
2. Log in with your ADA.org user ID and password
3. After you log in you will automatically be directed to a page showing CDT 2016 Digital Edition.
4. Click on the “Download” button to save the file to your computer for use.

If you have difficulty accessing or downloading the file, please call 1.800.947.4746 and a Member Service Advisor will be happy to assist you.

Many of the flash drives do not contain the Malware. If you have already used your flash drive and it worked as expected (it displayed a menu linking to chapters of the 2016 CDT manual), you may continue using it.

We apologize if this issue has caused you any inconvenience and thank you for being a valued ADA customer.”

This incident could give new meaning to the term “root canal.” It’s not clear how the ADA could make a statement that anti-virus should detect the malware, since presently only some of the many antivirus tools out there will flag the malware link as malicious.

In response to questions from this author, the ADA said the USB media was manufactured in China by a subcontractor of an ADA vendor, and that some 37,000 of the devices have been distributed. The not-for-profit ADA is the nation’s largest dental association, with more than 159,000 members.

“Upon investigation, the ADA concluded that only a small percentage of the manufactured USB devices were infected,” the organization wrote in an emailed statement. “Of note it is speculated that one of several duplicating machines in use at the manufacturer had become infected during a production run for another customer. That infected machine infected our clean image during one of our three production runs. Our random quality assurance testing did not catch any infected devices. Since this incident, the ADA has begun to review whether to continue to use physical media to distribute products.”

All About Fraud: How Crooks Get the CVV

Postby BrianKrebs via Krebs on Security »

A longtime reader recently asked: “How do online fraudsters get the 3-digit card verification value (CVV or CVV2) code printed on the back of customer cards if merchants are forbidden from storing this information?” The answer: If not via phishing, probably by installing a Web-based keylogger at an online merchant so that all data that customers submit to the site is copied and sent to the attacker’s server.

Kenneth Labelle, a regional director at insurer Burns-Wilcox.com, wrote:

“So, I am trying to figure out how card not present transactions are possible after a breach due to the CVV. If the card information was stolen via the point-of-sale system then the hacker should not have access to the CVV because its not on the magnetic strip. So how in the world are they committing card not present fraud when they don’t have the CVV number? I don’t understand how that is possible with the CVV code being used in online transactions.”

First off, “dumps” — or credit and debit card accounts that are stolen from hacked point of sale systems via skimmers or malware on cash register systems — retail for about $20 apiece on average in the cybercrime underground. Each dump can be used to fabricate a new physical clone of the original card, and thieves typically use these counterfeits to buy goods from big box retailers that they can easily resell, or to extract cash at ATMs.

However, when cyber crooks wish to defraud online stores, they don’t use dumps. That’s mainly because online merchants typically require the CVV, and criminal dumps sellers don’t bundle CVVs with their dumps.

Instead, online fraudsters turn to “CVV shops,” shadowy cybercrime stores that sell packages of cardholder data, including customer name, full card number, expiration, CVV2 and ZIP code. These CVV bundles are far cheaper than dumps — typically between $2 and $5 apiece — in part because they are useful mainly just for online transactions, but probably also because overall they are more complicated to “cash out” or make money from.

The vast majority of the time, this CVV data has been stolen by Web-based keyloggers: relatively uncomplicated programs that behave much like a banking Trojan does on an infected PC, except they are designed to steal data from Web server applications.

PC Trojans like ZeuS, for example, siphon information using two major techniques: snarfing passwords stored in the browser, and conducting “form grabbing” — capturing any data entered into a form field in the browser before it can be encrypted in the Web session and sent to whatever site the victim is visiting.

Web-based keyloggers also can do form grabbing, ripping out form data submitted by visitors — including names, addresses, phone numbers, credit card numbers and card verification codes — as customers are submitting the data during the online checkout process.

These attacks drive home one immutable point about malware’s role in subverting secure connections: Whether resident on a Web server or on an end-user computer, if either endpoint is compromised, it’s ‘game over’ for the security of that Web session. With PC banking trojans, it’s all about surveillance on the client side pre-encryption, whereas what the bad guys are doing with these Web site attacks involves sucking down customer data post- or pre-encryption (depending on whether the data was incoming or outgoing).

If you’re responsible for maintaining or securing Web sites, it might be a good idea to get involved in one or more local groups that seek to help administrators. Professionals and semi-professionals alike are welcome at local chapter meetings of OWASP, CitySec, ISSA or Security BSides.

SkyMaxx Pro and Real Weather Connector

Postby via www.my-universe.com Blog Feed »

No computer program currently renders clouds as impressively as nature does. With SkyMaxx Pro and the Real Weather Connector, however, there are at least add-ons that aim to make X-Plane’s rather weak weather rendering somewhat more realistic, albeit at the cost of extra computing power and a not exactly small budget; together, the two add-ons come to around US$60. So what do you get in exchange for money and FPS?


First up is SkyMaxx Pro 3 (tested by me in versions 3.0, 3.1, 3.1.1 and 3.1.2), SMP for short. It takes care of rendering the clouds but otherwise does not interfere with the weather itself; it draws on X-Plane’s own weather data. SMP greatly improves the visual appearance of clouds; cumulus clouds in particular look much more realistic with it, especially when the most performance-intensive setting (“sparse particles”) is chosen to render them. SMP by itself is thus primarily eye candy and does not yet make the weather itself behave any more realistically.

Things get really interesting, however, in combination with the Real Weather Connector (RWC). To appreciate it, you first have to understand where X-Plane’s limitation actually lies: normally, X-Plane can render only one weather state at a time, and that state changes abruptly whenever new weather data becomes available (e.g. after downloading a fresh METAR file). RWC tries to get around this limitation by also rendering the correct weather in neighboring parcels. Transitions become less abrupt, and you can see in advance the weather you are about to fly into.

So much for the theory; in practice, there are some limitations you should know about before being disappointed by the rather costly investment. As already mentioned, the duo of SMP and RWC needs plenty of graphics power, and the demand grows with the size of the area to be covered. So what is sensible, and where are the limits? SMP can render an area of at most 22,500 km². That sounds like a lot at first, but let’s do a little arithmetic: this area corresponds to a square with an edge length of just 150 km. Sitting at the center of that square, you see realistic weather out to a distance of 75 km, or about 40 NM. Flying a jet at higher altitude, weather systems are usually visible from much farther away.

Things look different, of course, for small, slow aircraft or helicopters that operate at far lower altitudes. Viewed from such a cockpit, weather systems rendered with SMP and RWC look quite realistic and visually impressive, unless one of the rendering bugs still present in the newest SMP version (3.1.2) strikes. Every now and then, “ghost clouds” appear that cannot possibly belong to the current weather system. These bugs often manifest as an extremely dense, closed cloud layer at low altitude (these days mostly dirty white, whereas in older versions it could also be black). For me, these strange clouds mostly showed up when SMP was not allowed to control the full visible range.

The developers are very active and gladly accept constructive suggestions for improvement. The bugs will surely be ironed out in coming versions; the restriction on the maximum renderable viewing area, however, will not be so easy to get around. For a realistic viewing range, the renderable area would have to be expanded to more than 300,000 km², and at the current state of the art, even the most powerful machine fails at that.

My verdict: SMP and RWC in tandem clearly improve the cloud rendering in X-Plane. It is mainly GA pilots who really benefit, though, while airline drivers still run up against the limits of what is currently technically feasible.


2016 Superheroes Race for the National Children’s Advocacy Center (NCAC) in Huntsville, AL

Postby Zach via The Z-Issue »

Well, it was that time of year again… time to dress up as one’s favourite Superhero and run for a great cause! The first weekend of April, I made the drive down to the lovely city of Huntsville, Alabama in order to support the National Children’s Advocacy Center by running in the 2016 NCAC Superheroes 5K (here’s my post about last year’s race).

This year’s course was the same as last year’s, so it was a really nice run through parts of the city centre and through more residential areas of Huntsville. Unlike last year, the race started much later in the afternoon, so the temperatures were a lot higher (last year was, truthfully, a bit chilly). It was beautifully sunny, and actually bordered on a bit warm, but I would gladly take those conditions over the cold!


Right before the start of the race
I wasn’t quite sure how this race was going to turn out, seeing as it was my first since the knee injury late last year. I was hopeful that my rehabilitation and training since the injury would help me at least come close to my time last year, but I also doubted that possibility. I came in first place overall with a time of 20:13, which was a little over 30 seconds slower than last year. All things considered, I was pleased with my time. A few other fantastic runners to mention this year were Elliott Kliesner (age 14), who came in about 37 seconds after me, Christian Grant (age 12), with a time of 21:42, and Bud Bettler (age 72), who finished with an outstanding time for his age bracket at 28:16.


5K Results 1st through 5th place
Years ago, I decided that I wouldn’t run in any races unless they benefited a children’s charity, and I can’t think of any organisation whose mission aligns more closely with my goals than the National Children’s Advocacy Center. According to WAFF News in Huntsville, the race raised over $24,000 for the NCAC! That will make a huge difference in the lives of the children that the NCAC serves! Here’s to hoping that next year’s race (the 7th annual) will raise even more. Hope to see you there!


Nathan Zachary’s award (and Superhero cape) acceptance
Cheers,
Zach

SpyEye Makers Get 24 Years in Prison

Postby BrianKrebs via Krebs on Security »

Two hackers convicted of making and selling the infamous SpyEye botnet creation kit were sentenced in Georgia today to a combined 24 years in prison for helping to infect hundreds of thousands of computers with malware and stealing millions from unsuspecting victims.

Aleksandr Panin developed and sold SpyEye. Image courtesy: RT.
Atlanta Judge Amy Totenberg handed down a sentence of nine years, six months for Aleksandr Andreevich Panin, a 27-year-old Russian national also known by the hacker aliases “Gribodemon” and “Harderman.”

Convicted of conspiracy to commit wire and bank fraud, Panin was the core developer and distributor of SpyEye, a botnet toolkit that made it easy for relatively unsophisticated cyber thieves to steal millions of dollars from victims.

Sentenced to 15 years in jail was Panin’s business partner — 27-year-old Hamza “Bx1” Bendelladj, an Algerian national who pleaded guilty in June 2015 to helping Panin develop and market the SpyEye kit. Bendelladj also admitted to running his own SpyEye botnet of hacked Windows computers, a crime machine that he used to harvest and steal 200,000 credit card numbers. By the government’s math (an assumed $500 loss per card), Bx1 was potentially responsible for $100 million in losses.

“It is difficult to overstate the significance of this case, not only in terms of bringing two prolific computer hackers to justice, but also in disrupting and preventing immeasurable financial losses to individuals and the financial industry around the world,” said John Horn, U.S. Attorney for the Northern District of Georgia.

THE HAPPY HACKER

Bendelladj was arrested in Bangkok in January 2013 while in transit from Malaysia to Egypt. He quickly became known as the “happy hacker” after his arrest, in which he could be seen smiling broadly while in handcuffs and being paraded before the local news media.

Photo: Hamza “Bx1” Bendelladj, Bangkok Post
In its case against the pair of hackers, the government presented chat logs between Bendelladj and Panin and other hackers. The government says the chat logs reveal that although Bendelladj worked with Panin to fuel the rise of SpyEye by vouching for him on cybercrime forums such as “Darkode,” the two had an antagonistic relationship.

Their business partnership imploded after Bx1 announced that he was publicly releasing the source code for SpyEye.

“Indeed, after Bendelladj ‘cracked’ SpyEye and made it available to others without having to purchase it from Panin, the two had a falling out,” reads the government’s sentencing memo (PDF) to the judge in the case.

The government says that while Bendelladj maintained he was little more than a malware analyzer working for a security company, his own chat logs put the lie to that claim, noting that in November 2012 Bx1 bluntly said: “if they pay me the whole money of the world . . . I wont work for security.”

Bx1 had a penchant for marketing to other thieves. He shrewdly cast SpyEye as a lower-cost, more powerful alternative to the Zeus botnet creation kit, plastering cybercrime forums with animated ads pimping SpyEye as the “Zeuskiller” (in part because SpyEye was designed to remove Zeus from host computers before infecting them).

Part of a video ad for SpyEye.
In Oct. 2010, KrebsOnSecurity was the first to report on rumors in the underground that the authors of Zeus and SpyEye were ending their rivalry and merging the two crimeware products into one software stack and support structure for existing clients.

“Panin developed SpyEye as a successor to the notorious Zeus malware that had, since 2009, wreaked havoc on financial institutions around the world,” the Justice Department said in its statement today. “In November 2010, Panin allegedly received the source code and rights to sell Zeus from Evgeniy Bogachev, a/k/a Slavik, and incorporated many components of Zeus into SpyEye. Bogachev remains at large and is currently the FBI’s most wanted cybercriminal.”

Bogachev, the alleged Zeus Trojan author, in undated photos.
It’s not clear whether Bendelladj had any intention of honoring the sanctity of the merger agreement with the author of the Zeus Trojan. Not long after the supposed merger, copies of the Zeus source code were available for sale online, and the code went fully public and free not long after that. My money is on Bendelladj for that leak as well.

Apparently Bx1 was not a big fan of KrebsOnSecurity, either. According to the government’s sentencing memo:

“At various points, [Bendelladj] has expressed contempt for Brian Krebs, the author of the “Krebs on Security,” and claims that he has credit cards (‘ccs’) of Mr. Krebs’s family and that Bendelladj will be ‘after him until he die.’ He even suggests inflicting a Distributed Denial of Service attack against Mr. Krebs.”

Maybe that antagonism had something to do with this story, in which I repost chat logs from a conversation I had with Bx1 back in January 2012. In it, Bx1 brags about hacking one of his competitors and getting the guy arrested.

Cleared to Land

Postby via www.my-universe.com Blog Feed »

After “only” six years of development, the time has finally come: this coming Saturday, IXEG’s Boeing 737-300 will be released. The first order of business will probably be studying the manuals; from what has trickled through over the past few years, the model is said to be quite close to the original, and after all, in real life nothing happens without a type rating either… I won’t let anyone deprive me of a maiden flight, though, even if it’s just a circuit of the pattern…


Giant Food Sees Giant Card Fraud Spike

Postby BrianKrebs via Krebs on Security »

Citing a recent and large increase in credit card fraud, Washington, DC-area grocer Giant Food says it will no longer allow customers to use credit cards when purchasing gift cards and reloadable or prepaid debit cards.

A new warning sign at Giant Food checkout counters. Giant says the warning was prompted by a spike in credit card fraud.
I had no idea this was a new thing at Landover, Md.-based Giant, which operates 169 supermarkets in the Washington, D.C. metro area.  That is, until I encountered a couple of large new “attention” stickers in the checkout line at a local Giant in Virginia recently. Next to the credit card terminal were big decals with the warning:

“Attention Gift Card Customers: Effective immediately, all purchases of Visa, MasterCard, American Express Gift Cards and all General Purpose Reloadable or Prepaid Cards may only be made with Cash or Bank Pin-based Debit.”

Asked for comment, Giant Food released a brief statement about the policy change, which went into effect in March 2016, but otherwise didn’t respond to requests for more details.

“Giant has recently made a change in procedures for purchasing gift cards because of a large increase of fraudulent gift card purchasing,” the company said. “Giant will now accept only a Bank PIN-based debit card or cash for all VISA, MasterCard, and American Express gift cards, as well as re-loadable and prepaid gift cards. This change has been made in order to mitigate potential fraud risk.”

It’s not clear why Giant is only just now taking this basic anti-fraud step. Card thieves love to pick on grocery and convenience stores. Street gangs involved in card fraud (and they’re all involved in card fraud now) often extract money from grocery, dollar and convenience stores using “runners” — low-level members who are assigned the occasionally risky business of physically “cashing out” counterfeit credit and debit cards.

One of the easiest ways thieves can cash out? Walk into a grocery or retail store and buy prepaid gift cards using stolen credit cards. Such transactions — if successful — effectively launder money by converting the stolen item (counterfeit/stolen card) into a good that is equivalent to cash or can be easily resold for cash (gift cards).

I witnessed this exact crime firsthand at a Giant in Maryland last year. As I noted in a Dec. 2015 post about gift card fraud, the crooks caught in the process of these cashout schemes usually are found with dozens of counterfeit credit cards on their person or in their vehicle. From that post:

“The man in front of me in line looked and smelled homeless. The only items he was trying to buy were several $200 gift cards that Giant had on sale for various retailers. When the first card he swiped was declined, the man fished two more cards out of his wallet. Each was similarly declined, but the man just shrugged and walked out of the store. I asked the cashier if this sort of thing happened often, and he just shook his head and said, ‘Man, you have no idea.'”

Meanwhile, every Giant I visit still asks me to swipe my chip-based card, effectively negating any added security the chip provides. Chip-based cards are far more expensive and difficult for thieves to counterfeit, and they can help mitigate the threat from most modern card-skimming methods that read the cardholder data in plain text from the card’s magnetic stripe. Those include malicious software at the point-of-sale terminal, as well as physical skimmers placed over card readers at self-checkout lanes — like this one found at a Maryland Safeway earlier this year.

In a recent column – The Great EMV Fake-Out: No Chip for You! – I explored why so few retailers currently allow or require chip transactions, even though many of them already have all the hardware in place to accept chip transactions. I suspect also that grocers are reluctant to introduce chip readers at self-checkout lanes, as more supermarket chains seem to be pushing customers in the self-checkout direction.

How to solve every problem in the world

Postby Dag-Erling Smørgrav via May Contain Traces of Bolts »

  1. Identify a complex problem in country A which is deeply rooted in that country’s demography / economy / culture / political system.
  2. Point out that country B, which has a completely different demography / economy / culture / political system, does not have that problem or has found a simple solution to it.
  3. Declare that the problem is trivial and that country A are idiots for having it in the first place.
  4. Job done, have a beer.

Tuxedo XC1506 Battery

Postby bed via Zockertown: Nerten News »

In my first article about the new laptop, I grumbled about the missing settings for the battery charging strategy. Now I have found an FAQ entry that addresses exactly this problem.

On the Tuxedo XC1506, the BIOS function "Flexicharge" takes care of this; see the FAQ article in the Tuxedo support forum.

That rids me of the risky micro charge cycles and the "always full throttle" charging to 100%.

US-CERT to Windows Users: Dump Apple Quicktime

Postby BrianKrebs via Krebs on Security »

Microsoft Windows users who still have Apple Quicktime installed should ditch the program now that Apple has stopped shipping security updates for it, warns the Department of Homeland Security‘s U.S. Computer Emergency Readiness Team (US-CERT). The advice came just as researchers are reporting two new critical security holes in Quicktime that likely won’t be patched.

US-CERT cited an April 14 blog post by Christopher Budd at Trend Micro, which runs a program called Zero Day Initiative (ZDI) that buys security vulnerabilities and helps researchers coordinate fixing the bugs with software vendors. Budd urged Windows users to junk Quicktime, citing two new, unpatched vulnerabilities that ZDI detailed which could be used to remotely compromise Windows computers.

“According to Trend Micro, Apple will no longer be providing security updates for QuickTime for Windows, leaving this software vulnerable to exploitation,” US-CERT wrote. The advisory continued:

“Computers running QuickTime for Windows will continue to work after support ends. However, using unsupported software may increase the risks from viruses and other security threats. Potential negative consequences include loss of confidentiality, integrity, or availability of data, as well as damage to system resources or business assets. The only mitigation available is to uninstall QuickTime for Windows. Users can find instructions for uninstalling QuickTime for Windows on the Apple Uninstall QuickTime page.”

While the recommendations from US-CERT and others apparently came as a surprise to many, Apple has been distancing itself from QuickTime on Windows for some time now. In 2013, the Cupertino, Calif. tech giant deprecated all developer APIs for Quicktime on Windows.

Apple shipped an update to Quicktime in January 2016 that removed the Quicktime browser plugin on Windows systems, meaning the threat from browser-based attacks on Quicktime flaws was largely mitigated over the past few months for Windows users who have been keeping up to date with the latest version. Nevertheless, if you have Quicktime on a Windows box — do yourself a favor and get rid of it.

Update, Apr. 21, 10:00 a.m. ET: Apple has finally posted a support document online that explains QuickTime 7 for Windows is no longer supported by Apple. See the full advisory here.

iwlwifi sometimes acts up after resume (workaround)

Postby bed via Zockertown: Nerten News »

By now, my Tuxedo XC1506 is essentially set up completely and everything is in good order.

Wi-Fi is always in airplane mode after a resume, i.e. when the laptop lid is opened again.

That is no big deal; pressing Fn F11 switches WLAN back on, it happens quickly, and you are reconnected to the home WLAN.

Recently, however, it has happened three times already that the airplane-mode icon went away but no Wi-Fi icon appeared, not even the three dots that are shown during initialization.

I have not evaluated how this mechanism is currently implemented in the system; surely a single entry would suffice to unload iwlwifi and/or the iwlmvm module and reload them on resume.

As a workaround, until I have found the proper way, I do it with the following mini-script:

I named it /usr/local/bin/wifi-repair.sh



CODE:
sudo rmmod iwlmvm
sudo rmmod iwlwifi
sleep 1
sudo modprobe iwlwifi
sudo modprobe iwlmvm
With this I am able, should the need arise, to quickly reactivate Wi-Fi.
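Until the proper mechanism is found, one way to automate this on a systemd-based system would be a resume unit along the following lines; the unit name is made up, and the sudo calls in the script become unnecessary because the service runs as root:

CODE:
# /etc/systemd/system/wifi-repair.service (hypothetical name)
[Unit]
Description=Reload iwlwifi/iwlmvm after resume
After=suspend.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/wifi-repair.sh

[Install]
WantedBy=suspend.target

Enable it once with systemctl enable wifi-repair.service, and it will run on every wake-up.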

Test

Postby blueness via Anthony G. Basile »

test test test … just testing to see if venus is working on planet.gentoo.org

Why automated gentoo-mirror commits are not signed and how to verify them

Postby Michał Górny via Michał Górny »

Those of you who use my Gentoo repository mirrors may have noticed that the repositories are constructed of original repository commits automatically merged with cache updates. While the original commits are signed (at least in the official Gentoo repository), the automated cache updates and merge commits are not. Why?

Actually, I have wondered about signing them more than once, and even discussed it a bit with Kristian. However, each time I decided against it. I was seriously concerned that those automatic signatures would not be able to provide a sufficient security level — and could cause users to believe the commits are authentic even if they were not. I think it would be useful to explain why.



Verifying the original commits

While this may not be entirely clear, by signing the merge commits I would implicitly approve the original commits as well. While this might be worked around via some kind of policy requiring the developer to perform additional verification, such a policy would be impractical and confusing. Therefore, the only reasonable option is to verify the original commits before signing merges.

The problem with that is that we still do not have an official verification tool for repository commits. There’s the whole Gentoo-keys project that aims to eventually solve the problem, but it’s not there yet. Maybe this year’s Summer of Code will change that…

Not having official verification routines, I would have to implement my own. I’m not saying it would be that hard — but it would always be semi-official, at best. Of course, I could spend a day or two contributing the needed code to Gentoo-keys and preventing some student from getting the $5500 of Google money… but that would be the non-enterprise way of solving the urgent problem.

Protecting the signing key

The other important point is the security of the key used to sign commits. For the whole effort to make any sense, the key needs to be strongly protected against compromise. Keeping it (or even a subkey) unencrypted on the server really diminishes the whole effort (I’m not pointing fingers here!).

Basic rules first: the primary key is kept off-line and used only to generate the signing subkey. The signing subkey is stored encrypted on the server and used via gpg-agent, so that it is never kept unencrypted outside of memory. All nice and shiny.
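For illustration, a setup along those lines could be created with a recent GnuPG (2.1 or newer) roughly as follows; the user ID, algorithms, validity period and file name are placeholders:

# on the offline machine: a certify-only primary key with no expiry
gpg --quick-generate-key "Mirror Bot <mirror@example.org>" rsa4096 cert never

# add a signing subkey with a limited lifetime
gpg --quick-add-key <primary-key-fingerprint> rsa4096 sign 1y

# export only the secret subkey for the server; the primary
# secret key never leaves the offline machine
gpg --export-secret-subkeys <primary-key-fingerprint> > signing-subkey.gpg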

The problem is — this means someone needs to type the password in. Which means there needs to be an interactive bootstrap process. Which means that every time the server reboots for some reason, or gpg-agent dies, or whatever, the mirrors stop and wait for me to come and type the password in. Hopefully when I’m around some semi-secure device.

Protecting the software

Even with all those points considered and solved satisfactorily, there’s one more issue: the software. I won’t be running all those scripts at home. So it’s not just me you have to trust — you have to trust all the other people with administrative access to the machine that’s running the scripts, and you have to trust the employees of the hosting company who have physical access to the machine.

I mean, any one of them could go and attempt to alter the data somehow. Even if I tried hard, I wouldn’t be able to protect my scripts from this. In the worst case, they would end up adding a valid, verified signature to data that had been altered externally. What’s the value of that signature then?

And this is the exact reason why I don’t do automatic signatures.

How to verify the mirrors then?

So if automatic signatures are not the way, how can you verify the commits on repository mirrors? The answer is not that complex.

As I’ve mentioned, the mirrors use merge commits to combine metadata updates with original repository commits. What’s important is that this preserves the original commits, along with their valid signatures and therefore provides a way to verify them. What’s the use of that?

Well, you can look for the last merge commit to find the matching upstream commit. Then you can use the usual procedure to verify the upstream commit. And then, you can diff it against the mirror HEAD to see that only caches and other metadata have been altered. While this doesn’t guarantee that the alterations are genuine, the danger coming from them is rather small (if any).
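As a rough sketch, the procedure could look like the following with plain git; it assumes the matching upstream commit is recorded as the second parent of each merge, which is worth double-checking for the repository at hand:

# find the most recent merge commit on the mirror branch
merge=$(git rev-list --merges -n 1 HEAD)

# its second parent should be the matching upstream commit
upstream=$(git rev-parse "${merge}^2")

# verify the upstream commit's signature the usual way
git verify-commit "$upstream"

# confirm that the mirror only added caches and other metadata on top
git diff "$upstream" HEAD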

‘Blackhole’ Exploit Kit Author Gets 7 Years

Postby BrianKrebs via Krebs on Security »

A Moscow court this week convicted and sentenced seven hackers for breaking into countless online bank accounts — including “Paunch,” the nickname used by the author of the infamous “Blackhole” exploit kit.  Once an extremely popular crimeware-as-a-service offering, Blackhole was for several years responsible for a large percentage of malware infections and stolen banking credentials, and likely contributed to tens of millions of dollars stolen from small to mid-sized businesses over several years.

Fedotov, the convicted creator of the Blackhole exploit kit, stands in front of his Porsche Cayenne in an undated photo.
According to Russia’s ITAR-TASS news network, Dmitry “Paunch” Fedotov was sentenced on April 12 to seven years in a Russian penal colony. In October 2013, the then 27-year-old Fedotov was arrested along with an entire team of other cybercriminals who worked to sell, develop and profit from Blackhole.

According to Russian security firm Group-IB, Paunch had more than 1,000 customers and was earning $50,000 per month from his illegal activity. The image at right shows Paunch standing in front of his personal car, a Porsche Cayenne.

First spotted in 2010, BlackHole is commercial crimeware designed to be stitched into hacked or malicious sites and exploit a variety of Web-browser vulnerabilities for the purposes of installing malware of the customer’s choosing.

The price of renting the kit ran from $500 to $700 each month. For an extra $50 a month, Paunch also rented out “crypting” services to customers; cryptors are designed to obfuscate malicious software so that it remains undetectable by antivirus software.

Paunch worked with several other cybercriminals to purchase new exploits and security vulnerabilities that could be rolled into Blackhole and help increase the success of the software. He eventually sought to buy the exploits from other cybercrooks directly to fund a pricier ($10,000/month) and more exclusive exploit pack called “Cool Exploit Kit.”

The main page of the Blackhole exploit kit Web interface.
As documented on this blog in January 2013 (see Crimeware Author Funds Exploit Buying Spree), Paunch contracted with a third-party exploit broker who announced that he had a $100,000 budget for buying new, previously undocumented “zero-day” vulnerabilities.

Not long after that story, the individual with whom Paunch worked to purchase those exclusive exploits — a miscreant who uses the nickname “J.P. Morgan” — posted a message to the Darkode[dot]com crime forum, stating that he was doubling his exploit-buying budget to $200,000.

In October 2013, shortly after news of Paunch’s arrest leaked to the media, J.P. Morgan posted to Darkode again, this time more than doubling his previous budget — to $450,000.

“Dear ladies and gentlemen! In light of recent events, we look to build a new exploit kit framework. We have budgeted $450,000 to buy vulnerabilities of a browser and its plugins, which will be used only by us afterwards! ”

J.P. Morgan alludes to his former partner’s arrest, and ups his monthly exploit buying budget to $450,000.
The Russian Interior Ministry (MVD) estimates that Paunch and his gang earned more than 70 million rubles, or roughly USD $2.3 million. But this estimate is misleading because Blackhole was used as a means to perpetrate a vast array of cybercrimes. I would argue that Blackhole was perhaps the most important driving force behind an explosion of cyber fraud over the past three years. A majority of Paunch’s customers were using the kit to grow botnets powered by Zeus and Citadel, banking Trojans that are typically used in cyberheists targeting consumers and small businesses.

For more about Paunch, check out Who is Paunch?, a profile I ran in 2013 shortly after Fedotov’s arrest that examines some of the clues that connected his online criminal persona with his personal social networking profiles.

Update, 1:42: Corrected headline.

pfsense 2.3, now on FreeBSD 10.3 with pkg

Postby Dan Langille via Dan Langille's Other Diary »

I upgraded my pfSense box to 2.3 last night. Here is what I got:

uname -a
FreeBSD bast.int.unixathome.org 10.3-RELEASE FreeBSD 10.3-RELEASE #4 05adf0a(RELENG_2_3_0): Mon Apr 11 19:09:19 CDT 2016 root@factory23-amd64-builder:/builder/factory-230/tmp/obj/builder/factory-230/tmp/FreeBSD-src/sys/pfSense amd64

These are the package repos they are using (as taken from pkg -vv):

Repositories:
  pfSense-core: {
    url : "pkg+http://firmware.netgate.com/pkg/pfSense_factory-v2_3_0_amd64-core",
    enabled : yes,
    priority : [...]

‘Badlock’ Bug Tops Microsoft Patch Batch

Postby BrianKrebs via Krebs on Security »

Microsoft released fixes on Tuesday to plug critical security holes in Windows and other software. The company issued 13 patches to tackle dozens of vulnerabilities, including a much-hyped “Badlock” file-sharing bug that appears ripe for exploitation. Also, Adobe updated its Flash Player release to address at least two-dozen flaws — in addition to the zero-day vulnerability Adobe patched last week.

Source: badlock.org
The Windows patch that seems to be getting the most attention this month remedies seven vulnerabilities in Samba, a service used to manage file and print services across networks and multiple operating systems. This may sound innocuous enough, but attackers who gain access to a private or corporate network could use these flaws to intercept traffic, view or modify user passwords, or shut down critical services.

According to badlock.org, a Web site set up to disseminate information about the widespread nature of the threat that this vulnerability poses, we are likely to see active exploitation of the Samba vulnerabilities soon.

Two of the Microsoft patches address flaws that were disclosed prior to Patch Tuesday. One of them is included in a bundle of fixes for Internet Explorer. A critical update for the Microsoft Graphics Component targets four vulnerabilities, two of which have been detected already in exploits in the wild, according to Chris Goettl at security vendor Shavlik.

Just a reminder: If you use Windows and haven’t yet taken advantage of the Enhanced Mitigation Experience Toolkit, a.k.a. “EMET,” you should definitely consider it. I describe the basic features and benefits of running EMET in this blog post from 2014 (yes, it’s time to revisit EMET in a future post), but the gist of it is that EMET helps block or blunt exploits against known and unknown Windows vulnerabilities and flaws in third-party applications that run on top of Windows. The latest version, v. 5.5, is available here.

On Friday, Adobe released an emergency update for Flash Player to fix a vulnerability that is being actively exploited in the wild and used to foist malware (such as ransomware). Adobe updated its advisory for that release to include fixes for 23 additional flaws.

As I noted in last week’s piece on the emergency Flash patch, most users are better off hobbling or removing Flash altogether. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent version for Mac and Windows users is 21.0.0.213, and should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). Chrome and IE should auto-install the latest Flash version on browser restart (I had to manually restart Chrome to get the latest Flash version).

Fly Hawaii

Postby via www.my-universe.com Blog Feed »

At the moment, I am enjoying short flights between the islands of Hawaii, an ideal playground for turboprop pilots. Here you can happily fly VFR even in larger aircraft and leave the GPS switched off for a change. The islands offer great scenic variety and thus good visual reference points. Many airfields lie on the coast, so they are relatively easy to find. The weather conditions on Hawaii also usually permit visual navigation.


X-Plane pilots, take note: Hawaii offers a whole range of free scenery add-ons. Particularly noteworthy, in my view, are the airport sceneries by NAPS (Freddy De Pues, Hans H. Gindra & Marc Leydecker), which cover eight airfields spread across most of the (inhabited) islands – namely Kona and Hilo on the main island of Hawaii; Kahului, Lahaina and Hana on Maui; Honolulu and Dillingham on Oahu; and Lihue on Kauai. These scenery add-ons can be downloaded here. Further airfields on Lanai and Molokai can also be found on x-plane.org.

The airports and airfields themselves, however, contribute only a little to the scenic charm of the islands. If you want to sit back and enjoy the landscape from a propeller aircraft, I can warmly recommend the scenery add-ons from Hawaii Photoreal. For X-Plane, three of the eight islands have been covered so far (I'm not counting Molokini as an island, even though you can make a nice stopover there with a LISA Akoya…). The results so far are promising, and donors are to receive access to a Plus version with season-dependent textures once it is finished.

Last but not least, here are a few screenshots as appetizers, taken with the scenery add-ons described above as well as the payware add-ons SkyMaxx Pro 3.1, Leading Edge Simulations Saab 340A (version 1.3), RWDesigns DHC-6 Series 300 (version 1.2) and JARDesign Airbus A330-200 (version 1.2r3).


Update


Thu, 14 Apr 2016
Hawaii Photoreal has followed up: the island of Molokai is now also available for X-Plane. However, the download is not on the project website yet; for now it can only be found at X-Pilot.

Top

Bash: souping up the command history

Postby bed via Zockertown: Nerten News »

(Original article from 2005) The things you discover when you have some spare time again... Bash is so powerful and comfortable that most of us use and exploit only a small percentage of it. Searching the web for descriptions of the file /etc/inputrc, for example, turns up all kinds of pages that have obviously all copied from one another. A short article on www.allweil.net/blog/ got me to look into it a bit. The two entries
#/etc/inputrc:
# alternate mappings for "page up" and "page down" to search the history
"\e[5~": history-search-backward
"\e[6~": history-search-forward
have the effect that, for example, you can search the command history with the Page Up and Page Down keys. That is, if you have typed ssh, pressing Page Up takes you to the matching entries in the input history; only, of course, if you have entered ssh commands at some point before. This makes searching much faster than using Control-R: Control-R searches the entire input, whereas Page Up/Page Down only shows the matching hits that begin with the search term. The interesting part for me was that this functionality is already enabled in Suse, for example, but not in Kanotix. And I'm sure very few Suse users know about it. But back to the topic: what I want to say is that these wonderful features are simply far too little known, and a hint like alternate mappings for "page up" and "page down" to search the history doesn't exactly rub your nose in them. IMO we should write a more detailed description of these possibilities with examples, so that the nice gimmicks don't just lie fallow. More info at Robertkehl.de. I originally stumbled across this via www.allweil.net/blog/.
[update 2008-10-02] I just spent ages searching my own blog for exactly this post; somehow the right search terms are still missing here, so let's see: completing a started command line, profile, bash tip, searching the history, searching through it. Completing a command with Page Up/Page Down.

Thematically related: rescuing the bash history instead of overwriting it. Well, let's see whether I'll find it when I need it again in three years. Edit: enabled by default in Lenny. In Elive 2.0 it is not enabled.

[update 2013-01-30] If you don't want a system-wide setting, it also works with ~/.inputrc, and thus also on a system where you have no root privileges.
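
For example, a per-user setup could look like this (a minimal sketch; bash's bind -f loads the file into the running shell, so you don't even have to log out and back in):

$ cat >> ~/.inputrc <<'EOF'
# alternate mappings for "page up" and "page down" to search the history
"\e[5~": history-search-backward
"\e[6~": history-search-forward
EOF
$ bind -f ~/.inputrc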

[update 2016-04-13] You can also change the mapping of the cursor keys:

## arrow up
"\e[A":history-search-backward
## arrow down
"\e[B":history-search-forward
This has its own charm and works because, as long as no character has been entered yet, it simply switches to the next or previous entry, which matches the original behavior of the cursor keys.

Top

New Threat Can Auto-Brick Apple Devices

Postby BrianKrebs via Krebs on Security »

If you use an Apple iPhone, iPad or other iDevice, now would be an excellent time to ensure that the machine is running the latest version of Apple’s mobile operating system — version 9.3.1. Failing to do so could expose your devices to automated threats capable of rendering them unresponsive and perhaps forever useless.

Zach Straley demonstrating the fatal Jan. 1, 1970 bug. Don’t try this at home!
On Feb. 11, 2016, researcher Zach Straley posted a Youtube video exposing his startling and bizarrely simple discovery: manually setting the date of your iPhone or iPad all the way back to January 1, 1970 will permanently brick the device (don’t try this at home, or against frenemies!).

Now that Apple has patched the flaw that Straley exploited with his fingers, researchers say they’ve proven how easy it would be to automate the attack over a network, so that potential victims would need only to wander within range of a hostile wireless network to have their pricey Apple devices turned into useless bricks.

Not long after Straley’s video began pulling in millions of views, security researchers Patrick Kelley and Matt Harrigan wondered: Could they automate the exploitation of this oddly severe and destructive date bug? The researchers discovered that indeed they could, armed with only $120 of electronics (not counting the cost of the bricked iDevices), a basic understanding of networking, and a familiarity with the way Apple devices connect to wireless networks.

Apple products like the iPad (and virtually all mass-market wireless devices) are designed to automatically connect to wireless networks they have seen before. They do this with a relatively weak level of authentication: If you connect to a network named “Hotspot” once, going forward your device may automatically connect to any open network that also happens to be called “Hotspot.”

For example, to use Starbucks’ free Wi-Fi service, you’ll have to connect to a network called “attwifi”. But once you’ve done that, you won’t have to manually connect to a network called “attwifi” ever again. The next time you visit a Starbucks, just pull out your iPad and the device automagically connects.

From an attacker’s perspective, this is a golden opportunity. Why? He only needs to advertise a fake open network called “attwifi” at a spot where large numbers of computer users are known to congregate. Using specialized hardware to amplify his Wi-Fi signal, he can force many users to connect to his (evil) “attwifi” hotspot. From there, he can attempt to inspect, modify or redirect any network traffic for any iPads or other devices that unwittingly connect to his evil network.

TIME TO DIE

And this is exactly what Kelley and Harrigan say they have done in real-life tests. They realized that iPads and other iDevices constantly check various “network time protocol” (NTP) servers around the globe to sync their internal date and time clocks.

The researchers said they discovered they could build a hostile Wi-Fi network that would force Apple devices to download time and date updates from their own (evil) NTP time server: And to set their internal clocks to one infernal date and time in particular: January 1, 1970.
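
Plain NTP carries no authentication, which is why this works at all; the same kind of query can be watched from any machine. For example, with the ntpdate utility (the -q flag only queries the server and reports the offset, it does not set the clock):

$ ntpdate -q time.apple.com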

Harrigan and Kelley named their destructive Wi-Fi test network “Phonebreaker.”
The result? The iPads that were brought within range of the test (evil) network rebooted, and began to slowly self-destruct. It’s not clear why they do this, but here’s one possible explanation: Most applications on an iPad are configured to use security certificates that encrypt data transmitted to and from the user’s device. Those encryption certificates stop working correctly if the system time and date on the user’s mobile is set to a year that predates the certificate’s issuance.

Harrigan and Kelley said this apparently creates havoc with most of the applications built into the iPad and iPhone, and that the ensuing bedlam as applications on the device compete for resources quickly overwhelms the iPad’s computer processing power. So much so that within minutes, they found their test iPad had reached 130 degrees Fahrenheit (54 Celsius), as the date and clock settings on the affected devices inexplicably and eerily began counting backwards.

Harrigan, president and CEO of San Diego-based security firm PacketSled, described the meltdown thusly:

“One thing we noticed was when we set the date on the iPad to 1970, the iPad display clock started counting backwards. While we were plugging in the second test iPad 15 minutes later, the first iPad said it was Dec. 15, 1968. I looked at Patrick and was like, ‘Did you mess with that thing?’ He hadn’t. It finally stopped at 1965, and by that time [the iPad] was about the temperature I like my steak served at.”

Kelley, a senior penetration tester with CriticalAssets.com, said he and Harrigan worked with Apple to coordinate the release of their findings to ensure doing so didn’t predate Apple’s issuance of a fix for this vulnerability. The flaw is present in all Apple devices running anything lower than iOS 9.3.1.

Apple did not respond to requests for comment. But an email shared by the researchers apparently sent by Apple’s product security team suggests the company’s researchers were unable to force an affected device to heat to more than 45.8 degrees Celsius (~114 degrees Fahrenheit). The note read:

“1) We confirmed that iOS 9.3 addresses the issue that left a device unresponsive when the date is set to 1/1/1970.

2) A device affected by this issue can be restored to iOS 9.3 or later. iTunes restored the iPad Air you provided to us for inspection.

3) By examining the device, we determined that the battery temperature did not exceed 45.8 degrees centigrade.”

EVIL HARDWARE

According to Harrigan and Kelley, the hardware needed to execute this attack is little more than a common Raspberry Pi device with some custom software.

“By spoofing time.apple.com, we were able to roll back the time and have it hand out to all Apple clients on the network,” the researchers wrote in a paper shared with KrebsOnSecurity. “All test devices took the update without question and rolled back to 1970.”

The hardware used to automate an attack against the 1970 bug, including a Raspberry Pi and an Alfa antenna.
The researchers continued: “An interesting side effect was that this caused almost all web browsing traffic to cease working due to time mismatch. Typically, this would prompt a typical user to reboot their device. So, we did that. At this point, we could confirm that the reboot caused all iPads in test to degrade gradually, beginning with the inability to unlock, and ultimately ending with the device overheating and not booting at all. Apple has confirmed this vulnerability to be present in 64 bit devices that are running any version less than 9.3.1.”

Harrigan and Kelley say exploiting this bug on an Apple iPhone device is slightly trickier because iPhones get their network time updates via GSM, the communications standard the devices use to receive and transmit cell phone signals. But they said it may be possible to poison the date and time on iPhones using updates fed to the devices via GSM.

They pointed to research by Brandon Creighton, a research architect at software testing firm Veracode who is perhaps best known for setting up the NinjaTel GSM mobile network at the massive DefCon security conference in 2012. Creighton’s network relied on a technology called OpenBTS — a software based GSM access point. Harrigan and Kelley say an attacker could set up his own mobile (evil) network and push date and time updates to any phones that ping the evil tower.

“It is completely plausible that this vulnerability is exploitable over GSM using OpenBTS or OpenBSC to set the time,” Kelley said.

Creighton agreed, saying that his own experience testing and running the NinjaTel network shows that it’s theoretically possible, although he allows that he’s never tried it.

“Just from my experimentation, theoretically from a protocol level you can do it,” Creighton wrote in a note to KrebsOnSecurity. “But there are lots of factors (the carrier; the parameters on the SIM card; the phone’s locked status; the kind of phone; the baseband version; previously joined networks; neighboring towers; RF signal strength; and more).  If you’re just trying to cause general chaos, you don’t need to work very hard. But if, say, you were trying to target an individual device, it would require an additional amount of prep time/recon.”

Whether or not this attack could be used to remotely ruin iPhones or turn iPads into expensive skillets, it seems clear that failing to update to the latest version of Apple iOS is a less-than-stellar idea. iPad users who have not updated their OS need to be extremely cautious with respect to joining networks that they don’t know or trust.

iOS and Mac OS X have a feature that allows users to prevent their devices from automatically joining wireless networks. Enabling this “Ask to Join Networks” feature blocks Apple devices from automatically joining networks they have never seen before, though the side effect is that the device may frequently toss up prompts asking if you wish to join any one of several available wireless networks (these prompts go away again if you unselect “Ask to Join Networks”). But enabling it doesn’t prevent the device from connecting to, say, “attwifi” if it has previously connected to a network of that name.

The researchers have posted a video on Youtube that explains their work in greater detail.

Update, 1:08 p.m. ET: Added link to video and clarified how Apple’s “ask to join networks” feature works.
Top

Linux firmware for iwlwifi ucode failed with error -2

Postby Zach via The Z-Issue »

Important!

My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!
A couple weeks ago, I decided to update my primary laptop’s kernel from 4.0 to 4.5. Everything went smoothly with the exception of my wireless networking. This particular laptop uses a wifi chipset that is controlled by the Intel Wireless DVM Firmware:

# lspci | grep 'Network controller'
03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 34)

According to Intel’s Linux support for wireless networking page, I need kernel support for the ‘iwlwifi’ driver. I remembered this requirement from building the previous kernel, so I included it in the new 4.5 kernel. The new kernel had some additional options, though, and they were:

[*] Intel devices
...
< > Intel Wireless WiFi Next Gen AGN - Wireless-N/Advanced-N/Ultimate-N (iwlwifi)
< > Intel Wireless WiFi DVM Firmware support
< > Intel Wireless WiFi MVM Firmware support
Debugging Options --->

As previously mentioned, the Kernel page for iwlwifi indicates that I need the DVM module for my particular chipset, so I selected it. Previously, I chose to build support for the driver into the kernel, and then use the firmware for the device. However, this time, I noticed that it wasn’t loading:

[ 3.962521] iwlwifi 0000:03:00.0: can't disable ASPM; OS doesn't have ASPM control
[ 3.970843] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-6000g2a-6.ucode failed with error -2
[ 3.976457] iwlwifi 0000:03:00.0: loaded firmware version 18.168.6.1 op_mode iwldvm
[ 3.996628] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUG enabled
[ 3.996640] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUGFS disabled
[ 3.996647] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TRACING enabled
[ 3.996656] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Advanced-N 6205 AGN, REV=0xB0
[ 3.996828] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 4.306206] iwlwifi 0000:03:00.0 wlp3s0: renamed from wlan0
[ 9.632778] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.633025] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.633133] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 9.898531] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.898803] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.898906] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 20.605734] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.605983] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.606082] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 20.873465] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.873831] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.873971] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0

The strange thing, though, is that the firmware was right where it should be:

# ls -lh /lib/firmware/
total 664K
-rw-r--r-- 1 root root 662K Mar 26 13:30 iwlwifi-6000g2a-6.ucode

After digging around for a while, I finally figured out the problem. The kernel was trying to load the firmware for this device/driver before it was actually available. There are definitely ways to build the firmware into the kernel image as well, but instead of going that route, I just chose to rebuild my kernel with this driver as a module (which is actually the recommended method anyway):

[*] Intel devices
...
<M> Intel Wireless WiFi Next Gen AGN - Wireless-N/Advanced-N/Ultimate-N (iwlwifi)
<M> Intel Wireless WiFi DVM Firmware support
< > Intel Wireless WiFi MVM Firmware support
Debugging Options --->
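
After rebooting into the new kernel, a quick sanity check confirms the new setup (a minimal sketch; modinfo's -F option prints a single field of the module metadata):

# lsmod | grep iwlwifi        # is the driver now loaded as a module?
# modinfo -F firmware iwlwifi # which firmware files can it request?
# dmesg | grep -i firmware    # did the firmware load succeed this time?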

If I had fully read the page instead of just skimming it, I could have saved myself a lot of time. Hopefully this post will help anyone getting the “Direct firmware load for iwlwifi-6000g2a-6.ucode failed with error -2” error message.

Cheers,
Zach
Top

Adobe Patches Flash Player Zero-Day Threat

Postby BrianKrebs via Krebs on Security »

Adobe Systems this week rushed out an emergency patch to plug a security hole in its widely-installed Flash Player software, warning that the vulnerability is already being exploited in active attacks.

Adobe said a “critical” bug exists in all versions of Flash up to and including version 21.0.0.197, across a broad range of systems, including Windows, Mac, Linux and Chrome OS. Find out if you have Flash and if so what version by visiting this link.

In a security advisory, the software maker said it is aware of reports that the vulnerability is being actively exploited on systems running Windows 7 and Windows XP with Flash Player version 20.0.0.306 and earlier. 

Adobe said additional security protections built into Flash versions 21.0.0.182 and newer should block this flaw from being exploited. But even if you’re running one of the newer versions of Flash with the additional protections, you should update, hobble or remove Flash as soon as possible.

The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent versions of Flash should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). Chrome and IE should auto-install the latest Flash version on browser restart (I had to manually restart Chrome to get the latest Flash version).

By the way, I’m not the only one trying to make it easier for people to put a lasso on Flash: in a blog post today, Microsoft said that for Microsoft Edge users on Windows 10, the browser will auto-pause Flash content that is not central to the Web page. The new feature will be available in Windows 10 build 14316.

“Peripheral content like animations or advertisements built with Flash will be displayed in a paused state unless the user explicitly clicks to play that content,” wrote the Microsoft Edge team. “This significantly reduces power consumption and improves performance while preserving the full fidelity of the page. Flash content that is central to the page, like video and games, will not be paused. We are planning for and look forward to a future where Flash is no longer necessary as a default experience in Microsoft Edge.”

Additional reading on this vulnerability:

Kafeine‘s Malware Don’t Need Coffee Blog on active exploitation of the bug.

Trend Micro’s take on evidence that thieves have been using this flaw in automated attacks since at least March 31, 2016.
Top

tuxedo XC1506

Postby bed via Zockertown: Nerten News »

2016-03-21: The day finally arrived: the new laptop is here.

I ordered the OCZ Trion 150 240GB SSD for the operating system from Reichelt; the second drive is a WD Blue Mobile 1TB (7mm).


TUXEDO Book XC1506 – 15.6″ matte Full-HD IPS display; RAM (DDR4 SO-DIMM): 16 GB (2x 8GB) 2400MHz Kingston; graphics card: NVIDIA GeForce GTX 970M 3GB; CPU: Intel Core i7-6700HQ
For the installation I used this netinst ISO (2016-03-14):

cdimage.debian.org/cdimage/unofficial/non-free/cd-including-firmware/weekly-builds/amd64/iso-cd/

Unfortunately the nouveau driver led to freezes on login.

Booting with the parameter ACPI=NO worked around that, though. With the stock Nvidia driver the problem is history.
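
If you want to make such a boot parameter permanent on Debian or Ubuntu, the usual route is GRUB's defaults file (a sketch; note that the canonical spelling of the kernel switch is acpi=off):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=off"

# afterwards, regenerate the GRUB configuration and reboot:
$ sudo update-grub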

To have a look at Tuxedo's customizations, I figured it would be a good idea to simply install Ubuntu 16.04. Since the system runs smoothly, I have now just installed gnome-shell on top so I can get rid of the unfamiliar Unity GUI.

The customizations for the special keys are documented at www.linux-onlineshop.de/forum/index.php?page=Thread&threadID=41

After 14 days of use I am now very happy with my new laptop. There are only two shortcomings that bother me a little.

One is the fingerprint reader in place of a middle mouse button. Sure, you can live with the emulation of the third mouse button, but the hit rate is only close to 100%; a physical button would simply be better.

Maybe I will still find a way to abuse the fingerprint reader for that purpose.

The second shortcoming is that the charging thresholds cannot be adjusted, above all the charge limit for the internal battery.

I can only hope that the manufacturer has done its homework and the battery is not really always charged to 100%, even if the indicator suggests it.

In that regard I prefer Lenovo's proprietary approach (keyword: TLP).



PS: installed XCOM Enemy Within: it starts insanely fast, with a frame rate between 55 and over 100 in a mission.
Top

FBI: $2.3 Billion Lost to CEO Email Scams

Postby BrianKrebs via Krebs on Security »

The U.S. Federal Bureau of Investigation (FBI) this week warned about a “dramatic” increase in so-called “CEO fraud,” e-mail scams in which the attacker spoofs a message from the boss and tricks someone at the organization into wiring funds to the fraudsters. The FBI estimates these scams have cost organizations more than $2.3 billion in losses over the past three years.

In an alert posted to its site, the FBI said that since January 2015, the agency has seen a 270 percent increase in identified victims and exposed losses from CEO scams. The alert noted that law enforcement globally has received complaints from victims in every U.S. state, and in at least 79 countries.

A typical CEO fraud attack. Image: Phishme
CEO fraud usually begins with the thieves either phishing an executive and gaining access to that individual’s inbox, or emailing employees from a look-alike domain name that is one or two letters off from the target company’s true domain name. For example, if the target company’s domain was “example.com” the thieves might register “examp1e.com” (substituting the numeral 1 for the letter “l”) or “example.co,” and send messages from that domain.

Unlike traditional phishing scams, spoofed emails used in CEO fraud schemes rarely set off spam traps because these are targeted phishing scams that are not mass e-mailed. Also, the crooks behind them take the time to understand the target organization’s relationships, activities, interests and travel and/or purchasing plans.

They do this by scraping employee email addresses and other information from the target’s Web site to help make the missives more convincing. In the case where executives or employees have their inboxes compromised by the thieves, the crooks will scour the victim’s email correspondence for certain words that might reveal whether the company routinely deals with wire transfers — searching for messages with key words like “invoice,” “deposit” and “president.”

On the surface, business email compromise scams may seem unsophisticated relative to moneymaking schemes that involve complex malicious software, such as Dyre and ZeuS. But in many ways, CEO fraud is more versatile and adept at sidestepping basic security strategies used by banks and their customers to minimize risks associated with account takeovers. In traditional phishing scams, the attackers interact with the victim’s bank directly, but in the CEO scam the crooks trick the victim into doing that for them.

The FBI estimates that organizations victimized by CEO fraud attacks lose on average between $25,000 and $75,000. But some CEO fraud incidents over the past year have cost victim companies millions — if not tens of millions — of dollars. 

Last month, the Associated Press wrote that toy maker Mattel lost $3 million in 2015 thanks to a CEO fraud phishing scam. In 2015, tech firm Ubiquiti disclosed in a quarterly financial report that it suffered a whopping $46.7 million hit because of a CEO fraud scam. In February 2015, email con artists made off with $17.2 million from The Scoular Co., an employee-owned commodities trader. More recently, I wrote about a slightly more complex CEO fraud scheme that incorporated a phony phone call from a phisher posing as an accountant at KPMG.

The FBI urges businesses to adopt two-step or two-factor authentication for email, where available, and to establish other communication channels — such as telephone calls — to verify significant transactions. Businesses are also advised to exercise restraint when publishing information about employee activities on their Web sites or through social media, as attackers perpetrating these schemes often will try to discover information about when executives at the targeted organization will be traveling or otherwise out of the office.

For an example of what some of these CEO fraud scams look like, check out this post from security education and awareness firm Phishme about scam artists trying to target the company’s leadership.

I’m always amazed when I hear security professionals I know and respect make comments suggesting that phishing and spam are solved problems. The right mix of blacklisting and email validation regimes like DKIM and SPF can block the vast majority of this junk, these experts argue.
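
Checking whether a domain publishes such policies takes only seconds, since SPF and DMARC live in plain DNS TXT records (example.com is just a placeholder here):

$ dig +short TXT example.com          # an SPF record shows up as "v=spf1 ..."
$ dig +short TXT _dmarc.example.com   # a DMARC policy as "v=DMARC1; p=..."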

But CEO fraud attacks succeed because they rely almost entirely on tricking employees into ignoring or sidestepping some very basic security precautions. Educating employees so that they are less likely to fall for these scams won’t block all social engineering attacks, but it should help. Remember, the attackers are constantly testing users’ security awareness. Organizations might as well be doing the same, using periodic tests to identify problematic users and to place additional security controls on those individuals.
Top

After Tax Fraud Spike, Payroll Firm Greenshades Ditches SSN/DOB Logins

Postby BrianKrebs via Krebs on Security »

Online payroll management firm Greenshades.com is an object lesson in how not to do authentication. Until very recently, the company allowed corporate payroll administrators to access employee payroll data online using nothing more than an employee’s date of birth and Social Security number. That is, until criminals discovered this and began mass-filing fraudulent tax refund requests with the IRS on large swaths of employees at firms that use the company’s services.

A notice on the Greenshades Web site.
Jacksonville, Fla.-based Greenshades posted an alert on its homepage stating that the company “has seen an abnormal increase in identity thieves using personal information to fraudulently log into the company’s system to access personal tax information.”

Many online services blame these sorts of attacks on customers re-using the same password at multiple sites, but Greenshades set customers up for this by allowing access to payroll records just by supplying the employee’s Social Security number and date of birth.

As this author has sought repeatedly to demonstrate, SSN/DOB information is extremely easy and cheap to obtain via multiple criminal-run Web sites: SSN/DOB data is reliably available for purchase from underground online crime shops for less than $4 per person (payable in Bitcoin only).

The spike in tax fraud against employees of companies that use Greenshades came to light earlier this month in various media stories. A number of employees at public high schools in Chicago discovered that crooks beat them to the punch on filing tax returns. An investigation into that incident suggested security weaknesses at Greenshades were to blame.

The Milwaukee Journal Sentinel wrote last month about tax fraud perpetrated against local county workers, fraud that also was linked to compromised Greenshades accounts. In Nebraska, the Lower Platte North Natural Resources District and Fremont Health hospital had a number of employees with tax fraud linked to compromised Greenshades accounts, according to a report in the Fremont Tribune.

Greenshades co-CEO Matthew Kane said the company allowed payroll administrators to access W2 information with nothing more than SSN and DOB for one simple reason: Many customers demanded it.

“There’s a valid reason to have what I call weak login credentials,” Kane told KrebsOnSecurity. “Some of our clients clamor for weaker login credentials, such as companies that have a large staff of temporary workers.”

Kane said customers have a “wide range of options” to select from in choosing how they will authenticate to Greenshades.com, but that the most secure option currently offered is a simple username and password.

When asked whether the company offers any sort of two-step or two-factor authentication, Kane argued that corporate email addresses assigned to company employees serve as a kind of second factor.

“In this case, the second factor would be having access to that corporate inbox,” Kane reasoned. He added that Greenshades is working on rolling out a 2-factor authentication feature that may not be optional going forward.

Kane said that although Greenshades heard from a “significant number” of its customers about unauthorized access to employee records, the company believes the overall percentage of affected employees at individual customer organizations was low.

However, in at least some of the reported incidents tied to this mess at Greenshades, the overall percentage has been quite high. In the case of the Lower Platte North NRD, for example, 90 percent of employees had their taxes filed fraudulently this year.

It’s remarkable that a company which specializes in helping firms manage sensitive tax and payroll data could be so lax with authentication. Unfortunately, shoddy authentication is still quite common — even among banks. In February, Pittsburgh, Pa.-based First National Bank alerted customers gained through a recent merger with Metro Bank that they could access the company’s bill pay and electronic banking portal by supplying their Metro Bank username and the last four digits of their Social Security number.

A letter from First National Bank to its customers.
Relying on static data elements like SSNs and birthdays for authentication is a horrible idea all around. These data points are no longer secret because they are broadly available for sale on most Americans, and companies have no business using them for authentication.
Top

Sources: Trump Hotels Breached Again

Postby BrianKrebs via Krebs on Security »

Banking industry sources tell KrebsOnSecurity that the Trump Hotel Collection — a string of luxury properties tied to business magnate and Republican presidential candidate Donald Trump — appears to be dealing with another breach of its credit card systems. If confirmed, this would be the second such breach at the Trump properties in less than a year.

Trump International Hotel in New York.
A representative from Trump Hotels said the organization was investigating the claims.

“We are in the midst of a thorough investigation on this matter,” the company said in a written statement. “We are committed to safeguarding all guests’ personal information and will continue to do so vigilantly.”

KrebsOnSecurity reached out to the Trump organization after hearing from three sources in the financial sector who said they’ve noticed a pattern of fraud on customer credit cards which suggests that hackers have breached credit card systems at some — if not all — of the Trump Hotel Collection properties.

On July 1, 2015, this publication was the first to report that banks suspected a breach at Trump properties. After that story ran, Trump Hotel Collection acknowledged being alerted about suspicious activity tied to accounts that were recently used at its hotels. But it didn’t officially confirm that its payment systems had been infected with card-stealing malware until October 2015.

The Trump Hotel Collection includes more than a dozen properties globally. Sources said they noticed a pattern of fraud on cards that were all used at multiple Trump hotel locations in the past two to three months, including at Trump International Hotel New York, Trump Hotel Waikiki in Honolulu, and the Trump International Hotel & Tower in Toronto.

The hospitality industry has been hit hard by card breaches over the past two years. In April 2014, hotel franchising firm White Lodging confirmed its second card breach in a year. Card thieves also have hit Hilton, Hyatt, and Starwood properties. In many of those breaches, the hacked systems were located inside of hotel restaurants and gift shops.

Like most other current presidential candidates, Mr. Trump has offered little in the way of a policy playbook on cybersecurity. But in statements last month, Trump bashed the United States as “obsolete” on cybersecurity, and suggested the country is being “toyed with” by adversaries from China, Russia and elsewhere.

“We’re so obsolete in cyber,” Trump told The New York Times. “We’re the ones that sort of were very much involved with the creation, but we’re so obsolete.” Trump was critical of the US military’s cyber prowess, charging the Defense Department and the military are “going backwards” in cyber while “other countries are moving forward at a much more rapid pace.”

“We are frankly not being led very well in terms of the protection of this country,” Trump said.
Top

Pwncloud – bad crypto in the Owncloud encryption module

Postby Hanno Böck via Hanno's blog »

The Owncloud web application has an encryption module. I first became aware of it when a press release was published advertising this encryption module containing this:

“Imagine you are an IT organization using industry standard AES 256 encryption keys. Let’s say that a vulnerability is found in the algorithm, and you now need to improve your overall security by switching over to RSA-2048, a completely different algorithm and key set. Now, with ownCloud’s modular encryption approach, you can swap out the existing AES 256 encryption with the new RSA algorithm, giving you added security while still enabling seamless access to enterprise-class file sharing and collaboration for all of your end-users.”

To anyone knowing anything about crypto this sounds quite weird. AES and RSA are very different algorithms – AES is a symmetric algorithm and RSA is a public key algorithm – and it makes no sense to replace one by the other. Also RSA is much older than AES. This press release has since been removed from the Owncloud webpage, but its content can still be found in this Reuters news article. This and some conversations with Owncloud developers caused me to have a look at this encryption module.

First it is important to understand what this encryption module is actually supposed to do and what the threat scenario is. The encryption provides no security against a malicious server operator, because the encryption happens on the server. The only scenario where this encryption helps is if one has a trusted server that is using an untrusted storage space.

When one uploads a file with the encryption module enabled it ends up under the same filename in the user's directory on the file storage. Now here's a first, quite obvious problem: The filename itself is not protected, so an attacker that is assumed to be able to see the storage space can already learn something about the supposedly encrypted data.

The content of the file starts with this:
BEGIN:oc_encryption_module:OC_DEFAULT_MODULE:cipher:AES-256-CFB:HEND----

It is then padded with further dashes until position 0x2000, and then the encrypted content follows, Base64-encoded in blocks of 8192 bytes. The header tells us what encryption algorithm and mode is used: AES-256 in CFB mode. CFB stands for Cipher Feedback.

Authenticated and unauthenticated encryption modes

In order to proceed we need some basic understanding of encryption modes. AES is a block cipher with a block size of 128 bit. That means we cannot just encrypt arbitrary input with it; the algorithm itself only encrypts blocks of 128 bit (or 16 bytes) at a time. The naive way to encrypt more data is to split it into 16 byte blocks and encrypt every block. This is called Electronic Codebook mode, or ECB, and it should never be used, because it is completely insecure.

Common modes for encryption are Cipher Block Chaining (CBC) and Counter mode (CTR). These modes are unauthenticated and have a property that's called malleability. This means an attacker who is able to manipulate encrypted data is able to manipulate it in a way that may cause a certain defined behavior in the output. Often this simply means an attacker can flip bits in the ciphertext and the same bits will be flipped in the decrypted data.

To counter this, these modes are usually combined with some authentication mechanism; a common one is called HMAC. However, experience has shown that this combining of encryption and authentication can go wrong. Many vulnerabilities in both TLS and SSH were due to bad combinations of these two mechanisms. Therefore modern protocols usually use dedicated authenticated encryption modes (AEADs); popular ones include Galois/Counter-Mode (GCM), Poly1305 and OCB.

Cipher Feedback (CFB) mode is a self-correcting mode. When an error happens, which can be a simple data transmission error or a hard disk failure, the decryption will be correct again two blocks later. This also allows decrypting parts of an encrypted data stream. But the crucial thing for our attack is that CFB is not authenticated and is malleable. And Owncloud didn't use any authentication mechanism at all.

Therefore the data is encrypted and an attacker cannot see the content of a file (however, he learns some metadata: the size and the filename), but an Owncloud user cannot be sure that the downloaded data is really the data that was uploaded in the first place. The malleability of CFB mode works like this: an attacker can flip arbitrary bits in the ciphertext, and the same bits will be flipped in the decrypted data. However, if he flips a bit in any block, then the following block will contain unpredictable garbage.
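
You can watch this propagation pattern with nothing more than the openssl command line. The following is a minimal sketch with an all-zero key and IV (purely for demonstration; this is not Owncloud's key handling): corrupting the first ciphertext byte changes the first plaintext byte, garbles the second 16-byte block, and leaves the third block intact.

$ KEY=0000000000000000000000000000000000000000000000000000000000000000
$ IV=00000000000000000000000000000000
$ printf 'AAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCC' > plain.bin
$ openssl enc -aes-256-cfb -K $KEY -iv $IV -in plain.bin -out cipher.bin
$ printf '\xff' | dd of=cipher.bin bs=1 count=1 conv=notrunc   # corrupt byte 0
$ openssl enc -d -aes-256-cfb -K $KEY -iv $IV -in cipher.bin | xxd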

Backdooring an EXE file

How does that matter in practice? Let's assume we have a group of people that share a software package over Owncloud. One user uploads a Windows EXE installer and the others download it from there and install it. Let's further assume that the attacker doesn't know the content of the EXE file (this is a generous assumption, in many cases he will know, as he knows the filename).

EXE files start with a so-called MZ header, which is the old DOS EXE header that usually gets ignored. At a certain offset (0x3C), which is at the end of the fourth 16 byte block, there is the address of the PE header, which on Windows systems is the real EXE header. Even on modern executables there is still a small DOS program after the MZ header. This starts with the fifth 16 byte block. This DOS program usually only shows the message “This program cannot be run in DOS mode”. And this DOS stub program is almost always exactly the same.



Therefore our attacker can do the following: First flip any non-relevant bit in the third 16 byte block. This will cause the fourth block to contain garbage. The fourth block contains the offset of the PE header. As this is now garbled Windows will no longer consider this executable to be a Windows application and will therefore execute the DOS stub.

The attacker can then XOR 16 bytes of his own code with the first 16 bytes of the standard DOS stub code. He then XORs the result with the fifth block of the EXE file where he expects the DOS stub to be. Voila: The resulting decrypted EXE file will contain 16 bytes of code controlled by the attacker.

I created a proof of concept of this attack. This isn't enough to launch a real attack, because an attacker only has 16 bytes of DOS assembler code, which is very little. For a real attack an attacker would have to identify further pieces of the executable that are predictable and jump through the code segments.

The first fix

I reported this to Owncloud via HackerOne in January. The first fix they proposed was a change where they used Counter mode (CTR) in combination with HMAC. They still encrypt the file in blocks of 8192 bytes. While this is certainly less problematic than the original construction, it still had an obvious problem: all the 8192-byte file blocks were encrypted the same way. Therefore an attacker can swap or remove chunks of a file. The encryption is still malleable.

The second fix then included a counter for the file and also prevented attacks where an attacker goes back to an earlier version of a file. This solution is shipped in Owncloud 9.0, which has recently been released.

Is this new construction secure? I honestly don't know. It is secure enough that I didn't find another obvious flaw in it, but that doesn't mean a whole lot.

You may wonder at this point why they didn't switch to an authenticated encryption mode like GCM. The reason is that PHP doesn't support any authenticated encryption modes. There is a proposal, and most likely support for authenticated encryption will land in PHP 7.1. However, given that using outdated PHP versions is a very widespread practice, it will probably take another decade until anyone can use that in mainstream web applications.

Don't invent your own crypto protocols

The practical relevance of this vulnerability is probably limited, because the scenario that it protects from is relatively obscure. But I think there is a lesson to learn here. When people without a strong cryptographic background create ad-hoc designs of cryptographic protocols it will almost always go wrong.

It is widely known that designing your own crypto algorithms is a bad idea and that you should use standardized and well tested algorithms like AES. But using secure algorithms doesn't automatically create a secure protocol. One has to know the interactions and limitations of crypto primitives and this is far from trivial. There is a worrying trend – especially since the Snowden revelations – that new crypto products that never saw any professional review get developed and advertised in masses. A lot of these products are probably extremely insecure and shouldn't be trusted at all.

If you do crypto you should either do it right (which may mean paying someone to review your design or to create it in the first place) or you better don't do it at all. People trust your crypto, and if that trust isn't justified you shouldn't ship a product that creates the impression it contains secure cryptography.

There's another thing that bothers me about this. Although this seems to be a pretty standard use case of crypto – you have a symmetric key and you want to encrypt some data – there is no straightforward and widely available standard solution for it. Using authenticated encryption solves a number of issues, but not all of them (this talk by Adam Langley covers some interesting issues and caveats with authenticated encryption).

The proof of concept can be found on Github. I presented this vulnerability in a talk at the Easterhegg conference, a video recording is available.
Top

FreeBSD 10.3-RELEASE Available

Postby Webmaster Team via FreeBSD News Flash »

FreeBSD 10.3-RELEASE is now available. Please be sure to check the Release Notes and Release Errata before installation for any late-breaking news and/or issues with 10.3. More information about FreeBSD releases can be found on the Release Information page.
Top

Turris Omnia and openSUSE

Postby Michal Hrušecký via Michal Hrušecký »

About two weeks ago I was at the annual openSUSE Board face-to-face meeting. It was great, and you can read reports of what went on there on the openSUSE project mailing list. In this post I would like to focus on the other agenda I had while coming to Nuremberg. Nuremberg is, among other things, SUSE HQ, so there is a high concentration of skilled engineers, and I wanted to take advantage of that…

A little bit of personal history: I recently joined the Turris team at CZ.NIC, partly because the Omnia is so cool and I wanted to help make it happen. And being a long-term openSUSE contributor, I really wanted to find some way to help both projects. I discussed it with my bosses at CZ.NIC and got in contact with Andreas Färber, whom you might know as one of the guys playing with ARMs within the openSUSE project. The result was that I got approval to bring an Omnia prototype to him for the weekend and let him play with it.

My point was to give him a head start, so that when Omnias start shipping there will already be some research done, and maybe even a howto for openSUSE, so you could replace OpenWRT with openSUSE if you wanted. On the other hand, we also get some preliminary feedback that we can still try to incorporate.



Andreas Färber with Omnia

Why test whether you can install openSUSE on Omnia? And do you want to do that? As a typical end user, probably not. Here are a few arguments that speak against it. OpenWRT is great for routers – it has a nice interface, and anything you want to do regarding the network setup is really easy to do. You are able to set up even a complicated network using a simple web UI. Apart from that, by throwing away OpenWRT you would throw away quite a few of the perks of Omnia – like parental control or the mobile application. You might think it is worth sacrificing those to get a full-fledged server OS you are familiar with and where you can install everything in a non-stripped-down version. Actually, you don't have to sacrifice anything – OpenWRT on Omnia will support LXC, so you can install your OS of choice inside an LXC container and have both: an easily manageable router with all the bells and whistles, and also a virtual server with very little overhead doing the complicated stuff. Or even two or three of them. So most probably you want to keep OpenWRT and install openSUSE or some other Linux distribution inside a container.
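
As a sketch of that container route with the standard LXC userspace tools (the template list and architectures actually offered on the Omnia may differ):

# lxc-create -n guest -t download   # interactively pick distribution/release/arch
# lxc-start -n guest
# lxc-attach -n guest               # get a shell inside the container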

But if you still do want to replace OpenWRT, can you? And how difficult would it be? Long story short, the answer is yes. Andreas was able to get openSUSE running on Omnia and even wrote instructions on how to do it! One little comment: Turris Omnia is still under heavy development. What Andreas played with was one of the prototypes we have. The software is still being worked on, and even the hardware gets polished a little from time to time. Still, the hardware will not change drastically, so the howto probably won't change much either. It is nice to see that it is possible and quite easy to install your average Linux distribution.

Why is having this option so important, given all the arguments I stated against doing so? Because of freedom. I consider it a great advantage when buying a piece of hardware to know that I can do whatever I want with it and that I'm not locked in and dependent on the vendor for everything. Being able to install openSUSE on Omnia basically proves that Omnia is really open, and even in the unlikely situation in which hell freezes over and CZ.NIC disappears or turns evil, you will still be able to install the latest kernel 66.6 and continue to do whatever you want with your router.

This post was originally posted on CZ.NIC blog, re-posted here to make it available on Planet openSUSE.

Top

Shell calendar generator

Postby Michal Hrušecký via Michal Hrušecký »

Some people still use paper calendars. The kind where you have a picture for the month and all the days in the month listed. I have some relatives who use those. On a loosely related topic, I like to travel and I like to take pictures in foreign lands. So combining both is an obvious idea – to create a calendar where the pictures of the month were taken by me. I searched for some ready-to-use solution but haven't found anything. So I decided to create my own simple tool. And this post is about creating that tool.

I know date and time handling is complicated, and I wasn't really looking to learn all the rules regarding date and time and program them myself. There had to be a simple way to use some of the tools that are already implemented. An obvious option would be to use one of the date manipulation functions like mktime and write the tool in C. But that sounded quite heavyweight for such a simple tool. Using Ruby would be an option, but still kind of too much, and I'm not a fluent rubyist, and my python and perl are even rustier. I was also thinking about what output format to use so it would print easily. As I was targeting pretty printed paper, LaTeX sounded like a good choice, and in theory it could be used to implement the whole thing. I even found somebody who did that, but I didn't manage to comprehend how it worked, how to modify it or even how to compile it. Turns out my LaTeX is rusty as well.

So I decided to use shell and the powerful date command to generate the content. I started with generating LaTeX code, as I still want it on paper in the end, right? Trouble is, LaTeX makes great papers if you want to look serious and do some serious typography. For a calendar on the wall, you probably want to make it fancy and screw typography. I was trying to make it do what I wanted, but it was hard. So hard I gave up. And I ended up with the winning combo – shell and HTML. HTML is easy to view and print, and CSS supports a variety of options, including different styles for screen and print media types.

HTML and CSS made the whole exercise really easy, and I now have something working on GitHub in 150 lines of code, half of which is CSS. It's not perfect and there is plenty of room for optimization, but it is really simple and fast enough. Are you interested? Give it a try, and if it doesn't work well for you, pull requests are welcome.
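
The core trick is exactly that combination: let date(1) do all the calendar math and wrap trivial HTML around it. A minimal sketch of the idea, assuming GNU date (this is not the actual code from the GitHub repository):

#!/bin/sh
# print one month as a bare HTML table; date(1) does all the date math
year=2016 month=04
last=$(date -d "$year-$month-01 +1 month -1 day" +%d)   # last day of the month
printf '<table><tr>'
pad=$(date -d "$year-$month-01" +%u)                    # weekday of the 1st, 1=Monday
i=1; while [ "$i" -lt "$pad" ]; do printf '<td></td>'; i=$((i+1)); done
d=1
while [ "$d" -le "$last" ]; do
  printf '<td>%d</td>' "$d"
  [ "$(date -d "$year-$month-$(printf %02d "$d")" +%u)" -eq 7 ] && printf '</tr><tr>'
  d=$((d+1))
done
printf '</tr></table>\n'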
Top

Upgraded

Postby via www.my-universe.com Blog Feed »

I am one of those people who, old-fashioned as it may be, still hold on to a desktop computer. It simply has another notch more “oomph” than notebooks, let alone tablets. Especially when you run a flight simulator at high resolution, the frame rate tends to drop into the basement quickly. That was also the case with my old workstation (dating from 2011, after all), which, thanks to two Xeon hexacore CPUs and plenty of RAM, still holds its own at many things, but by now shows its limits with 3D graphics (and, on the side, produces the soundscape of a turbofan engine spooling up).


So I invested in a new machine, which has now gone into service next to the old workstation. Since I put the machine together specifically for use with X-Plane, I skipped the second CPU this time (X-Plane only uses a few cores anyway, but benefits from a faster CPU). Instead, I went for the fastest Skylake CPU currently available (Core i7-6700K), combined with an NVIDIA GeForce GTX Titan X. The former is kept happy by water cooling, which keeps the noise level of the whole system pleasantly low.

Storage also took a clear step forward: while the old workstation still came with four mechanical 320GB SATA hard disks in removable trays, the new machine uses a 512GB V-NAND drive attached via PCIe, which is therefore noticeably faster again than an SSD attached via SATA. That capacity is a little too small for X-Plane, though (especially once Andras Fabian's HD and UHD meshes come into play), so the machine additionally holds two Samsung SSD 850 Pro drives of one TB each, reserved for X-Plane as a stripe set of almost two TB, and at SATA 6Gb/s they are not exactly slow either.

Finally, the question of the monitor remained open – my old workstation speaks to me through a 24″ Samsung SyncMaster 245B (inherited from its predecessor). Since I am keeping the workstation running, a new monitor was needed. My choice finally fell on a 32″ Samsung S32D850T with WQHD resolution (2560×1440) and LED backlight. This monitor is really™ big – I mean really big. I have owned television sets that were considerably smaller… Certainly not a classic gaming monitor (as it is equipped with an MVA panel), but it offers excellent contrast and a wide viewing angle. As a virtual pilot, you really get immersed. Only my old wallpaper collection has become useless with it…

For the rest of the peripherals I stayed conservative. My Thrustmaster HOTAS Warthog flight controls naturally moved along, as did my Logitech Z4 speakers (subwoofer and stereo tweeters). For keyboard and mouse I reached for the old and proven (Cherry G80-3000 keyboard and Logitech M500 corded mouse). These two already serve me well on the old workstation, but they could not move along, since they are still needed there.

Well, enough time spent in the web browser – the cockpit is calling…
Top

Last words on diabetes and software

Postby Flameeyes via Flameeyes's Weblog »

Started as a rant on G+, then became too long and suited for a blog.

I do not understand why we can easily get people together with something like VideoLAN, but the moment when health is involved, the results are just horrible.

Projects either end up “startuppy”, wanting to keep things for themselves and by themselves, or we end up fragmented into tiny one-person projects, because every single glucometer is a different beast and nobody wants to talk with the others.

Tonight I ended up in a half-fight with a project to which I came saying "I've started drafting an exchange format, because nobody has written one down, and the format I've seen you use is just terrible and when I told you, you haven't replied" and the answer was "we're working on something we want to standardize by talking with manufacturers."

Their "we talk with these" projects are also something insane — one seems to be more like the idea of building a new device from scratch (a great long-term solution, of terrible usefulness for people), and the other is yet another build-your-own-cloud kind of solution that tells you to get Heroku or Azure with MongoDB to store your data. It also tells you to use a non-manufacturer-approved scanner for the sensors, which the comments point out can fry those sensors to begin with. (I should check whether that's actually within the ToS for the Play Store.)

So you know what? I'm losing hope in FLOSS once again. Maybe I should just stop caring, give up this laptop for a new Microsoft Surface Pro, and keep my head away from FLOSS until I am ready for retirement, at which point I can probably just go and keep up with the reading.

I have tried reaching out to the people who have written other tools, like I posted before, but it looks like people are just not interested in discussing this. I did talk with a few people over email about some of the glucometers I dealt with, but that came to one person creating yet another project that wants to become a business, and two others figuring out which original proprietary tools to use, because those do actually work.

So I guess you won't be reading much about diabetes on my blog in the future, because I don't particularly enjoy writing this for my sole use, and clearly that's the only kind of usage these projects will ever get. Sharing seems to be considered deprecated.
Top

Testing one two three: planet venus is dead upstream!

Postby blueness via Anthony G. Basile »

So my blog posts are still not appearing on planet Gentoo or universe.  The aggregation software used on the server is Planet Venus, which has not been maintained in five years.  It's an encoding error: line 2222 of feedparser.py needs to be changed from join(arLines) to join(arLines).decode('UTF-8').

So this is another test post.  I’ll probably just delete it if it doesn’t show up again.
Top

AVScale – part1

Postby lu_zero via Luca Barbato »

swscale is one of the most annoying parts of Libav. A couple of years after the initial blueprint, we have something almost functional you can play with.

Colorspace conversion and Scaling

Before delving into the library architecture and the outer API, it is probably a good idea to give an extra-quick summary of what this library is about.

Most multimedia concepts are more or less intuitive:
encoding is taking some data (e.g. video frames, audio samples) and compressing it by leaving out unimportant details
muxing is the act of storing such compressed data and timestamps so that audio and video can play back in sync
demuxing is getting back the compressed data with the timing information stored in the container format
decoding somehow inflates the data so that video frames can be rendered on screen and the audio played on the speakers

After the decoding step it would seem that all the hard work is done, but since there isn’t a single way to store video pixels or audio samples, you need to process them so they work with your output devices.

That process is usually called resampling for audio, while for video we have colorspace conversion, to change the pixel information, and scaling, to change the amount of pixels in the image.

Today I’ll introduce you to the new library for colorspace conversion and scaling we are working on.

AVScale

The library aims to be as simple as possible and to hide all the gory details from the user; you won’t need to make heads or tails of functions with a quite large number of arguments, nor of special-purpose functions.

The API itself is modelled after avresample and approaches the problem of conversion and scaling in a way quite different from swscale, following the same design as NAScale.

Everything is a Kernel

One of the key concepts of AVScale is that the conversion chain is assembled out of different components, separating the concerns.

Those components are called kernels.

The kernels can be conceptually divided into two kinds:
Conversion kernels, which take input in one format and provide output in another (e.g. rgb2yuv) without changing any other property.
Process kernels, which modify the data while keeping the format itself unchanged (e.g. scale)

This pipeline approach gives great flexibility and helps code reuse.

The most common use-cases (such as scaling without conversion, or conversion without scaling) can be faster than solutions that try to merge scaling and conversion into a single step.
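
As a purely hypothetical illustration of the idea (none of these names are the actual AVScale internals), a kernel chain could be modelled like this:

#include <libavutil/frame.h>

typedef struct Kernel {
    const char *name;                             // e.g. "rgb2yuv", "scale"
    int (*run)(AVFrame *dst, const AVFrame *src); // one stage of work
    struct Kernel *next;
} Kernel;

// Walk the chain: buf[0] is the input, buf[i] the intermediates,
// and the last element ends up holding the final output.
static int run_pipeline(const Kernel *k, AVFrame *buf[])
{
    int i = 0;
    for (; k; k = k->next, i++) {
        int ret = k->run(buf[i + 1], buf[i]);
        if (ret < 0)
            return ret;
    }
    return 0;
}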

API

AVScale works with two kinds of structures:
AVPixelFormaton: A full description of the pixel format
AVFrame: The frame data, its dimensions and a reference to its format details (aka AVPixelFormaton)

The library will have an AVOption-based system to tune specific options (e.g. selecting the scaling algorithm).
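
Once that lands, tuning would presumably follow the usual Libav AVOption pattern; here both the option name and the value are made up for the sake of the example:

#include <libavutil/opt.h>

// Hypothetical: pick the scaling algorithm before configuring
av_opt_set(ctx, "scaler", "bicubic", 0);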

For now only avscale_config and avscale_convert_frame are implemented.

So if the input and output are predetermined, the context can be configured like this:

AVScaleContext *ctx = avscale_alloc_context();

if (!ctx)
    ...

ret = avscale_config(ctx, out, in);
if (ret < 0)
    ...
But you can skip that and scale and/or convert from an input to an output directly, like this:

AVScaleContext *ctx = avscale_alloc_context();

if (!ctx)
    ...

ret = avscale_convert_frame(ctx, out, in);
if (ret < 0)
    ...

avscale_free(&ctx);
The context gets lazily configured on the first call.

Notice that avscale_free() takes a pointer to a pointer, to make sure the context pointer does not stay dangling.

As said, the API is really simple and essential.

Help welcome!

Kostya kindly provided an initial proof of concept, and Vittorio, Anton and I prepared this preview in our spare time. There is plenty left to do; if you like the idea (many people keep telling us they would love a swscale replacement), we even have a fundraiser.
Top

Why macros like __GLIBC__ and __UCLIBC__ are bad.

Postby blueness via Anthony G. Basile »

I’ll be honest, this is a short post because the aggregation on planet.gentoo.org is failing for my account!  So, Jorge (jmbsvicetto) is debugging it, and I need to push out another blog entry to trigger venus, the aggregation program.  Since I don’t like writing trivial stuff, I’m going to write something short but hopefully important.

C standard libraries, like glibc, uClibc, musl and the like, were born out of a world in which every UNIX vendor had their own set of useful C functions.  Code portability put pressure on the various libcs to incorporate these functions from other libcs, first leading to a mess and then to standards like POSIX, XOPEN, SUSv4 and so on.  Chapter 1 of Kerrisk’s The Linux Programming Interface has a nice write-up on this history.

We still live in the shadows of that world today.  If you look through the code base of uClibc you’ll see lots of macros like __GLIBC__, __UCLIBC__, __USE_BSD, and __USE_GNU.  These are used in #ifdef … #endif blocks which are meant to shield features unless you want a glibc-only or uClibc-only feature.

musl has stubbornly and correctly refused to include a __MUSL__ macro.  Consider the approach to portability taken by GNU autotools.  Macros such as AC_CHECK_LIBS(), AC_CHECK_FUNC() or AC_CHECK_HEADERS() unambiguously target the feature in question without making use of __GLIBC__ or __UCLIBC__.  Whereas the previous approach globs functions together into sets, the latter simply asks: do you have this function or not?
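
To make that concrete, here is a small sketch of the feature-test style, assuming the build system ran AC_CHECK_FUNCS([strlcpy]); strlcpy is just an example of a function glibc lacks but other libcs provide:

/* config.h is generated by autoconf; HAVE_STRLCPY is defined by
   AC_CHECK_FUNCS([strlcpy]) when the function exists, no matter
   which libc provides it. */
#include "config.h"
#include <stdio.h>
#include <string.h>

static void copy_name(char *dst, size_t n, const char *src)
{
#ifdef HAVE_STRLCPY
    strlcpy(dst, src, n);        /* available on e.g. musl and the BSDs */
#else
    snprintf(dst, n, "%s", src); /* portable fallback */
#endif
}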

Now consider how uClibc makes use of both __GLIBC__ and __UCLIBC__.  If a function is provided by the former but not by the latter, then it expects a program to use

#if defined(__GLIBC__) && !defined(__UCLIBC__)
This is getting a bit ugly and syntactically ambiguous.  Someone not familiar with it could easily misinterpret it, or reject it.

So I’ve hit bugs like these.  I hit one in gdk-pixbuf, and I was not able to convince upstream to consistently use __GLIBC__ and __UCLIBC__.  Alternatively, I hit this in geocode-glib and geoclue, and they did accept it.  I went with the wrong-minded approach because that’s what was already there, and I didn’t feel like sifting through their code base and revamping their build system.  This isn’t just laziness; it’s historical weight.

So kudos to musl.  And for all the faults of GNU autotools, at least its approach to portability is correct.
Top

Travel cards collection

Postby Flameeyes via Flameeyes's Weblog »

As some of you might have noticed, for example by following me on Twitter, I have been traveling a significant amount over the past four years. Part of it has been for work, part for my involvement with VideoLAN, and part again for personal reasons (i.e. vacation).

When I travel, I don't rent a car. The main reason is that I (still) don't have a driving license, so particularly when I travel for leisure, I tend to go where there is at least some form of public transport, better yet a good one. This matched perfectly with my hopes of visiting Japan (which I did last year), and usually works relatively well with conference venues, so I have not had much trouble with it in the past few years.

One thing that is getting a bit out of hand for me, though, is the number of travel cards I have by now. With the exception of Japan, every city or so has a different travel card. London appears to have solved that, at least for tourists and casual passengers, by accepting contactless bank cards as if they were its local travel card (Oyster), but nobody else seems to have followed suit, as far as I can see.

Indeed I have at this point at home:

  • Clipper for San Francisco and Bay Area; prepaid, I actually have not used it in a while so I have some money "stuck" on it.
  • SmarTrip for Washington DC; also prepaid, but at least I managed to only keep very little on it.
  • Metro dayLink for Belfast; prepaid by tickets.
  • Ridacard for Edinburgh and the Lothian region; this one has my photo on it, and I paid for a weekly ticket when I used it.
  • imob.venezia, now discontinued, which I used when I lived in Venice; it's just terrible.
  • Suica, for Japan, which is a stored-value card that can be used for payments as well as travel, so it comes the closest to London's use of contactless.
  • Leap which is the local Dublin transports card, also prepaid.
  • Navigo for Paris, but I only used it once because you can only store Monday-to-Sunday tickets on it.
I might add a few more this year, as I'm hitting a few new places. On the other hand, while in London yesterday, I realized how nice and handy it is to just use my bank card for popping in and out of the Tube. And I've been wondering how we got to this system of incompatible cards.

In the list above, most of the cities are one per state or country, which might suggest cards work better within a country, but that's definitely not the case. I have been told that Nottingham recently moved to a consolidated travelcard which is not compatible with Oyster either, and both of them are in England.

Suica is the exception. The IC system used in Japan is a stored-value system which can be used both for travel and for general payments, in stores and cafes and so on. This is not "limited" to Tokyo (though limited might be the wrong word there), but rather works in most of the cities I've visited; one exception being buses in Hiroshima, while it worked fine for trams and trains. It is essentially an upside-down version of what happens in London: instead of using your payment card to travel, you use your travel card for in-store purchases.

The convenience of using a payment card, by the way, lies for me mostly in being able to use (one of) my bank accounts to pay for travel without having to "earmark" money the way I did for Clipper, which is now going to be used only the next time I actually take public transport in SF, and I'm not sure when that will be!

At the same time, I can think of two big obstacles to implementing contactless payment in place of travelcards: contracts and incentives. On the first note, I'm sure that there is some weight that TfL (Transport for London) can pull that your average small town can't. On the other note, it's a matter for finance experts, which I can only guess at: there is value for the travel companies in receiving money before you travel; Clipper has had my money in its coffers since I topped it up, though I have not used it.

While customers' topped-up credit is essentially a liability for the companies, it also increases their liquidity. So there is little incentive for them to change, particularly for the smaller ones. Indeed, moving to a payment system in which the companies get their money mostly from banks rather than through cash is likely to be a problem for them. And we're back to the first matter: contracts. I'm sure TfL can get better deals from banks and credit card companies than most.

There is also the matter of the tech behind all of this. TfL has definitely done a good job of keeping its systems compatible: the Oyster I got in 2009, the first time I boarded a plane, still works. During the same seven years, Venice changed its system twice: once keeping the same name/brand but with different protocols on the card (making it compatible with more NFC systems), and once by replacing the previous brand. I assume they have kept some compatibility on the cards, but since I no longer live there, I have not investigated.

I'm definitely not one of those people who insist that opensource is the solution to everything, and that just by being open, things become better for society. On the other hand, I do wonder if it would make sense for the opensource community to engage with public services like this, to provide a solution that can be more easily mirrored by smaller towns which would not otherwise be able to afford such a system themselves.

On the other hand, this would most likely require compromises. The contracts with service providers would likely include a number of NDA-like provisions, and at the same time, the hardware would not be available off-the-shelf.

This post does not provide any useful information, I'm afraid; it's just a bit of a bigger opinion I have about opensource nowadays, and particularly about how so many people limit their idea of "public interest" to "privacy" and cryptography.
Top

New AVCodec API

Postby lu_zero via Luca Barbato »

Another week, another API landed in the tree, and since I spent some time drafting it, I guess I should describe how to use what is implemented now. This is part I.

What is here now

Between theory and practice there is a bit of discussion, and obviously the (lack of) time to implement, so here is what differs from what I drafted originally:

  • Function names: push got renamed to send and pull got renamed to receive.
  • No separate functions to probe the process state; need_data and have_data are not here.
  • No codecs have been ported to use the new API yet, so no actual asynchronicity for now.
  • Subtitles aren’t supported yet.

New API

There are just 4 new functions, replacing both the audio-specific and video-specific ones:

// Decode
int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt);
int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame);

// Encode
int avcodec_send_frame(AVCodecContext *avctx, const AVFrame *frame);
int avcodec_receive_packet(AVCodecContext *avctx, AVPacket *avpkt);
The workflow is sort of simple:
– You set up the decoder or the encoder as usual.
– You feed data using the avcodec_send_* functions until you get an AVERROR(EAGAIN), which signals that the internal input buffer is full.
– You get the data back using the matching avcodec_receive_* function until you get an AVERROR(EAGAIN), signalling that the internal output buffer is empty.
– Once you are done feeding data, you pass a NULL to signal the end of stream.
– You can keep calling the avcodec_receive_* function until you get AVERROR_EOF, as in the short sketch below.
– You free the contexts as usual.
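
Putting the flush and drain steps together for decoding, with render() standing in for whatever you do with the frames:

// Once all packets have been fed: flush, then drain until AVERROR_EOF
avcodec_send_packet(avctx, NULL);

while ((ret = avcodec_receive_frame(avctx, frame)) >= 0)
    render(frame);
// ret == AVERROR_EOF here: the decoder is fully drained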

Decoding examples

Setup

The setup uses the usual avcodec_open2.

    ...

    c = avcodec_alloc_context3(codec);

    ret = avcodec_open2(c, codec, &opts);
    if (ret < 0)
        ...

Simple decoding loop

People using the old API usually have some kind of simple loop like

while (get_packet(pkt)) {
    ret = avcodec_decode_video2(c, picture, &got_picture, pkt);
    if (ret < 0) {
        ...
    }
    if (got_picture) {
        ...
    }
}
The old functions can be replaced by calling something like the following.

// The flush packet is a non-NULL packet with size 0 and data NULL
int decode(AVCodecContext *avctx, AVFrame *frame, int *got_frame, AVPacket *pkt)
{
    int ret;

    *got_frame = 0;

    if (pkt) {
        ret = avcodec_send_packet(avctx, pkt);
        // In particular, we don't expect AVERROR(EAGAIN), because we read all
        // decoded frames with avcodec_receive_frame() until done.
        if (ret < 0)
            return ret == AVERROR_EOF ? 0 : ret;
    }

    ret = avcodec_receive_frame(avctx, frame);
    if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
        return ret;
    if (ret >= 0)
        *got_frame = 1;

    return 0;
}

Callback approach

Since the new API will output multiple frames in certain situations, it is better to process them as they are produced.

// return 0 on success, negative on error
typedef int (*process_frame_cb)(void *ctx, AVFrame *frame);

int decode(AVCodecContext *avctx, AVPacket *pkt,
           process_frame_cb cb, void *priv)
{
    AVFrame *frame = av_frame_alloc();
    int ret;

    ret = avcodec_send_packet(avctx, pkt);
    // Again EAGAIN is not expected
    if (ret < 0)
        goto out;

    while (!ret) {
        ret = avcodec_receive_frame(avctx, frame);
        if (!ret)
            ret = cb(priv, frame);
    }

out:
    av_frame_free(&frame);
    if (ret == AVERROR(EAGAIN))
        return 0;
    return ret;
}

Separated threads

The new API makes it sort of easy to split the workload across two separate threads.

// Assume we have context with a mutex, a condition variable and the AVCodecContext


// Feeding loop
{
    AVPacket *pkt = NULL;

    while ((ret = get_packet(ctx, pkt)) >= 0) {
        pthread_mutex_lock(&ctx->lock);

        ret = avcodec_send_packet(avctx, pkt);
        if (!ret) {
            pthread_cond_signal(&ctx->cond);
        } else if (ret == AVERROR(EAGAIN)) {
            // Signal the draining loop
            pthread_cond_signal(&ctx->cond);
            // Wait here
            pthread_cond_wait(&ctx->cond, &ctx->lock);
        } else if (ret < 0)
            goto out;

        pthread_mutex_unlock(&ctx->lock);
    }

    pthread_mutex_lock(&ctx->lock);
    ret = avcodec_send_packet(avctx, NULL);

    pthread_cond_signal(&ctx->cond);

out:
    pthread_mutex_unlock(&ctx->lock);
    return ret;
}

// Draining loop
{
    AVFrame *frame = av_frame_alloc();

    while (!done) {
        pthread_mutex_lock(&ctx->lock);

        ret = avcodec_receive_frame(avctx, frame);
        if (!ret) {
            pthread_cond_signal(&ctx->cond);
        } else if (ret == AVERROR(EAGAIN)) {
            // Signal the feeding loop
            pthread_cond_signal(&ctx->cond);
            // Wait
            pthread_cond_wait(&ctx->cond, &ctx->lock);
        } else if (ret < 0)
            goto out;

        pthread_mutex_unlock(&ctx->lock);

        if (!ret) {
            do_something(frame);
        }
    }

out:
    pthread_mutex_unlock(&ctx->lock);
    return ret;
}
It isn’t as neat as having all this abstracted away, but it is mostly workable.

Encoding Examples

Simple encoding loop

Some compatibility with the old API can be achieved using something along the lines of:

int encode(AVCodecContext *avctx, AVPacket *pkt, int *got_packet, AVFrame *frame)
{
    int ret;

    *got_packet = 0;

    ret = avcodec_send_frame(avctx, frame);
    if (ret < 0)
        return ret;

    ret = avcodec_receive_packet(avctx, pkt);
    if (!ret)
        *got_packet = 1;
    if (ret == AVERROR(EAGAIN))
        return 0;

    return ret;
}

Callback approach

Since multiple outputs could be produced for each input, it is better to loop over the output as soon as possible.

// return 0 on success, negative on error
typedef int (*process_packet_cb)(void *ctx, AVPacket *pkt);

int encode(AVCodecContext *avctx, AVFrame *frame,
           process_packet_cb cb, void *priv)
{
    AVPacket *pkt = av_packet_alloc();
    int ret;

    ret = avcodec_send_frame(avctx, frame);
    if (ret < 0)
        goto out;

    while (!ret) {
        ret = avcodec_receive_packet(avctx, pkt);
        if (!ret)
            ret = cb(priv, pkt);
    }

out:
    av_packet_free(&pkt);
    if (ret == AVERROR(EAGAIN))
        return 0;
    return ret;
}
The I/O should happen in a different thread when possible, so the callback should just enqueue the packets, as in the sketch below.
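
A minimal sketch of such a callback; PacketQueue and queue_push() are stand-ins for your own thread-safe queue, not part of the API:

static int enqueue_packet(void *priv, AVPacket *pkt)
{
    PacketQueue *q = priv;
    AVPacket copy;
    int ret;

    // The receive loop reuses its AVPacket, so take our own reference
    ret = av_packet_ref(&copy, pkt);
    if (ret < 0)
        return ret;

    // Assume queue_push() copies the AVPacket struct into the queue
    return queue_push(q, &copy);
}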

Coming Next

This post is long enough, so the next one might cover converting a codec to the new API.
Top

hardened-sources Role-based Access Control (RBAC): how to write mostly permissive policies.

Postby blueness via Anthony G. Basile »

RBAC is a security feature of the hardened-sources kernels.  As its name suggests, it’s a role-based access control system which allows you to define policies restricting access to files, sockets and other system resources.  Even root is restricted, so attacks that escalate privilege are not going to get far even if they do obtain root.  In fact, you should be able to give out remote root access to anyone on a well-configured system running RBAC and still remain confident that you are not going to be owned!  I wouldn’t recommend it, just in case, but it should be possible.

It is important to understand what RBAC will give you and what it will not.  RBAC has to be part of a more comprehensive security plan and is not a single security solution.  In particular, if one can compromise the kernel, then one can proceed to compromise the RBAC system itself and undermine whatever security it offers.  Put another way, protecting root is pretty much a moot point if an attacker is able to get ring 0 privileges.  So you need to start with an already hardened kernel, that is, a kernel which is able to protect itself.  In practice, this means configuring most of the GRKERNSEC_* and PAX_* features of a hardened-sources kernel.  Of course, if you’re planning on running RBAC, you need to have that option on too.

Once you have a system up and running with a properly configured kernel, the next step is to set up the policy file, which lives at /etc/grsec/policy.  This is where the fun begins, because you need to ask yourself what kind of a system you’re going to be running and decide on the policies you’re going to implement.  Most of the existing literature is about setting up a minimum privilege system for a server which runs only a few simple processes, something like a LAMP stack.  I did this for years when I ran a moodle server for D’Youville College.  For a minimum privilege system, you want to deny-by-default and only allow certain processes to have access to certain resources, as explicitly stated in the policy file.  RBAC is ideally suited for this.  Recently, however, I was asked to set up a system where the opposite was the case, so this article is going to explore the situation where you want to allow-by-default; for completeness, though, let me briefly cover deny-by-default first.

The easiest way to proceed is to get all your services running as they should, and then turn on learning mode for about a week, or at least until you have one full cycle of, say, log rotations and other cron-based jobs.  Basically, your services should have attempted to access each resource at least once, so the event gets logged.  You then distill those logs into a policy file describing only what should be permitted, and tweak as needed.  You proceed roughly as follows:

1. gradm -P  # Create a password to enable/disable the entire RBAC system
2. gradm -P admin  # Create a password to authenticate to the admin role
3. gradm -F -L /etc/grsec/learning.log # Turn on system wide learning
4. # Wait a week.  Don't do anything you don't want to learn.
5. gradm -F -L /etc/grsec/learning.log -O /etc/grsec/policy  # Generate the policy
6. gradm -E # Enable RBAC system wide
7. # Look for denials.
8. gradm -a admin  # Authenticate to admin to do extraordinary things, like tweak the policy file
9. gradm -R # reload the policy file
10. gradm -u # Drop those privileges to do ordinary things
11. gradm -D # Disable RBAC system wide if you have to
Easy, right?  This will get you pretty far, but you’ll probably discover that some things you want to work are still being denied because those particular events never occurred during the learning.  A typical example here: you might have ssh’ed in from one IP, but now you’re ssh-ing in from a different IP and you’re getting denied.  To tweak your policy, you first have to escape the restrictions placed on root by transitioning to the admin role.  Then, using dmesg, you can see what was denied, for example:

[14898.986295] grsec: From 192.168.5.2: (root:U:/) denied access to hidden file / by /bin/ls[ls:4751] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:4327] uid/euid:0/0 gid/egid:0/0
This tells you that root, logged in via ssh from 192.168.5.2, tried to ls / but was denied.  As we’ll see below, this is a one-line fix, but if there is a cluster of denials to /bin/ls, you may want to turn on learning for just that one subject for root.  To do this, you edit the policy file and look for subject /bin/ls under role root.  You then add an ‘l’ to the subject line to enable learning for just that subject.

role root uG
…
# Role: root
subject /bin/ls ol {  # Note the ‘l’
You restart RBAC using gradm -E -L /etc/grsec/partial-learning.log and obtain the new policy for just that subject by running gradm -L /etc/grsec/partial-learning.log -O /etc/grsec/partial-learning.policy.  That single subject block can then be spliced into the full policy file to change the restrictions on /bin/ls when run by root.

It’s pretty obvious that RBAC is designed to deny-by-default.  If access is not explicitly granted to a subject (an executable) to access some object (some system resource) when it’s running in some role (as some user), then access is denied.  But what if you want to create a policy which is mostly allow-by-default, and then just add a few denials here and there?  While RBAC is more suited to the opposite case, we can do something like this on a per-account basis.

Let’s start with a fairly permissive policy file for root:

role admin sA
subject / rvka {
	/			rwcdmlxi
}

role default
subject / {
	/			h
	-CAP_ALL
	connect disabled
	bind    disabled
}

role root uG
role_transitions admin
role_allow_ip 0.0.0.0/0
subject /  {
	/			r
	/boot			h
#
	/bin			rx
	/sbin			rx
	/usr/bin		rx
	/usr/libexec		rx
	/usr/sbin		rx
	/usr/local/bin		rx
	/usr/local/sbin		rx
	/lib32			rx
	/lib64			rx
	/lib64/modules		h
	/usr/lib32		rx
	/usr/lib64		rx
	/usr/local/lib32	rx
	/usr/local/lib64	rx
#
	/dev			hx
	/dev/log		r
	/dev/urandom		r
	/dev/null		rw
	/dev/tty		rw
	/dev/ptmx		rw
	/dev/pts		rw
	/dev/initctl		rw
#
	/etc/grsec		h
#
	/home			rwcdl
	/root			rcdl
#
	/proc/slabinfo		h
	/proc/modules		h
	/proc/kallsyms		h
#
	/run/lock		rwcdl
	/sys			h
	/tmp			rwcdl
	/var			rwcdl
#
	+CAP_ALL
	-CAP_MKNOD
	-CAP_NET_ADMIN
	-CAP_NET_BIND_SERVICE
	-CAP_SETFCAP
	-CAP_SYS_ADMIN
	-CAP_SYS_BOOT
	-CAP_SYS_MODULE
	-CAP_SYS_RAWIO
	-CAP_SYS_TTY_CONFIG
	-CAP_SYSLOG
#
	bind 0.0.0.0/0:0-32767 stream dgram tcp udp igmp
	connect 0.0.0.0/0:0-65535 stream dgram tcp udp icmp igmp raw_sock raw_proto
	sock_allow_family all
}
The syntax is pretty intuitive. The only thing not illustrated here is that a role can, and usually does, have multiple subject blocks following it. Those subject blocks belong only to the role they are under, and not to another.

The notion of a role is critical to understanding RBAC. Roles are like UNIX users and groups, but within the RBAC system. The first role above is the admin role. It is ‘special’, meaning that it doesn’t correspond to any UNIX user or group but is only defined within the RBAC system. A user will operate under some role but may transition to another role if the policy allows it. Transitioning to the admin role is reserved for root above; but in general, any user can transition to any special role provided it is explicitly specified in the policy. No matter what role the user is in, he only has the UNIX privileges of his account. Those are not elevated by transitioning, but the restrictions applied to his account might change. Thus transitioning to a special role can allow a user to relax some restrictions for some special reason. This transitioning is done via gradm -a somerole and can be password protected using gradm -P somerole.

The second role above is the default role. When a user logs in, RBAC determines the role he will be in by first trying to match the user name to a role name. Failing that, it will try to match the group name to a role name, and failing that, it will assign the user the default role.

The third role above is the root role and it will be the main focus of our attention below.

The flags following the role name specify the role’s behavior. The ‘s’ and ‘A’ in the admin role line say, respectively, that it is a special role (ie, one not matched by a user or group name) and that it has extra powers that a normal role doesn’t have (eg, it is not subject to ptrace restrictions). It’s good to have the ‘A’ flag in there, but it’s not essential for most uses of this role. It’s really its subject block which makes it useful for administration. Of course, you can change the name if you want to practice a little bit of security by obfuscation. As long as you leave the rest alone, it’ll still function the same way.

The root role has the ‘u’ and the ‘G’ flags. The ‘u’ flag says that this role is to match the user of the same name, obviously root in this case. Alternatively, you can have the ‘g’ flag instead, which says to match the group of the same name. The ‘G’ flag gives this role permission to authenticate to the kernel, ie, to use gradm. Policy information is automatically added that allows gradm to access /dev/grsec, so you don’t need to add those permissions yourself. Finally, the default role doesn’t and shouldn’t have any flags. If it’s not a ‘u’ or ‘g’ or ‘s’ role, then it’s a default role.

Before we jump into the subject blocks, you’ll notice a couple of lines after the root role. The first says ‘role_transitions admin’ and permits the root role to transition to the admin role. Any special roles you want this role to transition to can be listed on this line, space delimited. The second line says ‘role_allow_ip 0.0.0.0/0’. So when root logs in remotely, it will be assigned the root role provided the login comes from an IP address matching 0.0.0.0/0, which in this example means any IP is allowed. But if you had something like 192.168.3.0/24, then only root logins from the 192.168.3.0 network would get user root assigned role root; otherwise RBAC would fall back on the default role. If you don’t have this line in there, get used to logging in on the console, because you’ll cut yourself off!

Now we can look at the subject blocks. These define the access controls restricting processes running in the role to which those subjects belong. The name following the ‘subject’ keyword is either a path to a directory containing executables or a path to an executable itself. When a process is started from an executable in that directory, or from the named executable itself, the access controls defined in that subject block are enforced. Since all roles must have the ‘/’ subject, all processes started in a given role will at least match this subject. You can think of it as the default if no other subject matches. However, additional subject blocks can be defined which further modify the restrictions for particular processes. We’ll see this towards the end of the article.

Let’s start by looking at the ‘/’ subject for the default role since this is the most restrictive set of access controls possible. The block following the subject line lists the objects that the subject can act on and what kind of access is allowed. Here we have ‘/ h’ which says that every file in the file system starting from ‘/’ downwards is hidden from the subject. This includes read/write/execute/create/delete/hard link access to regular files, directories, devices, sockets, pipes, etc. Since pretty much everything is forbidden, no process running in the default role can look at or touch the file system in any way. Don’t forget that, since the only role that has a corresponding UNIX user or group is the root role, this means that every other account is simply locked out. However the file system isn’t the only thing that needs protecting since it is possible to run, say, a malicious proxy which simply bounces evil network traffic without ever touching the filesystem. To control network access, there are the ‘connect’ and ‘bind’ lines that define what remote addresses/ports the subject can connect to as a client, or what local addresses/ports it can listen on as a server. Here ‘disabled’ means no connections or bindings are allowed. Finally, we can control what Linux capabilities the subject can assume, and -CAP_ALL means they are all forbidden.

Next, let’s look at the ‘/’ subject for the admin role. This, in contrast to the default role, is about as permissive as you can get. First thing we notice is the subject line has some additional flags ‘rvka’. Here ‘r’ means that we relax ptrace restrictions for this subject, ‘a’ means we do not hide access to /dev/grsec, ‘k’ means we allow this subject to kill protected processes and ‘v’ means we allow this subject to view hidden processes. So ‘k’ and ‘v’ are interesting and have counterparts ‘p’ and ‘h’ respectively. If a subject is flagged as ‘p’ it means its processes are protected by RBAC and can only be killed by processes belonging to a subject flagged with ‘k’. Similarly processes belonging to a subject marked ‘h’ can only be viewed by processes belonging to a subject marked ‘v’. Nifty, eh? The only object line in this subject block is ‘/ rwcdmlxi’. This says that this subject can ‘r’ead, ‘w’rite, ‘c’reate, ‘d’elete, ‘m’ark as setuid/setgid, hard ‘l’ink to, e’x’ecute, and ‘i’nherit the ACLs of the subject which contains the object. In other words, this subject can do pretty much anything to the file system.

Finally, let’s look at the ‘/’ subject for the root role. It is fairly permissive, but not quite as permissive as the previous subject. It is also more complicated, and many of the object lines are there because gradm does a sanity check on policy files to help make sure you don’t open any security holes. Notice that here we have ‘+CAP_ALL’ followed by a series of ‘-CAP_*’. Each of these had to be included, otherwise gradm would complain. For example, if ‘CAP_SYS_ADMIN’ is not removed, an attacker can mount filesystems to bypass your policies.

So I won’t go through this entire subject block in detail, but let me highlight a few points. First consider these lines

	/			r
	/boot			h
	/etc/grsec		h
	/proc/slabinfo		h
	/proc/modules		h
	/proc/kallsyms		h
	/sys			h
The first line gives ‘r’ead access to the entire file system, but this is too permissive and opens up security holes, so we negate it for particular files and directories by ‘h’iding them. With these access controls, if the root user in the root role does ls /sys, you get

# ls /sys
ls: cannot access /sys: No such file or directory
but if the root user transitions to the admin role using gradm -a admin, then you get

# ls /sys/
block  bus  class  dev  devices  firmware  fs  kernel  module
Next consider these lines:

	/bin			rx
	/sbin			rx
	...
	/lib32			rx
	/lib64			rx
	/lib64/modules		h
Since the ‘x’ flag is inherited by all the files under those directories, this allows processes like your shell to execute, for example, /bin/ls or /lib64/ld-2.21.so. The ‘r’ flag further allows processes to read the contents of those files, so one can do hexdump /bin/ls or hexdump /lib64/ld-2.21.so. Dropping the ‘r’ flag on /bin would stop you from hexdumping the contents, but it would not prevent execution, nor would it stop you from listing the contents of /bin. If we wanted to make this subject a bit more secure, we could drop ‘r’ on /bin and not break our system. This, however, is not the case with the library directories. Dropping ‘r’ on them would break the system, since library files need to have readable contents to be loaded, as well as be executable.

Now consider these lines:

        /dev                    hx
        /dev/log                r
        /dev/urandom            r
        /dev/null               rw
        /dev/tty                rw
        /dev/ptmx               rw
        /dev/pts                rw
        /dev/initctl            rw
The ‘h’ flag will hide /dev and its contents, but the ‘x’ flag will still allow processes to enter into that directory and access /dev/log for reading, /dev/null for reading and writing, etc. The ‘h’ is required to hide the directory and its contents because, as we saw above, ‘x’ is sufficient to allow processes to list the contents of the directory. As written, the above policy yields the following result in the root role

# ls /dev
ls: cannot access /dev: No such file or directory
# ls /dev/tty0
ls: cannot access /dev/tty0: No such file or directory
# ls /dev/log
/dev/log
In the admin role, all those files are visible.

Let’s end our study of this subject by looking at the ‘bind’, ‘connect’ and ‘sock_allow_family’ lines. Note that the addresses/ports include a list of allowed transport protocols from /etc/protocols. One gotcha here: make sure you include port 0 for icmp! The ‘sock_allow_family’ line allows all socket families, including unix, inet, inet6 and netlink.

Now that we understand this policy, we can proceed to add isolated restrictions to our mostly permissive root role. Remember that the system is totally restricted for all UNIX users except root, so if you want to allow some ordinary user access, you can simply copy the entire role, including the subject blocks, and just rename ‘role root’ to ‘role myusername’. You’ll probably want to remove the ‘role_transitions’ line, since an ordinary user should not be able to transition to the admin role. Now suppose, for whatever reason, you don’t want this user to be able to list any files or directories. You can simply add a line to his ‘/’ subject block which reads ‘/bin/ls h’, and ls becomes completely unavailable to him! This particular example might not be that useful in practice, but you can use the technique, for example, if you want to restrict access to your compiler suite: just ‘h’ all the directories and files that make up the suite and it becomes unavailable, as sketched below.
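
For example, hiding a compiler suite might look something like the following lines added to that user’s ‘/’ subject block; the exact paths are only illustrative and depend on the system:

	/usr/bin/gcc		h
	/usr/lib/gcc		h
	/usr/libexec/gcc	h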

A more complicated and useful example might be to restrict a user’s directory listings to just his home. To do this, we’ll have to add a new subject block for /bin/ls. If you’re not sure where to start, you can always begin with an extremely restrictive subject block, tack it onto the end of the subjects for the role you want to modify, and then progressively relax it until it works. Alternatively, you can do partial learning on this subject as described above. Let’s proceed manually and add the following:

subject /bin/ls o {
        /  h
        -CAP_ALL
        connect disabled
        bind    disabled
}
Note that this is identical to the extremely restrictive ‘/’ subject for the default role, except that the subject is ‘/bin/ls’, not ‘/’. There is also a subject flag ‘o’, which tells RBAC to override the previous policy for /bin/ls. We have to override it because that policy was too permissive. Now, in one terminal, execute gradm -R in the admin role, while in another terminal obtain a denial by running ls /home/myusername. Checking our dmesg we see that:

[33878.550658] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /lib64/ld-2.21.so by /bin/ls[bash:7861] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7164] uid/euid:0/0 gid/egid:0/0
Well, that makes sense. We’ve started afresh denying everything, but /bin/ls requires access to the dynamic linker/loader, so we’ll restore read access to it by adding a line ‘/lib64/ld-2.21.so r’. Repeating our test, we get a seg fault! Obviously, we don’t just need read access to ld.so, we also need execute privileges. We add ‘x’ and try again. This time the denial is

[34229.335873] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /etc/ld.so.cache by /bin/ls[ls:7917] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0
[34229.335923] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /lib64/libacl.so.1.1.0 by /bin/ls[ls:7917] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0
Of course! We need ‘rx’ for all the libraries that /bin/ls links against, as well as the linker cache file. So we add lines for libc, libattr, libacl and ld.so.cache. Our final denial is

[34481.933845] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /home/myusername by /bin/ls[ls:7982] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0
All we need now is ‘/home/myusername r’ and we’re done! Our final subject block looks like this:

subject /bin/ls o {
        /                         h
        /home/myusername          r
        /etc/ld.so.cache          r
        /lib64/ld-2.21.so         rx
        /lib64/libc-2.21.so       rx
        /lib64/libacl.so.1.1.0    rx
        /lib64/libattr.so.1.1.0   rx
        -CAP_ALL
        connect disabled
        bind    disabled
}
Proceeding in this fashion, we can add isolated restrictions to our mostly permissive policy.

References:

The official documentation is The_RBAC_System.  A good reference for the role, subject and object flags can be found in these  Tables.
Top

Template was specified incorrectly

Postby Sven Vermeulen via Simplicity is a form of art... »

After reorganizing my salt configuration, I received the following error:

[ERROR   ] Template was specified incorrectly: False
Enabling some debugging on the command gave me a slight pointer why this occurred:

[DEBUG   ] Could not find file from saltenv 'testing', u'salt://top.sls'
[DEBUG   ] No contents loaded for env: testing
[DEBUG   ] compile template: False
[ERROR   ] Template was specified incorrectly: False
I was using a single top file as recommended by Salt, but apparently it was still looking for top files in the other environments.

Yet, if I split the top files across the environments, I got the following warning:

[WARNING ] Top file merge strategy set to 'merge' and multiple top files found. Top file merging order is undefined; for better results use 'same' option
So what's all this about?

When using a single top file is preferred

If you want to stick with a single top file, then the first error is (at least in my case) caused by the environments not having a fallback definition.

My /etc/salt/master configuration file had the following file_roots setting:

file_roots:
  base:
    - /srv/salt/base
  testing:
    - /srv/salt/testing
The problem is that Salt expects a top file to be available through each environment. What I had to do was set the base directory as a fallback for the other environments, like so:

file_roots:
  base:
    - /srv/salt/base
  testing:
    - /srv/salt/testing
    - /srv/salt/base
With this set, the error disappeared and both salt and myself were happy again.

When multiple top files are preferred

If you really want to use multiple top files (which is also a use case in my configuration), then we first need to make sure that the top files of all environments correctly isolate the minion matches, as in the sketch below. If two environments match the same minion, this approach becomes more troublesome.
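
For instance, isolated top files might look like this; the minion globs are illustrative and must not overlap:

# /srv/salt/base/top.sls
base:
  'web*':
    - webserver

# /srv/salt/testing/top.sls
testing:
  'db*':
    - database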

On the one hand, we can just let saltstack merge the top files (the default behavior), but the order of the merging is undefined (and no, you can't set it using env_order), which might result in salt states being executed in an unexpected order. If the definitions are done to such an extent that this is not a problem, then you can just ignore the warning. See also bug 29104 about the warning itself.

Better would be to have the top files of the environment(s) isolated, so that each environment's top file completely manages the entire environment. When that is the case, we tell salt that only the top file of the affected environment should be used. This is done using the following setting in /etc/salt/master:

top_file_merging_strategy: same
If this is used, then the env_order setting is used to define in which order the environments are processed.
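
For example, to have base processed before testing, the following could be added to /etc/salt/master:

env_order:
  - base
  - testing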

Oh and if you're using salt-ssh, then be sure to set the environment of the minion in the roster file, as there is no running minion on the target system that informs salt about the environment to use otherwise:

# In /etc/salt/roster
testserver:
  host: testserver.example.com
  minion_opts:
    environment: testing
Top

Using salt-ssh with agent forwarding

Postby Sven Vermeulen via Simplicity is a form of art... »

Part of a system's security is to reduce the attack surface. Following this principle, I want to see if I can switch from using regular salt minions towards salt-ssh for a saltstack managed system set. This would allow doing some system management over SSH instead of over ZeroMQ.

I'm not confident yet that this is a solid approach to take (performance is also important, and it is greatly reduced with salt-ssh), and the salt minions' exposure over ZeroMQ is not all that insecure either (especially not when a local firewall ensures that only connections from the salt master are allowed). But playing doesn't hurt.

Using SSH agent forwarding

Anyway, I quickly got stuck with accessing minions over the SSH interface, as it seemed that salt requires its own SSH keys (I don't enable password-only authentication; most of the systems use the AuthenticationMethods approach to chain both keys and passwords). But first things first: the current target uses regular ssh key authentication (no chained approach, that's for later). Still, I don't want to assign such a powerful key to my salt master (especially not if it would later also document the passwords). I would like to use SSH agent forwarding.

Luckily, salt does support that; it just forgot to document it. Basically, what you need to do is update the roster file with the priv: parameter set to agent-forwarding:

myminion:
  host: myminion.example.com
  priv: agent-forwarding
It will use the known_hosts file of the currently logged-on user (the one executing the salt-ssh command), so make sure that the target system's key is already known.
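
This also assumes an SSH agent is running and holds your key on the machine from which you invoke salt-ssh; something along these lines, where the key path is just an example:

~$ eval $(ssh-agent)
~$ ssh-add ~/.ssh/id_rsa
After that, the usual salt-ssh invocations work: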

~$ salt-ssh myminion test.ping
myminion:
    True
Top

Crooks Steal, Sell Verizon Enterprise Customer Data

Postby BrianKrebs via Krebs on Security »

Verizon Enterprise Solutions, a B2B unit of the telecommunications giant that gets called in to help Fortune 500’s respond to some of the world’s largest data breaches, is reeling from its own data breach involving the theft and resale of customer data, KrebsOnSecurity has learned.

Earlier this week, a prominent member of a closely guarded underground cybercrime forum posted a new thread advertising the sale of a database containing the contact information on some 1.5 million customers of Verizon Enterprise.

The seller priced the entire package at $100,000, but also offered to sell it off in chunks of 100,000 records for $10,000 apiece. Buyers also were offered the option to purchase information about security vulnerabilities in Verizon’s Web site.

Contacted about the posting, Verizon Enterprise told KrebsOnSecurity that the company recently identified a security flaw in its site that permitted hackers to steal customer contact information, and that it is in the process of alerting affected customers.

“Verizon recently discovered and remediated a security vulnerability on our enterprise client portal,” the company said in an emailed statement. “Our investigation to date found an attacker obtained basic contact information on a number of our enterprise customers. No customer proprietary network information (CPNI) or other data was accessed or accessible.”

The seller of the Verizon Enterprise data offers the database in multiple formats, including the database platform MongoDB, so it seems likely that the attackers somehow forced the MongoDB system to dump its contents. Verizon has not yet responded to questions about how the breach occurred, or exactly how many customers were being notified.

The irony in this breach is that Verizon Enterprise is typically the one telling the rest of the world how these sorts of breaches take place.  I frequently recommend Verizon’s annual Data Breach Investigations Report (DBIR) because each year’s is chock full of interesting case studies from actual breaches, case studies that include hard lessons which mostly age very well (i.e., even a DBIR report from four years ago has a great deal of relevance to today’s security challenges).

According to the 2015 report, for example, Verizon Enterprise found that organized crime groups were the most frequently seen threat actor for Web application attacks of the sort likely exploited in this instance. “Virtually every attack in this data set (98 percent) was opportunistic in nature, all aimed at easy marks,” the company explained.

It’s a fair bet that if cyber thieves buy all or some of the Verizon Enterprise customer database, some of those customers may be easy marks for phishing and other targeted attacks. Even if it is limited to the contact data for technical managers at companies that use Verizon Enterprise Solutions, this is bound to be a target-rich list: According to Verizon’s page at Wikipedia, some 99 percent of Fortune 500 companies are using Verizon Enterprise Solutions.
Top

Phishing Victims Muddle Tax Fraud Fight

Postby BrianKrebs via Krebs on Security »

Many U.S. citizens are bound to experience delays in getting their tax returns processed this year, thanks largely to more stringent controls enacted by Uncle Sam and the states to block fraudulent tax refund requests filed by identity thieves. A steady drip of corporate data breaches involving phished employee W-2 information is adding to the backlog, as is an apparent mass adoption by ID thieves of professional tax services for processing large numbers of phony refund requests.


According to data released this week by anti-fraud company iovation, the Internal Revenue Service is taking up to three times longer to review 2015 tax returns compared to past years.

Julie Magee, commissioner of Alabama’s Department of Revenue,  said much of the delay this year at the state level is likely due to new “fraud filters” the states have put in place with Gentax, a return processing and auditing system used by about half of U.S. state revenue departments. If the states can’t outright deny a suspicious refund request, they’ll very often deny the requested electronic bank deposit and issue a paper check to the taxpayer’s known address instead.

“Many states decided they weren’t going to start paying refunds until March 1, and on our side we’ve been using all our internal fraud resources and tools to analyze the tax return before we even put it in the queue,” Magee said. “That’s delaying refunds nationwide for the IRS and the states, and it’s pretty much going to also mean a helluva lot of paper checks are going out this year.”

The added fraud filters that states are employing take advantage of data elements shared for the first time this tax season by the major online tax preparation firms such as TurboTax. The filters look for patterns known to be associated with phony refund requests, such as how quickly the return was filed, or whether the same Internet address was seen completing multiple returns.

Magee said some of the states have been adding new fraud filters nearly every time they learn of another big breach involving large numbers of stolen or phished employee W2 data, a huge problem this tax season that is forcing dozens of companies large and small to disclose data breaches over the past few weeks.

“Every time we turn around getting a phone call about another breach,” Magee said. “Because of all the different breaches, the states and the IRS have been taking extreme measures to filter, filter, filter. And each time we’d get news of an additional breach, we’d start over, reprogram our fraud filters, and re-assess those returns that were not processed fully yet and those waiting to be processed.”

Magee said the Gentax software assigns each tax return a score for “wage confidence” and “identity confidence,” and that usually fraudulent tax refund requests have high wage confidence but low — if any — identity confidence. That’s because the fraudsters are filing refund requests on taxpayers for whom they already have stolen W2 information. The identity confidence in these cases is low often because the fraudsters are asking to have the money electronically deposited into an account that can’t be directly tied to the taxpayer, or they have incorrectly supplied some of the victim’s data.

“I have zero confidence that filings which match this pattern are legitimate,” Magee said. “It’s early still, but our new filtering system seems to be working. But it’s still a big unknown about the percentage of fraudulent refunds we’re not stopping.”

MORE W2 PHISHING VICTIMS

Most states didn’t start processing returns until after March 1, which is exactly when a flood of data breaches related to phished employee W2 data began washing up. As KrebsOnSecurity first warned in mid-February, thieves have been sending targeted phishing emails to human resources and finance employees at countless organizations, spoofing a message from the CEO requesting all employee W2’s in PDF format.

In Magee’s own state, W2 phishers hauled in tax data on an estimated 180 employees of ISCO Industries in Huntsville, and some 425 employees at the EWTN Global Catholic Network in Irondale, Ala. But those are just the ones that have been made public. Magee’s office only learned of those breaches after employees at the affected organizations reached out to journalists who then wrote about the compromises.

Over the past week, KrebsOnSecurity similarly has heard from employees at a broad range of organizations that appear to have fallen victim to W2 phishing scams, including some 28,000 employees of the market research giant Kantar Group; 17,000+ employees of Sprouts Farmer’s Market; call center software provider Aspect; computer backup software maker Acronis; Kids Dental Kare in Los Angeles; Century Fence, a fencing company in Wisconsin; Nation’s Lending Corporation, a mortgage lending firm in Independence, Ohio; QTI Group, a Wisconsin-based human resources consulting company; and the jousting-and-feasting entertainment company Medieval Times.

TAX FRAUDSTERS GOING PRO?

Magee said Alabama and other states are dealing with a huge spike this year in fraudulent refund requests filed via criminals who use online software firms that specialize in selling e-filing services to tax professionals.

According to Magee, crooks first register with the IRS as “electronic return originators.” EROs are typically accountants or tax preparation firms authorized by the IRS to prepare and transmit tax returns for people and companies electronically.  Magee said thieves have been registering as EROs and then buying tax preparation software and services from firms like PETZ Enterprises to push through large numbers of phony refund requests.

“The biggest move [in refund fraud] this year is in the so-called ‘professional services applications,’ which are being flagged in high rates this year for fraud,” Magee said. “And that’s not just Alabama. A great number of other states are seeing the same thing. We have always had fraud in that area, but we’re seeing significantly higher rates of fraud there now.”

Magee said tax software prep firms should be required to conduct more due diligence on their clients.

“In the state of Alabama, you need a license to cut someone’s hair, to be a barber or a cosmetologist, but anyone can become a tax preparation professional with no certification at all,” Magee said. “The software firms are where all the fraud is going now. The criminal becomes an ERO, and then he can just sit there all day and file an unlimited number of fraudulent returns.”

PETZ did not respond to requests for comment. But Stephen Ryan, a lobbyist for the industry group American Coalition for Taxpayer Rights, said states are free to regulate tax providers as they see fit.

“If there are facts that demonstrate there is a problem such as is being alleged about unscrupulous local preparers using professional software they license, the state certainly has the sovereign authority to prosecute or regulate this,” Ryan said. “If a specific source of fraud or crimes is being locally committed, that’s a pretty easy enforcement target to focus upon. And in the unlikely case a state doesn’t have that authority, they can seek it from their legislature.”

Look for additional stories in the coming days as part of a series on tax refund fraud in 2016. Next week, I’ll take a closer look at how thieves are exploiting know-your-customer weaknesses in the prepaid card industry to launder the proceeds from refund fraud and other schemes.
Top

bsdtalk263 - joshua stein and Brandon Mercer

Postby Mr via bsdtalk »

This episode is brought to you by ftp, the Internet file transfer program, which first appeared in 4.2BSD.

An interview with the hosts of the Garbage Podcast, joshua stein and Brandon Mercer. You can find their podcast at http://garbage.fm/

File Info: 17Min, 8MB.

Ogg Link: https://archive.org/download/bsdtalk263/bsdtalk263.ogg
Top

Of OpenStack and uwsgi

Postby Matthew Thode (prometheanfire) via Let's Play a Game »

Why use uwsgi

Not all OpenStack services support uwsgi. However, in the Liberty timeframe it is supported as the primary way to run the Keystone API services and is the recommended way of running Horizon (if you use it). Going forward, other OpenStack services will be moving to support it as well; for instance, I know that Neutron is working on it, or may have completed it, for the Mitaka release.

Basic Setup

  • Install >=www-servers/uwsgi-2.0.11.2-r1 with the python use flag as it has an updated init script.
  • Make sure you note the group you want the webserver to use when accessing the uwsgi sockets; I chose nginx.

Configs and permissions

When defaults are available I will only note what needs to change.

uwsgi configs

/etc/conf.d/uwsgi
UWSGI_EMPEROR_PATH="/etc/uwsgi.d/"
UWSGI_EMPEROR_GROUP=nginx
UWSGI_EXTRA_OPTIONS='--need-plugins python27'
/etc/uwsgi.d/keystone-admin.ini
[uwsgi]
master = true
plugins = python27
processes = 10
threads = 2
chmod-socket = 660

socket = /run/uwsgi/keystone_admin.socket
pidfile = /run/uwsgi/keystone_admin.pid
logger = file:/var/log/keystone/uwsgi-admin.log

name = keystone
uid = keystone
gid = nginx

chdir = /var/www/keystone/
wsgi-file = /var/www/keystone/admin
/etc/uwsgi.d/keystone-main.ini
[uwsgi]
master = true
plugins = python27
processes = 4
threads = 2
chmod-socket = 660

socket = /run/uwsgi/keystone_main.socket
pidfile = /run/uwsgi/keystone_main.pid
logger = file:/var/log/keystone/uwsgi-main.log

name = keystone
uid = keystone
gid = nginx

chdir = /var/www/keystone/
wsgi-file = /var/www/keystone/main
I have horizon in use via a virtual environment, so I enabled vacuum in this config.

/etc/uwsgi.d/horizon.ini
[uwsgi]
master = true  
plugins = python27
processes = 10  
threads = 2  
chmod-socket = 660
vacuum = true

socket = /run/uwsgi/horizon.sock  
pidfile = /run/uwsgi/horizon.pid  
logger = file:/var/log/horizon/horizon.log

name = horizon
uid = horizon
gid = nginx

chdir = /var/www/horizon/
wsgi-file = /var/www/horizon/horizon.wsgi

wsgi scripts

The directories are owned by the service they contain, keystone:keystone or horizon:horizon.

/var/www/keystone/admin perms are 0750 keystone:keystone
# Copyright 2013 OpenStack Foundation
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import os

from keystone.server import wsgi as wsgi_server


name = os.path.basename(__file__)

# NOTE(ldbragst): 'application' is required in this context by WSGI spec.
# The following is a reference to Python Paste Deploy documentation
# http://pythonpaste.org/deploy/
application = wsgi_server.initialize_application(name)
/var/www/keystone/main perms are 0750 keystone:keystone
# Copyright 2013 OpenStack Foundation
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import os

from keystone.server import wsgi as wsgi_server


name = os.path.basename(__file__)

# NOTE(ldbragst): 'application' is required in this context by WSGI spec.
# The following is a reference to Python Paste Deploy documentation
# http://pythonpaste.org/deploy/
application = wsgi_server.initialize_application(name)
Note that this references the paths where I have my horizon virtual environment.

/var/www/horizon/horizon.wsgi perms are 0750 horizon:horizon
#!/usr/bin/env python
import os
import sys


activate_this = '/home/horizon/horizon/.venv/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

sys.path.insert(0, '/home/horizon/horizon')
os.environ['DJANGO_SETTINGS_MODULE'] = 'openstack_dashboard.settings'

import django.core.wsgi
application = django.core.wsgi.get_wsgi_application()
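As a quick sanity check (my own addition, a minimal sketch assuming the socket paths used above), you can confirm that the emperor spawned the vassals and that your group membership actually allows connecting, before wiring up the webserver:

# Minimal reachability check for the uwsgi sockets configured above.
# Run it as a user in the nginx group; adjust paths if yours differ.
import socket

SOCKETS = (
    "/run/uwsgi/keystone_admin.socket",
    "/run/uwsgi/keystone_main.socket",
    "/run/uwsgi/horizon.sock",
)

for path in SOCKETS:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        print("%s: reachable" % path)
    except socket.error as exc:
        print("%s: NOT reachable (%s)" % (path, exc))
    finally:
        s.close()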
Top

Implementing OpenPGP and S/MIME Cryptography in Trojita

Postby via jkt's blog »

Are you interested in cryptography, either as a user or as a developer? Read on -- this blogpost talks about some of the UI choices we made, as well as about the technical challenges of working with the existing crypto libraries.

The next version of Trojitá, a fast e-mail client, will support working with encrypted and signed messages. Thanks to Stephan Platz for implementing this during the Google Summer of Code project. If you are impatient, just install the trojita-nightly package and check it out today.

Here's how a signed message looks in a typical scenario:

Some other e-mail clients show a yellow semi-warning icon when showing a message with an unknown or unrecognized key. In my opinion, that isn't a great design choice. If I as an attacker wanted to get rid of the warning, I could just as well send a faked but unsigned e-mail message. This message is signed by something, so we should probably not make this situation appear less secure than if the e-mail was not signed at all.

(Careful readers might start thinking about maintaining a persistent key association database based on the observed traffic patterns. We are aware of the upstream initiative within the GnuPG project, especially TOFU, the Trust On First Use trust model. It is pretty fresh code, not available in major distributions yet, but it's definitely something to watch and evaluate in the future.)

Key management, assigning trust etc. is something which is outside the scope of an e-mail client like Trojitá. We might add some buttons for key retrieval and for launching a key management application of your choice, such as Kleopatra, but we are definitely not in the business of "real" key management, cross-signatures, defining trust, etc. What we do instead is work with your system's configuration and show the results based on whether GnuPG thinks that you trust this signature. That's when we are happy to show a nice green padlock to you:

We are also making a bunch of sanity checks when it comes to signatures. For example, it is important to verify that the sender of an e-mail which you are reading has an e-mail address which matches the identity of the key holder -- in other words, is the guy who sent the e-mail and the one who made the signature the same person?

If not, it would be possible for your co-worker (who you already trust) to write an e-mail message to you with a faked From header pretending to be your boss. The body of a message is signed by your colleague with his valid key, so if you forget to check the e-mail addresses, you are screwed -- and that's why Trojitá handles this for you:

In some environments, S/MIME signatures using traditional X.509 certificates are more common than OpenPGP (aka PGP, aka GPG). Trojitá supports them all just as easily. Here is what happens when we are curious and decide to drill down to details about the certificate chain:

Encrypted messages are of course supported, too:

We had to start somewhere, so right now Trojitá supports only read-only operations such as signature verification and decryption of messages. It is not yet possible to sign and encrypt new messages; that's something which will be implemented in the near future (and patches are welcome for sure).

Technical details

Originally, we were planning to use the QCA2 library because it provides a stand-alone Qt wrapper over a pluggable set of cryptography backends. The API was very convenient for a Qt application such as Trojitá, with native support for Qt's signals/slots and asynchronous operation implemented in a background thread. However, it turned out that its support for GnuPG, a free-software implementation of the OpenPGP protocol, leaves much to be desired. It does not really support the concept of PGP's Web of Trust, and therefore it doesn't report back how trustworthy the sender is. This means that there wouldn't be any green padlock with QCA. The library was also really slow during certain operations -- including retrieval of a single key from a keystore. It just isn't acceptable to wait 16 seconds when verifying a signature, so we had to go looking for something else.

Compared to QCA, the GpgME++ library lives on a lower level. Its Qt integration is limited to working with QByteArray classes as buffers for gpgme's operations. There is some support for integrating with Qt's event loop, but we were warned not to use it because it's apparently deprecated code which will be removed soon.

The gpgme library supports some level of asynchronous operation, but it is a bit limited. Ultimately, someone has to do the work and consume the CPU cycles for all the crypto operations and/or at least the communication to the GPG Agent in the background. These operations can take a substantial amount of time, so we cannot do that in the GUI thread (unless we wanted to reuse that discouraged event loop integration). We could use the asynchronous operations along with a call to gpgme_wait in a single background thread, but that would require maintaining our own dedicated crypto thread and coming up with a way to dispatch the results of each operation to the original requester. That is certainly doable, but in the end, it was a bit more straightforward to look into C++11's toolset and reuse the std::async infrastructure for launching background tasks along with a std::future for synchronization. You can take a look at the resulting code in src/Cryptography/GpgMe++.cpp. Who can dislike lines like task.wait_for(std::chrono::duration_values::zero()) == std::future_status::timeout? :)
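For readers who don't speak C++11, here is the same launch-and-poll pattern as a rough Python sketch (my illustration only, not Trojitá's actual code, which is the C++ in src/Cryptography/GpgMe++.cpp; verify_signature here is a made-up stand-in for the GpgME call):

# Launch the expensive crypto work in the background, then poll the future
# without blocking; the done() check mirrors comparing wait_for(zero)
# against std::future_status::timeout in the C++ code.
from concurrent.futures import ThreadPoolExecutor

def verify_signature(blob):
    # Stand-in for the slow GpgME verification call.
    return "signed" if blob else "unsigned"

executor = ThreadPoolExecutor(max_workers=1)
task = executor.submit(verify_signature, b"-----BEGIN PGP SIGNED MESSAGE-----")

while not task.done():
    pass  # a real GUI would return to the event loop here instead of spinning

print(task.result())
executor.shutdown()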

Finally, let me provide credit where credit is due. Stephan Platz worked on this feature during his GSoC term, and he implemented the core infrastructure around which the whole feature is built. That was the crucial point and his initial design has survived into the current implementation despite the fact that the crypto backend has changed and a lot of code was refactored.

Another big thank you goes to the GnuPG and GpgME developers who provide a nice library which works not just with OpenPGP, but also with the traditional X.509 (S/MIME) certificates. The same has to be said about the developers behind the GpgME++ library which is a C++ wrapper around GpgME with roots in the KDEPIM software stack, and also something which will one day probably move to GpgME proper. The KDE ties are still visible, and Andre Heinecke was kind enough to review our implementation for obvious screwups in how we use it. Thanks!
Top

Hospital Declares ‘Internal State of Emergency’ After Ransomware Infection

Postby BrianKrebs via Krebs on Security »

A Kentucky hospital says it is operating in an “internal state of emergency” after a ransomware attack rattled around inside its networks, encrypting files on computer systems and holding the data on them hostage unless and until the hospital pays up.

A streaming red banner on Methodisthospital.net warns that a computer virus infection has limited the hospital’s use of electronic web-based services.
Henderson, Ky.-based Methodist Hospital placed a scrolling red alert on its homepage this week, stating that “Methodist Hospital is currently working in an Internal State of Emergency due to a Computer Virus that has limited our use of electronic web based services.  We are currently working to resolve this issue, until then we will have limited access to web based services and electronic communications.”

Jamie Reid, information systems director at the hospital, said the malware involved is known as the “Locky” strain of ransomware, a contagion that encrypts all of the important files, documents and images on an infected host, and then deletes the originals. Victims can regain access to their files only by paying the ransom, or by restoring from a backup that is hopefully not on a network which is freely accessible to the compromised computer.

In the case of Methodist Hospital, the ransomware tried to spread from the initial infection to the entire internal network, and succeeded in compromising several other systems, Reid said. That prompted the hospital to shut down all of the hospital’s desktop computers, bringing systems back online one by one only after scanning each for signs of the infection.

“We have a pretty robust emergency response system that we developed quite a few years ago, and it struck us that as everyone’s talking about the computer problem at the hospital maybe we ought to just treat this like a tornado hit, because we essentially shut our system down and reopened on a computer-by-computer basis,” said David Park, an attorney for the Kentucky healthcare center.

The attackers are demanding a mere four bitcoins in exchange for a key to unlock the encrypted files; that’s a little more than USD $1,600 at today’s exchange rate.

Park said the administration hasn’t ruled out paying the ransom.

“We haven’t yet made a decision on that, we’re working through the process,” with the FBI, he said. “I think it’s our position that we’re not going to pay it unless we absolutely have to.”

The attack on Methodist comes just weeks after it was revealed that a California hospital that was similarly besieged with ransomware paid a $17,000 ransom to get its files back.

Park said the main effect of the infection has been downtime, which forced the hospital to process everything by hand on paper. He declined to say which systems were infected, but said no patient data was impacted.

“We have downtime procedures to go to a paper system anyway, so we went to that paper system,” he said. “But we don’t feel like it negatively impacted patient care. They didn’t get any patient information.”

Ransomware infections are largely opportunistic attacks that mainly prey on people who browse the Web with outdated Web browsers and/or browser plugins like Java and Adobe Flash and Reader. Most ransomware attacks take advantage of exploit kits, malicious code that, when stitched into a hacked site, probes visiting browsers for the presence of these vulnerabilities.

The attack on Methodist Hospital was another form of opportunistic attack that came in via spam email, in messages stating something about invoices and that recipients needed to open an attached (booby-trapped) file.

It’s a fair bet that as ransomware attacks and attackers mature, these schemes will slowly become more targeted. I also worry that these more deliberate attackers will take a bit more time to discern how much the data they’ve encrypted is really worth, and precisely how much the victim might be willing to pay to get it back.
Top

Bitstream Filtering

Postby lu_zero via Luca Barbato »

Last weekend, after a few months of work, the new bitstream filter API eventually landed.

Bitstream filters

In Libav it is possible to manipulate raw and encoded data in many ways, the most common being

  • Demuxing: extracting single data packets and their timing information
  • Decoding: converting the compressed data packets in raw video or audio frames
  • Encoding: converting the raw multimedia information in a compressed form
  • Muxing: storing the compressed information along with timing information and additional data.
Bitstream filtering is somewhat less considered, even though bitstream filters are widely used under the hood to demux and mux many widely used formats.

It could be considered an optional final demuxing or initial muxing step, since it works on encoded data and its main purpose is to reformat the data so that it can be accepted by decoders that consume only a specific serialization of the many supported (e.g. the HEVC QSV decoder), or so that it can be correctly muxed into a container format that stores only a specific kind.

In Libav this kind of reformatting normally happens automatically, with the annoying exception of MPEGTS muxing.

New API

The new API is modeled on the pull/push paradigm I described for AVCodec before; it works on AVPackets and has the following concrete implementation:

// Query
const AVBitStreamFilter *av_bsf_next(void **opaque);
const AVBitStreamFilter *av_bsf_get_by_name(const char *name);

// Setup
int av_bsf_alloc(const AVBitStreamFilter *filter, AVBSFContext **ctx);
int av_bsf_init(AVBSFContext *ctx);

// Usage
int av_bsf_send_packet(AVBSFContext *ctx, AVPacket *pkt);
int av_bsf_receive_packet(AVBSFContext *ctx, AVPacket *pkt);

// Cleanup
void av_bsf_free(AVBSFContext **ctx);
In order to use a bsf you need to:

  • Look up its definition AVBitStreamFilter using a query function.
  • Set up a specific context AVBSFContext, by allocating, configuring and then initializing it.
  • Feed the input using the av_bsf_send_packet function and get the processed output once it is ready using av_bsf_receive_packet.
  • Once you are done, av_bsf_free cleans up the memory used for the context and the internal buffers.

Query

You can enumerate the available filters

void *state = NULL;

const AVBitStreamFilter *bsf;

while ((bsf = av_bsf_next(&state))) {
    av_log(NULL, AV_LOG_INFO, "%s\n", bsf->name);
}
or directly pick the one you need by name:

const AVBitStreamFilter *bsf = av_bsf_get_by_name("hevc_mp4toannexb");

Setup

A bsf may use some codec parameters and time_base and provide updated ones.

AVBSFContext *ctx;

ret = av_bsf_alloc(bsf, &ctx);
if (ret < 0)
    return ret;

ret = avcodec_parameters_copy(ctx->par_in, in->codecpar);
if (ret < 0)
    goto fail;

ctx->time_base_in = in->time_base;

ret = av_bsf_init(ctx);
if (ret < 0)
    goto fail;

ret = avcodec_parameters_copy(out->codecpar, ctx->par_out);
if (ret < 0)
    goto fail;

out->time_base = ctx->time_base_out;

Usage

Multiple AVPackets may be consumed before an AVPacket is emitted or multiple AVPackets may be produced out of a single input one.

AVPacket *pkt;

while (got_new_packet(&pkt)) {
    ret = av_bsf_send_packet(ctx, pkt);
    if (ret < 0)
        goto fail;

    while ((ret = av_bsf_receive_packet(ctx, pkt)) == 0) {
        yield_packet(pkt);
    }

    if (ret == AVERROR(EAGAIN))
        continue;
    if (ret == AVERROR_EOF)
        goto end;
    if (ret < 0)
        goto fail;
}

// Flush
ret = av_bsf_send_packet(ctx, NULL);
if (ret < 0)
    goto fail;

while ((ret = av_bsf_receive_packet(ctx, pkt)) == 0) {
    yield_packet(pkt);
}

if (ret != AVERROR_EOF)
    goto fail;
In order to signal the end of stream a NULL pkt should be fed to send_packet.

Cleanup

The cleanup function matches the av_freep signature so it takes the address of the AVBSFContext pointer.

    av_bsf_free(&ctx);
All the memory is freed and the ctx pointer is set to NULL.

Coming Soon

Hopefully next I’ll document the new HWAccel layer that already landed and some other API that I discussed with Kostya before.
Sadly my blog-time (and spare time in general) shrank a lot in the past months, so he rightfully blamed me a lot.
Top

Carders Park Piles of Cash at Joker’s Stash

Postby BrianKrebs via Krebs on Security »

A steady stream of card breaches at retailers, restaurants and hotels has flooded underground markets with a historic glut of stolen debit and credit card data. Today there are at least hundreds of sites online selling stolen account data, yet only a handful of them actively court bulk buyers and organized crime rings. Faced with a buyer’s market, these elite shops set themselves apart by focusing on loyalty programs, frequent-buyer discounts, money-back guarantees and just plain old good customer service.

An ad for new stolen cards on Joker’s Stash.
Today’s post examines the complex networking and marketing apparatus behind “Joker’s Stash,” a sprawling virtual hub of stolen card data that has served as the distribution point for accounts compromised in many of the retail card breaches first disclosed by KrebsOnSecurity over the past two years, including Hilton Hotels and Bebe Stores.

Since opening for business in early October 2014, Joker’s Stash has attracted dozens of customers who’ve spent five- and six-figures at the carding store. All customers are buying card data that will be turned into counterfeit cards and used to fraudulently purchase gift cards, electronics and other goods at big-box retailers like Target and Wal-Mart.

Unlike so many carding sites that mainly resell cards stolen by other hackers, Joker’s Stash claims that all of its cards are “exclusive, self-hacked dumps.”

“This mean – in our shop you can buy only our own stuff, and our stuff you can buy only in our shop – nowhere else,” Joker’s Stash explained on an introductory post on a carding forum in October 2014.

“Just don’t wanna provide the name of victim right here, and bro, this is only the begin[ning], we already made several other big breaches – a lot of stuff is coming, stay tuned, check the news!” the Joker went on, in response to established forum members who were hazing the new guy. He continued:

“I promise u – in few days u will completely change your mind and will buy only from me. I will add another one absolute virgin fresh new zero-day db with 100%+1 valid rate. Read latest news on http://krebsonsecurity.com/ – this new huge base will be available in few days only at Joker’s Stash.”

As a business, Joker’s Stash made good on its promise. It’s now one of the most bustling carding stores on the Internet, often adding hundreds of thousands of freshly stolen cards for sale each week.

A true offshore pirate’s haven, its home base is a domain name ending in “.sh”. Dot-sh is the country code top level domain (ccTLD) assigned to the tiny volcanic, tropical island of Saint Helena, but anyone can register a domain ending in dot-sh. St. Helena is on Greenwich Mean Time (GMT) — the same time zone used by this carding Web site. However, it’s highly unlikely that any part of this fraud operation is in Saint Helena, a remote British territory in the South Atlantic Ocean that has a population of just over 4,000 inhabitants.

This fraud shop includes a built-in discount system for larger orders: 5 percent off for customers who spend between $300 and $500; 15 percent off for fraudsters spending between $1,000 and $2,500; and 30 percent off for customers who top up their bitcoin balances to the equivalent of $10,000 or more.

For its big-spender “partner” clients, Joker’s Stash assigns three custom domain names to each partner. After those partners log in, the different 3-word domains are displayed at the top of their site dashboard, and the user is encouraged to use only those three custom domains to access the carding shop in the future (see screenshot below). More on these three domains in a moment.

The dashboard for a Joker’s Stash customer who has spent over $10,000 buying stolen credit cards from the site.

REFUNDS AND CUSTOMER LOYALTY BONUSES

Customers pay for stolen cards using Bitcoin, a virtual currency. All sales are final, although some batches of stolen cards for sale at Joker’s Stash come with a replacement policy — a short window of time from minutes to a few hours, generally — in which buyers can request replacement cards for any that come back as declined during that replacement timeframe.

Like many other carding shops, Joker’s Stash also offers an a-la-carte card-checking option that customers can use as an insurance policy when purchasing stolen cards. Such checking services usually rely on multiple legitimate, compromised credit card merchant accounts that can be used to round-robin process a small charge against each card the customer wishes to purchase to test whether the card is still valid. Customers receive an automatic credit to their shopping cart balances for any purchased cards that come back as declined when run through the site’s checking service.

This carding site also employs a unique rating system for clients, supposedly to prevent abuse of the service and to provide what the proprietors of this store call “a loyalty program for honest partners with proven partner’s record.”

According to Joker’s Stash administrators, customers with higher ratings get advance notice of new batches of stolen cards coming up for sale, prioritized support requests, as well as additional time to get refunds on cards that came back as “declined” or closed by the issuing bank shortly after purchase.

To determine a customer’s loyalty rating, the system calculates the sum of all customer deposits minus the total refunds requested by the customer.

“So if you have deposited $10,000 USD and refunded items for $3,000 USD then your rating is: 10,000 – 3,000 = 7,000 = 7k [Gold rating – you are the king],” Joker’s Stash explains. “If this is the case then new bases will become available for your purchase earlier than for others thanks to your high rating. It gives you ability to see and buy new updates before other people can do that, as well as some other privileges like prioritized support.”

This user has a stellar 16,000+ rating, because he’s deposited more than $20,000 and only requested refunds on $3,500 worth of stolen cards.

HIGH ROLLERS

It would appear that Joker’s Stash has attracted a large number of high-dollar customers, and a good many of them qualify for the elite, “full stash” category reserved for clients who’ve deposited more than $10,000 and haven’t asked for more than about 30 percent of those cards to be refunded or replaced. KrebsOnSecurity has identified hundreds of these three-word domains that the card site has assigned to customers. They were mostly all registered across an array of domain registrars over the past year, and nearly all are (ab)using services from a New Jersey-based cloud hosting firm called Vultr Holdings.

All customers — be they high-roller partners or one-card-at-a-time street thugs — are instructed on how to log in to the site with software that links users to the Tor network. Tor is a free anonymity network that routes its users’ encrypted traffic between multiple hops around the globe to obscure their real location online.

The site’s administrators no doubt very much want all customers to use the Tor version of the site as opposed to domains reachable on the open Internet. Carding site domain names get seized all the time, but it is far harder to discover and seize a site or link hosted on Tor.

What’s more, switching domain names all the time puts carding shop customers in the crosshairs of phishers and other scam artists. While customers are frantically searching for the shop’s updated domain name, fraudsters step in to take advantage of the confusion and to promote counterfeit versions of the site that phish account credentials from unwary criminals.

Nicholas Weaver, a senior researcher in networking and security for the International Computer Science Institute (ICSI), said it looks like the traffic from the three-word domains that Joker’s Stash assigns to each user gets routed through the same Tor hidden servers.

“What he appears to be doing is throwing up an Nginx proxy on each Internet address he’s using to host the domain sets given to users,” Weaver said. “This communicates with his back end server, which is also reachable as one of two Tor hidden services. And both are the same server: If you add to your shopping cart in Tor, it shows up instantly in the clearnet version of the site, and the same with removing cards. So my conclusion is both clearnet and Tornet are the same server on the back end.”

By routing all three-word partner domains through a server hidden on Tor, the Joker’s Stash administration seems to understand that many customers can’t be bothered to run Tor, and if forced to will just go to a competing site that allows direct access via a regular, non-Tor-based Internet connection.

“My guess is [Joker’s Stash] would like everyone to go to Tor, but they know that Tor is a pain, so they’re using the clearnet because that is what customers demand,” Weaver said.

Interestingly, this setup suggests several serious operational security failures by the Joker’s Stash staff. For example, while Tor encrypts data at every hop in the network, none of the partner traffic from any of the custom three-word domains is encrypted by default on its way to the Tor version of the site. To their credit, the site administrators do urge users to change this default setting by replacing http:// with https:// in front of their private domains.

A web page lists the various ways to reach the carding forum on the clearnet or via Tor. The links have been redacted.
I’ll have more on Joker’s Stash in an upcoming post. In the meantime, if you enjoyed this story, check out a deep dive I did last year into “McDumpals,” another credit card fraud bazaar that caters to bulk buyers and focuses heavily on customer service.
Top

Reverse engineering the FreeStyle Libre CGM, chapter 1

Postby Flameeyes via Flameeyes's Weblog »

I have already reviewed the Abbott FreeStyle Libre continuous glucose monitor, and I have hinted that I already started reverse engineering the protocol it uses to communicate with the (Windows) software. I should also point out that for once the software does provide significant value, as they seem to have spent more effort in the data analysis than any other part of it.

Please note that this is just a first part for this device. Unlike with the previous blog posts, I have not yet managed to get even partial information downloaded with my script as I write and post this. Indeed, if you, as you read this, have any suggestions of things I have not tried yet, please do let me know.

Since at this point it's getting common, I've started up the sniffer, and sniffed starting from the first transaction. As it is to be expected, the amount of data in these transactions is significantly higher than that of the other glucometers. Even if you were taking seven blood samples a day for months with one of the other glucometers, it's going to take a much longer time to get the same amount of readings as this sensor, which takes 96 readings a day by itself, plus the spot-checks and added notes and information to comment them.

The device presents itself as a standard HID device, which is a welcome change from the craziness of SCSI-based hidden message protocols. The messages within are of course not defined in any standard, so inspecting them becomes interesting.

It took me a while to figure out what the data that the software was already decoding for me meant. At first I thought I would have to use magic constants and libusb to speak raw USB to the device — indeed, a quick glance around Xavier's work showed me that there were plenty of similarities, and he's including quite a few magic constants in that code. Luckily for me, after managing to query the device with python-libusb1, which was quite awkward as I also had to fix it to work, I realized that I was essentially reimplementing hidraw access.

After rewriting the code to use /dev/hidraw1 (which makes it significantly simpler), I also managed to understand that the device uses exactly the same initialization procedure as the FreeStyle InsuLinx that Xavier already implemented, and similar but not identical command handling (some of the commands match, and some even match the Optium, at least in format.)

Indeed the device seems to respond to two general classes of commands: text commands and binary commands; this is the first device I've reverse engineered with such a hybrid protocol. Text commands also have the same checksumming as both the Optium and Neo protocols.

The messages are always transferred in 64-bytes packets, even though the second byte of the message declares the actual significant length, which can be even zero. Neither the software nor the device zero out their buffers before writing the new command/response packets, so there is lots of noise in those packets.
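To make the framing concrete, here is a minimal sketch (my interpretation of the two header bytes described above, not official documentation; the example frame is made up):

# Strip the 64-byte HID framing: byte 0 is the message type, byte 1 the
# number of significant payload bytes; anything past that is stale noise.
def parse_frame(packet):
    if len(packet) != 64:
        raise ValueError("expected a 64-byte HID packet")
    return packet[0], packet[2:2 + packet[1]]

frame = bytes([0x0B, 0x04, 0x13, 0xA6, 0x00, 0x00]) + bytes(58)
print(parse_frame(frame))  # -> (11, b'\x13\xa6\x00\x00')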

I've decided that the custom message framing and its usage of HID are significant enough to warrant being documented by themselves, so I did that for now, although I have not managed to complete the reverse engineering of the protocol.

The remainder of the protocol kept baffling me. Some of the commands appear to include a checksum, and are ignored if they are not sent correctly. Others actually seem to append to an error buffer that you can somehow access (but probably more by mistake than design), and in at least one case I managed to "crash" the device, which asked me to turn it off and on again. I have thus decided to stop trying to send random messages to it for a while.

I have not been pouring time into this as much as I was considering doing before, what with coming down with a bad flu, being oncall, and having visitors in town, so I have only been looking at traces from time to time, particularly recording all of them as I downloaded more data out of it. What still confuses me is that the commands sent from the software are not constant across different calls, but I couldn't really make much heads or tails of it.

Then yesterday I caught a break — I really wanted to figure out at least whether it was encoding or compressing the data, so I started looking for a sequence of numbers, by transcribing the device's logbook into hexadecimal and looking for them in the traces.

This is not as easy as it might sound, because I have a British device — in the UK, Ireland and Australia the measure of blood sugar is given in mmol/l rather than the much more common mg/dl. There is a stable conversion between the two units (you multiply the former by 18 to get the latter), but this conversion usually happens on display. All the devices I have used up to now have been storing and sending over the wire values in mg/dl and only converting when the data is shown, usually by providing some value within the protocol to specify that the device is set to use a given unit of measure. Because of this conversion issue, and the fact that I only had access to the values in mmol/l, I usually had two different options for each of the readings, as I wasn't sure how the rounding happened.

The break happened when I was going through the software's interface, trying to get the latest report data to at least match the reading timing difference, so that I could look for what might appear like a timestamp in the transcript. Instead, I found the "Export" function. The exported file is a comma-separated values file, which includes all readings, including those by the sensor, rather than just the spot-checks I could see from the device interface and in the export report. Not only that, but it includes a "reading ID", which was interesting because it started from a value a bit over 32000, and is not always sequential. This was lucky.

I imported the CSV to Google Sheets, then added columns next to the ID and glucose readings. The latter were multiplied by 18 to get the value in mg/dl (yes, the export feature still uses mmol/l; I think it might be some certification requirement), and then I converted the whole lot to hexadecimal (hint: Google Sheets and LibreOffice have a DEC2HEX function that does that for you.) Now I had something interesting to search for: the IDs.
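Scripted, the same spreadsheet steps look roughly like this (a sketch with made-up readings; the little-endian encoding is confirmed further down):

# Turn exported mmol/l readings into candidate mg/dl values and the hex
# patterns to search for in the transcript. Two candidates per reading,
# since the rounding direction is unknown.
import struct

for mmol in (5.2, 7.8, 14.3):  # example values, not my actual CSV
    exact = mmol * 18
    for mgdl in sorted({int(exact), int(exact) + 1}):
        print("%.1f mmol/l -> %3d mg/dl -> LE16 %s" % (
            mmol, mgdl, struct.pack("<H", mgdl).hex()))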

Now, I have to point out that the output I have from USBlyzer is a CSV file that includes the hexdump of the USB packets that are being exchanged. I already started writing a set of utilities (too rough to be published though) to convert those into a set of binary files (easier to bgrep or binwalk them) or hexdump-like transcripts (easier to recognize strings.) I wrote both a general "full USB transcript" script as well as a "Verio-specific USB transcript" while I was working on my OneTouch meter, so I wrote one for the Abbott protocol, too.

Because of the way that works, it is not completely obvious from the text transcript whether a multi-byte value is present, as it might fall on a message boundary. One would think values wouldn't cross boundaries, since that would mean there are odd-sized records, but indeed that is the case for this device at least. Indeed it took me a few tries of IDs found in the CSV file to find one in the USB transcript.

And even after finding one, the question was to figure out the record format. What I have done in the past when doing binary format reverse engineering was to print on a piece of paper a dump of the binary I'm looking at, and start doodling on it trying to mark similar parts of the message. I don't have a printer in Dublin, so I decided to do a paperless version of the same, by taking a screenshot of a fragment of transcript and loading it into a drawing app on my tablet. It's not quite as easy, but it does make sharing results easier, and thanks to layers it's even easier to try and fail.

I made a mistake with the screenshot by not keeping the command this was a reply to in the picture — this will become more relevant later. Because of the size limit in the HID-based framing protocol Abbott uses, many commands reply with more than one message – although I have not understood yet how it signals a continuation – so in this case the three messages (separated by a white line) are in response to a single command (which by the way is neither the first nor the last in a long series.)

The first thing I wanted to identify in the response was all the reading IDs; the one I searched for is marked in black in the screenshot, and the others are marked in the same green tone. As you can see they are not (all) sequential; the values are written down as little-endian, by the way. The next step was to figure out the reading values, which are marked in pink in the image. While the image itself has no value higher than 255 (and thus none needing more than one byte to represent it), it "looked fair" to assume little endian. It was also easy to confirm: as noted in my review, I did have a flu while wearing the sensor, so by filtering for readings over 14 mmol/L I was able to find an example of a 16-bit reading.

The next thing I noted was the "constant" 0C 80, which might include some flags for the reading. I have not decoded it yet, but it's an easy way to find most of the other IDs anyway. Following from that, I needed to find an important value, as it could allow decoding many other record types just by being present: the timestamp of the reading. The good thing with timestamps is that they tend to stay similar for a relatively long time: the two highest bytes are the same for most of a day, and the highest of those is usually the same for a long while. Unfortunately looking for the hex representation of the Unix timestamp at the time yielded nothing, but that was not so surprising, given how I found usage of a "newer" epoch in the Verio device I looked at earlier.

Now, since I have the exported data, I know not only the reading ID but also the timestamp it reports, which does not include seconds. I also know that since the readings are (usually) taken at 15-minute intervals, if they are using seconds since a given epoch the numbers should increment by 900 between readings. Knowing this and doing some mental pattern matching, it became easy to see where the timestamps have been hiding; they are marked in blue in the image above. I'll get back to the epoch.

At this point, I still have not figured out where the record starts and ends — from the image it might appear that it starts with the record ID, but remember I took this piece of transcript mid-stream. What I can tell is that the length of the record is not only not a multiple of eight (the bytes in hexdump are grouped by eight) but it is odd, which, by itself, is fairly odd (pun intended.) This can be told by noticing how the colouring crosses the mid-row spacing, for 0c 80, for reading values and timestamps alike.

Even more interesting, not only can the records cross the message boundaries (see record 0x8fe0, for which the 0x004b value is in the next message over), but so can the fields. Indeed you can see on the third message that the timestamp ends abruptly at the end of the message. This wouldn't be much of an issue if it wasn't that it provides us with one more piece of information to decode the stream.

As I said earlier, timestamps change progressively, and in particular reading records shouldn't usually be more than 900 seconds apart, which means only the lower two bytes change that often. Since the device uses little-endian to encode the numbers, the higher bytes are at the end of the encoded sequence, which means 4B B5 DE needs to terminate with 05, just like CC B8 DE 05 before it. But the next time we encounter 05 is in position nine of the following message. What gives?

The first two bytes of the message, if you checked the protocol description linked earlier, describe the message type (0B) and the number of significant bytes following (out of the USB packet); in this case 3E means the whole rest of the packet is significant. Following that there are six bytes (highlighted turquoise in the image), and here is where things get a bit more confusing.

You can actually see how discarding those six bytes from each message now gives us a stream of records that are at least fixed length (except the last one that is truncated, which means the commands are requesting continuous sequences, rather than blocks of records.) Those six bytes now become interesting, together with the inbound command.

The command that was sent just before receiving this response was 0D 04 A5 13 00 00. Once again the first two bytes are only partially relevant (message type 0D, followed by four significant bytes.) But A5 13 is interesting, since the first message of the reply starts with 13 A6, and the next three message increment the second byte each. Indeed, the software follows these with 0D 04 A9 13 00 00, which matches the 13 A9 at the start of the last response message.

What the other four bytes mean is still quite the mystery. My assumption right now is that they are some form of checksum. The reason is to be found in a different set of messages:

>>>> 00000000: 0D 04 5F 13 00 00                                 .._...

<<<< 00000000: 0B 3E 10 5F 34 EC 5A 6D  00 00 00 00 00 00 00 00  .>._4.Zm........
<<<< 00000010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000020: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000030: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................

<<<< 00000000: 0B 3E 10 60 34 EC 5A 6D  00 00 00 00 00 00 00 00  .>.`4.Zm........
<<<< 00000010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000020: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000030: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................

<<<< 00000000: 0B 3E 10 61 34 EC 5A 6D  00 00 00 00 00 00 00 00  .>.a4.Zm........
<<<< 00000010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000020: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000030: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................

<<<< 00000000: 0B 3E 10 62 34 EC 5A 6D  00 00 00 00 00 00 00 00  .>.b4.Zm........
<<<< 00000010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000020: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000030: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................

<<<< 00000000: 0B 3E 10 63 E8 B6 84 09  00 00 00 00 00 00 00 00  .>.c............
<<<< 00000010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000020: 00 00 00 00 9A 39 65 70  99 51 09 30 4D 30 30 30  .....9ep.Q.0M000
<<<< 00000030: 30 37 52 4B 35 34 00 00  01 00 02 A0 9F DE 05 FC  07RK54..........
In this set of replies, there are two significant differences compared to the ones with records earlier. The first is that while the command lists 5F 13, the replies start with 10 5F, so that not only does 13 become 10, but 5F is not incremented until the next message, making it unlikely for the two bytes to form a single 16-bit word. The second is that there are at least four messages with identical payload (fifty-six bytes of value zero). And despite the fourth byte of the message changing progressively, the following four bytes stay the same. This makes me think it's a checksum we're talking about, although I can't for the life of me figure out which one at first sight. It's not CRC32, CRC32c nor Adler32.
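Ruling out the common candidates is quick to script; a minimal sketch of the kind of check I mean (assuming, and it is only an assumption, that the checksum covers the fifty-six payload bytes; CRC32c would need a third-party module):

# Compare common checksums of the all-zero payload against the four
# mystery bytes (34 EC 5A 6D, read as little-endian).
import zlib

payload = bytes(56)
mystery = int.from_bytes(bytes.fromhex("34EC5A6D"), "little")

for name, fn in (("crc32", zlib.crc32), ("adler32", zlib.adler32)):
    value = fn(payload) & 0xFFFFFFFF
    print("%s = %#010x vs %#010x: %s" % (
        name, value, mystery, "match" if value == mystery else "no match"))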

By the way, the data in the last message relates to the list of sensors the device has seen — 0M00007RK54 is the serial number, and A0 9F DE 05 is the timestamp of it initializing.

Going back to the epoch, which is essentially the last thing I can talk about for now. The numbers above clearly sit in a different range than UNIX timestamps, which would start with 56 rather than 05. So I used the same method I used for the Verio: I took a fixed, known point in time (UNIX timestamp 1455392700), read the device's timestamp at that moment, and subtracted the two. The answer, the device's epoch, came out as 2012-12-31T00:17:00+00:00. It would make perfect sense, if it wasn't 23 hours and 43 minutes away from a new year…
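The arithmetic, spelled out (the counter value here is a hypothetical one, chosen only to be consistent with the timestamps shown in this post):

# Recover the device epoch: subtract the device's seconds counter, read at
# a known wall-clock moment, from that moment's UNIX timestamp.
from datetime import datetime, timezone

known_unix = 1455392700       # 2016-02-13T19:45:00+00:00, the reference point
device_counter = 0x05DEAE40   # hypothetical counter read at that moment

epoch = known_unix - device_counter
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2012-12-31T00:17:00+00:00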

I guess that is all for now; I'm still trying to figure out how the data is passed around. I'm afraid that what I'm seeing from the software looks like it's sending whole "preferences" structures that change multiple things at once, which makes it significantly more complicated to understand. It's also not so easy to tell how the device and software decide on the measure unit, as I don't have access to logs of a mg/dl device.
Top

Qt 5.6 is here and it runs on FreeBSD/Pi

Postby gonzo via FreeBSD developer's notebook | All Things FreeBSD »

Qt 5.6 is finally out, so I thought I'd give it a spin on my Raspberry Pi. Previously I used cross-compilation, but this time I decided to spend some time trying to create ports for the Qt modules. There is Qt 5.5.1 in ports; it's nicely split into sub-ports, and most of the gory details are hidden in the bsd.qt.mk library. The problem with it is that it's highly coupled with Xorg stuff, and I didn't find an easy way to squeeze non-desktop use cases into the current infrastructure. So I just created a new custom devel/qt56 port.

In order to get it done as fast as possible I took several shortcuts: everything is installed to the /usr/local/qt5 directory, and there is no meta-port yet for submodules to share common parts. Also, besides the base, the only module I packaged (I was particularly interested in it) was QtMultimedia. It should be fairly easy to fix the last two items though.

The Qt layer for a UI provider is called QPA: Qt Platform Abstraction. There are quite a few of them, but I am familiar with and interested in two: plain framebuffer and eglfs. The stock plain framebuffer QPA plugin is called linuxfb, and naturally we can't use it on FreeBSD. Luckily there are a lot of similarities between the Linux fb and the syscons (or vt) fb (you can't get very innovative with a framebuffer), so writing QPA support for scfb was easy. It can be used with any generic SoC with framebuffer support: AM335x (BeagleBone Black), i.MX6 (Wandboard), NVIDIA Tegra, Pi.

eglfs is a full-screen OpenGL mode. The OpenGL implementation depends on the SoC vendor; e.g. the Pi's library is provided by the raspberrypi-userland port. If we had OpenGL support for the AM335x, it would have been provided by the PowerVR userland libraries. As far as I understand, eglfs can't be made universal: each implementation has its own quirks, so you have to specify the target OpenGL vendor at build time. So far we support eglfs only for the Raspberry Pi, thanks to Broadcom open-sourcing its kernel drivers and userland libraries.

Then there is the question of input. When you start your application from the console you have several options for getting user input: keyboard, mouse, touchscreen. The common way to do this on Linux is through evdev, which is a universal way to access these kinds of devices. There is an effort to get this functionality onto FreeBSD, so when it's there the stock input plugins can be used as-is. Until then there are two non-standard plugins by yours truly: bsdkeyboard and bsdsysmouse.

scfb, bsdsysmouse, and bsdkeyboard are included in the experimental ports as patches.

All in all, the experience of getting Qt 5.6 running on FreeBSD/Pi was smooth. And to make this post more entertaining, here are two demos running on my Pi2 with the official touchscreen.

Qt demo player with visualizer (src):

OpenGL demo with Qt logo in it (src):

Top

CGM review: Abbott FreeStyle Libre

Postby Flameeyes via Flameeyes's Weblog »

While working on reverse engineering glucometers I decided to give a try to a CGM solution. As far as I know the only solution available in Ireland is Dexcom. A friend of mine already has this, and I've seen it, but it felt a bit too bulky for my taste.

Instead, I found out on Twitter about a new solution from Abbott – the same company I have written about plenty of times before while reverse engineering their devices – called FreeStyle Libre. When I first got to their website, though, I found out that the description videos themselves were "not available in my country". When I went back to check on it, the whole website was not available at all, and instead redirected me to a general website telling me the device is not available in my country.

I won't spend time here describing how to work around the geolocking; I'm sure you can figure it out or find the instructions on other websites. Once you work around accessing the website, ordering is also limited to UK addresses for both billing and shipping — these are also fairly easy to work around, particularly when you live in Éire. I can't blame Abbott for not selling the device in this country (they are not allowed by law), but it would be nice if they didn't hide the whole website!

Anyway, I have in some ways (which I won't specify) worked around the website geolocking and ordered one of the starter kits back in February. The kit comes with two sensors (each valid for 14 days) and with a reader device which doubles as a normal glucometer.

The sensors come with an applicator that primes them and attaches them to the arm. The applicator is not too difficult to use even with your weak hand, which is a nice feature given that you should be alternating the arm you attach it to. Once you put the sensor on you do feel quite a bit of discomfort, but you "get used to it" relatively quickly. I would suggest avoiding the outer side of the arm though, particularly if you're clumsy like me and tend to run into walls fairly often — I ended up discarding my second sensor after only a week because I knocked it out in a fall.

One of the concerns that I've been warned about by a friend, regarding CGM sensors, is that while the sensor has no problem reading for the specified amount of time, the adhesive does not last that long. This referred to another make and model (the Dexcom G4) and does not match my experience with the Libre. It might be because the Libre has a wider adhesive surface area, or because it's smaller and lighter, but I haven't had much problem with it trying to come away before the 14 days, even with showers and sweat. I would still suggest keeping a roll of bandage tape at hand though, just in case.

The reader device, as I said earlier, doubles as a normal glucometer, as it accepts the usual FreeStyle testing strips, both for blood and for ketone reading, although it does not come with sample strips. I did manage to try blood readings by using one of the sample strips I had from the FreeStyle Optium but I guess I should procure a few more just for the sake of it.

The design of the reading device is inspired by the FreeStyle InsuLinx, with a standard micro-USB port for both data access and charging – I was afraid the time would come when they would put non-replaceable batteries in glucometers! – and a strip-port to be used only for testing (I tried plugging in the serial port cable, but the reader errors out.) It comes with a colourful capacitive touch-screen, from which you can change most (but not all) settings. A couple of things, such as the patient name, can only be changed from the software (available for Windows and OSX.)

The sensor takes a measurement every 15 minutes to draw the historical graph, which is stored for up to eight hours. Plus it takes a separate, instantaneous reading when you scan it. I really wish they put a little more memory in it to keep, say, 12 hours on the device, though. Eight hours is okay during the day if you're home, but it does mean you shouldn't forget the device home, when you go to the office (unless you work part-time), and that you might lose some of the data from just after going to sleep if you manage to sleep more than eight hours at a time — lucky you, by the way! I can't seem to be able to sleep more than six hours.

The scan is at least partially performed over NFC, as my phone can "see" the sensor as a tag, although it doesn't know what to do with it, of course. I'm not sure if the whole data dumping is done over NFC, but it would make it theoretically possible to get rid of the reader in favour of just using a smartphone then… but that's a topic for a different time.

The obvious problem with CGM solutions is their accuracy. Since they don't actually measure blood samples (they do use a needle, but it's a very small one) but rather interstitial fluid, it is often an open question on whether their readings can be trusted, and the suggestion is to keep measuring normal blood sugar once or twice a day. Which is part of the reason why the reader also doubles as a normal glucometer.

Your mileage here may vary widely, among other things because it varies for me as well! Indeed, I've had days in which the Libre sensor and the Accu-Chek Mobile matched perfectly, while in the last couple of days (as I'm writing this) the Libre gave a slightly lower reading, between 1 and 2 mmol/l (yes, this is the measure used in the UK, Ireland and Australia) below the Accu-Chek blood sample reading. In the opinion of my doctor, hearing from his colleagues across the water (remember, this device is not available in my country), it is quite accurate and trustworthy. I'll run with his opinion — particularly because, while trying to cross-check the different meters I have here, they all seem to have a quite wider error range than you'd expect, even when working on a blood sample from the same finger (from different fingers it gets complicated even for the same reader.)

I'm not thrilled by the idea of using rechargeable batteries for a glucometer. If I need to take a measurement and my Accu-Chek Mobile doesn't turn on, it takes me just a moment to pick up another pair of AAA from my supply and put them in — not so on a USB-charged device. But on the other hand, it does make for a relatively small size, given the amount of extra components the device need, as you can see from the picture. The battery also lasts more than a couple of weeks without charging, and it does charge with the same microUSB standard as most of my other devices (excluding the iPod Touch and the Nexus 5X), so it's not too cumbersome while traveling.

A note on the picture: while the Accu-Chek Mobile has a much smaller and monochromatic non-touch screen, lots of its bulk is taken by the cassette with the tests (as it does not use strips at all), and it includes the lancing devices on its side, making it still quite reasonably sized. See also my review of it.

While the sensors store up to 8 hours of readings, the reader then stores up to three months of that data, including additional notes you can add to it, like insulin dosage (similar to the InsuLinx), meals and so on. The way it shows you that data is interesting too: any spot-check (when you scan the sensor yourself) is stored in a logbook, together with the blood sample tests — the logbook entries also include a quick evaluation of whether the blood sugar is rising, falling (and greatly so) or staying constant. The automatic sensor readings are kept visible only as a "daily graph" (from midnight to midnight), or through "daily patterns" that graph (for 7, 14, 30 and 90 days) the median glucose within a band of high and low percentiles (the device does not tell you which ones they are; more on that later.)

I find the ability to see this information, particularly after recording notes on meals, for instance, very useful. It is making me change my approach to many things; in particular, I have stopped eating bagels in the morning (but I still eat them in the evenings) since I get hypers if I do — according to my doctor, it's not unheard of for insulin resistance to be stronger as you wake up.

I also discovered that other health issues you'd expect to be unrelated do make a mess of diabetes treatment (and thus why both my doctors insisted I take the flu shot every year). A "simple" flu (well, one that got me to 38.7°C, but that's unrelated, no?) caused my blood sugar to rise quite high (over 20 mmol/l), even though I was not eating as much as usual either. I could have noticed with the usual blood checks, but that's not something you look forward to when you're already feverish and unwell. Next time I should increase my insulin in such cases, but it also made me wary of colds and in general gave me a good data point that taking care of even the small things is important.

A more sour point is, not unusually, the software. Now, to be fair, as my doctor pointed out, all diabetes management software sucks, because none of it can have a full picture of things, so the data is not as useful, particularly not to patients. I am of course interested in the software because of my work on reverse engineering, so I installed the Windows software right away (for once, they also provide OS X software, but since the only Mac I have access to nowadays is a work device, I have not tried it.)

Unlike my previous terrible experience with Abbott software, this time I managed to download it without a glitch, except for the already-noted geolocking of their website. It also installed fine on Windows 10 and works out of the box, among other things because it requires no kernel drivers whatsoever (I'll talk about that later when I get to the reverse engineering bits.)

Another difference between this software and anything else I've seen up to now is that it's completely stateless. It does not download the data off the glucometer to store it locally; it downloads it and runs the reports. But if you don't have the reader with you, there's no data. And since the reader stores up to 90 days' worth of data before discarding it, there are no reports that cross that horizon!

On the other hand, the software does seem to do a good job of generating a wealth of information. Not only does it generate all the daily graphs, and document the data more properly regarding which percentiles the "patterns" refer to (it also includes two more percentile levels, just to give a better idea of the actual pattern), but it also provides figures such as the "expected A1C", which is quite interesting.

At first I mistakenly thought that the report functionality only worked by printing, similarly to the OneTouch software, but it turns out you can "Save" the report as a PDF, and that actually works quite well. It also allows you to "Export" the data, which provides you with a comma-separated values file containing most of the raw data coming from the device (again, this will become useful in a separate post.)

That does not mean the software is free of bugs, though. First of all, it does not close: if you click the window's X button, it is merely minimized. There is an "Exit" option in the "File" menu, but more often than not it seems to cause the software to get stuck and either be terminated by Windows or require termination through the Task Manager. It also keeps "prodding" for the device, which ends up using 25% of one core just for the sake of being open.

The funniest bit, though, was when I tried to "print" the report to PDF — which, as I said above, is not really needed since you can export it from the software just fine, but I hadn't noticed. In this situation, after the print dialog is shown, the software decides to hide any other window of its process behind its main window. I can only assume this is meant to hide some Windows printing dialog that they don't want to distract the user with, but it also hides the "Save As" dialog that pops up. You can type the name blindly, assuming you can confirm you're in the right window through Alt-Tab, but you'll also have to deal with the software using its installation directory as its working directory. Luckily Windows 10 is smart enough to warn about not having write access to the directory, and if you "OK" the invisible dialog, it'll save the file in your user's home directory instead.

As for final words: I sure hope the device becomes available in the Republic of Ireland, and I would really like for it to be covered by the HSE's Long Term Illness scheme, as the sensors are not cheap at £58 every two weeks (unless you're as clumsy as me and have to replace one sooner.) I originally bought the starter kit to try this out and evaluate it, but I think it's making enough of a good impact that (since I can afford it) I'll keep buying the supplies through my current method until it is actually available here (or until they make it too miserable.) I am not going to stop using the Accu-Chek Mobile for blood testing, though. While it would be nice to use a single device, the cassette system used by the Roche meter is just too handy, particularly when out in a restaurant.

I'll provide more information on my effort of reverse engineering the protocol in a follow-up post, so stay tuned if you're interested in it.
Top

Linux on the Arty, Part 0: Establishing a build environment

Postby brix via blog.brixandersen.dk »

I have long wanted to explore the endless possibilities of soft core CPUs like the Xilinx Microblaze. A few months back, I stumbled across the announcement of the Arty by Digilent/Avnet/Xilinx – an affordable evaluation/development board clearly built for makers and perfect for getting my feet wet with a few Microblaze-based projects.



I initially wanted to get GNU/Linux up and running on a Microblaze CPU on the Arty to establish a familiar software environment for further tinkering. This proved to be quite a task in itself – a task which required reading through lots of product guides, manuals, blog posts and howtos online. Lots of documentation has been written on the Microblaze subject, but none of it was quite to the point or simple for me to follow.

In the following series of blog posts I will describe my way of accomplishing this goal: Linux on the Arty, with all on-board peripherals up and running and exposed to userland.

In this first post, post 0, we will establish a build environment for both the hardware and the software. Later posts will range from establishing a bare-minimal, Linux-ready hardware design to getting each on-board peripheral of the Arty included in the hardware design, supported by the Linux kernel and exposed to the GNU/Linux userland.

My main development environment is a MacBook running OS X. I regularly use VMware Fusion for running various other (typically either FreeBSD or GNU/Linux) virtual machines to fulfill my needs for other development platforms. For developing the hardware and building the software for the Arty, I have installed CentOS 7 in a VM. I then use ssh to establish a local connection from OS X to the CentOS VM and forward the X display to OS X, thus allowing me to see e.g. the Xilinx Vivado windows on my OS X desktop through XQuartz.

You could of course use any PC capable of running Xilinx Vivado for the hardware part, but note that you will need a GNU/Linux host for building the Linux kernel and GNU/Linux userland. We will be focusing on Vivado 2015.4 and buildroot 2016.02 for this blog series.

If you plan on using a virtual CentOS 7 machine for the build environment, as I do, I recommend going with at least a 50 GB HDD. The Xilinx Vivado installation itself will take up at least 20 GB, and the hardware/software projects will take up another 5 GB. You will need a few packages and customizations on top of the Minimal 64 Bit installation image. Also, make sure you create an administrator user account during the installation.



First off, almost all CentOS command-line tools will complain about missing locale settings. Fix this by logging in and setting the locale to e.g. U.S. English with UTF-8 encoding by issuing the following command:
sudo localectl set-locale LANG=en_US.utf-8
Remember to log out and back in for the locale settings to take effect.
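If you want to double-check the active configuration after logging back in, localectl can print it:
localectl status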

Next up, install and start the Open VM Tools (skip this if not running under VMware):
sudo yum -y install open-vm-tools
sudo systemctl enable vmtoolsd
sudo systemctl start vmtoolsd

Before we install the Xilinx Vivado suite, we will need to install a few dependencies and tools:
sudo yum -y update
sudo yum -y groupinstall 'Development Tools'
sudo yum -y groupinstall 'X Window System'
sudo yum -y groupinstall Fonts
sudo yum -y install glibc.i686 libstdc++.i686 fontconfig.i686 libXext.i686 libXrender.i686 glib2.i686 libpng12.i686 libSM.i686
sudo yum -y install evince wget ncurses-devel bc screen
sudo reboot

The *.i686 packages are needed by the Xilinx Vivado Documentation Navigator, which for some reason is still shipped as a 32 bit binary. Evince is a PDF viewer, which we will be using from within the Documentation Navigator.

Download the Vivado HLx Web Install Client for Linux and install it using the following commands:
sudo mkdir /opt/Xilinx
sudo chown $USER:$USER /opt/Xilinx
chmod +x Xilinx_Vivado_SDK_2015.4_1118_2_Lin64.bin
./Xilinx_Vivado_SDK_2015.4_1118_2_Lin64.bin

If you get an error about no X11 DISPLAY variable being set, make sure you’re logged on to the CentOS host using ssh -Y .... The -Y option will enable trusted X11 display forwarding.
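In other words, something along these lines (the user and host names here are placeholders for your own setup):
ssh -Y builduser@centos7-vm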

Make sure to enable installation of the Software Development Kit, as this will be needed later on. You probably already acquired a license for Xilinx Vivado using the voucher included with the Arty, so you can skip acquiring a license during the installation.



The Xilinx Platform Cable USB Driver and the Digilent JTAG Driver are a separate install requiring root access. Install them using the following commands:
cd /opt/Xilinx/Vivado/2015.4/data/xicom/cable_drivers/lin64/install_script/install_drivers
sudo ./install_drivers


We also need to make sure that our user account has access to the USB JTAG and serial port devices exposed by the Arty. These will be owned by the dialout group, so append our user to that group (note the -a flag, which appends to the supplementary group list instead of replacing it):
sudo usermod -aG dialout $USER
To be able to launch the various Xilinx Vivado tools, we also need to set up our shell accordingly.
echo 'source /opt/Xilinx/Vivado/2015.4/settings64.sh' >> ~/.bashrc
source /opt/Xilinx/Vivado/2015.4/settings64.sh
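With the settings sourced, a quick sanity check should confirm the tools are on the PATH; something like the following should print the installed version banner:
vivado -version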

Finally, install the license for Xilinx Vivado obtained by using the voucher included with the Arty:
mkdir -p ~/.Xilinx
mv Xilinx.lic ~/.Xilinx/Xilinx.lic

Lastly, launch docnav, open the Settings Dialog and change the PDF Viewer Path to evince.



This concludes the first part of this blog series. The next part will describe how to do the basic, Linux-ready hardware design.
Top

Scrapped

Postby via www.my-universe.com Blog Feed »

Today, with Atuan*, the first of my two remaining servers goes offline for good. All services have already been shut down; right now the last of the data is still migrating to my NAS at home. After that, the disks will be scrubbed thoroughly one more time (these are, after all, still “normal” hard disks that can be overwritten sector by sector), and the machine will then be shut down – ready to be returned to Hetzner. With this, the step I had already announced in January is now complete.


With that, it is also time to do something I have not yet done publicly – namely, to thank all those who have helped me with my server endeavours over the past 15 years. So: thank you, Hetzner, for more than 10 years of smooth operation and consistently helpful and fast support. Thank you, Gorden, for the nudge you gave me back then to get into servers in the first place. A particularly big thank-you goes to the RootForum.org community, above all to my fellow admins and moderators Joe User, ddm3ve and Roger Wilco, but of course also to the “old guard” (Fritz, Captain Crunch, Floschi, … wherever you may be these days) – I learned a great deal from all of you.

This list is of course anything but complete – but before I fill further paragraphs with enumerations, I want to stress one thing in particular: without open source software, none of this would ever have been possible. Somehow, over time, you come to take it for granted that everything is simply there for the taking: operating systems, database servers, web and mail servers, version control, graphical interfaces, office software, games, countless web applications, … And behind all of it stand the talent and skill, the creativity and dedication of countless designers, programmers and coordinators, many of whom work on these projects voluntarily and unpaid.

I have experienced first-hand, in one place or another, what it means to contribute to an open source project. My great respect therefore goes to all those who render service to open source, whether as advocates, developers or in any other way. Over many years, something great has been created here, something we all benefit from and that deserves to be preserved, appreciated and supported by all of us.

Right, that's enough for now – the next posts will be less sentimental again, I promise…

* My servers are – or rather were – indeed all named after islands from the Earthsea novels by Ursula K. Le Guin
Top

FreeBSD 10.3-RC3 Available

Postby Webmaster Team via FreeBSD News Flash »

The third Release Candidate build for the FreeBSD 10.3 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.
Top

Spammers Abusing Trust in US .Gov Domains

Postby BrianKrebs via Krebs on Security »

Spammers are abusing ill-configured U.S. dot-gov domains and link shorteners to promote spammy sites that are hidden behind short links ending in “usa.gov”.

Spam purveyors are taking advantage of so-called “open redirects” on several U.S. state Web sites to hide the true destination to which users will be taken if they click the link.  Open redirects are potentially dangerous because they let spammers abuse the reputation of the site hosting the redirect to get users to visit malicious or spammy sites without realizing it.

For example, South Dakota has an open redirect:

http://dss.sd.gov/scripts/programredirect.asp?url=

…which spammers are abusing to insert the name of their site at the end of the script. Here’s a link that uses this redirect to route you through dss.sd.gov and then on to krebsonsecurity.com. But this same redirect could just as easily be altered to divert anyone clicking the link to a booby-trapped Web site that tries to foist malware.

The federal government’s stamp of approval comes into the picture when spammers take those open redirect links and use bit.ly to shorten them. Bit.ly’s service automatically shortens any US dot-gov or dot-mil (military) site with a “1.usa.gov” shortlink. That allows me to convert the redirect link to krebsonsecurity.com from the ungainly….

http://dss.sd.gov/scripts/programredirect.asp?url=http://krebsonsecurity.com

…into the far less ugly and perhaps even official-looking:

http://1.usa.gov/1pwtneQ.
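You don’t even need a browser to watch the redirect happen. Assuming the South Dakota script issues a standard HTTP redirect, a HEAD request is enough to show the Location header pointing at the final destination:

curl -sI 'http://dss.sd.gov/scripts/programredirect.asp?url=http://krebsonsecurity.com' | grep -i '^location'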

Helpfully, Uncle Sam makes available a list of all the 1.usa.gov links being clicked at this page. Keep an eye on that and you’re bound to see spammy links going by, as in this screen shot. One of the more recent examples I saw was this link — http:// 1.usa[dot]gov/1P8HfQJ# (please don’t visit this unless you know what you’re doing) — which was advertised via Skype instant message spam, and takes clickers to a fake TMZ story allegedly about “Gwen Stefani Sharing Blake Shelton’s Secret to Rapid Weight Loss.”

Spammers are using open redirects on state sites and bit.ly to make spammy domains like this one look like .gov links.
Unfortunately, a minute or so of research online shows that this exact issue was highlighted almost four years ago by researchers at Symantec. In October 2012, Symantec said it found that about 15 percent of all 1.usa.gov URLs were used to promote spammy messages. I’d be curious to know the current ratio, but I doubt it has changed much.

A story at the time about the Symantec research in Sophos’s Naked Security blog noted that the curator of usa.gov — the U.S. General Services Administration’s Office of Citizen Services and Innovative Technology — was working with bit.ly to filter out malicious or spammy links, pointing to an interstitial warning that bit.ly pops up when it detects a suspicious link is being shortened.

KrebsOnSecurity requested comment from both bit.ly and the GSA, and will update this post in the event that they respond.

I wanted to get a sense of how well bit.ly’s system would block any .gov redirects that sent users to known malicious Web sites. So I created .gov shortlinks using the South Dakota redirect, bit.ly, and the first page of URLs listed at malwaredomainlist.com — a site that tracks malicious links being used in active attacks.

The result? Bit.ly’s system allowed clicks on all of the shortened malicious links that didn’t end in “.exe,” which was most of them. It’s nice that bit.ly at least tries to filter out malicious links, but perhaps the better solution is for U.S. state and federal government sites to get rid of open redirects altogether.

The warning that bit.ly sometimes pops up if you try to shorten known, malicious links.
I generally don’t trust shortened links, and have long relied on the Unshorten.it extension for Google Chrome, which lets users unshorten any link by right clicking on it and selecting “unshorten this link”. Unshorten.it also pulls reputation data on each URL from Web of Trust (WOT).

Fun fact: Adding a “+” to the end of any link shortened with bit.ly will take you to a page on bit.ly that displays the actual link that was shortened.
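For example, appending the “+” to the shortlink created earlier:

http://1.usa.gov/1pwtneQ+

…takes you to bit.ly’s info page for that link, showing its true destination, instead of redirecting you.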

How do you respond to shortened links? Sound off in the comments below.

Update, Mar. 22, 6:20 p.m. ET: A GSA spokesperson said that when GSA learns that an open redirector is being used for 1.usa.gov links, “we reach out to the owner and ask that it be shut down. We are also working with Bitly to remove 1.usa.gov links with open redirectors that aren’t shut down at our request. GSA will continue to take the necessary steps to keep .gov domains secure, and we encourage anyone who discovers an open redirector in the .gov space to notify the affected agency so that it can be disabled.”
Top

Anime/Manga Figure Sale

Postby via dietanu »

After a few years, the passion for collecting anime/manga figures has somewhat run out of steam. Over the years we have gathered quite a lot of very beautiful and also rare figures here, such as the 20th Anniversary Edition of “Ah! My Goddess”. Photographing all the figures will take quite some time, so do drop by my sale every now and then.
Top

Introducing a New Website and Logo for the Foundation

Postby Anne Dickison via FreeBSD Foundation »


The FreeBSD Foundation is pleased to announce the debut of our new logo and website, signaling the ongoing evolution of the Foundation’s identity and its ability to better serve the FreeBSD Project. Our new logo was designed not only to reflect the established and professional nature of our organization, but also to represent the link between the Project and the Foundation, and our commitment to community, collaboration, and the advancement of FreeBSD.

We did not make this decision lightly. We are proud of the Beastie in the Business Suit and the history he embodies. That is why you’ll still see him make an appearance on occasion. However, as the Foundation’s reach and objectives continue to expand, we must ensure our identity reflects who we are today and where we are going in the future. From spotlighting companies that support and use FreeBSD, to making it easier to learn how to get involved, spread the word about, and work within the Project, the new site has been designed to better showcase not only how we support the Project, but also the impact FreeBSD has on the world. Today’s launch marks the end of Phase I of our Website Development Project. Please stay tuned as we continue to add enhancements to the site.

We are also in the process of updating all of our collateral, marketing literature, stationery, etc. with the new logo. If you have used the FreeBSD Foundation logo in any of your marketing materials, please assist us in updating them. New Logo Guidelines will be available soon. In the meantime, if you are in the process of producing new literature and would like to use the new Foundation logo, please contact our marketing department to get the new artwork.

Please note: we've moved the blog to the new site. See it here.



Top

Thieves Phish Moneytree Employee Tax Data

Postby BrianKrebs via Krebs on Security »

Payday lending firm Moneytree is the latest company to alert current and former employees that their tax data — including Social Security numbers, salary and address information — was accidentally handed over directly to scam artists.

Seattle-based Moneytree sent an email to employees on March 4 stating that “one of our team members fell victim to a phishing scam and revealed payroll information to an external source.”

“Moneytree was apparently targeted by a scam in which the scammer impersonated me and asked for an emailed copy of certain information about the Company’s payroll including Team Member names, home addresses, social security numbers, birthdates and W2 information,” Moneytree co-founder Dennis Bassford wrote to employees.

The message continued:

“Unfortunately, this request was not recognized as a scam, and the information about current and former Team Members who worked in the US at Moneytree in 2015 or were hired in early 2016 was disclosed. The good news is that our servers and security systems were not breached, and our millions of customer records were not affected. The bad news is that our Team Members’ information has been compromised.”

A woman who answered a Moneytree phone number listed in the email confirmed the veracity of the co-founder’s message to employees, but would not say how many employees were notified. According to the company’s profile on Yellowpages.com, Moneytree Inc. maintains a staff of more than 1,200 employees. The company offers check cashing, payday loan, money order, wire transfer, mortgage, lending, prepaid gift cards, and copying and fax services.

Moneytree joins a growing list of companies disclosing to employees that they were duped by W2 phishing scams, which this author first warned about in mid-February.  Earlier this month, data storage giant Seagate acknowledged that a similar phishing scam had compromised the tax and personal data on thousands of current and past employees.

I’m working on a separate piece that examines the breadth of damage done this year by W2 phishing schemes. Just based on the number of emails I’ve been forwarded from readers who say they were similarly notified by current or former employers, I’d estimate there are hundreds — if not thousands — of companies that fell for these phishing scams and exposed their employees to all manner of identity theft.

W2 information is highly prized by fraudsters involved in tax refund fraud, a multi-billion dollar problem in which thieves claim a large refund in the victim’s name, and ask for the funds to be electronically deposited into an account the crooks control.

Tax refund fraud victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS. To learn more about tax refund scams and how best to avoid becoming the next victim, check out this story.

For better or worse, most companies that have notified employees about a W2 phish this year are offering employees the predictable free credit monitoring, which is of course useless to prevent tax fraud and many other types of identity theft. But in a refreshing departure from that tired playbook, Moneytree says it will be giving employees an extra $50 in their next paycheck to cover the initial cost of placing a credit freeze (for more information on the difference between credit monitoring and a freeze, and why a freeze might be a better idea, check out Credit Monitoring vs. Freeze and How I Learned to Stop Worrying and Embrace the Security Freeze).

“When something like this happens, the right thing to do is to disclose what you know as soon as possible, take care of the people affected, and learn from what went wrong,” Bassford’s email concluded. “To make good on that last point, we will be ramping up our information security efforts company-wide, because we never want to have to write an email like this to you again.”
Top

Call For Artists: New Icon Theme

Postby Josh Smith via Official PC-BSD Blog »

Source: Call For Artists: New Icon Theme

Since the founding of the Lumina desktop project, one of the most common questions I get asked is: “I am not a programmer, but how can I help out?” Well today I would like to open up a new method of contributing for those of you that are graphically-inclined: the creation of a brand new icon theme for the Lumina desktop!

This new icon theme will adhere to the FreeDesktop specifications[1] for registering an icon theme, and the good news is that I have already handled all the administrative setup/framework for you so that all you need to do to contribute is basically just send in icon files!

Here are the highlights for the new theme:

  1. Included within the main Lumina source repository
  2. All icons will be licensed under the Creative Commons Attribution 4.0 International Public License. This is comparable to the 3-clause BSD license, but specifically for static images/files (whereas the BSD license is for source code).
  3. This will be a high-contrast, high-resolution, fully-scalable (SVG) theme.
  4. The general concept is a white foreground, with a black outline/shadow around the icon, and colorized emblems/overlays for distinguishing between similar icons (“folder” vs “folder-network” for instance). We are going for a more professional/simple look to the icons since tiny image details generally do not scale as well across the range we are looking at.
The details on how to contribute an icon to the theme are listed on the repository page as well, but here is the summary:

  1. Icons which are still needed are listed in the TODO.txt files within each directory.
  2. Submit the icon file via a git pull request.
  3. Add an entry for your icon/submission to the AUTHORS file (to ensure each contributor gets proper credit for their work).
  4. Remove the icon from the appropriate TODO.txt file/list.
If you are not familiar with git or how to send git pull requests, feel free to email me the icon file(s) you want to contribute and I can add them to the source tree for you (and update the AUTHORS/TODO files as necessary). Just be sure to include your full name/email so we can give you the proper credit for your work (if you care about that).
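For those who do want to use git, the process is the standard fork-and-pull-request routine. A rough sketch follows; the repository URL and the icon's destination path are illustrative, so check the repository page for the real ones:

# Fork the Lumina repository on GitHub first, then:
git clone https://github.com/<your-username>/lumina.git
cd lumina
git checkout -b new-icons
# Copy your SVG into the icon theme directory, update the AUTHORS file
# and the relevant TODO.txt, then commit and push:
git add .
git commit -m "icon theme: add folder-network icon"
git push origin new-icons
# Finally, open a pull request against the upstream repository.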

 

As an added bonus since we don’t have any actual icons yet (just the general guidelines), the first contributor to send in some icons will get to help decide the overall look-n-feel of the icon theme!

 

Have Fun!

 

Ken Moore

 

[1] FreeDesktop Specifications

  • Theme Registration: https://specifications.freedesktop.org/icon-theme-spec/icon-theme-spec-latest.html
  • Icon Names: https://specifications.freedesktop.org/icon-naming-spec/latest/ar01s04.html
Top

FOSDEM and the unrealistic IPv6-only network

Postby Flameeyes via Flameeyes's Weblog »

Most of you know FOSDEM already, for those who don't, it's the largest Free and Open Source Software focused conference in Europe (if not the world.) If you haven't been to it I definitely suggest it, particularly because it's a free admission conference and it always has something interesting to discuss.

Even though there is no ticket and no badge, the conference does have free WiFi Internet access, which is how the number of attendees is usually estimated. In the past few years, their network has also been pushing the envelope on IPv6 support, first providing a dual-stack network when IPv6 was fairly rare, and in the recent (three?) years providing an IPv6-only network as the default.

I can see the reason to do this, in the sense that a lot of Free Software developers are physically at the conference, which means they can see their tools suffer in an IPv6 environment and fix them. But at the same time, this has generated lots of complaints about Android not working in this setup. While part of that noise was useful, I got the impression this year that the complaints are repeated only for the sake of complaining.

Full disclosure, of course: I do happen to work for the company behind Android. On the other hand, I don't work on anything related at all. So this post is as usual my own personal opinion.

The complaints about Android started off quite healthy: devices couldn't actually connect to an IPv6 dual-stack network, and then they couldn't connect to an IPv6-only network. Both are valid complaints to begin with, though there is a bit more to it. This year in particular the complaints were not so healthy, because current versions of Android (6.0) actually do support IPv6-only networks, though most of the Android devices out there are not running this version, either because their hardware is too old or because the manufacturer has not released a new build yet.

What does tick me off, though, has really nothing to do with Android, but rather with the idea people have that the current IPv6-only setup used by FOSDEM is a realistic approach to IPv6 networking — it really is not. It is a nice setup to test things out and to stress the need for proper IPv6 support in tools, but it's very unlikely to be used in production by anybody as is.

The technique used (at least this year) by FOSDEM is NAT64. To oversimplify how this works, it is designed to modify the DNS replies when resolving hostnames so that they always provide an IPv6 address, even when the hostname only has A records (IPv4 addresses). The synthesized IPv6 addresses map back to IPv4, and the edge router then "translates" between the two connections.
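As a sketch of what this looks like from a client, assuming the resolver uses the well-known NAT64 prefix 64:ff9b::/96 and the IPv4-only host resolves to, say, 93.184.216.34:

# The host only has an A record:
dig +short A example.com
93.184.216.34
# Behind the NAT64 resolver, an AAAA answer is synthesized by embedding
# the IPv4 address (hex 5d.b8.d8.22) into the prefix:
dig +short AAAA example.com
64:ff9b::5db8:d822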

Unlike classic NAT, this technique requires user-space components, as the kernel uses separate stacks for IPv4 and IPv6 which do not allow direct message passing between the two. This makes it complicated and significantly slower (you have to copy the data from kernel to userspace and back all the time), unless you use one of the hardware routers that are designed to deal with this (I know both Juniper and Cisco have them.)

NAT64 is a very useful testbed, if your target is figuring out what in your stack is not ready for IPv6. It is not, though, a realistic approach for consumer networks. If your client application does not have IPv6 support, it'll just fail to connect. If for whatever reason you rely on IPv4 literals, they won't work. Even worse, if the code allows a connection to be established over IPv6, but relies on IPv4 semantics for things like logging, or (worse) access control, then you now have bugs, crashes or worse, vulnerabilities.

And while fuzzing and stress-testing are great for development environments, they are not good for final users. In the same way, -Werror is a great tool to fix your code, but it uselessly disrupts your users.

In a similar fashion, while IPv6-only datacenters are not that uncommon – Facebook (the company) talked about them two years ago already – they serve a decidedly different purpose from a customer network. You don't want, after all, your database cluster connecting to random external services that you don't control — and if you do control the services, you just need to make sure they are all available over IPv6. In such a system, having a single stack to worry about simplifies, rather than complicates, things. I do something similar for the server I divide into containers: some of them, being backends only, get no IPv4 at all, not even behind NAT. If they ever have to fetch something to build from the Internet at large, they go through a proxy instead.

I'm not saying that FOSDEM setting up such a network is not useful. It hugely is, as it clearly highlights the problems of applications not supporting IPv6 properly. And for Free Software developers, setting up a network like this themselves might indeed be too expensive in time or money, so the conference is a chance to try things out and iron out bugs. But at the same time it does not reflect a realistic environment, which is why piling more and more rants onto the tracking Android bug (which I'm not even going to link here) is not going to be useful — the limitation was known for a while and has been addressed in newer versions, but it would be useless to try backporting the fix.

For what it's worth, what is more likely to happen as IPv6 adoption progresses is that providers will move towards solutions like DS-Lite (nothing to do with Nintendo), which couples native IPv6 with carrier-grade NAT for IPv4. While this has limitations, depending on the size of the ISP's address pools, it is still easier to set up than NAT64, and it is essentially transparent for customers even if their systems don't support IPv6 at all. My ISP here in Ireland (Virgin Media) already has such a setup.
Top

From Stolen Wallet to ID Theft, Wrongful Arrest

Postby BrianKrebs via Krebs on Security »

It’s remarkable how quickly a stolen purse or wallet can morph into full-blown identity theft, and possibly even result in the victim’s wrongful arrest. All of the above was visited recently on a fellow infosec professional whose admitted lapse in physical security led to a mistaken early morning arrest in front of his kids.

The guy police say stole Miller’s wallet and got him wrongfully arrested was himself apprehended earlier this month.
On the morning of Feb. 20, Lance Miller was arrested in front of his two children by local sheriffs in Golden, Colo. Miller, a managing partner at cybersecurity recruitment firm Curity, had discovered his wallet was missing three days prior to his arrest, reported it to the local police and canceled his credit cards. In the meantime someone had drained his checking account of approximately $5,000, and maxed out his credit cards for almost another $5,000.

“I was standing there in front of my kids saying, ‘You guys are crazy. Do I look like a burglar?'” Miller recalled. “The cop goes, ‘Well, I don’t know what a burglar looks like,’ and they put me in cuffs and in the car.”

Miller said it wasn’t until the 30-minute, handcuffed drive to the police station that the local police and the local sheriff’s office began comparing notes, discovering in the process that they’d grabbed the wrong guy and removing the cuffs. Miller soon learned the thief who’d stolen his wallet had impersonated him during multiple traffic stops. A car the impostor was driving also was spotted speeding away from the scene of a burglary, but Miller said the police didn’t give chase in that case because it wasn’t a violent crime.

“He started doing all kinds of stuff, and when he got pulled over he gave them my ID,” Miller said. “The first time he got pulled over and gave them my ID he was riding shotgun in a car with stolen plates that hadn’t yet been reported stolen. They let the guy go that night but then came and arrested me the next morning.”

Miller’s arrest came less than 24 hours after the local Arvada Police Department called to alert him that someone had tried to use his credit card at a nearby bank. Not long after that, a fuel station owner called the cops after getting suspicious about a customer and writing down his license plates.

“When we got to the [police] station, the police chief met me in the parking lot and apologized, then brought me 3 cups of coffee,” Miller said.

According to Miller, the police eventually arrested the guy suspected of stealing his wallet and other crimes that were previously pinned on Miller. The authorities now believe the man responsible is one John Tyler Waldorf, a 37-year-old suspect who had at least 16 warrants for his arrest pending in surrounding counties in connection with burglary and other alleged offenses.

Miller said investigators told him that Waldorf was suspected of associating with a white supremacist crime ring involved in identity theft, drug dealing and serial burglary.

“When these guys are not in prison, they’re expected to earn for the gang,” Miller said. “And apparently one of the best earning methods for these guys is ID theft.”

Louisville, Colo. police issued a bulletin explaining that Waldorf and his associates were known to have entered unlocked vehicles in the driveways of local residences and grabbed the garage door openers to the homes. “The suspect(s) entered the homes through the garage doors and stole items of value,” the police explained. “Both homes were occupied during the burglaries.”

Miller allows that the thieves in his case didn’t need to open the garage or enter his home: He’d absent-mindedly left his wallet in the car overnight while the vehicle was parked in the open garage. He now vows to tighten up his personal security habits.

“We live in a pretty nice area, and I got lulled into the idea that the garage was safe,” he said. “But in the end, it’s all on me. I’m an infosec guy, and if I can’t practice better operational security like that at my house, I should get the hell out of this industry.”

If your wallet or purse is lost or stolen, it’s a good idea to do most — if not all — of these things:

-File a police report as soon as possible to establish a record of the loss. If possible, get a physical copy of the police report at some point. You may be able to file a report and obtain a copy of it online, or you may have to go down to the local police station and pay a small administrative fee to get a copy. Either way, this report can be very useful in getting you a freeze on your credit file or an extended fraud alert at no cost if you decide to do that down the road.

-Contact your bank and report any checks or credit/debit cards lost or stolen. Most banks issue credit and debit cards with “zero liability” provisions, meaning you’re not on the hook for fraudulent charges or withdrawals — provided you report them promptly. The Truth In Lending Act limits consumer liability to $50.00 once a credit card is reported lost or stolen, although many card issuers will waive that amount as well. Fraudulent debit card charges are a different story: The Electronic Fund Transfer Act limits liability for unauthorized charges to $50.00, if you notify your financial institution within two business days of discovering that your debit card was “lost or stolen.” If you wait longer, but notify your bank within 60 days of the date your statement is mailed, you may be responsible for up to $500.00. Wait longer than that and you could lose all the money stolen from your account.

-Contact one of the major credit reporting bureaus (Equifax, Experian, Innovis and Trans Union) and at the very least ask to put a fraud alert on your file, to prevent identity theft in the future. By law, the one you alert has to share the alert with the other three. The initial fraud alert stays on for 90 days. If you have that police report handy, you can instead request an extended fraud alert, which stays in effect for seven years.

-Fraud alerts are okay, but consider placing a security freeze on your credit file with the major bureaus. For more on the importance of a security freeze, check out How I Learned to Stop Worrying and Embrace the Security Freeze.

-Order a free copy of your credit report from one of the major bureaus. By law, you are entitled to a free report from each of the bureaus once a year. The only truly free place to get your report is via the site mandated by the federal government: annualcreditreport.com.
Top

Turbo Family Hauler

Postby via www.my-universe.com Blog Feed »

If I could pick an aircraft for private use, the Pilatus PC-12 would be right up there near the top. The aircraft offers plenty of room for family and friends and is certified for single-pilot operation. It is also fast and, at over 1,500 NM, achieves an impressive range for a single-engine aircraft. It is equally undemanding when it comes to infrastructure – a paved runway is not strictly required; the Pilatus also copes well with unpaved airstrips.


The inclined X-Plane pilot has no fewer than three commercial models to choose from: Shade Tree Micro Aviation (STMA) and Michael Sgier each offer their package for just under US-$27; Carenado charges a somewhat heftier US-$35. My personal favourite, and the one shown in the screenshot, is the STMA variant. It comes with classic PC-12 avionics (EFIS and two Garmin GNS530 units). Its looks are somewhat plainer than those of the other two models (only 2k textures are used), but in return the model remains reasonably framerate-friendly on my five-year-old hardware and offers a fairly accurate reproduction of the systems and the flight behaviour (even though an update for the current X-Plane version certainly wouldn't hurt).

Those who value a glass cockpit and 4k textures will find a variant with highly customized avionics (e.g. a textured moving map) – and, of course, higher-quality textures – from Michael Sgier. His PC-12 NG actually complements the classic STMA model quite wonderfully, except that it hits my framerate “somewhat” hard. The same goes for the Carenado model, which for me wasn't really worth the purchase. Sure, visually it is the most appealing rendition of a PC-12 currently available for X-Plane. Incidentally, it is not an NG model either, but a classic PC-12 – albeit with taxi and flight behaviour that is not exactly true to life.

For the STMA model, I put together an Air-Child livery this weekend. For the first time I used vector graphics of my own making, which I had recreated for lack of a high-quality Air-Child paint kit. Kerry, STMA's aircraft painter, was happy enough with the livery that I was allowed to make it available for download in the hangar.
Top

Trying out imapsync

Postby Sven Vermeulen via Simplicity is a form of art... »

Recently, I had to migrate mail boxes for a couple of users from one mail provider to another. Both mail providers used IMAP, so I looked into IMAP related synchronization methods. I quickly found the imapsync application, also supported through Gentoo's repository.

What I required

The migration required that all mails, except for the spam and trash e-mails, were migrated to another mail server. The migrated mails had to retain their status flags (so unread mails had to remain unread, while read mails had to remain read), and the migration had to be done in two waves: one while the primary mail server was still in use (in which most of the mails were synchronized) and then, after switching the mail servers (which was done through DNS changes), a re-sync to fetch the final ones.

I did not get access to the credentials of all mail boxes, but together with the main administrator we enabled a sort-of shadow authentication system (a temporary OpenLDAP installation) in which the same users were enabled, but with passwords that would be used during the synchronization. The mail servers were then configured to have a secondary interface available which used this OpenLDAP rather than the primary authentication being used by the end users.

Using imapsync

Using imapsync is simple. It is a command-line application, and everything configurable is done through command arguments. The basic ones are of course the source and target definitions, as well as the authentication information for both sides.

~$ imapsync \
  --host1 src-host --user1 src-user --password1 src-pw --authmech1 LOGIN --ssl1 \
  --host2 dst-host --user2 dst-user --password2 dst-pw --authmech2 LOGIN --ssl2
The use of --ssl1 and --ssl2 does not enable an older or newer version of the SSL/TLS protocol. It just enables the use of SSL/TLS for the source host (--ssl1) and the destination host (--ssl2).

This would just start synchronizing messages, but we need to include the necessary directives to skip the trash and spam mailboxes, for instance. For this, the --exclude parameter can be used:

~$ imapsync ... --exclude "Trash|Spam|Drafts"
It is also possible to transform some mailbox names. For instance, if the source host uses Sent as the mailbox for sent mail, while the target has Sent Items, then the following would enable migrating mails between the right folders:

~$ imapsync ... --folder "Sent" --regextrans2 's/Sent/Sent Items/'
Conclusions and interesting resources

Using the application was a breeze. I do recommend creating a test account on both sides so that you can easily see the available folders and the source and target naming conventions, as well as test whether rerunning the application works flawlessly.
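imapsync itself helps with this kind of testing: if I recall the flags correctly, a run such as the following lists and maps the folders on both sides without transferring any mail:

~$ imapsync ... --dry --justfolders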

In my case, for instance, I had to add --skipsize so that the application does not use the mail sizes to check whether a mail has already been transferred, as the target mail server reported different sizes for the same mails. Luckily, this is documented in various online tutorials about imapsync.
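Putting the pieces together, the final invocation for this migration would look roughly as follows (host names and credentials are placeholders as before; note that --regextrans2 is used on its own here, since --folder would restrict the run to that single folder):

~$ imapsync \
  --host1 src-host --user1 src-user --password1 src-pw --authmech1 LOGIN --ssl1 \
  --host2 dst-host --user2 dst-user --password2 dst-pw --authmech2 LOGIN --ssl2 \
  --exclude "Trash|Spam|Drafts" \
  --regextrans2 's/^Sent$/Sent Items/' \
  --skipsize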

The migration took a while, but went without major issues. Within a few hours, the mailboxes of all users were correctly migrated.
Top

FreeBSD 10.3-RC2 Available

Postby Webmaster Team via FreeBSD News Flash »

The second Release Candidate build for the FreeBSD 10.3 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.
Top

How to break sysctl

Postby Patrick via Patrick's playground »

A long time ago sysctl used one config file: /etc/sysctl.conf

There was a simple way to (re)load the values from that file: sysctl -p
There are aliases, -f and --file, that do the same.

Then things were Improved and Enhanced. Now sysctl -p will either fail (a bug in procps-3.3.9) or not apply the config (3.3.10+). Which is possibly at times a bit fatal on production machines that rely on nonstandard settings to handle the workload.

How did things break? Of course, new config paths must be added. Like any Modern Application, sysctl will read snippets from a directory, and not just one directory but six:
/run/sysctl.d/*.conf
/etc/sysctl.d/*.conf
/usr/local/lib/sysctl.d/*.conf
/usr/lib/sysctl.d/*.conf
/lib/sysctl.d/*.conf
/etc/sysctl.conf

So let's think ...

/run ? Why would you put config there. Srsly wat. Use sysctl -w if you want to temporarily set a value.

/etc/sysctl.d ? Looks reasonable.

/usr/local/lib ? WAT. That's not a path where config lives. /usr/lib ? Why do you put things that are not libs in libdir? And since you need administrative access to modify that path, it is like /etc/sysctl.d, only strictly worse.

/lib ? oh, because ... uhm ... I can't figure this one out

and finally, the classic /etc/sysctl.conf.

So four of the six new paths are poop, and we could completely remove this misfeature by adding an 'include /etc/sysctl.d/*.conf' to /etc/sysctl.conf. Then we wouldn't need sysctl --system, sysctl -p would still work, and there'd be less code written to implement this misfortune and less code written to mitigate the failures caused by it.
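For reference, here's the resulting interface in practice, as documented in sysctl(8):

sysctl -p                         # supposed to load /etc/sysctl.conf (see above for how well that goes)
sysctl -p /etc/sysctl.d/foo.conf  # load one specific file
sysctl --system                   # load snippets from all six locations listed above
sysctl -w kernel.panic=10         # set a single value at runtime, no config file involved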

Having to fight such changes and the breakage they cause is frustrating; by changing less we could achieve more.
What amuses me most about this is that this change actually broke the new feature (--system) in its first iteration, after breaking the old behaviour. An amazing amount of churn that doesn't fix any problem we've had. No, I'm not grumpy!
Top

Hackers Target Anti-DDoS Firm Staminus

Postby BrianKrebs via Krebs on Security »

Staminus Communications Inc., a California-based Internet hosting provider that specializes in protecting customers from massive “distributed denial of service” (DDoS) attacks aimed at knocking sites offline, has itself apparently been massively hacked. Staminus’s entire network was down for more than 20 hours until Thursday evening, leaving customers to vent their rage on the company’s Facebook and Twitter pages. In the midst of the outage, someone posted online download links for what appear to be Staminus’s customer credentials, support tickets, credit card numbers and other sensitive data.

The e-zine posted online Thursday following an outage at Staminus Communications.
Newport Beach, Calif.-based Staminus first acknowledged an issue on its social media pages because the company’s Web site was unavailable much of Thursday.

“Around 5am PST today, a rare event cascaded across multiple routers in a system wide event, making our backbone unavailable,” Staminus wrote to its customers. “Our technicians quickly began working to identify the problem. We understand and share your frustration. We currently have all hands on deck working to restore service but have no ETA for full recovery.”

Staminus now says its global services are back online, and that ancillary services are being brought back online. However, the company’s Web site still displays a black page with a short message directing customers to Staminus’s social media pages.

Meanwhile, a huge trove of data appeared online Thursday, in a classic “hacker e-zine” format entitled, “Fuck ’em all.” The page includes links to download databases reportedly stolen from Staminus and from Intreppid, another Staminus project that targets customers looking for protection against large DDoS attacks.

Frustrated Staminus customers vent on the company’s Facebook page.
The authors of this particular e-zine indicated that they seized control over most or all of Staminus’s Internet routers and reset the devices to their factory settings. They also accuse Staminus of “using one root password for all the boxes,” and of storing customer credit card data in plain text, which is a violation of payment card industry standards.

Staminus so far has not offered any additional details about what may have caused the outage, nor has it acknowledged any kind of intrusion. Several Twitter accounts associated with people who claim to be Staminus customers frustrated by the outage say they have confirmed seeing their own account credentials in the trove of data dumped online.

I’ve sent multiple requests for comment to Staminus, which is no doubt busy with more pressing matters at the moment. I’ll update this post in the event I hear back from them.

It is not unusual for attackers to target Anti-DDoS providers. After all, they typically host many customers whose content or message might be offensive — even hateful — to many. For example, among the company’s many other clients is kkk-dot-com, the official home page of the Ku Klux Klan (KKK) white supremacist group. In addition, Staminus appears to host a large number of Internet relay chat (IRC) networks, text-based communities that are often the staging grounds for large-scale DDoS attack services.
Top

Using Bacula’s btape speed test on FreeBSD 10.2 and an LTO-4 tape drive

Postby Dan Langille via Dan Langille's Other Diary »

Let’s try the speed test which comes with Bacula’s btape utility. This is Bacula 7.4.0 on FreeBSD 10.2, with a Dell TL4000 LTO-4 tape library.
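The test itself is driven from btape’s interactive prompt. A sketch of the invocation, assuming the drive sits at /dev/nsa0 and the storage daemon config is in the usual FreeBSD ports location:

btape -c /usr/local/etc/bacula/bacula-sd.conf /dev/nsa0
*speed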
Top

eero: A Mesh WiFi Router Built for Security

Postby BrianKrebs via Krebs on Security »

User-friendly and secure. Hardly anyone would pick either word to describe the vast majority of wireless routers in use today. So naturally I was intrigued a year ago when I had the chance to pre-order a eero, a new WiFi system billed as easy-to-use, designed with security in mind, and able to dramatically extend the range of a wireless network without compromising speed. Here’s a brief review of the eero system I received and installed a week ago.

Three eero devices designed to create an extended range “mesh” wireless network without compromising speed.
The standard eero WiFi system comes with three eero devices, each about the width of a square coaster and roughly an inch thick. Every individual eero unit has two built-in WiFi radios that are designed to hand off traffic with the other two units.

This two-radio aspect is important, as most consumer devices that are made and marketed as WiFi range extenders or “repeaters” contain only one radio, and thus end up halving the speed of the repeated WiFi signal.

The makers of eero recommend one device for every 1,000 square feet, and advise placing one device no further than 40 feet from another. Each eero has two ethernet ports in the back, but only one of the eeros needs to be connected directly into your modem with an ethernet cable. That means that a 3-piece eero set has a total of five available ethernet ports, or at least one open ethernet port at each eero location.

Most wireless routers require owners to configure the device by using a hard-wired computer or laptop, opening a browser and navigating to a numeric Internet address to enter some default credentials. From there, you’re on your own. In contrast, the eero system relies on a simple mobile app for setup. The app asks for your name, email address and mobile number, and then sends a text with a one-time passcode.

After you verify the code on your mobile device, the app prompts you to pick a network name (SSID) and password. The device defaults to WPA-2 PSK (AES) for encryption — the strongest security currently available.

Once you’ve assigned each eero a unique location — and as long as the three devices can talk to each other — the network should be set up. The entire process — from placing and plugging in the eeros to setting up the network —  took me about five minutes, but most of that was just me walking from one room or floor to the next to adjust the location of the devices.

MY TAKE?

The eero system did indeed noticeably extend the range of my home WiFi network. My most recent router — an ASUS RT-N66U, a.k.a. the “Dark Knight” — cost about $150 when I bought it, but it never gave me coverage throughout our three-level home despite multiple experiments with physical placement of the device. In contrast, the eero system extended the range of my network throughout our home and to about a dozen meters outside the house in every direction.

In fact, I’m now writing this column from a folding chair in the front lawn, something I couldn’t do with any router I’ve previously owned. Then again, a wireless network that extends well beyond one’s home may actually be a security minus for those who’d rather not have their network broadcast beyond their front porch or apartment walls.

This is a good time to note one of eero’s best features: the ability to add guests to your wireless network quickly and easily. According to an interview with eero’s co-founder (more on that below), the firewall rules that govern any devices added to a eero guest network prevent individual hosts from directly communicating with any other device on the local network. With a few taps on the app, guests are invited to join via a text or email message, and the invite contains the name (SSID) of the guest wireless network and a plaintext password.

There are a few aspects about the eero system that may give pause to some readers — particularly the tinfoil hat types and those who crave more granular control over their wireless router. Control freaks may have a hard time letting go with the eero — in part because it demands a great deal of trust — but also because frankly it’s a little too easy to set up.

There aren’t a lot of configuration options available in the app. eero says it is working on rolling out new features and options, and that it has so far been focused on shipping all of the pre-ordered units so that they work as advertised. This is a WiFi system that I can see selling very nicely to relatively well-off consumers who don’t know or don’t want to know how to configure a wireless router.

To be clear, the eero is not a cheap WiFi system. I paid $299 for my three eeros, and that was at the pre-order rate. The same package now retails for $499. In contrast, your average, 4-port consumer WiFi router sells for about $45-$50 at the local electronics store and will do the job okay for most Internet users.

Another behavior central to the eero that is bound to be a sticking point with some is that it regularly checks for and downloads new security and bug-fix updates from the cloud. This may be a huge change for consumers accustomed to configuring all of this themselves, but overall I think it’s a positive development if done right.

For starters, the vast majority of consumer grade routers ship with poorly written and insecure software, and often with unnecessary networking features turned on. It’s a fair bet that if you were to buy a regular WiFi router off the shelf at the local electronics store, the software or “firmware” that powers the device is going to be out-of-date and in need of updating straight out of the box.

Worse still, most of these devices will remain in this default insecure state for the remainder of their Internet-connected lifespan (which is probably at least several years), because few consumer routers make it easy for consumers to update, or even alert them that the devices need updates. There are so many out-of-date and insecure routers exposed to the Internet now that it’s not uncommon to find criminal botnets made up entirely of hacked home routers.

True, geeks who feel at home tinkering with open-source router firmware can void their warranty by installing something like DD-WRT or Tomato on a normal wireless router, and I have recommended as much for those with the confidence to do so. But I also am careful to note that anyone who updates their router with third-party firmware but fumbles a crucial step can quickly be left with an oversized and otherwise useless paperweight.

INTERVIEW WITH EERO CEO/CO-FOUNDER

I wanted to know more about the security design that went into the eero, and fortunately was in eero’s hometown of San Francisco last week for the RSA Security conference. So I dropped by the company’s headquarters and got to sit down briefly with the company’s CEO and co-founder, Nick Weaver.

“The way we designed the eero system in general is that it’s a distributed system that runs in your home, and the system we use to deliver that experience is also a distributed system,” Weaver explained. “In your home, the system distributes the load of clients, compute, updates, and diagnostics across the units in your home. We also have a cloud with a distributed architecture, and that’s what allows the eero networks to update and configure themselves automatically.”

BK: Where does that distributed cloud architecture live?

NW: Today it’s Amazon, and everything is hosted on AWS. There’s a high frequency [of check-ins] but not a lot of traffic.  There is very little information exchanged. Only diagnostic info that explains how the links between the eeros are doing. You can think of it as a network engineer in the sky who helps ensure that your network is configured properly.

BK: How does the eero know the updates being pushed to it are from eero and not from someone else?

NW: Every update is signed by a key, and that key is locked away at [the bank].

BK: Does eero collect any other information about its users?

NW: There is no information collected ever about where you go on the Internet or how your connection is being used. That is not information that’s interesting to us. The other co-founder studied networking and security and contributed quite a bit to the Tor Project. We’ve got all the right tensions in our founding team. Security is really important. And it’s been totally underestimated by all the existing players. As we’re discovering more and more security vulnerabilities, we have to be able to move quickly and deploy quickly. Because if you don’t, you’re doing a disservice to your customers.

Would you buy an eero system? Sound off in the comments below.

Update 12:58 p.m. ET: Corrected the price of the 3-eero unit.
Top

New Video Tutorial on the Pipelight Plugin and Netflix in PC-BSD

Postby Josh Smith via Official PC-BSD Blog »

We recently made a video for you guys looking at the pipelight port, which just received a patch in the ports tree.

I’ll just leave this here…

Watching Netflix in PC-BSD
Top

Adobe, Microsoft Push Critical Updates

Postby BrianKrebs via Krebs on Security »

Microsoft today pushed out 13 security updates to fix at least 39 separate vulnerabilities in its various Windows operating systems and software. Five of the updates fix flaws that allow hackers or malware to break into vulnerable systems without any help from the user, save for perhaps visiting a hacked Web site.

The bulk of the security holes plugged in this month’s Patch Tuesday reside in either Internet Explorer or in Microsoft’s flagship browser — Edge. As security firm Shavlik notes, Microsoft’s claim that Edge is more secure than IE seems to be holding up, albeit not by much. So far this year, Shavlik found, Edge has required 19 fixes versus IE’s 27.

Windows users who get online with a non-Microsoft browser still need to get their patches on: Ten of the updates affect Windows — including three other critical updates from Microsoft. As always, Qualys has a readable post about the rest of the Microsoft patches. If you experience any issues with the Windows patches, please share your experience in the comments below.

As it is known to do on Patch Tuesday, Adobe issued security updates for its Reader and Acrobat software. Unusually, there is no accompanying update for Adobe’s Flash Player plugin, which typically sees a fix on Patch Tuesday. However, an Adobe spokesperson told KrebsOnSecurity that the company will be issuing a Flash Player update on Thursday morning.
Top

IRS Suspends Insecure ‘Get IP PIN’ Feature

Postby BrianKrebs via Krebs on Security »

Citing ongoing security concerns, the Internal Revenue Service (IRS) has suspended a service offered via its Web site that allowed taxpayers to retrieve so-called IP Protection PINs (IP PINs), codes that the IRS has mailed to some 2.7 million taxpayers to help prevent those individuals from becoming victims of tax refund fraud two years in a row. The move comes just days after KrebsOnSecurity first exposed how ID thieves were abusing the service to revisit tax refund fraud on innocent taxpayers two years running.

Last week, this blog told the story of Becky Wittrock, a certified public accountant (CPA) from Sioux Falls, S.D., who received an IP PIN in 2014 after crooks tried to impersonate her to the IRS. Wittrock said she found out her IP PIN had been compromised by thieves this year after she tried to file her tax return on Feb. 25, 2016. Turns out, the crooks beat her to the punch by more than three weeks, filing a large refund request with the IRS on Feb. 2, 2016.

The problem, as Wittrock’s case made clear, is that the IRS allows IP PIN recipients to retrieve their PIN via the agency’s Web site, after supplying the answers to four easy-to-guess questions from consumer credit bureau Equifax. These so-called knowledge-based authentication (KBA) or “out-of-wallet” questions focus on things such as previous address, loan amounts and dates and can be successfully enumerated with random guessing. In many cases, the answers can be found by consulting free online services, such as Zillow and Facebook.

In a statement issued Monday evening, the IRS said that as part of its ongoing security review, the agency was temporarily suspending the Identity Protection PIN tool on IRS.gov.

“The IRS is conducting a further review of the application that allows taxpayers to retrieve their IP PINs online and is looking at further strengthening the security features on the tool,” the agency said.

According to the IRS, of the 2.7 million IP PINs sent to taxpayers by mail for the current filing season, about 5 percent of those – approximately 130,000 – used the online tool to try retrieving a lost or forgotten IP PIN. The agency said that through the end of February 2016, the IRS had confirmed and stopped 800 fraudulent returns using an IP PIN.

“For taxpayers retrieving a lost IP PIN, the IRS emphasizes it has put strengthened processes and filters in place for this tax season to review these tax returns,” the statement continued. “These strengthened review procedures – which are invisible to taxpayers – have helped detect potential identity theft and stopped refund fraud. Taxpayers who have been issued an IP PIN should continue to file their tax returns as they normally would. The online tool is primarily used by taxpayers who have lost their IP PINs and need to retrieve their numbers. Most taxpayers receive their IP PIN via mail and never use the online tool.”

Eight hundred taxpayers may not seem like a lot of folks impacted by this security weakness, but then again the IRS doesn’t release stats on fraud it may have missed. Also, the agency has a history of significantly revising the victim numbers upwards in incidents like these.

For example, the very same weakness caused the IRS last year to disable online access to its “Get Transcript” feature (the IRS disabled access to the Get Transcript tool in May 2015). The IRS originally said a little over 100,000 people were impacted by the Get Transcript weakness, a number it later revised to 340,000 and last month more than doubled again to more than 700,000 taxpayers.
Top

Traffik 2015

Postby johannes via Johannes Formanns Webseite »

I took a look at how the volume of transferred data has developed, and was surprised to find that remarkably little has changed compared to 2012.

In total, 11,263 GB of data were delivered, 6.5% of it over IPv6. That is a clear increase compared to 2012, but I still find it embarrassingly little. Unfortunately, most access ISPs do not seem to be covering themselves in glory when it comes to rolling out IPv6.

Top

Seagate Phish Exposes All Employee W-2’s

Postby BrianKrebs via Krebs on Security »

Email scam artists last week tricked an employee at data storage giant Seagate Technology into giving away W-2 tax documents on all current and past employees, KrebsOnSecurity has learned. W-2 forms contain employee Social Security numbers, salaries and other personal data, and are highly prized by thieves involved in filing phony tax refund requests with the Internal Revenue Service (IRS) and the states.

Seagate headquarters in Cupertino, Calif. Image: Wikipedia
According to Seagate, the scam struck on March 1, about a week after KrebsOnSecurity warned readers to be on the lookout for email phishing scams directed at finance and HR personnel that spoof a letter from the organization’s CEO requesting all employee W-2 forms.

KrebsOnSecurity first learned of this incident from a former Seagate employee who received a written notice from the company. Seagate spokesman Eric DeRitis confirmed that the notice was, unfortunately, all too real.

“On March 1, Seagate Technology learned that the 2015 W-2 tax form information for current and former U.S.-based employees was sent to an unauthorized third party in response to the phishing email scam,” DeRitis said. “The information was sent by an employee who believed the phishing email was a legitimate internal company request.”

DeRitis continued:

“When we learned about it, we immediately notified federal authorities who are now actively investigating it. We deeply regret this mistake and we offer our sincerest apologies to everyone affected. Seagate is aggressively analyzing where process changes are needed and we will implement those changes as quickly as we can.”

Asked via email how many former and current employees may have been impacted, DeRitis declined to be specific.

“We’re not giving that out publicly — only to federal law enforcement,” he said. “It’s accurate to say several thousand. But less than 10,000 by a good amount.”

Naturally, Seagate is offering affected employees at least two years’ membership to Experian’s ProtectMyID service, paid for by the company. Too bad having credit monitoring through Experian won’t protect employees from the real threat here — tax refund fraud.

As I noted in last month’s warning about W-2 phishing, fraudsters who perpetrate tax refund fraud prize W-2 information because it contains virtually all of the data one would need to fraudulently file someone’s taxes and request a large refund in their name. Indeed, scam artists involved in refund fraud stole W-2 information on more than 330,000 people last year directly from the Web site of the Internal Revenue Service (IRS). Scammers last year also massively phished online payroll management account credentials used by corporate HR professionals.

According to recent stats from the Federal Trade Commission, tax refund fraud was responsible for a nearly 50 percent increase in consumer identity theft complaints last year. The best way to avoid becoming a victim of tax refund fraud is to file your taxes before the fraudsters can. See Don’t Be A Victim of Tax Refund Fraud in ’16 for more tips on avoiding this ID theft headache.

Update, March 7, 12:36 p.m. ET: Several readers have forwarded news reports about other companies similarly victimized in W-2 phishing scams, including mobile communications firm Snapchat and GCI, an Alaskan ISP and telecom provider that handed thieves some 2,500 employee W-2’s.
Top

Adding SLOG to a zpool

Postby Dan Langille via Dan Langille's Other Diary »

I have recently added two 480 GB SSDs to a 10 x HDD raidz2 system. The SSDs will be mainly used for spooling to tape during backups, but I’m going to use a small part of them for a SLOG. Not all systems benefit from a SLOG (Separate intent LOG), but synchronous writes, such as [...]
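
For reference, attaching a log device to an existing pool is a one-liner; a sketch with illustrative pool and partition names (mirroring the SLOG is a common precaution):

zpool add tank log mirror gpt/slog0 gpt/slog1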
Top

knew

Postby Dan Langille via Dan Langille's Other Diary »

For future reference, this is the knew server. It runs a few jails, including Bacula regression testing services. File systems Partitions zpools /var/run/dmesg.boot
Top

Hasp HL Library

Postby via Nerdling Sapple »

Hasp HL Library

git clone https://git.zx2c4.com/hasplib
The Hasp HL is a copy protection dongle that ships horrible closed-source drivers.

This is a very simple OSS library based on libusb for accessing MemoHASP functions of the Hasp HL USB dongle. It currently can view the ID of a dongle, validate the password, read from memory locations, and write to memory locations.

This library allows use of the dongle without any drivers!

API

Include hasplib.h, and compile your application alongside hasplib.c and optionally hasplib-simple.c.

Main Functions
Get a list of all connected dongles:

size_t hasp_find_dongles(hasp_dongle ***dongles);
Login to that dongle using the password, and optionally view the memory size:

bool hasp_login(hasp_dongle *dongle, uint16_t password1, uint16_t password2, uint16_t *memory_size);
Instead of the first two steps, you can also retrieve the first connected dongle that fits your password:

hasp_dongle *hasp_find_login_first_dongle(uint16_t password1, uint16_t password2);
Read the ID of a dongle:

bool hasp_id(hasp_dongle *dongle, uint32_t *id);
Read from a memory location:

bool hasp_read(hasp_dongle *dongle, uint16_t location, uint16_t *value);
Write to a memory location:

bool hasp_write(hasp_dongle *dongle, uint16_t location, uint16_t value);
Free the list of dongles opened earlier:

void hasp_free_dongles(hasp_dongle **dongles);
Free a single dongle:

void hasp_free_dongle(hasp_dongle *dongle);
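
Putting these together, a minimal sketch of a caller (this is not from the project’s docs, and the 0x1234/0x5678 passwords are placeholders for your dongle’s real pair):

/* compile alongside hasplib.c, as described above */
#include <stdio.h>
#include "hasplib.h"

int main(void)
{
    hasp_dongle **dongles;
    size_t count = hasp_find_dongles(&dongles);
    if (count == 0) {
        fprintf(stderr, "no Hasp HL dongles found\n");
        return 1;
    }

    uint16_t memory_size;
    if (hasp_login(dongles[0], 0x1234, 0x5678, &memory_size)) {
        uint32_t id;
        if (hasp_id(dongles[0], &id))
            printf("dongle %u: %u words of memory\n", id, memory_size);

        uint16_t value;
        if (hasp_read(dongles[0], 0, &value))
            printf("memory[0] = %u\n", value);
    }

    hasp_free_dongles(dongles);
    return 0;
}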
Simple Functions
The simple API wraps the main API and provides access to a default dongle, which is the first connected dongle that responds to the given passwords. It handles dongle disconnects and reconnections.

Create a hasp_simple * object for a given password pair:

hasp_simple *hasp_simple_login(uint16_t password1, uint16_t password2);
Free this object:

void hasp_simple_free(hasp_simple *simple);
Read an ID, returning 0 if an error occurred:

uint32_t hasp_simple_id(hasp_simple *simple);
Read a memory location, returning 0 if an error occurred:

uint16_t hasp_simple_read(hasp_simple *simple, uint16_t location);
Write to a memory location, returning its success:

bool hasp_simple_write(hasp_simple *simple, uint16_t location, uint16_t value);
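
The same flow with the simple wrapper collapses to a few lines (again a sketch with placeholder passwords; whether hasp_simple_login can return NULL is not documented, so the check below is merely defensive):

#include <stdio.h>
#include "hasplib.h"

int main(void)
{
    hasp_simple *simple = hasp_simple_login(0x1234, 0x5678);
    if (simple == NULL)
        return 1;
    uint32_t id = hasp_simple_id(simple); /* 0 signals an error */
    if (id != 0)
        printf("dongle %u: memory[0] = %u\n", id, hasp_simple_read(simple, 0));
    hasp_simple_free(simple);
    return 0;
}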

Licensing

This is released under the GPLv3. See COPYING for more information. If you need a less restrictive license, please contact me.
Top

FreeBSD 10.3-RC1 Available

Postby Webmaster Team via FreeBSD News Flash »

The first Release Candidate build for the FreeBSD 10.3 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.
Top

VC4/RPi3 status update

Postby Eric Anholt via anholt's lj »

It's been a busy month.  I spent most of it working on the Raspberry Pi 3 support so I could have a working branch for upstream day 1.  That involved cleaning up the SDHOST driver for submission, cleaning up pinctrl DT, writing an I2C GPIO expander driver, debugging the I2C controller, fixing HDMI hotplug handling, debugging EMMC (not quite done!), scraping together some wireless firmware, and a bunch of work trying to get BT working on the UART.  I'm happy to say that on day 1 I published a branch that worked the same as a RPi2, and by the end of the day I had wireless working.  Some of the patches are now out for review, and I'll be working on cleaning up the rest in the near future.

For VC4, my big push recently has been to support some sort of panel.  Panels are really big with Raspberry Pi users, and it's the primary complaint I hear about the open driver.  The official 7" DSI touchscreen seems like the most promising device to support, since it doesn't hog all your GPIOs (letting me use my serial console) and it's quite popular.

Unfortunately, DSI isn't going well.  The DSI0 peripheral is responding to me, but while I can read from DSI1 it won't respond to any writes.  DSI1 is, unfortunately, the one that the RPi exposes on its DSI connector.  (Note: this debugging is with the panel up and running from the firmware's boot configuration).  Debug code is at drm-vc4-dsi-boot

So, since DSI1's not cooperating, I switched tasks.  I had also picked up a DPI panel using the Adafruit Kippah, and a little SPI-driven panel.  I hadn't started with DPI because hogging all the GPIOs makes kernel debugging a mostly black-box experience.  The upside is that DPI is crazy simple -- set the GPIO muxes to output from DPI, set one register in DPI, and use the same pixelvalve setup as before.  I was surprised when, 2 days in, I got display output.  Here it is, running HDMI and DPI at the same time:


[Photo: HDMI and DPI running at the same time.]

Expect patches soon on a mailing list near you.  Until then, it's at drm-vc4-dpi-boot
Top

Importing a zpool and renaming it

Postby Dan Langille via Dan Langille's Other Diary »

Last night, I moved a SAS card and had to renumber some devices. I also moved two SSDs into the same box (see photos). Those SSDs were the boot drives from the system from which I pulled the SAS drive. That box, tape01, contains the Bacula configuration file I need for this new box. I [...]
Top

Adding in a SAS card renumbered my devices

Postby Dan Langille via Dan Langille's Other Diary »

Last night, I added a SAS card into a box which is connected to a tape library (photos here). After starting up the box, Nagios reported that the tape device was no longer present. I first added such checks back in 2012 and updated them for more recent acquisitions. This means that the copy-to-tape jobs [...]
Top

Credit Unions Feeling Pinch in Wendy’s Breach

Postby BrianKrebs via Krebs on Security »

A number of credit unions say they have experienced an unusually high level of debit card fraud from the breach at nationwide fast food chain Wendy’s, and that the losses so far eclipse those that came in the wake of huge card breaches at Target and Home Depot.

As first noted on this blog in January, Wendy’s is investigating a pattern of unusual card activity at some stores. In a preliminary 2015 annual report, Wendy’s confirmed that malware designed to steal card data was found on some systems. The company says it doesn’t yet know the extent of the breach or how many customers may have been impacted.

According to B. Dan Berger, CEO at the National Association of Federal Credit Unions, many credit unions saw a huge increase in debit card fraud in the few weeks before the Wendy’s breach became public. He said much of that fraud activity was later tied to customers who’d patronized Wendy’s locations less than a month prior.

“This is what we’ve heard from three different credit union CEOs in Ohio now: It’s more concentrated and the amounts hitting compromised debit accounts are much higher than what they were hit with after Home Depot or Target,” Berger said. “It seems to have been [the work of] a sophisticated group, in terms of the timing and the accounts they targeted. They were targeting and draining debit accounts with lots of money in them.”

Berger shared an email sent by one credit union CEO who asked not to be named in this story:

“Please take this Wendy’s story very seriously. We have been getting killed lately with debit card fraud. We have already hit half of our normal yearly fraud so far this year, and it is not even the end of January yet. After reading this, we reviewed activity on some of our accounts which had fraud on them. The first six we checked had all been to Wendy’s in the last quarter of 2015.”

“All I am suggesting is that we are experiencing much high[er] losses lately than we ever did after the Target or Home Depot problems. I think we may end up with 5 to 10 times the loss on this breach, wherever it occurred. Accordingly, please put this story in the proper perspective.”

Wendy’s declined to comment for this story.

Even if thieves don’t know the PIN assigned to a given debit card, very often banks and credit unions will let customers call in and change their PIN using automated systems that ask the caller to verify the cardholder’s identity by keying in static identifiers, like Social Security numbers, dates of birth and the card’s expiration date.

Thieves can abuse these automated systems to reset the PIN on the victim’s debit card, and then use a counterfeit copy of the card to withdraw cash from the account at ATMs. As I reported in September 2014, this is exactly what happened in the wake of the Home Depot breach.

Berger said NAFCU’s members are still trying to figure out whether they should just reissue cards for any customers who ate at Wendy’s anytime recently. After all, the restaurant chain hasn’t yet said how long the breach lasted — or indeed if the breach is even fully contained yet.

This brings up a fascinating phenomenon that occurs with card fraud linked to breached retailers or restaurants that customers patronize frequently. I recently spoke with a bank security consultant who was helping several financial institutions deal with the fallout from the Wendy’s breach. The consultant, who spoke on condition of anonymity, said many of his client banks had customers who re-compromised their cards several times in a month because they ate at several different Wendy’s locations throughout the month.

“A lot of them are kind of having a tough time because they’re having trouble putting context around the exposure window, and because customers keep re-compromising themselves,” the consultant said. “The banks are reluctant to keep re-issuing cards if the cards are going to get re-compromised over and over because some customers just have to have their hamburgers each week.”

Many banks and credit unions are now issuing more secure (and more expensive to manufacture) chip-based credit and debit cards. The chip cards — combined with chip card readers at merchant cash registers — are designed to make it much harder and more expensive for thieves to counterfeit stolen cards. It’s not certain yet, but it seems likely that the breached Wendy’s locations were not asking customers to dip their chip cards but instead to swipe the card’s magnetic stripe.

Curious about why so many retailers have chip-enabled credit/debit card terminals and yet still ask customers to swipe? Check out The Great EMV Fakeout: No Chip For You! For a primer on why so many financial institutions in the United States are adopting chip-and-signature over chip-and-PIN, see this piece.
Top

py3status v2.9

Postby ultrabug via Ultrabug »

py3status v2.9 is out with a good bunch of new modules, exciting improvements and fixes !

Thanks

This release is built from their contributions, thank you !

  • @4iar
  • @AnwariasEu
  • @cornerman
  • Alexandre Bonnetain
  • Alexis ‘Horgix’ Chotard
  • Andrwe Lord Weber
  • Ben Oswald
  • Daniel Foerster
  • Iain Tatch
  • Johannes Karoff
  • Markus Weimar
  • Rail Aliiev
  • Themistokle Benetatos

New modules

  • arch_updates module, by Iain Tatch
  • deadbeef module to show current track playing, by Themistokle Benetatos
  • icinga2 module, by Ben Oswald
  • scratchpad_async module, by johannes karoff
  • wifi module, by Markus Weimar
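
If you want to try one of them, py3status picks modules up straight from your i3status configuration; a minimal sketch using two of the module names above:

order += "arch_updates"
order += "wifi"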

Fixes and enhancements

  • Rail Aliiev implemented a flake8 check via travis-ci, so we now have a new build-passing badge
  • fix: handle format_time tztime parameter thx to @cornerman, fix issue #177
  • fix: respect ordering of the ipv6 i3status module even on empty configuration, fix #158 as reported by @nazco
  • battery_level module: add multiple battery support, by 4iar
  • battery_level module: added formatting options, by Alexandre Bonnetain
  • battery_level module: added option hide_seconds, by Andrwe Lord Weber
  • dpms module: added color support, by Andrwe Lord Weber
  • spotify module: added format_down option, by Andrwe Lord Weber
  • spotify module: fixed color & playbackstatus check, by Andrwe Lord Weber
  • spotify module: workaround broken dbus, removed PlaybackStatus query, by christian
  • weather_yahoo module: support woeid, add more configuration parameters, by Rail Aliiev

What’s next ?

Some major core enhancements and code cleanups are coming up thanks to @cornerman, @Horgix and @pydsigner. The next release will be faster than ever and will consume even less CPU !

Meanwhile, this 2.9 release is available on pypi and Gentoo portage, have fun !
Top

Thieves Nab IRS PINs to Hijack Tax Refunds

Postby BrianKrebs via Krebs on Security »

Last year, KrebsOnSecurity warned that the Internal Revenue Service‘s (IRS) solution for helping victims of tax refund fraud avoid being victimized two years in a row was vulnerable to compromise by identity thieves. According to a story shared by one reader, the crooks are well aware of this security weakness and are using it to revisit tax refund fraud on at least some victims two years running — despite the IRS’s added ID theft protections.

Tax refund fraud affects hundreds of thousands — if not millions — of U.S. citizens annually. It starts when crooks submit your personal data to the IRS and claim a refund in your name, but have the money sent to an account or address you don’t control.

Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

The IRS’s preferred method of protecting tax refund victims from getting hit two years in a row — the Identity Protection (IP) PIN — has already been mailed to some 2.7 million tax ID theft victims. The six-digit PIN must be supplied on the following year’s tax application before the IRS will accept the return as valid.

As I’ve noted in several stories here, the trouble with this approach is that the IRS allows IP PIN recipients to retrieve their PIN via the agency’s Web site, after supplying the answers to four easy-to-guess questions from consumer credit bureau Equifax.  These so-called knowledge-based authentication (KBA) or “out-of-wallet” questions focus on things such as previous address, loan amounts and dates and can be successfully enumerated with random guessing.  In many cases, the answers can be found by consulting free online services, such as Zillow and Facebook.

Becky Wittrock, a certified public accountant (CPA) from Sioux Falls, S.D., said she received an IP PIN in 2014 after crooks tried to impersonate her to the IRS.

Wittrock said she found out her IP PIN had been compromised by thieves this year after she tried to file her tax return on Feb. 25, 2016. Turns out, the crooks beat her to the punch by more than three weeks, filing a large refund request with the IRS on Feb. 2, 2016. 

“So, last year I was devastated by this,” Wittrock said, “But this year I’m just pissed.”

Wittrock said she called the toll-free number for the IRS that was printed on the identity theft literature she received from the year before.

“I tried to e-file this weekend and the return was rejected,” Wittrock said. “I received the PIN since I had IRS fraud on my 2014 return. I called the IRS this morning and they stated that the fraudulent use of IP PINs is a big problem for them this year.”

Wittrock said that to verify herself to the IRS representative, she had to regurgitate a litany of static data points about herself, such as her name, address, Social Security number, birthday, how she filed the previous year (married/single/etc), whether she claimed any dependents and if so how many. 

“The guy said, ‘Yes, I do see a return was filed under your name on Feb. 2, and that there was the correct IP PIN supplied’,” Wittrock recalled. “I asked him how can that be, and he said, ‘You’re not the first, we’ve had many cases of that this year.'”

According to Wittrock, the IRS representative shared that the agency wouldn’t be relying on IP PINs for long.

“He said, ‘We won’t be using the six digit PIN next year. We’re working on coming up with another method of verification’,” she recalled. “He also had thrown in something about [requiring] a driver’s license, which didn’t sound like a good solution to me.”

Interestingly, the IRS’s own failure to use anything close to modern authentication methods may have contributed to Wittrock’s original victimization. From January 2014 to May 2015, the IRS allowed anyone to access someone else’s previous year’s W-2 forms, just by supplying the taxpayer’s name, date of birth, Social Security number, address, and the answers to easy-to-guess-or-Google KBA questions.

The IRS killed the Get Transcript function in May 2015 after it was revealed (first on this blog) that crooks were abusing it to hijack consumer identities and refunds. But here’s the problem: the agency requires IP PIN holders seeking a copy of their PIN to jump through the exact same flawed authentication process that afflicted its now-defunct Get Transcript service.

According to the IRS, at least 724,000 citizens had their tax data stolen through the IRS’s Get Transcript feature between January 2014 and May 2015. This may in fact be a lowball number: the IRS previously said the number of those affected was 334,000, a figure that was itself sharply revised from an initial estimate of 110,000 taxpayers.

The IRS did not respond to requests for comment for this story. But in a related story by Quartz last year, the IRS said access to an IP PIN itself “does not expose taxpayer Personally Identifiable Information.” However, this may be of small solace to taxpayers who had their tax and income data stolen directly from the IRS in the first place.

The IRS told Quartz that taxpayers who use IP PINs will be sent a new one in the mail each year, prior to each tax season—making it much harder for an identity thief to access this information.

“That is, hackers would have a small window—between the end of the tax year and the moment a taxpayer files a return—to try to steal the IP PIN,” Keith Collins wrote. The statement added: “In addition, we carefully monitor IP PIN traffic in order to respond swiftly to any potentially suspicious activity.”

I suppose time will tell how swiftly the IRS is moving to respond to suspicious IP PIN activity. In the meantime, if you’d like to know more about tax ID theft and what you can do to minimize your chances of becoming the next victim, check out Don’t Be a Victim of Tax Fraud in ’16.
Top

FreeBSD Project to participate in Google Summer of Code 2016

Postby Webmaster Team via FreeBSD News Flash »

The FreeBSD Project is pleased to announce its participation in Google's 2016 Summer of Code program, which funds summer students to participate in open source projects. This will be the FreeBSD Project's twelfth year in the program, having mentored over 180 successful students through summer-long coding projects between 2005 and 2015.
Top

FreeBSD 10.3-BETA3 Available

Postby Webmaster Team via FreeBSD News Flash »

The third BETA build for the FreeBSD 10.3 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.
Top

Gentoo Ought to be About Choice

Postby rich0 via Rich0's Gentoo Blog »

“Gentoo is about choice.”  We’ve said it so often that it seems like we just don’t bother to say it any more.  However, with some of the recent conflicts on the lists (which I’ve contributed to) and indeed across the FOSS community at large, I think this is a message that is worth repeating…

Ok, bear with me because I’m going to talk about systemd.  This post isn’t really about systemd, but it would probably not be nearly as important in its absence.  So, we need to talk about why I’m bringing this up.

How we got here

Systemd has brought a wave of change in the Linux community, and most of the popular distros have decided to adopt it.  This has created a bit of a vacuum for those who strongly prefer to avoid it, and many of these users have adopted Gentoo (the only other large-ish option is Slackware), and indeed some have begun to contribute back.  The resulting shift in demographics has caused tensions in the community, and I believe this has created a tendency for us to focus too much on what makes us different.

Where we are now

Every distro has a niche of some kind – a mission that gives it a purpose for existence.  It is the thing that its community coalesces around.  When a distro loses this sense of purpose, it will die or fork, whether by the forces of lost contributors or lost profits.  This purpose can certainly evolve over time, but ultimately it is this purpose which holds everything together.

For many years in Gentoo our purpose has been about providing choices, and enabling the user.  Sometimes we enable them to shoot their own feet, and we often enable them to break things in ways that our developers would prefer not to troubleshoot.  We tend to view the act of suppressing choices as contrary to our values, even if we don’t always have the manpower to support every choice that can possibly exist.

The result of this philosophy is what we all see around us.  Gentoo is a distro that can be used to build the most popular desktop linux-based operating system (ChromeOS), and which reportedly is also used as the basis of servers that run NASDAQ[1].  It shouldn’t be surprising that Gentoo works with no fewer than 7 device-manager implementations and 4 service managers.

Still, many in the Linux community struggle to understand us.  They mistake our commitment to providing a choice for some kind of endorsement of that choice.  Gentoo isn’t about picking winners.  We’re not an anti-systemd distro, even if many who dislike systemd may be found among us and it is straightforward to install Gentoo without “systemd” appearing anywhere in the filesystem.  We’re not a pro-systemd distro, even if (IMHO) we offer one of the best and undiluted systemd experiences around.  We’re a distro where developers and users with a diverse set of interests come together to contribute using a set of tools that makes it practical for each of us to reach in and pull out the system that we want to have.

Where we need to be

Ultimately, I think a healthy Gentoo is one which allows us all to express our preferences and exchange our knowledge, but where in the end we all get behind a shared goal of empowering our users to make the decisions.  There will always be conflict when we need to pick a default, but we must view defaults as conveniences and not endorsements.  Our defaults must be reasonably well-supported, but not litmus tests against which packages and maintainers are judged.  And, in the end, we all benefit when we are exposed to those who disagree and are able to glean from them the insights that we might have otherwise missed on our own.

When we stop making Gentoo about a choice, and start making it about having a choice, we find our way.

1 – http://www.computerworld.com/article/2510334/financial-it/how-linux-mastered-wall-street.html

Filed under: foss, gentoo, linux, Uncategorized
Top

Adding a failed HDD back into a ZFS mirror

Postby Dan Langille via Dan Langille's Other Diary »

I do have a FreeBSD-11 box, cuppy:

$ uname -a
FreeBSD cuppy.int.unixathome.org 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r279394: Sat Feb 28 21:01:21 UTC 2015 dan@cuppy.unixathome.org:/usr/obj/usr/src/sys/GENERIC amd64
$

That box is used mostly for testing and/or erasing DLT tapes. The current status of that box is not healthy. It’s running fine, but it is not optimal: Let’s [...]
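
A sketch of the typical recovery commands (device names illustrative): zpool online re-adds a disk that merely dropped out, while zpool replace resilvers one that came back empty or was swapped in place:

zpool online tank ada3
zpool replace tank ada3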
Top

EFF's Panopticlick at Enigma 2016

Postby Flameeyes via Flameeyes's Weblog »

One of the things I was the most interested to hear about, at Enigma 2016, was news about EFF's Panopticlick. For context, here is the talk from Bill Budington:

I wrote before about the tool, but they have recently reworked and rebranded it as a platform for promoting their Privacy Badger, which I don't particularly care for. For my intents, they luckily still provide the detailed information, and this time around they make it more prominent that they rely on the fingerprintjs2 library for this information. Which means I could actually try and extend it.

I tried to bring up one of my concerns at the post-talk Q&A at the conference (the Q&A were not recorded), so I thought it would be nice to publish my few comments about the tool as it is right now.

The first comment is this: both Panopticlick and Privacy Badger fail to consider the idea of server-side tracking. I have said that before, and I will repeat it now: there are plenty of ways to identify a particular user, even across sites, just by tracking behaviours that are seen passively on the server side. Bill Budington's answer to this at the conference was that Privacy Badger allows cookies only if there is a policy in place from the site, and counts on this policy being binding for the site.

But this does not mean much — Privacy Badger may stop the server from setting a cookie, but there are plenty of behaviours that can be observed without the help of the browser, or even more interestingly, with the help of Privacy Badger, uBlock, and similar other "privacy conscious" extensions.

Indeed, not allowing cookies is, already, a piece of trackable information. And that's where the problem with self-selection, which I already hinted at before, comes in: when I ran Panopticlick on my laptop earlier, it told me that one out of 1.42 browsers has cookies enabled. While I don't have access to hard statistics about that, I do not think it is realistic that about 30% of browsers have cookies disabled.

If you connect this to the commentary on what NSA's Rob Joyce said at the closing talk, which unfortunately I was not present for, you could say that the fact that Privacy Badger is installed, and fetches a given path from a server trying to set a cookie, is a good way to figure out information on a person, too.

The other problem is more interesting. In the talk, Budington briefly introduces the concept of Shannon entropy, although not by that name, and gives an example of the different amounts of entropy provided by knowing someone's zodiac sign versus knowing their birthday. He also points out that these two pieces of information are not independent, so you cannot simply sum their entropies, which is indeed correct. But there are two problems with that.
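
As a quick sanity check on that example (assuming uniform distributions, which real birthdays are not): a zodiac sign carries log2(12) ≈ 3.58 bits and a full birthday log2(365) ≈ 8.51 bits; since the sign is completely determined by the date, knowing both still yields about 8.51 bits, not the 12.09 a naive sum would suggest.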

The first is that the Panopticlick interface does seem to treat all the information it gathers as at least partially independent, and indeed shows a number of entropy bits higher than the single highest entry they have. But it is definitely not the case that all entries are independent. Even leaving aside browser-specific things such as the type of images requested and so on, for many languages (though not English) there is a timezone correlation: the vast majority of Italian users would be reporting the same timezone, either +1 or +2 depending on the time of the year; sure, there are expats and geeks, but they are definitely not as common.

The second problem is that there is a more interesting approach to take when you are submitted key/value pairs of information that should not be independent, in independent ways. Going back to the example of date of birth and zodiac sign, the calculation of entropy in this example is done starting from facts, particularly those about which people cannot lie — I'm sure that in any one database of registered users, January 1st is skewed, having many more than 1/365th of the users.

But what happens if the information is gathered separately? If you ask a user both their zodiac sign and their date of birth separately, they may lie. And when (not if) they do, you may have a more interesting piece of information. Because if you have a network of separate social sites/databases, in which only one user ever selects being born on February 18th but being a Scorpio, you have a very strong signal that it might be the same user across them.

This is the same situation I described some time ago of people changing their User-Agent string to try to hide, but then creating unique (or nearly unique) signatures of their passage.

Also, while Panopticlick will tell you if the browser is doing anything to avoid fingerprinting (how?), it still does not seem to tell you if any of your extensions are making you more unique. And since it's hard to tell whether some JavaScript bit is trying to load a higher-definition picture, or hide pieces of the UI for your small screen, versus telling the server about your browser setup, it is not like they care if you disabled your cookies…

For a more proactive approach to improving users' privacy, we should ask more browser vendors to do what Mozilla did six years ago and sanitize their User-Agent content. Currently, Android mobile browsers report both the device type and the build number, which makes them much easier to track, even though the suggestion has been, up to now, to use mobile browsers because they look more like each other.
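
To illustrate with representative (not real) strings: a sanitized desktop UA such as Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0 is identical across millions of installs, while a stock Android browser may send something like Mozilla/5.0 (Linux; Android 6.0; SM-G920F Build/MMB29K) AppleWebKit/537.36, where the device model (SM-G920F) and firmware build (MMB29K) alone narrow a visitor down considerably.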

And we should start wondering how much a given browser extension adds or subtract from the uniqueness of a session. Because I think most of them are currently adding to the entropy, even those that are designed to "improve privacy."
Top

IRS: 390K More Victims of IRS.Gov Weakness

Postby BrianKrebs via Krebs on Security »

The U.S. Internal Revenue Service (IRS) today sharply revised previous estimates on the number of citizens that had their tax data stolen since 2014 thanks to a security weakness in the IRS’s own Web site. According to the IRS, at least 724,000 citizens had their personal and tax data stolen after crooks figured out how to abuse a (now defunct) IRS Web site feature called “Get Transcript” to steal victim’s prior tax data.

The number is more than double the figures the IRS released in August 2015, when it said some 334,000 taxpayers had their data stolen via authentication weaknesses in the agency’s Get Transcript feature.

Turns out, those August 2015 estimates were themselves more than triple the figure from May 2015, when the IRS shut down its Get Transcript feature and announced it thought crooks had abused it to pull previous years’ tax data on just 110,000 citizens.

In a statement released today, the IRS said a more comprehensive, nine-month review of the Get Transcript feature since its inception in January 2014 identified the “potential access of approximately 390,000 additional taxpayer accounts during the period from January 2014 through May 2015.”

The IRS said an additional 295,000 taxpayer transcripts were targeted but access was not successful, and that mailings notifying these taxpayers will start February 29. The agency said it also is offering free credit monitoring through Equifax for affected consumers, and placing extra scrutiny on tax returns from citizens with affected SSNs.

The criminal Get Transcript requests fuel refund fraud, which involves crooks claiming a large refund in the name of someone else and intercepting the payment. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

As I warned in March 2015, the flawed Get Transcript function at issue required taxpayers who wished to obtain a copy of their most recent tax transcript to provide the IRS’s site with the following information: the applicant’s name, date of birth, Social Security number and filing status. After that data was successfully supplied, the IRS used a service from credit bureau Equifax that asks four so-called “knowledge-based authentication” (KBA) questions. Anyone who succeeded in supplying the correct answers could see the applicant’s full tax transcript, including prior W2s, current W2s and more or less everything one would need to fraudulently file for a tax refund.

These KBA questions — which involve multiple choice, “out of wallet” questions such as previous address, loan amounts and dates — can be successfully enumerated with random guessing. But in practice it is far easier, as we can see from the fact that thieves were successfully able to navigate the multiple questions more than half of the times they tried. The IRS said it identified some 1.3 million attempts to abuse the Get Transcript service since its inception in January 2014; in 724,000 of those cases the thieves succeeded in answering the KBA questions correctly.
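
To put that success rate in perspective: 724,000 successes out of 1.3 million attempts is roughly a 56 percent hit rate. If the KBA step were, say, four multiple-choice questions with five possible answers each, blind guessing would succeed only about one attempt in 625, or 0.16 percent of the time. A hit rate several hundred times higher strongly suggests the thieves were looking the answers up, not guessing.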

The IRS’s answer to tax refund victims — the Identity Protection (IP) PIN — is just as flawed as the now defunct Get Transcript system. These IP PINS, which the IRS has already mailed to some 2.7 million tax ID theft victims, must be supplied on the following year’s tax application before the IRS will accept the return.

The only problem with this approach is that the IRS allows IP PIN recipients to retrieve their PIN via the agency’s Web site, after supplying the answers to the same type of KBA questions from Equifax that opened the Get Transcript feature to exploitation by fraudsters.  These KBA questions focus on things such as previous address, loan amounts and dates and can be successfully enumerated with random guessing.  In many cases, the answers can be found by consulting free online services, such as Zillow and Facebook.

ID thieves understand this all too well, and even a relatively unsophisticated gang engaged in this activity can make millions via tax refund fraud. Last week, a federal grand jury in Oregon unsealed indictments against three men accused of using the IRS’s Get Transcript feature to obtain 1,200 taxpayers’ transcripts. In total, the authorities allege the men filed over 2,900 false federal tax returns seeking over $25 million in fraudulent refunds.  The IRS says it rejected most of those claims, but that the gang managed to successfully obtain $4.7 million in illegal refunds.

HOW BAD WAS IT OVERALL IN 2015?

The IRS hasn’t officially released numbers on how much tax refund fraud it saw overall in 2015, but in response to questions from KrebsOnSecurity it offered figures on how many fraudulent returns it detected and blocked last year.

“In calendar year 2015, the IRS rejected or suspended the processing of 4.8 million suspicious returns. The IRS stopped 1.4 million confirmed identity theft returns, totaling $8.7 billion,” the agency said in a statement. “Additionally, in calendar year 2015, the IRS stopped $3.1 billion worth of refunds in other types of fraud. That’s a total of $11.8 billion in confirmed fraudulent refunds protected.”

Again, these numbers do not reflect how many fraudulent refunds were paid out in calendar year 2015 due to ID theft, and as we can see with the numbers tied to the Get Transcript fiasco, these numbers have a way of being revised significantly upward over time. I mention that because something about these numbers doesn’t seem to square with figures previously released by the Government Accountability Office and the Federal Trade Commission.

Last month, the FTC said it saw an almost 50 percent spike in ID theft claims in 2015, a jump that was thanks largely to a huge uptick in consumer reports of tax refund fraud. Likewise, a report by the IRS last year indicates that between Jan. 1, 2015 and Sept. 30, 2015, the IRS saw more than 600,000 incidents of tax-related ID theft, up more than 50 percent over 2014, and 30 percent over 2013.

According to a January 2015 GAO report (PDF), the IRS estimated it prevented $24.2 billion in fraudulent identity theft refunds in 2013. Unfortunately, the IRS also paid $5.8 billion that year for refund requests later determined to be fraud. The GAO noted that because of the difficulties in knowing the amount of undetected fraud, the actual amount could far exceed those estimates.

The best way to avoid becoming a victim of tax refund fraud is to file your taxes before the fraudsters can. See Don’t Be A Victim of Tax Refund Fraud in ’16 for more tips on avoiding this ID theft headache.
Top

Setting USE_EXPAND flags in package.use

Postby voyageur via Voyageur's corner »

This has apparently been supported in Portage for some time, but I only learned it recently from a gentoo-dev mail: you do not have to write down the expanded USE-flags in package.use anymore (or set them in make.conf)!

For example, if I wanted to set some APACHE2_MODULES and a custom APACHE2_MPM, the standard package.use entry would be something like:

www-servers/apache apache2_modules_proxy apache2_modules_proxy_http apache2_mpms_event ssl
Not as pretty/convenient as an APACHE2_MODULES="proxy proxy_http" line in make.conf. Here is the best-of-both-worlds syntax (also supported in Paludis apparently):

www-servers/apache ssl APACHE2_MODULES: proxy proxy_http APACHE2_MPMS: event
Or, if you use python 2.7 as your main python interpreter but want python 3.4 for libreoffice-5.1:

app-office/libreoffice PYTHON_SINGLE_TARGET: python3_4
Have fun cleaning your package.use file!
Top

FreeBSD and ZFS

Postby Anne Dickison via FreeBSD Foundation »

ZFS has been making headlines lately, so it seems like the right time to talk about the longstanding relationship between FreeBSD and ZFS.


For nearly seven years, FreeBSD has included a production quality ZFS implementation, making it one of the key features of the FreeBSD operating system. ZFS is a combined file system and volume manager. Decoupling physical media from logical volumes allows free space to be efficiently shared between all of the file systems. ZFS introduced unprecedented data integrity and reliability guarantees to storage on FreeBSD. ZFS supports varying levels of redundancy for tolerance of hardware failures and includes cryptographic checksums on all data to guard against corruption.
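
For readers who have never tried it, the day-to-day surface is pleasantly small; a minimal sketch (pool and device names are illustrative):

# create a mirrored pool, carve out a compressed dataset, then verify the data
zpool create tank mirror da0 da1
zfs create -o compression=lz4 tank/data
zpool scrub tank
zpool status tank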


Allan Jude, VP of Operations at ScaleEngine and coauthor of FreeBSD Mastery: ZFS, said “We started using ZFS in 2011 because we needed to safely store a huge quantity of video for our customers. FreeBSD was, and still is, the best platform for deploying ZFS in production. We now store more than a petabyte of video using ZFS, and use ZFS Boot Environments on all of our servers.”


So why does FreeBSD include ZFS and contribute to its continued development? FreeBSD community members understand the need for continued development work as technologies evolve. OpenZFS is the truly open source successor to the ZFS project and the FreeBSD Project has participated in OpenZFS since its founding in 2013. FreeBSD developers and those from Delphix, Nexenta, Joyent, the ZFS on Linux project, and the Illumos project work together to continue improving OpenZFS.


FreeBSD’s unique open source infrastructure, copyfree license, and engaged community support the integration of a variety of free software components, including OpenZFS. FreeBSD makes an excellent operating system for servers and end users, and it provides a foundation for many open source projects and commercial products.

We're happy that ZFS is available in FreeBSD as a fully integrated, first class file system and wish to thank all of those who have contributed to it over the years.
Top

Breached Credit Union Comes Out of its Shell

Postby BrianKrebs via Krebs on Security »

Notifying people and companies about data breaches often can be a frustrating and thankless job. Despite my best efforts, sometimes a breach victim I’m alerting will come away convinced that I am not an investigative journalist but instead a scammer. This happened most recently this week, when I told a California credit union that its online banking site was compromised and apparently had been for nearly two months.

On Feb. 23, I contacted Coast Central Credit Union, a financial institution based in Eureka, Calif. that serves more than 60,000 customers. I explained who I was, how they’d likely been hacked, how they could verify the hack, and how they could fix the problem. Two days later when I noticed the site was still hacked, I contacted the credit union again, only to find they still didn’t believe me.

News of the compromise came to me via Alex Holden, a fellow lurker in the cybercrime underground and founder of Hold Security [full disclosure: While Holden’s site lists me as an advisor to his company, I receive zero compensation for that role]. Holden told me that crooks had hacked the credit union’s site and retrofitted it with a “Web shell,” a simple backdoor program that allows an attacker to remotely control the Web site and server using nothing more than a Web browser.

A screen shot of the credit union’s hacked Web site via the Web shell.
The credit union’s switchboard transferred me to a person in Coast Central’s tech department who gave his name only as “Vincent.” I told Vincent that the credit union’s site was very likely compromised, how he could verify it, etc. I also gave him my contact information, and urged him to escalate the issue. After all, I said, the intruders could use the Web shell program to upload malicious software that steals customer passwords directly from the credit union’s Web site. Vincent didn’t seem terribly alarmed about the news, and assured me that someone would be contacting me for more information.

This afternoon I happened to reload the login page for the Web shell on the credit union’s site and noticed it was still available. A call to the main number revealed that Vincent wasn’t in, but that Patrick in IT would take my call. For better or worse, Patrick was deeply skeptical, suspecting that I was impersonating the author of this site.

I commended him on his wariness and suggested several different ways he could independently verify my identity. When asked for a contact at the credit union that could speak to the media, Patrick said that person was him but declined to tell me his last name. He also refused to type in a Web address on his own employer’s Web site to verify the Web shell login page.

“I hope you do write about this,” Patrick said doubtfully, after I told him that I’d probably put something up on the site today about the hack. “That would be funny.”

The login page for the Web shell that was removed today from Coast Central Credit Union’s Web site.
Exasperated, I told Patrick good luck and hung up. Thankfully, I did later hear from Ed Christians, vice president of information systems at Coast Central. Christians apologized for the runaround and said everyone in his department was a regular reader of KrebsOnSecurity. “I was hoping I’d never get a call from you, but I guess I can cross that one off my list,” Christians said. “We’re going to get this thing taken down immediately.”

The credit union has since disabled the Web shell and is continuing to investigate the extent and source of the breach. There is some evidence to suggest the site may have been hacked via an outdated version of Akeeba Backup — a Joomla component that allows users to create and manage complete backups of a Joomla-based website. Screen shots of the files listed by the Web shell planted on Coast Central Credit Union indeed indicate the presence of Akeeba Backup on the financial institution’s Web server.

A Web search on one backdoor component that the intruders appear to have dropped on the credit union’s site on Dec. 29, 2015 — a file called “sfx.php” — turns up this blog post in which Swiss systems engineer Claudio Marcel Kuenzler described his investigation of a site that was hacked through the Akeeba Backup function.

“The file was uploaded with a simple GET request by using a vulnerability in the com_joomlaupdate (which is part of Akeeba Backup) component,” Kuenzler wrote, noting that there is a patch available for the vulnerability.
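If you run Joomla yourself, one quick sanity check is to read the installed component’s version out of its manifest XML and compare it against the patched release. Below is a rough sketch; the manifest path and the minimum version are placeholders for illustration, so consult Akeeba’s own advisory for the real numbers:

    # Rough sketch: read the installed component version from its Joomla
    # manifest XML and compare it with a minimum patched version. Both
    # the manifest path and MINIMUM_SAFE are placeholders; consult the
    # vendor's advisory for the actual fixed release number.
    import xml.etree.ElementTree as ET

    MANIFEST = "/var/www/html/administrator/components/com_akeeba/akeeba.xml"
    MINIMUM_SAFE = (4, 0, 0)  # placeholder; not the actual fixed version

    def as_tuple(version_text):
        # Turn "3.11.4" into (3, 11, 4) so versions compare numerically.
        return tuple(int(part) for part in version_text.strip().split("."))

    installed = as_tuple(ET.parse(MANIFEST).getroot().findtext("version"))
    label = ".".join(map(str, installed))
    if installed < MINIMUM_SAFE:
        print(f"Akeeba Backup {label} predates the assumed fix; update now.")
    else:
        print(f"Akeeba Backup {label} meets the assumed minimum.")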

These Web shell components are extremely common, have been around for years, and are used by online miscreants for a variety of tasks — from selling ad traffic and spreading malware to promoting malicious and spammy Web sites.

It’s not clear yet whether the hackers who hit the credit union’s site did anything other than install the backdoor, but Kuenzler wrote that in his case the intruders indeed used their access to relay spam. The attackers could just as easily have booby-trapped the credit union’s site to foist malicious software disguised as a security update when customers tried to log in at the site.

Holden said he’s discovered more than 13,000 sites that are currently infected with Web shells just like the one that hit Coast Central Credit Union, and that the vast majority of them are Joomla and WordPress blogs that get compromised through outdated and insecure third-party plugins for these popular content management systems. Worse yet, all of the 13,000+ backdoored sites are being remotely controlled with the same username and password.
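Since all of those shells reportedly answer with the same login page, a defender sweeping a list of sites can fingerprint it: fetch a candidate URL and hash the response body. Here is a sketch of that idea; the shell path and the fingerprint value are stand-ins, since a real sweep would hash an actual captured login page:

    # Sketch: fingerprint a shared Web-shell login page by hashing the
    # response body. KNOWN_BAD_SHA256 and SHELL_PATH are stand-ins; a
    # real sweep would use the hash of an actual captured shell page.
    import hashlib
    import urllib.request

    KNOWN_BAD_SHA256 = "0" * 64   # placeholder fingerprint
    SHELL_PATH = "/sfx.php"       # placeholder location of the shell

    def looks_backdoored(site):
        try:
            with urllib.request.urlopen(site + SHELL_PATH, timeout=10) as resp:
                body = resp.read()
        except OSError:           # connection errors, 404s, timeouts
            return False
        return hashlib.sha256(body).hexdigest() == KNOWN_BAD_SHA256

    for site in ("http://example.com", "http://example.org"):
        print(site, "-> suspicious" if looks_backdoored(site) else "-> no match")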

“It’s a bot,” he said of the self-replicating malware used to deploy the Web shell that infested the credit union’s site. “It goes and exploits vulnerable sites and installs a backdoor with the same credentials.”

Holden said his company has been reaching out to the affected site owners, but that it hasn’t had much luck getting responses. In any case, Holden said he doesn’t relish the idea of dealing with pushback and suspicion from tons of victims.

“To be fair, most vulnerable sites belong to individuals or small companies that do not have contacts, and a good portion of them are outside of US,” Holden said. “We try to find owners for some but very few reply.”

If you run a Web site, please make sure to keep your content management system up to date with the latest patches, and don’t put off patching or disabling outdated third-party plugins. And if anyone wants to verify who I am going forward, please feel free to contact me through this site, via encrypted email, or through Wickr (I’m “krebswickr”).
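On the patching point, a habit that costs nothing is periodically auditing your webroot for PHP files you did not deploy. A dropped backdoor like sfx.php often betrays itself by its modification time, though attackers can forge timestamps, so treat a clean result as a first pass rather than proof. A rough sketch, assuming a webroot of /var/www/html:

    # Rough sketch: flag PHP files in a webroot modified after a date
    # when you know the site was clean. The path and cutoff date are
    # assumptions; adjust both for your own site.
    import os
    from datetime import datetime

    WEBROOT = "/var/www/html"        # assumed webroot
    CUTOFF = datetime(2015, 12, 1)   # last known-clean date (example)

    for dirpath, _dirnames, filenames in os.walk(WEBROOT):
        for name in filenames:
            if not name.endswith(".php"):
                continue
            path = os.path.join(dirpath, name)
            mtime = datetime.fromtimestamp(os.stat(path).st_mtime)
            if mtime > CUTOFF:
                print(f"{mtime:%Y-%m-%d %H:%M}  {path}")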
Top

PortsCamp Malaysia

Postby miwi via Martin Wilke »

A quick statement regarding the MyPortsCamp thing :).

When my old friend Marcelo Araujo told me all about Taiwan PortsCamp, I was really excited. I was even more excited when I saw the pictures and heard how it turned out.

I was told that it was very successful. From day one I’ve shared in our mybsd Telegram group that I love the idea :). The other day there was another discussion in the group about learning more about FreeBSD and ports. I pointed out that I would like to see some contributors or committers from Malaysia; currently I am the only one here, and I am not even a local :(.

I had a conversation with Mohd Fazli Azran, who is well known around here for his open source passion and support. He asked on Facebook whether anyone would want to become my “protege” and learn more about ports, how to do things, and so on. To my surprise, the feedback was quite good. I reminded him about Taiwan PortsCamp and said that I would love to see it happen here in Malaysia. Fazli created an FB event to gauge how much interest the event could attract; the teaser will run until the end of next week, Friday March 4th, and after that we will decide whether to organize the event here. The target is to find at least 20 people who are interested.

What is PortsCamp?
This is a community event where FreeBSD committers help people understand what ports are and how to package new software and submit it as a new port.

The Basic idea
The ports system used in FreeBSD is dead simple; a typical new port is nothing more than a short Makefile plus a checksum file (distinfo) and a brief description (pkg-descr). It should be easy for any open source software publisher to submit their code to FreeBSD. But they just don’t know how simple it is, so we are gonna show them.

That’s it!
Top