Planet

We Need More Practical, Business-Oriented Open Source — Case Study: The Pizzeria CRM

Post by Flameeyes via Flameeyes's Weblog »

I’m an open source developer because I think that open source makes for safer, better software for the whole community of users. I also think that, by making more software available to a wider audience, we improve the quality, safety and security for every user out there, and as such I will always push for more, and more open, software. This is why I support the Public Money, Public Code campaign by the FSFE for opening up the software developed explicitly for public administrations.

But there is one space that I have found quite lacking when it comes to open source: business-oriented software. The first obvious gap is the lack of good accounting software, as Jonathan has written extensively about, but there is more. When I was consulting as a roaming sysadmin (or, with a more buzzwordy and marketing-friendly term, a Managed Services Provider — MSP), a number of my customers relied heavily on nearly off-the-shelf software to actually run their business. And in at least a couple of cases, they commissioned custom-tailored software from me.

In a lot of cases, there isn’t really a good reason not to open-source this software: while it is required to run certain businesses, it is clearly not enough to run them. And yet there are very few examples of such software in the open, and that includes software from me: my customers didn’t really like the idea of releasing the software to others, even after I offered a discount on the development price.

I want to walk through the details of one example of such custom software — something that, to give it a name, would be called a CRM (Customer Relationship Manager) — which I built for a pizzeria in Italy. I won’t be opening the source code for it (though I wish I could), and I won’t be showing screenshots or providing the name of the actual place, instead referring to it as Pizza Planet.

This CRM (though the name sounds more professional than what it really was) was custom-designed to suit the work environment of the pizzeria — that is to say, I did whatever they asked me, even when it disagreed with my sense of aesthetics and engineering. The basic idea was very simple: when a customer called, they wanted to know who the customer was even before picking up the phone — effectively inspecting the caller ID and connecting it to the easiest database-editing facility I could write, so that they could attach a name and a freeform text box to write down addresses, notes, and preferences.

The reason they called me to write this is that they had originally bought a hardware PBX (for a single-room pizzeria!) just so that a laptop could connect to it and use the vendor’s Address Book functionality. Except this functionality kept crashing, and after many weeks of back-and-forth with the headquarters in Japan, the integrator could not figure out how to get it to work.

As the pizzeria was wired with ISDN (legacy technology, heh) to be able to take at least two calls at the same time, the solution I came up with was building a simple “industrial” PC with an ISDN line card and Asterisk, getting them a standard SIP phone, and writing the “CRM” so that it registered a SIP connection to the same Asterisk server (but never answered calls). Once an inbound call arrived, it would look up the phone number in a simple storage layer and, if there was an entry, display it in very large fonts, easily readable from around the kitchen.
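
To make the lookup half of this concrete, here is a minimal Python sketch of the idea. Note that it uses Asterisk’s Manager Interface (AMI) rather than the parked SIP registration the original design used, and the host, credentials and database schema are invented for illustration; manager.conf would need a matching user with read access to call events.

    import socket
    import sqlite3

    # Hypothetical AMI host and credentials for illustration only.
    HOST, USER, SECRET = "asterisk.local", "crm", "s3cret"

    db = sqlite3.connect("customers.db")
    db.execute("CREATE TABLE IF NOT EXISTS customers (phone TEXT PRIMARY KEY, notes TEXT)")

    sock = socket.create_connection((HOST, 5038))
    sock.sendall(f"Action: Login\r\nUsername: {USER}\r\nSecret: {SECRET}\r\n\r\n".encode())

    buf = b""
    while True:
        data = sock.recv(4096)
        if not data:
            break
        buf += data
        # AMI frames are blocks of "Key: Value" lines terminated by a blank line.
        while b"\r\n\r\n" in buf:
            frame, buf = buf.split(b"\r\n\r\n", 1)
            fields = dict(line.split(": ", 1)
                          for line in frame.decode().splitlines() if ": " in line)
            if fields.get("Event") == "Newchannel" and fields.get("CallerIDNum"):
                row = db.execute("SELECT notes FROM customers WHERE phone = ?",
                                 (fields["CallerIDNum"],)).fetchone()
                # The real thing displayed this in very large fonts.
                print(fields["CallerIDNum"], "->", row[0] if row else "unknown caller")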

As things moved and changed, a second pizzeria opened and required a similar setup. Except that, ISDN being legacy technology, the provider was going to charge through the nose for connecting a new line. So this time we set up a VoIP account, and instead of an on-premises PC, Asterisk ran on a server (in close proximity to the VoIP provider). And since the ISDN limit on concurrent calls no longer applied, the scope of the project expanded.

First of all, up to four calls could be queued, “your call is very important to us”-style. We briefly discussed letting callers reserve a spot and be called back, but at the time calls to mobile phones were still expensive enough that they wanted to avoid that. Instead, queued callers would get a simple message telling them to wait in line to reach the pizzeria. The CRM started showing the length of the queue (in a very clunky way), although it never showed the “next call” like the customer wanted (the relationship between the customer and the VoIP provider went south, and we all ended up withdrawing from the engagement).

Another feature we ended up implementing was opening hours: when a call arrived outside the advertised opening hours, an announcement would play (recorded by a paid friend who used to act in theatre, and thus had good diction).

I’m fairly sure that none of this would actually comply with the new GDPR requirements. At the very least, the customers should be advised that their data (phone number, address) will be saved.

But why am I talking about this in the context of Open Source software? Well, while a lot of the components used in this setup were open source, or even Free Software, it still required a lot of integration to become usable. There’s no “turnkey pizzeria setup” — you can build the system up from components, but you need not just an integrator, you need a full developer (or development team) to make sure all the components fit together.

I honestly wish I had open-sourced more of this. If I were to design this again right now, I would probably make sure that there was a direct, real-time API between Asterisk and a Web-based CRM. That would definitely make it easier to secure the data for GDPR compliance. But there is more to it than that: an actual integrated, isolated system where you can make configuration changes gives the user (customer) the ability to set things up without having to know how the configuration files are structured.

Setting up Asterisk took me a week or two of reading through documentation and books on the topic, plus a significant amount of experimentation with a VoIP number and a battery of testing SIM cards at home. To make the recordings work I had to convert the files to G.729 beforehand, or playback would use a significant amount of CPU transcoding on the fly.

But these are not unknown needs. There are plenty of restaurants (not just pizza places) out there that probably need something like this. And indeed, services such as Deliveroo appear to now provide a similar all-in-one solution… which is good for restaurants in cities big enough to sustain Deliveroo, but probably not great for smaller restaurants in smaller cities, which would not have much of a chance of hiring developers to build such a system themselves.

So, rambling aside, I really wish we had more ready-to-install Open Source solutions for businesses (restaurants, hotels, … — I would like to add banks to that, but I know regulatory compliance is hard). I think these would have a very positive social impact on all those towns and cities that lack the critical mass of tech influence to spawn their own collection of mobile apps, for instance.

If you’re the kind of person who complains that startups only appear to want to solve problems in San Francisco, maybe think of what problems you can solve in and around your town or city.

Twitter to All Users: Change Your Password Now!

Post by BrianKrebs via Krebs on Security »

Twitter just asked all 300+ million users to reset their passwords, citing the exposure of user passwords via a bug that stored passwords in plain text — without any sort of hashing that would mask a Twitter user’s true password. The social media giant says it has fixed the bug and that so far its investigation hasn’t turned up any signs of a breach or that anyone misused the information. But if you have a Twitter account, please change your account password now.

Or if you don’t trust links in blogs like this (I get it) go to Twitter.com and change it from there. And then come back and read the rest of this. We’ll wait.

In a post to its company blog this afternoon, Twitter CTO Parag Agrawal wrote:

“When you set a password for your Twitter account, we use technology that masks it so no one at the company can see it. We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone.

A message posted this afternoon (and still present as a pop-up) warns all users to change their passwords.
“Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password. You can change your Twitter password anytime by going to the password settings page.”

Agrawal explains that Twitter normally masks user passwords with a password hashing function called “bcrypt,” which replaces the user’s password with a seemingly random string of numbers and letters that is stored in Twitter’s system.

“This allows our systems to validate your account credentials without revealing your password,” said Agrawal, who says the technology they’re using to mask user passwords is the industry standard.

“Due to a bug, passwords were written to an internal log before completing the hashing process,” he continued. “We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again.”
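
For readers wondering what this “masking” looks like in practice, here is a minimal Python illustration using the third-party bcrypt package (a sketch of the general technique, not Twitter’s actual code). The stored hash reveals nothing about the input, which is exactly the protection that logging the password before hashing defeats:

    import bcrypt

    password = b"correct horse battery staple"

    # Hashing: bcrypt salts the password and transforms it irreversibly.
    hashed = bcrypt.hashpw(password, bcrypt.gensalt())
    print(hashed)  # e.g. b'$2b$12$...' -- safe to store

    # Verification re-hashes the candidate with the same salt and compares,
    # so the system never needs to keep the original password around.
    assert bcrypt.checkpw(password, hashed)
    assert not bcrypt.checkpw(b"wrong guess", hashed)

    # Twitter's bug amounted to writing `password` itself to an internal
    # log *before* hashpw() ran -- the hash never got a chance to protect it.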

Agrawal wrote that while Twitter has no reason to believe password information ever left Twitter’s systems or was misused by anyone, the company is still urging all Twitter users to reset their passwords NOW.

A letter to all Twitter users posted by Twitter CTO Parag Agrawal
Twitter advises:
-Change your password on Twitter and on any other service where you may have used the same password.
-Use a strong password that you don’t reuse on other websites.
-Enable login verification, also known as two-factor authentication. This is the single best action you can take to increase your account security.
-Use a password manager to make sure you’re using strong, unique passwords everywhere.

This may be much ado about nothing disclosed out of an abundance of caution, or further investigation may reveal different findings. It doesn’t matter for right now: If you’re a Twitter user and if you didn’t take my advice to go change your password yet, go do it now! That is, if you can.

Twitter.com seems responsive now, but for some period of time Thursday afternoon Twitter had problems displaying many Twitter profiles, or even its homepage. Just a few moments ago, I tried to visit the Twitter CTO’s profile page and got this (ditto for Twitter.com):

What KrebsOnSecurity and other Twitter users got when we tried to visit twitter.com and the Twitter CTO’s profile page late in the afternoon ET on May 3, 2018.
If for some reason you can’t reach Twitter.com, try again soon. Put it on your to-do list or calendar for an hour from now. Seriously, do it now or very soon.

And please don’t use a password that you have used for any other account you use online, either in the past or in the present. A non-comprehensive list (note to self) of some password tips is here.

I have sent some more specific questions about this incident in to Twitter. More updates as available.

Update, 8:04 p.m. ET: Went to reset my password at Twitter and it said my new password was strong, but when I submitted it I was led to a dead page. But after logging in again at twitter.com the new password worked (and the old one didn’t anymore). Then it prompted me to enter a one-time code from my app (you do have 2-factor set up on Twitter, right?). Password successfully changed!

C is a Lie | BSD Now 244

Post by Jupiter Broadcasting via BSD Now Video Feed »

Arcan and OpenBSD, running OpenBSD 6.3 on RPI 3, why C is not a low-level language, HardenedBSD switching back to OpenSSL, how the Internet was almost broken, EuroBSDcon CfP is out, and the BSDCan 2018 schedule is available.


The ultimate guide to EAPI 7

Post by Michał Górny via Michał Górny »

Back when EAPI 6 was approved and ready for deployment, I wrote a blog post entitled the Ultimate Guide to EAPI 6. Now that EAPI 7 is ready, it is time to publish a similar guide to it.

Of all the EAPIs approved so far, EAPI 7 brings the largest number of changes. It follows the path established by EAPI 6: it focuses on integrating features that are either commonly used or that cannot be properly implemented in eclasses, and on removing those that are either deemed unnecessary or too complex to support. However, the circumstances of its creation are entirely different.

EAPI 6 was more like a minor release. It was formed around the time when Portage development had practically stalled. It aimed to collect some old requests into an EAPI that would be easy to implement by people with little knowledge of the Portage codebase. Therefore, the majority of its features oscillated around the bash parts of the package manager.

EAPI 7 is closer to a proper major release. It involved explicit planning ahead of the specification, and the specification was mostly completed even before the implementation work started. We did not initially skip features that were hard to implement, even though the hardest of them were eventually postponed.

I will attempt to explain all the changes in EAPI 7 in this guide, including the rationale and ebuild code examples.

Continue reading

When Your Employees Post Passwords Online

Post by BrianKrebs via Krebs on Security »

Storing passwords in plaintext online is never a good idea, but it’s remarkable how many companies have employees who are doing just that using online collaboration tools like Trello.com. Last week, KrebsOnSecurity notified a host of companies that employees were using Trello to share passwords for sensitive internal resources. Among those put at risk by such activity were an insurance firm, a state government agency and ride-hailing service Uber.

By default, Trello boards for both enterprise and personal use are set to either private (requires a password to view the content) or team-visible only (approved members of the collaboration team can view).

But that doesn’t stop individual Trello users from manually sharing personal boards that include proprietary employer data, information that may be indexed by search engines and available to anyone with a Web browser. And unfortunately for organizations, far too many employees are posting sensitive internal passwords and other resources on their own personal Trello boards that are left open and exposed online.
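
For what it’s worth, organizations can audit this from the outside: Trello’s public REST API reports a board’s visibility. A small Python sketch, where the API key, token and board ID are placeholders you would substitute with your own:

    import requests

    # Placeholders -- generate a real key and token from your own Trello account.
    KEY, TOKEN = "your-api-key", "your-api-token"

    def board_visibility(board_id):
        """Return a board's name and visibility: 'private', 'org' or 'public'."""
        r = requests.get(f"https://api.trello.com/1/boards/{board_id}",
                         params={"fields": "name,prefs", "key": KEY, "token": TOKEN})
        r.raise_for_status()
        data = r.json()
        return data["name"], data["prefs"]["permissionLevel"]

    print(board_visibility("some-board-id"))  # hypothetical board id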

A personal Trello board created by an Uber employee included passwords that might have exposed sensitive internal company operations.
KrebsOnSecurity spent the past week using Google to discover unprotected personal Trello boards that listed employer passwords and other sensitive data. Pictured above was a personal board set up by some Uber developers in the company’s Asia-Pacific region, which included passwords needed to view a host of internal Google Documents and images.

Uber spokesperson Melanie Ensign said the Trello board in question was made private shortly after being notified by this publication, among others. Ensign said Uber found the unauthorized Trello board exposed information related to two users in South America who have since been notified.

“We had a handful of members in random parts of the world who didn’t realize they were openly sharing this information,” Ensign said. “We’ve reached out to these teams to remind people that these things need to happen behind internal resources. Employee awareness is an ongoing challenge. We may have dodged a bullet here, and it definitely could have been worse.”

Ensign said the initial report about the exposed board came through the company’s bug bounty program, and that the person who reported it would receive at least the minimum bounty amount — $500 — for reporting the incident (Uber hasn’t yet decided whether the award should be higher for this incident).

The Uber employees who created the board “used their work email to open a public board that they weren’t supposed to,” Ensign said. “They didn’t go through our enterprise account to create that. We first found out about it through our bug bounty program, and while it’s not technically a vulnerability in our products, it’s certainly something that we would pay for anyway. In this case, we got multiple reports about the same thing, but we always pay the first report we get.”

Of course, not every company has a bug bounty program to incentivize the discovery and private reporting of internal resources that may be inadvertently exposed online.

Screenshots that KrebsOnSecurity took of many far more shocking examples of employees posting dozens of passwords for sensitive internal resources are not pictured here because the affected parties still have not responded to alerts provided by this author.


Trello is one of many online collaboration tools made by Atlassian Corporation PLC, a technology company based in Sydney, Australia. Trello co-founder Michael Pryor said Trello boards are set to private by default and must be manually changed to public by the user.

“We strive to make sure public boards are being created intentionally and have built in safeguards to confirm the intention of a user before they make a board publicly visible,” Pryor said. “Additionally, visibility settings are displayed persistently on the top of every board.”

If a board is Team Visible it means any members of that team can view, join, and edit cards. If a board is Private, only members of that specific board can see it. If a board is Public, anyone with the link to the board can see it.

Interestingly, updates made to Trello’s privacy policy over the past weekend may make it easier for companies to locate personal boards created by employees and pull them behind company resources.

A Trello spokesperson said the privacy changes were made to bring the company’s policies in line with new EU privacy laws that come into enforcement later this month. But they also clarify that Trello’s enterprise features allow the enterprise admins to control the security and permissions around a work account an employee may have created before the enterprise product was purchased.

Uber spokesperson Ensign called the changes welcome.

“As a result companies will have more security control over Trello boards created by current/former employees and contractors, so we’re happy to see the change,” she said.

Update: Added information from Ensign about two users who had their data exposed from the credentials in the unauthorized Trello page published by Uber employees.

Gerbera Media Server 1.2 Now Available

Post by Ian Whyman via v00d00.net »

I am happy to announce that Gerbera Media Server 1.2.0 is now available.

We have had contributions from 15 different contributors, putting together 172 changes since 1.1. Thank you so much to everyone involved!

New Web UI

The most noticeable change since v1.1 will be the awesome new Web UI. We shipped a beta of this in the previous version, but now it is ready for prime time, and enabled by default.



Thank you to Eamonn for all of his ongoing hard work!



UPnP Search

Gerbera now supports searching! This allows clients to offload the hard work of finding your media to the server, which is both faster and more reliable than clients doing it themselves.
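
Under the hood, a UPnP search is just a SOAP request against the server’s ContentDirectory service. Here is a rough Python sketch of what a client sends; the control URL is a placeholder (the real one is advertised in Gerbera’s device description document), and the criteria string asks for all audio items:

    import requests

    CTRL = "http://gerbera.local:49152/upnp/control/cds"  # placeholder control URL
    BODY = """<?xml version="1.0" encoding="utf-8"?>
    <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
                s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
      <s:Body>
        <u:Search xmlns:u="urn:schemas-upnp-org:service:ContentDirectory:1">
          <ContainerID>0</ContainerID>
          <SearchCriteria>upnp:class derivedfrom "object.item.audioItem"</SearchCriteria>
          <Filter>dc:title</Filter>
          <StartingIndex>0</StartingIndex>
          <RequestedCount>10</RequestedCount>
          <SortCriteria></SortCriteria>
        </u:Search>
      </s:Body>
    </s:Envelope>"""

    r = requests.post(CTRL, data=BODY.encode(), headers={
        "Content-Type": 'text/xml; charset="utf-8"',
        "SOAPAction": '"urn:schemas-upnp-org:service:ContentDirectory:1#Search"'})
    print(r.text)  # a DIDL-Lite document describing the matching items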

If you are a BubbleUPnP user on Android, this means the "Random Tracks" feature now works.

Thank you to astylos for this great feature!

Improved Docs

Some of you have mentioned that Gerbera's documentation, as with many other free software projects, has not been as good as it could or should have been.

We are happy to point you to docs.gerbera.io, kindly hosted by Read the Docs.

If you wish to help, it is very easy to get started: simply open a pull request against the Sphinx files in the doc folder.

Bye, YouTube

YouTube support has not worked for a long time, so it has been removed from the code base entirely. Similar functionality may return in a future release.

Bugfixes

  • Fixed AUXDATA truncation with UTF-8 characters.
  • Improved message when libupnp fails to bind correctly.
  • Allowed use of FFmpeg to extract AUXDATA.
  • Duktape JS scripting errors are now visible in the log file.
  • Fixed a crash in the EXIV2 image handler.
  • Fixed "root path" sometimes missing for scripted layouts.

Get it

Find out how to install Gerbera.

Download the source from Github.


EuroBSDcon 2018

Post by Upcoming FreeBSD Events via Upcoming FreeBSD Events »

EuroBSDcon 2018 (https://2018.eurobsdcon.org), University Politehnica of Bucharest, Bucharest, Romania, 20–23 September 2018. EuroBSDcon is the annual European technical conference gathering users and developers working on and with the family of 4.4BSD (Berkeley Software Distribution) based operating systems and related projects. EuroBSDcon offers an exceptional opportunity to learn the latest news from the BSD world, witness contemporary deployment case studies, and personally meet other users and companies using BSD-oriented technologies. EuroBSDcon is also a melting pot for ideas, discussions and information exchange, which often turn into programming projects. The conference has always attracted active programmers, administrators and aspiring students, as well as IT companies at large, which have found the conference a convenient and high-quality training option for their staff. We firmly believe that high-profile education is vital to the future of technology, and hence we greatly welcome students and young people to this regular meeting.

Security Trade-Offs in the New EU Privacy Law

Post by BrianKrebs via Krebs on Security »

On two occasions this past year I’ve published stories here warning about the prospect that new European privacy regulations could result in more spams and scams ending up in your inbox. This post explains in a question and answer format some of the reasoning that went into that prediction, and responds to many of the criticisms leveled against it.

Before we get to the Q&A, a bit of background is in order. On May 25, 2018 the General Data Protection Regulation (GDPR) takes effect. The law, enacted by the European Parliament, requires companies to get affirmative consent for any personal information they collect on people within the European Union. Organizations that violate the GDPR could face fines of up to four percent of global annual revenues.



In response, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — has proposed redacting key bits of personal data from WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges (IP addresses).

Under current ICANN rules, domain name registrars should collect and display a variety of data points when someone performs a WHOIS lookup on a given domain, such as the registrant’s name, address, email address and phone number. Most registrars offer a privacy protection service that shields this information from public WHOIS lookups; some registrars charge a nominal fee for this service, while others offer it for free.
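
As an aside, the WHOIS protocol itself (RFC 3912) is trivially simple: a plain-text query over TCP port 43. A minimal Python sketch, using the Verisign registry server that handles .com domains (a registrar’s own WHOIS server may return a fuller record):

    import socket

    def whois(domain, server="whois.verisign-grs.com"):
        """Send a raw RFC 3912 query and return the server's reply."""
        with socket.create_connection((server, 43), timeout=10) as s:
            s.sendall((domain + "\r\n").encode())
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode(errors="replace")

    print(whois("example.com"))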

But in a bid to help registrars comply with the GDPR, ICANN is moving forward on a plan to remove critical data elements from all public WHOIS records. Under the new system, registrars would collect all the same data points about their customers, yet limit how much of that information is made available via public WHOIS lookups.

The data to be redacted includes the name of the person who registered the domain, as well as their phone number, physical address and email address. The new rules would apply to all domain name registrars globally.

ICANN has proposed creating an “accreditation system” that would vet access to personal data in WHOIS records for several groups, including journalists, security researchers, and law enforcement officials, as well as intellectual property rights holders who routinely use WHOIS records to combat piracy and trademark abuse.

But at an ICANN meeting in San Juan, Puerto Rico last month, ICANN representatives conceded that a proposal for how such a vetting system might work probably would not be ready until December 2018. Assuming ICANN meets that deadline, it could be many months after that before the hundreds of domain registrars around the world take steps to adopt the new measures.

In a series of posts on Twitter, I predicted that the WHOIS changes coming with GDPR will likely result in a noticeable increase in cybercrime — particularly in the form of phishing and other types of spam. In response to those tweets, several authors on Wednesday published an article for Georgia Tech’s Internet Governance Project titled, “WHOIS afraid of the dark? Truth or illusion, let’s know the difference when it comes to WHOIS.”

The following Q&A is intended to address many of the more misleading claims and assertions made in that article.

Cyber criminals don’t use their real information in WHOIS registrations, so what’s the big deal if the data currently available in WHOIS records is no longer in the public domain after May 25?

I can point to dozens of stories printed here — and probably hundreds elsewhere — that clearly demonstrate otherwise. Whether or not cyber crooks provide their real information is beside the point. ANY information they provide — and especially information that they re-use across multiple domains and cybercrime campaigns — is invaluable both in grouping cybercriminal operations and in ultimately identifying who’s responsible for these activities.

To understand why data reuse in WHOIS records is so common among crooks, put yourself in the shoes of your average scammer or spammer — someone who has to register dozens or even hundreds or thousands of domains a week to ply their trade. Are you going to create hundreds or thousands of email addresses and fabricate as many personal details to make your WHOIS listings that much harder for researchers to track? The answer is that those who take this extraordinary step are far and away the exception rather than the rule. Most simply reuse the same email address and phony address/phone/contact information across many domains as long as it remains profitable for them to do so.

This pattern of WHOIS data reuse doesn’t just extend across a few weeks or months. Very often, if a spammer, phisher or scammer can get away with re-using the same WHOIS details over many years without any deleterious effects to their operations, they will happily do so. Why they may do this is their own business, but nevertheless it makes WHOIS an incredibly powerful tool for tracking threat actors across multiple networks, registrars and Internet epochs.

All domain registrars offer free or a-la-carte privacy protection services that mask the personal information provided by the domain registrant. Most cybercriminals — unless they are dumb or lazy — are already taking advantage of these anyway, so it’s not clear why masking domain registration for everyone is going to change the status quo by much. 

It is true that some domain registrants do take advantage of WHOIS privacy services, but based on countless investigations I have conducted using WHOIS to uncover cybercrime businesses and operators, I’d wager that cybercrooks more often do not use these services. Not infrequently, even when they do use WHOIS privacy options there are still gaps in coverage at some point in the domain’s history (such as when a registrant switches hosting providers), gaps which are indexed by historic WHOIS records and offer a brief window of visibility into the details behind the registration.

This is demonstrably true even for organized cybercrime groups and for nation state actors, and these are arguably some of the most sophisticated and savvy cybercriminals out there.

It’s worth adding that if so many cybercrooks seem nonchalant about adopting WHOIS privacy services it may well be because they reside in countries where the rule of law is not well-established, or their host country doesn’t particularly discourage their activities so long as they’re not violating the golden rule — namely, targeting people in their own backyard. And so they may not particularly care about covering their tracks. Or in other cases they do care, but nevertheless make mistakes or get sloppy at some point, as most cybercriminals do.

The GDPR does not apply to businesses — only to individuals — so there is no reason researchers or anyone else should be unable to find domain registration details for organizations and companies in the WHOIS database after May 25, right?

It is true that the European privacy regulations as they relate to WHOIS records do not apply to businesses registering domain names. However, the domain registrar industry — which operates on razor-thin profit margins and which has long sought to be free from any WHOIS requirements or accountability whatsoever — won’t exactly be tripping over themselves to add more complexity to their WHOIS efforts just to make a distinction between businesses and individuals.

As a result, registrars simply won’t make that distinction because there is no mandate that they must. They’ll just adopt the same WHOIS data collection and display policies across the board, regardless of whether the WHOIS details for a given domain suggest that the registrant is a business or an individual.

But the GDPR only applies to data collected about people in Europe, so why should this impact WHOIS registration details collected on people who are outside of Europe?

Again, domain registrars are the ones collecting WHOIS data, and they are most unlikely to develop WHOIS record collection and dissemination policies that seek to differentiate between entities covered by GDPR and those that may not be. Such an attempt would be fraught with legal and monetary complications that they simply will not take on voluntarily.

What’s more, the domain registrar community tends to view the public display of WHOIS data as a nuisance and a cost center. They have mainly allowed public access to WHOIS data only because ICANN’s contracts state that they should. So, from the registrar community’s point of view, the less information they must make available to the public, the better.

Like it or not, the job of tracking down and bringing cybercriminals to justice falls to law enforcement agencies — not security researchers. Law enforcement agencies will still have unfettered access to full WHOIS records.

As it relates to interstate crimes (i.e., the bulk of all Internet abuse), law enforcement — at least in the United States — is divided into two main components: the investigative side (i.e., the FBI and Secret Service) and the prosecutorial side (the state and district attorneys who actually initiate court proceedings intended to bring an accused person to justice).

Much of the legwork done to provide the evidence needed to convince prosecutors that there is even a case worth prosecuting is performed by security researchers. The reasons why this is true are too numerous to delve into here, but the safe answer is that law enforcement investigators typically are more motivated to focus on crimes for which they can readily envision someone getting prosecuted — and because very often their plate is full with far more pressing, immediate and local (physical) crimes.

Admittedly, this is a bit of a blanket statement because in many cases local, state and federal law enforcement agencies will do this often tedious legwork of cybercrime investigations on their own — provided it involves or impacts someone in their jurisdiction. But due in large part to these jurisdictional issues, politics and the need to build prosecutions around a specific locality when it comes to cybercrime cases, very often law enforcement agencies tend to miss the forest for the trees.

Who cares if security researchers will lose access to WHOIS data, anyway? To borrow an assertion from the Internet Governance article, “maybe it’s high time for security researchers and businesses that harvest personal information from WHOIS on an industrial scale to refine and remodel their research methods and business models.”

This is an alluring argument. After all, the technology and security industries claim to be based on innovation. But consider carefully how anti-virus, anti-spam or firewall technologies currently work. The unfortunate reality is that these technologies are still mostly powered by humans, and those humans rely heavily on access to key details about domain reputation and ownership history.

Those metrics for reputation weigh a host of different qualities, but a huge component of that reputation score is determining whether a given domain or Internet address has been connected to any other previous scams, spams, attacks or other badness. We can argue about whether this is the best way to measure reputation, but it doesn’t change the prospect that many of these technologies will in all likelihood perform less effectively after WHOIS records start being heavily redacted.

Don’t advances in artificial intelligence and machine learning obviate the need for researchers to have access to WHOIS data?

This sounds like a nice idea, but again it is far removed from current practice. Ask anyone who regularly uses WHOIS data to determine reputation or to track and block malicious online threats and I’ll wager you will find the answer is that these analyses are still mostly based on manual lookups and often thankless legwork. Perhaps such trendy technological buzzwords will indeed describe the standard practice of the security industry at some point in the future, but in my experience this does not accurately depict the reality today.

Okay, but Internet addresses are pretty useful tools for determining reputation. The sharing of IP addresses tied to cybercriminal operations isn’t going to be impacted by the GDPR, is it? 

That depends on the organization doing the sharing. I’ve encountered at least two cases in the past few months wherein European-based security firms have been reluctant to share Internet address information at all in response to the GDPR — based on a perceived (if not overly legalistic) interpretation that somehow this information also might be considered personally identifying data. This reluctance to share such information out of a concern that doing so might land the sharer in legal hot water can indeed have a chilling effect on the important sharing of threat intelligence across borders.

According to the Internet Governance article, “If you need to get in touch with a website’s administrator, you will be able to do so in what is a less intrusive manner of achieving this purpose: by using an anonymized email address, or webform, to reach them (The exact implementation will depend on the registry). If this change is inadequate for your ‘private detective’ activities and you require full WHOIS records, including the personal information, then you will need to declare to a domain name registry your specific need for and use of this personal information. Nominet, for instance, has said that interested parties may request the full WHOIS record (including historical data) for a specific domain and get a response within one business day for no charge.”

I’m sure this will go over tremendously with both the hacked sites used to host phishing and/or malware download pages, as well as those phished by or served with malware in the added time it will take to relay and approve said requests.

According to a Q3 2017 study (PDF) by security firm Webroot, the average lifespan of a phishing site is between four and eight hours. How is waiting 24 hours before being able to determine who owns the offending domain going to be helpful to either the hacked site or its victims? It also doesn’t seem likely that many other registrars will volunteer for this 24-hour turnaround duty — and indeed no others have publicly demonstrated any willingness to take on this added cost and hassle.

I’ve heard that ICANN is pushing for a delay in the GDPR as it relates to WHOIS records, to give the registrar community time to come up with an accreditation system that would grant vetted researchers access to WHOIS records. Why isn’t that a good middle ground?

It might be, if ICANN hadn’t dragged its feet in taking the GDPR seriously until perhaps the past few months. As it stands, the experts I’ve interviewed see little prospect of such a system being ironed out or gaining the necessary traction among the registrar community anytime soon. And most experts I’ve interviewed predict it is likely that the Internet community will still be debating how to create such an accreditation system a year from now.

Hence, it’s not likely that WHOIS records will be anywhere near as useful to researchers in a month or so as they were previously. And this reality will continue for many months to come — if indeed some kind of vetted WHOIS access system is ever agreed upon and put into place.

After I registered a domain name using my real email address, I noticed that address started receiving more spam emails. Won’t hiding email addresses in WHOIS records reduce the overall amount of spam I can expect when registering a domain under my real email address?

That depends on whether you believe any of the responses to the bolded questions above. Will that address be spammed by people who try to lure you into paying them to register variations on that domain, or to entice you into purchasing low-cost Web hosting services from some random or shady company? Probably. That’s exactly what happens to almost anyone who registers a domain name that is publicly indexed in WHOIS records.

The real question is whether redacting all email addresses from WHOIS will result in overall more bad stuff entering your inbox and littering the Web, thanks to reputation-based anti-spam and anti-abuse systems failing to work as well as they did before GDPR kicks in.

It’s worth noting that ICANN created a working group to study this exact issue, which noted that “the appearance of email addresses in response to WHOIS queries is indeed a contributor to the receipt of spam, albeit just one of many.” However, the report concluded that “the Committee members involved in the WHOIS study do not believe that the WHOIS service is the dominant source of spam.”

Do you have something against people not getting spammed, or against better privacy in general? 

To the contrary, I have worked the majority of my professional career to expose those who are doing the spamming and scamming. And I can say without hesitation that an overwhelming percentage of that research has been possible thanks to data included in public WHOIS registration records.

Is the current WHOIS system outdated, antiquated and in need of an update? Perhaps. But scrapping the current system without establishing anything in between while laboring under the largely untested belief that in doing so we will achieve some kind of privacy utopia seems myopic.

If opponents of the current WHOIS system are being intellectually honest, they will make the following argument and stick to it: By restricting access to information currently available in the WHOIS system, whatever losses or negative consequences on security we may suffer as a result will be worth the cost in terms of added privacy. That’s an argument I can respect, if not agree with.

But for the most part that’s not the refrain I’m hearing. Instead, what this camp seems to be saying is that if you’re not on board with the WHOIS changes that will be brought about by the GDPR, then there must be something wrong with you, and in any case here are a bunch of thinly-sourced reasons why the coming changes might not be that bad.

Harebrained Battletech

Post by kris via The Isoblog. »

So two days ago Harebrained Schemes’ Battletech came out. Harebrained Schemes is an Indie Game Dev Studio from Seattle, previously known for the very successful Shadowrun Games.

I found time to play the intro and the first few chapters of the campaign, and the game is exceptionally nice. It feels like one of those overdesigned simulation games where you can minmax the player characters and their gear to the limit, but the computer takes over all the bookkeeping, so the paperwork is not going to drag you down.

There is a good deal of storytelling and RPG content in this — characters have histories and motivations, and the missions in the campaign take care to track and portray the factions appropriately. There is betrayal and loyalty, and there are plot twists and reveals.



But of course the game centers on the crunchy bits. I mean, even the Battletech books have been military sci-fi in which the premise of the universe is that no side ever gained an advantage in a millennium-long galactic war, and in which soldiers of fortune end up fighting for any side of the ever-changing factions.

And the game does a good job of those crunchy bits: The mechs and their equipment handle clearly differently, and need to be led into battle differently. Environments matter, too, and adjustments in equipment and tactics make the difference between success and ammo explosions due to overheating.

Internally, the game is a simulationist’s dream: you can perform called shots, and the game keeps track of armor per body part of the mech and knows where ammo is being stored. I had a situation where a hit on a body part set off an explosion in an ammo store behind the armor, which triggered a cascade of internal explosions in neighboring compartments, which ultimately felled the enemy mech, and that was the result of internal damage tracking and bookkeeping by the game. But all of that happens in the background, and you may or may not care about it – it’s a choice.

Similarly, you can enjoy your Mech’s heat exchangers glowing orange and the water it stands in steaming, or you can open a detail view and track internal heat, armor state and other stats in detail for each move or proposed attack – again, it’s a choice.

So it’s possible to play this at the (in Sid Meier terms) Starships level, or at the Civ VI level. Either way, the game delivers – nice graphics and turn-based tactical military simulation wrapped in a trade simulation game, in which you run a commercial soldiers-of-fortune operation, with a “revenge for my house and restore the rightful ruler” story woven into the background.

Definitely a buy. Goes well with a soundtrack by Sabaton.

BATTLETECH on Steam
Harebrained Schemes
39.99 EUR

Understanding The Scheduler | BSD Now 243

Post by Jupiter Broadcasting via BSD Now Video Feed »

OpenBSD 6.3 and DragonflyBSD 5.2 are released, bug fix for disappearing files in OpenZFS on Linux (and only Linux), understanding the FreeBSD CPU scheduler, NetBSD on RPI3, thoughts on being a committer for 20 years, and 5 reasons to use FreeBSD in 2018.


DDoS-for-Hire Service Webstresser Dismantled

Post by BrianKrebs via Krebs on Security »

Authorities in the U.S., U.K. and the Netherlands on Tuesday took down popular online attack-for-hire service WebStresser.org and arrested its alleged administrators. Investigators say that prior to the takedown, the service had more than 136,000 registered users and was responsible for launching somewhere between four and six million attacks over the past three years.

The action, dubbed “Operation Power Off,” targeted WebStresser.org (previously Webstresser.co), one of the most active services for launching point-and-click distributed denial-of-service (DDoS) attacks. WebStresser was one of many so-called “booter” or “stresser” services — virtual hired muscle that anyone can rent to knock nearly any website or Internet user offline.

Webstresser.org (formerly Webstresser.co), as it appeared in 2017.
“The damage of these attacks is substantial,” reads a statement from the Dutch National Police in a Reddit thread about the takedown. “Victims are out of business for a period of time, and spend money on mitigation and on (other) security measures.”

In a separate statement released this morning, Europol — the law enforcement agency of the European Union — said “further measures were taken against the top users of this marketplace in the Netherlands, Italy, Spain, Croatia, the United Kingdom, Australia, Canada and Hong Kong.” The servers powering WebStresser were located in Germany, the Netherlands and the United States, according to Europol.

The U.K.’s National Crime Agency said WebStresser could be rented for as little as $14.99, and that the service allowed people with little or no technical knowledge to launch crippling DDoS attacks around the world.

Neither the Dutch nor U.K. authorities would say who was arrested in connection with this takedown. But according to information obtained by KrebsOnSecurity, the administrator of WebStresser allegedly was a 19-year-old from Prokuplje, Serbia named Jovan Mirkovic.

Mirkovic, who went by the hacker nickname “m1rk,” also used the alias “Mirkovik Babs” on Facebook where for years he openly discussed his role in programming and ultimately running WebStresser. The last post on Mirkovic’s Facebook page, dated April 3 (the day before the takedown), shows the young hacker sipping what appears to be liquor while bathing. Below that image are dozens of comments left in the past few hours, most of them simply, “RIP.”





A story in the Serbian daily news site Blic.rs notes that two men from Serbia were arrested in conjunction with the WebStresser takedown; they are named only as “MJ” (Jovan Mirkovik) and “D.V.”, aged 19, from Ruma.

Mirkovik’s fake Facebook page (Mirkovik Babs) includes countless mentions of another Webstresser administrator named “Kris” and includes a photograph of a tattoo that Kris got in 2015. That same tattoo is shown on the Facebook profile of a Kristian Razum from Zapresic, Croatia. According to the press releases published today, one of the administrators arrested was from Croatia.

Multiple sources are now pointing to other booter businesses that were reselling WebStresser’s service but which are no longer functional as a result of the takedown, including powerboot[dot]net, defcon[dot]pro, ampnode[dot]com, ripstresser[dot]com, fruitstresser[dot]com, topbooter[dot]com, freebooter[dot]co and rackstress[dot]pw.

Tuesday’s action against WebStresser is the latest such takedown to target both owners and customers of booter services. Many booter service operators apparently believe (or at least hide behind) a wordy “terms of service” agreement that all customers must acknowledge, under the assumption that somehow this absolves them of any sort of liability for how their customers use the service — regardless of how much hand-holding and technical support booter service administrators offer customers.

In October the FBI released an advisory warning that the use of booter services is punishable under the Computer Fraud and Abuse Act, and may result in arrest and criminal prosecution.

In 2016, authorities in Israel arrested two 18-year-old men accused of running vDOS, until then the most popular and powerful booter service on the market. Their arrests came within hours of a story at KrebsOnSecurity that named the men and detailed how their service had been hacked.

Many in the hacker community have criticized authorities for targeting booter service administrators and users and for not pursuing what they perceive as more serious cybercriminals, noting that the vast majority of both groups are young men under the age of 21. In its Reddit thread, the Dutch Police addressed this criticism head-on, saying Dutch authorities are working on a new legal intervention called “Hack_Right,” a diversion program intended for first-time cyber offenders.

“Prevention of re-offending by offering a combination of restorative justice, training, coaching and positive alternatives is the main aim of this project,” the Dutch Police wrote. “See page 24 of the 5th European Cyber Security Perspectives and stay tuned on our THTC twitter account #HackRight! AND we are working on a media campaign to prevent youngsters from starting to commit cyber crimes in the first place. Expect a launch soon.”

In the meantime, it’s likely we’ll soon see the launch of yet more booter services. According to reviews and sales threads at stresserforums[dot]net — a marketplace for booter buyers and sellers — there are dozens of other booter services in operation, with new ones coming online almost every month.

Some of my thoughts on comments in general

Post by Flameeyes via Flameeyes's Weblog »

One of the hardest points for me to make when I talk to people about my blog is how important comments are to me. I don’t mean comments in source code as documentation, but comments on the posts themselves.

You may remember that one of the less appealing compromises I made when I moved to Hugo was accepting to host the comments on Disqus. A few people complained when I did that, because Disqus is a vendor lock-in. That’s true in more ways than one may imagine.

It’s not just that you are tied into a platform that is difficult to move out of — it’s that there is no way to move out of it, as it is. Disqus does provide the ability to download a copy of all the comments from your site, but they don’t guarantee that this will be available: if you have too many, they may just refuse to let you download them.

And even if you manage to download the comments, you’ll have a fun time trying to do anything useful with them: Disqus does not let you re-import them, say into a different account, as they explicitly don’t allow that format to be imported. Nor does WordPress: when I moved my blog I had to hack up a script that took the Disqus export format and a WRX dump of the blog (which is just a beefed-up RSS feed), and produced a third file, attaching the Disqus comments to the WRX as WordPress would have exported them. This was tricky, but it resolved the problem, and now all the comments are on the WordPress platform, allowing me to move them as needed.
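
For the curious, the approach can be sketched roughly like this (not the original script; the element and namespace names here follow the current Disqus export and WordPress 1.2 WRX formats, and a real run would map more comment fields and register the WRX’s other namespaces):

    import xml.etree.ElementTree as ET

    DSQ = "{http://disqus.com}"
    DSQI = "{http://disqus.com/disqus-internals}"
    WP = "{http://wordpress.org/export/1.2/}"
    ET.register_namespace("wp", "http://wordpress.org/export/1.2/")

    # Index the Disqus export: <thread> maps an internal id to the post URL,
    # and each <post> (a comment) points back to a thread by that id.
    dsq = ET.parse("disqus-export.xml").getroot()
    url_of = {t.get(DSQI + "id"): t.findtext(DSQ + "link")
              for t in dsq.findall(DSQ + "thread")}
    comments = {}
    for post in dsq.findall(DSQ + "post"):
        tid = post.find(DSQ + "thread").get(DSQI + "id")
        comments.setdefault(url_of.get(tid), []).append(post)

    # Graft each comment onto the matching <item> of the WRX dump,
    # in the shape WordPress itself would have exported.
    wrx = ET.parse("blog.wrx.xml")
    for item in wrx.getroot().iter("item"):
        for n, post in enumerate(comments.get(item.findtext("link"), []), 1):
            c = ET.SubElement(item, WP + "comment")
            ET.SubElement(c, WP + "comment_id").text = str(n)
            ET.SubElement(c, WP + "comment_author").text = \
                post.findtext(DSQ + "author/" + DSQ + "name")
            ET.SubElement(c, WP + "comment_date").text = \
                post.findtext(DSQ + "createdAt", "").replace("T", " ").rstrip("Z")
            ET.SubElement(c, WP + "comment_content").text = \
                post.findtext(DSQ + "message")
            ET.SubElement(c, WP + "comment_approved").text = "1"

    wrx.write("blog-with-comments.xml", xml_declaration=True, encoding="utf-8")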

Many people pointed out that there are at least a couple of open-source replacements for Disqus — but when I looked into them I was seriously afraid they wouldn’t really scale that well for my blog. Even WordPress itself sometimes appears not to know how to deal with a blog of more than 2,400 entries. The WRX file is, by itself, bigger than the maximum accepted by the native WordPress import tool — luckily the Automattic-hosted service has higher limits.

One of the other advantages of having moved away from Disqus is that the comments now render without needing any JavaScript or third-party service, which makes them searchable by search engines and, most importantly, preserves them in the Internet Archive!

But Disqus is not the only thing that disappoints me. I have a personal dislike for the design, and business model, of Hacker News and Reddit. It may be a bit of a situation of “old man yells at cloud”, but I find that these two websites, much more than Facebook, LinkedIn and other social media, are designed to take the conversation away from the authors.

Let me explain with an example. When I posted about Telegram and IPv6 last year, the post was submitted to Reddit, which I found out because I have a self-stalking recipe on IFTTT that informs me if any link to my sites gets posted there. And people commented on that — some missing the point and some providing useful information.

But if you read my blog post you won’t know about that at all, because the comments are locked into Reddit, and if Reddit were to disappear the day after tomorrow there would be no history of those comments at all. And this is without going into the issue of the “karma” going to the reposter (whom I know in this case), rather than to the author — who is actually discouraged in most communities from submitting their own writing!

This applies in the same or similar fashion to other websites, such as Hacker News, Slashdot, and… is Digg still around? I lost track.

I also find that moving the comments off-post makes people nastier: instead of asking questions, ready to understand and talk things through with the author, they assume the post exists in isolation, and that the author knows nothing of what they are talking about. And I’m sure that at least a good chunk of that is because they don’t expect the author to be reading them — they know full well they are “talking behind their back”.

I have had the pleasure of meeting a lot of people on the Internet over time, mostly through comments on my blog or others’. I have learnt new things and been given suggestions, solutions, or simply new ideas of what to poke at. I treasure the comments and the conversation they foster. I hope that we’ll have more rather than fewer of them in the future.

Transcription Service Leaked Medical Records

Post by BrianKrebs via Krebs on Security »

MEDantex, a Kansas-based company that provides medical transcription services for hospitals, clinics and private physicians, took down its customer Web portal last week after being notified by KrebsOnSecurity that it was leaking sensitive patient medical records — apparently for thousands of physicians.

On Friday, KrebsOnSecurity learned that the portion of MEDantex’s site which was supposed to be a password-protected portal physicians could use to upload audio-recorded notes about their patients was instead completely open to the Internet.

What’s more, numerous online tools intended for use by MEDantex employees were exposed to anyone with a Web browser, including pages that allowed visitors to add or delete users, and to search for patient records by physician or patient name. No authentication was required to access any of these pages.

This exposed administrative page from MEDantex’s site granted anyone complete access to physician files, as well as the ability to add and delete authorized users.
Several MEDantex portal pages left exposed to the Web suggest that the company recently was the victim of WhiteRose, a strain of ransomware that encrypts a victim’s files unless and until a ransom demand is paid — usually in the form of some virtual currency such as bitcoin.



Contacted by KrebsOnSecurity, MEDantex founder and chief executive Sreeram Pydah confirmed that the Wichita, Kansas-based transcription firm recently rebuilt its online servers after suffering a ransomware infestation. Pydah said the MEDantex portal was taken down for nearly two weeks, and that it appears the glitch exposing patient records to the Web was somehow introduced in that rebuild.

“There was some ransomware injection [into the site], and we rebuilt it,” Pydah said, just minutes before disabling the portal (which remains down as of this publication). “I don’t know how they left the documents in the open like that. We’re going to take the site down and try to figure out how this happened.”

It’s unclear exactly how many patient records were left exposed on MEDantex’s site. But one of the main exposed directories was named “/documents/userdoc,” and it included more than 2,300 physicians listed alphabetically by first initial and last name. Drilling down into each of these directories revealed a varying number of patient records — displayed and downloadable as Microsoft Word documents and/or raw audio files.



Although many of the exposed documents appear to be quite recent, some of the records dated as far back as 2007. It’s also unclear how long the data was accessible, but this Google cache of the MEDantex physician portal seems to indicate it was wide open on April 10, 2018.

Among the clients listed on MEDantex’s site are New York University Medical Center; San Francisco Multi-Specialty Medical Group; Jackson Hospital in Montgomery, Ala.; Allen County Hospital in Iola, Kan.; Green Clinic Surgical Hospital in Ruston, La.; Trillium Specialty Hospital in Mesa and Sun City, Ariz.; Cooper University Hospital in Camden, N.J.; Sunrise Medical Group in Miami; the Wichita Clinic in Wichita, Kan.; the Kansas Spine Center; the Kansas Orthopedic Center; and Foundation Surgical Hospitals nationwide. MEDantex’s site states these are just some of the healthcare organizations partnering with the company for transcription services.

Unfortunately, the incident at MEDantex is far from an anomaly. A study of data breaches released this month by Verizon Enterprise found that nearly a quarter of all breaches documented by the company in 2017 involved healthcare organizations.

Verizon says ransomware attacks accounted for 85 percent of all malware in healthcare breaches last year, and that healthcare is the only industry in which the threat from the inside is greater than that from outside.

“Human error is a major contributor to those stats,” the report concluded.

Source: Verizon Business 2018 Data Breach Investigations Report.
According to a story at BleepingComputer, a security news and help forum that specializes in covering ransomware outbreaks, WhiteRose was first spotted about a month ago. BleepingComputer founder Lawrence Abrams says it’s not clear how this ransomware is being distributed, but that reports indicate it is being manually installed by hacking into Remote Desktop services.

Fortunately for WhiteRose victims, this particular strain of ransomware is decryptable without the need to pay the ransom.

“The good news is this ransomware appears to be decryptable by Michael Gillespie,” Abrams wrote. “So if you become infected with WhiteRose, do not pay the ransom, and instead post a request for help in our WhiteRose Support & Help topic.”

Ransomware victims may also be able to find assistance in unlocking data without paying from nomoreransom.org.

KrebsOnSecurity would like to thank India-based cybersecurity startup Banbreach for the heads up about this incident.
Top

Donaldism is what happens, …

Post by kris via The Isoblog. »

Donaldism (Wikipedia de en) is what happens when you science the shit out of the work of Carl Barks and Erika Fuchs, using the framework of your chosen profession.
This particular work deals with The Law and Ducktown, and reveals Ducktown to be an actual Sin City.

The article in the English Wikipedia is actually quite bad, and in parts outright wrong. The German Wikipedia nails it. That’s rare.

“Not suitable for children.”
Top

TIL there is no Google Spaces any more

Post by kris via The Isoblog. »

TIL Google shut down Spaces on April 17th. That was in 2017, though.
I just had a discussion about a bunch of people getting excited about “Chat”, Google’s upcoming Chat client based on Joyn. And we listed Talk, Hangouts, Allo and Duo, as well as Spaces and Wave, as interactive chat-like things that nobody uses.

Joyn is based on a standard from 2008, initiated by the late Nokia. It’s currently on its fifth major revision and has zero adoption, zero encryption and opaque pricing – it’s implemented by each telco, with each telco providing its own implementation, gateways, and cost model, which may mean charging only for the bytes or, like SMS, for each message element. Also, each telco’s message server intercepts all messages in the clear, by design.

Anyway, speaking of Spaces, I checked the site and learned that it was shut down on April 17th, 2017. One year ago. Literally nobody noticed.
Top

Perl @INC – customizing it for FreeBSD

Post by Dan Langille via Dan Langille's Other Diary »

This post is all about creating technical debt. If you accept that, go for it. I’m avoiding porting my FreshPorts scripts into SITEPERL. Why? I’ll migrate them to SITEPERL after BSDCan & PGCon. Right now, I need to get the servers upgraded from Perl 5.24 to Perl 5.26, because 5.24 is deprecated.

FreshPorts uses Perl for processing incoming commits and for various administrative backend tasks. Everything on the front end (website) is PHP and HTML. Perl is great at text processing, and text processing is at the heart of what drives FreshPorts.

Lately, the daily security run output emails from my FreeBSD hosts have been mentioning:

perl5-5.24.4: Tag: expiration_date Value: 2019-05-09
perl5-5.24.4: Tag: deprecated Value: Support ends three years after .0 release, please upgrade to a more recent version of Perl
I’ve been using Perl 5.24 for a while. It is the default on my Poudriere builds.

I decided to upgrade the old FreshPorts server to Perl 5.26, which has been the default on FreeBSD since 19 April 2018.

Immediately after upgrading, I started getting this problem:

Can't locate port.pm in @INC (you may need to install the port module) (@INC contains: /usr/local/lib/perl5/site_perl/mach/5.26 /usr/local/lib/perl5/site_perl /usr/local/lib/perl5/5.26/mach /usr/local/lib/perl5/5.26) at hourly_stats.pl line 10.
BEGIN failed--compilation aborted at hourly_stats.pl line 10.

Can't locate db_utils.pm in @INC (you may need to install the db_utils module) (@INC contains: /usr/local/lib/perl5/site_perl/mach/5.26 /usr/local/lib/perl5/site_perl /usr/local/lib/perl5/5.26/mach /usr/local/lib/perl5/5.26) at test-master-port.pl line 15.
BEGIN failed--compilation aborted at test-master-port.pl line 15.

Can't locate port.pm in @INC (you may need to install the port module) (@INC contains: /usr/local/lib/perl5/site_perl/mach/5.26 /usr/local/lib/perl5/site_perl /usr/local/lib/perl5/5.26/mach /usr/local/lib/perl5/5.26) at /usr/websites/freshports.org/scripts/daily_rendering_times.pl line 10.
BEGIN failed--compilation aborted at /usr/websites/freshports.org/scripts/daily_rendering_times.pl line 10.
Those are from three separate scripts.

Options

I started investigating and found a few options:

  1. I could alter all the Perl scripts with

    use lib 'path.to.files'

    But that hardcodes the location in the script. Less than ideal.
  2. I could set PERL5LIB in an environment variable.
  3. I could use -I when the scripts are invoked.
I decided, based on suggestions from mat_ on IRC, to use the SITECUSTOMIZE option when building Perl.

Enabling SITECUSTOMIZE

I added this to the supernew-make.conf file for my poudriere build:

lang_perl5.26_SET+=SITECUSTOMIZE
Then I rebuilt Perl and installed it. Next, I searched for perl SITECUSTOMIZE and found the INSTALL docs for Perl 5.26. What I learned there: when enabled, this makes perl run $sitelibexp/sitecustomize.pl before anything else. That script can then be set up to add additional entries to @INC.

Great. So what do I have now? This is how you see @INC for your perl:

$ perl -E 'map{say}@INC'
/usr/local/lib/perl5/site_perl/mach/5.26
/usr/local/lib/perl5/site_perl
/usr/local/lib/perl5/5.26/mach
/usr/local/lib/perl5/5.26
As ChoHag pointed out: ‘say for @INC’ is a simpler idiom than ‘map{say}@INC’. Which means this also works:

$ perl -E 'say for @INC'
/usr/local/lib/perl5/site_perl/mach/5.26
/usr/local/lib/perl5/site_perl
/usr/local/lib/perl5/5.26/mach
/usr/local/lib/perl5/5.26
/usr/websites/freshports.org/scripts
You can also use this command to see the full configuration of the Perl installed on your system:

$ env -i perl -V
Summary of my perl5 (revision 5 version 26 subversion 2) configuration:
   
  Platform:
    osname=freebsd
    osvers=11.1-release-p1
    archname=amd64-freebsd-thread-multi
    uname='freebsd 111amd64-default-supernews-job-01 11.1-release-p1 freebsd 11.1-release-p1 amd64 '
    config_args='-sde -Dprefix=/usr/local -Dlibperl=libperl.so.5.26.2 -Darchlib=/usr/local/lib/perl5/5.26/mach -Dprivlib=/usr/local/lib/perl5/5.26 -Dman3dir=/usr/local/lib/perl5/5.26/perl/man/man3 -Dman1dir=/usr/local/lib/perl5/5.26/perl/man/man1 -Dsitearch=/usr/local/lib/perl5/site_perl/mach/5.26 -Dsitelib=/usr/local/lib/perl5/site_perl -Dscriptdir=/usr/local/bin -Dsiteman3dir=/usr/local/lib/perl5/site_perl/man/man3 -Dsiteman1dir=/usr/local/lib/perl5/site_perl/man/man1 -Ui_malloc -Ui_iconv -Uinstallusrbinperl -Dusenm=n -Dcc=cc -Duseshrplib -Dinc_version_list=none -Dcf_by=perl -Dcf_email=perl@FreeBSD.org -Dcf_time=Sat Apr 14 11:27:49 UTC 2018 -Alddlflags=-L/wrkdirs/usr/ports/lang/perl5.26/work/perl-5.26.2 -L/usr/local/lib/perl5/5.26/mach/CORE -lperl -Dshrpldflags=$(LDDLFLAGS:N-L/wrkdirs/usr/ports/lang/perl5.26/work/perl-5.26.2:N-L/usr/local/lib/perl5/5.26/mach/CORE:N-lperl) -Wl,-soname,$(LIBPERL:R) -Doptimize=-O2 -pipe  -fstack-protector -fno-strict-aliasing -Dusedtrace -Ui_gdbm -Dusemultiplicity=y -Duse64bitint -Dusemymalloc=n -Dusesitecustomize -Dusethreads=y'
    hint=recommended
    useposix=true
    d_sigaction=define
    useithreads=define
    usemultiplicity=define
    use64bitint=define
    use64bitall=define
    uselongdouble=undef
    usemymalloc=n
    default_inc_excludes_dot=define
    bincompat5005=undef
  Compiler:
    cc='cc'
    ccflags ='-DHAS_FPSETMASK -DHAS_FLOATINGPOINT_H -fno-strict-aliasing -pipe -fstack-protector-strong -I/usr/local/include -D_FORTIFY_SOURCE=2'
    optimize='-O2 -pipe -fstack-protector -fno-strict-aliasing'
    cppflags='-DHAS_FPSETMASK -DHAS_FLOATINGPOINT_H -fno-strict-aliasing -pipe -fstack-protector-strong -I/usr/local/include'
    ccversion=''
    gccversion='4.2.1 Compatible FreeBSD Clang 4.0.0 (tags/RELEASE_400/final 297347)'
    gccosandvers=''
    intsize=4
    longsize=8
    ptrsize=8
    doublesize=8
    byteorder=12345678
    doublekind=3
    d_longlong=define
    longlongsize=8
    d_longdbl=define
    longdblsize=16
    longdblkind=3
    ivtype='long'
    ivsize=8
    nvtype='double'
    nvsize=8
    Off_t='off_t'
    lseeksize=8
    alignbytes=8
    prototype=define
  Linker and Libraries:
    ld='cc'
    ldflags ='-pthread -Wl,-E  -fstack-protector-strong -L/usr/local/lib'
    libpth=/usr/lib /usr/local/lib /usr/bin/../lib/clang/4.0.0/lib /usr/lib
    libs=-lpthread -lm -lcrypt -lutil
    perllibs=-lpthread -lm -lcrypt -lutil
    libc=
    so=so
    useshrplib=true
    libperl=libperl.so.5.26.2
    gnulibc_version=''
  Dynamic Linking:
    dlsrc=dl_dlopen.xs
    dlext=so
    d_dlsymun=undef
    ccdlflags='  -Wl,-R/usr/local/lib/perl5/5.26/mach/CORE'
    cccdlflags='-DPIC -fPIC'
    lddlflags='-shared  -L/usr/local/lib/perl5/5.26/mach/CORE -lperl -L/usr/local/lib -fstack-protector-strong'


Characteristics of this binary (from libperl): 
  Compile-time options:
    HAS_TIMES
    MULTIPLICITY
    PERLIO_LAYERS
    PERL_COPY_ON_WRITE
    PERL_DONT_CREATE_GVSV
    PERL_IMPLICIT_CONTEXT
    PERL_MALLOC_WRAP
    PERL_OP_PARENT
    PERL_PRESERVE_IVUV
    USE_64_BIT_ALL
    USE_64_BIT_INT
    USE_ITHREADS
    USE_LARGE_FILES
    USE_LOCALE
    USE_LOCALE_COLLATE
    USE_LOCALE_CTYPE
    USE_LOCALE_NUMERIC
    USE_LOCALE_TIME
    USE_PERLIO
    USE_PERL_ATOF
    USE_REENTRANT_API
    USE_SITECUSTOMIZE
  Built under freebsd
  @INC:
    /usr/local/lib/perl5/site_perl/mach/5.26
    /usr/local/lib/perl5/site_perl
    /usr/local/lib/perl5/5.26/mach
    /usr/local/lib/perl5/5.26
That USE_SITECUSTOMIZE is key to our cunning plan.

Now, back to those INSTALL docs.

Where does it go?

My next problem was: where is $sitelibexp? That is the question I put to #perl on IRC and haarg gave me this command:

$ perl -V:sitelibexp
sitelibexp='/usr/local/lib/perl5/site_perl';
OK, good, so I need /usr/local/lib/perl5/site_perl/sitecustomize.pl. Based on something I found while researching this topic, I tried this:

$ cat /usr/local/lib/perl5/site_perl/sitecustomize.pl 
push @INC, "/usr/websites/freshports.org/scripts";
Now look at @INC:

$ perl -E 'say for @INC'
/usr/local/lib/perl5/site_perl/mach/5.26
/usr/local/lib/perl5/site_perl
/usr/local/lib/perl5/5.26/mach
/usr/local/lib/perl5/5.26
/usr/websites/freshports.org/scripts
SCORE!

The FreshPorts scripts are happy for now. I’ll upgrade to 5.26 after the old server has been running it carefree for a few days.
Top

Updates

Post by Rafael Martins via Rafael Martins »

Since I haven't written anything here for almost 2 years, I think it is time for some short updates:

  • I left RedHat and moved to Berlin, Germany, in March 2017.
  • The series of posts about balde was stopped. The first post got a lot of Hacker News attention, and I will come back with it as soon as I can implement the required changes in the framework. Not going to happen very soon, though.
  • I've been spending most of my free time with flight simulation. You can expect a few related posts soon.
  • I left the Gentoo GSoC administration this year.
  • blogc is the only project that is currently getting some frequent attention from me, as I use it for most of my websites. Check it out! ;-)
That's all for now.
Top

Mobile Web, Internet of Things, and the Geeks

Post by Flameeyes via Flameeyes's Weblog »

I’m a geek. It’s not just the domain I used to use, it’s the truth at the core of who I am. I’m also a gadgeteer: if there’s a new gadget that may do something I’m interested in, and I can afford it, I’ll have it (sometimes even if I can barely afford it). I love “toys” and novelties, and I don’t mind if they are a bit on the rough side, “some assembly required”.

All of this, though, is sometimes hard to reconcile with the absolute vitriol I see online, among the communities that include geeks, free software activists, privacy activists and so on.

I sometimes still hear a lot of people complaining about websites optimizing for mobile, sometimes to the disadvantage of 32″ 4K HiDPI monitors — despite the fact that the latter are definitely in a minority of use cases, while the former is the new reality of web access. I do understand that sometimes it’s bothersome just how messy some websites become when they decide to focus primarily on mobile, but there are plenty of cases in which a “mobile-first” point of view is just what people are more likely to need, and ignoring this can be actively harmful.

Let me try to build up an example, which may sound a bit contrived but I would expect to be very realistic.

As you know, I now live in London, and things here are different than in Ireland. In particular, I can no longer just drop by the pharmacy every other week and go “Just refill me on this stuff please”. Instead I need to order the prescription from the pharmacy by going online, to the portal of the service provider my surgery contracted, and filling in a form. Then I need to note down when the medication will be available and go pick it up.

The service provider that my surgery is using did not do a particularly good job in the UI/UX of their product. The website is definitely not mobile optimised, it does not integrate with anything and does not send email reminders for anything, let alone ICS attachments. And when I spoke about that with friends and colleagues, reactions were mixed between the «Why would they spend time on mobile? It’s just fancy stuff» and «Only geeks would care about receiving ICS attachments».

I disagree because the thing is, I can definitely see myself taking the last pills from the blister while on vacation and remembering I need to order more — but I probably don’t have my computer at hand. Being able to just go on the mobile website (or app) and ordering them on the fly can easily be a lifesaver, particularly for people who don’t usually travel with their laptop at all.

And similarly, if I were to ask people about the ICS attachments themselves, they would probably wonder what the heck I’m talking about. But ask people if they’d appreciate their calendar showing when they are meant to pick up their prescription, or when they have an appointment with their GP, and they probably would go “Yes, please!”

Let me take another example: the Internet of Things. Of course it’s a buzzword nowadays, but it does not come out of nowhere. The concept of home automation (which in Italian has gone by the word “domotica” for well over 20 years) is not new, and it’s not just the trend of the year.

While there are indeed a number of “connected things” ideas that make me raise an eyebrow or frown and ask “what the heck were they thinking?”, dissing the ideas tout court just because they are, well, “connected things” is, in my opinion, short-sighted.

I don’t remember if it was Samsung, LG, or whoever else, that first brought to market a fridge with an Internet-connected webcam, so that you can check on what you have inside. I heard people complain that it’s just a gimmick and for the lazy — but I could definitely see myself using it. See something on sale at the supermarket, which you didn’t put on the list? Do you remember if you have enough space to put it in the fridge, or if it would be wasted?

Plenty of the solutions that revolve around the Internet of Things are indeed easy to dismiss as “lazy” – I would love to have a washing machine that could be started while I’m on the bus because I forgot to do so before leaving the apartment – but at the same time, they are very valuable for people who do have real problems with remembering things later on. It does not strictly have to be available from the phone in the middle of London — if my phone could, once I get home, remind me “The dishwasher is done. The washing machine is done. You’re out of milk. You need to take out the trash”, that would make my day.

But instead of saying “Hey folks, we need better, safer products!”, I see lots of geeks just going “That’s the Internet of Shit for you, why would you want your coffee machine connected to the Internet?” — like this was never dreamed of by geeks. Or insisting that, since “Internet of Things” is a marketing term, it is cursed and everything that relates to it is “ungeek”.

From my point of view, a lot of these people are those that are now looking down on iPhone users, but were sending email instead of text messages back when you had to use WAP to access anything mobile.

Stop blaming the users. Accept that you may not like or have a need for something but someone else might want it anyway. And if you really want to help, start figuring out how we can make things more secure by default instead of making fun of those that get burnt by the latest vulnerability.
Top

Is Facebook’s Anti-Abuse System Broken?

Post by BrianKrebs via Krebs on Security »

Facebook has built some of the most advanced algorithms for tracking users, but when it comes to acting on user abuse reports about Facebook groups and content that clearly violate the company’s “community standards,” the social media giant’s technology appears to be woefully inadequate.

Last week, Facebook deleted almost 120 groups totaling more than 300,000 members. The groups were mostly closed — requiring approval from group administrators before outsiders could view the day-to-day postings of group members.

However, the titles, images and postings available on each group’s front page left little doubt about their true purpose: Selling everything from stolen credit cards, identities and hacked accounts to services that help automate things like spamming, phishing and denial-of-service attacks for hire.

To its credit, Facebook deleted the groups within just a few hours of KrebsOnSecurity sharing via email a spreadsheet detailing each group, which concluded that the average length of time the groups had been active on Facebook was two years. But I suspect that the company took this extraordinary step mainly because I informed them that I intended to write about the proliferation of cybercrime-based groups on Facebook.

That story, Deleted Facebook Cybercrime Groups had 300,000 Members, ended with a statement from Facebook promising to crack down on such activity and instructing users on how to report groups that violate its community standards.

In short order, some of the groups I reported that were removed re-established themselves within hours of Facebook’s action. Instead of contacting Facebook’s public relations arm directly, I decided to report those resurrected groups and others using Facebook’s stated process. Roughly two days later I received a series of replies saying that Facebook had reviewed my reports but that none of the groups were found to have violated its standards. Here’s a snippet from those replies:



Perhaps I should give Facebook the benefit of the doubt: Maybe my multiple reports one after the other triggered some kind of anti-abuse feature that is designed to throttle those who would seek to abuse it to get otherwise legitimate groups taken offline — much in the way that pools of automated bot accounts have been known to abuse Twitter’s reporting system to successfully sideline accounts of specific targets.

Or it could be that I simply didn’t click the proper sequence of buttons when reporting these groups. The closest matches I could find in Facebook’s abuse reporting system were “Doesn’t belong on Facebook” and “Purchase or sale of drugs, guns or regulated products.” There was/is no option for “selling hacked accounts, credit cards and identities,” or anything of that sort.

In any case, one thing seems clear: Naming and shaming these shady Facebook groups via Twitter seems to work better right now for getting them removed from Facebook than using Facebook’s own formal abuse reporting process. So that’s what I did on Thursday. Here’s an example:



Within minutes of my tweeting about this, the group was gone. I also tweeted about “Best of the Best,” which was selling accounts from many different e-commerce vendors, including Amazon and eBay:



That group, too, was nixed shortly after my tweet. And so it went for other groups I mentioned in my tweetstorm today. But in response to that flurry of tweets about abusive groups on Facebook, I heard from dozens of other Twitter users who said they’d received the same “does not violate our community standards” reply from Facebook after reporting other groups that clearly flouted the company’s standards.

Pete Voss, Facebook’s communications manager, apologized for the oversight.

“We’re sorry about this mistake,” Voss said. “Not removing this material was an error and we removed it as soon as we investigated. Our team processes millions of reports each week, and sometimes we get things wrong. We are reviewing this case specifically, including the user’s reporting options, and we are taking steps to improve the experience, which could include broadening the scope of categories to choose from.”

Facebook CEO and founder Mark Zuckerberg testified before Congress last week in response to allegations that the company wasn’t doing enough to halt the abuse of its platform for things like fake news, hate speech and terrorist content. It emerged that Facebook already employs 15,000 human moderators to screen and remove offensive content, and that it plans to hire another 5,000 by the end of this year.

“But right now, those moderators can only react to posts Facebook users have flagged,” writes Will Knight, for Technologyreview.com.

Zuckerberg told lawmakers that Facebook hopes expected advances in artificial intelligence or “AI” technology will soon help the social network do a better job self-policing against abusive content. But for the time being, as long as Facebook mainly acts on abuse reports only when it is publicly pressured to do so by lawmakers or people with hundreds of thousands of followers, the company will continue to be dogged by a perception that doing otherwise is simply bad for its business model.

Update, 1:32 p.m. ET: Several readers pointed my attention to a Huffington Post story just three days ago, “Facebook Didn’t Seem To Care I Was Being Sexually Harassed Until I Decided To Write About It,” about a journalist whose reports of extreme personal harassment on Facebook were met with a similar response about not violating the company’s Community Standards. That is, until she told Facebook that she planned to write about it.
Top

Updates on Silicon Labs CP2110

Post by Flameeyes via Flameeyes's Weblog »

One month ago I started the yak shave of supporting the Silicon Labs CP2110 with a fully opensource stack, that I can even re-use for glucometerutils.

The first step was deciding how to implement this. While the device itself supports quite a wide range of interfaces, including a GPIO one, I decided that since I’m only going to be able to test and use practically the serial interface, I would at least start with just that. So you’ll probably see the first output as a module for pyserial that implements access to CP2110 devices.

The second step was to find an easy way to test this in a more generic way. Thankfully, Martin Holzhauer, who commented on the original post, linked to an adapter by MakerSpot that uses that chip (the link to the product was lost in the migration to WordPress, sigh), which I then ordered and received a number of weeks later, since it had to come to the US and clear customs through Amazon.

All of this was the easy part, the next part was actually implementing enough of the protocol described in the specification, so that I could actually send and receive data — and that also made it clear that despite the protocol being documented, it’s not as obvious as it might sound — for instance, the specification says that the reports 0x01 to 0x3F are used to send and receive data, but it does not say why there are so many reports… except that it turns out they are actually used to specify the length of the buffer: if you send two bytes, you’ll have to use the 0x02 report, for ten bytes 0x0A, and so on, until the maximum of 63 bytes as 0x3F. This became very clear when I tried sending a long string and the output was impossible to decode.
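To make the length-as-report-ID scheme concrete, here is a minimal sketch of how a write can be framed. This is an illustration using the hidapi Python bindings, not the pyserial module mentioned above; the VID/PID pair is the Silicon Labs default for the CP2110 and may differ on a given adapter:

# Sketch: frame serial data for the CP2110 as HID reports, where the
# report ID (0x01..0x3F) doubles as the chunk length.
# CP2110_VID/CP2110_PID are the assumed Silicon Labs defaults.
import hid

CP2110_VID = 0x10C4
CP2110_PID = 0xEA80

def send_serial(dev, payload):
    for i in range(0, len(payload), 63):
        chunk = payload[i:i + 63]
        # The first byte is the report ID, i.e. the number of data bytes.
        dev.write([len(chunk)] + list(chunk))

dev = hid.device()
dev.open(CP2110_VID, CP2110_PID)
try:
    send_serial(dev, b"hello over the CP2110\r\n")
finally:
    dev.close()

Sending 100 bytes this way produces a 0x3F report followed by a 0x25 report, which matches the framing described above.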

Speaking of decoding, my original intention was to just loop together the CP2110 device with a CH341 I bought a few years ago, and have them loop data among each other to validate that they work. Somehow this plan failed: I can get data from the CH341 into the CP2110 and it decodes fine (using picocom for the CH341, and Silicon Labs own binary for the CP2110), but I can’t seem to get the CH341 to pick up the data sent through the CP2110. I thought it was a bad adapter, but then I connected the output to my Saleae Logic16 and it showed the data fine, so… no idea.

The current status is:

  • I know the CH341 sends out a good signal;
  • I know the CP2110 can receive a good signal from the CH341, with the Silicon Labs software;
  • I know the CP2110 can send a good signal to the Saleae Logic16, both with the Silicon Labs software and my tiny script;
  • I can’t get the CH341 to receive data from the CP2110.
Right now the state is still very much up in the air, and since I’ll be travelling quite a bit without a chance to bring the devices with me, there probably won’t be any news about this for another month or two.

Oh and before I forget, Rich Felker gave me another interesting idea: CUSE (Character Devices in User Space) is a kernel-supported way to “emulate” in user space devices that would usually be implemented in the kernel. And that would be another perfect application for this: if you just need to use a CP2110 as an adapter for something that needs to speak with a serial port, then you can just have a userspace daemon that implements CUSE, and provide a ttyUSB-compatible device, while not requiring short-circuiting the HID and USB-Serial subsystems.
Top

A Sobering Look at Fake Online Reviews

Post by BrianKrebs via Krebs on Security »

In 2016, KrebsOnSecurity exposed a network of phony Web sites and fake online reviews that funneled those seeking help for drug and alcohol addiction toward rehab centers that were secretly affiliated with the Church of Scientology. Not long after the story ran, that network of bogus reviews disappeared from the Web. Over the past few months, however, the same prolific purveyor of these phantom sites and reviews appears to be back at it again, enlisting the help of Internet users and paying people $25-$35 for each fake listing.

Sometime in March 2018, ads began appearing on Craigslist promoting part-time “social media assistant” jobs, in which interested applicants are directed to sign up for positions at seorehabs[dot]com. This site promotes itself as “leaders in addiction recovery consulting,” explaining that assistants can earn a minimum of $25 just for creating individual Google for Business listings tied to a few dozen generic-sounding addiction recovery center names, such as “Integra Addiction Center,” and “First Exit Recovery.”

The listing on Craigslist.com advertising jobs for creating fake online businesses tied to addiction rehabilitation centers.
Applicants who sign up are given detailed instructions on how to step through Google’s anti-abuse process for creating listings, which include receiving a postcard via snail mail from Google that contains a PIN which needs to be entered at Google’s site before a listing can be created.

Assistants are cautioned not to create more than two listings per street address, but otherwise to use any U.S.-based street address and to leave blank the phone number and Web site for the new business listing.

A screen shot from Seorehabs’ instructions for those hired to create rehab center listings.
In my story Scientology Seeks Captive Converts Via Google Maps, Drug Rehab Centers, I showed how a labyrinthine network of fake online reviews that steered Internet searches toward rehab centers funded by Scientology adherents was set up by TopSeek Inc., which bills itself as a collection of “local marketing experts.” According to LinkedIn, TopSeek is owned by John Harvey, an individual (or alias) who lists his address variously as Sacramento, Calif. and Hawaii.

Although the Web site registration records from registrar giant Godaddy obscure the information for the current owner of seorehabs[dot]com, a historic WHOIS search via DomainTools shows the site was also registered by John Harvey and TopSeek in 2015. Mr. Harvey did not respond to requests for comment. [Full disclosure: DomainTools previously was an advertiser on KrebsOnSecurity].

TopSeek’s Web site says it works with several clients, but most especially Narconon International — an organization that promotes the rather unorthodox theories of Scientology founder L. Ron Hubbard regarding substance abuse treatment and addiction.

As described in Narconon’s Wikipedia entry, Narconon facilities are known not only for attempting to win over new converts to Scientology, but also for treating all substance abuse addictions with a rather bizarre cocktail consisting mainly of vitamins and long hours in extremely hot saunas. Their Wiki entry documents multiple cases of accidental deaths at Narconon facilities, where some addicts reportedly died from overdoses of vitamins or neglect.

A LUCRATIVE RACKET

Bryan Seely, a security expert who has written extensively about the use of fake search listings to conduct online bait-and-switch scams, said the purpose of sites like those that Seorehabs pays people to create is to funnel calls to a handful of switchboards that then sell the leads to rehab centers that have agreed to pay for them. Many rehab facilities will pay hundreds of dollars for leads that may ultimately lead to a new patient. After all, Seely said, some facilities can then turn around and bill insurance providers for thousands of dollars per patient.

Perhaps best known for a stunt in which he used fake Google Maps listings to intercept calls destined for the FBI and U.S. Secret Service, Seely has learned a thing or two about this industry: Until 2011, he worked for an SEO firm that helped to develop and spread some of the same fake online reviews that he is now helping to clean up.

“Mr. Harvey and TopSeek are crowdsourcing the data input for these fake rehab centers,” Seely said. “The phone numbers all go to just a few dedicated call centers, and it’s not hard to see why. The money is good in this game. He sells a call for $50-$100 at a minimum, and the call center then tries to sell that lead to a treatment facility that has agreed to buy leads. Each lead can be worth $5,000 to $10,000 for a patient who has good health insurance and signs up.”

This graph illustrates what happens when someone calls one of these Seorehabs listings. Source: Bryan Seely.
Many of the listings created by Seorehab assistants are tied to fake Google Maps entries that include phony reviews for bogus treatment centers. In the event those listings get suspended by Google, Seorehab offers detailed instructions on how assistants can delete and re-submit listings.



Assistants also can earn extra money writing fake, glowing reviews of the treatment centers:



Below are some of the plainly bogus reviews and listings created in the last month that pimp the various treatment center names and Web sites provided by Seorehabs. It is not difficult to find dozens of other examples of people who claim to have been at multiple Seorehab-promoted centers scattered across the country. For example, “Gloria Gonzalez” supposedly has been treated at no fewer than seven Seorehab-marketed detox locations in five states, penning each review just in the last month:



A reviewer using the name “Tedi Spicer” also promoted at least seven separate rehab centers across the United States in the past month. Getting treated at so many far-flung facilities in just the few months that the domains for these supposed rehab centers have been online would be an impressive feat:



Bring up any of the Web sites for these supposed rehab listings and you’ll notice they all include the same boilerplate text and graphic design. Aside from combing listings created by the reviewers paid to promote the sites, we can find other Seorehab listings just by searching the Web for chunks of text on the sites. Doing so reveals a long list (this is likely far from comprehensive) of domain names registered in the past few months that were all created with hidden registration details and registered via Godaddy.

Seely said he spent a few hours this week calling dozens of phone numbers tied to these rehab centers promoted by TopSeek, and created a spreadsheet documenting his work and results here (Google Sheets).

Seely said while he would never advocate such activity, TopSeek’s fake listings could end up costing Mr. Harvey plenty of money if someone figured out a way to either mass-report the listings as fraudulent or automate calls to the handful of hotlines tied to the listings.

“It would kill his business until he changes all the phone numbers tied to these fake listings, but if he had to do that he’d have to pay people to rebuild all the directories that link to these sites,” he said.

WHAT YOU CAN DO ABOUT FAKE ONLINE REVIEWS

Before doing business with a company you found online, don’t just pick the company that comes up at the top of search results on Google or any other search engine. Unfortunately, that generally guarantees little more than the company is good at marketing.

Take the time to research the companies you wish to hire before booking them for jobs or services — especially when it comes to big, expensive, and potentially risky services like drug rehab or moving companies. By the way, if you’re looking for a legitimate rehab facility, you could do worse than to start at rehabs.com, a legitimate rehab search engine.

It’s a good idea to get in the habit of verifying that the organization’s physical address, phone number and Web address shown in the search result match that of the landing page. If the phone numbers are different, use the contact number listed on the linked site.

Take the time to learn about the organization’s reputation online and in social media; if it has none (other than a Google Maps listing with all glowing, 5-star reviews), it’s probably fake. Search the Web for any public records tied to the business’ listed physical address, including articles of incorporation from the local secretary of state office online.

A search of the company’s domain name registration records can give you an idea of how long its Web site has been in business, as well as additional details about the organization (although the ability to do this may soon be a thing of the past).

Seely said one surefire way to avoid these marketing shell games is to ask a simple question of the person who answers the phone in the online listing.

“Ask anyone on the phone what company they’re with,” Seely said. “Have them tell you, take their information and then call them back. If they aren’t forthcoming about who they are, they’re most likely a scam.”

In 2016, Seely published a book on Amazon about the thriving and insanely lucrative underground business of fake online reviews. He’s agreed to let KrebsOnSecurity republish the entire e-book, which is available for free at this link (PDF).

“This is literally the worst book ever written about Google Maps fraud,” Seely said. “It’s also the best. Is it still a niche if I’m the only one here? The more people who read it, the better.”
Top

Linux Takes The Fastpath | BSD Now 242

Post by Jupiter Broadcasting via BSD Now Video Feed »

TrueOS Stable 18.03 released, a look at F-stack, the secret to an open source business model, intro to jails and jail networking, FreeBSD Foundation March update, and the IPsec errata.

Top

portage API now provides an asyncio event loop policy

Post by zmedico via Zac Medico »

In portage-2.3.30, portage’s python API provides an asyncio event loop policy via a DefaultEventLoopPolicy class. For example, here’s a little program that uses portage’s DefaultEventLoopPolicy to do the same thing as emerge --regen, using an async_iter_completed function to implement the --jobs and --load-average options:

#!/usr/bin/env python

from __future__ import print_function

import argparse
import functools
import multiprocessing
import operator

import portage
from portage.util.futures.iter_completed import (
    async_iter_completed,
)
from portage.util.futures.unix_events import (
    DefaultEventLoopPolicy,
)


def handle_result(cpv, future):
    metadata = dict(zip(portage.auxdbkeys, future.result()))
    print(cpv)
    for k, v in sorted(metadata.items(),
        key=operator.itemgetter(0)):
        if v:
            print('\t{}: {}'.format(k, v))
    print()


def future_generator(repo_location, loop=None):

    portdb = portage.portdb

    for cp in portdb.cp_all(trees=[repo_location]):
        for cpv in portdb.cp_list(cp, mytree=repo_location):
            future = portdb.async_aux_get(
                cpv,
                portage.auxdbkeys,
                mytree=repo_location,
                loop=loop,
            )

            future.add_done_callback(
                functools.partial(handle_result, cpv))

            yield future


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--repo',
        action='store',
        default='gentoo',
    )
    parser.add_argument(
        '--jobs',
        action='store',
        type=int,
        default=multiprocessing.cpu_count(),
    )
    parser.add_argument(
        '--load-average',
        action='store',
        type=float,
        default=multiprocessing.cpu_count(),
    )
    args = parser.parse_args()

    try:
        repo_location = portage.settings.repositories.\
            get_location_for_name(args.repo)
    except KeyError:
        parser.error('unknown repo: {}\navailable repos: {}'.\
            format(args.repo, ' '.join(sorted(
            repo.name for repo in
            portage.settings.repositories))))

    policy = DefaultEventLoopPolicy()
    loop = policy.get_event_loop()

    for future_done_set in async_iter_completed(
        future_generator(repo_location, loop=loop),
        max_jobs=args.jobs,
        max_load=args.load_average,
        loop=loop):
        loop.run_until_complete(future_done_set)


if __name__ == '__main__':
    main()
Top


Deleted Facebook Cybercrime Groups Had 300,000 Members

Post by BrianKrebs via Krebs on Security »

Hours after being alerted by KrebsOnSecurity, Facebook last week deleted almost 120 private discussion groups totaling more than 300,000 members who flagrantly promoted a host of illicit activities on the social media network’s platform. The scam groups facilitated a broad spectrum of shady activities, including spamming, wire fraud, account takeovers, phony tax refunds, 419 scams, denial-of-service attack-for-hire services and botnet creation tools. The average age of these groups on Facebook’s platform was two years.

On Thursday, April 12, KrebsOnSecurity spent roughly two hours combing Facebook for groups whose sole purpose appeared to be flouting the company’s terms of service agreement about what types of content it will or will not tolerate on its platform.

One of nearly 120 different closed cybercrime groups operating on Facebook that were deleted late last week. In total, there were more than 300,000 members of these groups. The average age of these groups was two years, but some had existed for up to nine years on Facebook.
My research centered on groups whose singular focus was promoting all manner of cyber fraud, but most especially those engaged in identity theft, spamming, account takeovers and credit card fraud. Virtually all of these groups advertised their intent by stating well-known terms of fraud in their group names, such as “botnet helpdesk,” “spamming,” “carding” (referring to credit card fraud), “DDoS” (distributed denial-of-service attacks), “tax refund fraud,” and account takeovers.

Each of these closed groups solicited new members to engage in a variety of shady activities. Some had existed on Facebook for up to nine years; approximately ten percent of them had plied their trade on the social network for more than four years.

Here is a spreadsheet (PDF) listing all of the offending groups reported, including: Their stated group names; the length of time they were present on Facebook; the number of members; whether the group was promoting a third-party site on the dark or clear Web; and a link to the offending group. A copy of the same spreadsheet in .csv format is available here.

The biggest collection of groups banned last week were those promoting the sale and use of stolen credit and debit card accounts. The next largest collection of groups included those facilitating account takeovers — methods for mass-hacking emails and passwords for countless online accounts such as Amazon, Google, Netflix, PayPal, as well as a host of online banking services.

This rather active Facebook group, which specialized in identity theft and selling stolen bank account logins, was active for roughly three years and had approximately 2,500 members.
In a statement to KrebsOnSecurity, Facebook pledged to be more proactive about policing its network for these types of groups.

“We thank Mr. Krebs for bringing these groups to our attention, we removed them as soon as we investigated,” said Pete Voss, Facebook’s communications director. “We investigated these groups as soon as we were aware of the report, and once we confirmed that they violated our Community Standards, we disabled them and removed the group admins. We encourage our community to report anything they see that they don’t think should be in Facebook, so we can take swift action.”

KrebsOnSecurity’s research was far from exhaustive: For the most part, I only looked at groups that promoted fraudulent activities in the English language. Also, I ignored groups that had fewer than 25 members. As such, there may well be hundreds or thousands of other groups that openly promote fraud as their purpose but achieve greater stealth by masking their intent with variations on or misspellings of different cyber fraud slang terms.

Facebook said its community standards policy does not allow the promotion or sale of illegal goods or services including credit card numbers or CVV numbers (stolen card details marketed for use in online fraud), and that once a violation is reported, its teams review a report and remove the offending post or group if it violates those policies.

The company added that Facebook users can report suspected violations by loading a group’s page, clicking “…” in the top right and selecting “Report Group”. Users who wish to learn more about reporting abusive groups can visit facebook.com/report.

“As technology improves, we will continue to look carefully at other ways to use automation,” Facebook’s statement concludes, responding to questions from KrebsOnSecurity about what steps it might take to more proactively scour its networks for abusive groups. “Of course, a lot of the work we do is very contextual, such as determining whether a particular comment is hateful or bullying. That’s why we have real people looking at those reports and making the decisions.”

Facebook’s stated newfound interest in cleaning up its platform comes as the social networking giant finds itself reeling from a scandal in which Cambridge Analytica, a political data firm, was found to have acquired access to private data on more than 50 million Facebook profiles — most of them scraped without user permission.
Top

Using mqtt to create a notification network: mosquitto, mqttwarn, hare, and hared

Post by Dan Langille via Dan Langille's Other Diary »

As you read this post, keep in mind that my particular use case of notification on ssh login is not for everyone. It may not appeal to you. In fact, you might find this to be an absolutely ridiculous thing to do. I respect that. I suggest that somewhere within your network there is at least one type of error condition, one urgent situation, one thing that you would like pushed to your cellphone / pager / etc. It might be a failed HDD for example.

That is what I ask you to keep in mind as you read this. Think of the possibilities.

One last proviso: I elected to distribute my notifications via Pushover.net but there are many services which mqttwarn supports. Pick the ones you like. Have a read of the list of plugins; there are at least 38 others to choose from.

The background: it started at $WORK and moved to Twitter

This project started off small. A coworker suggested that we try a login notification system. Each time we logged in, we’d get a notice on our mobile devices. For us, it was an easy solution: add a curl statement to ~/.bash_profile and we would hear about it every time our account was used to log in.

That curl statement would invoke Pushover.net and we would get a notice on a watch, cellphone, or web browser. If someone logged into our accounts, we’d know. We were using NFS for our home directories on each server, so we had just one file to maintain.
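For reference, the same push can be done without curl. This is a sketch in Python against Pushover.net’s message API; APP_TOKEN and USER_KEY are placeholders for the application and user keys from your Pushover.net account (the actual hook we used was a curl one-liner):

# Sketch of the login notification, equivalent to the curl call above.
# APP_TOKEN and USER_KEY are placeholders for your Pushover.net keys.
import socket
import requests

requests.post(
    "https://api.pushover.net/1/messages.json",
    data={
        "token": "APP_TOKEN",
        "user": "USER_KEY",
        "message": "ssh login on {}".format(socket.gethostname()),
    },
)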

Then I started to think if there was another way to do this, other than via shell hooks, something that would also catch scp, etc. I tweeted my idea and goals. Allan Jude suggested PAM. Michael Lucas suggested pam_exec and a shell script.

An evil plan was born. I started playing and learning about pam_exec. The next day, I had played with PAM and created something rather short. It worked. It worked well.

Michael Lucas rightly said:

Me: “pam_exec is scary, avoid it. Especially with shell scripts.”

Dan: “How about I use pam_exec to call a shell script, but add a call to curl(1)?”

Excuse me. I’ll be in the corner, sobbing piteously for the future of humanity.

Enter Jan-Piet MENS with code. He created a small C program which sends off a UDP packet. This code is invoked via /etc/pam.d/sshd, just like my code was, but that is where the similarity stops.

This is where it might sound complex, but it’s not. There are several moving parts, and that’s the way it should be. Small programs, each doing something very well.

I immediately took a liking to it, mostly because of what I could do with MQTT for other things, not just this login notification obsession.

The moving parts

Here is what happens, more or less, when someone logs in with this solution:

  1. successful login occurs. This line in /etc/pam.d/sshd is invoked:
    session    optional   pam_exec.so     /usr/local/sbin/hare 10.25.0.10
    
  2. hare is a small C program. By small, I mean about 100 lines of code. It sends a small UDP packet to 10.25.0.10 on port 8035. That packet looks something like this:
    $ sudo tcpdump -A -ni ix0 port 8035
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on ix0, link-type EN10MB (Ethernet), capture size 262144 bytes
    20:16:23.165360 IP 10.25.0.40.14990 > 10.25.0.10.8035: UDP, length 145
    E.......@...
    7..
    7.
    :..c..Z.{"tst":1523823383,"hostname":"dbclone.int.unixathome.org","user":"dan","service":"sshd","rhost":"air01.startpoint.vpn.unixathome.org","tty":null}
    You will notice that is a JSON string. Very flexible.

    This is me logging into dbclone from my MacBook Air. What is at 10.25.0.10? hared

  3. That packet is received by hared, a small Go program (the original was a very short Python program). hared composes a message and deposits it into MQTT; a minimal sketch of this step follows this list.
  4. The MQTT broker I am using is mosquitto. Mosquitto accepts this message and processes it like any other.
  5. mqttwarn, a multi-purpose MQTT client, accepts that message and notifies Pushover.net.
  6. A notification arrives on my cellphone and/or web browser informing me of that login.
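As promised, here is a minimal sketch of what hared does at step 3. It is an illustrative stand-in for the real Go (or Python) program, reusing the addresses, topic, and credentials from the configuration shown later in this post:

# Sketch: receive the JSON datagram hare sends and republish it to MQTT.
# The host, port, topic, and credentials mirror this post's configuration;
# the real hared is a small Go (or Python) program.
import socket
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="hared-sketch")
client.username_pw_set("hared", "passwordForHare")
client.tls_set(ca_certs="/usr/local/etc/ssl/cert.pem")
client.connect("mqtt01.int.unixathome.org", 8883)
client.loop_start()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("10.25.0.10", 8035))

while True:
    payload, addr = sock.recvfrom(4096)  # one JSON login record per packet
    client.publish("logging/hare", payload)

The shape is the point: one datagram in, one MQTT publish out.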
What do those notifications look like? You can adjust the icon, but here is what I have chosen:

This is what the notification looks like on my iPhone.
This is a cropped screen shot of the Pushover.net application on my iPhone.
This is the notification I saw on my Apple Watch.
This is the web client for Pushover.net notifications
Here is a diagram which outlines what just happened:

The flow of the message as it goes from PAM to Pushover.net

Where do I get this code?

  • hare & hared are on GitHub. I maintain the FreeBSD ports for both hare and hared (written in Go). There is also a Python version of hared in FreeBSD, but I use the Go version because it supports TLS.
  • The MQTT broker (that’s the terminology for what Mosquitto does) I am using is net/mosquitto (I love that category for that port name). Mosquitto is an Eclipse project.
  • mqttwarn is a GitHub project available as sysutils/py-mqttwarn in the FreeBSD ports tree.
  • It is recommended by the author that mqttwarn be run via supervisord, available as sysutils/py-supervisor in the FreeBSD ports tree. I follow that advice and also run hared via supervisord.

About the configuration

In this configuration, I am using just one host; everything is installed on that host. My home installation differs: at home I have one instance each of hared, mqttwarn, mosquitto and supervisord, which services all the servers and jails at home. My servers in other locations run one instance of hared and one mosquitto (which forwards messages to the mosquitto instance at home). This is very flexible.

I do much of the following with an Ansible script, but I’ll show you the manual commands.

Glossary

hare – code that gets invoked on login. Install on each host/jail you want to monitor

hared – parses the UDP packet and tosses it into MQTT. Install on one host in your local network. Mine is at 10.25.0.10, listening on port 8035.

mosquitto – Message broker. Install on one host in your local network. Mine is at 10.25.0.10 listening on port 8883.

mqttwarn – Processes the message and initiates the notifications via third-party plugins. Install on one host in your local network. Mine is at 10.25.0.10

supervisord – Process management tool. Runs code which can’t background itself. Install it wherever you run hared and/or mqttwarn. Mine is at 10.25.0.10

Installing hare

To monitor a host for ssh-logins, install hare on that host:

pkg install hare
Add the following entry to /etc/pam.d/sshd:

session         optional       pam_exec.so             /usr/local/sbin/hare 10.25.0.10

Installing hared

pkg install hared
This is the configuration file I use:

$ sudo cat /usr/local/etc/hared.ini
[defaults]
verbose    = True
udpHost    = 10.25.0.10

mqttURI    = tls://mqtt01.int.unixathome.org:8883
mqttClient = hared
mqttTopic  = logging/hare

mqttUser   = hared
mqttPass   = passwordForHare

mqttCAfile = /usr/local/etc/ssl/cert.pem
Details of the above:

  • udpHost – the IP address which hared should listen on. It should match the address in /etc/pam.d/sshd.
  • mqttURI – the MQTT instance to which hared should send messages. Since I am using TLS, and you should too, I use the FQDN of the host. By convention, mosquitto listens on port 8883 when using TLS.
  • mqttClient – the name used when hared connects to MQTT. It shows up on log entries. It is not used for authorization/authentication.
  • mqttTopic – the MQTT message topic which we will be sending. This will match something in the mqttwarn configuration.
  • mqttUser & mqttPass – the authentication used to connect to mosquitto. We will use this later when configuring mosquitto using mosquitto_passwd.
  • mqttCAfile – the CA file which hared should use for verifying the certificate presented by the MQTT broker (i.e. mosquitto).
Starting hared will be covered under supervisord configuration.

Installing mosquitto

To install mosquitto, issue this command:

pkg install mosquitto
This is the configuration file I use for mosquitto:

$ sudo cat /usr/local/etc/mosquitto/mosquitto.conf
pid_file /var/run/mosquitto.pid
port 8883
user mosquitto

allow_anonymous false

certfile /usr/local/etc/ssl/mqtt01.int.unixathome.org.fullchain.cer
keyfile  /usr/local/etc/ssl/mqtt01.int.unixathome.org.key
cafile   /usr/local/etc/ssl/cert.pem

ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4

password_file /usr/local/etc/mosquitto/mosquitto.passwd

connection_messages true
log_dest            syslog
The only item worth mentioning: allow_anonymous prevents non-authenticated connections. That is, only clients with valid entries as found in /usr/local/etc/mosquitto/mosquitto.passwd can connect.

When adding users with mosquitto_passwd, be cautious with the -c option: it will overwrite any existing file.

Here is me adding a password entry for hared:

sudo mosquitto_passwd -c /usr/local/etc/mosquitto/mosquitto.passwd hared
Password: 
Reenter password: 
Remember to enable mosquitto:

$ sudo sysrc mosquitto_enable="YES"
mosquitto_enable:  -> YES
Then start mosquitto:

$ sudo service mosquitto start
Starting mosquitto.
Not shown above is other logging, handled by supervisord.
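A quick way to confirm the broker accepts authenticated TLS connections is to subscribe to the topic hared publishes to. This sketch uses the paho-mqtt client library; any valid entry from mosquitto.passwd will do, and here I reuse the hared credentials from above:

# Sketch: subscribe to logging/hare and print whatever arrives,
# to verify the broker's TLS and password setup.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload)

client = mqtt.Client(client_id="connectivity-check")
client.username_pw_set("hared", "passwordForHare")
client.tls_set(ca_certs="/usr/local/etc/ssl/cert.pem")
client.on_message = on_message
client.connect("mqtt01.int.unixathome.org", 8883)
client.subscribe("logging/hare")
client.loop_forever()

If login messages print as they arrive, the broker side is working.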

Installing mqttwarn

To install mqttwarn:

pkg install mqttwarn
This is my configuration file:

$ sudo cat /usr/local/etc/mqttwarn/mqttwarn.ini
[defaults]
hostname  = mqtt01.int.unixathome.org
port      = 8883
username  = mqttwarn
password  = mqttwarn_password

ca_certs    = '/usr/local/etc/ssl/cert.pem'
tls_version = 'tlsv1'

libdir    = '/usr/local/lib/python2.7/site-packages/mqttwarn/lib'

loglevel  = debug

functions = '/usr/local/etc/mqttwarn/my-functions.py'

; name the service providers you will be using.
launch	 = file, pushover, log

[config:file]
append_newline = True
targets = {
    'sshd_log'     : ['/var/log/mqttwarn/sshd.log']
    }

[config:log]

targets = {
    'info'   : [ 'info' ],
    'warn'   : [ 'warn' ],
    'crit'   : [ 'crit' ],
    'error'  : [ 'error' ]
  }

[test/+]
targets = file:sshd_log, log:error


[config:pushover]
targets = {
    'pam'       : ['userkey1', 'appkey1'],
  }


[logging/hare]
targets = pushover:pam, log:info, file:sshd_log
alldata = moredata()
title: {service} login on {hostname}
filter = logging_filter()
format = User {user} logged into {hostname} from {rhost} at {tstamp}
My notes for the above configuration:

  • tls_version – I use tlsv1 because the other option, sslv3, has known issues.
  • libdir – was needed for earlier versions of mqttwarn on FreeBSD, but not after 0.10.0.

  • sshd_log – I also log everything to a file. Yes, this duplicates existing logs on the hosts in question, but I figure, if you can log, you should log. Space is cheap. Time is not. This can be useful when debugging. So I log.
  • userkey1 & appkey1 are obtained from your Pushover.net account.
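The my-functions.py file named in the [defaults] section supplies the moredata() and logging_filter() helpers used by [logging/hare]. Here is an illustrative stub, not my production file, assuming mqttwarn’s documented function signatures:

# Illustrative stub for /usr/local/etc/mqttwarn/my-functions.py.
# moredata() is wired in via "alldata =" and can add keys (here, tstamp)
# for use in the format string; logging_filter() returns True to drop
# a message. The nagios example below is hypothetical.
import time

def moredata(topic, data, srv=None):
    # Turn the epoch "tst" field hare sends into a readable timestamp.
    return dict(tstamp=time.strftime(
        '%Y-%m-%d %H:%M:%S',
        time.localtime(int(data.get('tst', time.time())))))

def logging_filter(topic, message):
    # Suppress logins by a monitoring user, for example.
    return '"user":"nagios"' in message

The {tstamp} key this stub returns is what the format line above interpolates.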
You will need to set a password for mqttwarn so it can connect to mosquitto:

mosquitto_passwd /usr/local/etc/mosquitto/mosquitto.passwd mqttwarn
Not shown above is other logging, handled by supervisord.

Installing supervisord

Why supervisord? Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems. It is ideal for this situation where the code you want to run does not background itself.

To install supervisord, I did this:

pkg install py27-supervisor
These are the settings I have in /etc/rc.conf for supervisord:

$ grep super /etc/rc.conf
supervisord_enable="YES"
supervisord_config="/usr/local/etc/supervisord/supervisord.conf"
supervisord, in its current FreeBSD port, expects its configuration file to be at /usr/local/etc/supervisord.conf, but there are a number of configuration files associated with supervisord, and I prefer to keep them all in one directory: /usr/local/etc/supervisord.

This is the configuration file I use for supervisord:

$ cat /usr/local/etc/supervisord/supervisord.conf
[unix_http_server]
file=/var/run/supervisor/supervisor.sock   ; (the path to the socket file)

[supervisord]
logfile=/var/log/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB       ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10          ; (num of main logfile rotation backups;default 10)
loglevel=debug               ; (log level;default info; others: debug,warn,trace)
pidfile=/var/run/supervisor/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false              ; (start in foreground if true;default false)
minfds=1024                 ; (min. avail startup file descriptors;default 1024)
minprocs=200                ; (min. avail process descriptors;default 200)

childlogdir=/tmp

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///var/run/supervisor/supervisor.sock ; use a unix:// URL  for a unix socket

[include]
files = /usr/local/etc/supervisord/*.inc
Of note:

  • childlogdir – this is in the default file, and I still keep it, but read on for where I put child logs.
  • files – other files to include within this configuration. I will explain this later. Great idea from Jan-Piet Mens.
These are the include files I have installed, one for each of mqttwarn and hared:

[dan@mqtt01:/usr/local/etc/supervisord] $ ls -l *.inc
-rw-r-----  1 root  wheel  154 Apr 10 22:29 hared.inc
-rw-r-----  1 root  wheel  286 Apr 13 19:16 mqttwarn.inc
[dan@mqtt01:/usr/local/etc/supervisord] $ 
These files instruct supervisord how to run each of those daemons. Neither program puts itself into the background the way most daemons do. This is why supervisord is used.

The configuration for hared looks like this:

$ cat hared.inc 
[program:hared]
command = /usr/local/bin/hared
user = hared
stdout_logfile=/var/log/hared/hared-stdout.log
stderr_logfile=/var/log/hared/hared-stderr.log
You might want to mkdir /var/log/hared.

The include file for mqttwarn:

$ cat mqttwarn.inc
[program:mqttwarn]
command = /usr/local/bin/mqttwarn
user    = mqttwarn
environment=MQTTWARNINI="/usr/local/etc/mqttwarn/mqttwarn.ini",MQTTWARNLOG="/var/log/mqttwarn/mqttwarn.log"
stdout_logfile=/var/log/mqttwarn/mqttwarn-stdout.log
stderr_logfile=/var/log/mqttwarn/mqttwarn-stderr.log
As above, you might want to do this:

mkdir /var/log/mqttwarn
chown mqttwarn:mqttwarn /var/log/mqttwarn
The mqttwarn port will create the mqttwarn user.

Of note above: MQTTWARNINI and MQTTWARNLOG are environment variables which tell mqttwarn where to find its configuration file and where to write its log file, respectively.
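Since mqttwarn takes MQTTWARNINI from the environment, you can also run it in the foreground for debugging with the same configuration, something like:

MQTTWARNINI=/usr/local/etc/mqttwarn/mqttwarn.ini /usr/local/bin/mqttwarn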

To start supervisord:

service supervisord start
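Once it is up, the standard supervisorctl subcommands can be used to inspect and restart the managed programs. Point it at the relocated configuration file, since it is not in the default location:

sudo supervisorctl -c /usr/local/etc/supervisord/supervisord.conf status
sudo supervisorctl -c /usr/local/etc/supervisord/supervisord.conf restart mqttwarn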

What does it all look like?

These are all the processes running on my host at home:

[dan@mqtt01:~] $ ps auwwx | egrep -e 'hared|mqtt|mosq'
mosquitto 28869  0.9  0.0  38500 19208  -  SsJ  22:22   0:53.71 /usr/local/sbin/mosquitto -c /usr/local/etc/mosquitto/mosquitto.conf -d
hared     30341  0.0  0.0  15472  9964  -  IJ   22:32   0:00.22 /usr/local/bin/hared
mqttwarn  30342  0.0  0.1 132884 33068  -  SJ   22:32   0:00.95 /usr/local/bin/python2.7 /usr/local/bin/mqttwarn
dan       42814  0.0  0.0  10732  1848  2  R+J  23:32   0:00.00 egrep -e hared|mqtt|mosq
[dan@mqtt01:~] $ 
I also created the hared user myself. Perhaps the port should create that user.

That’s it

That’s it. That’s everything. It should work.

If it doesn’t, here are the main things to check:

  • Is hare installed on the host you are logging into?
  • Is hare configured in /etc/pam.d/sshd on that host?
  • Is the right IP address specified in /etc/pam.d/sshd on that host?
  • Is hared listening on that IP address?
  • Is hared talking to the right MQTT instance? i.e. mosquitto?
  • Is mosquitto running?
  • Is mqttwarn running?
There are several moving parts. Check the logs, make sure each step is working.

For debugging mosquitto, you can run this command to see messages moving into MQTT:

$ mosquitto_sub -v -t 'logging/hare' -u USER -P PASSWORD -p 8883 -h mqtt01  --cafile /usr/local/etc/ssl/cert.pem  --insecure
logging/hare logging/hare {"hostname":"dbclone.int.unixathome.org","remote":"10.5.0.14","rhost":"air01.startpoint.vpn.unixathome.org","service":"sshd","tst":1523838071,"tty":null,"user":"dan"}
That shows the message going into MQTT.

Here is that same message recorded by hared:

*** /var/log/hared/hared-stdout.log ***
{"tst":1523838071,"hostname":"dbclone.int.unixathome.org","user":"dan","service":"sshd","rhost":"air01.startpoint.vpn.unixathome.org","tty":null}
{"hostname":"dbclone.int.unixathome.org","remote":"10.5.0.14","rhost":"air01.startpoint.vpn.unixathome.org","service":"sshd","tst":1523838071,"tty":null,"user":"dan"}
This is the mqttwarn logging:

2018-04-16 00:21:11,062 DEBUG    [mqttwarn.core            ] Message received on logging/hare: {"hostname":"dbclone.int.unixathome.org","remote":"10.5.0.14","rhost":"air01.startpoint.vpn.unixathome.org","service":"sshd","tst":1523838071,"tty":null,"user":"dan"}
2018-04-16 00:21:11,063 DEBUG    [mqttwarn.core            ] Section [logging/hare] matches message on logging/hare. Processing...
2018-04-16 00:21:11,065 DEBUG    [mqttwarn.core            ] Message on logging/hare going to pushover:pam
2018-04-16 00:21:11,065 DEBUG    [mqttwarn.core            ] New `pushover:pam' job: logging/hare
2018-04-16 00:21:11,065 DEBUG    [mqttwarn.core            ] Message on logging/hare going to log:info
2018-04-16 00:21:11,065 DEBUG    [mqttwarn.core            ] Processor #0 is handling: `pushover' for pam
2018-04-16 00:21:11,065 DEBUG    [mqttwarn.core            ] New `log:info' job: logging/hare
2018-04-16 00:21:11,067 DEBUG    [mqttwarn.core            ] Message on logging/hare going to file:sshd_log
2018-04-16 00:21:11,067 DEBUG    [mqttwarn.core            ] New `file:sshd_log' job: logging/hare
2018-04-16 00:21:11,067 DEBUG    [mqttwarn.services.pushover] *** MODULE=/usr/local/lib/python2.7/site-packages/mqttwarn/services/pushover.pyc: service=pushover, target=pam
2018-04-16 00:21:11,068 DEBUG    [mqttwarn.services.pushover] Sending pushover notification to pam [{'priority': 0, 'message': 'User dan logged into dbclone.int.unixathome.org from air01.startpoint.vpn.unixathome.org at 2018-04-16 00:21:11', 'retry': 60, 'expire': 3600, 'title': 'sshd login on dbclone.int.unixathome.org'}]....
2018-04-16 00:21:11,070 DEBUG    [urllib3.connectionpool   ] Starting new HTTPS connection (1): api.pushover.net
2018-04-16 00:21:11,258 DEBUG    [urllib3.connectionpool   ] https://api.pushover.net:443 "POST /1/messages.json HTTP/1.1" 200 None
2018-04-16 00:21:11,260 DEBUG    [mqttwarn.services.pushover] Successfully sent pushover notification
2018-04-16 00:21:11,300 DEBUG    [mqttwarn.core            ] Job queue has 2 items to process
2018-04-16 00:21:11,300 DEBUG    [mqttwarn.core            ] Processor #0 is handling: `log' for info
2018-04-16 00:21:11,302 DEBUG    [mqttwarn.services.log    ] *** MODULE=/usr/local/lib/python2.7/site-packages/mqttwarn/services/log.pyc: service=log, target=info
2018-04-16 00:21:11,303 INFO     [mqttwarn.services.log    ] User dan logged into dbclone.int.unixathome.org from air01.startpoint.vpn.unixathome.org at 2018-04-16 00:21:11
2018-04-16 00:21:11,303 DEBUG    [mqttwarn.core            ] Job queue has 1 items to process
2018-04-16 00:21:11,303 DEBUG    [mqttwarn.core            ] Processor #0 is handling: `file' for sshd_log
2018-04-16 00:21:11,305 DEBUG    [mqttwarn.services.file   ] *** MODULE=/usr/local/lib/python2.7/site-packages/mqttwarn/services/file.pyc: service=file, target=sshd_log
2018-04-16 00:21:11,306 DEBUG    [mqttwarn.core            ] Job queue has 0 items to process
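Besides watching real logins, you can hand-publish a test message: the [test/+] section in mqttwarn.ini above routes anything published under test/ to the file and log targets. Something like this, with an invented topic and payload:

mosquitto_pub -t 'test/demo' -m '{"user":"dan"}' -u USER -P PASSWORD -p 8883 -h mqtt01 --cafile /usr/local/etc/ssl/cert.pem --insecure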

What use is this?

I can think of two other notifications I would want:

  • The power has gone off.
  • Backups are out of space.
I get email notices regarding those issues, but they are urgent enough to be pushed to my phone.

I’m sure you can find interesting uses for MQTT apart from login notifications.

Future posts

In a future post, I will describe how I install these tools on remote hosts. I have only one instance of mqttwarn. Each remote host runs its own hared and mosquitto. That instance of mosquitto sends its messages to my main MQTT instance.
Top

A review of the Curve debit card

Post by Flameeyes via Flameeyes's Weblog »

Somehow, I end up spending a significant amount of my time thinking, testing and playing with financial services, both old school banks and fintech startups.

One of the most recent ones I have been playing with is Curve. The premise of the service is to allow you to use a single card for all transactions, having the ability to then charge any card underneath it as it is convenient. This was a curious enough idea, so I asked the friend who was telling me about it to give me his referral code to sign up. If you want to sign up, my code is BG2G3.

Signing up and getting the card is quite easy, even though they have (or had when I signed up) a “waitlist” — so after you sign up it takes a few days for you to be able to actually order the card and get it in your hands. They suggest you make other people sign up for it as well to lower your time on the waitlist, but that didn’t seem to be a requirement for me. The card arrived, to my recollection, no more than two days after they said they shipped it, probably because it was coming from London itself, and the card is all you need to receive to get started.

So how does this all work? You need to connect your existing cards to Curve, and verify them to be able to charge them — verification can be either through a 3Dsecure/Verified by Visa login, or through the usual charge-code-reverse dance that Google, PayPal and the others all use. Once you connect the cards, and select the currently-charged card, you can start using the Curve card to pay, and it acts as a “proxy” for the other card, charging it for the same amount, with some caveats.

The main advantage that my friend suggested for this set up is that if you have a corporate card (I do), you can add that one to Curve too, and rely on it to avoid the payback process at work if you make a mistake paying for something. As this happened to me a few times before, mostly out of selecting the wrong payment profile in apps such as Uber or Hailo, or going over the daily allowance for meals as I changed plans, it sounded interesting enough. This can work both by making sure to select the corporate card before making the purchase (for instance by defaulting to it during a work trip), or by “turning back time” on an already charged transaction. Cool.

I also had a hope that the card could be attached to Google Pay, but that’s not the case. Nor do they implement their own NFC payment application, which is a bit disappointing.

Besides the “turn back time” feature, the app also has some additional features, such as the integration with the accounting software Xero, including the ability to attach a receipt image to the expense (if this were present for Concur, I’d be a real believer, but until then it’s not really that useful to me), and to receive email “receipts” (more like credit card slips) for purchases made with a certain card (not sure why that’s not just a global setting, but meh).

Not much else is available in the app to make it particularly useful or interesting to me, honestly. There’s some category system for expenses, very similar to the one for Revolut, but that’s about it.

On the more practical side of things, Curve does not apply any surcharges as long as the transaction is in the same currency as the card, and that includes the transactions in which you turned back time. Which is handy if you don’t know in advance which currency you’ll be charged in, though that does not really happen often.

What I found particularly useful for this is that the card itself looks like a proper “British” card — with my apartment as the verified address on it. But then I can charge one of my cards in Euro, US Dollars, or Revolut itself… although I could just charge Revolut in those cases. The main difference between the two approaches is that I can put the Euro purchases onto a Euro credit card, instead of a debit one… except that the only Euro credit card I’m left with also has my apartment as its verifiable billing address, so… I’d say I’m not the target audience for this feature.

For “foreign transactions” (that is, where the charged currency and the card currency disagree), Curve charges a 1% foreign transaction fee. This is pointless for me thanks to Revolut, but it still is convenient if you only have accounts with traditional banks, particularly in the UK where most banks apply a 3% foreign transaction fee instead.

In addition to the free card, they also offer a £50 (a year, I guess — it’s not clear!) “black” card that offers 1% cashback at selected retailers. You can actually get 90 days cashback for three retailers of your choice on the free card as well, but to be honest, American Express is much more widely accepted in chains, and its rewards are better. I ended up choosing to do the cashback with TfL, Costa (because they don’t take Amex contactless anyway), and Sainsbury’s just to select something I use.

In all of this, I have to say I fail to see where the business makes money. Okay, financial services are not my area of expertise, but if you’re just proxying payments, without even taking deposits (the way Revolut does), and not charging additional foreign transaction fees, and even giving cashback… where’s the profit?

I guess there is some money to be made by profiling users and selling the data to advertisers — but between GDPR and the fact that most consumers don’t like the idea of being made into products with no “kick back”, that avenue looks limited. If it were me, I would be charging 1% on the “turn back time” feature, but that might make the whole point of the service moot. I don’t know.

At the end of the day, I’m also not sure how useful this card is going to be for me, on the day to day. The ability to have a single entry in those systems that are used “promiscuously” for business and personal usage sounds good, but even in that case, it means ignoring the advantages of having a credit card, and particularly a rewards card like my Amex. So, yeah, don’t really see much use for it myself.

It also complicates things when it comes to risk engines for fraud protection: your actual bank will see all the transactions as coming from a single vendor, with the minimum amount of information attached to it. This will likely defeat all the fraud checks by the bank, and will likely rely on Curve’s implementation of fraud checks — which I have no idea how they work, since I have not yet managed to catch them.

Also as far as I could tell, Curve (like Revolut) does not implement 3DSecure (the “second factor” authentication used by a number of merchants to validate e-commerce transactions), making it less secure than any of the other cards I have — a cloned/stolen card can only be disabled after the fact, and replaced. Revolut at least allows me to separate the physical card from my e-commerce transactions, which is neat (and now even supports one time credit card numbers).

There is also another thing that is worth considering, that shows the different point of views (and threat models) of the two services: Curve becomes your single card (single point of failure, too) for all your activities: it makes your life easy by making sure you only ever need to use one card, even if you have multiple bank accounts in multiple countries, and you can switch between them at the tap of a finger. Revolut on the other hand allows you to give each merchant its own credit card number (Premium accounts get unlimited virtual cards) — or even have “burner” cards that change numbers after use.

All in all, I guess it depends on what you want to achieve. Between the two, I think I’ll mostly stick to Revolut, and my usage of Curve will taper off once the 90-day cashback offer is completed — although it’s still nice to have for the few websites that gave me trouble with Revolut, as long as I’m charging my Revolut account underneath, and the charge is in Sterling.

If you do want to sign up, feel free to use BG2G3 as the referral code; it would give a £5 credit to both you and me, under circumstances that are not quite clear to me, but who knows.
Top

The Other Blog

Post by via www.my-universe.com Blog Feed »

Roughly one year ago, I started a blog more focused on technical stuff (mostly Python programming and the like), back to my roots if you want. Recently, I migrated this blog back to Ghost (which I had already used for blogging earlier). As Ghost is much more convenient for content-focused blogging than Jimdo's blog engine, I decided to also start posting X-Plane related content over there, and to give the blog here on this site a little pause. So head over to the other blog, and update your bookmarks! Content updates are more likely to appear over there, so don't be shy and have a look.

See you on the other side!

Top

Things you cannot say on Facebook, SQL Edition

Post by kris via The Isoblog. »

EDIT: This is now fixed. Facebook worked on the bug report and fixed the problem within 72 hours, including rollout.

Where I work, I am using an instance of Facebook at Work to communicate with colleagues. That is basically a grey-styled instance of Facebook which is supposed to run a forked codebase on isolated servers.

Today, it would not let me write the following SQL in Chat, in Facebook notes or comments:



Other versions of the error message complain about it being Spam, or mention the string sd.date as being problematic.

Why is that?


So when you write anything anywhere in Facebook, Facebook tries to URLify things.

For that, it breaks things into items that look like potential domain names, and checks if they could be domain names. It then creates an “a href” wrapper around each such item and links to it.

Because that’s a way to transport Spam and malicious content, there is a list of blocked things, and that then triggers the above blocking mechanism.

So we have the above SQL, which contains the phrase “sd.date_of_delivery”.
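The query itself was posted as an image, which is lost here; a hypothetical query of the same shape (the table name and the WHERE clause are my invention; only the sd.date_of_delivery and sd.id fragments appear in the post) would be:

SELECT sd.id, sd.date_of_delivery
  FROM shipment_data AS sd
 WHERE sd.date_of_delivery >= '2018-01-01';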

The underscore is not a valid character in DNS names, so the tokeniser turns this into sd.date.

There is a dot date TLD.

So Facebook wrongly tries to turn sd.date_of_delivery into a link to sd.date, which clearly is not my intention, and then spamblocks me.

This is wrong on many levels:

  • On a business instance of Facebook it has literally no business listening in, but it does.
  • When every word of the English language turns into a TLD, auto-URLifying stuff becomes completely, utterly useless. The above example contains the expression sd.id, too, and no, it’s not intended to be a link.
  • This is clearly and obviously code, SQL to be specific. All the machine learning in the world did not help you to detect that, though.
And it’s not even sd.date; you only made it into that, because

  • You can’t write proper parsers, either.
  • And you forgot to implement the switch to globally turn off that misfeature in the preferences. Because I am old-school. When I want a link, I actually write https://
So annoying.

EDIT: As a colleague of mine cleverly pointed out, you can fool the thing with zero width characters in UTF-8. I am undecided if that is a bug or a feature.

EDIT: As a user, I want control over URLification: Give me an off switch that does not URLify unless I prefix http:// or https://.

As a user, I want a proper parser, that does not try to turn sd.date_of_delivery into a sd.date-URL.

As a user, I would rather have anything not URLified, but also not blocked, than having things blocked after URLification.

As a user of corporate Facebook, I’d like the entire product to take into account that it is in a corporate environment instead of still trying to behave like blue Facebook. The engagement engine, the link-wrapping and much of the other stuff are a complete nuisance and counterproductive in a paid-for corporate use-case.
Top

Bowling in the LimeLight | BSD Now 241

Post by Jupiter Broadcasting via BSD Now Video Feed »

Second round of ZFS improvements in FreeBSD, Postgres finds that non-FreeBSD/non-Illumos systems are corrupting data, interview with Kevin Bowling, BSDCan list of talks, and cryptographic right answers

Top

When Identity Thieves Hack Your Accountant

Post by BrianKrebs via Krebs on Security »

The Internal Revenue Service has been urging tax preparation firms to step up their cybersecurity efforts this year, warning that identity thieves and hackers increasingly are targeting certified public accountants (CPAs) in a bid to siphon oodles of sensitive personal and financial data on taxpayers. This is the story of a CPA in New Jersey whose compromise by malware led to identity theft and phony tax refund requests filed on behalf of his clients.

Last month, KrebsOnSecurity was alerted by security expert Alex Holden of Hold Security about a malware gang that appears to have focused on CPAs. The crooks in this case were using a Web-based keylogger that recorded every keystroke typed on the target’s machine, and periodically uploaded screenshots of whatever was being displayed on the victim’s computer screen at the time.

If you’ve never seen one of these keyloggers in action, viewing their output can be a bit unnerving. This particular malware is not terribly sophisticated, but nevertheless is quite effective. It not only grabs any data the victim submits into Web-based forms, but also captures any typing — including backspaces and typos as we can see in the screenshot below.

The malware records everything its victims type (including backspaces and typos), and frequently takes snapshots of the victim’s computer screen.
Whoever was running this scheme had all victim information uploaded to a site that was protected from data scraping by search engines, but the site itself did not require any form of authentication to view data harvested from victim PCs. Rather, the stolen information was indexed by victim and ordered by day, meaning anyone who knew the right URL could view each day’s keylogging record as one long image file.

Those records suggest that this particular CPA — “John,” a New Jersey professional whose real name will be left out of this story — likely had his computer compromised sometime in mid-March 2018 (at least, this is as far back as the keylogging records go for John).

It’s also not clear exactly which method the thieves used to get malware on John’s machine. Screenshots for John’s account suggest he routinely ignored messages from Microsoft and other third party Windows programs about the need to apply critical security updates.

Messages like this one — about critical security updates available for QuickBooks — went largely ignored, according to multiple screenshots from John’s computer.
More likely, however, John’s computer was compromised by someone who sent him a booby-trapped email attachment or link. When one considers just how frequently CPAs must need to open Microsoft Office and other files submitted by clients and potential clients via email, it’s not hard to imagine how simple it might be for hackers to target and successfully compromise your average CPA.

The keylogging malware itself appears to have been sold (or perhaps directly deployed) by a cybercriminal who uses the nickname ja_far. This individual markets a $50 keylogger product alongside a malware “crypting” service that guarantees his malware will be undetected by most antivirus products for a given number of days after it is used against a victim.

Ja_far’s sales threads for the keylogger used to steal tax and financial data from hundreds of John’s clients.
It seems likely that ja_far’s keylogger was the source of this data because at one point — early in the morning John’s time — the attacker appears to have accidentally pasted ja_far’s jabber instant messenger address into the victim’s screen instead of his own. In all likelihood, John’s assailant was seeking additional crypting services to ensure the keylogger remained undetected on John’s PC. A couple of minutes later, the intruder downloaded a file to John’s PC from file-sharing site sendspace.com.

The attacker apparently messing around on John’s computer while John was not sitting in front of the keyboard.
What I found remarkable about John’s situation was despite receiving notice after notice that the IRS had rejected many of his clients’ tax returns because those returns had already been filed by fraudsters, for at least two weeks John does not appear to have suspected that his compromised computer was likely the source of said fraud inflicted on his clients (or if he did, he didn’t share this notion with any of his friends or family via email).

Instead, John composed and distributed to his clients a form letter about their rejected returns, and another letter that clients could use to alert the IRS and New Jersey tax authorities of suspected identity fraud.

Then again, perhaps John ultimately did suspect that someone had commandeered his machine, because on March 30 he downloaded and installed Spyhunter 4, a security product by Enigma Software designed to detect spyware, keyloggers and rootkits, among other malicious software.

Evidently suspecting someone or something was messing with his computer, John downloaded the trial version of Spyhunter 4 to scan his PC for malware.
Spyhunter appears to have found ja_far’s keylogger, because shortly after the malware alert pictured above popped up on John’s screen, the Web-based keylogging service stopped recording logs from his machine. John did not respond to requests for comment (via phone).

It’s unlikely John’s various clients who experience(d) identity fraud, tax refund fraud or account takeovers as a result of his PC infection will ever learn the real reason for the fraud. I opted to keep his name out of this story because I thought the experience documented and explained here would be eye opening enough and I have no particular interest in ruining his business.

But a new type of identity theft that the IRS first warned about this year involving CPAs would be very difficult for a victim CPA to conceal. Identity thieves who specialize in tax refund fraud have been busy of late hacking online accounts at multiple tax preparation firms and using them to file phony refund requests. Once the IRS processes the return and deposits money into bank accounts of the hacked firms’ clients, the crooks contact those clients posing as a collection agency and demand that the money be “returned.”

If you go to file your taxes electronically this year and the return is rejected, it may mean fraudsters have beat you to it. The IRS advises taxpayers in this situation to follow the steps outlined in the Taxpayer Guide to Identity Theft. Those unable to file electronically should mail a paper tax return along with Form 14039 (PDF) — the Identity Theft Affidavit — stating they were victims of a tax preparer data breach.

Tax professionals might consider using something other than Microsoft Windows to manage their client’s data. I’ve long dispensed this advice for people in charge of handling payroll accounts for small- to mid-sized businesses. I continue to stand by this advice not because there isn’t malware that can infect Mac or Linux-based systems, but because the vast majority of malicious software out there today still targets Windows computers, and you don’t have to outrun the bear — only the next guy.

Many readers involved in handling corporate payroll accounts have countered that this advice is impractical for people who rely on multiple Windows-based programs to do their jobs. These days, however, most systems and services needed to perform accounting (and CPA) tasks can be used across multiple operating systems — mainly because they are now Web-based and rely instead on credentials entered at some cloud service (e.g., UltraTax, QuickBooks, or even Microsoft’s Office 365).

Naturally, users still must be on guard against phishing scams that try to trick people into divulging credentials to these services, but when your entire business of managing other people’s money and identities can be undone by a simple keylogger, it’s a good idea to do whatever you can to keep from becoming the next malware victim.

According to the IRS, fraudsters are using spear phishing attacks to compromise computers of tax pros. In this scheme, the “criminal singles out one or more tax preparers in a firm and sends an email posing as a trusted source such as the IRS, a tax software provider or a cloud storage provider. Thieves also may pose as clients or new prospects. The objective is to trick the tax professional into disclosing sensitive usernames and passwords or to open a link or attachment that secretly downloads malware enabling the thieves to track every keystroke.”

The IRS warns that some tax professionals may be unaware they are victims of data theft, even long after all of their clients’ data has been stolen by digital intruders. Here are some signs there might be a problem:

  • Client e-filed returns begin to be rejected because returns with their Social Security numbers were already filed;
  • The number of returns filed with the tax practitioner’s Electronic Filing Identification Number (EFIN) exceeds the number of clients;
  • Clients who haven’t filed tax returns begin to receive authentication letters from the IRS;
  • Network computers running slower than normal;
  • Computer cursors moving or changing numbers without touching the keyboard;
  • Network computers locking out tax practitioners.
Top

Manuscript on Lab::Measurement submitted for publication

Post by Andreas via the dilfridge blog »

Today's news is that we have submitted a manuscript for publication, describing Lab::Measurement and with it our approach towards fast, flexible, and platform-independent measuring with Perl! The manuscript mainly focuses on the new, Moose-based class hierarchy. We have uploaded it to arXiv as well; here is the (for now) full bibliographic information of the preprint:
 "Lab::Measurement - a portable and extensible framework for controlling lab equipment and conducting measurements"
S. Reinhardt, C. Butschkow, S. Geissler, A. Dirnaichner, F. Olbrich, C. Lane, D. Schröer, and A. K. Hüttel
submitted for publication; arXiv:1804.03321 (PDF, HTML, BibTeX entry)
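For reference, a BibTeX entry assembled from the information above (the entry key and the @misc type are my choice) might look like:

@misc{Reinhardt2018LabMeasurement,
  author        = {S. Reinhardt and C. Butschkow and S. Geissler and A. Dirnaichner and F. Olbrich and C. Lane and D. Schr{\"o}er and A. K. H{\"u}ttel},
  title         = {{Lab::Measurement} - a portable and extensible framework for controlling lab equipment and conducting measurements},
  year          = {2018},
  eprint        = {1804.03321},
  archivePrefix = {arXiv},
  note          = {submitted for publication}
}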
If you're using Lab::Measurement in your lab, and this results in some nice publication, then we'd be very grateful for a citation of our work - for now the preprint, and later hopefully the accepted version.
Top

Introducing Snallygaster - a Tool to Scan for Secrets on Web Servers

Post by Hanno Böck via Hanno's blog »

A few days ago I figured out that several blogs operated by T-Mobile Austria had a Git repository exposed which included their wordpress configuration file. Due to the fact that a phpMyAdmin installation was also accessible this would have allowed me to change or delete their database and subsequently take over their blogs.

Git Repositories, Private Keys, Core Dumps

Last year I discovered that the German postal service exposed a database with 200,000 addresses on their webpage, because it was simply named dump.sql (which is the default filename for database exports in the documentation example of mysql). An Australian online pharmacy exposed a database under the filename xaa, which is the output of the "split" tool on Unix systems.

It also turns out that plenty of people store their private keys for TLS certificates on their servers - or their SSH keys. Crashing web applications can leave behind coredumps that may expose application memory.

For a while now I have been interested in this class of surprisingly trivial vulnerabilities: people leave files accessible on their web servers that shouldn't be public. I've given talks at a couple of conferences (recordings available from Bornhack, SEC-T, Driving IT). I scanned for these issues with a Python script that I extended with more and more such checks.

Scan your Web Pages with snallygaster

It's taken a bit longer than intended, but I finally released it: It's called Snallygaster and is available on Github and PyPi.
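Since it is on PyPI, getting it running should be as simple as something like this (only scan hosts you are authorized to test):

pip3 install snallygaster
snallygaster example.com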

Apart from many checks for secret files it also contains some checks for related issues: invalid src references, which can lead to domain takeover vulnerabilities; the Optionsbleed vulnerability, which I discovered during this work; and a couple of other vulnerabilities I found interesting and easily testable.

Some may ask why I wrote my own tool instead of extending an existing project. I thought about it, but I didn't really find any existing free software vulnerability scanner that I found suitable. The tool that comes closest is probably Nikto, but testing it I felt it comes with a lot of checks - thus it's slow - and few results. I wanted a tool with a relatively high impact that doesn't take forever to run. Another commonly mentioned free vulnerability scanner is OpenVAS - a fork from Nessus back when that was free software - but I found that always very annoying to use and overengineered. It's not a tool you can "just run". So I ended up creating my own tool.

A Dragon Legend in US Maryland

Finally you may wonder what the name means. The Snallygaster is a dragon that, according to some legends, was seen in Maryland and other parts of the US. Why that name? There's no particular reason; I just searched for a suitable name, and I thought a mythical creature might make a good one. So I searched Wikipedia for potential names and checked for name collisions. This one had none, and it also sounded funny and interesting enough.

I hope snallygaster turns out to be useful for administrators and pentesters and helps exposing this class of trivial, but often powerful, vulnerabilities. Obviously I welcome new ideas of further tests that could be added to snallygaster.
Top

Adobe, Microsoft Push Critical Security Fixes

Post by BrianKrebs via Krebs on Security »

Adobe and Microsoft each released critical fixes for their products today, a.k.a “Patch Tuesday,” the second Tuesday of every month. Adobe updated its Flash Player program to resolve a half dozen critical security holes. Microsoft issued updates to correct at least 65 security vulnerabilities in Windows and associated software.

The Microsoft updates impact many core Windows components, including the built-in browsers Internet Explorer and Edge, as well as Office, the Microsoft Malware Protection Engine, Microsoft Visual Studio and Microsoft Azure.

The Malware Protection Engine flaw is one that was publicly disclosed earlier this month, and one for which Redmond issued an out-of-band (outside of Patch Tuesday) update one week ago.

That flaw, discovered and reported by Google’s Project Zero program, is reportedly quite easy to exploit and impacts the malware scanning capabilities for a variety of Microsoft anti-malware products, including Windows Defender, Microsoft Endpoint Protection and Microsoft Security Essentials.

Microsoft really wants users to install these updates as quickly as possible, but it might not be the worst idea to wait a few days before doing so: Quite often, problems with patches that may cause systems to end up in an endless reboot loop are reported and resolved with subsequent updates within a few days after their release. However, depending on which version of Windows you’re using it may be difficult to put off installing these patches.

Microsoft says by default, Windows 10 receives updates automatically, “and for customers running previous versions, we recommend they turn on automatic updates as a best practice.” Microsoft doesn’t make it easy for Windows 10 users to change this setting, but it is possible. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update. In any case, don’t put off installing these updates too long.

Adobe’s Flash Player update fixes at least two critical bugs in the program. Adobe said it is not aware of any active exploits in the wild against either flaw, but if you’re not using Flash routinely for many sites, you probably want to disable or remove this buggy program.

Adobe is phasing out Flash entirely by 2020, but most of the major browsers already take steps to hobble Flash. And with good reason: It’s a major security liability. Google Chrome also bundles Flash, but blocks it from running on all but a handful of popular sites, and then only after user approval.

For Windows users with Mozilla Firefox installed, the browser prompts users to enable Flash on a per-site basis. Through the end of 2017 and into 2018, Microsoft Edge will continue to ask users for permission to run Flash on most sites the first time the site is visited, and will remember the user’s preference on subsequent visits.

The latest standalone version of Flash that addresses these bugs is 29.0.0.140 for Windows, Mac, Linux and Chrome OS. But most users probably would be better off manually hobbling or removing Flash altogether, since so few sites actually require it still. Disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

More information on today’s updates is available from security vendors Ivanti and Qualys.

As always, if you experience problems installing any of these updates, feel free to note your issues in the comments below. Chances are, another reader here has experienced something similar and can assist in troubleshooting the issue.
Top

The dot-EU kerfuffle — or how EURid is messing with their own best supporters

Post by Flameeyes via Flameeyes's Weblog »

TL;DR summary: be very careful if you use a .eu domain as your point of contact for anything. If you’re thinking of registering a .eu domain to use as your primary domain, just don’t.


I have forecasted a rant when I pointed out I changed domain with my move to WordPress.

I registered flameeyes.eu nearly ten years ago; part of the reason was that flameeyes.com was (at the time) parked by a domain squatter, and part was that I have been a strong supporter of the European Union.

In those ten years I started using the domain not just for my website, but as my primary contact email. It’s listed as my contact address everywhere, and I have all kinds of financial, commercial and personal services attached to that email. It’s effectively impossible for me to ever untangle myself from it, even if I spent the next four weeks doing nothing but amending registrations — some services just don’t allow you to ever change email address; many require you to contact support and spend time talking with a person to get the email updated on the account.

And now, because I moved to the United Kingdom, which decided to leave the Union, the Commission threatens to prevent me from keeping my domain. It may sound obvious, since EURid says

A website with a .eu or .ею domain name extension tells your customers that you are a legal entity based in the EU, Iceland, Liechtenstein or Norway and are therefore, subject to EU law and other relevant trading standards.

But at the same time it now produces a terrible collapse of two worlds: the technical and the political. The idea that any entity in control of a .eu domain is by requirement operating under EU law sounds good on paper… until you come to this corner case where a country leaves the Union — and now either you water down this promise, eroding trust in the domain by not upholding the law, or you end up with domain takeovers, eroding trust in the domain on technical merit.

Most of the important details for this are already explained in a seemingly unrelated blog post by Hanno Böck: Abandoned Domain Takeover as a Web Security Risk. If EURid will forbid renewal of .eu domains for entities that are no longer considered part of the EU, a whole lot of domains will effectively be “up for grabs”. Some may currently be used as CDN aliases, and be used to load resources on other websites; those would be the worst, as they would allow the controller of the domains to inject content in other sites that should otherwise be secure.

But even more important for companies that used their .eu domain as their primary point of contact: think of any PO, or invoice, or request for information, that would be sent to a company email address — and now think of a malicious actor getting access to those communications! This is not just a risk for me (and any other European supporter who happens to live in the UK; I’m sure I’m not alone) as a single individual — it’s a possibly unlimited amount of scams that people would be subjected to, as it would be trivial to pass for a company once its domain is taken over!

As you can see from the title, I think this particular move is also going to hit the European supporters the most. Not just because of those individuals (like me!) who wanted to signal how they feel part of something bigger than their country of birth, but also because I expect a number of UK companies used .eu domains specifically to declare themselves open to European customers — as otherwise, between pricing in Sterling, and a .co.uk domain, it would always feel like buying “foreign goods”. Now those companies, that believed in Europe, find themselves in the weakest of positions.

Speaking of individuals, when I read the news I did a double-take, and had to check the rules for .eu domains again. At first I assumed that something was clearly wrong: I’m a European Union citizen, surely I will be able to keep my domain, no matter where I live! Unfortunately, that’s not the case:

In this first step the Registrant must verify whether it meets the General
Eligibility Criteria, whereby it must be:
(i) an undertaking having its registered office, central administration or
principal place of business within the European Union, Norway, Iceland
or Liechtenstein, or
(ii) an organisation established within the European Union, Norway, Iceland
or Liechtenstein without prejudice to the application of national law, or
(iii) a natural person resident within the European Union, Norway, Iceland or
Liechtenstein.

If you are a European Union citizen, but you don’t want your digital life to ever be held hostage by the Commission or your country’s government playing games with it, do not use a .eu domain. Simple as that. EURid does not care about the well-being of their registrants.

If you’re a European company, do think twice about whether you want to risk that a change of government in the country you’re registered in opens you, your suppliers and your customers up to a wild west of overtaken domains.

Effectively, what EURid has signalled with this is that they care so little about the technical hurdles of their customers that I would suggest nobody rely on a .eu domain at all. Register it as a defense against scammers, but don’t do business on it, as it’s less stable than certain microstate domains, or even the more trendy and modern gTLDs.

I’ll call this an own goal. I still trust the European Union, and the Commission, to have the interests of the many in their mind. But the way they tried to apply a legislative domain to the .eu TLD was brittle at best to begin with, and now there’s no way out of here that does not ruin someone’s day, and erode the trust in that very same domain.

It’s also important to note that most of the bigger companies, those that I hear a lot of European politicians complain about, would have no problem with something like this: just create a fully-owned subsidiary somewhere in Europe, say for instance Slovakia, and have it hold onto the domain. And have it just forward onto a gTLD to do business on, so you don’t even give the impression of counting on that layer of legislative trust.

Given the scary damage that would be caused by losing control over my email address of ten years, I’m honestly considering looking for a similar loophole. The cost of establishing an LLC in another country, firmly within EU boundaries, is not pocket money, but it’s still chump change compared to the amount of damage (financial, reputational, relational, and so on) that a takeover would cause; it would be a good investment.
Top

Twenty years

Post by Dag-Erling Smørgrav via tech – May Contain Traces of Bolts »

Yesterday was the twentieth anniversary of my FreeBSD commit bit, and tomorrow will be the twentieth anniversary of my first commit. I figured I’d split the difference and write a few words about it today.

My level of engagement with the FreeBSD project has varied greatly over the twenty years I’ve been a committer. There have been times when I worked on it full-time, and times when I did not touch it for months. The last few years, health issues and life events have consumed my time and sapped my energy, and my contributions have come in bursts. Commit statistics do not tell the whole story, though: even when not working on FreeBSD directly, I have worked on side projects which, like OpenPAM, may one day find their way into FreeBSD.

My contributions have not been limited to code. I was the project’s first Bugmeister; I’ve served on the Security Team for a long time, and have been both Security Officer and Deputy Security Officer; I managed the last four Core Team elections and am doing so again this year.

In return, the project has taught me much about programming and software engineering. It taught me code hygiene and the importance of clarity over cleverness; it taught me the ins and outs of revision control; it taught me the importance of good documentation, and how to write it; and it taught me good release engineering practices.

Last but not least, it has provided me with the opportunity to work with some of the best people in the field. I have the privilege today to count several of them among my friends.

For better or worse, the FreeBSD project has shaped my career and my life. It set me on the path to information security in general and IAA in particular, and opened many a door for me. I would not be where I am now without it.

I won’t pretend to be able to tell the future. I don’t know how long I will remain active in the FreeBSD project and community. It could be another twenty years; or it could be ten, or five, or less. All I know is that FreeBSD and I still have things to teach each other, and I don’t intend to call it quits any time soon.

Top

Don’t Give Away Historic Details About Yourself

Post by BrianKrebs via Krebs on Security »

Social media sites are littered with seemingly innocuous little quizzes, games and surveys urging people to reminisce about specific topics, such as “What was your first job,” or “What was your first car?” The problem with participating in these informal surveys is that in doing so you may be inadvertently giving away the answers to “secret questions” that can be used to unlock access to a host of your online identities and accounts.

I’m willing to bet that a good percentage of regular readers here would never respond — honestly or otherwise — to such questionnaires (except perhaps to chide others for responding). But I thought it was worth mentioning because certain social networks — particularly Facebook — seem positively overrun with these data-harvesting schemes. What’s more, I’m constantly asking friends and family members to stop participating in these quizzes and to stop urging their contacts to do the same.

On the surface, these simple questions may be little more than an attempt at online engagement by otherwise well-meaning companies and individuals. Nevertheless, your answers to these questions may live in perpetuity online, giving identity thieves and scammers ample ammunition to start gaining backdoor access to your various online accounts.

Consider, for example, the following quiz posted to Facebook by San Benito Tire Pros, a tire and auto repair shop in California. It asks Facebook users, “What car did you learn to drive stick shift on?”

I hope this is painfully obvious, but for many people the answer will be the same as to the question, “What was the make and model of your first car?”, which is one of several “secret questions” most commonly used by banks and other companies to let customers reset their passwords or gain access to the account without knowing the password.

This simple one-question quiz has been shared more than 250 times on Facebook since it was posted a week ago. Thousands of Facebook users responded in earnest, and in so doing linked their profile to the answer.
Probably the most well-known and common secret question, “what was the name of your first pet,” comes up in a number of Facebook quizzes that, incredibly, thousands of people answer willingly and (apparently) truthfully. When I saw this one I was reminded of this hilarious 2007 Daily Show interview wherein Jon Stewart has Microsoft co-founder Bill Gates on and tries to slyly ask him the name of his first pet.

Almost 5,000 Facebook users answered this common password reset secret question.
Womenworking.com asked a variation on this same question of their huge Facebook following and received an impressive number of responses:



Here’s a great one from springchicken.co.uk, an e-commerce site in the United Kingdom. It asks users to publicly state the answer to yet another common secret question: “What street did you grow up on?”

More than 500 Facebook users have shared this quiz with their network, and hundreds more shared the answer using their real names and links to their profiles.
This question, from the Facebook account of Rving.how — a site for owners of recreational vehicles — asks: “What was your first job?” How the answer to this question might possibly relate to RV camping is beyond me, but that didn’t stop people from responding.



The question, “What was your high school mascot” is another common secret question, and yet you can find this one floating around lots of Facebook profiles:



Among the most common secret questions is, “Where did you meet your spouse or partner?” Loads of people like to share this information online as well, it seems:

This common secret question has been shared on Facebook almost 10,000 times and has garnered more than 2,300 responses.
Here’s another gem from the Womenworking Facebook page. Who hasn’t had to use the next secret question at some point? Answering this truthfully — in a Facebook quiz or on your profile somewhere — is a bad idea.

Incredibly, 6,800 Facebook users answered this question.
Do you remember your first grade teacher’s name? Don’t worry, if you forget it after answering this question, Facebook will remember it for you:



I’ve never seen a “what was the first concert you ever saw” secret question, but it is unique as secret questions go and I wouldn’t be surprised if some companies use this one. “What is your favorite band?” is definitely a common secret question, however:



Giving away information about yourself, your likes and preferences, etc., can lead to all kinds of unexpected consequences. This practice may even help turn the tide of elections. Just take the ongoing scandal involving Cambridge Analytica, which reportedly collected data on more than 50 million Facebook users without their consent and then used this information to build behavioral models to target potential voters in various political campaigns.

I hope readers don’t interpret this story as KrebsOnSecurity endorsing secret questions as a valid form of authentication. In fact, I have railed against this practice for years, precisely because the answers often are so easily found using online services and social media profiles.

But if you must patronize a company or service that forces you to select secret questions, I think it’s a really good idea not to answer them truthfully. Just make sure you have a method for remembering your phony answer, in case you forget the lie somewhere down the road.

Many thanks to RonM for assistance with this post.
Top

WordPress, really?

Post by Flameeyes via Flameeyes's Weblog »

If you’re reading this blog post, particularly directly on my website, you probably noticed that it’s running on WordPress and that it’s on a new domain, no longer referencing my pride in Europe, after ten years of using it as my domain. Wow that’s a long time!

I had two reasons for the domain change: the first is that I didn’t want to keep the full chain of redirects of extremely old links onto whichever new blogging platform I would select. And the second is that it made it significantly easier to set up a WordPress.com copy of the blog while I tweaked it, rather than messing with the domain all at once. The second one will come with a separate rant very soon, but it’s related to the worrying statement from the European Commission regarding the usage of dot-EU domains in the future. But as I said, that’s a separate rant.

I have had a few people surprised when I was talking over Twitter about the issues I faced on the migration. I want to give some more context on why I went this way.

As you remember, last year I complained about Hugo – to the point that a lot of the referrers to this blog are still coming from the Hacker News thread about that – and I started looking for alternatives. And when I looked at WordPress I found that setting it up properly would take me forever, so I kept my mouth shut and doubled-down on Hugo.

Except, because of the way it was set up, it meant not having an easy way to write or correct blog posts from a computer that is not my normal Linux laptop with the SSH token and everything else. Which was too much of a pain to keep working with. While Hector and others suggested flows that involved Git-based web editors, it all felt too Rube Goldberg to me… and since moving to London my time is significantly more limited than before, so I could either spend time setting everything up, or work on writing more content, which is hopefully more useful.

I ended up deciding to pay for the personal tier of WordPress.com services, since I don’t care about monetization of this content, and even the few affiliate links I’ve been using with Amazon are not really that useful at the end of the day, so I gave up on setting up OneLink and the likes here. It also turned out that Amazon’s image-and-text links (which use JavaScript and iframes) are not supported by WordPress.com even with the higher tiers, so those were deleted too.

Nobody seems to have published an easy migration guide from Hugo to WordPress, as most of the search queries produced results for the other way around. I will spend some time later on trying to refine the janky template I used and possibly release it. I also want to release the tool I wrote to “annotate” the generated WXR file with the Disqus archive… oh yes, the new blog has all the comments of the old one, and does not rely on Disqus, as I promised.

On the other hand, there are a few things that did get lost in the transition: while the JetPack plugin gives you the ability to write posts in Markdown (otherwise I wouldn’t have even considered WordPress), it doesn’t seem like the importer knows at all how to import Markdown content. So all the old posts have been pre-rendered — a shame, but honestly it doesn’t happen very often that I need to go through old posts. Particularly now that I merged the content from all my older blogs into Hugo first, and now into this one massive blog.

Hopefully expect more posts from me very soon now, and not just rants (although probably just mostly rants).

And as a closing aside, if you’re curious about the picture in the header, I have once again used one of my own. This one was taken at the maat in Lisbon. The white balance on this shot was totally off, but I liked the result. And if you’re visiting Lisbon and you’re an electronics or industrial geek you definitely have to visit the maat!
Top


beep, patch and ed

Post by kris via The Isoblog. »

So a few days ago, somebody found an exploit in beep – now CVE-2018-0492. beep is a program, part of Debian (and Ubuntu), that makes the PC speaker beep, possibly multiple times, at different frequencies, with different pauses and beep lengths. That works just fine.

It’s also SUID root.

There is zero code in it that deals with the fact that it may run privileged. The author confidently writes:

Some users will encounter a situation where beep dies with a complaint from ioctl(). The reason for this, as Peter Tirsek was nice enough to point out to me, stems from how the kernel handles beep’s attempt to poke at (for non-programmers: ioctl is a sort of catch-all function that lets you poke at things that have no other predefined poking-at mechanism) the tty, which is how it beeps. The short story is, the kernel checks that either:

  • you are the superuser
  • you own the current tty
What this means is that root can always make beep work (to the best of my knowledge!), and that any local user can make beep work, BUT a non-root remote user cannot use beep in it’s natural state. What’s worse, an xterm, or other x-session counts, as far as the kernel is concerned, as ‘remote’, so beep won’t work from a non-privileged xterm either. I had originally chalked this up to a bug, but there’s actually nothing I can do about it, and it really is a Good Thing that the kernel does things this way. There is also a solution.

By default beep is not installed with the suid bit set, because that would just be zany. On the other hand, if you do make it suid root, all your problems with beep bailing on ioctl calls will magically vanish, which is pleasant, and the only reason not to is that any suid program is a potential security hole. Conveniently, beep is very short, so auditing it is pretty straightforward.

Decide for yourself, of course, but it looks safe to me – there’s only one buffer and fgets doesn’t let it overflow, there’s only one file opening, and while there is a potential race condition there, it’s with /dev/console. If someone can exploit this race by replacing /dev/console, you’ve got bigger problems. :)

So here is how you race this: beepbop

Which is bad, because Ubuntu and Debian both install beep SUID root by default. You can check with “dpkg-reconfigure -plow beep”.



Debian installs beep SUID root by default (“dpkg-reconfigure -plow beep”).

Somebody also created a fun website for the beep problem, complete with logo and name: Holey Beep.



The site, which is done very tongue in cheek, also contained a patch for the problem, advertising it like this:



So why would your computer beep when it applies a patch?

That’s another fun thing. The patch contains this hunk:

--- /dev/null	2018-13-37 13:37:37.000000000 +0100
+++ b/beep.c	2018-13-37 13:38:38.000000000 +0100
1337a
1,112d
!id>~/pwn.lol;beep # 13-21 12:53:21.000000000 +0100
.
That’s an ed-style patch. As I wrote elsewhere, GNU patch detects ed-style diffs and opens a pipeline to an actual instance of ed, then feeds it the code. Any ed command is accepted, hence also the “!” command, which spawns a shell and feeds the rest of the line to that shell.

In patch, src/pch.c:2383, it says:

/* Apply an ed script by feeding ed itself. */

and until recently that is exactly what it did: popen a copy of ed and feed it the script. The line “id > ~/pwn.lol; beep” will hence be run, creating a file pwn.lol in $HOME containing the current user id, and then executing beep. That’s the promised beep you hear.

Of course, code execution by a tool you expect to merely handle data is a problem, and that is now CVE-2018-1000156.

It’s fixed by writing the ed script to a temporary file, filtering out any commands that a legitimate diff cannot contain, and only then feeding the result to ed.
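
To illustrate the idea (in Python rather than patch’s actual C, and with the command grammar heavily simplified), the filtering amounts to something like this:

# Sketch of the post-CVE-2018-1000156 approach: only pass through the
# commands an ed-style diff can legitimately contain, so shell escapes
# like "!id>~/pwn.lol;beep" never reach ed as commands.
import re
import subprocess

ED_CMD = re.compile(r"^\d+(,\d+)?[acdi]$")   # e.g. "1337a", "1,112d"

def apply_ed_script(script: str, target: str) -> None:
    safe, reading_text = [], False
    for line in script.splitlines():
        if reading_text:
            safe.append(line)                 # text inside a/c/i is data, not commands
            if line == ".":                   # a lone dot ends the text block
                reading_text = False
        elif ED_CMD.match(line):
            safe.append(line)
            reading_text = line[-1] in "aci"  # these commands read text until "."
        # anything else -- including "!" shell escapes -- is dropped
    safe += ["w", "q"]                        # write the file back and quit
    subprocess.run(["ed", "-", target],
                   input="\n".join(safe) + "\n", text=True, check=True)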
Top

Secret Service Warns of Chip Card Scheme

Post by BrianKrebs via Krebs on Security »

The U.S. Secret Service is warning financial institutions about a new scam involving the temporary theft of chip-based debit cards issued to large corporations. In this scheme, the fraudsters intercept new debit cards in the mail and replace the chips on the cards with chips from old cards. When the unsuspecting business receives and activates the modified card, thieves can start draining funds from the account.

Signs of a card with an old or invalid chip include heat damage around the chip or on the card, or a small hole in the plastic used to pry the chip off the card. Image: U.S. Secret Service.
According to an alert sent to banks late last month, the entire scheme goes as follows:

1. Criminals intercept mail sent from a financial institution to large corporations that contains payment cards, targeting debit payment cards with access to large amounts of funds.

2. The crooks remove the chip from the debit payment card using a heat source that warms the glue.

3. Criminals replace the chip with an old or invalid chip and repackage the payment card for delivery.

4. Criminals place the stolen chip into an old payment card.

5. The corporation receives the debit payment card without realizing the chip has been replaced.

6. The corporate office activates the debit payment card; however, the payment card is inoperable thanks to the old chip.

7. Criminals use the payment card with the stolen chip for their personal gain once the corporate office activates the card.

The reason the crooks don’t just use the debit cards when intercepting them via the mail is that they need the cards to be activated first, and presumably they lack the privileged information needed to do that. So, they change out the chip and send the card on to the legitimate account holder and then wait for it to be activated.

The Secret Service memo doesn’t specify at what point in the mail process the crooks are intercepting the cards. It could well involve U.S. Postal Service employees (or another delivery service), or perhaps the thieves are somehow gaining access to company mailboxes directly. Either way, this alert shows the extent to which some thieves will go to target high-value customers.

One final note: It seems almost every time I write about the Secret Service in relation to credit card fraud, some readers are mystified why an agency entrusted with protecting the President of the United States is involved at all in these types of investigations. The truth is that safeguarding the nation’s currency supply from counterfeiters was the Secret Service’s original mission when it was first created in 1865. Only after the assassination of President William McKinley — the third sitting president to be assassinated — did that mandate come to include protecting the president and foreign dignitaries.

Incidentally, if you enjoy reading historical non-fiction, I’d highly recommend Candice Millard‘s magnificently researched and written book, Destiny of the Republic, about the life and slow, painful death of President James A. Garfield after he was shot in the back by his lunatic assailant.
Top

Dot-cm Typosquatting Sites Visited 12M Times So Far in 2018

Post by BrianKrebs via Krebs on Security »

A story published here last week warned readers about a vast network of potentially malicious Web sites ending in “.cm” that mimic some of the world’s most popular Internet destinations (e.g. espn[dot]cm, aol[dot]cm and itunes[dot]cm) in a bid to bombard visitors with fake security alerts that can lock up one’s computer. If that piece lacked one key detail it was insight into just how many people were mistyping .com and ending up at one of these so-called “typosquatting” domains.

On March 30, an eagle-eyed reader noted that four years of access logs for the entire network of more than 1,000 dot-cm typosquatting domains were available for download directly from the typosquatting network’s own hosting provider. The logs — which include detailed records of how many people visited the sites over the past three years and from where — were deleted shortly after that comment was posted here, but not before KrebsOnSecurity managed to grab a copy of the entire archive for analysis.

The geographic distribution of 25,000 randomly selected Internet addresses (IP addresses) in the logs seen accessing the dot-cm typosquatting domains in February 2018. Batchgeo, the service used to produce this graphic, limits free lookups to 25,000, but the above image is likely still representative of the overall geographic distribution. Perhaps unsurprisingly, the largest share of traffic is coming from the United States.
Matthew Chambers, a security expert with whom this author worked on the original dot-cm typosquatting story published last week, analyzed the access logs from just the past three months and found the sites were visited approximately 12 million times during the first quarter of 2018.

Chambers said he combed through the logs and weeded out hits from Internet addresses that appeared to be bots or search engine scrapers. Here’s Chambers’ analysis of the 2018 access log data:

January 2018: 3,732,488 visitors
February 2018: 3,799,109 visitors
March 2018: 4,275,998 visitors

Total for January–March 2018: 11.8 million

Those figures suggest that the total number of visits to these typosquatting sites in the first quarter of 2018 was approximately 12 million, or almost 50 million hits per year. Certainly, not everyone visiting these sites will have the experience that Chambers’ users reported (being bombarded with misleading malware alerts and redirected to scammy and spammy Web sites), but it seems clear this network could make its operators a pretty penny regardless of the content that ends up getting served through it.

Until very recently, the site access logs for this network of more than 1,000 dot-cm typosquatting sites were available on the same server hosting the network.
Chambers also performed “reverse DNS” lookups on the IP addresses listed in the various dot-cm access logs for the month of February 2018. It’s worth noting here that many of the dot-cm (.cm) typosquatting domains in this network (PDF) are trying to divert traffic away from extremely popular porn sites (e.g. pornhub[dot]cm).

“I’ve been diving thru the data thus far, and came up with some interesting visitors,” Chambers said. “I pulled those when it was easy to observe that a particular agency owned a large range of IPs.”

Chambers queried the logs from 2018 for any hits coming from .gov or .mil sites. Here’s what he found:

National Aeronautics and Space Administration (JSC, GSFC, JPL, NDC) (104 times in February) [16 porn sites]
Department of Justice (80 times) [7 porn sites]
United States House of Representatives (47 times) [17 porn sites]
Central Intelligence Agency (6 times)
United States Army (29 times)
United States Navy (25 times)
Environmental Protection Agency (15 times)
New York State Court System (4 times)

Other federal agencies with typosquatting victims visiting these domains include:

Defense Information Systems Agency (DISA)
Sandia National Laboratories
National Oceanic and Atmospheric Administration (NOAA)
United States Department of Agriculture
Berkeley Lab
Pacific Northwest Lab

Last week’s story noted this entire network appears to be rented out by a Colorado-based online marketing firm called Media Breakaway. That company is headed by Scott Richter, a convicted felon and once self-avowed “spam king” who’s been successfully sued for spamming by Microsoft, MySpace and the New York attorney general. Neither Richter nor anyone else at Media Breakaway has responded to requests for comment.

If you’re in the habit of directly navigating to Web sites (i.e. typing the name of the site into a Web browser address bar), consider weaning yourself off this risky practice. Instead, bookmark the sites you visit most, particularly those that store your personal and financial information, or that require a login for access.

Update, April 5, 8:05 a.m. ET: An earlier version of this story included numbers that didn’t quite add up to almost 12 million hits. That’s because Chambers sent me multiple sets of numbers as he refined his research, and I inadvertently used figures from an early email he shared with KrebsOnSecurity. The monthly figures above have been adjusted to reflect that.
Top

TCP Blackbox Recording | BSD Now 240

Post by Jupiter Broadcasting via BSD Now Video Feed »

New ZFS features landing in FreeBSD, MAP_STACK for OpenBSD, how to write safer C code with Clang’s address sanitizer, Michael W. Lucas on sponsor gifts, TCP blackbox recorder, and Dell disk system hacking.

Top




py3status v3.8

Post by ultrabug via Ultrabug »

Another long-awaited release has arrived, thanks to our community!

The changelog is so huge that I had to open an issue and cry for help to make it happen… thanks @lasers for stepping up once again!

Highlights

  • gevent support (-g option) to switch from thread scheduling to greenlets and reduce resource consumption
  • environment variable support in i3status.conf to keep sensitive information out of your config
  • modules can now leverage a persistent data store (see the sketch after this list)
  • hundreds of improvements for various modules
  • we now have an official Debian package
  • we reached 500 stars on GitHub #vanity
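
Here is a rough idea of what the data store looks like from a module’s point of view. This is a sketch based on the documented storage_get/storage_set helpers; double-check the module developer docs for the exact API:

# Toy py3status module using the persistent data store: counts how
# many times the module has been started (the counter survives restarts).
class Py3status:

    def start_counter(self):
        starts = (self.py3.storage_get("starts") or 0) + 1
        self.py3.storage_set("starts", starts)
        return {
            "full_text": "started %d times" % starts,
            "cached_until": self.py3.CACHE_FOREVER,
        }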

Milestone 3.9

  • try to release a version faster than every 4 months (j/k)

The next release will focus on bug fixes and module improvements / standardization.

Thanks contributors!

This release is their work, thanks a lot guys!

  • alex o’neill
  • anubiann00b
  • cypher1
  • daniel foerster
  • daniel schaefer
  • girst
  • igor grebenkov
  • james curtis
  • lasers
  • maxim baz
  • nollain
  • raspbeguy
  • regnat
  • robert ricci
  • sébastien delafond
  • themistokle benetatos
  • tobes
  • woland
Top

Panerabread.com Leaks Millions of Customer Records

Post by BrianKrebs via Krebs on Security »

Panerabread.com, the Web site for the American chain of bakery-cafe fast-casual restaurants by the same name, leaked millions of customer records — including names, email and physical addresses, birthdays and the last four digits of the customer’s credit card number — for at least eight months before it was yanked offline earlier today, KrebsOnSecurity has learned.

The data available in plain text from Panera’s site appeared to include records for any customer who has signed up for an account to order food online via panerabread.com. The St. Louis-based company, which has more than 2,100 retail locations in the United States and Canada, allows customers to order food online for pickup in stores or for delivery.

Redacted records from Panera’s site, which let anyone search by a variety of customer attributes, including phone number, email address, physical address or loyalty account number. In this example, the phone number was a main line at an office building where many different employees apparently registered to order food online.
KrebsOnSecurity learned about the breach earlier today after being contacted by security researcher Dylan Houlihan, who said he initially notified Panera about customer data leaking from its Web site back on August 2, 2017.

A long message thread that Houlihan shared between himself and Panera indicates that Mike Gustavison, Panera’s director of information security, initially dismissed Houlihan’s report as a likely scam. A week later, however, those messages suggest that the company had validated Houlihan’s findings and was working on a fix.

“Thank you for the information we are working on a resolution,” Gustavison wrote.

Panera was alerted about the data leakage in early August 2017, and said it was fixing the problem then.
Fast forward to early this afternoon — exactly eight months to the day after Houlihan first reported the problem — and data shared by Houlihan indicated the site was still leaking customer records in plain text. Worse still, the records could be indexed and crawled by automated tools with very little effort.

For example, some of the customer records include unique identifiers that increment by one for each new record, making it potentially simple for someone to scrape all available customer accounts. The format of the database also lets anyone search for customers via a variety of data points, including by phone number.

“Panera Bread uses sequential integers for account IDs, which means that if your goal is to gather as much information as you can about someone, you can simply increment through the accounts and collect as much as you’d like, up to and including the entire database,” Houlihan said.
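
To make the risk concrete: enumerating sequential IDs takes only a trivial loop. The endpoint and field name below are invented for illustration; this is not Panera’s actual API.

# Hypothetical example only: why sequential account IDs invite scraping.
import requests

for account_id in range(1, 1001):
    resp = requests.get("https://example.com/api/accounts/%d" % account_id)
    if resp.ok:
        print(account_id, resp.json().get("email"))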

Asked whether he saw any indication that Panera had addressed the issue at any point between his August 2017 report and today, Houlihan said no.

“No, the flaw never disappeared,” he said. “I checked on it every month or so because I was pissed.”

Shortly after KrebsOnSecurity spoke briefly with Panera’s chief information officer John Meister by phone today, the company briefly took the Web site offline. As of this publication, the site is back online but the data referenced above no longer appears to be reachable.

Panera took its site down today after being notified by KrebsOnSecurity.
Another data point exposed in these records included the customer’s Panera loyalty card number, which could potentially be abused by scammers to spend prepaid accounts or to otherwise siphon value from Panera customer loyalty accounts.

It is not clear yet exactly how many Panera customer records may have been exposed by the company’s leaky Web site, but incremental customer numbers indexed by the site suggest that number may be higher than seven million. It’s also unclear whether any Panera customer account passwords may have been impacted.

In a written statement, Panera said it had fixed the problem within less than two hours of being notified by KrebsOnSecurity. But Panera did not explain why it appears to have taken the company eight months to fix the issue after initially acknowledging it privately with Houlihan.

“Panera takes data security very seriously and this issue is resolved,” the statement reads. “Following reports today of a potential problem on our website, we suspended the functionality to repair the issue.  Our investigation is continuing, but there is no evidence of payment card information nor a large number of records being accessed or retrieved.”

Update, 8:40 p.m. ET: Minutes after this story was published, Panera gave a statement to Fox News downplaying the severity of this breach, stating that only 10,000 customer records were exposed. Almost in an instant, multiple sources — especially @holdsecurity — pointed out that Panera had basically “fixed” the problem by requiring people to log in to a valid user account at panerabread.com in order to view the exposed customer records (as opposed to letting just anyone with the right link access the records).

Subsequent links shared by Hold Security indicate that this data breach may be far larger than the 7 million customer records initially reported as exposed in this story. The vulnerabilities also appear to have extended to Panera’s commercial division which serves countless catering companies. At last count, the number of customer records exposed in this breach appears to exceed 37 million. Thank you to Panera for pointing out the shortcomings of our research. As of this update, the entire Web site panerabread.com is offline.

For anyone interested in my response to Panera’s apparent end-run around my reporting, see my tweets.
Top

Book Review: Ed Mastery

Post by Dan Langille via Dan Langille's Other Diary »

I don’t normally offer guest posts here, but on this rare occasion I couldn’t say no. Here’s a guest post from Michael W Lucas.

What one of us finds delightful, another person might find loathsome. That’s human nature. I keep reminding myself of this every time people start babbling about the virtues of something I find completely idiotic. Keeping my mouth shut gets me out of many arguments before they start.

People learn by crossing the line between poor taste and simple stupidity. That’s rough, but expected.

A fraction of those folks never learn. They keep crossing that line. A man keeps publicly humiliating himself, never doing better, but doesn’t hurt anyone? We’re all best off ignoring him, or perhaps offering him help.

When someone repeatedly offers terrible advice, and not only gets people to pay him for that terrible advice but somehow has fans that enable him to make a living giving that advice, the question becomes: how much do you want to have that fight? Maybe the person who’s always wrong is sincere in his beliefs. I’m not going to walk into a church whose beliefs I disagree with and throw down at the altar.

At the top of the pyramid of wrong, though, is the person who gives terrible advice, knows it’s terrible advice, and charges for it anyway. And right there, right now, is where I must take a stand.

I’m talking, of course, about Lucas’ latest book, Ed Mastery. He’s obviously running off of Patrick LoPresti’s alt.religion.emacs post advocating ed as the One True Editor. That’s a hilarious post, well worth reading.

It was published in 1991.

Lucas took a gag old enough to be doing its own post-doc work and beat that dead horse until all that remained was a greasy spot on the ground.

First off, ed is not the standard Unix editor. And EVERYBODY KNOWS IT. Despite Dennis Ritchie’s famous slide on the primordial Unix utilities, ed isn’t “a standard.” At the time, it was “the only.” Text editing has advanced since the Sinclair ZX80 was considered a desktop computer.

His cutesy babbling about the necessity of knowing ed? Oh, please. People constantly argue about text editors. Vi! Emacs! Nano! Frankly, any editor you can access in single user mode will do. Modern computers measure memory in gigabytes. The few extra megabytes you’ll need to load all of vim’s overblown dependencies just… don’t… matter. At all.

Lucas carries that “standard editor” trash through the entire book. If you can call a hundred pages of blithering a “book.”

I’m not sure who’s supposed to buy this. Retired sysadmins who remember when memory was measured in bits, because the size of a byte was dictated by the hardware architecture? Unix hipsters? Users looking for gag gifts?

The section on regular expressions might be convenient, yes… for the small percentage of sysadmins unfamiliar with basic regular expressions. No, not extended regexes. Not Perl-compatible regular expressions. Basic regular expressions. The syntax that underlies almost every regular expression implementation in use today. You’ve been exposed, whether you want it or not. You’ve probably learned it by osmosis and excruciating trial and error, not through reading the essentials.

Especially not essentials weighted down by what Lucas passes off as “humor.” My six-year-old neighbor writes better footnotes.

Yes, you can script ed. But we have Perl, and Python, and bash, and sh, and a thousand other ways to have scripts edit files.

The multiple editions of this book? Yes, he says he’ll give money to charity if you buy the expensive version. But I can assure you that even with that expenditure, the expensive edition puts more money in his pocket than the cheap edition. But that’s not the worst part. That other edition, regardless of any changes in the contents, is pure social commentary. A “Manly McManface” edition is pure sucking up to social radicals.

(In that edition’s defense, however, I should say that my wife caught a look at the cover and shouted “That’s what computer books need! More of that!”)

So: don’t buy this book. Seriously. You’ll just encourage him.

It is completely obvious that Lucas does not now have, and in fact never has had, any idea whatsoever about what he writes about. His second edition of Absolute FreeBSD existed both to correct all of the errors from the first edition and to generate a whole new slew of inaccuracies. I have no doubt that the forthcoming third edition will contribute a whole new brew of witchcraft and superstition to systems administration, damaging a whole generation of BSD users.

His haphazard approach to any technology subject could be improved upon by covering the floor with the source code in question, releasing an intoxicated capybara in the room, and making him follow its tracks. An even more sensible approach would be finding him other employment–and no, not writing tasteless Linux “erotica.” Preferably digging ditches, or perhaps painting jail cells. But not crossing guard, because that would put children’s lives at risk.

His work could be excused by delusional overconfidence. Sure, someone should write books on BSD operating systems. Lucas filled that gap. It’s easy to be the best when you’re the only competitor.

But by writing a book on ed, Lucas demonstrates that he knows precisely what he’s doing. He’s offering harmful advice. He’s almost certainly a scam artist.

To be charitable, it’s possible that we’re witnessing a final, perhaps even tragic end state. Lucas’ constant self-deprecation on social media would be called impostor syndrome in anyone competent. And yet, he somehow thinks that he has something worth saying to the world. The irreconcilable conflict might be tearing his psyche apart. It’s possible that if he got extensive treatment and heavy medication, he might be able to contribute to society as a crossing guard.

Otherwise, I wouldn’t be shocked if his personality shatters from the strain and starts attacking itself.
Top

Creating read-only PostgreSQL database users for pg_dump and pg_dumpall

Post by Dan Langille via Dan Langille's Other Diary »

Sometimes you want a user which can only dump the database, but nothing else. Fortunately, I searched and found a solution. I’m writing it down so that next time I only have to search this blog.

I want the user rsyncer to have read rights on the database freebsddiary.org, so it can run pg_dump.

Similarly, I want the same user to be able to run pg_dumpall -g and get users/roles/passwords/etc.

Here is what you’ll get if you try to dump the globals without the required permissions.

[rsyncer@dbclone ~]$ pg_dumpall -g
pg_dumpall: could not connect to database "template1": FATAL:  role "rsyncer" does not exist

[rsyncer@dbclone ~]$ 
Oh, yeah, let’s create the user first.

psql postgres
create user rsyncer with password 'notmyactualpassword';
Now, with that user created, we get farther, but not all the way:

[rsyncer@dbclone ~]$ pg_dumpall -g
--
-- PostgreSQL database cluster dump
--

SET default_transaction_read_only = off;

SET client_encoding = 'SQL_ASCII';
SET standard_conforming_strings = on;

pg_dumpall: query failed: ERROR:  permission denied for relation pg_authid
pg_dumpall: query was: SELECT oid, rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolconnlimit, rolpassword, rolvaliduntil, rolreplication, rolbypassrls, pg_catalog.shobj_description(oid, 'pg_authid') as rolcomment, rolname = current_user AS is_current_user FROM pg_authid WHERE rolname !~ '^pg_' ORDER BY 2
[rsyncer@dbclone ~]$ 
To solve that permissions issue, I did this:

psql postgres
grant select on all tables    in schema pg_catalog to rsyncer;
grant select on all sequences in schema pg_catalog to rsyncer;
Now when we try the dump, we get:

[rsyncer@dbclone ~]$ pg_dumpall -g
--
-- PostgreSQL database cluster dump
--

SET default_transaction_read_only = off;

SET client_encoding = 'SQL_ASCII';
SET standard_conforming_strings = on;

--
-- Roles
--

CREATE ROLE postgres;
ALTER ROLE postgres WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS;
CREATE ROLE rsyncer;
ALTER ROLE rsyncer WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN NOREPLICATION NOBYPASSRLS PASSWORD 'md5cd3276a9ed743fdfa0695607ffab6496';






--
-- PostgreSQL database cluster dump complete
--

[rsyncer@dbclone ~]$ 
There, now we can get all the globals.

Next, we want the user to have read-only access on the database[s] we dump.

My solution involves granting read-only access on tables and sequences. That’s enough for my databases. Note that these grants only apply to tables and sequences that already exist; if you add new ones later, you will need to re-run the grants (or set up ALTER DEFAULT PRIVILEGES).

psql freebsddiary.org
grant connect on database "freebsddiary.org" to rsyncer;

grant select on all tables    in schema public to rsyncer;
grant select on all sequences in schema public to rsyncer;
I also added this to the pg_hba.conf file for this PostgreSQL server:

# TYPE  DATABASE        USER            ADDRESS                 METHOD
local   freebsddiary.org rsyncer                                md5
local   template1        rsyncer                                md5
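
As a final sanity check, it’s worth confirming from code that the account can read but not write. Here is a quick sketch with psycopg2 (2.8 or newer, for the errors module); any catalog table will do for the write test:

# Sanity check: rsyncer should be able to read, but not write.
import psycopg2

conn = psycopg2.connect(dbname="freebsddiary.org", user="rsyncer",
                        password="notmyactualpassword")
cur = conn.cursor()

cur.execute("SELECT count(*) FROM pg_catalog.pg_authid")
print("can read pg_authid: %d roles" % cur.fetchone()[0])

try:
    cur.execute("UPDATE pg_catalog.pg_authid SET rolname = rolname")
    print("unexpected: write succeeded!")
except psycopg2.errors.InsufficientPrivilege:
    conn.rollback()
    print("writes are refused, as intended")
conn.close()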
Top

Coinhive Exposé Prompts Cancer Research Fundraiser

Post by BrianKrebs via Krebs on Security »

A story published here this week revealed the real-life identity behind the original creator of Coinhive — a controversial cryptocurrency mining service that several security firms have recently labeled the most ubiquitous malware threat on the Internet today. In an unusual form of protest against that story, members of a popular German-language image-posting board founded by the Coinhive creator have vented their dismay by donating hundreds of thousands of euros to local charities that support cancer research.

On Monday KrebsOnSecurity published Who and What is Coinhive, an in-depth story which proved that the founder of Coinhive was indeed the founder of the German image hosting and discussion forum pr0gramm[dot]com (not safe for work). I undertook the research because Coinhive’s code is primarily found on tens of thousands of hacked Web sites, and because the until-recently anonymous Coinhive operator(s) have been reluctant to take steps that might curb the widespread abuse of their platform.

One of countless pages of images posted about this author by pr0gramm users in response to the story about Coinhive.
In an early version of its Web site, Coinhive said its service was first tested on pr0gramm, and that the founder(s) of Coinhive considered pr0gramm “their platform” of 11 years (exactly the length of time pr0gramm has been online). Coinhive declined to say who was running their service, and tried to tell me their earlier statement about Coinhive’s longtime affiliation with pr0gramm was a convenient lie used to help jump-start the service by enlisting the help of pr0gramm’s thousands of members.

Undeterred, I proceeded with my research based on the assumption that one or more of the founders of pr0gramm were involved in Coinhive. When I learned the real-life identities of the pr0gramm founders and approached them directly, each deflected questions about their apparent roles in founding and launching Coinhive.

However, shortly after the Coinhive story went live, the original founder of pr0gramm (Dominic Szablewski, a.k.a. “cha0s”) published a blog post acknowledging that he was in fact the creator of Coinhive. What’s more, Coinhive has since added legal contact information to its Web site, and has said it is now taking steps to ensure that it no longer profits from cryptocurrency mining activity after hacked Web sites owners report finding Coinhive’s code on their sites.

Normally, when KrebsOnSecurity publishes a piece that sheds light on a corner of the Internet that would rather remain in the shadows, the response is as predictable as it is swift: Distributed denial-of-service (DDoS) attacks on this site combined with threats of physical violence and harm from anonymous users on Twitter and other social networks.

While this site did receive several small DDoS attacks this week — and more than a few anonymous threats of physical violence and even death related to the Coinhive story — the response from pr0gramm members has been remarkably positive overall.

The pr0gramm community quickly seized on the fact that my last name — Krebs — means “crab” and “cancer” in German. Apparently urged by one of the pr0gramm founders named in the story to express their anger in “objective and polite” ways, several pr0gramm members took to donating money to the Deutsche Krebshilfe (German Cancer Aid/DKMS) Web site as a way to display their unity and numbers.

The protest (pr0test?) soon caught on in the Twitter hashtag “#KrebsIsCancer,” promoted and re-tweeted heavily by pr0gramm members as a means to “Fight Krebs” or fight cancer. According to a statement on DKMS’s Web site, the KrebsIsCancer campaign involved donations from more than 8,300 people totaling 207,500 euros (~USD $256,000).

Update, 2:46 p.m. ET: Updated donation figures per statement posted today on DKMS site.
Top

TrueOS STABLE 18.03 Release

Post by Ken Moore via TrueOS »

The TrueOS team is pleased to announce the availability of a new STABLE release of the TrueOS project (version 18.03). This is a special release due to the security issues impacting the computing world since the beginning of 2018. In particular, mitigating the “Meltdown” and “Spectre” system exploits makes it necessary to update the entire package ecosystem for TrueOS. This release does not replace the scheduled June STABLE update, but provides the necessary and expected security updates for the STABLE release branch of TrueOS, even though this is part-way through our normal release cycle.

Important changes between version 17.12 and 18.03

  • “Meltdown” security fixes: This release contains all the fixes to FreeBSD which mitigate the security issues for systems that utilize Intel-based processors when running virtual machines such as FreeBSD jails. Please note that virtual machines or jails must also be updated to a version of FreeBSD or TrueOS which contains these security fixes.
  • “Spectre” security mitigations: This release contains all current mitigations from FreeBSD HEAD for the Spectre memory-isolation attacks (Variant 2). All 3rd-party packages for this release are also compiled with LLVM/Clang 6 (the “retpoline” mitigation strategy). This fixes many memory allocation issues and enforces stricter requirements for code completeness and memory usage within applications. Unfortunately, some 3rd-party applications became unavailable as pre-compiled packages due to non-compliance with these updated standards. These applications are currently being fixed either by the upstream authors or the FreeBSD port maintainers. If there are any concerns about the availability of a critical application for a specific workflow, please search through the changelog of packages between TrueOS 17.12 and 18.03 to verify the status of the application.

Most systems will need microcode updates for additional Spectre mitigations. The microcode updates are not enabled by default. This work is considered experimental because it is in active development by the upstream vendors. If desired, the microcode updates are available with the new devcpu-data package, which is available in the AppCafe. Install this package and enable the new microcode_update service to apply the latest runtime code when booting the system.

Important security-based package updates

    • LibreSSL is updated from version 2.6.3 -> 2.6.4
      • Reminder: LibreSSL is used on TrueOS to build any package which does not explicitly require OpenSSL. All applications that utilize the SSL transport layer are now running with the latest security updates.
    • Browser updates: (Keep in mind that many browsers have also implemented their own security mitigations in the aftermath of the Spectre exploit.)
      • Firefox: 57.0.1 -> 58.0.2
      • Chromium: 61.0.3163.100 -> 63.0.3239.132
      • Qt5 Webengine (QupZilla, Falkon, many others): 5.7.1 -> 5.9.4

  • All pre-compiled packages for this release are built with the latest versions of LLVM/Clang, unless the package explicitly requires GCC. These packages also utilize the latest compile-time mitigations for memory-access security concerns.

Package changes between 17.12 and 18.03

Summary of Package Changes:

Notes about package statistics:

  • Some packages that have been renamed between releases (like the KDE4 packages) will appear on both the removed list (old name) and the new list (new name) simultaneously.
  • Many of the new packages are the result of the new “flavor” system being activated for Python based ports. Many of these applications now have two packages available, one for Python 2.7 (py27-*) and one for Python 3.6 (py36-*).
  • The updated packages list does not include minor port revision changes. The list only contains packages that had an actual change in the upstream version of the application.
The post TrueOS STABLE 18.03 Release appeared first on TrueOS.
Top

The Return To ptrace | BSD Now 239

Post by Jupiter Broadcasting via BSD Now Video Feed »

OpenBSD firewalling Windows 10, NetBSD’s return to ptrace, TCP Alternative Backoff, the BSD Poetic license, and AsiaBSDcon 2018 videos available.

Top

Omitting the “o” in .com Could Be Costly

Post by BrianKrebs via Krebs on Security »

Take care when typing a domain name into a browser address bar, because it’s far too easy to fat-finger a key and wind up somewhere you don’t want to go. For example, if you try to visit some of the most popular destinations on the Web but omit the “o” in .com (and type .cm instead), there’s a good chance your browser will be bombarded with malware alerts and other misleading messages — potentially even causing your computer to lock up completely. As it happens, many of these domains appear tied to a marketing company whose CEO is a convicted felon and once self-proclaimed “Spam King.”



Matthew Chambers is a security professional and researcher in Atlanta. Earlier this month Chambers penned a post on his personal blog detailing what he found after several users he looks after accidentally mistyped different domains — such as espn[dot]cm.

Chambers said the user who visited that domain told him that after typing in espn.com he quickly had his computer screen filled with alerts about malware and countless other pop-ups. Security logs for that user’s system revealed the user had actually typed espn[dot]cm, but when Chambers reviewed the source code at that Web page he found an innocuous placeholder content page instead.

“One thing we notice is that any links generated off these domains tend to only work one time, if you try to revisit it’s a 404,” Chambers wrote, referring to the standard 404 message displayed in the browser when a Web page is not found. “The file is deleted to prevent researchers from trying to grab it, or automatic scanners from downloading it. Also, some of the exploit code on these sites will randomly vaporize, and they will have no code on them, but were just being weaponized in campaigns. It could be the user agent, or some other factor, but they definitely go dormant for periods of time.”

Espn[dot]cm is one of more than a thousand so-called “typosquatting” domains hosted on the same Internet address (85.25.199.30), including aetna[dot]cm, aol[dot]cm, box[dot]cm, chase[dot]cm, citicards[dot]cm, costco[dot]cm, facebook[dot]cm, geico[dot]cm, hulu[dot]cm, itunes[dot]cm, pnc[dot]cm, slate[dot]cm, suntrust[dot]cm, turbotax[dot]cm, and walmart[dot]cm. I’ve compiled a partial list of the most popular typosquatting domains that are part of this network here (PDF).

KrebsOnSecurity sought to dig a bit deeper into Chambers’ findings, researching some of the domain registration records tied to the list of dot-cm typosquatting domains. Helpfully, all of the domains currently redirect visitors to just one of two landing pages — either antistrophebail[dot]com or chillcardiac[dot]com.

For the moment, if one visits either of these domains directly via a desktop Web browser (I’d advise against this) chances are the site will display a message saying, “Sorry, we currently have no promotions available right now.” Browsing some of them with a mobile device sometimes leads to a page urging the visitor to complete a “short survey” in exchange for “a chance to get an gift [sic] cards, coupons and other amazing deals!”



Those antistrophebail and chillcardiac domains — as well as 1,500+ others — were registered to the email address: ryanteraksxe1@yahoo.com. A Web search on that address doesn’t tell us much, but entering it at Yahoo‘s “forgot password” page lists a partially obfuscated address to which Yahoo can send an account key that may be used to reset the password for the account. That address is k*****ng@mediabreakaway[dot]com.

The full email address is kmanning@mediabreakaway[dot]com. According to the “leadership” page at mediabreakaway[dot]com, the email address ryanteraksxe1@yahoo.com almost certainly belongs to one Kacy Manning, who is listed as the “Manager of Special Projects” at Colorado based marketing firm Media Breakaway LLC.

Media Breakaway is headed by Scott Richter, a convicted felon who’s been successfully sued for spamming by some of the biggest media companies over the years.

In 2003, New York’s attorney general sued Richter and his former company OptInRealBig[dot]com after an investigation by Microsoft found his company was the source of hundreds of millions of spam emails daily touting dubious products and services. OptInRealBig later declared bankruptcy in the face of a $500 million judgment against the company. At the time, anti-spam group Spamhaus listed Richter as the world’s third most prolific spammer. For more on how Richter views his business, check out this hilarious interview that Richter gave to The Daily Show in 2004.

In 2006, Richter paid $7 million to Microsoft in a settlement arising out of a lawsuit alleging illegal spamming.

In 2007, Richter and Media Breakaway were sued by social networking site MySpace; a year later, an arbitration firm awarded MySpace $6 million in damages and attorneys’ fees.

In 2013, the Internet Corporation for Assigned Names and Numbers (ICANN), the nonprofit firm which oversees the domain name registration industry, terminated the registrar agreement for Dynamic Dolphin, a domain name registrar of which Richter was the CEO.

According to the contracts that ICANN requires all registrars to sign, registrars may not have anyone as an officer of the company who has been convicted of a criminal offense involving financial activities. While Richter’s spam offenses all involve civil matters, KrebsOnSecurity discovered several years ago that Richter had actually pleaded guilty in 2003 to a felony grand larceny charge.

Scott Richter. Image: 4law.co.il
Richter, then 32, was busted for conspiring to deal in stolen goods, including a Bobcat, a generator, laptop computers, cigarettes and tools. He later pleaded guilty to one felony count of grand larceny, and was ordered to pay nearly $38,000 in restitution to cover costs linked to the case.

Neither Richter nor Media Breakaway responded to requests for comment on this story.

Chambers said it appears Media Breakaway is selling advertising space to unscrupulous actors who are pushing potentially unwanted programs (PUPs) or adware over the network.

“You end up coming out of the funnel into an advertiser’s payload site, and Media Breakaway is the publisher that routes those ‘parked/typosquatting’ sites as a gateway,” Chambers said. “Research on the IP under VirusTotal Communicating Files shows files like this one (42/66 engines) reporting to the IP address that hosts all these sites, and it goes back to at least March 2015. That file SETUPINST.EXE has detection primarily as LiveSoftAction, GetNow, ElDorado, and Multitoolbar. I think it’s just pushing out [content] for sketchy advertisers.”

It’s remarkable that so many huge corporate brand names aren’t doing more to police their trademarks and to prevent would-be visitors from falling victim to such blatant typosquatting traps.

Under the Uniform Domain-Name Dispute-Resolution Policy (UDRP), trademark holders can lodge typosquatting complaints with the World Intellectual Property Organization (WIPO). The complainant can wrest control over a disputed domain name, but first needs to show that the registered domain name is identical or “confusingly similar” to their trademark, that the registrant has no legitimate interest in the domain name, and that the domain name is being used in bad faith.

Everyone makes typ0s from time to time, which is why it’s a good idea to avoid directly navigating to Web sites you frequent. Instead, bookmark the sites you visit most, particularly those that store your personal and financial information, or that require a login for access.

Oh, and if your security or antivirus software allows you to block all Web sites in a given top-level domain, it might not be a bad idea to block anything coming out of dot-cm (the country code top-level domain for Cameroon): A report published in December 2009 by McAfee found that .cm was the riskiest domain in the world, with 36.7% of the sites posing a security risk to PCs.
Top

A simple method of measuring audio latency

Post by Nirbheek via Nirbheek’s Rantings »

In my previous blog post, I talked about how I improved the latency of GStreamer's default audio capture and render elements on Windows.

An important part of any such work is a way to accurately measure the latencies in your audio path.

Ideally, one would use a mechanism that can track your buffers and give you a detailed breakdown of how much latency each component of your system adds. For instance, with an audio pipeline like this:

audio-capture → filter1 → filter2 → filter3 → audio-output

If you use GStreamer, you can use the latency tracer to measure how much latency filter1 adds, filter2 adds, and so on.

However, sometimes you need to measure latencies added by components outside of your control, for instance the audio APIs provided by the operating system, the audio drivers, or even the hardware itself. In that case it's really difficult, bordering on impossible, to do an automated breakdown.

But we do need some way of measuring those latencies, and I needed that for the aforementioned work. Maybe we can get an aggregated (total) number?

There's a simple way to do that if we can create a loopback connection in the audio setup. What's a loopback you ask?


Essentially, if we can redirect the audio output back to the audio input, that's called a loopback. The simplest way to do this is to connect the speaker-out/line-out to the microphone-in/line-in with a two-sided 3.5mm jack.


Now, when we send an audio wave down to the audio output, it'll show up on the audio input.

Hmm, what if we store the current time when we send the wave out, and compare it with the current time when we get it back? Well, that's the total end-to-end latency!

If we send out a wave periodically, we can measure the latency continuously, even as things are switched around or the pipeline is dynamically reconfigured.

Some of you may notice that this is somewhat similar to how the `ping` command measures latencies across the Internet.



Just like a network connection, the loopback connection can be lossy or noisy, for example if you use loudspeakers and a microphone instead of a wire, or if you have (ugh) noise in your line. But unlike network packets, we lose all context once the waves leave our pipeline, and we have no way of uniquely identifying each wave.

So the simplest reliable implementation is to have only one wave traveling down the pipeline at a time. If we send a wave out, say, once a second, we can wait about one second for it to show up, and otherwise presume that it was lost.

That's exactly how the audiolatency GStreamer plugin that I wrote works! Here you can see its output while measuring the combined latency of the WASAPI source and sink elements:


The first measurement will always be wrong because of various implementation details in the audio stack, but the next measurements should all be correct.

This mechanism does place an upper bound on the latency that we can measure, and on how often we can measure it, but it should be possible to take more frequent measurements by sending a new wave as soon as the previous one was received (with a 1 second timeout). That's an enhancement that can be added if people need it.
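
If you don't use GStreamer, the same send-one-wave-and-time-it idea fits in a few lines of Python with the sounddevice package. This is a rough sketch, not the audiolatency element; the threshold, rate, and burst length are arbitrary:

# Loopback latency probe: play a short burst and find its onset in a
# simultaneous recording. Wire line-out to line-in first.
import numpy as np
import sounddevice as sd

RATE = 48000
t = np.arange(RATE // 100) / RATE                    # 10 ms of samples
burst = 0.8 * np.sin(2 * np.pi * 1000 * t)           # 1 kHz test tone
out = np.concatenate([burst, np.zeros(RATE)])        # 1 s of room for the echo

# playrec() starts playback and capture on the same stream clock.
rec = sd.playrec(out.astype(np.float32), samplerate=RATE, channels=1)
sd.wait()

onset = int(np.argmax(np.abs(rec[:, 0]) > 0.1))      # first loud sample
print("end-to-end latency: %.1f ms" % (1000.0 * onset / RATE))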

Hope you find the element useful; go forth and measure!
Top

Who and What Is Coinhive?

Post by BrianKrebs via Krebs on Security »

Multiple security firms recently identified cryptocurrency mining service Coinhive as the top malicious threat to Web users, thanks to the tendency for Coinhive’s computer code to be used on hacked Web sites to steal the processing power of its visitors’ devices. This post looks at how Coinhive vaulted to the top of the threat list less than a year after its debut, and explores clues about the possible identities of the individuals behind the service.

Coinhive is a cryptocurrency mining service that relies on a small chunk of computer code designed to be installed on Web sites. The code uses some or all of the computing power of any browser that visits the site in question, enlisting the machine in a bid to mine bits of the Monero cryptocurrency.



Monero differs from Bitcoin in that its transactions are virtually untraceable: there is no way for an outsider to track Monero transactions between two parties. Naturally, this quality makes Monero an especially appealing choice for cybercriminals.

Coinhive released its mining code last summer, pitching it as a way for Web site owners to earn an income without running intrusive or annoying advertisements. But since then, Coinhive’s code has emerged as the top malware threat tracked by multiple security firms. That’s because much of the time the code is installed on hacked Web sites — without the owner’s knowledge or permission.

Much like a malware infection by a malicious bot or Trojan, Coinhive’s code frequently locks up a user’s browser and drains the device’s battery as it continues to mine Monero for as long as a visitor is browsing the site.

According to publicwww.com, a service that indexes the source code of Web sites, there are nearly 32,000 Web sites currently running Coinhive’s JavaScript miner code. It’s impossible to say how many of those sites have installed the code intentionally, but in recent months hackers have secretly stitched it into some extremely high-profile Web sites, including sites for such companies as The Los Angeles Times, mobile device maker Blackberry, Politifact, and Showtime.

And it’s turning up in some unexpected places: In December, Coinhive code was found embedded in all Web pages served by a WiFi hotspot at a Starbucks in Buenos Aires. For roughly a week in January, Coinhive was found hidden inside of YouTube advertisements (via Google’s DoubleClick platform) in select countries, including Japan, France, Taiwan, Italy and Spain. In February, Coinhive was found on “Browsealoud,” a service provided by Texthelp that reads web pages out loud for the visually impaired. The service is widely used on many UK government websites, in addition to a few US and Canadian government sites.

What does Coinhive get out of all this? Coinhive keeps 30 percent of whatever amount of Monero cryptocurrency that is mined using its code, whether or not a Web site has given consent to run it. The code is tied to a special cryptographic key that identifies which user account is to receive the other 70 percent.

Coinhive does accept abuse complaints, but it generally refuses to respond to any complaint that does not come from the hacked Web site’s owner. What’s more, when Coinhive does respond to abuse complaints, it does so by invalidating the key tied to the abuse.

But according to Troy Mursch, a security expert who spends much of his time tracking Coinhive and other instances of “cryptojacking,” killing the key doesn’t do anything to stop Coinhive’s code from continuing to mine Monero on a hacked site. Once a key is invalidated, Mursch said, Coinhive keeps 100 percent of the cryptocurrency mined by sites tied to that account from then on.

Mursch said Coinhive appears to have zero incentive to police the widespread abuse that is leveraging its platform.

“When they ‘terminate’ a key, it just terminates the user on that platform, it doesn’t stop the malicious JavaScript from running, and it just means that particular Coinhive user doesn’t get paid anymore,” Mursch said. “The code keeps running, and Coinhive gets all of it. Maybe they can’t do anything about it, or maybe they don’t want to. But as long as the code is still on the hacked site, it’s still making them money.”

Reached for comment about this apparent conflict of interest, Coinhive replied with a highly technical response, claiming the organization is working on a fix to correct that conflict.

“We have developed Coinhive under the assumption that site keys are immutable,” Coinhive wrote in an email to KrebsOnSecurity. “This is evident by the fact that a site key can not be deleted by a user. This assumption greatly simplified our initial development. We can cache site keys on our WebSocket servers instead of reloading them from the database for every new client. We’re working on a mechanism [to] propagate the invalidation of a key to our WebSocket servers.”

AUTHEDMINE

Coinhive has responded to such criticism by releasing a version of their code called “AuthedMine,” which is designed to seek a Web site visitor’s consent before running the Monero mining scripts. Coinhive maintains that approximately 35 percent of the Monero cryptocurrency mining activity that uses its platform comes from sites using AuthedMine.

But according to a report published in February by security firm Malwarebytes, the AuthedMine code is “barely used” compared to the use of Coinhive’s mining code that does not seek permission from Web site visitors. Malwarebytes’ telemetry data (drawn from antivirus alerts when users browse to a site running Coinhive’s code) determined that AuthedMine is used in a little more than one percent of all cases that involve Coinhive’s mining code.

Image: Malwarebytes. The statistics above refer to the number of times per day between Jan. 10 and Feb. 7 that Malwarebytes blocked connections to AuthedMine and Coinhive, respectively.
Asked to comment on the Malwarebytes findings, Coinhive replied that if relatively few people are using AuthedMine it might be because anti-malware companies like Malwarebytes have made it unprofitable for people to do so.

“They identify our opt-in version as a threat and block it,” Coinhive said. “Why would anyone use AuthedMine if it’s blocked just as our original implementation? We don’t think there’s any way that we could have launched Coinhive and not get it blacklisted by Antiviruses. If antiviruses say ‘mining is bad,’ then mining is bad.”

Similarly, data from the aforementioned source code tracking site publicwww.com shows that some 32,000 sites are running the original Coinhive mining script, while the site lists just under 1,200 sites running AuthedMine.

WHO IS COINHIVE?

[Author’s note: Ordinarily, I prefer to link to sources of information cited in stories, such as those on Coinhive’s own site and other entities mentioned throughout the rest of this piece. However, because many of these links either go to sites that actively mine with Coinhive or that include decidedly not-safe-for-work content, I have included screenshots instead of links in these cases. For these reasons, I would strongly advise against visiting pr0gramm’s Web site.]

According to a since-deleted statement on the original version of Coinhive’s Web site — coin-hive[dot]com — Coinhive was born out of an experiment on the German-language image hosting and discussion forum pr0gramm[dot]com.

A now-deleted “About us” statement on the original coin-hive[dot]com Web site. This snapshot was taken on Sept. 15, 2017. Image courtesy archive.org.
Indeed, multiple discussion threads on pr0gramm[dot]com show that Coinhive’s code first surfaced there in the third week of July 2017. At the time, the experiment was dubbed “pr0miner,” and those threads indicate that the core programmer responsible for pr0miner used the nickname “int13h” on pr0gramm. In a message to this author, Coinhive confirmed that “most of the work back then was done by int13h, who is still on our team.” I asked Coinhive for clarity on the disappearance of the above statement from its site concerning its affiliation with pr0gramm. Coinhive replied that it had been a convenient fiction:

“The owners of pr0gramm are good friends and we’ve helped them with their infrastructure and various projects in the past. They let us use pr0gramm as a testbed for the miner and also allowed us to use their name to get some more credibility. Launching a new platform is difficult if you don’t have a track record. As we later gained some publicity, this statement was no longer needed.”

Asked for clarification about the “platform” referred to in its statement (“We are self-funded and have been running this platform for the past 11 years”) Coinhive replied, “Sorry for not making it clearer: ‘this platform’ is indeed pr0gramm.”

After receiving this response, it occurred to me that someone might be able to find out who’s running Coinhive by determining the identities of the pr0gramm forum administrators. I reasoned that if they were not one and the same, the pr0gramm admins almost certainly would know the identities of the folks behind Coinhive.

WHO IS PR0GRAMM?

So I set about trying to figure out who’s running pr0gramm. It wasn’t easy, but in the end all of the information needed to determine that was freely available online.

Let me be crystal clear on this point: All of the data I gathered (and presented in the detailed ‘mind map’ below) was derived from either public Web site WHOIS domain name registration records or from information posted to various social media networks by the pr0gramm administrators themselves. In other words, there is nothing in this research that was not put online by the pr0gramm administrators themselves.

I began with the pr0gramm domain itself which, like many other domains tied to this research, was originally registered to an individual named Dr. Matthias Moench. Mr. Moench is only tangentially connected to this research, so I will dispense with a discussion of him for now except to say that he is a convicted spammer and murderer, and that the last subsection of this story explains who Moench is and why he may be connected to so many of these domains. His is a fascinating and terrifying story.

Through many weeks of research, I learned that pr0gramm was originally tied to a network of adult Web sites linked to two companies that were both incorporated more than a decade ago in Las Vegas, Nevada: Eroxell Limited, and Dustweb Inc. Both of these companies stated they were involved in online advertising of some form or another.

Both Eroxell and Dustweb, as well as several related pr0gramm Web sites (e.g., pr0mining[dot]com, pr0mart[dot]de, pr0shop[dot]com), are connected to a German man named Reinhard Fuerstberger, whose domain registration records include the email address “admin@pr0gramm[dot]com”. Eroxell and Dustweb are each also connected to a company incorporated in Spain called Suntainment SL, of which Fuerstberger is the apparent owner.

As stated on pr0gramm’s own site, the forum began in 2007 as a German-language message board that originated from an automated bot that would index and display images posted to certain online chat channels associated with the wildly popular first-person shooter video game Quake.

As the forum’s user base grew, so did the diversity of the site’s cache of images, and pr0gramm began offering paid so-called “pr0mium” accounts that allowed users to view all of the forum’s not-safe-for-work images and to comment on the discussion board. When pr0gramm first launched pr0miner (the precursor to what is now Coinhive) last July, it invited pr0gramm members to try the code on their own sites, offering those who did so a reward in the form of pr0mium points.

A post on pr0gramm concerning pr0miner, the precursor to what would later become known as Coinhive.

DEIMOS AND PHOBOS

Pr0gramm was launched in late 2007 by a Quake enthusiast from Germany named Dominic Szablewski, a computer expert better known to most on pr0gramm by his screen name “cha0s.”

At the time of pr0gramm’s inception, Szablewski ran a Quake discussion board called chaosquake[dot]de, and a personal blog — phoboslab[dot]org. I was able to determine this by tracing a variety of connections, but most importantly because phoboslab and pr0gramm both once shared the same Google Analytics tracking code (UA-571256).

Reached via email, Szablewski said he did not wish to comment for this story beyond stating that he sold pr0gramm a few years ago to another, unnamed individual.

Multiple longtime pr0gramm members have remarked that since cha0s departed as administrator, the forum has become overrun by individuals with populist far-right political leanings. Mr. Fuerstberger describes himself on various social media sites as a “politically incorrect, Bavarian separatist” [Wiki link added]. What’s more, there are countless posts on pr0gramm that are particularly hateful and denigrating to specific ethnic or religious groups.

Responding to questions via email, Fuerstberger said he had no idea pr0gramm was used to launch Coinhive.

“I can assure you that I heard about Coinhive for the first time in my life earlier this week,” he said. “I can assure you that the company Suntainment has nothing to do with it. I do not even have anything to do with Pr0gram. That’s what my partner does. When I found out now what was abusing my company, I was shocked.”

Below is a “mind map” I assembled to keep track of the connections between and among the various names, emails and Web sites mentioned in this research.

A “mind map” I put together to keep track of and organize my research on pr0gramm and Coinhive. This map was created with Mindnode Pro for Mac. Click to enlarge.

GAMB

I was able to learn the identity of Fuerstberger’s partner — the current pr0gramm administrator, who goes by the nickname “Gamb” — by following the WHOIS data from sites registered to the U.S.-based company tied to pr0gramm (Eroxell Ltd).

Among the many domains registered to Eroxell was deimoslab[dot]com, which at one point was a site that sold electronics. As can be seen below in a copy of the site from 2010 (thanks to archive.org), the proprietor of deimoslab used the same Gamb nickname.

Deimos and Phobos are the names of the two moons of the planet Mars. They are also the names of the fourth and fifth levels in the computer game “Doom.” In addition, they are the names of two spaceships that feature prominently in the game Quake II.

A screenshot of Deimoslab.com from 2010 (courtesy of archive.org) shows the user “Gamb” was responsible for the site.
A passive DNS lookup on an Internet address long used by pr0gramm[dot]com shows that deimoslab[dot]com once shared the server with several other domains, including phpeditor[dot]de. According to a historic WHOIS lookup on phpeditor[dot]de, the domain was originally registered by an Andre Krumb from Gross-Gerau, Germany.

When I discovered this connection, I still couldn’t see anything tying Krumb to “Gamb,” the nickname of the current administrator of pr0gramm. That is, until I began searching the Web for online forum accounts leveraging usernames that included the nickname “Gamb.”

One such site is ameisenforum[dot]de, a discussion forum for people interested in creating and raising ant farms. I didn’t know what to make of this initially and so at first disregarded it. That is, until I discovered that the email address used to register phpeditor[dot]de also was used to register a rather unusual domain: antsonline[dot]de.

In a series of email exchanges with KrebsOnSecurity, Krumb acknowledged that he was the administrator of pr0gramm (as well as chief technology officer at the aforementioned Suntainment SL), but insisted that neither he nor pr0gramm was involved in Coinhive.

Krumb repeatedly told me something I still have trouble believing: That Coinhive was the work of just one individual — int13h, the pr0gramm user credited by Coinhive with creating its mining code.

“Coinhive is not affiliated with Suntainment or Suntainment’s permanent employees in any way,” Krumb said in an email, declining to share any information about int13h. “Also it’s not a group of people you are looking for, it’s just one guy who sometimes worked for Suntainment as a freelancer.”

COINHIVE CHANGES ITS STORY, WEB SITE

Very soon after I began receiving email replies from Mr. Fuerstberger and Mr. Krumb, I started getting emails from Coinhive again.

“Some people involved with pr0gramm have contacted us, saying they’re being extorted by you,” Coinhive wrote. “They want to run pr0gramm anonymously because admins and moderators had a history of being harassed by some trolls. I’m sure you can relate to that. You have them on edge, which of course is exactly where you want them. While we must applaud your efficiency for finding information, your tactics for doing so are questionable in our opinion.”

Coinhive was rather dramatically referring to my communications with Krumb, in which I stated that I was seeking more information about int13h and anyone else affiliated with Coinhive.

“We want to make it very clear again that Coinhive in its current form has nothing to do with pr0gramm or its owners,” Coinhive said. “We tested a ‘toy implementation’ of the miner on pr0gramm, because they had a community open for these kind of things. That’s it.”

When asked about their earlier statement to this author — that the people behind Coinhive claimed pr0gramm as “their platform of 11 years” (which, incidentally, is exactly how long pr0gramm has been online) — Coinhive reiterated its revised statement: That this had been a convenient fabrication, and that the two were completely separate organizations.

On March 22, the Coinhive folks sent me a follow-up email, saying that in response to my inquiries they consulted their legal team and decided to add some contact information to their Web site.

“Legal information” that Coinhive added to its Web site on March 22 in response to inquiries from this reporter.
That addition, which can be viewed at coinhive[dot]com/legal, lists a company in Kaiserslautern, Germany called Badges2Go UG. Business records show Badges2Go is a limited liability company established in April 2017 and headed by a Sylvia Klein from Frankfurt. Klein’s LinkedIn profile states that she is the CEO of several organizations in Germany, including one called Blockchain Future.

“I founded Badges2Go as an incubator for promising web and mobile applications,” Klein said in an instant message chat with KrebsOnSecurity. “Coinhive is one of them. Right now we check the potential and fix the next steps to professionalize the service.”

THE BIZARRE SIDE STORY OF DR. MATTHIAS MOENCH

I have one final and disturbing anecdote to share about some of the Web site registration data in the mind map above. As mentioned earlier, readers can see that many of the domain names tied to the pr0gramm forum administrators were originally registered to an individual named “Dr. Matthias Moench.”

When I first began this research back in January 2018, I guessed that Mr. Moench was almost certainly a pseudonym used to throw off researchers. But the truth is Dr. Moench is indeed a real person — and a very scary individual at that.

According to a chilling 2014 article in the German daily newspaper Die Welt, Moench was the son of a wealthy entrepreneurial family in Germany who was convicted at age 19 of hiring a Turkish man to murder his parents a year earlier in 1988. Die Welt says the man Moench hired used a machete to hack to death Moench’s parents and the family poodle. Moench reportedly later explained his actions by saying he was upset that his parents bought him a used car for his 18th birthday instead of the Ferrari that he’d always wanted.

Matthias Moench in 1989. Image: Welt.de.
Moench was ultimately convicted and sentenced to nine years in a juvenile detention facility, but he would only serve five years of that sentence. Upon his release, Moench claimed he had found religion and wished to become a priest.

Somewhere along the way, however, Moench ditched the priest idea and decided to become a spammer instead. For years, he worked assiduously to pump out spam emails pimping erectile dysfunction medications, reportedly earning at least 21.5 million Euros from his various spamming activities.

Once again, Mr. Moench was arrested and put on trial. In 2015, he and several other co-defendants were convicted of fraud and drug-related offenses. Moench was sentenced to six years in prison. According to Lars-Marten Nagel, the author of the original Die Welt story on Moench’s murderous childhood, German prosecutors say Moench is expected to be released from prison later this year.

It may be tempting to connect the pr0gramm administrators with Mr. Moench, but it seems likely that there is little to no connection here. An incredibly detailed blog post from 2006 which sought to determine the identity of the Matthias Moench named as the original registrant of so many domains (they number in the tens of thousands) found that Moench himself stated on several Internet forums that his name and mailing addresses in Germany and the Czech Republic could be freely used or abused by any like-minded spammer or scammer who wished to hide his identity. Apparently, many people took him up on that offer.

Update, 4:14 p.m. ET: Shortly after this story went live, an update was added to phoboslab[dot]org, the personal blog of Dominic Szablewski, the founder of pr0gramm[dot]com. In it, Szablewski claims responsibility for starting Coinhive. As for who’s running it now, we’re left to wonder. The new content there reads:

“Brian Krebs recently published a story about Coinhive and I want to clarify some things.”

“In 2007 I built a simple image board – pr0gramm – for my friends and me. Over the years, this board has evolved and grown tremendously. When some trolls in 2015 found out who was behind pr0gramm, I received death threats for various moderation decisions on that board. I decided to get out of it and sold pr0gramm. I was still working on pr0gramm behind the scenes and helped with technical issues from time to time, but abstained from moderating completely.”

“Mid last year I had the idea to try and implement a Cryptocurrency miner in WebAssembly. Just as an experiment, to see if it would work. Of course I needed some users to test it. The owners of pr0gramm were generous enough to let me try but had no part in the development. I quickly built a separate page on pr0gramm.com that users could open to earn a premium account by mining. It worked tremendously well.”

“So I decided to expand this idea into its own platform. I launched Coinhive a few months later and quickly realized that I couldn’t do this alone. So I was searching for someone who would take over.”

“I found a company interested in a new venture. They have taken over Coinhive and are now working on a big overhaul.”

Top

San Diego Sues Experian Over ID Theft Service

Post by BrianKrebs via Krebs on Security »

The City of San Diego, Calif. is suing consumer credit bureau Experian, alleging that a data breach first reported by KrebsOnSecurity in 2013 affected more than a quarter-million people in San Diego but that Experian never alerted affected consumers as required under California law.



The lawsuit, filed by San Diego city attorney Mara Elliott, concerns a data breach at an Experian subsidiary that lasted for nine months ending in 2013. As first reported here in October 2013, a Vietnamese man named Hieu Minh Ngo ran an identity theft service online and gained access to sensitive consumer information by posing as a licensed private investigator in the United States.

In reality, the fraudster was running his identity theft service from Vietnam, and paying Experian thousands of dollars in cash each month for access to 200 million consumer records. Ngo then resold that access to more than 1,300 customers of his ID theft service. KrebsOnSecurity first wrote about Ngo’s ID theft service — alternately called Superget[dot]info and Findget[dot]me — in 2011.

Ngo was arrested after being lured out of Vietnam by the U.S. Secret Service. He later pleaded guilty to identity fraud charges and was sentenced in July 2015 to 13 years in prison.

News of the lawsuit comes from The San Diego Union-Tribune, which says the city attorney alleges that some 30 million consumers could have had their information stolen in the breach, including an estimated 250,000 people in San Diego.

“Elliott’s office cited the Internal Revenue Service in saying hackers filed more than 13,000 false returns using the hacked information, obtaining $65 million in fraudulent tax refunds,” writes Union-Tribune reporter Greg Moran.

Experian did not respond to requests for comment.

Ngo’s identity theft service, superget.info, which relied on access to consumer databases maintained by a company that Experian purchased in 2012.
In December 2013, an executive from Experian told Congress that the company was not aware of any consumers who had been harmed by the incident. However, soon after Ngo was extradited to the United States, the Secret Service began identifying and rounding up dozens of customers of Ngo’s identity theft service. And most of Ngo’s customers were indeed involved in tax refund fraud with the states and the IRS.

Tax refund fraud affects hundreds of thousands of U.S. citizens annually. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

In May 2014, KrebsOnSecurity reported that Ngo’s identity theft service was connected to an identity theft ring that operated out of New Jersey and New York and specialized in tax refund and credit card fraud.

In October 2014, a Florida man was sentenced to 27 months for using Ngo’s service to purchase Social Security numbers and bank account records on more than 100 Americans with the intent to open credit card accounts and file fraudulent tax refund requests in the victims’ names. Another customer of Ngo’s ID theft service led U.S. Marshals on a multi-state fugitive chase after being convicted of fraud and sentenced to 124 months in jail.

According to the Union-Tribune, the lawsuit seeks civil monetary penalties under the state’s Unfair Competition Law, as well as a court order compelling the Costa Mesa-based company to formally notify consumers whose personal information was stolen and to pay costs for identity protection services for those people. If the city prevails in its lawsuit, Experian also could be facing some hefty fines: Companies that fail to notify California residents when their personal information is exposed in a breach could face penalties of up to $2,500 for each violation.
Top

Home Internet Speed Guarantees

Post by kris via The Isoblog. »

The Netherlands is a Western European country that guarantees network neutrality and consumer rights. So, by law, providers are sending out consumer notices in which they explain the bandwidth promises they make.

Tweak sends me:

Service Guarantees
and they link to this to explain it.

This document links in turn to the netneutraliteitsverordening (the net neutrality regulation), and defines:

  • The ACM defines the minimale snelheid (minimum speed) as the speed that an internet access service must be able to achieve at all times. The network speed cannot be lower than the minimale snelheid.

  • The ACM defines the normaliter beschikbare snelheid (normally available speed) as the speed that an internet access service achieves in practice at a random moment of the day. They later explain that this is the speed that has to be reached in 8 out of 10 calibrated measurements made over a week, with some spreading rules that make sure a representative set of days and hours is chosen (see the sketch after this list).

  • The ACM defines the maximumsnelheid (maximum speed) as the highest achievable speed that an internet access service can actually deliver within the subscribed plan. In the ACM’s judgment, at least 90% of the maximumsnelheid must be reached in at least one of the ten measurements an end user performs in one week.
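To make the 8-out-of-10 rule concrete, here is a rough sketch of what such a self-check could look like from a shell. The speedtest-cli tool and the 50 Mbit/s contractual value are assumptions on my part; per the spreading rules, the ten runs would also have to be distributed over a week rather than run back to back:

#!/bin/sh
# Count how many of 10 download measurements reach the contractual
# "normaliter beschikbare snelheid" (normally available speed).
NORMAL_MBPS=50   # assumed value; take this from your own subscription
PASS=0
for i in 1 2 3 4 5 6 7 8 9 10; do
    MBPS=$(speedtest-cli --simple | awk '/Download/ {print $2}')
    echo "measurement $i: $MBPS Mbit/s"
    PASS=$(echo "$MBPS $NORMAL_MBPS $PASS" | awk '{print ($1 >= $2) ? $3 + 1 : $3}')
done
echo "$PASS of 10 measurements met the normally available speed (8 required)"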

So the Netherlands actually has a bandwidth guarantee and a set of practical measurement rules – that is, as in Germany, a certified measurement system does not yet exist, but that does not stop the Netherlands from making measurements and seeing what the outcome is:

At the time this policy rule was adopted, no such certified measurement system existed yet. The ACM considers it desirable that end users be able to properly measure their internet connection. The fact that there is not yet a certified measurement system does not mean that end users cannot measure their internet speed; even without a certified measurement system, end users are able to perform such measurements.

There is a bit of advice given on how to actually measure fairly, but since the measurement is not yet certified, a positive outcome is valid, and a repeatable negative outcome is worth further investigation to find the root cause, which may well be a bad connection. So: use a wired connection, make sure there is no other traffic, and use modern hardware:

When measuring fixed internet access, the end user must perform the measurement from their own device over a wired connection. When measuring a mobile connection, the Wi-Fi connection must first be switched off before the measurement is started. When the end user measures the internet speed, the measurement may be influenced by, among other things:

a. the user’s own traffic travelling over the same connection at the same time (cross traffic); and

b. outdated hardware and software.

It is not desirable for the measurement to be influenced by the aforementioned aspects. End users must therefore ensure that these circumstances do not occur.

But all in all, this is pretty relaxed compared to the discussion in Germany, where the demands of the providers basically prevented any measurement from being done in the first place.

This can always be tied down more if it turns out to be problematic.

It’s not, for me:

No complaints from me about the network speed Tweak.nl delivered.
That was done with a remote screen session coming in, so there are a few hundred KB/s of screen update traffic on top of that. Also, any number of Sonoses may or may not have been running.

All in all, this is realistic and way within spec, so I am fine.
Top


VLAN-Zezes-ki in Hardware | BSD Now 238

Post by Jupiter Broadcasting via BSD Now Mobile »

Looking at Lumina Desktop 2.0, 2 months of KPTI development in SmartOS, OpenBSD email service, an interview with Ryan Zezeski, NomadBSD released & John Carmack's programming retreat with OpenBSD.
Top

Survey: Americans Spent $1.4B on Credit Freeze Fees in Wake of Equifax Breach

Post by BrianKrebs via Krebs on Security »

Almost 20 percent of Americans froze their credit file with one or more of the big three credit bureaus in the wake of last year’s data breach at Equifax, costing consumers an estimated $1.4 billion, according to a new study. The findings come as lawmakers in Congress are debating legislation that would make credit freezes free in every state.

The survey, commissioned by small business loan provider Fundera and conducted by Wakefield Research, polled some 1,000 adults in the U.S. Respondents were asked to self-report how much they spent on the freezes; 32 percent said the freezes cost them $10 or less, but 38 percent said the total cost was $30 or more. The average cost to consumers who froze their credit after the Equifax breach was $23.

A credit freeze blocks potential creditors from being able to view or “pull” your credit file, making it far more difficult for identity thieves to apply for new lines of credit in your name.

Depending on your state of residence, the cost of placing a freeze on your credit file can run between $3 and $10 per credit bureau, and in many states the bureaus also can charge fees for temporarily “thawing” and removing a freeze (according to a list published by Consumers Union, residents of four states — Indiana, Maine, North Carolina, South Carolina — do not need to pay to place, thaw or lift a freeze).

Image: Wakefield Research.
In a blog post published today, Fundera said the percentage of people who froze their credit in response to the Equifax breach incrementally decreases as people get older.

“Thirty-two percent of millennials, 16 percent of Generation Xers and 12 percent of baby boomers froze their credit,” Fundera explained. “This data is surprising considering that older generations have been working on building their credit for a longer period of time, and thus they have a more established record to protect.”

However, freeze fees could soon be a thing of the past. A provision included in a bill passed by the U.S. Senate on March 14 would require credit-reporting firms to let consumers place a freeze without paying (the measure is awaiting action in the House of Representatives).

But there may be a catch: According to CNBC, the congressional effort to require free freezes is part of a larger measure, S. 2155, which rolls back some banking regulations put in place after the financial crisis that rocked the U.S. economy a decade ago.

Consumer advocacy groups like Consumers Union and the U.S. Public Interest Research Group (USPIRG) have long advocated for free credit freezes. But they’re not too wild about S. 2155, arguing that it would undermine banking regulations passed in the wake of the 2007-2008 financial crisis.

In a March 8 letter (PDF) opposing the bill, Consumers Union said the security freeze section fails to include a number of important consumer protections, such as a provision for the consumer to temporarily “lift” the freeze in order to open credit.

“Moreover, it could preclude the states from making important improvements to expand protections against identity theft,” Consumers Union wrote.

While it may seem like credit bureaus realized a huge financial windfall as a result of the Equifax breach, it’s important to keep in mind that credit bureaus also make money by selling your credit report to potential lenders — something they can’t do if there’s a freeze on your credit file.

Curious about what a freeze involves, how to file one, and other options aside from the credit freeze? Check out this in-depth Q&A that KrebsOnSecurity published not long after the Equifax breach.

Also, if you haven’t done so lately, take a moment to visit annualcreditreport.com to get a free copy of your credit file. A consumer survey published earlier this month found that roughly half of all Americans haven’t bothered to do this since the Equifax breach.
Top

Low-latency audio on Windows with GStreamer

Post by Nirbheek via Nirbheek’s Rantings »

Digital audio is so ubiquitous that we rarely stop to think or wonder how the gears turn underneath our all-pervasive apps for entertainment. Today we'll look at one specific piece of the machinery: latency.

Let's say you're making a video of someone's birthday party with an app on your phone. Once the recording starts, you don't care when the app starts writing it to disk—as long as everything is there in the end.

However, if you're having a Skype call with your friend, it matters a whole lot how long it takes for the video to reach the other end and vice versa. It's impossible to have a conversation if the lag (latency) is too high.

The difference is, do you need real-time feedback or not?

Other examples, in order of increasingly strict latency requirements, are: live video streaming, security cameras, augmented reality games such as Pokémon Go, multiplayer video games in general, audio effects apps for live music recording, and many many more.

“But Nirbheek”, you might ask, “why doesn't everyone always ‘immediately’ send/store/show whatever is recorded? Why do people have to worry about latency?” and that's a great question!

To understand that, check out my previous blog post, Latency in Digital Audio. It's also a good primer on analog vs digital audio!

Low latency on consumer operating systems



Each operating system has its own set of application APIs for audio, and each has a lower bound on the achievable latency. [Table comparing the audio APIs of each OS and their minimum achievable latencies not reproduced here.]
GStreamer already has plugins for almost all of these¹ (plus others that aren't listed here), and on Windows, GStreamer has been using the DirectSound API by default for audio capture and output since the very beginning.

However, the DirectSound API was deprecated in Windows XP, and with Vista, it was removed and replaced with an emulation layer on top of the newly-released WASAPI. As a result, the plugin can't be configured to have less than 200ms of latency, which makes it unsuitable for all the low-latency use-cases mentioned above. The DirectSound API is quite crufty and unnecessarily complex anyway.

GStreamer is rarely used in video games, but it is widely used for live streaming, audio/video calls, and other real-time applications. Worse, the WASAPI GStreamer plugins had been effectively untouched and unused since the initial implementation in 2008, and were completely broken².

This left no way to achieve low-latency audio capture or playback on Windows using GStreamer.

The situation became particularly dire when GStreamer added a new implementation of the WebRTC spec in this release cycle. People trying it out on Windows would see much higher latencies than they should.

Luckily, I rewrote most of the WASAPI plugin code in January and February, and it should now work well on all versions of Windows from Vista to 10! You can get binary installers for GStreamer or build it from source.

Shared and Exclusive WASAPI


WASAPI allows applications to open sound devices in two modes: shared and exclusive. As the name suggests, shared mode allows multiple applications to output to (or capture from) an audio device at the same time, whereas exclusive mode does not.

Almost all applications should open audio devices in shared mode. It would be quite disastrous if your YouTube videos played without sound because Spotify decided to open your speakers in exclusive mode.

In shared mode, the audio engine has to resample and mix audio streams from all the applications that want to output to that device. This increases latency because it must maintain its own audio ringbuffer for doing all this, from which audio buffers will be periodically written out to the audio device.

In theory, hardware mixing could be used if the sound card supports it, but very few sound cards implement that now since it's so cheap to do in software. On Windows, only high-end audio interfaces used for professional audio implement this.

Another option is to allocate your audio engine buffers directly in the sound card's memory with DMA, but that complicates the implementation and relies on good drivers from hardware manufacturers. Microsoft has tried similar approaches in the past with DirectSound and been burned by it, so it's not a route they took with WASAPI³.

On the other hand, some applications know they will be the only ones using a device, and for them all this machinery is a hindrance. This is why exclusive mode exists. In this mode, if the audio driver is implemented correctly, the application's buffers will be directly written out to the sound card, which will yield the lowest possible latency.

Audio latency with WASAPI


So what kind of latencies can we get with WASAPI?

That depends on the device period that is being used. The term device period is a fancy way of saying buffer size; specifically the buffer size that is used in each call to your application that fetches audio data.

This is the same period with which audio data will be written out to the actual device, so it is the major contributor of latency in the entire machinery.

If you're using the AudioClient interface in WASAPI to initialize your streams, the default period is 10ms. This means the theoretical minimum latency you can get in shared mode would be 10ms (audio engine) + 10ms (driver) = 20ms. In practice, it'll be somewhat higher due to various inefficiencies in the subsystem.

When using exclusive mode, there's no engine latency, so the same number goes down to ~10ms.

These numbers are decent for most use-cases, but like I explained in my previous blog post, this is totally insufficient for pro-audio use-cases such as applying live effects to music recordings. You really need latencies that are lower than 10ms there.

Ultra-low latency with WASAPI


Starting with Windows 10, WASAPI removed most of its aforementioned inefficiencies, and introduced a new interface: AudioClient3. If you initialize your streams with this interface, and if your audio driver is implemented correctly, you can configure a device period of just 2.67ms at 48kHz.
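For reference, that period corresponds to a 128-sample buffer: 128 samples / 48000 samples per second ≈ 2.67ms.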

The best part is that this is the period not just in exclusive mode but also in shared mode, which brings WASAPI almost at par with JACK and CoreAudio.

So that was the good news. Did I mention there's bad news too? Well, now you know.

The first bit is that these numbers are only achievable if you use Microsoft's implementation of the Intel HD Audio standard for consumer drivers. This is mostly fine; you follow some badly-documented steps, and things turn out fine.

Then you realize that if you want to use something more high-end than an Intel HD Audio sound card, unless you use one of the rare pro-audio interfaces that have drivers that use the new WaveRT driver model instead of the old WaveCyclic model, you still see 10ms device periods.

It seems the pro-audio industry made the decision to stick with ASIO since it already provides <5ms latency. They don't care that the API is proprietary, and that most applications can't actually use it because of that. All the apps that are used in the pro-audio world already work with it.

The strange part is that all this information is nowhere on the Internet and seems to lie solely in the minds of the Windows audio driver cabals across the US and Europe. It's surprising and frustrating for someone used to working in the open to see such counterproductive information asymmetry, and I'm not the only one.

This is where I plug open-source and talk about how Linux has had ultra-low latencies for years since all the audio drivers are open-source, follow the same ALSA driver model⁴, and are constantly improved. JACK is probably the most well-known low-latency audio engine in existence, and was born on Linux. People are even using PulseAudio these days to work with <5ms latencies.

But this blog post is about Windows and WASAPI, so let's get back on track.

To be fair, Microsoft is not to blame here. Decades ago they made the decision not to work more closely with the companies that write drivers for their standard hardware components, and they're still paying the price for it. Blue screens of death were the most user-visible consequence, but the current audio situation is an indication that losing control of your platform has more dire consequences.

There is one more bit of bad news. In my testing, I wasn't able to get glitch-free audio capture in the source element using the AudioClient3 interface at the minimum configurable latency in shared mode, even with critical thread priorities, unless nothing else was running on the machine.

As a result, this feature is disabled by default on the source element. This is unfortunate, but not a great loss since the same device period is achievable in exclusive mode without glitches.

Measuring WASAPI latencies


Now that we're back from our detour, the executive summary is that the GStreamer WASAPI source and sink elements now use the latest recommended WASAPI interfaces. You should test them out and see how well they work for you!

By default, a device is opened in shared mode with a conservative latency setting. To force the stream into the lowest latency possible, set low-latency=true. If you're on Windows 10 and want to force-enable/disable the use of the AudioClient3 interface, toggle the use-audioclient3 property.

To open a device in exclusive mode, set exclusive=true. This will ignore the low-latency and use-audioclient3 properties since they only apply to shared mode streams. When a device is opened in exclusive mode, the stream will always be configured for the lowest possible latency by WASAPI.
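For illustration, this is roughly how those properties combine on a gst-launch-1.0 command line; a sketch assuming the wasapisrc and wasapisink element names and a simple loopback pipeline:

# Shared mode at the lowest requestable latency (AudioClient3 where available):
gst-launch-1.0 wasapisrc low-latency=true ! queue ! wasapisink low-latency=true

# Exclusive mode; low-latency and use-audioclient3 are ignored here:
gst-launch-1.0 wasapisrc exclusive=true ! queue ! wasapisink exclusive=true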

To measure the actual latency in each configuration, you can use the new audiolatency plugin that I wrote to get hard numbers for the total end-to-end latency including the latency added by the GStreamer audio ringbuffers in the source and sink elements, the WASAPI audio engine (capture and render), the audio driver, and so on.
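One possible invocation, sketched under the assumption that the element is called audiolatency and exposes a print-latency property (check the plugin documentation for the authoritative details):

gst-launch-1.0 wasapisrc low-latency=true ! audiolatency print-latency=true ! wasapisink low-latency=true

The element emits periodic ticks towards the sink and listens for them on the source, so the speakers need to feed back into the microphone (acoustically or via a loopback cable) for the measurement to be meaningful.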

I look forward to hearing what your numbers are on Windows 7, 8.1, and 10 in all these configurations! ;)


1. The only ones missing are AAudio because it's very new and ASIO which is a proprietary API with licensing requirements.

2. It's no secret that although lots of people use GStreamer on Windows, the majority of GStreamer developers work on Linux and macOS. As a result the Windows plugins haven't always gotten a lot of love. It doesn't help that building GStreamer on Windows can be a daunting task. This is actually one of the major reasons why we're moving to Meson, but I've already written about that elsewhere!

3. My knowledge about the history of the decisions behind the Windows Audio API is spotty, so corrections and expansions on this are most welcome!

4. The ALSA drivers in the Linux kernel should not be confused with the ALSA userspace library.
Top

Transcoding videos for the iPod touch 2nd gen

Post by Administrator via Das Rootserver-Experiment »

I found an old second-generation iPod touch (bought around 2009) that the kids can use to watch videos and listen to music, without the parents running the risk of yet another tablet ending up with a broken display. With 128MB of RAM and a very dated PowerVR graphics chip, however, the thing is quite picky about codecs. Downloaded video material therefore first has to be transcoded, to H.264 Baseline 3.0 with AAC audio. In the process I also scale to 320px width with variable height.


First pass:

ffmpeg -y -i input.mp4 -vf scale=320:-1 -c:v libx264 -profile:v baseline -level 3.0 -b:v 400k -maxrate 1M -pass 1 -c:a copy -f mp4 /dev/null
Second pass; the title is important here, otherwise random text ends up in the metadata:

ffmpeg -y -i input.mp4 -vf scale=320:-1 -c:v libx264 -profile:v baseline -level 3.0 -b:v 400k -maxrate 1M -pass 2 -c:a aac -b:a 128k -f mp4 -metadata title="Cooler Kinderfilm" output.mp4
Done. The transcoded video can now simply be copied to the iPod touch via iTunes or the like.
Top

15-Year-old Finds Flaw in Ledger Crypto Wallet

Post by BrianKrebs via Krebs on Security »

A 15-year-old security researcher has discovered a serious flaw in cryptocurrency hardware wallets made by Ledger, a French company whose popular products are designed to physically safeguard public and private keys used to receive or spend the user’s cryptocurrencies.

Ledger’s Nano-S cryptocurrency hardware wallet. Source: Amazon.
Hardware wallets like those sold by Ledger are designed to protect the user’s private keys from malicious software that might try to harvest those credentials from the user’s computer.  The devices enable transactions via a connection to a USB port on the user’s computer, but they don’t reveal the private key to the PC.

Yet Saleem Rashid, a 15-year-old security researcher from the United Kingdom, discovered a way to acquire the private keys from Ledger devices. Rashid’s method requires an attacker to have physical access to the device, and normally such hacks would be unremarkable because they fall under the #1 rule of security — namely, if an attacker has physical access to your device, then it is not your device anymore.

The trouble is that consumer demand for Ledger’s products has frequently outpaced the company’s ability to produce them (it has sold over a million of its most popular Nano S models to date). This has prompted the company’s chief technology officer to state publicly that Ledger’s built-in security model is so robust that it is safe to purchase their products from a wide range of third-party sellers, including Amazon and eBay.

Ledger’s message to users regarding the lack of anti-tampering mechanisms on its cryptocurrency hardware wallets.
But Rashid discovered that a reseller of Ledger’s products could update the devices with malicious code that would lie in wait for a potential buyer to use them, and then siphon the private key and drain the user’s cryptocurrency account(s).

The crux of the problem is that Ledger’s devices contain a secure processor chip and a non-secure microcontroller chip. The latter is used for a variety of non-security related purposes, from handling the USB connections to displaying text on the Ledger’s digital display, but the two chips still pass information between each other. Rashid found that an attacker could compromise the insecure processor (the microcontroller) on Ledger devices to run malicious code without being detected.

Ledger’s products do contain a mechanism for checking to ensure the code powering the devices has not been modified, but Rashid’s proof-of-concept code — being released today in tandem with an announcement from Ledger about a new firmware update designed to fix the bug — allows an attacker to force the device to sidestep those security checks.

“You’re essentially trusting a non-secure chip not to change what’s displayed on the screen or change what the buttons are saying,” Rashid said in an interview with KrebsOnSecurity. “You can install whatever you want on that non-secure chip, because the code running on there can lie to you.”

Kenneth White, director of the Open Crypto Audit Project, had an opportunity to review Rashid’s findings prior to their publication today. White said he was impressed with the elegance of the proof-of-concept attack code, which Rashid sent to Ledger approximately four months ago. A copy of Rashid’s research paper on the vulnerability is available here (PDF). A video of Rashid demonstrating his attack is below.

White said Rashid’s code subverts the security of the Ledger’s process for generating a backup code for a user’s private key, which relies on a random number generator that can be made to produce non-random results.

“In this case [the attacker] can set it to whatever he wants,” White said. “The victim generates keys and backup codes, but in fact those codes have been predicted by the attacker in advance because he controls the Ledger’s random number generator.”

Rashid said Ledger initially dismissed his findings as implausible. But in a blog post published today, Ledger says it has since fixed the flaw Rashid found — as well as others discovered and reported by different security researchers — in a firmware update that brings Ledger Nano S devices from firmware version 1.3.1 to version 1.4.1 (the company actually released the firmware update on March 6, potentially giving attackers time to reverse engineer Rashid’s method).

The company is still working on an update for its pricier Ledger Blue devices, which company chief security officer Charles Guillemet said should be ready soon. Guillemet said Nano-S devices should alert users that a firmware update is available when the customer first plugs the device into a computer.

“The vulnerability he found was based on the fact that the secure element tries to authenticate the microcontroller, and that authentication is not strong enough,” Guillemet told KrebsOnSecurity. “This update does authentication more tightly so that it’s not possible to fool the user.”

Rashid said unlike its competitors in the hardware wallet industry, Ledger includes no tamper protection seal or any other device that might warn customers that a Nano S has been physically opened or modified prior to its first use by the customer.

“They make it so easy to open the device that you can take your fingernail and open it up,” he said.

Asked whether Ledger intends to add tamper protection to its products, Guillemet said such mechanisms do not add any security.

“For us, a tamper proof seal is nothing that adds security to the device because it’s very easy to counterfeit,” Guillemet said. “You can buy some security seals on the web. For us, it’s a lie to our customers to use this kind of seal to prove the genuineness of our product.”

Guillemet said despite Rashid’s findings, he sees no reason to change his recommendation that interested customers should feel free to purchase the company’s products through third party vendors.

“As we have upgraded our solution to prove the genuineness of our product using cryptographic checks, I don’t see why we should change this statement,” he said.

Nevertheless, given that many cryptocurrency owners turn to hardware wallets like Ledger to safeguard some or all of their virtual currency, it’s probably a good idea if you are going to rely on one of these devices to purchase it directly from the source, and to apply any available firmware updates before using it.
Top

Adding IPv6 to an existing server

Post by Dan Langille via Dan Langille's Other Diary »

I am adding IPv6 addresses to each of my servers. This post assumes the server is up and running FreeBSD 11.1 and you already have an IPv6 address block. This does not cover the creation of an IPv6 tunnel, such as that provided by HE.net. This assumes native IPv6.

In this post, I am using the IPv6 addresses from the IPv6 Address Prefix Reserved for Documentation (i.e. 2001:DB8::/32). You should use your own addresses.

The IPv6 block I have been assigned is 2001:DB8:1001:8d00::/64.

I added this to /etc/rc.conf:

ipv6_activate_all_interfaces="YES"
ipv6_defaultrouter="2001:DB8:1001:8d00::1"
ifconfig_em1_ipv6="inet6 2001:DB8:1001:8d00:d389:119c:9b57:396b prefixlen 64 accept_rtadv" # ns1
The IPv6 address I have assigned to this host is completely random (within the given block). I found a random IPv6 address generator and used it to select d389:119c:9b57:396b as the address for this service within my address block.

I don’t have the reference, but I did read that randomly selecting addresses within your block is a better approach.
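If you want to generate such a suffix locally instead, here is a quick sketch; openssl is just one convenient source of eight random bytes, any other will do:

$ openssl rand -hex 8 | sed 's/..../&:/g; s/:$//'
d389:119c:9b57:396b

The output differs on every run (the line above is just an example); append it to your /64 prefix to form the full address.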

In order to invoke these changes without rebooting, I issued these commands:

[dan@tallboy:~] $ sudo ifconfig em1 inet6 2001:DB8:1001:8d00:d389:119c:9b57:396b prefixlen 64 accept_rtadv
[dan@tallboy:~] $ 

[dan@tallboy:~] $ sudo route add -inet6 default 2001:DB8:1001:8d00::1
add net default: gateway 2001:DB8:1001:8d00::1
If you do the route add first, you will get this error:

[dan@tallboy:~] $ sudo route add -inet6 default 2001:DB8:1001:8d00::1
route: writing to routing socket: Network is unreachable
add net default: gateway 2001:DB8:1001:8d00::1 fib 0: Network is unreachable
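With both commands issued, a quick sanity check that the address and route are in place (a sketch using stock FreeBSD tools, nothing specific to this setup):

[dan@tallboy:~] $ ifconfig em1 | grep inet6
[dan@tallboy:~] $ netstat -rn -f inet6 | grep default

The first should show the new global address next to the link-local one, and the second should list the gateway configured above.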

Got ping?

Let’s try a ping:

[dan@tallboy:~] $ ping6 google.ca
PING6(56=40+8+8 bytes) 2001:DB8:1001:8d00:d389:119c:9b57:396b --> 2607:f8b0:4004:801::2003
^C
--- google.ca ping6 statistics ---
4 packets transmitted, 0 packets received, 100.0% packet loss
[dan@tallboy:~] $ 
OH. Can I ping my gateway?

[dan@tallboy:~] $ ping6 2001:DB8:1001:8d00::1
PING6(56=40+8+8 bytes) 2001:DB8:1001:8d00:d389:119c:9b57:396b --> 2001:DB8:1001:8d00::1
^C
--- 2001:DB8:1001:8d00::1 ping6 statistics ---
2 packets transmitted, 0 packets received, 100.0% packet loss
[dan@tallboy:~] $ 
No.

I bet it’s my firewall.

I am using pf as my firewall. IPv6 relies on ICMPv6 for essentials such as neighbor discovery, so a firewall that blocks it breaks even on-link pings. I added this rule to /etc/pf.conf:

pass quick inet6 proto ipv6-icmp all keep state
And reloaded the rules:

$ sudo pfctl -f /etc/pf.conf 
Now the pings work:

[dan@tallboy:~] $ ping6 2001:DB8:1001:8d00::1
PING6(56=40+8+8 bytes) 2001:DB8:1001:8d00:d389:119c:9b57:396b --> 2001:DB8:1001:8d00::1
16 bytes from 2001:DB8:1001:8d00::1, icmp_seq=0 hlim=64 time=0.890 ms
16 bytes from 2001:DB8:1001:8d00::1, icmp_seq=1 hlim=64 time=0.876 ms
^C
--- 2001:DB8:1001:8d00::1 ping6 statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.876/0.883/0.890/0.007 ms
[dan@tallboy:~] $ 
Not shown in the above is a change of block. I originally had one IPv6 block assigned to me, but after a data center move, I was given another block.
Top

Reverse Engineering and Serial Adapter Protocols

Post by Flameeyes via Flameeyes's Weblog »

In the comments to my latest post on the Silicon Labs CP2110, the first comment got me more than a bit upset because it was effectively trying to mansplain to me how a serial adapter (or more properly a USB-to-UART adapter) works. Then I realized there’s one thing I can do better than complain, and that is providing even more information on this for the next person who might need it. Because I wish I knew half of what I know now back when I tried to write the driver for the CH341.

So first of all, what are we talking about? UART is a very wide definition for any interface that implements serial communication between a host and a device. The term “serial port” probably brings different ideas to mind depending on the background of a given person, whether it is mice and modems connected to PCs, or servers’ serial terminals, or programming interfaces for microcontrollers. For the most part, people in the “consumer world” think of serial as RS-232, but people who have experience with complex automation systems, whether home, industrial, or vehicle automation, have RS-485 as their main reference. None of that actually matters here, since these standards mostly deal with electrical and mechanical characteristics.

As physical serial ports stopped appearing on computers many years ago, most users moved to USB adapters. These adapters are all different from one another, and that’s why there are around 40KSLOC of serial adapter drivers in the Linux kernel (according to David’s SLOCCount). And that’s without counting the remaining 1.5KSLOC for implementing CDC ACM, which is the supposedly-standard approach to serial adapters.

Usually the adapter is placed either directly on the “gadget” that needs to be connected, in which case it exposes a USB connector, or on a cable used to connect to it, in which case the device usually has a TRS or similar connector. TRS-based serial cables appear to have become more and more popular thanks to osmocom, as they are relatively inexpensive to build, both as cables and as connectors onto custom boards.

Serial interface endpoints in operating systems (/dev/tty{S,USB,ACM}* on Linux, COM* on Windows, and so on) do not only transfer data between host and device; they also provide configuration of parameters such as transmission rate and “symbol shape” — you may or may not have heard references to something like “9600n8”, which is a common way to express the transmission protocol of a serial interface: 9600 symbols per second (“baud rate”), no parity, 8 bits per symbol. You can call these “out of band” parameters, as they are transmitted to the UART interface, but not to the device itself, and they are the crux of the matter when interacting with these USB-to-UART adapters.

I already wrote notes about USB sniffing, so I won’t go too much into detail there, but most of the time when you’re trying to figure out what the control software sends to a device, you start by taking a USB trace, which gives you a list of USB Request Blocks (effectively, transmission packets), and you get to figure out what’s going on there.

For those devices that use USB-to-UART adapters and actually use the OS-provided serial interface (that is, COM* under Windows, where most of the control software has to run), you could use specialised software to only intercept the communication on that interface… but I don’t know of any such modern software, while there are at least a few well-defined interfaces to intercept USB communication. And that would not work for software that accesses the USB adapter directly from userspace, which is always the case for the Silicon Labs CP2110, but is also the case for some of the FTDI devices.

To be fair, for those devices that use TRS, I have actually considered just intercepting the serial protocol using the Saleae Logic Pro, but besides being overkill, it would cover just a tiny fraction of the devices that can be intercepted that way — as the more modern ones include the USB-to-UART chip straight on the device, which is also the case for the meter using the CP2110 I referenced earlier.

Within the request blocks you’ll have not just the serial communication, but also all the related out-of-band information, which is usually terminated at the adapter/controller rather than being forwarded on to the device. The amount of information changes widely between adapters. Out of those I have had direct experience with, I found one (TI3420) that requires a full firmware upload before it will start working, which means recording everything from the moment you plug in the device produces a lot more noise than you would expect. But most of those I dealt with had very simple interfaces, using Control transfers for out-of-band configuration, and Bulk or Interrupt1 transfers for transmitting the actual serial data.

With these simpler interfaces, my “analysis” scripts (if you allow me the term; I don’t think they are that complicated) can produce a “chatter” file quite easily by ignoring the whole out-of-band configuration. Then I can analyse those chatter files to figure out the device’s actual protocol, and for the most part it’s a matter of trying between one and five combinations of transmission protocol to figure out the right one to speak to the device — in glucometerutils I have two drivers using 9600n8 and two drivers using 38400n8. In some cases, such as the TI3420 one, I actually had to figure out the configuration packet (thanks to the Linux kernel driver and the datasheet) to realise that it was using 19200n8 instead.

But again, for those, the “decoding” is just a matter of filtering away part of the transmission to keep the useful parts. For others it’s not as easy.

0029 <<<< 00000000: 30 12                                             0.
0031 <<<< 00000000: 05 00                                             ..
0033 <<<< 00000000: 2A 03                                             *.
0035 <<<< 00000000: 42 00                                             B.
0037 <<<< 00000000: 61 00                                             a.
0039 <<<< 00000000: 79 00                                             y.
0041 <<<< 00000000: 65 00                                             e.
0043 <<<< 00000000: 72 00                                             r.
This is an excerpt from the chatter file of a session with my Contour glucometer. What happens here is that instead of buffering the transmission and sending a single request block with a whole string, the adapter (FTDI FT232RL) sends short bursts, probably to reduce latency and keep a more accurate serial protocol (which is important for devices that need accurate timing, for instance some in-chip programming interfaces). This would also be easy to recompose, except it also comes with

0927 <<<< 00000000: 01 60                                             .`
0929 <<<< 00000000: 01 60                                             .`
0931 <<<< 00000000: 01 60                                             .`
which I’m somewhat sceptical come from the device itself. I have not paid enough attention yet to figure out from the kernel driver whether this data is marked as coming from the device, or whether it is some kind of keepalive or synchronisation primitive of the adapter.

In the case of the CP2110, the first session I captured starts with:

0003 <<<< 00000000: 46 0A 02                                          F..
0004 >>>> 00000000: 41 01                                             A.
0006 >>>> 00000000: 50 00 00 4B 00 00 00 03  00                       P..K.....
0008 >>>> 00000000: 01 51                                             .Q
0010 >>>> 00000000: 01 22                                             ."
0012 >>>> 00000000: 01 00                                             ..
0014 >>>> 00000000: 01 00                                             ..
0016 >>>> 00000000: 01 00                                             ..
0018 >>>> 00000000: 01 00                                             ..
and I can definitely tell you that the first three URBs are not sent to the device at all. That’s because HID (the higher-level protocol that the CP2110 uses on top of USB) uses the first byte of the block to identify the “report” it sends or receives. Checking these against AN434 gives me a hint of what’s going on:

  • report 0x46 is “Get Version Information” — CP2110 always returns 0x0A as first byte, followed by a device version, which is unspecified; probably only used to confirm that the device is right, and possibly debugging purposes;
  • report 0x41 is “Get/Set UART Enabled” — 0x01 just means “turn on the UART”;
  • report 0x50 is “Get/Set UART Config” — and this is a bit more complex to parse: the first four bytes (0x00004B00) define the baud rate, which is 19200 symbols per second (see the quick check after this list); then follows one byte for parity (0x00, no parity), one for flow control (0x00, no flow control), one for the number of data bits (0x03, 8 bits per symbol), and finally one for the stop bit (0x00, short stop bit). That’s a long way to say that this is configured as 19200n8.
  • report 0x01 is the actual data transfer, which means the transmission to the device starts with 0x51 0x22 0x00 0x00 0x00 0x00.
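As a quick sanity check of that baud-rate field, any shell printf can do the conversion; 0x00004B00 is indeed 19200:

$ printf '%d\n' 0x00004B00
19200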
This means that I need a smarter analysis script that understands this protocol (which may be as simple as just ignoring anything that does not use report 0x01) to figure out what the control software is sending.

And at the same time, it needs code to know how to “talk serial” to this device. Usually the out-of-band configuration is done by a kernel driver: you ioctl() the serial device to the transmission protocol you need, and the driver sends the right request block to the USB endpoint. But in the case of the CP2110 device, there’s no kernel driver implementing this, at least per Silicon Labs’ design: since HID devices are usually exposed to userland, and in particular to non-privileged applications, sending and receiving the reports can be done directly from the apps. So indeed there is no COM* device exposed on Windows, even with the drivers installed.

Could someone (me?) write a Linux kernel driver that exposes the CP2110 as a serial, rather than HID, device? Sure. It would allow accessing these devices without special userland code, but it would require fiddling around with the HID subsystem a bit to have it ignore the original device, and that means it’ll probably break any application built with Silicon Labs’ own development kit, unless someone has a suggestion on how to have both interfaces available at the same time. But I think I’ll stick with the idea of providing a Free and Open Source implementation of the protocol, for Python. And maybe add support for it to pyserial to make it easier for me to use it.


  1. All these terms make more sense if you have at least a bit of knowledge of how USB works behind the scenes, but I don’t want to delve too much into that.
    [return]
Top

Using nsupdate to change NS servers

Post by Dan Langille via Dan Langille's Other Diary »

You have an old DNS server: tallboy.example.org

You have a new DNS server: ns1.example.org

You have a domain, example.com, for which you want to swap the old DNS server for the new one using nsupdate.

NOTE: the domain is example.com

The NS servers are in example.org (different domains).

These are the commands you issue:

update delete example.com.      IN NS tallboy.example.org.
update add    example.com. 3600 IN NS     ns1.example.org.
send
Of note, you mention the domain first, the NS server second.
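
For completeness, a full interactive session might look like the following; the server name and key file are placeholders, and the -k key is only needed if the zone requires authenticated (TSIG-signed) updates:

nsupdate -k Kexample.com.+157+12345.key
> server tallboy.example.org
> zone example.com
> update delete example.com. IN NS tallboy.example.org.
> update add example.com. 3600 IN NS ns1.example.org.
> send
> quit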

This took me a while to figure out tonight.

You will also want to update the records with your domain registrar.
Top

Adrian Lamo, ‘Homeless Hacker’ Who Turned in Chelsea Manning, Dead at 37

Post by BrianKrebs via Krebs on Security »

Adrian Lamo, the hacker probably best known for breaking into The New York Times‘s network and for reporting Chelsea Manning‘s theft of classified documents to the FBI, was found dead in a Kansas apartment on Wednesday. Lamo was widely reviled and criticized for turning in Manning, but that chapter of his life eclipsed the profile of a complex individual who taught me quite a bit about security over the years.

Adrian Lamo, in 2006. Source: Wikipedia.
I first met Lamo in 2001 when I was a correspondent for Newsbytes.com, a now-defunct tech publication that was owned by The Washington Post at the time. A mutual friend introduced us over AOL Instant Messenger, explaining that Lamo had worked out a simple method allowing him to waltz into the networks of some of the world’s largest media companies using nothing more than a Web browser.

The panoply of alternate nicknames he used on instant messenger in those days shed light on a personality not easily grasped: Protagonist, Bitter Geek, AmINotMerciful, Unperceived, Mythos, Arcane, truefaith, FugitiveGame.

In this, as in so many other ways, Lamo was a study in contradictions: Unlike most other hackers who break into online networks without permission, he didn’t try to hide behind the anonymity of screen names or Internet relay chat networks.

By the time I met him, Adrian had already earned the nickname “the homeless hacker” because he had no fixed address, and found shelter most evenings in abandoned buildings or on friends’ couches. He launched the bulk of his missions from Internet cafes or through the nearest available dial-up connections, using an old Toshiba laptop that was missing seven keys. His method was the same in every case: find security holes; offer to fix them; refuse payment in exchange for help; wait until the hole is patched; alert the media.

Lamo had previously hacked into the likes of AOL Time Warner, Comcast, MCI WorldCom, Microsoft, SBC Communications and Yahoo after discovering that these companies had enabled remote access to their internal networks via Web proxies, a kind of security by obscurity that allowed anyone who knew the proxy’s Internet address and port number to browse internal shares and other network resources of the affected companies.

By 2002, Lamo had taken to calling me on the phone frequently to relate his various exploits, often spoofing his phone number to make it look like the call had come from someplace ominous or important, such as The White House or the FBI. At the time, I wasn’t actively taking any measures to encrypt my online communications, or to suggest that my various sources do likewise. After a few weeks of almost daily phone conversations with Lamo, however, it became abundantly clear that this had been a major oversight.

In February 2002, Lamo told me that he’d found an open proxy on the network of The New York Times that allowed him to browse the newsroom’s corporate intranet. A few days after that conversation, Lamo turned up at Washingtonpost.com’s newsroom (then in Arlington, Va.). Just around the corner was a Kinkos, and Adrian insisted that I follow him to the location so he could get online and show me his discovery firsthand.

While inside the Times’ intranet, he downloaded a copy of the Times’ source list, which included phone numbers and contact information for such household names as Yogi Berra, Warren Beatty, and Robert Redford, as well as high-profile political figures – including Palestinian leader Yassir Arafat and Secretary of State Colin Powell. Lamo also added his own contact information to the file. My exclusive story in Newsbytes about the Times hack was soon picked up by other news outlets.

In August 2003, federal prosecutors issued an arrest warrant for Lamo in connection with the New York Times hack, among other intrusions. The next month, The Washington Post’s attorneys received a letter from the FBI urging them not to destroy any correspondence I might have had with Lamo, and warning that my notes may be subpoenaed.

In response, the Post opted to take my desktop computer at work and place it in storage. We also received a letter from the FBI requesting an interview (that request was summarily denied). In October 2003, the Associated Press ran a story saying the FBI didn’t follow proper procedures when it notified reporters that their notes concerning Lamo might be subpoenaed (the DOJ’s policy was to seek materials from reporters only after all other investigative steps had been exhausted, and then only as a last resort).

In 2004, Lamo pleaded guilty to one felony count of computer crimes against the Times, as well as LexisNexis and Microsoft. He was sentenced to six months’ detention and two years of probation, and ordered to pay $65,000 in restitution.

Several months later while attending a formal National Press Foundation dinner at the Washington Hilton, my bulky Palm Treo buzzed in my suit coat pocket, signaling a new incoming email message. The missive was blank save for an unusually large attachment. Normally, I would have ignored such messages as spam, but this one came from a vaguely familiar address: adrian.lamo@us.army.mil. Years before, Lamo had told me he’d devised a method for minting his own .mil email addresses.

The attachment turned out to be the Times’ newsroom source list. The idea of possessing such information was at once overwhelming and terrifying, and for the rest of the evening I felt certain that someone was going to find me out (it didn’t help that I was seated adjacent to a table full of NYT reporters and editors). It was difficult not to stare at the source list and wonder at the possibilities. But ultimately, I decided the right thing to do was to simply delete the email and destroy the file.

EARLY LIFE

Lamo was born in 1981 outside of Boston, Mass., into an educated, bilingual family. Lamo’s parents say that from an early age he exhibited an affinity for computers and complex problem solving. In grade school, Lamo cut his teeth on a Commodore 64, but his parents soon bought him a more powerful IBM PC when they grasped the extent of his talents.

“Ever since he was very young he has shown a tendency to be a lateral thinker, and any problem you put in front of him with a computer he could solve almost immediately,” Lamo’s mother Mary said in an interview in 2003. “He has a gifted analytical mind and a natural curiosity.”

By the time he got to high school, Lamo had graduated to a laptop computer. During a computer class his junior year, Lamo upstaged his teacher by solving a computer problem the instructor insisted was insurmountable. After an altercation with the teacher, he was expelled. Not long after that incident, Lamo earned his high school equivalency degree and left home for a life on his own.

For many years after that he lived a vagabond’s existence, traveling almost exclusively on foot or by Greyhound bus, favoring the affordable bus line for being the “only remaining form of mass transit that offers some kind of anonymity.” When he wasn’t staying with friends, he passed the night in abandoned buildings or under the stars.

In 1995, Lamo landed contract work at a promising technology upstart called America Online, working on “PlanetOut.com,” an online forum that catered to the gay and lesbian community. At the time, advertisers paid AOL based on the amount of time visitors spent on the site, and Lamo’s job was to keep people glued to the page, chatting them up for hours at a time.

Ira Wing, a security expert at one of the nation’s largest Internet service providers, met Lamo that year at PlanetOut and the two became fast friends. It wasn’t long before he joined in one of Lamo’s favorite distractions, one that would turn out to be an eerie offshoot of the young hacker’s online proclivities: exploring the labyrinth of California’s underground sewage networks and abandoned mines.

Since then, Lamo kept in touch intermittently, popping in and out of Wing’s life at odd intervals. But Wing proved a trustworthy and loyal friend, and Lamo soon granted him power of attorney over his affairs should he run into legal trouble.

In 2002, Wing registered the domain “freeadrian.com,” as a joke. He’d later remark on how prescient a decision that had been.

“Adrian is like a fast moving object that has a heavy effect on anyone’s life he encounters,” Wing told this reporter in 2003. “And then he moves on.”

THE MANNING AFFAIR

In 2010, Lamo was contacted via instant message by Chelsea Manning, a transgender Army private who was then known as Bradley Manning. The Army private confided that she’d leaked a classified video of a helicopter attack in Baghdad that killed 12 people (including two Reuters employees) to Wikileaks. Manning also admitted to handing Wikileaks some 260,000 classified diplomatic cables.

Lamo reported the theft to the FBI. In explaining his decision, Lamo told news publications that he was worried the classified data leak could endanger lives.

“He was just grabbing information from where he could get it and trying to leak it,” Mr. Lamo told The Times in 2010.

Manning was later convicted of leaking more than 700,000 government records, and received a 35-year prison sentence. In January 2017, President Barack Obama commuted Manning’s sentence after she’d served seven years of it. In January 2018, Manning filed to run for a Senate seat in Maryland.

HOMELESS IN WICHITA

The same month he reported Manning to the feds, Lamo told Wired.com that he’d been diagnosed with Asperger Syndrome after being briefly hospitalized in a psychiatric ward. Lamo told Wired that he suspected someone had stolen his backpack, and that paramedics were called when the police responding to reports of the alleged theft observed him acting erratically and perhaps slurring his speech.

Wired later updated the story to note that Lamo’s father had reported him to the Sacramento Sheriff’s office, saying he was worried that his son was over-medicating himself with prescription drugs.

In 2011, Lamo told news outlet Al Jazeera that he was in hiding because he was getting death threats for betraying Manning’s confidence and turning her in to the authorities. In 2013, he told The Guardian that he’d struggled with substance abuse “for a while.”

It’s not yet certain what led to Lamo’s demise. He was found dead in a Wichita apartment on March 14. According to The Wichita Eagle, Lamo had lived in the area for more than a year. The paper quoted local resident Lorraine Murphy, who described herself as a colleague and friend of Lamo’s. When Murphy sent him a message in December 2016 asking him what he was up to, he reportedly replied “homeless in Wichita.”

“Adrian was always homeless or on the verge of it,” Murphy is quoted as saying. “He bounced around a great deal, for no particular reason. He was a believer in the Geographic Cure. Whatever goes wrong in your life, moving will make it better. And he knew people all over the country.”

The Eagle reports that Wichita police found no signs of foul play or anything suspicious about Lamo’s death. A toxicology test was ordered but the results won’t be available for several weeks.
Top

DPG 2018, the proof: Yes we've been in Berlin!

Post by Andreas via the dilfridge blog »

The 2018 spring meeting of the DPG condensed matter physics section in Berlin is over, and we've all listened to interesting talks and seen exciting physics. And we've also presented the Lab::Measurement poster! Here's the photo proof of Simon explaining our software... click on the image or the link for a larger version!
Top


AsiaBSDcon 2018 | BSD Now 237

Post by Jupiter Broadcasting via BSD Now Mobile »

AsiaBSDcon review, Meltdown & Spectre Patches in FreeBSD stable, Interview with MidnightBSD founder, 8 months with TrueOS, the mysteries of GNU & BSD split
Top

Yak Shaving: Silicon Labs CP2110 and Linux

Post by Flameeyes via Flameeyes's Weblog »

One of my favourite pastimes in the past years has been reverse engineering glucometers for the sake of writing a utility package to export their data. Sometimes, in the quest of just getting data out of a meter, I end up embarking on yak shaves that are particularly bothersome, as they are useful only for me and no one else.

One of these yak shaves might be more useful to others, but it will have to be seen. I got my hands on a new meter, which I will review later on. This meter has software for Windows to download the readings, so it’s a good target for reverse engineering. What surprised me, though, was that when I first connected the device to my Linux laptop, it came up as an HID device, described as a “USB HID to UART adapter”: the device uses a CP2110 adapter chip by Silicon Labs, and it’s the first time I saw this particular chip (or even class of chip) in my life.

Effectively, this device piggybacks on the HID interface, which allows vendor-specified protocols to be implemented in user space without needing in-kernel drivers. I’m not sure if I should be impressed by the cleverness or disgusted by the workaround. In either case, it means that you end up with a stacked protocol design: the glucometer protocol itself is serial-based, implemented on top of a serial-like software interface, which converts it to the CP2110 protocol, which is encapsulated into HID packets, which are then sent over USB…

The good thing is that, as the datasheet reports, the protocol is available: “Open access to interface specification”. And indeed on the download page for the device, there’s a big archive of just-about-everything, including a number of precompiled binary libraries and a bunch of documents, among which figures AN434, which describes the full interface of the device. Source code is also available, but having spot-checked it, it appears it has no license specification, and as such is to be considered proprietary, and possibly virulent.

So now I’m warming up to the idea of doing a bit more yak shaving and, for once, trying to help more than just myself. I need to understand this protocol for two purposes: one is obviously having the ability to communicate with the meter that uses that chip; the other is being able to understand what the software is telling the device and vice versa.

This means I need to have generators for the host side, but parsers for both. Luckily, construct should make that part relatively painless, and make it very easy to write (if not maintain, given the amount of API breakages) such a parser/generator library. And of course this has to be in Python because that’s the language my utility is written in.
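
To give a taste of what that might look like, here is a sketch only, with field names of my own choosing, following the layout of the “Get/Set UART Config” report as documented in AN434:

from construct import Byte, Const, Enum, Int32ub, Struct

# Parser/generator for the CP2110 UART Config feature report (0x50).
# Only the enum values seen in my captures are mapped here.
uart_config = Struct(
    "report_id" / Const(0x50, Byte),
    "baudrate" / Int32ub,
    "parity" / Enum(Byte, none=0),
    "flow_control" / Enum(Byte, none=0),
    "data_bits" / Enum(Byte, eight=3),
    "stop_bits" / Enum(Byte, short=0),
)

# The 19200n8 configuration block seen on the wire parses cleanly:
print(uart_config.parse(bytes.fromhex("5000004b0000000300")))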

The other thing that I realized as I was toying with the idea of writing this is that, done right, it can be used together with facedancer, to implement the gadget side purely in Python. Which sounds like a fun project for those of us into that kind of thing.

But since this time this is going to be something more widely useful, and not restricted to my glucometer work, I’m now looking to release this using a different process, as that would allow me to respond to issues and code reviews from my office as well as during the (relatively little) spare time I have at home. So expect this to take quite a bit longer to be released.

At the end of the day, what I hope to have is an Apache 2 licensed Python library that can parse both host-to-controller and controller-to-host packets, and also implement it well enough on the client side (based on the hidapi library, likely) so that I can just import the module and use it for a new driver. Bonus points if I can use this to implement a test fake framework for the glucometer tests.

In all of this, I want to make sure to thank Silicon Labs for releasing the specification of the protocol. It’s not always that you can just google up the device name to find the relevant protocol documentation, and even when you do it’s hard to figure out if it’s enough to implement a driver. The fact that this is possible surprised me pleasantly. On the other hand I wish they actually released their code with a license attached, and possibly a widely-usable one such as MIT or Apache 2, to allow users to use the code directly. But I can see why that wouldn’t be particularly high in their requirements.

Let’s just hope this time around I can do something for even more people.
Top

Who Is Afraid of More Spams and Scams?

Post by BrianKrebs via Krebs on Security »

Security researchers who rely on data included in Web site domain name records to combat spammers and scammers will likely lose access to that information for at least six months starting at the end of May 2018, under a new proposal that seeks to bring the system in line with new European privacy laws. The result, some experts warn, will likely mean more spams and scams landing in your inbox.



On May 25, the General Data Protection Regulation (GDPR) takes effect. The law, enacted by the European Parliament, requires companies to get affirmative consent for any personal information they collect on people within the European Union. Organizations that violate the GDPR could face fines of up to four percent of global annual revenues.

In response, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — has proposed redacting key bits of personal data from WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges (IP addresses).

Under current ICANN rules, domain name registrars should collect and display a variety of data points when someone performs a WHOIS lookup on a given domain, such as the registrant’s name, address, email address and phone number. Most registrars offer a privacy protection service that shields this information from public WHOIS lookups; some registrars charge a nominal fee for this service, while others offer it for free.

But in a bid to help registrars comply with the GDPR, ICANN is moving forward on a plan to remove critical data elements from all public WHOIS records. Under the new system, registrars would collect all the same data points about their customers, yet limit how much of that information is made available via public WHOIS lookups.

The data to be redacted includes the name of the person who registered the domain, as well as their phone number, physical address and email address. The new rules would apply to all domain name registrars globally.

ICANN has proposed creating an “accreditation system” that would vet access to personal data in WHOIS records for several groups, including journalists, security researchers, and law enforcement officials, as well as intellectual property rights holders who routinely use WHOIS records to combat piracy and trademark abuse.

But at an ICANN meeting in San Juan, Puerto Rico on Thursday, ICANN representatives conceded that a proposal for how such a vetting system might work probably would not be ready until December 2018. Assuming ICANN meets that deadline, it could be many months after that before the hundreds of domain registrars around the world take steps to adopt the new measures.

Gregory Mounier, head of outreach at EUROPOL‘s European Cybercrime Center and member of ICANN’s Public Safety Working Group, said the new WHOIS plan could leave security researchers in the lurch — at least in the short run.

“If you don’t have an accreditation system by 25 May then there’s no means for cybersecurity folks to get access to this information,” Mounier told KrebsOnSecurity. “Let’s say you’re monitoring a botnet and have 10,000 domains connected to that and you want to find information about them in the WHOIS records, you won’t be able to do that anymore. It probably won’t be implemented before December 2018 or January 2019, and that may mean security gaps for many months.”

Rod Rasmussen, chair of ICANN’s Security and Stability Advisory Committee, said ICANN does not have a history of getting things done before or on set deadlines, meaning it may be well more than six months before researchers and others can get vetted to access personal information in WHOIS data.

Asked for his take on the chances that ICANN and the registrar community might still be designing the vetting system this time next year, Rasmussen said “100 percent.”

“A lot of people who are using this data won’t be able to get access to it, and it’s not going to be pretty,” Rasmussen said. “Once things start going dark it will have a cascading effect. Email deliverability is going to be one issue, and the amount of spam that shows up in peoples’ inboxes will be climbing rapidly because a lot of anti-spam technologies rely on WHOIS for their algorithms.”

As I noted in last month’s story on this topic, WHOIS is probably the single most useful tool we have right now for tracking down cybercrooks and/or for disrupting their operations. On any given day I probably perform 20-30 different WHOIS queries; on days I’ve set aside for deep-dive research, I may run hundreds of WHOIS searches.

WHOIS records are a key way that researchers reach out to Web site owners when their sites are hacked to host phishing pages or to foist malware on visitors. These records also are indispensable for tracking down cybercrime victims, sources and the cybercrooks themselves. I remain extremely concerned about the potential impact of WHOIS records going dark across the board.

There is one last possible “out” that could help registrars temporarily sidestep the new privacy regulations: ICANN board members told attendees at Thursday’s gathering in Puerto Rico that they had asked European regulators for a “forbearance” — basically, permission to be temporarily exempted from the new privacy regulations during the time it takes to draw up and implement a WHOIS accreditation system.

But so far there has been no reply, and several attendees at ICANN’s meeting Thursday observed that European regulators rarely grant such requests.

Some registrars are already moving forward with their own plans on WHOIS privacy. GoDaddy, one of the world’s largest domain registrars, recently began redacting most registrant data from WHOIS records for domains that are queried via third-party tools. And experts say it seems likely that other registrars will follow GoDaddy’s lead before the May 25 GDPR implementation date, if they haven’t already.
Top




Latency in Digital Audio

Post by Nirbheek via Nirbheek’s Rantings »

We've come a long way since Alexander Graham Bell, and everything's turned digital.

Compared to analog audio, digital audio processing is extremely versatile, is much easier to design and implement than analog processing, and also adds effectively zero noise along the way. With rising computing power and dropping costs, every operating system has had drivers, engines, and libraries to record, process, playback, transmit, and store audio for over 20 years.

Today we'll talk about some of the differences between analog and digital audio, and how the widespread use of digital audio adds a new challenge: latency.

Analog vs Digital



Analog data flows like water through an empty pipe. You open the tap, and the time it takes for the first drop of water to reach you is the latency. When analog audio is transmitted through, say, an RCA cable, the transmission happens at the speed of electricity and your latency is:

latency = wire length / speed of electricity

This number is ridiculously small—especially when compared to the speed of sound. An electrical signal takes 0.001 milliseconds to travel 300 metres (984 feet). Sound takes 874 milliseconds (almost a second).

All analog effects and filters obey similar equations. If you're using, say, an analog pedal with an electric guitar, the signal is transformed continuously by an electrical circuit, so the latency is a function of the wire length (plus capacitors/transistors/etc), and is almost always negligible.

Digital audio is transmitted in "packets" (buffers) of a particular size, like a bucket brigade, but at the speed of electricity. Since the real world is analog, this means to record audio, you must use an Analog-Digital Converter. The ADC quantizes the signal into digital measurements (samples), packs multiple samples into a buffer, and sends it forward. This means your latency is now:

latency = (wire length / speed of electricity) + (buffer size in samples / sample rate)

We saw above that the first part is insignificant; what about the second part?

Latency is measured in time, but buffer size is measured in bytes. For 16-bit integer audio, each measurement (sample) is stored as a 16-bit integer, which is 2 bytes. That's the theoretical lower limit on the buffer size. The sample rate defines how often measurements are made, and these days, is usually 48KHz. This means each sample contains ~0.021ms of audio. To go lower, we need to increase the sample rate to 96KHz or 192KHz.

However, when general-purpose computers are involved, the buffer size is almost never lower than 32 bytes, and is usually 128 bytes or larger. For single-channel 16-bit integer audio at 48KHz, a 32 byte buffer is 0.33ms, and a 128 byte buffer is 1.33ms. This is our buffer size and hence the base latency while recording (or playing) digital audio.
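
The arithmetic is easy to sanity-check with a quick sketch, assuming mono 16-bit audio as above:

# Base latency of one buffer: samples in the buffer divided by the
# sample rate. Mono 16-bit audio assumed, matching the text above.
def buffer_latency_ms(buffer_bytes, sample_rate=48000, bytes_per_sample=2):
    samples = buffer_bytes / bytes_per_sample
    return 1000 * samples / sample_rate

print(buffer_latency_ms(32))          # ~0.33 ms
print(buffer_latency_ms(128))         # ~1.33 ms
print(1000 / buffer_latency_ms(128))  # 750 buffers (CPU wakeups) per second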

Digital effects operate on individual buffers, and add latency corresponding to the CPU processing time the effect requires. Such effects may also add inherent delay if the algorithm used requires it, but that's the same with analog effects.

The Digital Age


So everyone's using digital. But isn't 1.33ms a lot of additional latency?

It might seem that way till you think about it in real-world terms. Sound travels less than half a meter (1½ feet) in that time, and that sort of delay is completely unnoticeable by humans—otherwise we'd notice people's lips moving before we heard their words.

In fact, 1.33ms is too small for the majority of audio applications!

To process such small buffer sizes, you'd have to wake the CPU up 750 times a second, just for audio. This is highly inefficient and wastes a lot of power. You really don't want that on your phone or your laptop, and it is completely unnecessary in most cases anyway.

For instance, your music player will usually use a buffer size of ~200ms, which is just 5 CPU wakeups per second. Note that this doesn't mean that you will hear sound 200ms after hitting "play". The audio player will just send 200ms of audio to the sound card at once, and playback will begin immediately.

Of course, you can't do that with live playback such as video calls—you can't "read-ahead" data you don't have. You'd have to invent a time machine first. As a result, apps that use real-time communication have to use smaller buffer sizes because that directly affects the latency of live playback.

That brings us back to efficiency. These apps also need to conserve power, and 1.33ms buffers are really wasteful. Most consumer apps that require low latency use 10-15ms buffers, and that's good enough for things like voice/video calling, video games, notification sounds, and so on.

Ultra Low Latency


There's one category left: musicians, sound engineers, and other folk that work in the pro-audio business. For them, 10ms of latency is much too high!

You usually can't notice a 10ms delay between an event and the sound for it, but when making music, you can hear it when two instruments are out of sync by 10ms or when the sound for an instrument you're playing is delayed. Instruments such as the snare drum are more susceptible to this problem than others, which is why the stage monitors used in live concerts must not add any latency.

The standard in the music business is to use buffers that are 5ms or lower, down to the 0.33ms number that we talked about above.

Power consumption is absolutely no concern, and the real problems are the accumulation of small amounts of latencies everywhere in your stack, and ensuring that you're able to read buffers from the hardware or write buffers to the hardware fast enough.

Let's say you're using an app on your computer to apply digital effects to a guitar that you're playing. This involves capturing audio from the line-in port, sending it to the application for processing, and playing it from the sound card to your amp.

The latency while capturing and outputting audio are both multiples of the buffer size, so it adds up very quickly. The effects app itself will also add a variable amount of latency, and at 1.33ms buffer sizes you will find yourself quickly approaching a 10ms latency from line-in to amp-out. The only way to lower this is to use a smaller buffer size, which is precisely what pro-audio hardware and software enables.

The second problem is that of CPU scheduling. You need to ensure that the threads that are fetching/sending audio data to the hardware and processing the audio have the highest priority, so that nothing else will steal CPU-time away from them and cause glitching due to buffers arriving late.
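
On Linux, for instance, a thread can request the FIFO real-time scheduling policy. Here is a minimal sketch, assuming the process is allowed to do so (CAP_SYS_NICE or an RLIMIT_RTPRIO grant, often handed out via rtkit) and with a priority value picked purely for illustration:

import os

# Put the calling thread under the SCHED_FIFO real-time policy so
# audio buffer deadlines aren't missed due to ordinary processes.
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(70))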

This gets harder as you lower the buffer size because the audio stack has to do more work for each bit of audio. The fact that we're doing this on a general-purpose operating system makes it even harder, and requires implementing real-time scheduling features across several layers. But that's a story for another time!

I hope you found this dive into digital audio interesting! My next post will be about my journey in implementing ultra low latency capture and render on Windows in the WASAPI plugin for GStreamer. This was already possible on Linux with the JACK GStreamer plugin and on macOS with the CoreAudio GStreamer plugin, so it will be interesting to see how the same problems are solved on Windows. Tune in!
Top

Flash, Windows Users: It’s Time to Patch

Post by BrianKrebs via Krebs on Security »

Adobe and Microsoft each pushed critical security updates to their products today. Adobe’s got a new version of Flash Player available, and Microsoft released 14 updates covering more than 75 vulnerabilities, two of which were publicly disclosed prior to today’s patch release.

The Microsoft updates affect all supported Windows operating systems, as well as all supported versions of Internet Explorer/Edge, Office, Sharepoint and Exchange Server.

All of the critical vulnerabilities from Microsoft are in browsers and browser-related technologies, according to a post from security firm Qualys.

“It is recommended that these be prioritized for workstation-type devices,” wrote Jimmy Graham, director of product management at Qualys. “Any system that accesses the Internet via a browser should be patched.”

The Microsoft vulnerabilities that were publicly disclosed prior to today involve Microsoft Exchange Server 2010 through 2016 editions (CVE-2018-0940) and ASP.NET Core 2.0 (CVE-2018-0808), said Chris Goettl at Ivanti. Microsoft says it has no evidence that attackers have exploited either flaw in active attacks online.

But Goettl says public disclosure means enough information was released publicly for an attacker to get a jump start or potentially to have access to proof-of-concept code making an exploit more likely. “Both of the disclosed vulnerabilities are rated as Important, so not as severe, but the risk of exploit is higher due to the disclosure,” Goettl said.

Microsoft says by default, Windows 10 receives updates automatically, “and for customers running previous versions, we recommend they turn on automatic updates as a best practice.” Microsoft doesn’t make it easy for Windows 10 users to change this setting, but it is possible. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

Adobe’s Flash Player update fixes at least two critical bugs in the program. Adobe said it is not aware of any active exploits in the wild against either flaw, but if you’re not using Flash routinely for many sites, you probably want to disable or remove this awfully buggy program.

Just last month Adobe issued a Flash update to fix two vulnerabilities that were being used in active attacks in which merely tricking a victim into viewing a booby-trapped Web site or file could give attackers complete control over the vulnerable machine. It would be one thing if these zero-day flaws in Flash were rare, but this is hardly an isolated occurrence.

Adobe is phasing out Flash entirely by 2020, but most of the major browsers already take steps to hobble Flash. And with good reason: It’s a major security liability. Chrome also bundles Flash, but blocks it from running on all but a handful of popular sites, and then only after user approval.

For Windows users with Mozilla Firefox installed, the browser prompts users to enable Flash on a per-site basis. Through the end of 2017 and into 2018, Microsoft Edge will continue to ask users for permission to run Flash on most sites the first time the site is visited, and will remember the user’s preference on subsequent visits.

The latest standalone version of Flash that addresses these bugs is 29.0.0.113 for Windows, Mac, Linux and Chrome OS. But most users probably would be better off manually hobbling or removing Flash altogether, since so few sites still actually require it. Disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.
Top

Not merging stuff from FreeBSD-HEAD into production branches, or "hey FreeBSD-HEAD should just be production"

Post by adrian via Adrian Chadd's Ramblings »

I get asked all the time why I don't backport my patches into stable FreeBSD release branches. It's a good question, so let me explain it here.

I don't get paid to do it.

Ok, so now you ask "but wait, surely the users matter?" Yes, of course they do! But, I also have other things going on in my life, and the stuff I do for fun is .. well, it's the stuff I do for fun. I'm not paid to do FreeBSD work, let alone open source wireless stuff in general.

So then I see posts like this:

https://www.anserinae.net/adventures-in-wifi-freebsd-edition.html

I understand his point of view, I really do. I'm also that user when it comes to a variety of other open source software and I ask why features aren't implemented that seem easy, or why they're not in a stable release. But then I remember that I'm also doing this for fun and it's totally up to me to spend my time however I want.

Now, why am I like this?

Well, the short-hand version is - I used to bend over backwards to try and get stuff into stable releases of the open source software I once worked on. And that was taken advantage of by a lot of people and companies who turned around to incorporate that work into successful commercial software releases without any useful financial contribution to either myself or the project as a whole. After enough time of this, you realise that hey, maybe my spare time should just be my spare time.

My hope is that if people wish to backport my FreeBSD work to a stable release then they'll either pay me to do it, pay someone else to do it, or see if a company will sponsor that work for their own benefit. I don't want to get into the game of trying to backport things to one and potentially two stable releases and deal with all the ABI changes and support fallout that happens when you are porting things into a mostly ABI stable release. And yes, my spare time is my own.

Top


Checked Your Credit Since the Equifax Hack?

Post by BrianKrebs via Krebs on Security »

A recent consumer survey suggests that half of all Americans still haven’t checked their credit report since the Equifax breach last year exposed the Social Security numbers, dates of birth, addresses and other personal information on nearly 150 million people. If you’re in that fifty percent, please make an effort to remedy that soon.



Credit reports from the three major bureaus — Equifax, Experian and TransUnion — can be obtained online for free at annualcreditreport.com — the only Web site mandated by Congress to serve each American a free credit report every year.

Annualcreditreport.com is run by a Florida-based company, but its data is supplied by the major credit bureaus, which struggled mightily to meet consumer demand for free credit reports in the immediate aftermath of the Equifax breach. Personally, I was unable to order a credit report for either me or my wife even two weeks after the Equifax breach went public: The site just kept returning errors and telling us to request the reports in writing via the U.S. Mail.

Based on thousands of comments left here in the days following the Equifax breach disclosure, I suspect many readers experienced the same but forgot to come back and try again. If this describes you, please take a moment this week to order your report(s) (and perhaps your spouse’s) and see if anything looks amiss. If you spot an error or something suspicious, contact the bureau that produced the report to correct the record immediately.

Of course, keeping on top of your credit report requires discipline, and if you’re not taking advantage of all three free reports each year you need to get a plan. My strategy is to put a reminder on our calendar to order a new report every four months or so, each time from a different credit bureau.

Whenever stories about credit reports come up, so do the questions from readers about the efficacy and value of credit monitoring services. KrebsOnSecurity has not been particularly kind to the credit monitoring industry; many stories here have highlighted the reality that they are ineffective at preventing identity theft or existing account fraud, and that the most you can hope for from them is that they alert you when an ID thief tries to get new lines of credit in your name.

But there is one area where I think credit monitoring services can be useful: Helping you sort things out with the credit bureaus in the event that there are discrepancies or fraudulent entries on your credit report. I’ve personally worked with three different credit monitoring services, two of which were quite helpful in resolving fraudulent accounts opened in our names.

At $10-$15 a month, are credit monitoring services worth the cost? Probably not on an annual basis, but perhaps during periods when you actively need help. However, if you’re not already signed up for one of these monitoring services, don’t be too quick to whip out that credit card: There’s a good chance you have at least a year’s worth available to you at no cost.

If you’re willing to spend the time, check out a few of the state Web sites which publish lists of companies that have had a recent data breach. In most cases, those publications come with a sample consumer alert letter providing information about how to sign up for free credit monitoring. California publishes probably the most comprehensive such lists at this link. Washington state published their list here; and here’s Maryland’s list. There are more.

It’s important for everyone to remember that as bad as the Equifax breach was (and it was a dumpster fire all around), most of the consumer data exposed in the breach, covering a majority of Americans, has been for sale in the cybercrime underground for many years. If anything, the Equifax breach may have simply refreshed some of those criminal data stores.

That’s why I’ve persisted over the years in urging my fellow Americans to consider freezing their credit files. A security freeze essentially blocks any potential creditors from being able to view or “pull” your credit file, unless you affirmatively unfreeze or thaw your file beforehand.

With a freeze in place on your credit file, ID thieves can apply for credit in your name all they want, but they will not succeed in getting new lines of credit in your name because few if any creditors will extend that credit without first being able to gauge how risky it is to loan to you (i.e., view your credit file).

Bear in mind that if you haven’t yet frozen your credit file and you’re interested in signing up for credit monitoring services, you’ll need to sign up first before freezing your file. That’s because credit monitoring services typically need to access your credit file to enroll you, and if you freeze it they can’t do that.

The previous two tips came from a primer I wrote a few days after the Equifax breach, which is an in-depth Q&A about some of the more confusing aspects of policing your credit, including freezes, credit monitoring, fraud alerts, credit locks and second-tier credit bureaus.
Top

My affidavit in the Geniatech vs. McHardy case

Post by Greg Kroah-Hartman via Linux Kernel Monkey Log »

As many people know, last week there was a court hearing in the Geniatech vs. McHardy case. This was a case brought before the German court of OLG Cologne, claiming a license violation of the Linux kernel in Geniatech devices.

Harald Welte has written up a wonderful summary of the hearing, I strongly recommend that everyone go read that first.

In Harald’s summary, he refers to an affidavit that I provided to the court. Because the case was withdrawn by McHardy, my affidavit was not entered into the public record. I had always assumed that my affidavit would be made public, and since I have had a number of people ask me about what it contained, I figured it was good to just publish it for everyone to be able to see it.

There are some minor edits from what was exactly submitted to the court, such as the side-by-side German translation of the English text, and some reformatting around some footnotes in the text, because I don’t know how to do that directly here, and they really were not all that relevant for anyone who reads this blog. Exhibit A is also not reproduced, as it’s just a huge list of all of the kernel releases in which I found no evidence of any contribution by Patrick McHardy.

AFFIDAVIT

I, the undersigned, Greg Kroah-Hartman,
declare in lieu of an oath and in the
knowledge that a wrong declaration in
lieu of an oath is punishable, to be
submitted before the Court:

I. With regard to me personally:

1. I have been an active contributor to
   the Linux Kernel since 1999.

2. Since February 1, 2012 I have been a
   Linux Foundation Fellow.  I am currently
   one of five Linux Foundation Fellows
   devoted to full time maintenance and
   advancement of Linux. In particular, I am
   the current Linux stable Kernel maintainer
   and manage the stable Kernel releases. I
   am also the maintainer for a variety of
   different subsystems that include USB,
   staging, driver core, tty, and sysfs,
   among others.

3. I have been a member of the Linux
   Technical Advisory Board since 2005.

4. I have authored two books on Linux Kernel
   development including Linux Kernel in a
   Nutshell (2006) and Linux Device Drivers
   (co-authored Third Edition in 2009.)

5. I have been a contributing editor to Linux
   Journal from 2003 - 2006.

6. I am a co-author of every Linux Kernel
   Development Report. The first report was
   based on my Ottawa Linux Symposium keynote
   in 2006, and the report has been published
   every few years since then. I have been
   one of the co-authors on all of them. This
   report includes a periodic in-depth
   analysis of who is currently contributing
   to Linux. Because of this work, I have an
   in-depth knowledge of the various records
   of contributions that have been maintained
   over the course of the Linux Kernel
   project.

   For many years, Linus Torvalds compiled a
   list of contributors to the Linux kernel
   with each release. There are also usenet
   and email records of contributions made
   prior to 2005. In April of 2005, Linus
   Torvalds created a program now known as
   “Git” which is a version control system
   for tracking changes in computer files and
   coordinating work on those files among
   multiple people. Every Git directory on
   every computer contains an accurate
   repository with complete history and full
   version tracking abilities.  Every Git
   directory captures the identity of
   contributors.  Development of the Linux
   kernel has been tracked and managed using
   Git since April of 2005.

   One of the findings in the report is that
   since the 2.6.11 release in 2005, a total
   of 15,637 developers have contributed to
   the Linux Kernel.

7. I have been an advisor on the Cregit
   project and compared its results to other
   methods that have been used to identify
   contributors and contributions to the
   Linux Kernel, such as a tool known as “git
   blame” that is used by developers to
   identify contributions to a git repository
   such as the repositories used by the Linux
   Kernel project.

8. I have been shown documents related to
   court actions by Patrick McHardy to
   enforce copyright claims regarding the
   Linux Kernel. I have heard many people
   familiar with the court actions discuss
   the cases and the threats of injunction
   McHardy leverages to obtain financial
   settlements. I have not otherwise been
   involved in any of the previous court
   actions.

II. With regard to the facts:

1. The Linux Kernel project started in 1991
   with a release of code authored entirely
   by Linus Torvalds (who is also currently a
   Linux Foundation Fellow).  Since that time
   there have been a variety of ways in which
   contributions and contributors to the
   Linux Kernel have been tracked and
   identified. I am familiar with these
   records.

2. The first record of any contribution
   explicitly attributed to Patrick McHardy
   to the Linux kernel is April 23, 2002.
   McHardy’s last contribution to the Linux
   Kernel was made on November 24, 2015.

3. The Linux Kernel 2.5.12 was released by
   Linus Torvalds on April 30, 2002.

4. After review of the relevant records, I
   conclude that there is no evidence in the
   records that the Kernel community relies
   upon to identify contributions and
   contributors that Patrick McHardy made any
   code contributions to versions of the
   Linux Kernel earlier than 2.4.18 and
   2.5.12. Attached as Exhibit A is a list of
   Kernel releases which have no evidence in
   the relevant records of any contribution
   by Patrick McHardy.
Top

How a cd works | BSD Now 236

Post by Jupiter Broadcasting via BSD Now Mobile »

We’ll cover OpenBSD’s defensive approach to OS security, help you Understanding Syscall Conventions for Different Platforms, Mishandling SMTP Sender Verification, how the cd command works & the LUA boot loader coming to FreeBSD.
Top

Look-Alike Domains and Visual Confusion

Post by BrianKrebs via Krebs on Security »

How good are you at telling the difference between domain names you know and trust and impostor or look-alike domains? The answer may depend on how familiar you are with the nuances of internationalized domain names (IDNs), as well as which browser or Web application you’re using.

For example, how does your browser interpret the following domain? I’ll give you a hint: Despite appearances, it is most certainly not the actual domain for software firm CA Technologies (formerly Computer Associates Intl Inc.), which owns the original ca.com domain name:

https://www.са.com/

Go ahead and click on the link above or cut-and-paste it into a browser address bar. If you’re using Google Chrome, Apple’s Safari, or some recent version of Microsoft‘s Internet Explorer or Edge browsers, you should notice that the address converts to “xn--80a7a.com.” This is called “punycode,” and it allows browsers to render domains with non-Latin alphabets like Cyrillic and Ukrainian.

Below is what it looks like in Edge on Windows 10; Google Chrome renders it much the same way. Notice what’s in the address bar (ignore the “fake site” and “Welcome to…” text, which was added as a courtesy by the person who registered this domain):

The domain https://www.са.com/ as rendered by Microsoft Edge on Windows 10. The rest of the text in the image (beginning with “Welcome to a site…”) was added by the person who registered this test domain, not the browser.
IE, Edge, Chrome and Safari all will convert https://www.са.com/ into its punycode output (xn--80a7a.com), in part to warn visitors about any confusion over look-alike domains registered in other languages. But if you load that domain in Mozilla Firefox and look at the address bar, you’ll notice there’s no warning of possible danger ahead. It just looks like it’s loading the real ca.com:

What the fake ca.com domain looks like when loaded in Mozilla Firefox. A browser certificate ordered from Comodo allows it to include the green lock (https://) in the address bar, adding legitimacy to the look-alike domain. The rest of the text in the image (beginning with “Welcome to a site…”) was added by the person who registered this test domain, not the browser. Click to enlarge.
The domain “xn--80a7a.com” pictured in the first screenshot above is punycode for the Ukrainian letters for “s” (which is represented by the character “c” in Russian and Ukrainian), as well as an identical Ukrainian “a”.

It was registered by Alex Holden, founder of Milwaukee, Wis.-based Hold Security Inc. Holden’s been experimenting with how the different browsers handle punycodes in the browser and via email. Holden grew up in what was then the Soviet Union and speaks both Russian and Ukrainian, and he’s been playing with Cyrillic letters to spell English words in domain names.

Letters like A and O look exactly the same and the only difference is their Unicode value. There are more than 136,000 Unicode characters used to represent letters and symbols in 139 modern and historic scripts, so there’s a ton of room for look-alike or malicious/fake domains.

For example, “a” in Latin is the Unicode value “0061” and in Cyrillic is “0430.”  To a human, the graphical representation for both looks the same, but for a computer there is a huge difference. Internationalized domain names (IDNs) allow domain names to be registered in non-Latin letters (RFC 3492), provided the domain is all in the same language; trying to mix two different IDNs in the same name causes the domain registries to reject the registration attempt.
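
Both halves of that are easy to see from a Python prompt: the differing code points, and the punycode conversion the browsers perform (a quick sketch using Python’s built-in idna codec):

# Latin "a" and Cyrillic "а" render identically but differ in code point.
print(hex(ord("a")))             # 0x61  (Latin)
print(hex(ord("а")))             # 0x430 (Cyrillic)

# The built-in idna codec does the same punycode conversion the
# browsers above display in the address bar.
print("са.com".encode("idna"))   # b'xn--80a7a.com'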

So, in the Cyrillic alphabet (Russian/Ukrainian), we can spell АТТ, УАНОО, ХВОХ, and so on. As you can imagine, the potential opportunity for impersonation and abuse are great with IDNs. Here’s a snippet from a larger chart Holden put together showing some of the more common ways that IDNs can be made to look like established, recognizable domains:

Image: Hold Security.
Holden also was able to register a valid SSL encryption certificate for https://www.са.com from Comodo.com, which would only add legitimacy to the domain were it to be used in phishing attacks against CA customers by bad guys, for example.

A SOLUTION TO VISUAL CONFUSION

To be clear, the potential threat highlighted by Holden’s experiment is not new. Security researchers have long warned about the use of look-alike domains that abuse special IDN/Unicode characters. Most of the major browser makers have responded in some way by making their browsers warn users about potential punycode look-alikes.

With the exception of Mozilla, which by most accounts is the third most-popular Web browser. And I wanted to know why. I’d read the Mozilla Wiki’s “IDN Display Algorithm FAQ,” so I had an idea of what Mozilla was driving at in their decision not to warn Firefox users about punycode domains: Nobody wanted it to look like Mozilla was somehow treating the non-Western world as second-class citizens.

I wondered why Mozilla doesn’t just have Firefox alert users about punycode domains unless the user has already specified that he or she wants a non-English language keyboard installed. So I asked that in some questions I sent to their media team. They sent the following short statement in reply:

“Visual confusion attacks are not new and are difficult to address while still ensuring that we render everyone’s domain name correctly. We have solved almost all IDN spoofing problems by implementing script mixing restrictions, and we also make use of Safe Browsing technology to protect against phishing attacks. While we continue to investigate better ways to protect our users, we ultimately believe domain name registries are in the best position to address this problem because they have all the necessary information to identify these potential spoofing attacks.”

If you’re a Firefox user and would like Firefox to always render IDNs as their punycode equivalent when displayed in the browser address bar, type “about:config” without the quotes into a Firefox address bar. Then in the “search:” box type “punycode,” and you should see one or two options there. The one you want is called “network.IDN_show_punycode.” By default, it is set to “false”; double-clicking that entry should change that setting to “true.”
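
Equivalently, for those who manage Firefox profiles by hand (this is just the same setting in another form, not official guidance), the preference can be set in a profile’s user.js file:

user_pref("network.IDN_show_punycode", true);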



Incidentally, anyone using the Tor Browser to anonymize their surfing online is exposed to IDN spoofing because Tor by default uses Mozilla as well. I could definitely see spoofed IDNs being used in targeting phishing attacks aimed at Tor users, many of whom have significant assets tied up in virtual currencies. Fortunately, the same “about:config” instructions work just as well on Tor to display punycode in lieu of IDNs.

Holden said he’s still in the process of testing how various email clients and Web services handle look-alike IDNs. For example, it’s clear that Twitter sees nothing wrong with sending the look-alike CA.com domain in messages to other users without any context or notice. Skype, on the other hand, seems to truncate the IDN link, sending clickers to a non-existent page.

“I’d say that most email services and clients are either vulnerable or not fully protected,” Holden said.

For a look at how phishers or other scammers might use IDNs to abuse your domain name, check out this domain checker that Hold Security developed. Here’s the first page of results for krebsonsecurity.com, which indicate that someone at one point registered krebsoṇsecurity[dot]com (that domain includes a lowercase “n” with a tiny dot below it, a character used by several dozen scripts). The results in yellow are just possible (unregistered) domains based on common look-alike IDN characters.

The first page of warnings for Krebsonsecurity.com from Hold Security’s IDN scanner tool.
I wrote this post mainly because I wanted to learn more about the potential phishing and malware threat from look-alike domains, and I hope the information here has been interesting if not also useful. I don’t think this kind of phishing is a terribly pressing threat (especially given that far less complex phishing attacks seem to succeed just fine for now). But it sure can’t hurt Firefox users to change the default “visual confusion” behavior of the browser so that it always displays punycode in the address bar (see the solution mentioned above).

[Author’s note: I am listed as an adviser to Hold Security on the company’s Web site. However this is not a role for which I have been compensated in any way now or in the past.]

Paw is nice

Post by kris via The Isoblog. »

Paw is a nice graphical curl with a JSON decoder and a bunch of code generators. If you want to test or explore a REST API, it’s really helpful.

So let’s autogenerate Grafana dashboards from config data in a MySQL database using Python now.



What Is Your Bank’s Security Banking On?

Post by BrianKrebs via Krebs on Security »

A large number of banks, credit unions and other financial institutions just pushed customers onto new e-banking platforms that asked them to reset their account passwords by entering a username plus some other static identifier — such as the first six digits of their Social Security number (SSN), or a mix of partial SSN, date of birth and surname. Here’s a closer look at what may be going on (spoiler: small, regional banks and credit unions have grown far too reliant on the whims of just a few major online banking platform providers).



You might think it odd that any self-respecting financial institution would seek to authenticate customers via static data like partial SSN for passwords, and you’d be completely justified in thinking that, too. Nobody has any business using these static identifiers for authentication, because this data is for sale on most Americans quite cheaply in the cybercrime underground. The Equifax breach might have “refreshed” some of those data stores for identity thieves, but most U.S. adults have had their static details (DOB/SSN/MMN, address, previous address, etc.) on sale for years now.

On Feb. 16, KrebsOnSecurity reader Brent Hoeft shared a copy of an email he’d just received from his financial institution, Associated Bank, which at $30+ billion in assets happens to be Wisconsin’s largest bank.

The notice advised:

“Please read and save this information (including the password below) to prepare for your online and mobile banking upgrade.

Our refreshed online and mobile banking experience is officially launching on Monday, February 26, 2018.

We’re excited to share it with you, and want you to be aware of some important details about the transition.

TEMPORARY PASSWORD

Use this temporary password the first time you sign in after the upgrade. Your temporary password is the first four letters of your last name plus the last four digits of your Social Security Number.

XXXX#### [redacted by me but included in the email]

Note: your password is all lowercase without spaces.

Once the upgrade is complete, you will need your temporary password to begin the re-enrollment process.
• Beginning Monday, February 26, you will need to sign in using your existing user ID and the temporary password included above in this email. Please note that you are only required to reenroll in online or mobile banking but can access both using the same user ID and password.
• Once you sign in, you will be prompted to create a new password and establish other security features. Your user ID will remain the same.”

Hoeft said Associated Bank seems to treat the customer username as a secret, something to be protected along with the password.

“I contacted Associated’s customer service via email and received a far less satisfying explanation that the user name is required for re-activation and, that since [the username] was not provided in the email, the process they are using is in fact secure,” Hoeft said.

After speaking with Hoeft, I tweeted about whether to name and shame the bank before it was too late, or perhaps to try and talk some sense into them privately. Most readers advised that calling attention to the problem before the transition could cause more harm than good, and that at least until after Feb. 26 contacting some of the banks privately was the best idea (which is what I did).

Associated Bank wouldn’t say who their new consumer online banking platform provider was, but they did say it was one of the big ones. I took that to mean either FIS, Fiserv or Jack Henry, which collectively control approximately 70 percent of the market for bank core processors (according to FedFIS.com, Fiserv is by far the largest).

Image: Fedfis.com
The bank’s chief information security officer Joe Smits said Associated’s new consumer online banking platform provider required that new and existing customers log in with a username and a temporary password — which was described as a choice among secondary, static data elements about customers, such as the first six digits of the customer’s SSN or date of birth.

Smits added that the bank originally started emailing customers the instructions for figuring out their temporary passwords, but then decided US mail would be a safer option and sent the rest out that way. He said only about 15 percent of Associated Bank customers (~50,000) received instructions about their temporary passwords through email.

I followed up with Hoeft to find out how his online banking upgrade went at Associated Bank. He told me that upon visiting the site, it asked for his username and the temporary password (the first four letters of his last name and the last four digits of his SSN).

“After entering that I was told to re-enter my temporary password and then create a new password,” Hoeft said. “I then was asked to select 5 security questions and provide answers. Next I was asked for a verification phone number. Upon entering that I received a text message with a 4 digit verification code. After entering the code it asked me to finish my profile information including name, email and daytime phone. After that it took me right into my online banking account.”

Hoeft said it seems like the “verification” step that was supposed to create an extra security check didn’t really add any security at all.

“If someone were able to get in with the temporary password, they would be able to create a new password, fill out all the security code information, and then provide their phone number to receive the verification code,” Hoeft said. “Armed with the verification code they then would be able to get right into my online banking account.”
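
The underlying math is easy to illustrate. A hypothetical sketch (surname and scheme as described in the bank's notice above; an illustration of the tiny keyspace, not a how-to):

# With the surname public knowledge, the only unknown in the temporary
# password is the 4-digit SSN suffix: a 10,000-guess keyspace.
surname = "smith"                                  # hypothetical customer
candidates = [f"{surname[:4]}{i:04d}" for i in range(10_000)]
print(len(candidates), "possible temporary passwords")
print(candidates[0], "...", candidates[-1])        # smit0000 ... smit9999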

OTHER BANKS

A simple search online revealed Associated Bank wasn’t alone: Multiple institutions were moving to a new online banking platform all on the same day: Feb. 26, 2018.

My Credit Union also moved to a new online banking service in February, posting a notice stating that all customers will need to log in with their current username and the last four of their SSN as a temporary password.

Customers Bank, a $10 billion bank with nearly two dozen branches between Boston and Philadelphia, also told customers that starting Feb. 26 they would need to use a temporary password — the last six digits of their Social Security number — to re-enroll in online banking. Here’s part of their advice, which was published in a PDF on the bank’s site:

• You may notice a new co-branded logo for Customers Bank and BankMobile (Division Customers Bank).
• Your existing user name for Online Banking will remain the same within the new system; however, it must be entered as all lowercase letters.
• The first time you log into the new Online Banking system, your temporary password is the last 6-digits of your social security number. Your temporary password will expire on Friday, April 20, 2018. Please be sure to log in prior to that date.
• Online Banking includes multi-factor authentication which will need to be reestablished as part of the initial sign in to the system.
• Your username and password credentials for Online Banking will be the same for Mobile Banking. Note: Before accessing the new Mobile Banking services, you must first login to our enhanced Online Banking system to change your password.
• You will also need to enroll your mobile device, either through Online Banking by visiting the Mobile Banking Center option, or directly on the device through the app. Both options will require additional authentication.

Columbia Bank, which has 140 branches in Washington, Oregon and Idaho, also switched gears on Feb. 26, but used a more sensible approach: Sending customers a new user ID, organization ID and temporary password in two separate mailings.

ANALYSIS

My tweet about whether to name Associated Bank attracted the attention of at least two banking industry security regulators, each of whom spoke with KrebsOnSecurity on condition of not being identified by name or regulatory agency.

Both said their agencies would be using the above examples in briefings with member institutions as lessons in how not to handle online banking security. Both also said small to mid-sized banks are massively beholden to their platform providers, and many banks simply accept the defaults instead of pushing for stronger alternatives.

“I have a lot of communications directly with the chief information security officers, chief security officers, and chief information officers in many institutions,” one regulator said. “Many of them have massively dumbed down their password requirements. A lot of smaller institutions often don’t understand the risk involved in online banking, which is why they try to outsource the whole thing to someone else. But they can’t outsource accountability.”

One of the regulators I spoke with suggested that all of the banks they’d seen transitioning to a new online banking platform on Feb. 26 were customers of Fiserv — the nation’s largest online banking platform provider.

Fiserv did not respond to specific questions for this story, saying only in a written statement that: “Fiserv regularly partners with financial institutions to provide capabilities that help mitigate and manage risk, enhance the customer experience, and allow banks to remain competitive. A variety of methodologies are used by institutions to enroll and authenticate new users onto online banking platforms, and password authentication is one of multiple layers of security used to protect customers.”

Both banking industry regulators I spoke with said a basic problem is that many smaller institutions unfortunately still treat usernames as secret codes. I have railed against this practice for years, but far too many banks treat customer usernames as part of their security, even though most customers pick something very close to the first part of their email address (before the “@” sign). I’ve even skewered some of the airline industry giants for doing the same (United does this with its super-secret frequent flyer account number).

“I think this will be an opportunity for us to coach them on that,” one banking regulator said. “This process has to involve random password generation and that needs to be standard operating procedure. If you can shortcut security just by supplying static data like SSN, it’s all screwed. Some of these organizations have had such poor control structure for so long they don’t even understand how bad it is.”

The other regulator said another challenge is how long banks should wait before disabling accounts if consumers don’t log in to the new online banking system.

“What they’re going to do is set up all these users on this brand new system and give them default passwords,” the regulator said. “Some individuals will log into their bank account every day, others once a month and sometimes quite randomly. So, how are they going to control that window of opportunity? At some point, maybe after a couple of weeks, they need to just disable those other accounts and have people start from scratch.”

The first regulator said it appears many banks (and their platform providers) are singularly focused on making these transitions as seamless and painless as possible for the financial institution and its customers.

“I think they’re looking at making it easier for their customers and lessening the fallout as they get fewer angry and frustrated calls,” the regulator said. “That’s their incentive more than anything else.”

WHAT CAN YOU DO?

While it may appear that banks are more afraid of calls from their customers than of fallout from identity thieves and hackers, remember that you the consumer can vote with your wallet, and should move your funds to another bank if you’re unhappy with the security practices of your current institution.

Also, don’t re-use passwords. In fact, wherever possible don’t use passwords at all. Instead, choose passphrases over passwords (remember, length is key). Unfortunately, passphrases may not always be possible, because some banks have chosen to truncate passwords after a certain number of characters and to disallow special symbols.

If you’re the kind of person who likes to use the same password across multiple sites, then a password manager is definitely for you. That’s because password managers pick strong, long and secure passwords for you and the only thing you have to remember is a single master password.

Please consider any two-step or two-factor authentication options your financial institution may offer, and be sure to take full advantage of that when it’s available. Also, ask your bank to require a unique verbal password before discussing any of your account details over the phone; this prevents someone from calling in to your bank and convincing a customer service rep that he’s you just because he can regurgitate your static personal details.

Finally, take steps to prevent your security from being backdoored by your mobile provider: Check out last week’s tips on blocking mobile number port-out scams, which thieves sometimes use in cashing out hacked bank accounts.


Hashes in Structures

Post by kris via The Isoblog. »

In Hashes and their uses we have been talking about hash functions in general, and cryptographic hashes in particular. We wanted four things from cryptographic hashes:

  1. The hash should be fast to calculate on a large string of bytes.
  2. The hash is slow to reverse (i.e. only by trying all messages and checking each result).
  3. The hash is slow to find collisions for (i.e. it’s hard to find two input strings that have the same hash value).
  4. The hash chaotically cascades changes (i.e. a single bit flip in the original message flips many bits in the hash value).
With these properties and general cryptography we can build three very versatile things that see many applications: digital signatures, eternal logfiles (“blockchains”) and hash trees (“torrents”).
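
Property 4 is easy to observe directly. A minimal demonstration, assuming SHA-256 as the hash (the post does not prescribe one):

# Avalanche effect: flipping a single input bit flips roughly half of
# SHA-256's 256 output bits.
import hashlib

m1 = b"attack at dawn"
m2 = bytes([m1[0] ^ 0x01]) + m1[1:]   # flip one bit in the first byte

h1 = int.from_bytes(hashlib.sha256(m1).digest(), "big")
h2 = int.from_bytes(hashlib.sha256(m2).digest(), "big")
print(bin(h1 ^ h2).count("1"), "of 256 output bits differ")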



Digital Signatures

Using asymmetric cryptography

Digital signatures use primitives from asymmetric cryptography and cryptographic checksums. From asymmetric cryptography we know we can have two keys, P and Q, which actually work in symmetry: what has been encrypted with one key can be decrypted only with the other key. So we either encrypt a message M with the key P to get a ciphertext C, and decrypt C with Q to get M back; or we encrypt M with Q, and decrypt with P to get M back.

The asymmetry is introduced by keeping Q secret and making P as public as possible.

P is public.
Q is private and only known to one person.

C = encrypt(P, M)
M = decrypt(Q, C)

or

C = encrypt(Q, M)
M = decrypt(P, C)
Unfortunately, in asymmetric public key cryptography, the keys are usually rather long, and hence the encrypt() and decrypt() functions are usually quite slow. It is useful to keep M reasonably small.

Signing

For digital signatures, the signer calculates a hash H of a message M and then encrypts the hash H using the private key Q, creating CH. The message and the encrypted hash are sent together.

H  = hash(M)
CH = encrypt(Q, H)

send(M, CH)
When receiving the message, the receiver can read the message and can calculate their own checksum, RH = hash(M).

Because the public key P of the sender is, well, public, we can also decrypt the encrypted checksum the sender calculated: H = decrypt(P, CH). If that H is equal to RH, we know that one of two things is true: either the message has not been tampered with, or the sender lost control of the private key.

Assuming the latter is not the case, the former must be true and that means that the message we received is as it originated at the sender and we do know who the sender is.
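
A minimal sketch of this sign-and-verify flow, assuming the third-party cryptography package (pip install cryptography) and RSA-PSS; the post prescribes no particular library or algorithm:

# sign() hashes M and encrypts the digest with the private key Q;
# verify() recomputes the hash and compares it against the value
# recovered with the public key P.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # Q
public_key = private_key.public_key()                                         # P

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

message = b"the message M"
signature = private_key.sign(message, pss, hashes.SHA256())   # CH

# Raises cryptography.exceptions.InvalidSignature if M or CH was altered.
public_key.verify(signature, message, pss, hashes.SHA256())
print("signature verified")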

Eternal Logfiles

When calculating hashes over a sequence of messages, it is possible to seed each message with the previous message’s hash:

Each entry in the log consists of the actual log entry, the date the entry is being made, and the hash of the previous log entry. The current hash is calculated over all three of these things: the message, the current timestamp and the previous hash.

Because each entry contains the previous entry’s hash value, it becomes hard to change older entries in the log: a change to an old log entry creates a ripple effect that changes every subsequent hash value in the log. Additional security can be gained by introducing media breaks and publishing the current hash value (with the date) in many places.

The date added to each log entry is not supplied by the log source which sends the message: while the logged message can have a structure and may also contain a timestamp as seen by the message sender, the date kept as a separate field in each entry is the local time of the logger.

The interdependency of the log messages creates a public order of events. Message 2 and Hash 2 could not have been produced before Message 1 and Hash 1, as the value of Hash 2 depends not only on the contents of Message 2, but also on the value of Hash 1.
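
A minimal sketch of such a chained log, assuming SHA-256 (again, the post prescribes no specific hash):

# Each entry commits to the message, the logger's local timestamp, and
# the previous entry's hash; tampering with any old entry breaks the chain.
import hashlib, time

def append_entry(log, message):
    prev_hash = log[-1]["hash"] if log else "0" * 64      # genesis anchor
    ts = int(time.time())
    h = hashlib.sha256(f"{prev_hash}|{ts}|{message}".encode()).hexdigest()
    log.append({"message": message, "ts": ts, "hash": h})

def verify(log):
    prev_hash = "0" * 64
    for entry in log:
        expected = hashlib.sha256(
            f"{prev_hash}|{entry['ts']}|{entry['message']}".encode()).hexdigest()
        if expected != entry["hash"]:
            return False          # this and every later entry are suspect
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "message 1")
append_entry(log, "message 2")
print(verify(log))                # True
log[0]["message"] = "forged"      # changing an old entry...
print(verify(log))                # ...is detected: False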

A very simple implementation of this can be found in Lutz Donnerhacke’s Eternal Logfile (Site in German).

Interestingly, forward secure sealing as implemented in journald/systemd does not work like this, but uses another mechanism. It also does not seal each log entry as it is being generated, but creates 15-minute windows of log entries which are then protected. The main reasons for this are seekability and purgeability: it should be possible to validate the integrity of the journald log file without recalculating the sequence of hashes all the way from system installation.

Guarantees given in Eternal Logfiles

It is important to note that no assumptions whatsoever are being made about the structure of the messages being chained in this chain of hash values. It is also important to note that no mechanism to authenticate or validate the content of the messages being logged is available automatically – the only proofs such a chained eternal logfile provides are:

  1. Message n-1 is older than Message n (Order of messages can be implied from the order of log entries).
  2. If the hash chain resolves (i.e. all checksums calculate just fine), none of the messages prior to the final message have been tampered with.
By digitally signing each message in the log, the identity of the creator of each logged message can be proven, and the authenticity of each message can be verified independently of the hash chain itself.

Limitations and Disadvantages of Eternal Logfiles

Without additional mechanisms, eternal logfiles can neither be purged nor seeked efficiently.

That is because each log entry’s hash depends not only on that entry’s content, but also on the hash of the previous log entry; it is not possible to ever delete log entries without losing the ability to check the chain’s integrity. All logs have to be kept forever.

Mechanisms to work around that could be constructed, but are usually not implemented, often on purpose.

Related to that is the ability to check the integrity of the hash chain: this is only possible from the anchor point, which by construction and on purpose is only at the start of the chain. A validation of the chain therefore becomes increasingly costly as the length of the chain increases, since no purge mechanism exists.

A full chain validation could create a local index, which notes the position of each log entry and also the expected checksum. This would allow quick seeks and quick local entry validations. Building this index is still dependent on at least one big scan and validation of the entire chain.

Hash Trees

It is also possible to structure hashes-of-hashes in a non-linear fashion, for example in a tree structure. The Merkle-Tree is named after the cryptographer Ralph Merkle, and is one specific implementation of a Hash Tree.

Data (the messages) are at the leaf nodes (L1 to L4) of the tree. Non-leaf nodes contain hashes of the blocks beneath them. (Image source: Wikipedia: Hash Tree)
Hash Trees can establish order of events the same way linearly hashed logfiles do. They allow better pinpointing of the position of an error, at least down to the position of a single block, while at the same time still assuring the integrity of other data past the modified block. They also allow partial integrity checks for each subtree of the main hash tree.
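
A minimal Merkle-root sketch, assuming SHA-256 and duplicating the last hash when a level has an odd number of nodes (one common convention; others exist):

# Leaves are hashes of the data blocks (L1..Ln); each inner node hashes
# the concatenation of its two children; one hash remains: the top hash.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    level = [sha256(block) for block in blocks]
    while len(level) > 1:
        if len(level) % 2:                     # odd count: duplicate the last
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-1", b"block-2", b"block-3", b"block-4"]
print(merkle_root(blocks).hex())   # changes if any single block changes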

Applications of Hash Trees

Hash Trees, Merkle Trees or variants of the system, are being used in practically all P2P systems, where their properties are useful to determine which blocks of a file a client already has and which ones are still missing. At the same time, the top level hash can be used as a file name in a content addressing scheme – Magnet Links are just top level hashes of a Hash Tree in a P2P system.

They also see application for securing file integrity in file systems that target very large storages (ZFS, BTRFS), enabling partial file system checks and rapid validation of files and file trees. They are also useful in pinpointing the areas of a file that changed, enabling fast differential backups.

A variant of Hash Tree logic is consequently also applied in pt-table-sync of the Percona Toolkit, finding differences between two instances of the same table on two different database servers and then creating a minimal set of change instructions to mutate a target table so that it matches the contents of a source table.

Some version control systems, for example git, also use trees of hashes as content addressable storage, but in a way that differs from a pure Merkle Tree.

Certificate Transparency uses a Merkle Tree as a container for the log of all certificates generated by all certificate authorities participating, establishing completeness and authenticity of the CT log as well as an order of events.

Merkle Trees are also used to structure and validate the integrity of the blockchain underneath Bitcoin.

Limits and additional considerations

Like with the Eternal Logfile, no assumptions are made about the content of the messages that make up the payload of a Merkle Tree. Additional agreements between multiple parties about the content of messages are necessary to give data in a Hash Tree meaning.

Without pruning/seeking mechanisms, all sequentially ordered transactional systems accumulate log data/tree size without bounds. Recreating the system’s current state gets progressively more expensive in the memory and computation required.

History

Eternal logfiles and Merkle Trees are old innovations. The patent for Merkle Trees was filed by Ralph Merkle in 1979 (and granted in 1982) and has since expired. Eternal logfiles are an idea created by David Chaum even before that.

Continued from Hashes and their uses

Frozen IJmeer near IJburg

Post by kris via The Isoblog. »

A DJI Mavic Pro flies over the IJmeer near IJburg, Amsterdam.



Please note that this area is controlled airspace due to the proximity of Schiphol Airport. Drone flights in this area are subject to additional controls and require a permit.

I C you BSD | BSD Now 235

Post by Jupiter Broadcasting via BSD Now Mobile »

How the term open source was created, running FreeBSD on ThinkPad T530, Moving away from Windows, Unknown Giants, as well as OpenBSD & FreeDOS.

Automating compliance checks

Post by Sven Vermeulen via Simplicity is a form of art... »

With the configuration baseline for a technical service being described fully (see the first, second and third post in this series), it is time to consider the validation of the settings in an automated manner. The preferred method for this is to use Open Vulnerability and Assessment Language (OVAL), which is nowadays managed by the Center for Internet Security, abbreviated as CISecurity. Previously, OVAL was maintained and managed by Mitre under NIST supervision, and Google searches will often still point to the old sites. However, documentation is now maintained on CISecurity's github repositories.

But I digress...

Read-only compliance validation

One of the main ideas with OVAL is to have a language (XML-based) that represents state information (what something should be) which can be verified in a read-only fashion. Even more, from an operational perspective, it is very important that compliance checks do not alter anything, but only report.

Within its design, OVAL engineering has considered how to properly manage huge sets of assessment rules, and how to document this in an unambiguous manner. In the previous blog posts, ambiguity was resolved through writing style, and not much through actual, enforced definitions.

OVAL enforces this. You can't write a generic or ambiguous rule in OVAL. It is very specific, but that also means that it is daunting to implement the first few times. I've written many OVAL sets, and I still struggle with it (although that's because I don't do it enough in a short time-frame, and need to reread my own documentation regularly).

The capability to perform read-only validation with OVAL leads to a number of possible use cases; the 5.10 specification provides several. Basically, it boils down to:

  • vulnerability discovery (is a system vulnerable or not),
  • patch management (is the system patched accordingly or not),
  • configuration management (are the settings according to the rules or not),
  • inventory management (detect what is installed on the system or what the system’s assets are),
  • malware and threat indication (detect if a system has been compromised or particular malware is active),
  • policy enforcement (verify if a client system adheres to particular rules before it is granted access to a network),
  • change tracking (regularly validating the state of a system and keeping track of changes), and
  • security information management (centralizing results of an entire organization or environment and doing standard analytics on it).

In this blog post series, I'm focusing on configuration management.

OVAL structure

Although the OVAL standard (just like the XCCDF standard actually) entails a number of major components, I'm going to focus on the OVAL definitions. Be aware though that the results of an OVAL scan are also in a standardized format, as are the results of XCCDF scans, for instance.

OVAL definitions have 4 to 5 blocks in them:

  • the definition itself, which describes what is being validated and how. It refers to one or more tests that are to be executed or validated for the definition result to be calculated;
  • the test or tests, which are referred to by the definition. In each test, there is at least a reference to an object (what is being tested) and optionally to a state (what should the object look like);
  • the object, which is a unique representation of a resource or resources on the system (a file, a process, a mount point, a kernel parameter, etc.). Object definitions can refer to multiple resources, depending on the definition;
  • the state, which is a sort-of value mapping or validation that needs to be applied to an object to see if it is configured correctly;
  • the variable, an optional definition which is what it sounds like: a variable that substitutes an abstract definition with an actual definition, allowing for more reusable tests.

Let's get an example going, but without the XML structure, so in human language. We want to define that the Kerberos definition on a Linux system should allow forwardable tickets by default. This is accomplished by ensuring that, inside the /etc/krb5.conf file (which is an INI-style configuration file), the value of the forwardable key inside the [libdefaults] section is set to true.
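
Expressed as a quick, hypothetical check in plain Python (against a simplified INI-style snippet; real krb5.conf files can contain nested blocks a plain INI parser will not handle), the rule amounts to:

# The rule in plain terms: [libdefaults] must contain forwardable = true.
import configparser, io

snippet = "[libdefaults]\nforwardable = true\n"
cfg = configparser.ConfigParser()
cfg.read_file(io.StringIO(snippet))
print(cfg["libdefaults"]["forwardable"] == "true")   # True -> compliant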

In OVAL, the definition itself will document the above in human readable text, assign it a unique ID (like oval:com.example.oval:def:1) and mark it as being a definition for configuration validation (compliance). Then, it defines the criteria that need to be checked in order to properly validate if the rule is applicable or not. These criteria include validation if the OVAL statement is actually being run on a Linux system (as it makes no sense to run it against a Cisco router) which is Kerberos enabled, and then the criteria of the file check itself. Each criteria links to a test.

The test of the file itself links to an object and a state. There are a number of ways how we can check for this specific case. One is that the object is the forwardable key in the [libdefaults] section of the /etc/krb5.conf file, and the state is the value true. In this case, the state will point to those two entries (through their unique IDs) and define that the object must exist, and all matches must have a matching state. The "all matches" here is not that important, because there will generally only be one such definition in the /etc/krb5.conf file.

Note however that a different approach to the test can be declared as well. We could state that the object is the [libdefaults] section inside the /etc/krb5.conf file, and the state is the value true for the forwardable key. In this case, the test declares that multiple objects must exist, and (at least) one must match the state.

As you can see, the OVAL language tries to make definitions unambiguous. So, what does this look like in OVAL XML?

The OVAL XML structure

The full example contains a few more entries than those we declare next, in order to be complete. The most important definitions though are documented below.

Let's start with the definition. As stated, it will refer to tests that need to match for the definition to be valid.

<definitions>
  <definition id="oval:com.example.oval:def:1" version="1" class="compliance">
    <metadata>
      <title>libdefaults.forwardable in /etc/krb5.conf must be set to true</title>
      <affected family="unix">
        <platform>Red Hat Enterprise Linux 7</platform>
      </affected>
      <description>
        By default, tickets obtained from the Kerberos environment must be forwardable.
      </description>
    </metadata>
    <criteria operator="AND">
      <criterion test_ref="oval:com.example.oval:tst:1" comment="Red Hat Enterprise Linux is installed"/>
      <criterion test_ref="oval:com.example.oval:tst:2" comment="/etc/krb5.conf's libdefaults.forwardable is set to true"/>
    </criteria>
  </definition>
</definitions>
The first thing to keep in mind is the (weird) identification structure. Just like with XCCDF, it is not sufficient to have your own id convention. You need to start an id with oval: followed by the reverse domain definition (here com.example.oval), followed by the type (def for definition) and a sequence number.

Also, take a look at the criteria. Here, two tests need to be compliant (hence the AND operator). However, more complex operations can be done as well. It is even allowed to nest multiple criteria, and refer to previous definitions, like so (taken from the ssg-rhel6-oval.xml file):

<criteria comment="package hal removed or service haldaemon is not configured to start" operator="OR">
  <extend_definition comment="hal removed" definition_ref="oval:ssg:def:211"/>
  <criteria operator="AND" comment="service haldaemon is not configured to start">
    <criterion comment="haldaemon runlevel 0" test_ref="oval:ssg:tst:212"/>
    <criterion comment="haldaemon runlevel 1" test_ref="oval:ssg:tst:213"/>
    <criterion comment="haldaemon runlevel 2" test_ref="oval:ssg:tst:214"/>
    <criterion comment="haldaemon runlevel 3" test_ref="oval:ssg:tst:215"/>
    <criterion comment="haldaemon runlevel 4" test_ref="oval:ssg:tst:216"/>
    <criterion comment="haldaemon runlevel 5" test_ref="oval:ssg:tst:217"/>
    <criterion comment="haldaemon runlevel 6" test_ref="oval:ssg:tst:218"/>
  </criteria>
</criteria>
Next, let's look at the tests.

<tests>
  <unix:file_test id="oval:com.example.oval:tst:1" version="1" check_existence="all_exist" check="all" comment="/etc/redhat-release exists">
    <unix:object object_ref="oval:com.example.oval:obj:1" />
  </unix:file_test>
  <ind:textfilecontent54_test id="oval:com.example.oval:tst:2" check="all" check_existence="all_exist" version="1" comment="The value of forwardable in /etc/krb5.conf">
    <ind:object object_ref="oval:com.example.oval:obj:2" />
    <ind:state state_ref="oval:com.example.oval:ste:2" />
  </ind:textfilecontent54_test>
</tests>
There are two tests defined here. The first test just checks if /etc/redhat-release exists. If not, then the test will fail and the definition itself will result to false (as in, not compliant). This isn't actually a proper definition, because you want the test to not run when it is on a different platform, but for the sake of example and simplicity, let's keep it as is.

The second test will check for the value of the forwardable key in /etc/krb5.conf. For it, it refers to an object and a state. The test states that all objects must exist (check_existence="all_exist") and that all objects must match the state (check="all").

The object definition looks like so:

<objects>
  <unix:file_object id="oval:com.example.oval:obj:1" comment="The /etc/redhat-release file" version="1">
    <unix:filepath>/etc/redhat-release</unix:filepath>
  </unix:file_object>
  <ind:textfilecontent54_object id="oval:com.example.oval:obj:2" comment="The forwardable key" version="1">
    <ind:filepath>/etc/krb5.conf</ind:filepath>
    <ind:pattern operation="pattern match">^\s*forwardable\s*=\s*((true|false))\w*</ind:pattern>
    <ind:instance datatype="int" operation="equals">1</ind:instance>
  </ind:textfilecontent54_object>
</objects>
The first object is a simple file reference. The second is a text file content object. More specifically, it matches the line inside /etc/krb5.conf which has forwardable = true or forwardable = false in it. A capture group is defined in the pattern, so that we can refer to the subexpression as part of the test.

The state looks like so:

<states>
  <ind:textfilecontent54_state id="oval:com.example.oval:ste:2" version="1">
    <ind:subexpression datatype="string">true</ind:subexpression>
  </ind:textfilecontent54_state>
</states>
This state refers to the subexpression, and wants it to be true.
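
To make the object/state pairing concrete, here is a quick sanity check (plain Python, not OVAL) of what the pattern's subexpression captures:

# The first capture group of the object's pattern is the subexpression
# that the state compares against "true".
import re

pattern = re.compile(r"^\s*forwardable\s*=\s*((true|false))\w*")
for line in ["forwardable = true", "  forwardable=false", "renewable = true"]:
    match = pattern.match(line)
    print(repr(line), "->", match.group(1) if match else "no match")
# Only a captured value of "true" satisfies oval:com.example.oval:ste:2.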

Testing the checks with Open-SCAP

The Open-SCAP tool is able to test OVAL statements directly. For instance, with the above definition in a file called oval.xml:

~$ oscap oval eval --results oval-results.xml oval.xml
Definition oval:com.example.oval:def:1: true
Evaluation done.
The output of the command shows that the definition was evaluated successfully. If you want more information, open up the oval-results.xml file which contains all the details about the test. This results file is also very useful while developing OVAL as it shows the entire result of objects, tests and so forth.

For instance, the /etc/redhat-release file was only checked to see if it exists, but the results file shows what other parameters can be verified with it as well:

<unix-sys:file_item id="1233781" status="exists">
  <unix-sys:filepath>/etc/redhat-release</unix-sys:filepath>
  <unix-sys:path>/etc</unix-sys:path>
  <unix-sys:filename>redhat-release</unix-sys:filename>
  <unix-sys:type>regular</unix-sys:type>
  <unix-sys:group_id datatype="int">0</unix-sys:group_id>
  <unix-sys:user_id datatype="int">0</unix-sys:user_id>
  <unix-sys:a_time datatype="int">1515186666</unix-sys:a_time>
  <unix-sys:c_time datatype="int">1514927465</unix-sys:c_time>
  <unix-sys:m_time datatype="int">1498674992</unix-sys:m_time>
  <unix-sys:size datatype="int">52</unix-sys:size>
  <unix-sys:suid datatype="boolean">false</unix-sys:suid>
  <unix-sys:sgid datatype="boolean">false</unix-sys:sgid>
  <unix-sys:sticky datatype="boolean">false</unix-sys:sticky>
  <unix-sys:uread datatype="boolean">true</unix-sys:uread>
  <unix-sys:uwrite datatype="boolean">true</unix-sys:uwrite>
  <unix-sys:uexec datatype="boolean">false</unix-sys:uexec>
  <unix-sys:gread datatype="boolean">true</unix-sys:gread>
  <unix-sys:gwrite datatype="boolean">false</unix-sys:gwrite>
  <unix-sys:gexec datatype="boolean">false</unix-sys:gexec>
  <unix-sys:oread datatype="boolean">true</unix-sys:oread>
  <unix-sys:owrite datatype="boolean">false</unix-sys:owrite>
  <unix-sys:oexec datatype="boolean">false</unix-sys:oexec>
  <unix-sys:has_extended_acl datatype="boolean">false</unix-sys:has_extended_acl>
</unix-sys:file_item>
Now, this is just at the OVAL level. The final step is to link it into the XCCDF file.

Referring to OVAL in XCCDF

The XCCDF Rule entry allows for a check element, which refers to an automated check for compliance.

For instance, the above rule could be referred to like so:

<Rule id="xccdf_com.example_rule_krb5-forwardable-true">
  <title>Enable forwardable tickets on RHEL systems</title>
  ...
  <check system="http://oval.mitre.org/XMLSchema/oval-definitions-5">
    <check-content-ref href="oval.xml" name="oval:com.example.oval:def:1" />
  </check>
</Rule>
With this set in the Rule, Open-SCAP can validate it while checking the configuration baseline:

~$ oscap xccdf eval --oval-results --results xccdf-results.xml xccdf.xml
...
Title   Enable forwardable kerberos tickets in krb5.conf libdefaults
Rule    xccdf_com.example_rule_krb5-forwardable-tickets
Ident   RHEL7-01007
Result  pass
A huge advantage here is that, alongside the detailed results of the run, there is also more human-readable output, as it shows the title of the Rule being checked.

The detailed capabilities of OVAL

In the above I've used two examples: a file validation (against /etc/redhat-release) and a file content check (against /etc/krb5.conf). However, OVAL supports many more checks, and also has constraints that you need to be aware of.

In the OVAL Project github account, the Language repository keeps track of the current documentation. By browsing through it, you'll notice that the OVAL capabilities are structured based on the target technology that you can check. Right now, this is AIX, Android, Apple iOS, Cisco ASA, Cisco CatOS, VMWare ESX, FreeBSD, HP-UX, Cisco iOS and iOS-XE, Juniper JunOS, Linux, MacOS, NETCONF, Cisco PIX, Microsoft SharePoint, Unix (generic), Microsoft Windows, and independent.

The independent one contains tests and support for resources that are often reusable across different platforms (as long as your OVAL and XCCDF supporting tools can run them on those platforms). A few notable supporting tests are:

  • filehash58_test which can check for a number of common hashes (such as SHA-512 and MD5). This is useful when you want to make sure that a particular (binary or otherwise) file is available on the system. In enterprises, this could be useful for license files, or specific library files.
  • textfilecontent54_test which can check the content of a file, with support for regular expressions.
  • xmlfilecontent_test which is a specialized test for XML files.
Keep in mind though that, as we have seen above, INI files specifically have no specialization available. It would be nice if CISecurity would develop support for common textual data formats, such as CSV (although that one is easily interpretable with the existing ones), JSON, YAML and INI.

The unix one contains tests specific to Unix and Unix-like operating systems (so yes, it is also useful for Linux), and together with the linux one a wide range of configurations can be checked. This includes support for generic extended attributes (fileextendedattribute_test) as well as SELinux specific rules (selinuxboolean_test and selinuxsecuritycontext_test), network interface settings (interface_test), runtime processes (process58_test), kernel parameters (sysctl_test), installed software tests (such as rpminfo_test for RHEL and other RPM enabled operating systems) and more.


Powerful New DDoS Method Adds Extortion

Post by BrianKrebs via Krebs on Security »

Attackers have seized on a relatively new method for executing distributed denial-of-service (DDoS) attacks of unprecedented disruptive power, using it to launch record-breaking DDoS assaults over the past week. Now evidence suggests this novel attack method is fueling digital shakedowns in which victims are asked to pay a ransom to call off crippling cyberattacks.



On March 1, DDoS mitigation firm Akamai revealed that one of its clients was hit with a DDoS attack that clocked in at 1.3 Tbps, which would make it the largest publicly recorded DDoS attack ever.

The DDoS method used in this record-breaking attack abuses a legitimate and relatively common service called “memcached” (pronounced “mem-cash-dee”) to massively amp up the power of a DDoS assault.

Installed by default on many Linux operating system versions, memcached is designed to cache data and ease the strain on heavier data stores, like disk or databases. It is typically found in cloud server environments and it is meant to be used on systems that are not directly exposed to the Internet.

Memcached communicates using the User Datagram Protocol or UDP, which allows communications without any authentication — pretty much anyone or anything can talk to it and request data from it.

Because memcached doesn’t support authentication, an attacker can “spoof” or fake the Internet address of the machine making that request so that the memcached servers responding to the request all respond to the spoofed address — the intended target of the DDoS attack.

Worse yet, memcached has a unique ability to take a small amount of attack traffic and amplify it into a much bigger threat. Most popular DDoS tactics that abuse UDP connections can amplify the attack traffic 10 or 20 times — allowing, for example, a 1 MB request to generate a response of between 10 MB and 20 MB of traffic.

But with memcached, an attacker can force the response to be thousands of times the size of the request. All of the responses get sent to the target specified in the spoofed request, and it requires only a small number of open memcached servers to create huge attacks using very few resources.
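
Some rough, back-of-the-envelope numbers (illustrative assumptions, not measurements from any specific attack):

# How reflection plus amplification scales a small spoofed request,
# assuming a factor of 10,000 (published memcached estimates vary widely).
request_bytes = 15
amplification = 10_000
print(f"{request_bytes * amplification / 1e6:.2f} MB reflected per request")

attacker_uplink_bps = 100e6            # a 100 Mbit/s connection
print(f"{attacker_uplink_bps * amplification / 1e12:.1f} Tbit/s at the target")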

Akamai believes there are currently more than 50,000 known memcached systems exposed to the Internet that can be leveraged at a moment’s notice to aid in massive DDoS attacks.

Both Akamai and Qrator — a Russian DDoS mitigation company — published blog posts on Feb. 28 warning of the increased threat from memcached attacks.

“This attack was the largest attack seen to date by Akamai, more than twice the size of the September, 2016 attacks that announced the Mirai botnet and possibly the largest DDoS attack publicly disclosed,” Akamai said [link added]. “Because of memcached reflection capabilities, it is highly likely that this record attack will not be the biggest for long.”

According to Qrator, this specific possibility of enabling high-value DDoS attacks was disclosed in 2017 by a Chinese group of researchers from the cybersecurity research outfit 0Kee Team. The larger concept was first introduced in a 2014 Black Hat U.S. security conference talk titled “Memcached injections.”

DDOS VIA RANSOM DEMAND

On Thursday, KrebsOnSecurity heard from several experts from Cybereason, a Boston-based security company that’s been closely tracking these memcached attacks. Cybereason said its analysis reveals the attackers are embedding a short ransom note and payment address into the junk traffic they’re sending to memcached services.

Cybereason said it has seen memcached attack payloads that consist of little more than a simple ransom note requesting payment of 50 XMR (Monero virtual currency) to be sent to a specific Monero account. In these attacks, Cybereason found, the payment request gets repeated until the file reaches approximately one megabyte in size.

The ransom demand (50 Monero) found in the memcached attacks by Cybereason on Thursday.
Memcached can accept and host data in temporary memory for retrieval by others. So the attackers will place the 1 MB payload full of ransom requests onto a server running memcached, and request that payload thousands of times — all the while telling the service that the replies should all go to the same Internet address: the address of the attack’s target.

“The payload is the ransom demand itself, over and over again for about a megabyte of data,” said Matt Ploessel, principal security intelligence researcher at Cybereason. “We then request the memcached ransom payload over and over, and from multiple memcached servers to produce an extremely high volume DDoS with a simple script and any normal home office Internet connection. We’re observing people putting up those ransom payloads and DDoSsing people with them.”

Because it only takes a handful of memcached servers to launch a large DDoS, security researchers working to lessen these DDoS attacks have been focusing their efforts on getting Internet service providers (ISPs) and Web hosting providers to block traffic destined for the UDP port used by memcached (port 11211).

Ofer Gayer, senior product manager at security firm Imperva, said many hosting providers have decided to filter port 11211 traffic to help blunt these memcached attacks.

“The big packets here are very easy to mitigate because this is junk traffic and anything coming from that port (11211) can be easily mitigated,” Gayer said.

Several different organizations are mapping the geographical distribution of memcached servers that can be abused in these attacks. Here’s the world at-a-glance, from our friends at Shadowserver.org:

The geographic distribution of memcached servers exposed to the Internet. Image: Shadowserver.org
Here are the Top 20 networks hosting the largest number of publicly accessible memcached servers at this moment, according to data collected by Cybereason:

The global ISPs with the largest number of publicly available memcached servers.
DDoS monitoring site ddosmon.net publishes a live, running list of the latest targets getting pelted with traffic in these memcached attacks.

What do the stats at ddosmon.net tell us? According to netlab@360, memcached attacks were not super popular as an attack method until very recently.

“But things have greatly changed since February 24th, 2018,” netlab wrote in a Mar. 1 blog post, noting that in just a few days memcached-based DDoS went from less than 50 events per day, up to 300-400 per day. “Today’s number has already reached 1484, with an hour to go.”

Hopefully, the global ISP and hosting community can come together to block these memcached DDoS attacks. I am encouraged by what I have heard and seen so far, and hope that can continue in earnest before these attacks start becoming more widespread and destructive.

Here’s the Cybereason video from which that image above with the XMR ransom demand was taken:


Looking at Lumina Desktop 2.0

Post by Josh Smith via TrueOS »

A few weeks ago I sat down with Lead Developer Ken Moore of the TrueOS Project to get answers to some of the most frequently asked questions about Lumina Desktop from the open source community. Here is what he said about Lumina Desktop 2.0. Do you have a question for Ken and the rest of the team over at the TrueOS Project? Make sure to read the interview and comment below. We are glad to answer your questions!

Ken: Lumina Desktop 2.0 is a significant overhaul compared to Lumina 1.x. Almost every single subsystem of the desktop has been streamlined, resulting in a nearly-total conversion in many important areas.

With Lumina Desktop 2.0 we will finally achieve our long-term goal of turning Lumina into a complete, end-to-end management system for the graphical session and removing all the current runtime dependencies from Lumina 1.x (Fluxbox, xscreensaver, compton/xcompmgr). The functionality from those utilities is now provided by Lumina Desktop itself.

Going along with the session management changes, we have compressed the entire desktop into a single, multi-threaded binary. This means that if any rogue script or tool starts trying to muck about with the memory used by the desktop (probably even more relevant now than when we started working on this), the entire desktop session will close/crash rather than allowing targeted application crashes to bypass the session security mechanisms. By the same token, this also prevents “man-in-the-middle” type of attacks because the desktop does not use any sort of external messaging system to communicate (looking at you, `dbus`). This also gives a large performance boost to Lumina Desktop.

The entire system for how a user’s settings get saved and loaded has been completely redone, making it a “layered” settings system which allows the default settings (Lumina) to get transparently replaced by system settings (OS/Distributor/SysAdmin) which can get replaced by individual user settings. This results in the actual changes in the user setting files to be kept to a minimum and allows for a smooth transition between updates to the OS or Desktop. This also provides the ability to “restrict” a user’s desktop session (based on a system config file) to the default system settings and read-only user sessions for certain business applications.

The entire graphical interface has been written in QML in order to fully-utilize hardware-based GPU acceleration with OpenGL while the backend logic and management systems are still written entirely in C++. This results in blazing fast performance on the backend systems (myriad multi-threaded C++ objects) as well as a smooth and responsive graphical interface with all the bells and whistles (drag and drop, compositing, shading, etc).

The post Looking at Lumina Desktop 2.0 appeared first on TrueOS.

Financial Cyber Threat Sharing Group Phished

Post by BrianKrebs via Krebs on Security »

The Financial Services Information Sharing and Analysis Center (FS-ISAC), an industry forum for sharing data about critical cybersecurity threats facing the banking and finance industries, said today that a successful phishing attack on one of its employees was used to launch additional phishing attacks against FS-ISAC members.

The fallout from the back-to-back phishing attacks appears to have been limited and contained, as many FS-ISAC members who received the phishing attack quickly detected and reported it as suspicious. But the incident is a good reminder to be on your guard, remember that anyone can get phished, and that most phishing attacks succeed by abusing the sense of trust already established between the sender and recipient.

The confidential alert FS-ISAC sent to members about a successful phishing attack that spawned phishing emails coming from the FS-ISAC.
Notice of the phishing incident came in an alert FS-ISAC shared with its members today and obtained by KrebsOnSecurity. It describes an incident on Feb. 28 in which an FS-ISAC employee “clicked on a phishing email, compromising that employee’s login credentials. Using the credentials, a threat actor created an email with a PDF that had a link to a credential harvesting site and was then sent from the employee’s email account to select members, affiliates and employees.”

The alert said while FS-ISAC was already planning and implementing a multi-factor authentication (MFA) solution across all of its email platforms, “unfortunately, this incident happened to an employee that was not yet set up for MFA. We are accelerating our MFA solution across all FS-ISAC assets.”

The FS-ISAC also said it upgraded its Office 365 email version to provide “additional visibility and security.”

In an interview with KrebsOnSecurity, FS-ISAC President and CEO Bill Nelson said his organization has grown significantly in new staff over the past few years to more than 75 people now, including Greg Temm, the FS-ISAC’s chief information risk officer.

“To say I’m disappointed this got through is an understatement,” Nelson said. “We need to accelerate MFA extremely quickly for all of our assets.”

Nelson observed that “The positive messaging out of this I guess is anyone can become victimized by this.” But according to both Nelson and Temm, the phishing attack that tricked the FS-ISAC employee into giving away email credentials does not appear to have been targeted — nor was it particularly sophisticated.

“I would classify this as a typical, routine, non-targeted account harvesting and phishing,” Temm said. “It did not affect our member portal, or where our data is. That’s 100 percent multifactor. In this case it happened to be an asset that did not have multifactor.”

In this incident, it didn’t take a sophisticated actor to gain privileged access to an FS-ISAC employee’s inbox. But attacks like these raise the question: How successful might such a phishing attack be if it were only slightly more professional and/or organized?

Nelson said his staff members all participate in regular security awareness training and testing, but that there is always room to close security gaps and to reduce the number of people who click on email links when they shouldn’t.

“The data our members share with us is fully protected,” he said. “We have a plan working with our board of directors to make sure we have added security going forward,” Nelson said. “But clearly, recognizing where some of these softer targets are is something every company needs to take a look at.”

How not to run a CA

Post by kris via The Isoblog. »

Trustico is an SSL certificate reseller, which just compromised 23,000 certificates it sold.

TL;DR: Forget your EV or other certs. Just run “Let’s Encrypt”. It gets you a cert, it’s fresh, and it does not make any difference whatsoever. At least not any you or anyone else can check for, or cares for.

Actual Trustico ad from their website (Screenshot 01-Mar-2018)
Here is how:

Chrome is going to distrust the Symantec root certificate soon, because Symantec failed at running a Certificate Authority properly, repeatedly. Certificates sold by a reseller that are rooted at Symantec will become invalid, no matter what expiry date the certificate itself states.

Trustico wanted to move their customers from Symantec to Comodo, another company that failed at running a Certificate Authority properly, repeatedly, but which is not distrusted by Chrome, yet.

They asked Digicert, who bought the CA business of Symantec, to revoke (“cancel”) the certs Trustico had sold to its customers, just because they said so. That, said Digicert, is not possible, because the conditions for revoking a certificate have not been met; and if you revoke certificates just because someone says so, you are not running a Certificate Authority properly, which is something Digicert would rather avoid.

Apparently Digicert also gave examples of proper causes for certificate revocation, and the one prime cause for revoking a cert is the private key (which is supposed to be a secret known only to the proper owner of the cert, which is neither Digicert nor Trustico) being made known to people who have no business knowing it.

So the CEO of Trustico, Zane Lucas, mailed the private keys of 23,000 Trustico customers to Digicert. This effectively invalidated the certificates, and at the same time proved that Trustico kept copies of all their customers’ private keys, which in turn shows that they never knew how to run a Certificate Authority business in the first place. It retroactively compromised the security of all 23,000 customers, and it ends Trustico’s own presence in the security space for good.

So Digicert needed help to do a mass cert revocation, and also informed the customers of Trustico of the situation, directly. Which angered the CEO of Trustico, because it made their absolute and limitless incompetence public.

Trustico Statement

Digicert Statement

Mozilla Mailing List Thread

Earlier statement (Mid 2017) of Comodo and Trustico Partnership, which may need review under the current situation

Reddit Thread

IT Wire article

Cyberscoop article



Differences Between TrueOS & Linux

Post by Josh Smith via TrueOS »

When we visit Linux conferences, many people ask what makes TrueOS different from Linux. A lot of us would say FreeBSD of course, but there are other reasons that we like to mention when discussing the differences that we see. Check out our article guest-written for Kompulsa Magazine and tell us why you use TrueOS, Linux, or FreeBSD.

5 Differences Between TrueOS & Linux

The post Differences Between TrueOS & Linux appeared first on TrueOS.