Planet 2014-07-30 17:00 UTC


Calling all FreeBSD developers needing assistance with travel expenses to EuroBSDCon 2014.

The FreeBSD Foundation will be providing a limited number of travel grants to individuals requesting assistance. Please fill out and submit the Travel Grant Request Application at http://www.freebsdfoundation.org/documents/TravelRequestForm.pdf by August 15th, 2014 to apply for this grant.

This program is open to FreeBSD developers of all sorts (kernel hackers, documentation authors, bugbusters, system administrators, etc.). In some cases we are also able to fund non-developers, such as active community members and FreeBSD advocates.

If you are a speaker at the conference, we expect the conference to cover your travel costs, and will most likely not approve your direct request to us.

There's some flexibility in the mechanism, so talk to us if something about the model doesn't quite work for you or if you have any questions. The travel grant program is one of the most effective ways we can spend money to help support the FreeBSD Project, as it helps developers get together in the same place at the same time, and helps advertise and advocate FreeBSD in the larger community.


I'm getting old. That's a fact, and there's no changing it.

But what is also a fact is that one of my absolute favourite strategy games has been released with current graphics as a native Linux client, and it took my son to make me notice it. I knew that Steam had come out for Linux, but not that it would bring such a massive surge of games.

My son simply gave me the game as a present :-) Many thanks for that!

After upgrading to Jessie especially for this and getting the graphics working again with the latest Nvidia driver from June 10, 2014, I installed Steam and began downloading the 13 GB (!) (WTF?) package.

Maaaannn, 13 GB! I haven't played commercial games in a long time, let alone installed any. That seems to be normal these days.

I have a somewhat older NVidia card, a GTX285 Gold or something like that. With it the game runs absolutely smoothly at 1280x1024 resolution. (Unfortunately my monitor doesn't do more.)

Since I don't have an HD monitor, the graphics card isn't pushed that hard anyway.

I have completed the first three tutorial missions. I think XCOM and I are going to be friends :-)

One thing I noticed: there is a problem in the base-building menu.

If nothing, or only something like this, shows up for you, then the graphics driver is not the right one. It has to be the NVidia beta driver

340.24 from July 8, 2014. This bug only occurs with older-generation graphics cards; newer cards are not affected.

PS: Why don't I show more pictures? Because I think real fans already know them anyway. If not, just try a Google image search for xcom enemy within.



Three Israeli defense contractors responsible for building the “Iron Dome” missile shield currently protecting Israel from a barrage of rocket attacks were compromised by hackers and robbed of huge quantities of sensitive documents pertaining to the shield technology, KrebsOnSecurity has learned.

The never-before publicized intrusions, which occurred between 2011 and 2012, illustrate the continued challenges that defense contractors and other companies face in deterring organized cyber adversaries and preventing the theft of proprietary information.

A component of the ‘Iron Dome’ anti-missile system in operation, 2011.

According to Columbia, Md.-based threat intelligence firm Cyber Engineering Services Inc. (CyberESI), between Oct. 10, 2011 and August 13, 2012, attackers thought to be operating out of China hacked into the corporate networks of three top Israeli defense technology companies, including Elisra Group, Israel Aerospace Industries, and Rafael Advanced Defense Systems.

By tapping into the secret communications infrastructure set up by the hackers, CyberESI determined that the attackers exfiltrated large amounts of data from the three companies. Most of the information was intellectual property pertaining to Arrow III missiles, Unmanned Aerial Vehicles (UAVs), ballistic rockets, and other technical documents in the same fields of study.

Joseph Drissel, CyberESI’s founder and chief executive, said the nature of the exfiltrated data and the industry that these companies are involved in suggests that the Chinese hackers were looking for information related to Israel’s all-weather air defense system called Iron Dome.

The Israeli government has credited Iron Dome with intercepting approximately one-fifth of the more than 2,000 rockets that Palestinian militants have fired at Israel during the current conflict. The U.S. Congress is currently wrangling over legislation that would send more than $350 million to Israel to further development and deployment of the missile shield technology. If approved, that funding boost would make nearly $1 billion from the United States over five years for Iron Dome production, according to The Washington Post.

Neither Elisra nor Rafael responded to requests for comment about the apparent security breaches. A spokesperson for Israel Aerospace Industries brushed off CyberESI’s finding, calling it “old news.” When pressed to provide links to any media coverage of such a breach, IAI was unable to locate or point to specific stories. The company declined to say whether it had alerted any of its U.S. industry partners about the breach, and it refused to answer any direct questions regarding the incident.

“At the time, the issue was treated as required by the applicable rules and procedures,” IAI Spokeswoman Eliana Fishler wrote in an email to KrebsOnSecurity. “The information was reported to the appropriate authorities. IAI undertook corrective actions in order to prevent such incidents in the future.”

Drissel said many of the documents that were stolen from the defense contractors are designated with markings indicating that their access and sharing is restricted by International Traffic in Arms Regulations (ITAR) — U.S. State Department controls that regulate the defense industry. For example, Drissel said, among the data that hackers stole from IAI is a 900-page document that provides detailed schematics and specifications for the Arrow 3 missile.

“Most of the technology in the Arrow 3 wasn’t designed by Israel, but by Boeing and other U.S. defense contractors,” Drissel said. “We transferred this technology to them, and they coughed it all up. In the process, they essentially gave up a bunch of stuff that’s probably being used in our systems as well.”

WHAT WAS STOLEN, AND BY WHOM?

According to CyberESI, IAI was initially breached on April 16, 2012 by a series of specially crafted email phishing attacks. Drissel said the attacks bore all of the hallmarks of the “Comment Crew,” a prolific and state-sponsored hacking group associated with the Chinese People’s Liberation Army (PLA) and credited with stealing terabytes of data from defense contractors and U.S. corporations.

Image: FBI

The Comment Crew is the same hacking outfit profiled in a February 2013 report by Alexandria, Va.-based incident response firm Mandiant, which referred to the group simply by its official designation — “P.L.A. Unit 61398.” In May 2014, the U.S. Justice Department charged five prominent military members of the Comment Crew with a raft of criminal hacking and espionage offenses against U.S. firms.

Once inside the IAI’s network, Comment Crew members spent the next four months in 2012 using their access to install various tools and trojan horse programs on systems throughout the company’s network and to expand their access to sensitive files, CyberESI said. The actors compromised privileged credentials, dumped password hashes, and gathered system, file, and network information for several systems. The actors also successfully used tools to dump Active Directory data from domain controllers on at least two different domains on the IAI’s network.

All told, CyberESI was able to identify and acquire more than 700 files — totaling 762 MB — that were exfiltrated from IAI’s network during the compromise. The security firm said most of the data acquired was intellectual property and likely represented only a small portion of IAI’s entire data loss.

“The intellectual property was in the form of Word documents, PowerPoint presentations, spread sheets, email messages, files in portable document format (PDF), scripts, and binary executable files,” CyberESI wrote in a lengthy report produced about the breaches.

“Once the actors established a foothold in the victim’s network, they are usually able to compromise local and domain privileged accounts, which then allow them to move laterally on the network and infect additional systems,” the report continues. “The actors acquire the credentials of the local administrator accounts by using hash dumping tools. They can also use common local administrator account credentials to infect other systems with Trojans. They may also run hash dumping tools on Domain Controllers, which compromises most if not all of the password hashes being used in the network. The actors can also deploy keystroke loggers on user systems, which captured passwords to other non-Windows devices on the network.”

The attackers followed a similar modus operandi in targeting Elisra, a breach which CyberESI says began in October 2011 and persisted intermittently until July 2012. The security firm said the attackers infiltrated and copied the emails for many of Elisra’s top executives, including the CEO, the chief technology officer (CTO) and multiple vice presidents within the company.

CyberESI notes it is likely that the attackers were going after persons of interest with access to sensitive information within Elisra, and/or were gathering would-be targets for future spear-phishing campaigns.

Drissel said that, as with many other such intellectual property breaches the company has detected over the years, neither the victim firms nor the U.S. government provided any response after CyberESI alerted them to the breaches at the time.

“The reason that nobody wants to talk about this is people don’t want to re-victimize the victim,” Drissel said. “But the real victims here are the people on the other end who are put in harm’s way because of poor posture on security and the lack of urgency coming from a lot of folks on how to fix this problem. So many companies have become accustomed to low-budget IT costs. But the reality is that if you have certain sensitive information, you’ve got to spend a certain amount of money to secure it.”

ANALYSIS

While some of the world’s largest defense contractors have spent hundreds of millions of dollars and several years learning how to quickly detect and respond to such sophisticated cyber attacks, it’s debatable whether this approach can or should scale for smaller firms.

Michael Assante, project lead for Industrial Control System (ICS) and Supervisory Control and Data Acquisition (SCADA) security at the SANS Institute, said although there is a great deal of discussion in the security industry about increased information sharing as the answer to detecting these types of intrusions more quickly, this is only a small part of the overall solution.

“We collectively talk about all of the things that we should be doing better — that we need to have better security policies, better information sharing, better detection, and we’re laying down the tome and saying ‘Do all of these things’,” Assante said. “And maybe a $100 million security program can do all these things well or make progress against these types of attacks, but that 80-person defense contractor? Not so much.”

Assante said most companies in the intelligence and defense industries have gotten better at sharing information and at the so-called “cyber counter-intelligence” aspect of these attacks: Namely, in identifying the threat actors, tactics and techniques of the various state-sponsored organizations responsible. But he noted that most organizations still struggle with the front end of the problem: Identifying the original intrusion and preventing the initial compromise from blossoming into a much bigger problem.

“I don’t think we’ve improved much in that regard, where the core challenges are customized malware, persistent activity, and a lot of noise,” Assante said. “Better and broader notification [by companies like CyberESI] would be great, but the problem is that typically these notifications come after sensitive data has already been exfiltrated from the victim organization. Based on the nature of advanced persistent threats, you can’t beat that time cycle. Well, you might be able to, but the amount of investment needed to change that is tremendous.”

Ultimately, securing sensitive systems from advanced, nation-state level attacks may require a completely different approach. After all, as Einstein said, “We cannot solve our problems with the same thinking we used when we created them.”

Indeed, that appears to be the major thrust of a report released this month by Richard J. Danzig, a board member of the Center for New American Security. In “Surviving on a Diet of Poison Fruit,” (PDF) Danzig notes that defensive efforts in major mature systems have grown more sophisticated and effective.

“However, competition is continuous between attackers and defender,” he wrote. “Moreover, as new information technologies develop we are not making concomitant investments in their protection. As a result, cyber insecurities are generally growing, and are likely to continue to grow, faster than security measures.”

In his conclusion, Danzig offers a range of broad (and challenging) suggestions, including this gem, which emphasizes placing a premium on security over ease-of-use and convenience in mission-critical government systems:

“For critical U.S. government systems, presume cyber vulnerability and design organizations, operations and acquisitions to compensate for this vulnerability. Do this by a four-part strategy of abnegation, use of out-of-band architectures, diversification and graceful degradation. Pursue the first path by stripping the ‘nice to have’ away from the essential, limiting cyber capabilities in order to minimize cyber vulnerabilities. For the second, create non-cyber interventions in cyber systems. For the third, encourage different cyber dependencies in different systems so single vulnerabilities are less likely to result in widespread failure or compromise. And for the fourth, invest in discovery and recovery capabilities. To implement these approaches, train key personnel in both operations and security so as to facilitate self-conscious and well-informed tradeoffs between the security gains and the operational and economic costs from pursuing these strategies.”

Source: Center for New American Security



Interview about mandoc with Ingo Schwarze.  The project webpage describes mandoc as "a suite of tools compiling mdoc, the roff macro language of choice for BSD manual pages, and man, the predominant historical language for UNIX manuals."

Recorded at BSDCan 2014.

File Info: 16Min, 8MB.

Ogg Link: http://cis01.uma.edu/~wbackman/bsdtalk/bsdtalk243.ogg



We've got the stabilization of Perl 5.18 upcoming, so what better chance is there to explain a bit how the Perl-related ebuilds in Gentoo work...

First of all, there is dev-lang/perl. This contains the Perl core distribution, installing the binaries and all the Perl modules that are bundled with Perl itself.

Then, there is the perl-core category. It contains independent ebuilds for Perl modules that are also present in the core Perl distribution. Most Perl modules that are bundled with Perl are also released as independent tarballs. If any of these packages is installed from perl-core, its files are placed such that the perl-core copy overrides the bundled one. This means you can also update part of the bundled Perl modules, e.g. in case of a bug, without updating Perl itself.

Next, there are a lot of virtuals "virtual/perl-..." in the virtual category of the portage tree. What are these good for? Well, imagine you want to depend on a specific version of a module that is usually bundled with Perl. For example, say you need Module::CoreList at version 3 or later. This can either be provided by a new enough Perl (for example, the now hardmasked Perl 5.20 contains Module::CoreList 3.10), or by a separate package from perl-core (where we have Module::CoreList 5.021001 as perl-core/Module-CoreList-5.21.1).
To make sure that everything works, you should never directly depend on a perl-core package, but always on the corresponding virtual (here virtual/perl-Module-CoreList; any perl-core package must have a corresponding virtual). Then both ways to fulfil the dependency are automatically taken into account. (Future repoman versions will warn if you directly depend on perl-core. Also you should never have anything perl-core in your world file!)

Last, we have lots and lots of modules in the dev-perl category. Most of them are from CPAN, and the only thing they have in common is that they have no copy inside core Perl.

I hope this clarifies things a bit. More Perl posts coming...


A few weeks ago, I wrote about problems with kernel 3.13 on Ubuntu 12.04 LTS  and 14.04 LTS.

Most likely, the problem that caused the excessive CPU load and occasional high network latency has been fixed by now, and the fix is going to be included in version 3.13.0-33 of the kernel package. I experienced this problem on a multi-processor machine, so it is probable that this was the problem with KSM and NUMA that has been fixed.

I am not sure whether the problems that I had with IPv6 connectivity are also solved by this fix: I experienced those problems on a single-processor (but multi-core) machine, so it does not sound like a NUMA problem to me.

Anyhow, I will give the 3.13 kernel another try when the updated version is released. For the moment, I have migrated all server machines back to the 3.2 kernel, because the 3.5 kernel's end of life is coming soon and the 3.13 kernel is not yet ready for production use. I do not expect considerable gains from using a newer kernel version on the servers anyway, so for the moment the 3.2 kernel is a good option.




The longer one lurks in the Internet underground, the more difficult it becomes to ignore the harsh reality that for nearly every legitimate online business there is a cybercrime-oriented anti-business. Case in point: Today’s post looks at a popular service that helps crooked online marketers exhaust the Google AdWords budgets of their competitors.

Youtube ads from “GoodGoogle” pitching his AdWords click fraud service.

AdWords is Google’s paid advertising product, displaying ads on the top or the right side of your screen in search results. Advertisers bid on specific keywords, and those who bid the highest will have their ads show up first when Internet users search for those terms. In turn, advertisers pay Google a small amount each time a user clicks on one of their ads.

One of the more well-known forms of online ad fraud (a.k.a. “click fraud“) involves Google AdSense publishers that automate the clicking of ads appearing on their own Web sites in order to inflate ad revenue. But fraudsters also engage in an opposite scam involving AdWords, in which advertisers try to attack competitors by raising their costs or exhausting their ad budgets early in the day.

Enter “GoodGoogle,” the nickname chosen by one of the more established AdWords fraudsters operating on the Russian-language crime forums.  Using a combination of custom software and hands-on customer service, GoodGoogle promises clients the ability to block the appearance of competitors’ ads.

“Are you tired of the competition in Google AdWords that take your first position and quality traffic,?” reads GoodGoogle’s pitch. “I will help you get rid once and for all competitors in Google Adwords.”

The service, which appears to have been on offer since at least January 2012, provides customers both a la carte and subscription rates. The prices range from $100 to block between three and ten ad units for 24 hours to $80 for 15 to 30 ad units. For a flat fee of $1,000, small businesses can use GoodGoogle’s software and service to sideline a handful of competitors’ ads indefinitely. Fees are paid up-front and in virtual currencies (WebMoney, e.g.), and the seller offers support and a warranty for his work for the first three weeks.

Reached via instant message, GoodGoogle declined to specify how his product works, instead referring me to several forums where I could find dozens of happy customers to vouch for the efficacy of the service.

Nicholas Weaver, a researcher at the International Computer Science Institute (ICSI) and at the University of California, Berkeley, speculated that GoodGoogle’s service consists of two main components: a private botnet of hacked computers that do the clicking on ads, and advanced software that controls the clicking activity of the botted computers so that it appears to be done organically from search results.

Further, he said, the click fraud bots probably are not used for any other purpose (such as spam or denial-of-service attacks) since doing so would risk landing those bots on lists of Internet addresses that Google and other large Internet companies use to keep track of abuse complaints.

“You’d pretty much have to do this kind of thing as a service, because if you do it just using software alone, you aren’t going to be able to get a wide variety of traffic,” Weaver said. “Otherwise, you’re going to start triggering alarms.”

Amazingly, the individual responsible for this service not only invokes Google’s trademark in his nickname and advertises his wares via instructional videos on Google’s YouTube service, but he also lists several Gmail accounts as points of contact. My guess is it will not be difficult for Google to shutter this operation, and possibly to identify this individual in real life.



This year's Cisco Live in San Francisco was already a few weeks ago (May 18-22, which was very early in the year this time), and since I was a bit short on time I had intended to skip a recap. But on request, here it is now, with considerable delay.
This year I wasn't sure whether I wanted to go to Cisco Live at all, since the date really didn't suit me. But since this year's Cisco Designated VIP status once again got me the admission fee waived, I went after all.
Booking late turned out to be a clear disadvantage, at least as far as hotel prices were concerned. No hotels could be had through the Cisco website anymore, and nothing "affordable" was to be found on the usual booking sites either.
Since I only arrived on Saturday, I didn't attend a Techtorial on Sunday this time. With jet lag and tiredness you simply don't take in enough. That is also one of the reasons why I normally prefer to arrive two or three days in advance.

Outback Prime Rib

So this time Sunday's program consisted of the free exam and a bit of shopping with the trainer colleagues who were also attending. And since one of the colleagues had luckily rented a car, we also managed our obligatory Outback visit with an excellent prime rib.

Monday through Thursday then brought many very interesting sessions again:

  • BRKSEC-1030 – Introduction to the Cisco Sourcefire NGIPS
  • Well, if only I had noticed beforehand that this is a 1000-level session. That was far too much marketing!

  • BRKSEC-2053 – Practical PKI for Remote Access VPN
  • An extremely important session if you plan to implement AnyConnect VPNs with certificates. I had hoped Windows 2012 would already be covered, which was not the case. Still, there were one or two tricks and tips here that will surely help me at some point.

  • BRKCRT-2000 – HardCore IPv6 Routing – No Fear
  • This session was a huge disappointment. It was solely and exclusively about IPv6 in the Cisco certifications from CCNA to CCIE. Hardcore? Not a trace! Based on my experience so far, I will probably avoid sessions in the future when Scott Morris is one of the presenters. Somehow there is never any "real life" in them, only certification.

  • BRKCRT-2206 – Cisco Cyber Security Analyst Specialist Certification
  • This is a very interesting new certification that I am putting on my list as a medium-term goal.

  • BRKSEC-2042 – Web Filtering and Content Control in the Enterprise
  • Here the advantages and disadvantages of the various methods (CWS/WSA/WSE/SF) were compared against each other across many criteria. A very good overview if you are wondering which solution would be the right one. CWS came out ahead almost every time. Well, if only it weren't a cloud proxy solution ...

  • BRKSEC-3001 – Advanced IKEv2 Protocol
  • A great session. I had so far taught myself IKEv2 entirely through self-study. This session gave me the very welcome confirmation that I had learned and understood the changes from IKEv1 correctly, because in self-study it happens very quickly that you misunderstand something and internalize the wrong thing.

  • BRKSEC-2025 – Snort Packet Processing Architecture
  • Interesting, but it could have gone considerably deeper.

  • BRKSEC-3032 – Advanced – ASA Clustering Deep Dive
  • Another highlight. I had already seen the presenter in the past with "ASA Performance Optimizations". The man simply lives and breathes the ASA, and there was plenty of information on the new clustering.

  • BRKRST-3336 – WAN Virtualization Using Over-the-Top (OTP)
  • OTP is an EIGRP extension that I already got to know at the last Cisco Live. Super interesting. It's about time another new routing implementation came up at a customer where it could be put to use. The last implementations always ended up being OSPF, even though EIGRP was my favourite protocol for a long time.

  • BRKRST-2304 – Hitchhiker’s Guide to Troubleshooting IPv6
  • I had already heard this session two years ago and it was very good. And since quite a lot is happening around IPv6, I will certainly listen to it again in the future.

  • BRKSEC-3697 – Advanced ISE Services, Tips and Tricks
  • Another highlight of the Networkers. Aaron Woland is one of the top ISE people at Cisco. I was in this session last year as well, and there was a lot of new material on the then-upcoming version 1.2. This year there was some repetition of last year, but also plenty of new material and information on the upcoming version 1.3.

  • BRKSEC-3061 – Detect and Protect Against Security Threats, Before It’s Too Late!
  • And as every year, there are of course bad sessions too. The title sounded quite good and a 3000-level session usually promises depth, but after the session I asked myself what the presenters actually wanted to tell us. It was somehow nothing.

    What else was going on around Cisco Live?

    • Monday
    • The keynote with John Chambers. Yes, I'm repeating myself, but I like listening to him. And as every year you wonder afterwards what he actually told you ... ;-) In any case it was, as last year, about the "Internet of Everything", which was again the central topic this year. Another topic was "Fast IT". Did he really say that customers want the IT world to turn faster and innovation cycles to become shorter? OK, let's brace ourselves for the IT hamster wheel shifting up another gear.
      In the Certification Lounge of the World of Solutions the Cisco badge got spruced up again and there were a few CCIE gifts, among them a spare battery pack for charging phones. Very practical, as I always carry something like that when out on my road bike; an iPhone battery simply doesn't last four hours when it is tracking the route.
      In the evening there was the Cisco Designated VIP dinner. Once again very tasty food and drinks were served, and meeting the VIP colleagues again was very enjoyable. There were also a few goodies again, such as the 2014 VIP polo shirt and a wall clock.

      Photo captions: one of the central topics; this was the 25th Cisco Live? Well, it just went by a different name in earlier years; the Cisco shirt in the WoS again; this year's Cisco Designated VIP shirt is blue; it now sits on the shelf in the office.
    • Tuesday
    • At midday there was the CCIE/NetVet reception with John Chambers. I was very pleased that this year it did not run in parallel with other sessions. And an alternative lunch is always welcome: the "normal" lunch was always the pre-packed sandwich boxes, which were OK but hardly an outstanding treat. The reception took place in a restaurant right next to the convention center. When I came out of the center my first thought was "oh, what a huge queue; those poor people have to sweat in the sun". OK, a minute later I realized that this was the queue of the CCIE/NetVets. A few hundred people, and at the entrance just one person checking badges and one handing out the collector pins. But we would have to get used to standing in line anyway ...
      Inside it was then more than packed, which didn't matter much while eating. But once the discussion with John Chambers got going, it was plain to see that the venue was far too small. Anyone who couldn't position themselves right in the middle or next to one of the loudspeakers simply didn't catch much. Part of the discussion reflected something that has occasionally surfaced as a rumour lately: in the first year after his time as CEO he is not going to do anything at all. I am curious what a Cisco without John Chambers will eventually look like, and what will change with things like this always very interesting CCIE/NetVet reception.
      In the evening there was the CCIE party at the California Academy of Sciences. We were taken there by bus, and the queue at the entrance was huge. At least for those of us who had brought non-CCIEs along as guests, since in that case you couldn't go straight in. Still, as always, very tasty food and plenty of interesting things to look at.

      Photo captions: in the shade at last after about a quarter of an hour; this year's CCIE/NetVet collector pin; food, food, food ...; inside the California Academy of Sciences.
    • Wednesday
    • The Customer Appreciation Event took place at AT&T Park, the stadium where the San Francisco Giants normally play out their wins. Getting in was once again an adventure: funnelling the roughly 15,000 people through far too few metal detectors took so long that a truly enormous queue formed around the stadium. It didn't hurt the mood inside, though; Lenny Kravitz and Imagine Dragons made for a great atmosphere. And food and drinks? Yes, also good and plentiful!

      Photo captions: this is where the party was; dangers lurk everywhere; lots and lots of anniversary hats.
    • Thursday
    • In the afternoon there was the guest keynote. Salman Khan talked about his life and how he built up the Khan Academy. It was superbly told and peppered with plenty of laughs. Quite an achievement to put a learning portal like that on its feet for everyone.

      Thursday, however, also brought the biggest pitying laugh, or rather a small disappointment. For filling out the conference evaluation you normally get a small present, and accordingly the announcement said, in essence, "Fill out the conference evaluation and get a Giants baseball cap".

      The Cisco baseball cap

      OK, I did that and got my cap. But shortly after me the word was "no more caps available". And the real kicker? For the 25,000 registered attendees Cisco had only 3,000 (or 2,000? I don't remember) caps. And a few hundred of those were gone from the start, since almost all of the staff were wearing one, or had made some disappear under the table shortly before the end. That led to a small shitstorm, with some public comments disappearing again very quickly ... It was really sad.

    It was nice that lunch was served at several different locations, among them Yerba Buena Gardens, which was very pleasant:
    Photo captions: a typical lunch; Yerba Buena Gardens.

    The summary:
    It was once again enormous fun to attend the Networkers, and it was very interesting almost throughout. In many places, though, the overall organization felt as if the event were being held for the first time. And San Francisco simply turned out to be far too small a venue this year.

    The next Cisco Live will then take place in San Diego again (as it did two years ago); June 7-11 is already marked in my calendar. I'm looking forward to it already.



    pkg 1.3.0 out! via Planet FreeBSD | 2014-07-25 09:57 UTC

    Hi all,

    I'm very pleased to announce the release of pkg 1.3.0.
    This version is the result of almost 9 months of hard work.

    Here are the statistics for the version:
    - 373 files changed, 66973 insertions(+), 38512 deletions(-)
    - 29 different contributors

    Please note that for the first time I'm not the main contributor, and I would
    like to particularly thank Vsevolod Stakhov for all the hard work he has done to
    allow us to get this release out. I would also like to give special thanks to
    Andrej Zverev for the tons of hours spent on testing and cleaning the bug
    tracker!

    So much has happened that it is hard to summarize so I’ll try to highlight the
    major points:
    - New solver, now pkg has a real SAT solver able to automatically handle
    conflicts and dynamically discover them. (yes pkg set -o is deprecated now)
    - pkg install is now able to install local files as well and to resolve their
    dependencies from the remote repositories
    - Lots of parts of the code have been sandboxed
    - Lots of rework to improve portability
    - The package installation process has been reworked to be safer and to handle
    the schg flags properly
    - Important modification of the locking system for finer grain locks
    - Massive usage of libucl
    - Simplification of the API
    - Lots of improvements on the UI to provide a better user experience.
    - Lots of improvements in multi repository mode
    - pkg audit code has been moved into the library
    - pkg -o A=B to override configuration file options from the command line
    - The UI now supports long options
    - A package's unicity is no longer determined by its origin
    - Tons of bug fixes
    - Tons of behaviour fixes
    - Way more!

    Thank you to all contributors:
    Alberto Villa, Alexandre Perrin, Andrej Zverev, Antoine Brodin, Brad Davis,
    Bryan Drewery, Dag-Erling Smørgrav, Dmitry Marakasov, Elvira Khabirova, Jamie
    Landeg Jones, Jilles Tjoelker, John Marino, Julien Laffaye, Mathieu Arnold,
    Matthew Seaman, Maximilian Gaß, Michael Gehring, Michael Gmelin, Nicolas Szalay,
    Rodrigo Osorio, Roman Naumann, Rui Paulo, Sean Channel, Stanislav E. Putrya,
    Vsevolod Stakhov, Xin Li, coctic

    Regards,
    Bapt on behalf of the pkg@



    The April-June, 2014 Status Report is now available with 24 entries.


    A Russian man detained in Spain is facing extradition to the United States on charges of running an international cyber crime ring that allegedly stole more than $10 million in electronic tickets from e-tickets vendor StubHub.

    Vadim Polyakov, 30, was detained while vacationing in Spain. Polyakov is wanted on conspiracy charges to be unsealed today in New York, where investigators with the Manhattan District Attorney’s office and the U.S. Secret Service are expected to announce coordinated raids of at least 20 people in the United States, Canada and the United Kingdom accused of running an elaborate scam to resell stolen e-tickets and launder the profits.

    Sources familiar with the matter describe Polyakov, from St. Petersburg, Russia, as the ringleader of the gang, which allegedly used thousands of compromised StubHub user accounts to purchase huge volumes of electronic, downloadable tickets that were fed to a global network of resellers.

    Robert Capps, senior director of customer success for RedSeal Networks and formerly head of StubHub’s global trust and safety organization, said the fraud against StubHub — which is owned by eBay — largely was perpetrated with usernames and passwords stolen from legitimate StubHub customers. Capps noted that while banks have long been the target of online account takeovers, many online retailers are unprepared for the wave of fraud that account takeovers can bring.

    “In the last year online retailers have come under significant attack by cyber criminals using techniques such as account takeover to commit fraud,” Capps said. “Unfortunately, the transactional risk systems employed by most online retailers are not tuned to detect and defend against malicious use of existing customer accounts.  Retooling these systems to detect account takeovers can take some time, leaving retailers exposed to significant financial losses in the intervening time.”

    Polyakov is the latest in a recent series of accused Russian hackers detained while traveling abroad and currently facing extradition to the United States. Dmitry Belorossov, a Russian citizen wanted in connection with a federal investigation into a cyberheist gang that leveraged the Gozi Trojan, also is facing extradition to the United States from Spain. He was arrested in Spain in August 2013 while attempting to board a flight back to Russia.

    Last month, federal authorities announced they had arrested Russian citizen Roman Seleznev as he was vacationing in the Maldives. Seleznev, the son of a prominent Russian lawyer, is currently being held in Guam and is awaiting extradition to the United States.

    Arkady Bukh, a New York criminal lawyer who frequently represents Russian and Eastern European hackers who wind up extradited to the United States, said the Polyakov case will be interesting to watch because his extradition is being handled by New York authorities, not the U.S. government.

    “I’m not saying they won’t get some help from the feds, but extradition by state prosecutors is often a failure,” Bukh said. “In fact, I don’t remember the last time we saw a successful extradition of cybercrime suspects by U.S. state prosecutors. You have to have a lot of political juice to pull off that kind of thing, and normally state prosecutors don’t have that kind of juice.”

    Nevertheless, Bukh said, U.S. authorities have made it crystal clear that there are few countries outside of Russia and Ukraine which can be considered safe havens for wanted cybercriminals.

    “The U.S. government has delivered the message that these guys can get arrested anywhere, that there are very few places they can go and go safely,” Bukh said.



    vc4 driver month 1 via Planet FreeBSD | 2014-07-23 00:42 UTC

    I've just pushed the vc4-sim-validate branch to my Mesa tree. It's the culmination of the last week's worth of pondering and false starts since I got my first texture sampling in simulation last Wednesday.

    Handling texturing on vc4 safely is a pain. The pointer to texture contents doesn't appear in the normal command stream, and instead it's in the uniform stream. Which uniform happens to contain the pointer depends on how many uniforms have been loaded by the time you get to the QPU_W_TMU[01]_[STRB] writes. Since there's no iommu, I can't trust userspace to tell me where the uniform is, otherwise I'd be allowing them to just lie and put in physical addresses and read arbitrary system memory.

    This meant I had to write a shader parser for the kernel, have that spit out a collection of references to texture samples, switch the uniform data from living in BOs in the user -> kernel ABI to instead being passed in as normal system memory that gets copied to the temporary exec bo, and then do relocations on that.

    Instead of trying to write this in the kernel, with a ~10 minute turnaround time per test run, I copied my kernel code into Mesa with a little bit of wrapper code to give a kernel-like API environment, and did my development on that. When I'm looking at possibly 100s of iterations to get all the validation code working, it was well worth the day spent to build that infrastructure so that I could get my testing turnaround time down to about 15 sec.
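    That wrapper layer boils down to shimming the handful of kernel APIs the validation code touches so that the same file also compiles in userspace. Something along these lines, purely as a hypothetical illustration and not the actual simulator code:

        /*
         * Userspace stand-ins for the kernel APIs the validation code uses,
         * so the same .c file can be built and iterated on inside Mesa.
         */
        #include <stdio.h>
        #include <stdlib.h>

        #define GFP_KERNEL              0
        #define kmalloc(size, flags)    malloc(size)
        #define kcalloc(n, size, flags) calloc(n, size)
        #define kfree(ptr)              free(ptr)
        #define DRM_ERROR(fmt, ...)     fprintf(stderr, "vc4: " fmt, ##__VA_ARGS__)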

    I haven't done actual validation to make sure that the texture samples don't access outside of the bounds of the texture yet (though I at least have the infrastructure necessary now), just like I haven't done that validation for so many other pointers (vertex fetch, tile load/stores, etc.). I also need to copy the code back out to the kernel driver, and it really deserves some cleanups to add sanity to the many different addresses involved (unvalidated vaddr, validated vaddr, and validated paddr of the data for each of render, bin, shader recs, uniforms). But hopefully once I do that, I can soon start bringing up glamor on the Pi (though I've got some major issues with tile allocation BO memory management before anything's stable on the Pi).


    There has been some confusion on my previous post with Bob Beck of LibreSSL on whether I would advocate for using a LibreSSL shared object as a drop-in replacement for an OpenSSL shared object. Let me state this here, boldly: you should never, ever, for any reason, use shared objects from different major/minor OpenSSL versions or implementations (such as LibreSSL) as a drop-in replacement for one another.

    The reason is, obviously, that the ABI of these libraries differs, sometimes subtly enough that they may actually load and run, but then perform abysmally insecure operations, as their data structures will have changed, and now instead of reading your random-generated key, you may be reading the master private key. And in general, for other libraries you may even be calling the wrong set of functions, especially for those written in C++, where the vtable content may be rearranged across versions.

    What I was discussing in the previous post was the fact that lots of proprietary software packages, by bundling a version of Curl that depends on the RAND_egd() function, will require either unbundling it, or keeping along a copy of OpenSSL to use for runtime linking. And I think that is a problem that people need to consider now rather than later for a very simple reason.

    Even if LibreSSL (or any other reimplementation, for what matters) takes foot as the default implementation for all Linux (and non-Linux) distributions, you'll never be able to fully forget about OpenSSL: not only if you have proprietary software that you maintain, but also because a huge amount of software (and especially hardware) out there will not be able to update easily. And the fact that LibreSSL is throwing away so much of the OpenSSL clutter also means that it'll be more difficult to backport fixes — while at the same time I think that a good chunk of the black hattery will focus on OpenSSL, especially if it feels "abandoned", while most of the users will still be using it somehow.

    But putting aside the problem of the direct drop-in incompatibilities, there is one more problem that people need to understand, especially Gentoo users, and most other systems that do not completely rebuild their package set when replacing a library like this. The problem is what I would call "ABI leakage".

    Let's say you have a general libfoo that uses libssl; it uses a subset of the API that works with both OpenSSL and LibreSSL. Now you have a bar program that uses libfoo. If the library is written properly, then it'll treat all the data structures coming from libssl as opaque, providing no way for bar to call into libssl without depending on the SSL API du jour (and thus putting a direct dependency on libssl for the executable). But it's very well possible that libfoo is not well-written and actually treats the libssl API as transparent. For instance a common mistake is to use one of the SSL data structures inline (rather than as a pointer) in one of its own public structures.
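    To make the "ABI leakage" concrete, here is a hypothetical C sketch (libfoo and foo_conn are made-up names, not any real library): embedding the libssl type by value bakes libssl's layout into libfoo's own ABI, while keeping it behind a pointer does not.

        /* Hypothetical illustration of ABI leakage; libfoo/foo_conn are made up. */
        #include <openssl/ssl.h>

        /*
         * BAD: the SSL object is embedded by value, so the size and the field
         * offsets of struct foo_conn depend on libssl's internal layout.  Any
         * program that allocates a struct foo_conn itself (on the stack, or
         * with malloc(sizeof(struct foo_conn))) now silently depends on the
         * libssl ABI as well.  Note this only compiles against headers that
         * still expose struct ssl_st; with opaque headers it fails, which is
         * exactly the point.
         */
        struct foo_conn {
            int fd;
            SSL ssl;        /* libssl's layout leaks into libfoo's ABI */
        };

        /*
         * BETTER: only a pointer is stored, so the layout of struct foo_conn
         * stays stable across libssl versions; ideally foo_conn itself is
         * also kept opaque behind foo_conn_new()/foo_conn_free() so that bar
         * never sizes or pokes at it directly.
         */
        struct foo_conn_safe {
            int  fd;
            SSL *ssl;       /* opaque handle; only libfoo needs the layout */
        };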

    This situation would be barely fine, as long as the data types for libfoo are also completely opaque, as then it's only the code for libfoo that relies on the structures, and since you're rebuilding it anyway (as libssl is not ABI-compatible), you solve your problem. But if we keep assuming a worst-case scenario, then you have bar actually dealing with the data structures, for instance by allocating a sized buffer itself, rather than calling into a proper allocation function from libfoo. And there you have a problem.

    Because now the ABI of libfoo is not directly defined by its own code, but also by whichever ABI libssl has! It's a similar problem as the symbol table used as an ABI proxy: while your software will load and run (for a while), you're really using a different ABI, as libfoo almost certainly does not change its soname when it's rebuilt against a newer version of libssl. And that can easily cause crashes and worse (see the note above about dropping in LibreSSL as a replacement for OpenSSL).

    Now honestly none of this is specific to LibreSSL. The same is true if you were to try using OpenSSL 1.0 shared objects for software built against OpenSSL 0.9 — which is why I cringed any time I heard people suggesting the use of a symlink at the time, and it seems like people are giving the same suicidal suggestion now with OpenSSL, according to Bob.

    So once again, don't expect binary-compatibility across different versions of OpenSSL, LibreSSL, or any other implementation of the same API, unless they explicitly aim for that (and LibreSSL definitely doesn't!)



    Heads up, bargain shoppers: Financial institutions across the country report that they are tracking what appears to be a series of credit card breaches involving Goodwill locations nationwide. For its part, Goodwill Industries International Inc. says it is working with the U.S. Secret Service on an investigation into these reports.

    Headquartered in Rockville, Md., Goodwill Industries International, Inc. is a network of 165 independent agencies in the United States and Canada with a presence in 14 other countries. The organizations sell donated clothing and household items, and use the proceeds to fund job training programs, employment placement services and other community-based initiatives.

    According to sources in the financial industry, multiple locations of Goodwill Industries stores have been identified as a likely point of compromise for an unknown number of credit and debit cards.

    In a statement sent to KrebsOnSecurity, Goodwill Industries said it first learned about a possible incident last Friday, July 18. The organization said it has not yet confirmed a breach, but that it is working with federal authorities on an investigation into the matter.

    “Goodwill Industries International was contacted last Friday afternoon by a payment card industry fraud investigative unit and federal authorities informing us that select U.S. store locations may have been the victims of possible theft of payment card numbers,” the company wrote in an email.

    “Investigators are currently reviewing available information,” the statement continued. “At this point, no breach has been confirmed but an investigation is underway. Goodwills across the country take the data of consumers seriously and their community well-being is our number one concern. Goodwill Industries International is working with industry contacts and the federal authorities on the investigation. We will remain appraised of the situation and will work proactively with any individual local Goodwill involved taking appropriate actions if a data compromise is uncovered.”

    The U.S. Secret Service did not respond to requests for comment.

    It remains unclear how many Goodwill locations may have been impacted, but sources say they have traced a pattern of fraud on cards that were all previously used at Goodwill stores across at least 21 states, including Arkansas, California, Colorado, Florida, Georgia, Iowa, Illinois, Louisiana, Maryland, Minnesota, Mississippi, Missouri, New Jersey, Ohio, Oklahoma, Pennsylvania, South Carolina, Texas, Virginia, Washington and Wisconsin.

    It is also not known at this time how long ago this apparent breach may have begun, but those same financial industry sources say the breach could extend back to the middle of 2013.

    Financial industry sources said the affected cards all appear to have been used at Goodwill stores, but that the fraudulent charges on those cards occurred at non-Goodwill stores, such as big box retailers and supermarket chains. This is consistent with activity seen in the wake of other large data breaches involving compromised credit and debit cards, including the break-ins at Target, Neiman Marcus, Michaels, Sally Beauty, and P.F. Chang’s.



    Part of testing this receive side scaling work is designing a set of APIs that allow for some kind of affinity awareness. It's not easy - the general case is difficult and highly varying. But something has to be tested! So, where should it begin?

    The main tricky part of this is the difference between incoming, outgoing and listening sockets.

    For incoming traffic, the NIC has already calculated the RSS hash value and there's already a map between RSS hash and destination CPU. Well, destination queue to be much more precise; then there's a CPU for that queue.

    For outgoing traffic, the thread(s) in question can be scheduled on any CPU core and as you have more cores, it's increasingly unlikely to be the right one. In FreeBSD, the default is to direct-dispatch transmit-related socket and protocol work in the thread that started it, save a handful of places like TCP timers. Once the driver's if_transmit() method is called to transmit a frame it can check the mbuf to see what the flowid is and map that to a destination transmit queue. Before RSS, that's typically done to keep packets vaguely in some semblance of in-order behaviour - ie, for a given traffic flow between two endpoints (say, IP, or TCP, or UDP) the packets should be transmitted in-order. It wasn't really done for CPU affinity reasons.
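    As a rough illustration of that flowid-to-queue mapping (a sketch using a made-up foo(4) driver, not code from any real driver), an if_transmit() method might do something like this:

        #include <sys/param.h>
        #include <sys/mbuf.h>
        #include <net/if.h>
        #include <net/if_var.h>

        struct foo_txq;                         /* hypothetical per-queue state */
        static int foo_txq_enqueue(struct foo_txq *, struct mbuf *);

        struct foo_softc {                      /* hypothetical driver softc */
            int              num_tx_queues;
            struct foo_txq  *txq;
        };

        static int
        foo_if_transmit(struct ifnet *ifp, struct mbuf *m)
        {
            struct foo_softc *sc = ifp->if_softc;
            u_int qidx;

            /*
             * A real driver would first check that the mbuf carries a valid
             * hash/flowid; here we simply map it to a queue so that all the
             * packets of one flow land on the same queue and stay in order.
             */
            qidx = m->m_pkthdr.flowid % sc->num_tx_queues;

            return (foo_txq_enqueue(&sc->txq[qidx], m));
        }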

    Before RSS, there was no real consistency with how drivers hashed traffic upon receive, nor any rules on how it should select an outbound transmit queue for a given buffer. Most multi-queue drivers got it "mostly right". They definitely didn't try to make any CPU affinity choices - it was all done to preserve the in-order behaviour of traffic flows.

    For an incoming socket, all the information about the destination CPU can be calculated from the RSS hash provided during frame reception. So, for TCP, the RSS hash for the received ACK during the three-way handshake goes into the inpcb entry. For UDP it's not so simple (and the inpcb doesn't get a hash entry for UDP - I'll explain why below.)

    For an outgoing socket, all the information about the eventual destination CPU isn't necessarily available. If the application knows the source/destination IP and source/destination port then it (or the kernel) can calculate the RSS hash that the hardware would calculate upon frame reception and use that to populate the inpcb. However this isn't typically known - frequently the source IP and port won't be explicitly defined and it'll be up to the kernel to choose them for the application. So, during socket creation, the destination CPU can't be known.

    So to make it simple (and to make it simple for me to ensure the driver and protocol stack parts are working right) my focus has been on incoming sockets and incoming packets, rather than trying to handle outgoing sockets. I can handle outbound sockets easily enough - I just need to do a software hash calculation once all of the required information is available (ie, the source IP and port is selected) and populate the inpcb with that particular value. But I decided to not have to try and debug that at the same time as I debugged the driver side and the protocol stack side, so it's a "later" task.
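    For reference, "do a software hash calculation" means computing the same Toeplitz hash the NIC computes, over the source/destination addresses and ports, with the same key the NIC was programmed with. A minimal illustrative implementation of the Toeplitz function (not the kernel's actual code) looks like this:

        #include <stdint.h>
        #include <stddef.h>

        /*
         * Minimal Toeplitz hash as used by RSS.  "key" must be at least
         * datalen + 4 bytes long (the standard RSS key is 40 bytes); for
         * TCP over IPv4, "data" is the concatenation of source address,
         * destination address, source port and destination port in network
         * byte order (12 bytes).  Illustrative only.
         */
        static uint32_t
        toeplitz_hash(const uint8_t *key, const uint8_t *data, size_t datalen)
        {
            uint32_t hash = 0;
            /* Current 32-bit window into the key. */
            uint32_t v = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
                ((uint32_t)key[2] << 8) | key[3];
            size_t i;
            int b;

            for (i = 0; i < datalen; i++) {
                for (b = 7; b >= 0; b--) {
                    if (data[i] & (1u << b))
                        hash ^= v;
                    /* Slide the key window left by one bit. */
                    v <<= 1;
                    if (key[i + 4] & (1u << b))
                        v |= 1;
                }
            }
            return (hash);
        }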

    For TCP, traffic for a given connection will use the same source/destination IP and source/destination port values. So for a given socket, it'll always hash to the same value. However, for UDP, it's quite possible to get UDP traffic from a variety of different source IP/ports and respond from a variety of different source/IP ports. This means that the RSS hash value that we can store in the inpcb isn't at all guaranteed to be the same for all subsequent socket writes.

    Ok, so given all of that above information, how exactly is this supposed to work?

    Well, the slightly more interesting and pressing problem is how to break out incoming requests/packets to multiple receive threads. In traditional UNIX socket setups, there are a couple of common design patterns for farming off incoming requests to multiple worker threads:

    • There's one thread that just does accept() (for TCP) or recv() (for UDP) and it then farms off new connections to userland worker threads; or
    • There are multiple userland worker threads which all wait on a single socket for accept() or recv() - and hope that the OS will only wake up one thread to hand work to.
    It turns out that the OS may wake up one thread at a time for accept() or recv(), but then the userland threads will sit in a loop trying to accept connections / packets - and you tend to find they get woken up a lot only to discover that another worker thread that was already running stole the workload. Oops.

    I decided this wasn't really acceptable for the RSS work. I needed a way to redirect traffic to a thread that's also pinned to the same CPU as the receive RSS bucket. I decided the cheapest way would be to allow multiple PCB entries for the same socket details (eg, multiple TCP sockets listening on *:80). Since the PCBGROUPS code in this instance has one PCB hash per RSS bucket, all I had to do was to teach the stack that wildcard listen PCB entries (eg, *:80) could also exist in each PCB hash bucket and to use those in preference to the global PCB hash.

    The idea behind this decision is pretty simple - Robert Watson already did all this great work in setting up and debugging PCBGROUPS and then made the RSS work leverage that. All I'd have to do is to have one userland thread in each RSS bucket and have the listen socket for that thread be in the RSS bucket. Then any incoming packet would first check the PCBGROUP that matched the RSS bucket indicated by the RSS hash from the hardware - and it'd find the "right" PCB entry in the "right" PCBGROUP PCB hash table for the "right" RSS bucket.

    That's what I did for both TCP and UDP.

    So the programming model is thus:

    • First, query the RSS sysctl (net.inet.rss) for the RSS configuration - this gives the number of RSS buckets and the RSS bucket -> CPU mapping.
    • Then create one worker thread per RSS bucket..
    • .. and pin each thread to the indicated CPU.
    • Next, each worker thread creates one listen socket..
    • .. sets the IP_BINDANY or IP6_BINDANY option to indicate that there'll be multiple RSS entries bound to the given listen details (eg, binding to *:80);
    • .. then IP_RSS_LISTEN_BUCKET to set which RSS bucket the incoming socket should live in;
    • Then for UDP - call bind()
    • Or for TCP - call bind(), then call listen()
    Each worker thread will then receive TCP connections / UDP frames that are local to that CPU. Writing data out the TCP socket will also stay local to that CPU. Writing UDP frames out doesn't - and I'm about to cover that.
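    Putting the steps above together, here is a rough sketch of the per-bucket worker model for a TCP listener. It is illustrative only: IP_BINDANY and IP_RSS_LISTEN_BUCKET are the socket options named above, but the sysctl leaf name, the option level, the assumption that bucket N maps to CPU N and the complete lack of error handling are all mine.

        #include <sys/types.h>
        #include <sys/socket.h>
        #include <sys/sysctl.h>
        #include <sys/cpuset.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <pthread.h>
        #include <pthread_np.h>
        #include <stdint.h>
        #include <string.h>
        #include <unistd.h>

        static void *
        worker(void *arg)
        {
            int bucket = (int)(intptr_t)arg;
            int fd, on = 1;
            struct sockaddr_in sin;

            fd = socket(AF_INET, SOCK_STREAM, 0);

            /* Allow several listen sockets on the same *:80 details ... */
            setsockopt(fd, IPPROTO_IP, IP_BINDANY, &on, sizeof(on));
            /* ... and tell the stack which RSS bucket this one serves. */
            setsockopt(fd, IPPROTO_IP, IP_RSS_LISTEN_BUCKET, &bucket, sizeof(bucket));

            memset(&sin, 0, sizeof(sin));
            sin.sin_family = AF_INET;
            sin.sin_len = sizeof(sin);
            sin.sin_port = htons(80);
            sin.sin_addr.s_addr = htonl(INADDR_ANY);
            bind(fd, (struct sockaddr *)&sin, sizeof(sin));
            listen(fd, -1);

            for (;;) {
                int cfd = accept(fd, NULL, NULL);

                /* ... service the connection; the work stays on this CPU ... */
                close(cfd);
            }
        }

        int
        main(void)
        {
            int i, nbuckets;
            size_t len = sizeof(nbuckets);

            /* Hypothetical leaf name; see the net.inet.rss tree for the real layout. */
            if (sysctlbyname("net.inet.rss.buckets", &nbuckets, &len, NULL, 0) != 0)
                nbuckets = 1;

            for (i = 0; i < nbuckets; i++) {
                pthread_t tid;
                pthread_attr_t attr;
                cpuset_t cpus;

                pthread_attr_init(&attr);
                CPU_ZERO(&cpus);
                CPU_SET(i, &cpus);      /* assumes bucket i maps to CPU i */
                pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);
                pthread_create(&tid, &attr, worker, (void *)(intptr_t)i);
            }
            pause();
            return (0);
        }

    A UDP worker would look the same up to the socket options, but would create a SOCK_DGRAM socket and stop after bind(), as in the list above.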

    Yes, it's annoying because now you're not just able to choose an IO model that's convenient for your application / coding style. Oops.

    Ok, so what's up with UDP?

    The problem with UDP is that outbound responses may be to an arbitrary destination setup and thus may actually be considered "local" to another CPU. Most common services don't do this - they'll send the UDP response to the same remote IP and port that it was sent from.

    My plan for UDP (and TCP in some instances, see below!) is four-fold:

    • When receiving UDP frames, optionally mark them with RSS hash and flowid information.
    • When transmitting UDP frames, allow userspace to inform the kernel about a pre-calculated RSS hash / flow information.
    • For the fully-connected setup (ie, where a single socket is connect()ed to a given UDP remote IP:port and frame exchange only occurs between the fixed IP and port details) - cache the RSS flow information in the inpcb;
    • .. and for all other situations (if it's not connected, if there's no hint from userland, if it's going to a destination that isn't in the inpcb) - just do a software hash calculation on the outgoing details.
    I mostly have the first two UDP options implemented (ie, where userland caches the information to re-use when transmitting the response) and I'll commit them to FreeBSD soon. The other two options are the "correct" way to do the default methods, but they'll take some time to get right.
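    For the first two items, the userland side would presumably look something like the sketch below: enable delivery of the flow information with a socket option, then pull it out of the control messages returned by recvmsg() and cache it for the reply. Since that code hadn't been committed at the time of writing, the IP_RECVFLOWID / IP_FLOWID names (and the placeholder values) are assumptions - treat them as stand-ins for whatever the final interface ends up being.

    /*
     * Sketch only: receive a UDP datagram together with its RSS flow id.
     * IP_RECVFLOWID / IP_FLOWID are assumed names and values for the
     * not-yet-committed options discussed above; check netinet/in.h.
     */
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #ifndef IP_FLOWID
    #define IP_FLOWID       90      /* placeholder values, see above */
    #define IP_RECVFLOWID   93
    #endif

    void
    enable_flowid(int s)
    {
        int one = 1;

        /* Ask the stack to attach flow id info to each received datagram. */
        setsockopt(s, IPPROTO_IP, IP_RECVFLOWID, &one, sizeof(one));
    }

    void
    read_one(int s)
    {
        char payload[1500];
        char cbuf[CMSG_SPACE(sizeof(uint32_t))];
        struct iovec iov = { .iov_base = payload, .iov_len = sizeof(payload) };
        struct msghdr msg;
        struct cmsghdr *c;
        uint32_t flowid = 0;
        int have_flowid = 0;
        ssize_t n;

        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cbuf;
        msg.msg_controllen = sizeof(cbuf);

        if ((n = recvmsg(s, &msg, 0)) < 0)
            return;

        /* Walk the ancillary data looking for the flow id. */
        for (c = CMSG_FIRSTHDR(&msg); c != NULL; c = CMSG_NXTHDR(&msg, c)) {
            if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_FLOWID) {
                memcpy(&flowid, CMSG_DATA(c), sizeof(flowid));
                have_flowid = 1;
            }
        }
        printf("got %zd bytes, %s flow id 0x%08x\n", n,
            have_flowid ? "with" : "without", (unsigned int)flowid);
        /* The flow id would be cached here and handed back to the kernel
         * (eg via a matching send-side control message) when replying. */
    }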

    Ok, so does it work?

    I don't have graphs. Mostly because I'm slack. I'll do up some before I present this - likely at BSDCan 2015.

    My testing has been done with Intel 1G and 10G NICs on desktop Ivy Bridge 4-core hardware. So yes, server class hardware will behave better.

    For incoming TCP workloads (eg a webserver), there's no longer any lock contention between CPUs in the NIC driver or network stack. The main lock contention between CPUs is now in the VM and allocator paths. If you're doing disk IO then that'll also show up.

    For incoming UDP workloads, I've seen it scale linearly on 10G NICs (ixgbe(4)) from one to four cores. This is with unfragmented, 510-byte datagrams.

    Ie, 1 core reception (ie, all flows to one core) was ~ 250,000 pps into userland with just straight UDP reception and no flow/hash information via recvmsg(); 135,000 pps into userland with UDP reception and flow/hash information via recvmsg().

    4 core reception was ~ 1.1 million pps into userland, roughly ~ 255,000 pps per core. There's no contention between CPU cores at all.

    Unfortunately what I was sending was markedly different. The driver quite happily received 1.1 million frames on one queue and up to 2.1 million when all four queues were busy. So there's definitely room for improvement.

    Now, there is lock contention - it's just not between CPU cores. Now that I'm getting past the between-core contention, we see the within-core contention.

    For TCP HTTP request reception and bulk response transmission, most of the contention I'm currently seeing is between the driver transmit paths. So, the following occurs:

    • TCP stack writes some data out;
    • NIC if_transmit() method is called;
    • It tries to grab the queue lock and succeeds;
    • It then appends the frame to the buf_ring and schedules a transmit out the NIC. This bit is fine.

    But then, whilst the transmit lock is still held (because the driver is taking frames from the buf_ring to push into the NIC TX DMA queue):
    • The NIC queue interrupt fires, scheduling the software interrupt thread;
    • This pre-empts the existing running transmit thread;
    • The NIC code tries to grab the transmit lock to handle completed transmissions;
    • .. and it fails, because the code it preempted holds the transmit lock already.
    So there's some context switching and thrashing going on there which needs to be addressed.

    Ok, what about UDP? It turns out there's some lock contention with the socket receive buffer.

    The soreceive_dgram() routine grabs the socket receive buffer lock (SOCKBUF_LOCK()) to see if there's anything to return. If not, and if it can sleep, it'll call sbwait(), which releases the lock and msleep()s waiting for the protocol stack to indicate that something has been received. However, since we're receiving packets at such a high rate, the receive protocol path contends on the socket buffer lock held by the userland code trying to receive a datagram: it pre-empts the user thread, tries to grab the lock, fails - and then goes to sleep until the userland code is finished with the lock. soreceive_dgram() doesn't hold the lock for very long, but I do see upwards of a million context switches a second.

    To wrap up - I'm pleased with how things are going. I've found and fixed some issues with the igb(4) and ixgbe(4) drivers that were partly my fault and the traffic is now quite happily and correctly being processed in parallel. There are issues with scaling within a core that are now being exposed and I'm glad to say I'm going to ignore them for now and focus on wrapping up what I've started.

    There's a bunch more to talk about and I'm going to do it in follow-up posts.
    • what I'm going to do about UDP transmit in more detail;
    • what about creating outbound connections and how applications can be structured to handle this;
    • handling IP fragments and rehashing packets to be mostly in-order - and what happens when we can't guarantee ordering with the hardware hashing UDP frames to a 4-tuple;
    • CPU hash rebalancing - what if a specific bucket gets too much CPU load for some reason;
    • randomly creating a Toeplitz RSS hash key at bootup and how that should be verified;
    • multi-socket CPU and IO domain awareness;
    • .. and whatever else I'm going to stumble across whilst I'm slowly fleshing this stuff out.
    I hope to get the UDP transmit side of things completed in the next couple of weeks so I can teach memcached about TCP and UDP RSS. After that, who knows!



    It was over five years ago that I ranted about the bundling of libraries and what that means for vulnerabilities found in those libraries. The world has, since, not really listened. RubyGems still keeps insisting that "vendoring" gems is good, Go explicitly didn't implement a concept of shared libraries, and let's not even talk about Docker or OSv and their absolutism in static linking and, essentially, bundling the whole operating system.

    It should have been obvious how this can be a problem when Heartbleed came out: bundled copies of OpenSSL would have needed separate updates from the system libraries. I guess lots of enterprise users of such software were saved only by the fact that most of the bundlers ended up using older versions of OpenSSL where heartbeat was not implemented at all.

    Now that we're talking about replacing the OpenSSL libraries with those coming from a different project, we're going to be hit by both edges of the proprietary software sword: bundling and ABI compatibility, which will make things really interesting for everybody.

    You may have seen the (short, incomplete) list of RAND_egd() users which I posted yesterday. While the tinderbox from which I took it is out of date and needs cleaning, it is a good starting point for figuring out the trends, and as somebody has already noted, the bundling is indeed strong.

    Software that bundled Curl, or even Python, but then relied on the system copy of OpenSSL, will now be looking for RAND_egd() and will thus fail. You could unbundle these libraries and use a proper, patched copy of Curl from the system, where the use of RAND_egd() has been removed - but then again, that's what I've been advocating for years now. With caveats, in the case of Curl.

    But if the use of RAND_egd() is actually coming from the proprietary bits themselves, you're stuck and can't use the new library: you either need to keep around an old copy of OpenSSL (which may be buggy and expose even more vulnerabilities), or you need a shim library that only provides ABI compatibility on top of the new LibreSSL-provided library. I'm still not sure why this particular trick is not employed more often when the changes to a library are only at the interface level and the same functionality is still implemented.
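    To illustrate the shim idea: something as small as the following would let binaries that merely need the RAND_egd symbols to resolve keep loading against a library that has dropped them. Whether quietly reporting "no entropy fetched" is acceptable depends entirely on what the caller does with the result, so this is a sketch of the trick, not a recommendation.

    /*
     * rand_egd_shim.c - minimal ABI-compatibility sketch providing the
     * RAND_egd() family that LibreSSL drops.  Returning -1 means "no
     * entropy was read from the EGD socket"; callers are expected to fall
     * back to the library's own (already seeded) RNG.
     *
     * Example build:  cc -shared -fPIC -o librand_egd_shim.so rand_egd_shim.c
     */

    int
    RAND_egd(const char *path)
    {
        (void)path;
        return -1;      /* pretend the EGD socket yielded nothing */
    }

    int
    RAND_egd_bytes(const char *path, int bytes)
    {
        (void)path;
        (void)bytes;
        return -1;
    }

    int
    RAND_query_egd_bytes(const char *path, unsigned char *buf, int bytes)
    {
        (void)path;
        (void)buf;
        (void)bytes;
        return -1;
    }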

    Now the good news is that, from the list that I produced, at least, the egd functions never seemed to be popular among proprietary developers. This is expected, as egd was mostly a way to provide /dev/random semantics on non-Linux systems, while the proprietary software that we deal with, at least in the Linux world, can just rely on the devices themselves existing. So the only problems have to do with unbundling (or replacing) Curl and possibly the Python SSL module. Doing so is not obvious though, as I see from the list that there are at least two copies of libcurl.so.3, which is the older ABI for Curl - although admittedly one is from the scratchbox SDKs, which could just as easily be replaced with something less hacky.

    Anyway, my current task is to clean up the tinderbox so that it's in a working state, after which I plan to do a full build of all the reverse dependencies of OpenSSL. It's very possible that there are more entries that should be in the list, since it was built with USE=gnutls set globally to test GnuTLS 3.0 when it came out.



    My last post spawned enough feedback that I thought I would dump some notes here for those interested in building a chroot on FreeBSD that allows you to test and prototype architectures, e.g. ARMv6 on AMD64.

    The FreeBSD buildsys has many targets used for many things, the two we care about here are buildworld and distribution.  We will also be changing the output architecture through the use of TARGET and TARGET_ARCH command line variables.  I’ll assume csh is your shell here, just for simplicity.  You’ll need 10stable or 11current to do this, as it requires the binary activator via binmiscctl(8) which has not appeared in a release version of FreeBSD yet.

    Check out the FreeBSD source tree somewhere - your home directory will be fine - and start a buildworld.  This will take a while, so get a cup of tea and relax.

    make -s -j <number of cpus on your machine> buildworld TARGET=mips TARGET_ARCH=mips64 MAKEOBJDIRPREFIX=/var/tmp

    Some valid combinations of TARGET/TARGET_ARCH are:

    mips:mips
    mips:mips64
    arm:armv6
    sparc64:sparc64
    powerpc:powerpc
    powerpc:powerpc64
    i386:i386
    amd64:amd64

    Once this is done, you have an installable tree in /var/tmp.  You need to be root for the next few steps, so su now and execute them:

    make -s installworld TARGET=mips TARGET_ARCH=mips64 MAKEOBJDIRPREFIX=/var/tmp DESTDIR=/opt/test

    DESTDIR is where you intend to place the installed FreeBSD system.  I chose /opt/test here only because I wanted to be FAR away from anything in my running system.  Just to be clear: without DESTDIR set, this will crush and destroy your host computer.

    Next, there are some tweaks that have to be done by the buildsys, so run this command as root:

    make -s distribution TARGET=mips TARGET_ARCH=mips64 MAKEOBJDIRPREFIX=/var/tmp DESTDIR=/opt/test

    Now we need to install the emulator tools (QEMU) to allow us to use the chroot on our system.  I suggest using emulators/qemu-user-static for this as Juergen Lock has set it up for exactly this purpose.  It will install only the tools you need here.

    Once that is installed, via pkg or ports, set up your binary activator module for the architecture of your chroot.  Use the options listed on the QEMU user mode wiki page for the architecture you want.  I know the arguments are not straightforward, but there should be examples for the target that you are looking for.

    For this mips/mips64 example:

    binmiscctl add mips64elf --interpreter "/usr/local/bin/qemu-mips64-static" \
        --magic "\x7f\x45\x4c\x46\x02\x02\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x08" \
        --mask "\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff" \
        --size 20 --set-enabled

    Copy the qemu binary that you set up in this step *into* the chroot environment:

    mkdir -p /opt/test/usr/local/bin

    cp /usr/local/bin/qemu-mips64-static /opt/test/usr/local/bin/

    Mount devfs into the chroot:

    mount -t devfs devfs /opt/test/dev

    Want to try building ports in your chroot?  Mount the ports tree in via nullfs:

    mkdir /opt/test/usr/ports

    mount -t nullfs /usr/ports /opt/test/usr/ports

    And now, through the QEMU and FreeBSD, you can simply chroot into the environment:

    chroot /opt/test

    Hopefully, you can now “do” things as though you were running on a MIPS64 or whatever architecture machine you have as a target.

    arm:armv6, mips:mips and mips:mips64 are working at about 80-90% functionality.  powerpc:powerpc64 and powerpc:powerpc are still a work in progress and need more attention.  sparc64:sparc64 immediately aborts and probably needs someone with an eye familiar with the architecture to give QEMU a look.  If you are interested in further development of the qemu-user targets, please see my github repo and clone away.

    If you are looking to see what needs to be done, Stacey Son has kept an excellent log of open items on the FreeBSD Wiki.



    Erwin Lansing just posted a summary of the DNS session at the FreeBSD DevSummit that was held in conjunction with BSDCan 2014 in May. It gives a good overview of the current state of affairs, including known bugs and plans for the future.

    I’ve been working on some of these issues recently (in between $dayjob and other projects). I fixed two issues in the last 48 hours, and am working on two more.

    Reverse lookups in private networks

    Fixed in 11.

    In its default configuration, Unbound 1.4.22 does not allow reverse lookups for private addresses (RFC 1918 and the like). NLNet backported a patch from the development version of Unbound which adds a configuration option, unblock-lan-zones, which disables this filtering. But that alone is not enough, because the reverse zones are actually signed (EDIT: the problem is more subtle than that, details in comments); Unbound will attempt to validate the reply, and will reject it because the zone is supposed to be empty. Thus, for reverse lookups to work, the reverse zones for all private address ranges must be declared as insecure:

    server:
        # Unblock reverse lookups for LAN addresses
        unblock-lan-zones: yes
        domain-insecure:      10.in-addr.arpa.
        domain-insecure:     127.in-addr.arpa.
        domain-insecure: 254.169.in-addr.arpa.
        domain-insecure:  16.172.in-addr.arpa.
        # ...
        domain-insecure:  31.172.in-addr.arpa.
        domain-insecure: 168.192.in-addr.arpa.
        domain-insecure: 8.e.ip6.arpa.
        domain-insecure: 9.e.ip6.arpa.
        domain-insecure: a.e.ip6.arpa.
        domain-insecure: b.e.ip6.arpa.
        domain-insecure: d.f.ip6.arpa.
    

    FreeBSD 11 now has both the unblock-lan-zones patch and an updated local-unbound-setup script which sets up the reverse zones. To take advantage of this, simply run the following command to regenerate your configuration:

    # service local_unbound setup
    

    This feature will be available in FreeBSD 10.1.

    Building libunbound writes to /usr/src (#190739)

    Fixed in 11.

    The configuration lexer and parser were included in the source tree instead of being generated at build time. Under certain circumstances, make(1) would decide that they needed to be regenerated. At best, this inserted spurious changes into the source tree; at worst, it broke the build.

    Part of the reason for this is that Unbound uses preprocessor macros to place the code generated by lex(1) and yacc(1) in its own separate namespace. FreeBSD’s lex(1) is actually Flex, which has a command-line option to achieve the same effect in a much simpler manner, but to take advantage of this, the lexer needed to be cleaned up a bit.

    Allow local domain control sockets

    Work in progress

    An Unbound server can be controlled by sending commands (such as “reload your configuration file”, “flush your cache”, “give me usage statistics”) over a control socket. Currently, this can only be a TCP socket. Ilya Bakulin has developed a patch, which I am currently testing, that allows Unbound to use a local domain (aka “unix”) socket instead.

    Allow unauthenticated control sockets

    Work in progress

    If the control socket is a local domain socket instead of a TCP socket, there is no need for encryption and little need for authentication. In the local resolver case, only root on the local machine needs access, and this can be enforced by the ownership and file permissions of the socket. A second patch by Ilya Bakulin makes encryption and authentication optional so there is no need to generate and store a client certificate in order to use unbound-control(8).



    I’ve spent the last few months banging through the bits and pieces of the work that Stacey Son implemented for QEMU to allow us to more or less chroot into a foreign architecture as though it were a normal chroot.  This has opened up a lot of opportunities to bootstrap the non-x86 architectures on FreeBSD.

    Before I get started, I’d like to thank Stacey Son, Ed Maste, Juergen Lock, Peter Wemm, Justin Hibbits, Alexander Kabaev, Baptiste Daroussin and Bryan Drewery for the group effort in getting us to the point of working ARMv6, MIPS32 and MIPS64 builds.  This has been a group effort for sure.

    This will require a 10stable or 11current machine, as this uses Stacey’s binary activator patch to redirect execution of binaries through QEMU depending on the ELF header of the file.  See binmiscctl(8) for more details.

    Mechanically, this is a pretty easy setup.  You’ll need to install ports-mgmt/poudriere-devel with the qemu-user option selected.  This will pull in the qemu-user code to emulate the environment we need to get things going.

    I’ll pretend that you want an ARMv6 environment here.  This is suitable for building packages for the Raspberry Pi and BeagleBone Black.  Run this as root:

    binmiscctl add armv6 --interpreter "/usr/local/bin/qemu-arm" \
        --magic "\x7f\x45\x4c\x46\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00" \
        --mask "\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff" \
        --size 20 --set-enabled

    This magic will load the imgact_binmisc.ko kernel module.  The rest of the command line instructs the kernel to redirect execution through /usr/local/bin/qemu-arm if the ELF header of the file matches an ARMv6 signature.

    Build your poudriere jail (remember to install poudriere-devel for now as it has not been moved to stable at the time of this writing) with the following command:

    poudriere jail -c -j 11armv632 -m svn -a armv6 -v head

    Once this is done, you will be able to start a package build via poudriere bulk as you normally would:

    poudriere bulk -j 11armv632 -a

    or

    poudriere bulk -j 11armv632 -f <my_file_of_ports_to_build>

    Currently, we are running the first builds in the FreeBSD project to determine what needs the most attention first.  Hopefully, soon we’ll have something that looks like a coherent package set for non-x86 architectures.

    For more information, work in progress things and possible bugs in qemu-user code, see Stacey’s list of things at:

    https://wiki.freebsd.org/QemuUserModeHowTo

    https://wiki.freebsd.org/QemuUserModeToDo



    I was experimenting in my arm chroot, and after a gcc upgrade and emerge --depclean --ask that removed the old gcc I got the following error:

    # ls -l
    ls: error while loading shared libraries: libgcc_s.so.1: cannot open shared object file: No such file or directory

    Fortunately the newer working gcc was present, so the steps to make things work again were:

    # LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/lib/gcc/armv7a-hardfloat-linux-gnueabi/4.8.2/" gcc-config -l
     * gcc-config: Active gcc profile is invalid!

     [1] armv7a-hardfloat-linux-gnueabi-4.8.2

    # LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/lib/gcc/armv7a-hardfloat-linux-gnueabi/4.8.2/" gcc-config 1 
     * Switching native-compiler to armv7a-hardfloat-linux-gnueabi-4.8.2 ...

    Actually, my first thought was to use busybox. But the unexpected breakage during a routine gcc upgrade made me do some research, in case I can't rely on /bin/busybox being present and working.

    I highly recommend the following links for further reading:
    http://lambdaops.com/rm-rf-remains
    http://eusebeia.dyndns.org/bashcp
    http://www.reddit.com/r/linux/comments/27is0x/rm_rf_remains/ci199bk



    When I read about LibreSSL coming from the OpenBSD developers, my first impression was that it was a stunt. I still haven't drastically changed that impression. While I know at least one quite good OpenBSD developer, my impression of the group as a whole is still the same: we have different concepts of security, and their idea of "cruft" is completely out there for me. But this is a topic for some other time.

    So seeing the amount of scrutiny from others who are, like me, skeptical of leaving the OpenBSD people to their own devices is good news. It keeps them honest, as they say. But it also means that things that wouldn't otherwise be understood by people not used to Linux don't get shoved under the rug.

    These are not idle musings: I still remember (but can't find now) an article in which Theo boasted about never having used Linux. And yet he kept insisting that his operating system was clearly superior. I was honestly afraid that the fork-not-a-fork project was going to be handled the same way; I'm positively happy to have been proven wrong so far.

    I actually have been thrilled to see that there is finally movement to replace the straight access to /dev/random and /dev/urandom: Ted's patch to implement a getrandom() system call that can be made compatible with OpenBSD's own getentropy() in user space. And I'm even happier to see at least one of the OpenBSD/LibreSSL developers pitching in to help shape the interface.
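    For the curious, the compatibility being talked about is fairly mechanical: OpenBSD's getentropy(buf, len), which refuses requests above 256 bytes, can be emulated in user space on top of a getrandom() system call along the following lines. The syscall number and flags were still being discussed at the time, so this is only a sketch of the shape of such a shim.

    /*
     * Sketch: emulate OpenBSD's getentropy() on top of Linux's proposed
     * getrandom() system call.  SYS_getrandom had not been assigned when
     * this was written, so the syscall(2) invocation is an assumption.
     */
    #include <sys/syscall.h>
    #include <errno.h>
    #include <stddef.h>
    #include <unistd.h>

    int
    getentropy(void *buf, size_t buflen)
    {
        ssize_t ret;

        if (buflen > 256) {             /* getentropy() rejects large requests */
            errno = EIO;
            return -1;
        }
        do {
            /* flags == 0: block until the kernel's pool is initialised */
            ret = syscall(SYS_getrandom, buf, buflen, 0);
        } while (ret == -1 && errno == EINTR);

        return (ret == (ssize_t)buflen) ? 0 : -1;
    }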

    Dropping the egd support puzzled me for a moment, but then I realized that there is no point in using egd to feed randomness to the process: you just need to feed entropy to the kernel and let the process get it normally. I have had, unfortunately, quite a bit of experience with entropy-generating daemons, and I wonder if this might be the right time to suggest getting a new multi-source daemon out.

    So am I going to just blindly trust the OpenBSD people because "they have a good track record"? No. And to anybody who suggests that you can take lines and lines of code from someone else's crypto-related project, remove a bunch of code that you think is useless, and have an immediate result, my request is to please stop working with software altogether.

    Security Holes Copyright © Randall Munroe.

    I'm not saying that they would do it on purpose, or that they wouldn't be trying their darndest to make LibreSSL a good replacement for OpenSSL. What I'm saying is that I don't like the way, and the motives with which, the project was started. And I think that a reality check, like the one they already got, was due - and is good news.

    On my side, once the library gets a bit more mileage I'll be happy to run the tinderbox against it. For now, I'm regaining access to Excelsior after a bad kernel update, and I'll just go and search with elfgrep for which binaries use the egd functionality and need to be patched. I'll post the results on Twitter/G+ once I have them. I know it's not much, but this is what I can do.





    Baptiste Daroussin started the session with a status update on package building. All packages are now built with poudriere. The FreeBSD Foundation sponsored some large machines on which it takes around 16 hours to build a full tree. Each Wednesday at 01:00 UTC the tree is snapshotted and an incremental build is started for all supported releases, the 2 stable branches (9 and 10) and the quarterly branches for 9.x-RELEASE and 10.x-RELEASE. The catalogue is signed on a dedicated signing machine before upload. Packages can be downloaded from 4 mirrors (us-west, us-east, UK, and Russia) and feedback so far has been very positive.

    He went on to note that ports people need better coordination with src people on ABI breakage. We currently only support i386 and amd64, with future plans for ARM and a MIPS variant. Distfiles are not currently mirrored (since fixed), and while it has seen no progress, it’s still a good idea to build a pkg of the ports tree itself.

    pkg 1.3 will include a new solver, which will help 'pkg upgrade' understand that an old package needs to be replaced with a newer one, with no more need for 'pkg set' and other chicanery. Cross building ports has been added to the ports tree, but is waiting for pkg-1.3. All the dangerous operations in pkg have now been sandboxed as well.

    EOL for pkg_tools has been set for September 1st. An errata notice has gone out that adds a default pkg.conf and keys to all supported branches, and nagging delays have been added to ports.

    Quarterly branches based on a 3-month support cycle have been started on an evaluation basis. We’re still unsure about the manpower needed to maintain them. Every quarter a snapshot of the tree is created, and only security fixes, build and runtime fixes, and upgrades to pkg are allowed to be committed to it. Using the MFH tag in a commit message will automatically send an approval request to portmgr, and an mfh script in Tools/ makes it easy to do the merge.

    Experience so far has been good, with some minor issues due to insufficient testing. MFHs should only contain the above-mentioned fixes; cleanups and other improvements should be done in separate commits to HEAD only. A policy needs to be written and announced about this. Do we want to automatically merge VuXML commits, or just remove VuXML from the branch and only use the one in HEAD?

    A large number of new infrastructure changes have been introduced over the past few months, some of which require a huge migration of all ports. To speed these changes up, a new policy was set to allow some specific fixes to be committed without maintainer approval. Experience so far has been good: things actually are being fixed faster than before and not many maintainers have complained. There was agreement that the list of fixes allowed to be committed without explicit approval should be a specific whitelist published by portmgr, and not made too broad in scope.

    Erwin Lansing quickly measured the temperature of the room on changing the default protocol for fetching distfiles from MASTER_SITE_BACKUP from ftp to http. Agreement all around, and Erwin committed the change.

    Ben Kaduk gave an introduction and update on MIT’s Athena Environment with some food for thought. While currently not FreeBSD based, he would like to see it become so. Based on Debian/Ubuntu and rolled out on hundreds of machines, it now has its software split into about 150 different packages and metapackages.

    Dag-Erling Smørgrav discussed changes to how dependencies are handled, especially splitting the dependencies that are needed at install time (or staging time) from those needed at run time. This may break several things, but pkg-1.3 will come with better dependency tracking, solving part of the problem.

    Ed Maste presented the idea of “package transparency”, loosely based on Google’s Certificate Transparency. By logging certificate issuance to a log server, which can be publicly checked, domain owners can search for certificates issued for their domains, and notice when a certificate is issued without their authority. Can this model be extended to packages? Mostly useful for individually signed packages, while we currently only sign the catalogue. Can we do this with the current infrastructure?

    Stacey Son gave an update on QEMU user mode, which is now working with QEMU 2.0.0. Both static and dynamic binaries are supported, though only a handful of system calls are supported.

    Baptiste introduced the idea of having pre-/post-install scripts be a library of services, like Casper, for common actions. This reduces the ability of maintainers to perform arbitrary actions and can be sandboxed easily. This would be a huge security improvement and could also enhance performance.

    Cross building is coming along quite well and most of the tree should be buildable with a simple 'make package'. Major blockers include perl and python.

    Bryan Drewery talked about a design for a PortsCI system. The idea is that a committer can easily schedule a build, be it an exp-run, reference, QAT, or other, either via a web interface or something similar to a pull request, which can fire off a build.

    Steve Wills talked about using Jenkins for ports. The current system polls SVN for commits and batches several changes together for a build. It uses 8 bhyve VM instances, but is slow. Sean Bruno commented that there are several package building clusters right now; can they be unified? Also, how much hardware would be needed to speed up Jenkins? We could use Jenkins as a frontend for the system Bryan just talked about. Also, it should be able to integrate with Phabricator.

    Erwin opened up the floor to talk about freebsd-version(1) once more. It was introduced as a mechanism to find out the version of the userland currently running, as uname -r only represents the kernel version and would thus miss updates of the base system that do not touch the kernel. Unfortunately, freebsd-version(1) cannot really be used like this in all cases; it may work for freebsd-update, but not in general. No real solution was found this time either.

    The session ended with a discussion about packaging the base system. It’s a target for FreeBSD 11, but lots of questions are still to be answered. What granularity to use? What should be packaged into how many packages? How to handle options? Where do we put the metadata for this? How do upgrades work? How to replace shared libraries in multiuser mode? This part also included the quote of the day: “Our buildsystem is not a paragon of configurability, but a bunch of hacks that annoyed people the most.”

    Thanks to all who participated in the working group, and thanks again to DK Hostmaster for sponsoring my trip to BSDCan this year, and see you at the Ports and Packages WG meet up at EuroBSDCon in Sofia in September.



    Indexeus, a new search engine that indexes user account information acquired from more than 100 recent data breaches, has caught many in the hacker underground off-guard. That’s because the breached databases crawled by this search engine are mostly sites frequented by young ne’er-do-wells who are just getting their feet wet in the cybercrime business.

    Indexeus[dot]org

    Indexeus boasts that it has a searchable database of “over 200 million entries available to our customers.” The site allows anyone to query millions of records from some of the larger data breaches of late — including the recent break-ins at Adobe and Yahoo! – listing things like email addresses, usernames, passwords, Internet address, physical addresses, birthdays and other information that may be associated with those accounts.

    Who are Indexeus’s target customers? Denizens of hackforums[dot]net, a huge forum that is overrun by novice teenage hackers (a.k.a “script kiddies”) from around the world who are selling and buying a broad variety of services designed to help attack, track or otherwise harass people online.

    Few services are as full of irony and schadenfreude as Indexeus. You see, the majority of the 100+ databases crawled by this search engine are either from hacker forums that have been hacked, or from sites dedicated to offering so-called “booter” services — powerful servers that can be rented to launch denial-of-service attacks aimed at knocking Web sites and Web users offline.

    The brains behind Indexeus — a gaggle of young men in their mid- to late teens or early 20s — envisioned the service as a way to frighten fellow hackers into paying to have their information removed or “blacklisted” from the search engine. Those who pay “donations” of approximately $1 per record (paid in Bitcoin) can not only get their records expunged, but that price also buys insurance against having their information indexed by the search engine in the event it shows up in future database leaks.

    The team responsible for Indexeus explains the rationale for their project with the following dubious disclaimer:

    “The purpose of Indexeus is not to provide private informations about someone, but to protect them by creating awareness. Therefore we are not responsible for any misuse or malicious use of our content and service. Indexeus is not a dump. A dump is by definition a file containing logins, passwords, personal details or emails. What Indexeus provides is a single-search, data-mining search engine.”

    Such information would be very useful for those seeking to settle grudges by hijacking a rival hacker’s accounts. Unsurprisingly, a number of Hackforums users reported quickly finding many of their favorite usernames, passwords and other data on Indexeus. They began to protest against the service being marketed on Hackforums, charging that Indexeus was little more than a shakedown.

    Indeed, the search engine was even indexing user accounts stolen from witza.net, the site operated by Hackforums administrator Jesse LaBrocca and used to process payments from Hackforums users who wish to upgrade the standing of their accounts on the forum.

    WHO RUNS INDEXEUS?

    The individual who hired programmers to help him build Indexeus uses the nickname “Dubitus” on Hackforums and other forums. For the bargain price of $25 and two hours of your time on a Saturday, Dubitus also sells online instructional training on “doxing” people — working backwards from someone’s various online personas to determine their real-life name, address and other personal data.

    Dubitus claims to be a master at something he calls “Web detracing,” which is basically removing all of the links from your online personas that might allow someone to dox you. I have no idea if his training class is any good, but it wasn’t terribly difficult to find this young man in the real world.

    Dubitus offering training for “doxing” and “Web detracing.”

    Contacted via Facebook by KrebsOnSecurity, Jason Relinquo, 23, from Lisbon, Portugal, acknowledged organizing and running the search engine. He also claims his service was built merely as an educational tool.

    “I want this to grow and be a reference, and at some point be a tool useful enough to be used by law enforcement,” Relinquo said. “I wouldn’t have won the NATO Cyberdefense Competition if I didn’t have a bigger picture in my mind. Just keep that in yours.”

    Relinquo said that to address criticisms that his service was a shakedown, he recently modified the terms of service so that users don’t have to pay to have their information removed from the site. Even so, it remains unclear how users would prove that they are the rightful owner of specific records indexed by the service.

    Jason Relinquo

    “We’re going through some reforms (free blacklisting, plus subscription based searches), due some legal complications that I don’t want to escalate,” Relinquo wrote in a chat session. “If [Indexeus users] want to keep the logs and pay for the blacklist, it’s an option. We also state that in case of a minor, the removal is immediate.”

    Asked which sort of legal complications were bedeviling his project, Relinquo cited the so-called “right to be forgotten,” data protection and privacy laws in Europe that were strengthened by a May 2014 decision by the European Court of Justice in a ruling against Google. In that case, the EU’s highest court ruled that individuals have a right to request the removal of Internet search results, including their names, that are “inadequate, irrelevant or no longer relevant, or excessive.”

    I find it difficult to believe that Indexeus’s creators would be swayed by such technicalities, given that the service was set up to sell passwords to members of a forum known to be frequented by people who will use them for malicious purposes. In any case, I doubt this is the last time we will hear of a service like this. Some 822 million records were exposed in more than 2,160 separate data breach incidents last year, and there is plenty of room for competition and further specialization in the hacked-data search engine market.



    DAB+ Sailor-Plus-SA-123 Bernd Dau | 2014-07-17 16:49 UTC

    I've had the Dual DAB 12 digital radio for quite a while now.

    I use it at my tinkering bench to listen to Radio BOB!.

    The Dual manages to receive the DAB+ stations even in our basement. It has a stereo headphone output and impresses with a balanced sound from its built-in speakers. If I have one criticism of the radio, it's the high standby power consumption. That is enormous even on battery power and drains the six AA cells fairly quickly. If you want to run it on batteries, you absolutely have to switch it off with the switch on the back.

    Sometimes I could use an additional DAB+ radio around the house, so a special offer from Völkner at the beginning of January 2014 came just at the right time.

    The Sailor-Plus-SA-123 cost less than 30 euros, so I figured you can't go far wrong.

    This Sailor-Plus-SA-123_testbericht (review), however, mentions poor reception, and also offers a workaround. My verdict after more than six months of use: reception is fine; apparently the manufacturer has improved things there. The controls are okay, although ergonomically I'd give it a school grade of 4 (barely adequate). Now, after six months of use at work, at an office desk without customer contact (well, at least not physically), the volume potentiometer has already suffered badly - an estimated 10 operations per day * 21 days * 6 months = 1,260 turns, which is ridiculously poor for a potentiometer. Claim the warranty? Nah, too lazy, and that would only postpone the problem by another six months ;-) So I'll probably bring the radio to my bench at some point and see whether something can be done about it ....