Planet 2014-03-07 15:00 UTC
In a previous article titled ‘using deltas to speed up SquashFS ebuild repository updates’, the author considered the benefits of using binary deltas to update SquashFS images. The proposed method proved very efficient in terms of disk I/O, memory and CPU time use. However, the relatively large size of the deltas made network bandwidth a bottleneck.
The rough estimates done at the time indicated that this is not a major issue for a common client with a moderate-bandwidth link such as ADSL. Nevertheless, the size is an inconvenience both to clients and to mirror providers. Assuming that there is an upper bound on the disk space consumed by snapshots, the extra size reduces the number of snapshots stored on mirrors, and therefore shortens the supported update period.
The most likely cause of the excessive delta size is the complex correlation between the input and the compressed output. Changes in input files are likely to cause much larger changes in the SquashFS output, which the tested delta algorithms fail to express efficiently.
For example, in the LZ family of compression algorithms, a change in the input stream may affect the contents of the dictionary and therefore the output stream following it. In block-based compressors such as bzip2, a change in the input may shift all the following data across block boundaries. As a result, the contents of all subsequent blocks change, and with them the compressed output for each of those blocks.
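The effect is easy to demonstrate with any stream compressor. The following minimal sketch (Python with the standard zlib module, used here purely for illustration and not part of the original article) inserts a single byte near the start of an input buffer and measures how little of the compressed output survives unchanged:

import zlib

# Build a compressible input stream (~1 MiB of repetitive text).
original = b"the quick brown fox jumps over the lazy dog " * 25000

# Insert a single byte near the beginning of the stream.
modified = original[:100] + b"X" + original[100:]

old_out = zlib.compress(original, 9)
new_out = zlib.compress(modified, 9)

# Length of the common prefix of the two compressed streams.
common = next(
    (i for i, (a, b) in enumerate(zip(old_out, new_out)) if a != b),
    min(len(old_out), len(new_out)),
)

print(f"compressed size: {len(old_out)} bytes")
print(f"unchanged compressed prefix after a 1-byte insertion: {common} bytes")

On a typical run only a tiny prefix of the compressed stream is unchanged; everything after the point of modification differs, even though the inputs differ by a single byte.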
Since SquashFS splits the input into multiple blocks that are compressed separately, the scope of this issue is much smaller than in plain tarballs. Nevertheless, small changes occurring in multiple blocks can grow the delta to two to four times the size it would have if the data were not compressed. In this paper, the author explores the possibility of introducing transparent decompression into the delta generation process to reduce the delta size.
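As a rough sketch of the idea (not the actual tooling evaluated in the article), the following Python script simulates a SquashFS-like layout, with fixed-size blocks compressed independently, and compares a binary delta computed over the compressed image against one computed over the uncompressed data. It assumes the xdelta3 command-line tool is installed; the block size, data shape and edit pattern are invented for the demonstration:

import os
import random
import subprocess
import tempfile
import zlib

BLOCK = 128 * 1024  # SquashFS-style block size

def compress_image(data):
    # Mimic SquashFS: each block is compressed independently, then concatenated.
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return b"".join(zlib.compress(b, 9) for b in blocks)

def xdelta_size(old, new):
    # Encode a VCDIFF delta with the xdelta3 CLI and return the delta's size.
    with tempfile.TemporaryDirectory() as d:
        old_p, new_p, delta_p = (os.path.join(d, n) for n in ("old", "new", "delta"))
        with open(old_p, "wb") as f:
            f.write(old)
        with open(new_p, "wb") as f:
            f.write(new)
        subprocess.run(["xdelta3", "-e", "-f", "-s", old_p, new_p, delta_p], check=True)
        return os.path.getsize(delta_p)

random.seed(0)
vocabulary = [bytes(random.choices(b"abcdefgh ", k=8)) for _ in range(64)]
old_raw = b"".join(random.choice(vocabulary) for _ in range(200000))  # ~1.6 MiB, compressible

# Scatter small edits over many blocks, as a repository update would.
new_raw = bytearray(old_raw)
for off in range(0, len(new_raw), 2 * BLOCK):
    new_raw[off:off + 16] = b"X" * 16
new_raw = bytes(new_raw)

print("delta over compressed blocks:", xdelta_size(compress_image(old_raw), compress_image(new_raw)))
print("delta over uncompressed data:", xdelta_size(old_raw, new_raw))

The uncompressed delta stays close to the size of the actual edits, while the compressed one pays for every touched block; the price is that the client must recompress the patched blocks itself.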
Nationwide beauty products chain Sally Beauty appears to be the latest victim of a breach targeting their payment systems in stores, according to both sources in the banking industry and new raw data from underground cybercrime shops that traffic in stolen credit and debit cards.
On March 2, a fresh batch of 282,000 stolen credit and debit cards went on sale in a popular underground crime store. Three different banks contacted by KrebsOnSecurity made targeted purchases from this store, buying back cards they had previously issued to customers.
The banks each then sought to determine whether all of the cards they bought had been used at the same merchant over the same time period. This test, known as “common point of purchase” or CPP, is the core means by which financial institutions determine the source of a card breach.
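As a toy illustration of how CPP analysis works (a minimal sketch; the merchant names, dates and data layout are invented, and no bank's real system looks like this), the following Python snippet takes fraud-reported cards with their recent transaction histories and looks for a merchant common to all of them within a time window:

from collections import Counter
from datetime import date

# Fraud-reported cards mapped to (merchant, transaction date) histories.
# All names and dates here are invented for the illustration.
fraud_histories = {
    "card-1": [("Grocer A", date(2014, 2, 20)), ("Retailer X", date(2014, 2, 22))],
    "card-2": [("Retailer X", date(2014, 2, 21)), ("Gas Station B", date(2014, 2, 23))],
    "card-3": [("Retailer X", date(2014, 2, 24)), ("Cafe C", date(2014, 2, 25))],
}

window = (date(2014, 2, 18), date(2014, 2, 28))

# Count, per merchant, how many distinct fraud cards transacted there in the window.
hits = Counter()
for history in fraud_histories.values():
    merchants = {m for m, day in history if window[0] <= day <= window[1]}
    hits.update(merchants)

for merchant, count in hits.most_common():
    if count == len(fraud_histories):
        print(f"common point of purchase candidate: {merchant}")

A merchant that appears in every fraud-reported card's history within the window becomes a candidate source of the breach.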
Each bank independently reported that all of the cards (15 in total) had been used within the last ten days at Sally Beauty locations across the United States. Denton, Texas-based Sally Beauty maintains some 2,600 stores, and the company has stores in every U.S. state.
Asked about the banks’ findings, Sally Beauty spokeswoman Karen Fugate said the company recently detected an intrusion into its network, but that neither the company’s information technology experts nor an outside forensics firm could find evidence that customer card data had been stolen from the company’s systems.
Fugate said Sally Beauty uses an intrusion detection product called Tripwire, and that a couple of weeks ago — around Feb. 24 — Tripwire detected activity. Unlike other products that try to detect intrusions based on odd or anomalous network traffic, Tripwire fires off alerts if it detects that certain key system files have been modified.
In response to the Tripwire alert, Fugate said, the company’s information technology department “shut down all external communications” and began an investigation. That included bringing in Verizon Enterprise Solutions, a company often hired to help businesses respond to cyber intrusions.
“Since [Verizon's] involvement, which has included a deconstruction of the methods used, an examination of network traffic, all our logs and all potentially accessed servers, we found no evidence that any data got out of our stores,” Fugate said. “But our investigation continues, of course with their assistance.”
In any case, the stolen cards mapping back to Sally Beauty appear to have been pilfered quite recently, roughly matching the intrusion timeline noted by Sally Beauty: All of the banks reported fraud occurring on cards shortly after they were used at Sally Beauty, in the final week of February and early March.
The advertisement produced by the criminals who are selling these cards also holds some clues about the timing of the breach. Stolen cards fetch quite high prices when they are first put on the market, but those prices tend to fall as a greater percentage of the batch comes back as declined or canceled by the issuing banks. Thus, the “valid rate” advertised by the fraudsters selling these cards acts as an indicator of the recency of the breach, because as more banks begin noticing fraud associated with a particular merchant, many will begin proactively canceling any cards used at the suspected breached merchant.
In this batch of cards apparently associated with the Sally Beauty breach, for example, the thieves are advertising the cards as “98 percent valid,” meaning that if a buyer were to purchase 100 cards from the store, he could expect that all but two would still be valid.
In the weeks prior to December 18 — the day that the world learned Target had been breached in a similar card compromise — the thieves running this very same card shop had been advertising several huge batches of cards at 100 percent valid. In the days following Target’s admission that malicious software planted by cyberthieves at its store cash registers had siphoned 40 million credit and debit card numbers, the “valid rates” advertised for those stolen cards began falling precipitously (along with the prices of the stolen cards themselves).
The items for sale are not cards, per se, but instead data copied from the magnetic strip on the backs of credit cards. Armed with this information, thieves can simply re-encode the data onto new plastic and then use the counterfeit cards to buy high-priced items at big box stores, goods that can be quickly resold for cash (think iPads and gift cards, for example).
Interestingly, this batch of stolen card data was put up for sale three days ago by an archipelago of fraud shops that is closely affiliated with the Target breach. In my previous sleuthing, I reported that a miscreant using the nickname Rescator (and an online card shop by the same name) was among the first — if not the first — to openly sell cards stolen in the Target breach. Further tying the Target breach to Rescator, forensic investigators also found the text string “Rescator” buried in the guts of the malware that was found on Target’s systems. According to additional reporting by this author, Rescator may be affiliated with an individual in Odessa, Ukraine.
This release is dedicated to the people of all nations living in Ukraine. We are no fans of political messages in software announcements, but we also cannot remain silent when unmarked Russian troops are marching over a free country. The Trojitá project was founded in a republic formerly known as Czechoslovakia. We were "protected" by foreign aggressors twice in the 20th century — first in 1938 by Nazi Germany, and the second time in 1968 by the occupation forces of the USSR. Back in 1938, Adolf Hitler used the same rhetoric we hear today: that a national minority was oppressed. In 1968, eight people who protested against the occupation in Moscow were detained within a couple of minutes, convicted and sent to jail. In 2014, Muscovites are protesting on a bigger scale, yet we all see the cops arresting them on YouTube — including those displaying blank signs.
This is not about politics, this is about morality. What is happening today in Ukraine is a barbaric act, an occupation of an innocent country which has done nothing but stop being attracted to its more prominent eastern neighbor. No matter what one thinks about international politics and Crimean independence, this is an act which must be condemned and fiercely fought against. There isn't much we can do, so we hope that at least this symbolic act will let the Ukrainians know that the world's thoughts are with them in this dire moment. За вашу и нашу свободу, indeed!
Finally, we would like to thank Jai Luthra, Danny Rim, Benjamin Kaiser and Yazeed Zoabi, our Google Code-In students, and Stephan Platz, Karan Luthra, Tomasz Kalkosiński and Luigi Toscano, people who recently joined Trojitá, for their code contributions.
The Trojitá developers
A web page from Microsoft provides the answer:
Phew, that was lucky …
Jam and jelly maker Smucker’s last week shuttered its online store, notifying visitors that the site was being retooled because of a security breach that jeopardized customers’ credit card data. Closer examination of the attack suggests that the company was but one of several dozen firms — including at least one credit card processor — hacked last year by the same criminal gang that infiltrated some of the world’s biggest data brokers.
As Smucker’s referenced in its FAQ about the breach, the malware that hit this company’s site behaves much like a banking Trojan does on PCs, except it’s designed to steal data from Web server applications.
PC Trojans like ZeuS, for example, siphon information using two major techniques: snarfing passwords stored in the browser, and conducting “form grabbing” — capturing any data entered into a form field in the browser before it can be encrypted in the Web session and sent to whatever site the victim is visiting.
The malware that tore into the Smucker’s site behaved similarly, ripping out form data submitted by visitors — including names, addresses, phone numbers, credit card numbers and card verification codes — as customers were submitting the data during the online checkout process.
What’s interesting about this attack is that it drives home one important point about malware’s role in subverting secure connections: Whether resident on a Web server or on an end-user computer, if either endpoint is compromised, it’s ‘game over’ for the security of that Web session. With Zeus, it’s all about surveillance on the client side pre-encryption, whereas what the bad guys are doing with these Web site attacks involves sucking down customer data post- or pre-encryption (depending on whether the data was incoming or outgoing).
IN GOOD COMPANY
When a reader first directed my attention to the Smucker’s breach notice, I immediately recalled seeing the company’s name among a list of targets picked last year by a criminal hacking group that plundered sites running outdated, vulnerable versions of ColdFusion, a Web application platform made by Adobe Systems Inc.
According to multiple sources with knowledge of the attackers and their infrastructure, this is the very same gang responsible for an impressive spree of high-profile break-ins last year, including:
-An intrusion at Adobe in which the attackers stole credit card data, tens of millions of customer records, and source code for most of Adobe’s top selling software (ColdFusion, Adobe Reader/Acrobat/Photoshop);
-A break-in targeting data brokers LexisNexis, Dun & Bradstreet, and Kroll;
-A hack against the National White Collar Crime Center, a congressionally-funded non-profit organization that provides training, investigative support and research to agencies and entities involved in the prevention, investigation and prosecution of cybercrime.
TOO MANY VICTIMS
Not all of the above-mentioned victims involved the exploitation of ColdFusion vulnerabilities, but Smucker’s was included in a list of compromised online stores that I regrettably lost track of toward the end of 2013, amid a series of investigations involving breaches at much bigger victims.
As I searched through my archive of various notes and the cached Web pages associated with these attackers, I located the Smucker’s reference near the top of a control panel for a ColdFusion botnet that the attackers had built and maintained throughout last year (and apparently into 2014, as Smucker’s said it only became aware of the breach in mid-February 2014).
The botnet control panel listed dozens of other e-commerce sites as actively infected. Incredibly, some of the shops that were listed as compromised in August 2013 are still apparently infected — as evidenced by the existence of publicly-accessible backdoors on the sites. KrebsOnSecurity notified the companies that own the Web sites listed in the botnet panel (snippets of which appear above and below, in red and green), but most of them have yet to respond.
Some of the victims here — such as onetime Australian online cash exchange technocash.com.au — are no longer in business. According to this botnet panel, Technocash was infected on or before Feb. 25, 2013 (the column second from the right indicates the date that the malware on the site was last updated).
It’s unclear whether the infection of Technocash’s secure portal (https://secure.technocash.com.au) contributed to its demise, but the company seems to have had trouble on multiple fronts. Technocash closed its doors in June 2013, after being named in successive U.S. Justice Department indictments targeting the online drug bazaar Silk Road and the now-defunct virtual currency Liberty Reserve.
One particularly interesting victim that was heavily represented in the botnet panel was SecurePay, a credit card processing company based in Alpharetta, Ga. Reached via phone, the company’s chief operating officer Tom Tesmer explained that his organization — Calpiancommerce.com — had in early 2013 acquired SecurePay’s assets from Pipeline Data, a now-defunct entity that had gone bankrupt.
At the time, the hardware and software that powered Pipeline’s business was running out of a data center in New York. Tesmer said that Pipeline’s servers had indeed been running an outdated version of ColdFusion, but that the company’s online operations had been completely rebuilt in CalpianCommerce’s Atlanta data center under the SecurePay banner as of October 2013.
Tesmer told me the company was unaware of any breach affecting SecurePay’s environment. “We’re not aware of compromised cards,” Tesmer said in an email. This struck me as odd, since the thieves had clearly marked much of the data they had stolen as “SecurePay” and listed the URL “https://www.securepay.com/” as the infected page.
Following our conversation, I sent Tesmer approximately 5,000 card transaction records that thieves had apparently stolen from SecurePay’s payment gateway and stashed on a server along with data from other victimized companies (data that was ultimately shared via third parties with the FBI last fall). The data on the attacker’s botnet panel indicated the thieves were still collecting card data from SecurePay’s gateway as late as Aug. 26, 2013.
Tesmer came back and confirmed that the card data was in fact stolen from customer transactions processed through its SecurePay payment gateway, and that SecurePay has now contacted its sponsoring bank about the incident. Further, Tesmer said the compromised transactions mapped back to a Web application firewall alert triggered last summer that the company forwarded to its data center — then located in New York.
“That warning showed up while the system was not under our control, but under the control of the folks up in New York,” Tesmer said. “We fired that alert over to the network guys up there and they said they were going to block that IP address, and that was the last we heard of that.”
Turns out, SecurePay also received a visit from the FBI in September, but alas that inquiry also apparently went nowhere.
“We did get a visit from the FBI last September, and they said they had found the name SecurePay on a list of sites that they were pursuing some big hacker team about,” Tesmer said. “I didn’t associate one with the other. We had the FBI come over and have a look at that database, and they suggested we make a version of our system and set that one aside for them and create a new system, which we did. They said they would get back in touch with us about their findings on the database. But we never heard from them again.”
Tomorrow, we’ll look at Part II of this story, which examines the impact that this botnet has had on several small businesses, as well as the important and costly lessons these companies learned from their intrusions.
The first intake of portmgr-lurkers@ is complete, and it is now time to start with the second round of our -lurkers. Please join us in welcoming Alexey (danfe@) Dokuchaev and Frédéric (culot@) Culot to our ranks.
During this -lurker round, culot@ will be the shadow portmgr-secretary@, learning the finer points of the roles and responsibilities of the job.
First American Bank in Illinois is urging residents and tourists alike to avoid paying for cab rides in Chicago with credit or debit cards, warning that an ongoing data breach seems to be connected with card processing systems used by a large number of taxis in the Windy City.
In an unusually blunt and public statement sent to customers on Friday, Elk Grove, Ill.-based First American Bank said, “We are advising you not to use your First American Bank debit cards (or any other cards) in local taxis.” The message, penned by the bank’s chairman Tom Wells, continued:
“We have become aware of a data breach that occurs when a card is used in Chicago taxis, including American United, Checker, Yellow, and Blue Diamond and others that utilize Taxi Affiliation Services and Dispatch Taxi to process card transactions.”
“We have reported the breach to MasterCard® and have kept them apprised of details as they’ve developed. We have also made repeated attempts to deal directly with Banc of America Merchant Services and Bank of America, the payment processors for the taxis, to discontinue payment processing for the companies suffering this compromise until its source is discovered and remediated. These companies have not shared information about their actions and appear to not have stopped the breach.”
Bank of America, in a written statement, declined to discuss the matter, saying BofA “cannot discuss specific client matters.” Neither Taxi Affiliation Services nor Dispatch Taxi returned messages seeking comment.
Christi Childers, associate general counsel and compliance officer at First American Bank, said the bank made the decision to issue the warning about 18 days after being alerted to a pattern of fraud on cards that were all previously used at taxis in Chicago. The bank, which only issues MasterCard debit cards, has begun canceling cards used in Chicago taxis, and has already reissued 220 cards related to the fraud pattern. So far, the bank has seen more than 466 suspicious charges totaling more than $62,000 subsequent to those cards being used in Chicago taxis.
“We got calls from several customers, looked at their transactions and triangulated what was common here,” Childers said. “We’ve been complaining to Bank of America, saying, ‘Hey, do something about this.’ They said they couldn’t give us any information and that we need to talk to MasterCard.”
James Issokson, a spokesman for MasterCard, said in a brief emailed statement that MasterCard is “aware of and investigating reports of a potential breach affecting taxi cabs in Chicago.”
According to First American Bank and at least one bank based in the Midwestern United States, the fraud related to the affected taxis shows up on cards as “Chi Taxi,” and has been going on since at least early December 2013.
Avivah Litan, a fraud analyst with Gartner Inc., said the move by First American to publicize the incident suggests that many banks are feeling fatigued over the sheer volume of intrusions involving customer card data.
“I’m shocked, and it’s pretty amazing that they put that out there publicly, because everyone is usually so scared that they’re going to piss off Visa and MasterCard,” Litan said. “I’ve never seen any bank speak up like that. They’re probably just fed up.”
If banks are experiencing breach fatigue, it’s a good bet that consumers are feeling it as well. Over the weekend, I spent several hours contacting more than two dozen online merchants — including two relatively small credit card processors — whose Web stores were apparently compromised late last year by card-stealing malware. If you weren’t fatigued by the breaches yet, just wait until the end of this week. Stay tuned.
In response to rumors in the financial industry that Sears may be the latest retailer hit by hackers, the company said today it has no indications that it has been breached. Although the Sears investigation is ongoing, experts say there is a good chance the identification of Sears as a victim is a false alarm caused by a common weakness in banks’ anti-fraud systems that becomes apparent mainly in the wake of massive breaches like the one at Target late last year.
Earlier this week, rumors began flying that Sears was breached by the same sort of attack that hit Target. In December, Target disclosed that malware installed on its store cash registers compromised credit and debit card data on some 40 million transactions. This publication reached out on Wednesday to Sears to check the validity of those rumors, and earlier today Bloomberg moved a brief story saying that the U.S. Secret Service was said to be investigating a possible data breach at Sears.
But in a short statement issued today, Sears said the company has found no information indicating a breach at the company.
“There have been rumors and reports throughout the retail industry of security incidents at various retailers, and we are actively reviewing our systems to determine if we have been a victim of a breach,” Sears said in a written statement. “We have found no information based on our review of our systems to date indicating a breach.”
The Secret Service declined to comment.
Media stories about undisclosed breaches in the retail sector have fueled rampant speculation about the identities of other victim companies. Earlier this week, The Wall Street Journal ran a piece quoting Verizon Enterprise Solutions’ Bryan Sartin saying that the company — which investigates data breaches — was responding to two different, currently undisclosed breaches at major retailers.
Interestingly, Sartin gave an interview last week to this publication specifically to discuss a potential blind spot in the approach used by most banks to identify companies that may have had a payment card breach — a weakness that he said almost exclusively manifests itself directly after large breaches like the Target break-in.
The problem, Sartin said, stems from a basic anti-fraud process that the banks use called “common point of purchase” or CPP analysis. In a nutshell, banks routinely take groups of customer cards that have experienced fraudulent activity and try to see if some or all of them were used at the same merchant during a similar timeframe.
This CPP analysis can be a very effective tool for identifying breaches; according to Sartin, CPP — if done properly — can identify a breached entity nine times out of ten.
“When there is a common point of purchase, more than 9 times out of 10 not only do we later find evidence of a security breach, but we can conclusively tie the breach we found to the fraud pattern that’s been reported,” Sartin said.
However, in the shadow of massive card thefts like the one that occurred at Target, false positives abound, Sartin said. The problem of false positives often comes from small institutions that may not have a broader perspective on how much a breach like Target’s can overlap with purchasing patterns at similar retailers.
And that can lead to a costly and frustrating situation for many retailers, particularly if enough banks report the errant finding to Visa, MasterCard and other card associations. At that point, the card brands typically require the identified merchant to hire outside investigators to search for signs of a breach.
“CPP is linear enough that it just says look, there’s a problem in these shoppers’ accounts,” Sartin said. “So you have many banks looking at these patterns, and reporting that upstream, and the more noise these banks make about it, the more likely there will be an investigation that could be erroneous. That’s why there is often a period of probably 60 to 90 days after a major data breach that until such time as the investigating entity gets there and [identifies] the at-risk batch of accounts — there’s really no ability for them to identify what’s a false flag and what’s not.”
Apple on Friday released a software update to fix a serious security weakness in its iOS mobile operating system that allows attackers to read and modify encrypted communications on iPhones, iPads and other iOS devices. The company says it is working to produce a patch for the same flaw in desktop and laptop computers powered by its OS X operating system.
The update — iOS 7.0.6 — addresses a glaring vulnerability in the way Apple devices handle encrypted communications. The flaw allows an attacker to intercept, read or modify encrypted email, Web browsing, Tweets and other transmitted data, provided the attacker has control over the WiFi or cellular network used by the vulnerable device.
There has been a great deal of speculation and hand-waving about whether this flaw was truly a mistake or if it was somehow introduced intentionally as a backdoor. And it’s not yet clear how long this bug has been included in Apple’s software. In any case, if you have an iPhone or iPad or other iOS device, please take a moment to apply this fix.
Generally, I advise users to avoid downloading and installing security updates when they are using public WiFi or other untrusted networks. The irony here is that iOS devices will download updates automatically whenever they are connected to a WiFi network. But as several folks have already pointed out on Twitter, Apple uses code-signing on iOS and app updates to ensure that rogue code can’t be pushed to devices.
I will update this post when Apple ships the patch for OS X systems. For now, it may be wise to avoid using Safari on OS X systems. As Dan Goodin at Ars Technica writes, “because the Google Chrome and Mozilla Firefox browsers appear to be unaffected by the flaw, people should also consider using those browsers when possible, although they shouldn’t be considered a panacea.”
Update: Apple has fixed this and a number of other important issues with OS X in this release.
Interview with Gentoo developer Sven Vermeulen (swift)
(by David Abbott)
1. Hi Sven, tell us about yourself?
My interest in technology & science never faded. Although computer systems and software development are my primary hobbies (as they can be handled hands-on easily without heavy investments) I still like to learn about the progress made in other fields and give myself exercises to keep my knowledge on those fields up to date. And for some reason, that always tends to help me with my real-life work (for instance for contract optimizations I used mathematical optimization methods).
I live with my daughter close to my work (between Brussels and Antwerp) which allows me to go to work by bicycle if my presence isn’t needed elsewhere. My work does require me to go abroad from time to time, but mostly within the European Union.
In my free time I enjoy … wait, free time? Nope, don’t have that nowadays. Let me rephrase it: if I had more free time, I’d probably spend it jogging or swimming (which I currently only do to clear my mind), sitting behind my computer (programming, documenting or just playing around), watching cats do stupid things on my TV (YouTube – I don’t have cable or other TV services) and playing board games with friends.
Alas, most time is spent either on work, on working in my home (renovations) or providing taxi services to my daughter.
2. How did you get involved with Linux and Open Source?
Since then, I have never deployed anything commercial or proprietary on my own systems. The BBSes (and later the Internet) provided all the information I needed to continue with free software. And as a C programmer (not saying I’m good at it, just saying I program in it) I took on the challenge of supporting my (then unsupported) Matrox graphics card with dual output in Linux. I got good help from the Linux development community, and got in touch with Linux’s internal structures, which I immediately embraced as a new source of knowledge as I moved to software engineering in my studies.
All software related things I did were in the free software world, patching here and there. After a while, I stumbled upon the next challenge, which was convincing other users to use free software. A major gap in this area was documentation, so I started learning about writing good documentation (I’m still disappointed that the Darwin Information Typing Architecture (DITA) hasn’t broken through), which is about the point that I joined Gentoo Linux.
In Gentoo, I first helped with translations, then moved on to English documentation, authoring, etc. Internally, I’ve been through various roles (regular developer, project manager, top-level project lead, trustee, council) in various areas (most of them non-technical, such as documentation, PR, recruitment). After quitting and joining a few times (I seem to have ups and downs in available time) I’m now working to keep the Gentoo documentation maintained, as well as supporting SELinux through the Gentoo Hardened project.
I often bounce from one technology or software to another, depending on the needs of the day. Need to detect installed libraries (in order to track potential vulnerabilities) but can’t find a tool? I’ll write one. Want to confirm secure configurations? I’ll learn about SCAP technologies and implement that. Require a web-based question & answer application? Let’s look at how HTML5 works, shall we. I’m pretty fluent in learning about technologies, protocols and what not.
I almost wish I was equally fluent in languages and history, which were my major obstacles at school…
3. I read your book Linux Sea and was not only impressed, but really enjoyed it. Doing a Gentoo Linux install using the book as a classroom textbook would be the kind of class I would love to take. How did the book come about, and why Gentoo?
As a target distribution, I chose Gentoo because there aren’t many resources on Gentoo, and because Gentoo sticks close to the implementations of the upstream projects themselves. There are no interfaces or APIs wrapped around any of the functionality that a Linux operating system provides, so I can easily discuss the real implementations. Not completely a “Linux from scratch”, but sufficiently close.
Another advantage of using Gentoo as example distribution is that readers, who use different distributions, can still enjoy the book (as it explains how things work) and then refer to the distribution-specific information of their distribution to go further, now with the knowledge of how things work “under the hood”.
4. With your skillset you would be welcome in any project, why do you support Gentoo?
If I want to do something similar with another distribution, I would most likely need to use a different set of distributions depending on my needs.
A second reason is the flexibility offered by Gentoo. Many tools offered by Gentoo are meant to assist in the maintenance and use of one or more tools or services, but without limiting the configuration abilities of the underlying components. Take portage for instance: you can hook into the various phases of package deployment easily, and many ebuilds support epatch_user, allowing for customizing deployments without removing functionality offered by Gentoo.
Or OpenRC’s dependency-based service scripts. Instead of naming a script with a number that encodes when it should launch, you just put the necessary dependencies in the script and you’re all set. That’s not just easy. That’s what makes Gentoo unique and powerful.
5. What could we be doing better?
But why not look for more innovative ideas? Be open and bold with ideas, discuss them publicly (now that we have the Gentoo wiki, this should be easy to implement), create concept code and documentation. Do things other distributions can’t.
We should dare to fail, in order to learn. Right now, it seems that we’re sometimes afraid of making the wrong choice. We’re an organization with several hundred developers and volunteers, but not bound by service agreements, contractual obligations or implied functional adherence based on financial contributions. We should leverage that and move towards more innovative fields.
A second item that I believe would improve Gentoo as a distribution would be to remove complexity. Often, we do things in a somewhat complex way because there is no other way. That’s fine. But after a while, newer and simpler methods come along, and they should replace the more complex functionality we implemented.
Think about how the Gentoo Handbook is currently developed. We used our own format / syntax for reasons that were, back then, correct reasons. But things move on and mature. And while there are now much better alternatives available, we can’t use it because we customized everything to our needs. Writing documentation in the Gentoo Handbook almost requires you to learn how to program, as we use keywords, conditionals, include directives, automatic link generation, string substitutions and more. This is complex, and we should focus on simplifying this.
*I* should focus on simplifying this.
I’m pretty sure other examples can be found. Are all our eclasses still fully needed? How come the ruby-ng eclass is quite different from the python-r1 eclass, even though they generally want to offer the same functionality? TIMTOWTDI, but if one method is better and simpler than the other, use it.
6. Describe to our readers the relationship between the council and the foundation?
7. Is this relationship working, does it need to be changed or improved?
8. Same question for improving our partnership with Förderverein Gentoo e.V.
9. What about moving the Gentoo Foundation to Belgium or somewhere in Europe?
10. What documentation is moving to the wiki?
In the next phase, handbook-format documents (such as the SELinux Handbook, Gentoo Security Handbook and eventually the Gentoo Handbook itself) can be moved to the wiki as well. For the Gentoo Handbook though, this is more than just a copy of the data – it will require refactoring the documentation into a structure we can maintain. I know the wiki supports inclusions and even conditionals, but this is some complexity I want to remove from the handbook.
A second thing a3li and I will look into (when the time comes) is the ability to actually generate booklets from the wiki (like wikibooks.org does). I think this is a logical consequence, as those plugins (as used by wikibooks) are made with larger documents in mind, and allow us to align the documentation development with the best practices gently suggested by the plugins.
But to do so, I believe that the architecture-specifics will need to be cleaned out. Either an entire chapter can be written independently of an architecture, or it can’t. Having a chapter that is “mostly” for one architecture, but with parameters and variables for each architecture just to make sure it reads fine for that architecture, is probably not doable or maintainable.
I have considered moving the larger documents to DocBook format (which is the format I use for my other, non-Gentoo documents), and that idea is still not abandoned. I guess I’ll need to sleep on it some more.
But first we must make sure that our wiki is up to the quality standards we once set for our documentation.
11. With the documentation moving to the wiki have you noticed more contributions from the community?
Existing documentation, which is moved to the wiki, doesn’t get as many updates as I expected. But there are many possible reasons why, such as the documentation being quite explicit, people being afraid of editing documents written in a particular style they are not familiar with, or people just suggesting things in the discussion pages but not in the main page, …
12. What should we be doing to get more users involved?
A second thing is to try and get the discussions through the discussion pages more active. Right now, many discussions are still slow-paced. We should promote this more, but also make sure that we can follow up on these discussions easily. There are two ways to do this in a wiki. One is to watch the page (and the discussions), the second one is to mark the discussions as being “open”, so they can be aggregated and viewed through the proper category in the wiki.
13. Who would you like to see recruited to become Gentoo Developers?
With the (eventual) implementation of git repositories, we should also be able to work with pull-request methods, allowing people who don’t want to become developers to still contribute to the portage tree.
But the most important thing is not what technical or non-technical abilities they have, or which role they want to take in the Gentoo project, but rather their willingness to perform and work on an operating system used by several thousand users.
14. What else can we utilize the wiki for?
Perhaps we can, one day, use the wiki as some sort of reference architecture for Gentoo. Such a reference architecture would explain readers how Gentoo could be used to create an integrated environment, where each component has bindings with other components, in a well-orchestrated manner.
Right now, most documents focus on a single technology implementation and there is no full picture as to what Gentoo can really offer to organizations and companies of reasonable size.
15. What would you like the main site to be used for and what framework / language should we use for the redesign?
Underneath, this can even be static HTML. That’s quite powerful, well known to most people, and doesn’t need any (potentially risky) modules on the web sites.
16. As a Gentoo Developer what are some of your accomplishments?
17. What would be your dream job?
18. What are the specs of your current boxes?
Next to the systems at home, I also manage two Dell PowerEdge servers which both host virtual systems for various personal purposes (such as attempts to move current cloud-driven solutions, like Google mail and calendar, towards self-hosted solutions). These servers are co-located (luckily, because they make too much noise to be in my home).
19. Can you describe your personal desktop setup (WM/DE)?
My previous one was fluxbox, which I enjoyed much as well. However, I started running XFCE due to a bug someone reported (in SELinux support) that I wanted to reproduce. And for some reason, it stuck.
20. What gives you the most enjoyment within the Gentoo community?
Gentoo @ FOSDEM 2014
(by Pavlos Ratis)
On the 1st and 2nd of February, many Gentoo users and developers attended FOSDEM, the biggest F/OSS conference in Europe. Gentoo developer and council member Donnie Berkholz (dberkholz) gave a talk about the status of distribution-level package management and the latest trends. Furthermore, a Gentoo BoF took place on Saturday. There, we had the chance to meet each other and talk about our favorite distro. The day ended with a Gentoo-ish dinner and beers in the city center.
(by Andreas K. Huettel)
First of all, Robin Johnson’s (robbat2) GnuPG key policy GLEP is progressing; it is now officially GLEP (draft) 63, will be posted to the mailing list for discussion one last time soon, and will be on the agenda of the next council meeting (March 2014) for final confirmation. In the meantime, we’ll be happy to receive feedback.
About EAPIs, the council decided to immediately deprecate EAPI 0 and EAPI 3, which means they should in general not be used in new ebuilds anymore and repoman gives a non-fatal warning on commit. EAPI 1 and EAPI 2, already long deprecated, will be banned immediately, in the sense that repoman will not allow committing new ebuilds (but existing ones keep working and can also be modified).
Regarding stable keyword usage on m68k, sh and s390, some discussion about the details took place. In the end, based on a suggestion by Mike Frysinger (vapier), it was decided that the profiles of these arches should all be marked as experimental; the consensus was that package maintainers then do not have to care about the keywording status on these particular arches and can e.g. remove the last stable-marked or keyworded ebuild of a package at will.
The last important topic that was brought up was the policy on tree-wide use of the gtk / gtk2 / gtk3 USE flags, or to be more precise the clash between the documentation provided by the gnome team and the policy decided on in a recent QA team meeting. Both Chris Reffett (creffett) as QA team lead and Chí-Thanh Christopher Nguyễn (chithead) presented their viewpoints. Further discussion touched upon the question of how far-reaching the policy decisions made by the QA team may be. In the end the council members affirmed that “QA’s right to create standards in glep 48 includes flag names / functions”. Subsequent discussion encouraged the QA and Gnome teams to keep talking.
Gentoo Developer Moves
Gentoo is made up of 252 active developers, of which 40 are currently away.
This section summarizes the current state of the portage tree.
The Gentoo Foundation recently received a donation of services from Rackspace. We would like to thank Rackspace for their donation and for continuing to support Open Source and Free Software Projects.
The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.
The following tables and charts summarize the activity on Bugzilla between 27 January 2014 and 26 February 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.
Closed bug ranking
The developers and teams who have closed the most bugs during this period are as follows.
Assigned bug ranking
The developers and teams who have been assigned the most bugs during this period are as follows.
Tip of the month
Are you using a package that needs a maintainer?
Heard in the community
Problem installing net-libs/webkit-gtk:* hangs (gobject-introspection problem?) with =x11-drivers/nvidia-drivers-325.*
Thanks go to email@example.com for that
What do you do if you encounter a bug that may have already been fixed? Search on Bugzilla with this query to show all the bugs, even those that have already been fixed and closed.
Changes to PBIs
As many of you know there was an issue with PBIs causing them to freeze at random times during use. Kris went into full-blown hermit programmer mode to track down the issue and you’ll be glad to know a fix was committed that addresses this issue. Kris said of the fix: “it’s faster, cleaner, and allows proper access to all of the filesystem data. It can even be used by FreeBSD users who want to run different sets of packages in a location other than /usr/local”. To test out the new changes you will want to rebuild the pbi-manager backend. For those of you that may not know, the pbi-manager utility is a backend that you never see, but is always there managing system interactions when running PBIs. Follow the instructions below to grab the pc-bsd source and rebuild the pbi-manager to apply the fix.
1. Open a new terminal and paste: git clone https://github.com/pcbsd/pcbsd.git
2. type: cd pcbsd/src-sh/pbi-manager/
3. type: sudo make install
4. Restart your system
And you’ve done it! Don’t forget to reset your system! PBIs will not work until the system is reset. For more information, questions, or thoughts please post below.
Changes to Life Preserver
Life Preserver has been updated to bring in some exciting new changes. New automatic snapshot schedules have been added along with new replication schedule options that will allow users more flexibility and control over their Life Preserver snapshot schedules (i.e. hourly, 30 minutes, 10 minutes). New code has been added to allow the user to change the pop-up notification policy (all, only errors, none). A minor bug was also fixed that was causing non-error messages in the “Message” dialog.
Unifying PC-BSD Utility Chain
Work is continuing on standardizing the PC-BSD utility chain. More information has been added at http://wiki.pcbsd.org/index.php/Become_a_Developer/10.1. The changes will also bring in some new keyboard accessibility through hot keys and shortcut keys. There are currently several opportunities available to help update the tool chain, so if you’d like to lend a hand please let us know!
Important changes to Appcafe and PCDM (Release Notes)
* Finish overhaul of the UI
* Fix backend detection of LDAP/Active Directory users (still needs verification/testing by people with this special type of setup)
Login Manager Configuration Utility (pc-dmconf)
So today I was pointed at a funny one:
/etc/systemd/system/ntpdate.service.d/00gentoo.conf
Now instead of being wrongly installed in /usr/lib (whuarghllaaaaaaaawwreghhh!?!$?) there are some config files for systemd bleeding into /etc.
Apart from being inconsistent with itself, this circumvents all previous ways of keeping useless files from being installed. The proper response thus looks like this now:
INSTALL_MASK="/lib/systemd /lib32/systemd /lib64/systemd /usr/lib/systemd /usr/lib32/systemd /usr/lib64/systemd /etc/systemd"
And on the upside this will break udev unless you carefully move its config to /etc (lolwat ur no haz EUNICHS system operation?) - which just motivated me to shift everything I can to eudev.
Reading recommendation: FHS
…but remembering costs extra.
Every once in a while, I come across a patch someone sent me, or which I developed in response to a bug report I received, but it’s been weeks or months and I can’t for the life of me remember where it came from, or what it’s for.
Case in point—I’m typing this on a laptop I haven’t used in over two months, and one of the first things I found when I powered it on and opened Chrome was a tab with the following patch:
The patch fixes a long-standing bug in
Burning all the bridges. Cleaning up jails with ezjail-admin on #FreeBSD
I noted that my updates on my jail host didn’t actually do a delete-old/delete-old-libs during the basejail process:
ezjail-admin update -i
I tend to update my jails with my base host svn updates to -current, so there’s a bit of churn and burn with regards to old files and such. This came to a head today as my src.conf on the base host declares WITHOUT_NIS to conserve my limited space.
The python port checks for the existence of the yp binaries to determine whether or not to build NIS support. So, if the old binaries are lying around and support for NIS is removed from your system, python’s build will abort with something like the following:
Install them as needed.
I realized that even though my host system was fairly clean (I do port rebuilds after each upgrade and delete-old delete-old-libs following that), the basejail was still filled with obsoleted files.
A super dangerous and super effective way to clean that up is the following:
Dangerous, because you have to realize that you’re deleting binaries and libraries that might still be in use if you haven’t recompiled your ports packages. Effective, because it will clean up and purge a lot of things if you haven’t done it in a while.
This also led me to understand that the /etc/src.conf tunables WITHOUT_* don’t *stop* the build system from creating the binaries and libraries. It doesn’t seem to shorten your build time. It *will* allow you to purge them from your system at install time with the delete-old make targets.
I just got word of an embarrassing bug in OpenPAM Nummularia. The
This macro is never used directly, but it is referenced by
The obvious course of action is to add unit tests for the character classification macros (r760) and then fix the bug (r761). In this case, complete coverage is easy to achieve since there are only 256 possible inputs for each predicate.
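Exhaustive testing is practical here precisely because the input domain is so small. As an illustration of the approach (OpenPAM’s real macros are C; this Python sketch merely mirrors the idea with a hypothetical is_upper predicate), one can check a candidate implementation against a trusted reference for every possible byte value:

import string

def is_upper(ch: int) -> bool:
    # Candidate predicate under test: true only for ASCII uppercase letters.
    # (Hypothetical reimplementation; OpenPAM's real macro lives in C.)
    return 0x41 <= ch <= 0x5A

def test_is_upper_exhaustive():
    reference = set(string.ascii_uppercase.encode())
    for ch in range(256):  # the complete input domain
        assert is_upper(ch) == (ch in reference), f"mismatch at {ch:#04x}"

test_is_upper_exhaustive()
print("all 256 inputs verified")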
Unsurprisingly, writing more unit tests for OpenPAM is moving up on my TODO list. Please contact me if you have the time and inclination to help out.
Last week’s story about steeply falling prices on credit and debit card data stolen from Target mentioned several reasons why many banks may not have already reissued all of their cards impacted by the breach. But it left out one other key reason: A huge backlog of orders at companies that manufacture credit and debit cards on behalf of financial institutions.
Turns out, while the crooks responsible for monetizing the Target breach seem to have had little trouble counterfeiting stolen cards, the process by which banks obtain legitimate replacement cards for their customers is not always quite so speedy.
I recently spoke with a gentleman who heads up security at a small federal credit union, and this individual said his institution ended up printing their own cards in-house after being told by their financial services provider that their order for some 2,000 new customer cards compromised in the Target breach would have to get behind a backlog of more than 2 million existing orders from other banks.
The credit union in question issues Visa-branded cards to its customers, but the actual physical cards are produced by Fiserv, a Brookfield, Wisc. financial services firm that also handles the online banking portals for a huge number of small to mid-sized financial institutions nationwide. In addition to servicing this credit union, Fiserv also prints cards for some of the biggest banks in the world, including Bank of America and Chase.
Shortly after the holidays, the credit union began alerting affected customers, notifying them that the institution would soon be reissuing cards. But when it actually went to place the order for the new cards, the institution was told it would have to get in line.
“They informed us that there was a backlog of 2 million cards, and said basically, ‘We’ll get to you when we get to you’,” the credit union source told KrebsOnSecurity.
Murray Walton, chief risk officer at Fiserv, acknowledged that the company has experienced extraordinarily high demand for new cards in the wake of the Target breach, but that Fiserv is quickly whittling down its existing backlog of orders.
“A large breach injects additional demand into a system that is already operating at near-peak capacity at year-end,” Walton said. “As a result, producers face the challenge of juggling existing contractual commitments with this incremental demand, and turn to mandatory overtime and staff augmentation to get the most out of their equipment and infrastructure. We believe we are managing this situation as well as possible, and are beginning to see our cycle times (order to delivery) diminish compared to a few weeks ago. Meanwhile, we note that fraud prevention is a multi-faceted challenge, and card reissue is only one arrow in the quiver. Alert consumers and behind-the-scenes fraud management programs are also essential.”
Faced with mounting customer service requests from account holders who’d been told to expect new cards, the credit union decided to take matters into its own hands.
“We have the capability to print out the cards ourselves at a local branch, so some of our software developers wrote some scripts to export the customer data and we had two people who ended up burning the midnight oil for several days making these cards by hand.”
Most Internet users are familiar with the concept of updating software that resides on their computers. But this past week has seen alerts about an unusual number of vulnerabilities and attacks against some important and ubiquitous hardware devices, from consumer-grade Internet routers, data storage and home automation products to enterprise-class security solutions.
Last week, the SANS Internet Storm Center began publishing data about an ongoing attack from self-propagating malware that infects some home and small-office wireless routers from Linksys. The firewall built into routers can be a useful and hearty first line of protection against online attacks, because its job is to filter out incoming traffic that the user behind the firewall did not initiate. But things get dicier when users enable remote administration capability on these powerful devices, which is where this malware comes in.
The worm — dubbed “The Moon” — bypasses the username and password prompt on affected devices. According to Ars Technica’s Dan Goodin, The Moon has infected close to 1,000 Linksys E1000, E1200 and E2400 routers, although the actual number of hijacked devices worldwide could be higher and is likely to climb. In response, Linksys said the worm affects only those devices that have the Remote Management Access feature enabled, and that Linksys ships these products with that feature turned off by default. The Ars Technica story includes more information about how to tell whether your router may be impacted. Linksys says it’s working on an official fix for the problem, and in the meantime users can block this attack by disabling the router’s remote management feature.
Similarly, it appears that some ASUS routers — and any storage devices attached to them — may be exposed to anyone online without the need of login credentials if users have taken advantage of remote access features built into the routers, according to this Ars piece from Feb. 17. The danger in this case is with Asus router models including RT-AC66R, RT-AC66U, RT-N66R, RT-N66U, RT-AC56U, RT-N56R, RT-N56U, RT-N14U, RT-N16, and RT-N16R. Enabling any of the (by-default disabled) “AiCloud” options on the devices — such as “Cloud Disk” and “Smart Access” — opens up a potentially messy can of worms. More details on this vulnerability are available at this SecurityFocus writeup.
ASUS reportedly released firmware updates last week to address these bugs. Affected users can find the latest firmware updates and instructions for updating their devices by entering the model name/number of the device here. Alternatively, consider dumping the stock router firmware in favor of something more flexible, less buggy and most likely more secure (see this section at the end of this post for more details).
YOUR LIGHTSWITCH DOES WHAT?
Outfitting a home or office with home automation tools that let you control and remotely monitor electronics can quickly turn into a fun and addictive (if expensive) hobby. But things get somewhat more interesting when the whole setup is completely exposed to anyone on the Internet. That’s basically what experts at IOActive found is the case with Belkin‘s WeMo family of home automation devices.
According to research released today, multiple vulnerabilities in these WeMo Home Automation tools give malicious hackers the ability to remotely control the devices over the Internet, perform malicious firmware updates, and access an internal home network. From IOActive’s advisory (PDF):
There does not appear to be anyone or anything attacking these vulnerabilities — yet. But from where I sit, the scariest part of these flaws is Belkin’s apparent silence and inaction in response to IOActive’s research. Indeed, according to a related advisory released today by Carnegie Mellon University’s Software Engineering Institute, Belkin has not responded with any type of solution or workaround for the identified flaws, even though it was first notified about them back in October 2013. So be forewarned: Belkin’s WeMo products may allow you to control your home electronics from afar, but you may not be the only one in control of them.
Update, 10:24 p.m. ET: Belkin has responded with a statement saying that it was in contact with the security researchers prior to the publication of the advisory, and, as of February 18, had already issued fixes for each of the noted potential vulnerabilities via in-app notifications and updates. Belkin notes that users with the most recent firmware release (version 3949) are not at risk for malicious firmware attacks or remote control or monitoring of WeMo devices from unauthorized devices. Belkin urges such users to download the latest app from the App Store (version 1.4.1) or Google Play Store (version 1.2.1) and then upgrade the firmware version through the app.
NETWORK ATTACKED STORAGE
As evidenced by the above-mentioned ASUS and Linksys vulnerabilities, an increasing number of Internet users are taking advantage of the remote access features of routers and network-attached storage (NAS) devices to remotely access their files, photos and music. But poking a hole in your network to accommodate remote access to NAS systems can endanger your internal network and data if and when new vulnerabilities are discovered in these devices.
One popular vendor of NAS devices — Synology — recently alerted users to a security update that fixes a vulnerability for which a public exploit has been available since December, one that allows attackers to remotely compromise the machines. A number of Synology users recently have been complaining that the CPUs on their devices were consistently maxing out at 100 percent usage. One user said he traced the problem back to software that intruders had left behind on his Synology RackStation device, which had turned his entire network storage array into a giant apparatus for mining Bitcoins.
According to an advisory that Synology emailed Monday to registered users, the many not-too-subtle signs of a compromised NAS include:
Synology urges customers with hardware exhibiting the above-mentioned behavior to follow the instructions here and re-install the disk station management software on the devices, being sure to upgrade to the latest version. For users who haven’t encountered problems yet, Synology recommends updating to the latest version, using the device’s built-in update mechanism.
SYMANTEC ENDPOINT INFECTION?
Although not strictly hardware-related, also to be filed under the “Hey, I thought this stuff was supposed to protect my network!” department is new research on several serious security holes in Symantec Endpoint Protection Manager — a host-based intrusion protection system and anti-malware product designed for businesses in search of a centrally-managed solution.
In an advisory issued today, Austrian security firm SEC Consult warned that the flaws would allow attackers “to completely compromise the Endpoint Protection Manager server, as they can gain access at the system and database level. Furthermore attackers can manage all endpoints and possibly deploy attacker-controlled code on endpoints.”
Symantec has released updates to address the vulnerabilities, and probably none too soon: According to the SANS Internet Storm Center, over the past few weeks attackers have been massively scanning the Internet for Symantec Endpoint Protection Manager systems. SANS’s Johannes B. Ullrich says that activity points to “someone building a target list of vulnerable systems.”
Continuing the discussion from above about alternatives to the stock firmware that ships with most wired/wireless routers: most stock router firmware is fairly clunky and barebones, or else includes undocumented “features” or limitations.
Here’s a relatively minor but extremely frustrating example, one that I discovered on my own just this past weekend: I helped someone set up a powerful ASUS RT-N66U wireless router, and as part of the setup made sure to change the default router credentials (admin/admin) to something more complex. I tend to use passphrases instead of passwords, so my password was fairly long. However, ASUS’s stock firmware didn’t tell me that it had truncated the password at 16 characters, so when I went to log in to the device later it would not let me in with the password I thought I’d chosen. Only by working backwards through the 25-character passphrase I’d chosen — eliminating one letter at a time and re-trying the username and password — did I discover that the login page would give an “unauthorized” response if I entered anything more than the first 16 characters of the password.
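For the curious: the same backwards search can be scripted. Below is a minimal sketch using curl, assuming, purely for illustration (this is not necessarily how the ASUS login actually works), that the admin page sits at 192.168.1.1 behind HTTP basic auth and answers 401 to bad credentials:

PASS='the-25-character-passphrase-here'   # hypothetical; substitute your own
# Try ever-shorter prefixes of the passphrase until the router stops
# answering 401 Unauthorized.
for LEN in $(seq ${#PASS} -1 8); do
  CODE=$(curl -s -o /dev/null -w '%{http_code}' \
              -u "admin:${PASS:0:$LEN}" http://192.168.1.1/)
  if [ "$CODE" != "401" ]; then
    echo "login accepted with the first $LEN characters"
    break
  fi
done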
Normally, when it comes to upgrading router firmware, I tend to steer people away from the manufacturer’s firmware toward open source alternatives, such as DD-WRT or Tomato. I have long relied on DD-WRT because it comes with all the bells, whistles and options you could ever want in a router firmware, but it generally keeps those features turned off by default unless you switch them on.
Whether you decide to upgrade the stock firmware to a newer version by the manufacturer, or turn to a third party firmware maker, please take the time to read the documentation before doing anything. Updating an Internet router can be tricky, and doing so demands careful attention; an errant click or failure to follow closely the installation/updating instructions can turn a router into an oversized paperweight in no time.
Update, Feb. 19, 8:10 a.m. ET: Fixed date in Synology exploit timeline.
I’m glad to announce the release of py3status v1.3, which brings to life a feature request from @tasse and @ttyE0. Guys, I hope this one will please you!
what’s new?
Along with a localization bug fix thanks to @zetok from Poland, the main new feature is that py3status now supports a standalone mode, which you can use when you only want your own modules displayed in an i3bar!
As usual, this release is already available for my fellow Gentoo Linux users and on pypi!
The changelog is here and quick to read, enjoy!
Some of you may already have noticed that the sys-apps/ed and sys-fs/ddrescue packages started pulling in the lzip archiver. «Is this some new fancy archiver?» you may ask. The answer is «no. It’s been around for a very long time, and it never got any real interest.»
You can read some of the background story in the New Options in the World of File Compression Linux Gazette article. Long story short, lzip was created before xz as a response to the limitations of the .lzma format used by lzma-utils. However, it never got much attention, and when xz-utils was released as a direct successor to lzma-utils, lzip became practically redundant. And the two projects co-existed silently until lately…
Over the past five years, Antonio Diaz Diaz, lzip’s author, and a few project supporters have been trying to convince the community that the lzip format is superior to xz. However, they were never able to provide any convincing arguments, and while xz gained popularity, lzip stayed in the shadow. It was used mostly by the projects Diaz was a member of.
It seems that he has finally decided that advocacy will not help his pet project in gaining popularity. Instead, he decided to take advantage of his administrator position in the mentioned GNU projects and discontinue providing non-.lz tarballs. As he says, «surely every user of ddrescue would like to know about lzip […]».
So, Gentoo user, would you like to know about lzip? Let’s try to get a few fair points here.
First of all, it should be noted that the two competing projects are two different implementations of the same underlying compression algorithm — LZMA — and they use incompatible file formats. Therefore, both can achieve the same compression ratio (and similar speed), but using either of them requires users to download the appropriate tool.
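A quick way to see the incompatibility in practice (a sketch; the file name is made up):

xz -9 -k sample.tar      # writes sample.tar.xz, keeps the input
lzip -9 -k sample.tar    # writes sample.tar.lz, keeps the input
xz -d sample.tar.lz      # fails: xz does not recognize the .lz format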
xz has been gaining traction lately. The initial doubt period seems to be over, and a growing number of projects is adopting the format. This includes both use of the compression library and distribution of sources as .tar.xz. An xz decoder is implemented within the kernel, allowing xz to be used as the compression format for filesystems. Practically speaking, it’s unavoidable for Gentoo users, since it is used to compress some of the larger sources (like kernel sources).
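For instance, assuming squashfs-tools built with xz support and a kernel with CONFIG_SQUASHFS_XZ enabled (a sketch for illustration, not a recommendation):

# Compress a directory tree into a SquashFS image using xz...
mksquashfs rootdir/ image.sqfs -comp xz
# ...and let the in-kernel xz decoder handle decompression at mount time.
mount -o loop -t squashfs image.sqfs /mnt/squash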
lzip is a side project with minor interest. Most of our users were unaware of its existence until they were forced to install it in order to unpack some other package. In fact, since it is used by particularly small packages, the gain from using it (compared to gzip) is even smaller than the size of the lzip tarball itself. Not to mention comparing it with xz, which most of our users have installed already.
As analyzed by Ohloh, xz-utils «has a well established, mature codebase», though it is mostly maintained by a single person. lzip is developed by a single person with no officially known contributors and no public source code repository. The author claims that lzip is mature and will be continually developed, but those are only promises, compared to the community growing around xz.
Feature-wise, xz supports more fine-grained configuration of LZMA2 and additional filters (much like 7-Zip). lzip has a recovery tool and a few extra promises.
xz-utils is written in C, with the compression library and basic utilities being public-domain. lzip is written in C++ and licensed under GPLv3 (though there is also a limited public-domain C version). xz-utils uses clean autotools; lzip uses a custom configure script and Makefile.
To sum up: both tools are quite similar, and neither has a strong advantage over the other. However, the popularity of xz makes it a better choice most of the time, while using lzip mostly forces users to install an extra tool for no real benefit. Continued support and development of xz-utils is guaranteed by the community, while for lzip it rests on the author’s promise alone.
This post would end here if not for the late events. Now lzip has gained an important disadvantage: its author is simply unprofessional. While many other projects have started shipping .xz-compressed tarballs following the format’s rise in popularity, Diaz is grasping at straws and abusing his position, trying to force people to use his pet project instead. He clearly has more concern for the popularity of lzip than for friendliness to the users of the other projects he administers.
Big news this week! Kris has finished re-writing the code for how PC-BSD handles major updates. The general consensus was that there were still many users out there who were having difficulty when upgrading to a new version (i.e. 9.2 -> 10). For more information on the new PC-BSD update system, check out the article here. To view Kris’ address to the testing mailing list, click here. Our goal is and has always been to have a reliable system to push out updates, and we think you will all be very pleased with the results!
Basic guidelines for PC-BSD utilities are continuing to evolve and we want your input! If you have ideas for the development team on what should become standard practice, we want to hear from you. You can join the discussion on the PC-BSD developer mailing list if you’d like to submit your ideas. To check out the Guidelines for PC-BSD utilities, visit the PC-BSD 10.1 wiki’s “Become a Developer” section @ http://wiki.pcbsd.org/index.php/Become_a_Developer/10.1
A moderate batch of fixes for trac tickets was committed this week, addressing minor issues with the warden. AppCafe has also received a big update to its user interface, allowing for a smoother experience for users. If you want to give the two newest versions of these programs a try, grab the PC-BSD source and let us know what you think!
That’s it for this week! See you all next time!
Kris just posted a call for testers for a new methodology for major system upgrades (such as 9.2 -> 10.0 -> 10.1, etc.) and we are looking for people who are still running 9.2 to try it out. The full text of the call for testers is at the bottom of this post, but to give a bit of background: we have been generally unimpressed with the reliability of the “standard” FreeBSD update tools (freebsd-update and pkg) when it comes to fetching uncorrupted update files over the Internet. This new methodology takes those two utilities out of the general preparations for an update (download/verification of files), as well as a couple of other upgrade steps, so that there should no longer be an issue with starting an upgrade when only some of the upgrade files were actually retrieved successfully.
Please test it out and let us know how it goes!
Remember, always back up your data before doing any major upgrade like this! The new methodology should automatically create a boot environment for you before doing the upgrade, but better safe than sorry!
Here is the full text of request from Kris to the developer mailing list:
I've just finished up some work on a major re-write of our updating system when "upgrading" between major releases, I.E. 9.2 -> 10.0, 10.0 -> 10.1.

https://github.com/pcbsd/pcbsd/commit/b95e8a83c73511568ae4291a54e0f93f6c67ef30
https://github.com/pcbsd/pcbsd/commit/9a8b3d1945fa67db8e99b0e4e82280b5626aa895

It seems to work well here, but it needs some additional testing from any users still running 9.2 who want to update to 10. To test this, first grab the latest from git:

# git clone https://github.com/pcbsd/pcbsd.git pcbsd
# cd pcbsd/src-sh/pc-updatemanager
# make install

Then run:

# pc-updatemanager install fbsd-10.0-RELEASE

This should start the download / upgrade process. If anything fails during the process, logs are kept in /root, which will assist me in debugging.

Thanks!
For the second time this month, Adobe has issued an emergency software update to fix a critical security flaw in its Flash Player software that attackers are already exploiting. Separately, Microsoft released a stopgap fix to address a critical bug in Internet Explorer versions 9 and 10 that is actively being exploited in the wild.
The vulnerabilities in both Flash and IE are critical, meaning users could get hacked just by visiting a compromised or booby-trapped Web site. The Flash patch comes just a little over two weeks after Adobe released a rush fix for another zero-day attack against Flash.
Adobe said in an advisory today that it is aware of an exploit that exists for one of the three security holes that the company is plugging with this new release, which brings Flash Player to v. 12.0.0.44 for Windows and Mac systems and v. 11.2.202.336 for Linux.
This link will tell you which version of Flash your browser has installed. IE10/IE11 and Chrome should auto-update their versions of Flash, although IE users may need to check with the Windows Update feature built into the operating system.
If your version of Flash on Chrome (on either Windows, Mac or Linux) is not yet updated, you may just need to close and restart the browser. The version of Chrome that includes this fix is v. 33.0.1750.117 for Windows, Mac, and Linux. To learn what version of Chrome you have, click the stacked bars to the right of the address bar and select “About Google Chrome” from the drop-down menu (the option to apply any pending updates should appear there as well).
The most recent versions of Flash are available from the Adobe download center, but beware of potentially unwanted add-ons (like McAfee Security Scan). To avoid these, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here. Windows users who browse the Web with anything other than Internet Explorer will need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).
Another great cross-platform approach to blocking Flash (and Java) content by default is Click-to-Play, a feature built into Google Chrome, Mozilla Firefox and Opera (and available via add-ons in Safari) that blocks plugin activity by default, replacing the plugin content on the page with a blank box. Users who wish to view the blocked content need only click the boxes to enable the Flash or Java content inside of them. Check out this post for more details on deploying Click-to-Play.
MICROSOFT FIX-IT TOOL
Microsoft has released a security advisory and a FixIt shim tool for a previously unknown zero-day vulnerability in Internet Explorer versions 9 and 10. Microsoft says it is aware of “limited, targeted attacks” that attempt to exploit a vulnerability in Internet Explorer 10. Only Internet Explorer 9 and Internet Explorer 10 are affected by this vulnerability. Other supported versions of Internet Explorer are not affected.
Microsoft says it is working on an official patch, but that in the meantime IE users should consider taking advantage of a new FixIt solution. According to Microsoft, applying the Microsoft Fix it solution here prevents the exploitation of this issue.
Microsoft warns that IE users should make sure they have the latest version of IE before applying this FixIt solution (that means a visit to Windows Update). Also, the company says that after you install this Fix it solution, you may experience increased memory usage when you use Internet Explorer to browse the web. This behavior apparently persists until you restart Internet Explorer.
After my not-so-good experiments with cvs2git I was pointed at cvsps. The currently masked 3.13 release (plus the latest ~arch version of cvs) seems to do the trick quite well. It throws a handful of warnings about timestamps that appear to be harmless to me.
What I haven't figured out yet is how to "fix" the email addresses, but that's a minor thing.
Take the raw cvs repo as in the first blogpost, then:
$ time cvsps --root :local:/var/tmp/git-test/gentoo-x86-raw/ --fast-export gentoo-x86 > git-fast-export-stream
cvsps: NOTICE: used alternate strip path /var/tmp/git-test/gentoo-x86-raw/gentoo-x86/
cvsps: broken revision date: 2003-02-18 13:46:55 +0000 -> 2003-02-18 13:46:55 file: dev-php/PEAR-Date/PEAR-HTML_Common-1.0.ebuild, repairing.
[SNIP]
real 212m56.219s
user 12m11.170s
sys 6m59.110s

So this step takes nearly 3h walltime and consumes ~10GB RAM. It generates about 17GB of temporary data.
To get performance up you'd need a machine with 32GB+ RAM so that you can do all of this in tmpfs (and don't forget to make /tmp a tmpfs too, because tmpfile() creates lots and lots of temporary files there); the tmpfs needs to be >18GB.
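Something along these lines should do for the tmpfs setup (sizes and mount points are my guesses; adjust to your machine, and note that both mounts need root):

mount -t tmpfs -o size=20g tmpfs /var/tmp/git-test
mount -t tmpfs -o size=8g tmpfs /tmp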
In theory you can pipe that directly into git-fast-import. To make testing easier I didn't do that.
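For reference, the pipe would look roughly like this (untested here, which is exactly why I went through the stream file instead):

git init --bare gentoo-x86.git
cvsps --root :local:/var/tmp/git-test/gentoo-x86-raw/ --fast-export gentoo-x86 \
  | GIT_DIR=gentoo-x86.git git fast-import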
Throwing everything into git takes "a while" (forgot to time it, about 20 minutes I think):
Alloc'd objects: 9680000
Total objects:   9675121 ( 190979 duplicates )
  blobs  :       3020032 ( 158366 duplicates 1389088 deltas of 2989578 attempts)
  trees  :       5150778 (  32613 duplicates 4633675 deltas of 4709477 attempts)
  commits:       1504311 (      0 duplicates       0 deltas of       0 attempts)
  tags   :             0 (      0 duplicates       0 deltas of       0 attempts)
Total branches:        8 ( 3 loads )
  marks:        1073741824 ( 4682709 unique )
  atoms:            431658
Memory total:     516969 KiB
  pools:           63219 KiB
  objects:        453750 KiB
pack_report: getpagesize()            = 4096
pack_report: core.packedGitWindowSize = 1073741824
pack_report: core.packedGitLimit      = 8589934592
pack_report: pack_used_ctr            = 7139457
pack_report: pack_mmap_calls          = 1976288
pack_report: pack_open_windows        = 3 / 9
pack_report: pack_mapped              = 2545679911 / 8589934592

And then run git gc (warning: another memory-hungry operation, peaking at ~8GB).
The result is about 7.2GB git repository and appears to have full history.
Files to play around with:
Raw copy of the CVS repo (~440MB)
The git-fast-importable stream created by cvsps (biiig)
The mangled compressed git repository that results from it (~6GB)
The same repo recompressed (~1.7GB)
"git repack -a -d -f --max-pack-size=10g --depth=100 --window=250" takes ~3 CPU-hours and collapses the size nicely. Thanks, Mr.Klausmann!