A New Mindcraft Moment


Posted Nov 6, 2015 20:50 UTC (Fri) by PaXTeam (guest, #24616) [Link]



1. this WP article was the fifth in a series of articles following the security of the internet from its beginnings to relevant issues of today. discussing the security of linux (or lack thereof) fits well in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim for yourself in your recent pieces on the topic. you don't like the facts? then say so. or even better, do something constructive about them like Kees and others have been trying. however silly comparisons to ancient crap like the Mindcraft study and fueling conspiracies don't exactly help your case.

2. "We do a reasonable job of finding and fixing bugs." let's start here. is this claim based on wishful thinking or cold hard facts you're going to share in your response? according to Kees, the lifetime of security bugs is measured in years. that's more than the lifetime of many devices people buy and use and ditch in that period.

3. "Problems, whether they are security-related or not, are patched quickly," some are, some aren't: let's not forget the recent NMI fixes that took over 2 months to trickle down to stable kernels and we also have a user who has been waiting for over 2 weeks now: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 (FYI, the overflow plugin is the first one Kees is trying to upstream, imagine the shitstorm if bug reports will be treated with this attitude, let's hope the btrfs guys are an exception, not the rule). anyway, two examples are not statistics, so once again, do you have numbers or is it all wishful thinking? (it's partly a trick question because you'll also have to explain how something gets to be determined to be security related, which as we all know is a messy business in the linux world)

4. "and the stable-update mechanism makes those patches available to kernel users." except when it doesn't. and yes, i have numbers: grsec carries 200+ backported patches in our 3.14 stable tree.

5. "In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream." you don't have to be shy about naming us, after all you did so elsewhere already. and we also explained the reasons why we haven't pursued upstreaming our code: https://lwn.net/Articles/538600/ . since i don't expect you and your readers to read any of it, here's the tl;dr: if you want us to spend thousands of hours of our time to upstream our code, you will have to pay for it. no ifs, no buts, that's how the world works, that's how >90% of linux code gets in too. i personally find it quite hypocritical that well paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free. and before someone brings up the CII, go check their mail archives: after some initial exploratory discussions i explicitly asked them about supporting this long drawn out upstreaming work and got no answers.



Posted Nov 6, 2015 21:39 UTC (Fri) by patrick_g (subscriber, #44470) [Link]



Money (aha) quote: > I suggest you spend none of your free time on this. Zero. I suggest you get paid to do this. And well. Nobody expects you to serve your code on a silver platter for free. The Linux Foundation and the big companies using Linux (Google, Red Hat, Oracle, Samsung, etc.) should pay security specialists like you to upstream your patches.



Posted Nov 6, 2015 21:57 UTC (Fri) by nirbheek (subscriber, #54111) [Link]



I would just like to point out that the way you phrased this makes your comment a tone argument[1][2]; you've (probably unintentionally) dismissed all of the parent's arguments by pointing at its presentation. The tone of PaXTeam's comment shows the frustration built up over the years with the way things work, which I think should be taken at face value, empathized with, and understood rather than simply dismissed. 1. http://rationalwiki.org/wiki/Tone_argument 2. http://geekfeminism.wikia.com/wiki/Tone_argument Cheers,



Posted Nov 7, 2015 0:55 UTC (Sat) by josh (subscriber, #17465) [Link]



Posted Nov 7, 2015 1:21 UTC (Sat) by PaXTeam (guest, #24616) [Link]



why, is upstream known for its basic civility and decency? have you even read the WP post under discussion, never mind past lkml traffic?



Posted Nov 7, 2015 5:37 UTC (Sat) by josh (subscriber, #17465) [Link]



Posted Nov 7, 2015 5:34 UTC (Sat) by gmatht (guest, #58961) [Link]



No Argument



Posted Nov 7, 2015 6:09 UTC (Sat) by josh (subscriber, #17465) [Link]



Please don't; it doesn't belong there either, and it especially doesn't need a cheering section of the kind the tech press (LWN generally excepted) tends to provide.



Posted Nov 8, 2015 8:36 UTC (Sun) by gmatht (guest, #58961) [Link]



OK, but I was thinking of Linus Torvalds



Posted Nov 8, 2015 16:11 UTC (Sun) by pbonzini (subscriber, #60935) [Link]



Posted Nov 6, 2015 22:43 UTC (Fri) by PaXTeam (guest, #24616) [Link]



Posted Nov 6, 2015 23:00 UTC (Fri) by pr1268 (subscriber, #24648) [Link]



Why should you assume only money will fix this problem? Yes, I agree more resources should be spent on fixing Linux kernel security issues, but don't assume someone giving an organization (ahem, PaXTeam) money is the only solution. (Not meant to impugn PaXTeam's security efforts.)



The Linux development community may have had the wool pulled over its collective eyes with respect to security issues (whether real or perceived), but simply throwing money at the problem won't fix it.



And yes, I do realize the commercial Linux distros do lots (most?) of the kernel development these days, and that implies indirect monetary transactions, but it's much more involved than just that.



Posted Nov 7, 2015 0:36 UTC (Sat) by PaXTeam (guest, #24616) [Link]



Posted Nov 7, 2015 7:34 UTC (Sat) by nix (subscriber, #2304) [Link]



Posted Nov 7, 2015 9:49 UTC (Sat) by PaXTeam (guest, #24616) [Link]



Posted Nov 6, 2015 23:13 UTC (Fri) by dowdle (subscriber, #659) [Link]



I believe you actually agree with the gist of Jon's argument... not enough focus has been given to security in the Linux kernel... the article gets that part right... money hasn't been going toward security... and now it needs to. Aren't you glad?



Posted Nov 7, 2015 1:37 UTC (Sat) by PaXTeam (guest, #24616) [Link]



they talked to spender, not me personally, but yes, this side of the coin is well represented by us and others who have been interviewed. the same way Linus is a good representative of, well, his own pet project called linux.

> And if Jon had only talked to you, his would have been too.

given that i'm the author of PaX (part of grsec), yes, talking to me about grsec matters makes it one of the best ways to research it. but if you know of someone else, be my guest and name them, i'm pretty sure the recently formed kernel self-protection folks would be dying to engage them (or not, i don't think there's a sucker out there with thousands of hours of free time on their hands).

> [...]it also contained quite a few groan-worthy statements.

nothing is perfect, but considering the audience of the WP, this is one of the better journalistic pieces on the topic, regardless of how you and others don't like the sorry state of linux security exposed in there. if you want to discuss more technical details, nothing stops you from talking to us ;). speaking of your complaints about journalistic qualities, since a previous LWN article saw it fit to include a few typical dismissive claims by Linus about the quality of unspecified grsec features with no evidence of what experience he had with the code and how recent it was, how come we didn't see you or anyone else complaining about the quality of that article?

> Aren't you glad?

no, or not yet anyway. i've heard lots of empty words over the years and nothing ever materialized, or worse, all the money has gone to the pointless exercise of fixing individual bugs and the related circus (that Linus rightfully despises FWIW).



Posted Nov 7, 2015 0:18 UTC (Sat) by bojan (subscriber, #14302) [Link]



Posted Nov 8, 2015 13:06 UTC (Sun) by k3ninho (subscriber, #50375) [Link]



Right now we have developers from big names saying that doing everything the Linux ecosystem does *safely* is an itch that they have. Sadly, the surrounding cultural attitude of developers is to hit functional goals, and occasionally performance goals. Security goals are often missed. Ideally, the culture would shift so that we make it difficult to follow insecure habits, patterns or paradigms -- that's a task that will take a sustained effort, not merely the upstreaming of patches. Regardless of the culture, these patches will go upstream eventually anyway because the ideas that they embody are now timely. I can see a way to make it happen: Linus will accept them when a big end-user (say, Intel, Google, Facebook or Amazon) delivers stuff with notes like 'here's a set of improvements, we're already using them to solve this kind of problem, here's how everything will keep working because $proof, note carefully that you're staring down the barrels of a fork because your tree is now evolutionarily disadvantaged'. It's a game and can be gamed; I'd prefer that the community shepherds users to follow the pattern of declaring problem + solution + functional test evidence + performance test evidence + security test evidence. K3n.



Posted Nov 9, 2015 6:49 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



And about that fork barrel: I would argue it's the other way around. Google forked and lost already.



Posted Nov 12, 2015 6:25 UTC (Thu) by Garak (guest, #99377) [Link]



Posted Nov 23, 2015 6:33 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



Posted Nov 7, 2015 3:20 UTC (Sat) by corbet (editor, #1) [Link]



So I have to admit to a certain amount of confusion. I could swear that the article I wrote said exactly that, but you've put a fair amount of effort into flaming it...?



Posted Nov 8, 2015 1:34 UTC (Sun) by PaXTeam (guest, #24616) [Link]



Posted Nov 6, 2015 22:52 UTC (Fri) by flussence (subscriber, #85566) [Link]



I personally think you and Nick Krause share opposite sides of the same coin. Programming ability and basic civility.



Posted Nov 6, 2015 22:59 UTC (Fri) by dowdle (subscriber, #659) [Link]



Posted Nov 7, 2015 0:16 UTC (Sat) by rahvin (guest, #16953) [Link]



I hope I'm wrong, but a hostile attitude isn't going to help anybody get paid. It's at a time like this, when something you appear to be an "expert" at is in demand, that you demonstrate cooperation and willingness to participate, because it's an opportunity. I'm rather surprised that someone doesn't get that, but I'm older and have seen a few of these opportunities in my career and exploited the hell out of them. You only get a few of them in the average career, and a handful at most. Sometimes you have to invest in proving your skills, and this is one of those moments. It looks like the kernel community may finally take this security lesson to heart and embrace it, as stated in the article as a "mindcraft moment". This is an opportunity for developers who may want to work on Linux security. Some will exploit the opportunity and others will thumb their noses at it. In the end, the developers who exploit the opportunity will prosper from it. I feel old even having to write that.



Posted Nov 7, 2015 1:00 UTC (Sat) by josh (subscriber, #17465) [Link]



Perhaps there's a chicken and egg problem here, but when seeking out and funding people to get code upstream, it helps to pick people and groups with a history of being able to get code upstream. It's entirely reasonable to prefer working out of tree, providing the ability to develop impressive and critical security advances unconstrained by upstream requirements. That's work someone might also want to fund, if that meets their needs.



Posted Nov 7, 2015 1:28 UTC (Sat) by PaXTeam (guest, #24616) [Link]



Posted Nov 7, 2015 19:12 UTC (Sat) by jejb (subscriber, #6654) [Link]



You make this argument (implying you do research and Josh does not) and then fail to support it with any cite. It would be far more convincing if you gave up on the Onus probandi rhetorical fallacy and actually cited facts.

> case in point, it was *them* who suggested that they wouldn't fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence.

For those following along at home, this is the relevant set of threads: http://lists.coreinfrastructure.org/pipermail/cii-discuss... A quick precis is that they told you your project was a bad fit because the code was never going upstream. You told them it was because of kernel developers' attitude, so they should fund you anyway. They told you to submit a grant proposal, you whined more about the kernel attitudes, and eventually even your apologist told you that submitting a proposal might be the smartest thing to do. At that point you went silent, not vice versa as you suggest above.

> obviously i won't spend time to write up a begging proposal just to be told 'no sorry, we don't fund multi-year projects at all'. that's something that one should be told in advance (or heck, be part of some public rules so that others will know the rules too).

You seem to have a fatally flawed grasp of how public funding works. If you don't tell people why you need the money and how you will spend it, they're unlikely to disburse. Saying "I'm smart and I know the problem, now hand over the money" doesn't even work for most academics who have a strong reputation in the field; which is why most of them spend >30% of their time writing grant proposals.

> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?

jejb@jarvis> git log | grep -i 'Author: pax.*team' | wc -l
1

Stellar, I must say. And before you light off on those who have misappropriated your credit, please remember that getting code upstream on behalf of reluctant or incapable actors is a hugely useful and time consuming skill and one of the reasons teams like Linaro exist and are well funded. If more of your stuff does go upstream, it will be because of the not inconsiderable efforts of other people in this area.

You now have a business model selling non-upstream security patches to customers. There's nothing wrong with that, it's a fairly traditional first stage business model, but it does rather depend on the patches not being upstream in the first place, calling into question the earnestness of your attempt to put them there.

Now here's some free advice in my field, which is helping companies align their businesses in open source: the selling out-of-tree patch route is always an eventual failure, particularly with the kernel, because if the functionality is that useful, it gets upstreamed or reinvented in spite of you, leaving you with nothing to sell. If your plan B is selling expertise, you have to bear in mind that it'll be a hard sell when you have no out-of-tree differentiator left and git history denies that you had anything to do with the in-tree patches. In fact "crazy security person" will become a self-fulfilling prophecy. The advice? It was obvious to everyone else who read this, but for you: do the upstreaming yourself before it gets done for you. That way you have a legitimate historical claim to Plan B and you might even have a Plan A selling a rollup of upstream tracking patches integrated and delivered before the distributions get around to it. Even your application to the CII couldn't be dismissed on the grounds that your work wasn't going anywhere. Your alternative is to continue playing the role of Cassandra and probably suffer her eventual fate.



Posted Nov 7, 2015 23:20 UTC (Sat) by PaXTeam (guest, #24616) [Link]



> Second, for the probably viable pieces this would be a multi-year
> full time job. Is the CII willing to fund projects at that level? If not
> we all would end up with lots of unfinished and partially broken features.

please show me the answer to that question. without a definitive 'yes' there is no point in submitting a proposal, because this is the time frame that in my opinion the job will take, and any proposal with that requirement would be shot down immediately and be a waste of my time. and i stand by my claim that such simple basic requirements should be public information.

> Stellar, I must say.

"Lies, damned lies, and statistics". you do know there's more than one way to get code into the kernel? how about you use your git-fu to find all the bug reports/suggested fixes that went in because of us? as for me specifically, Greg explicitly banned me from future contributions via af45f32d25cc1 so it's no wonder i don't send patches directly in (and that one commit you found that went in despite said ban is actually a very bad example because it's also the one that Linus censored for no good reason and made me decide to never send security fixes upstream until that practice changes).

> You now have a business model selling non-upstream security patches to customers.

now? we've had paid sponsorship for our various stable kernel series for 7 years. i wouldn't call it a business model though, as it hasn't paid anyone's bills.

> [...]calling into question the earnestness of your attempt to put them there.

i must be missing something here, but what attempt? i've never in my life tried to submit PaX upstream (for all the reasons discussed already). the CII mails were exploratory, to see how serious that whole organization is about actually securing core infrastructure. in a sense i've got my answers, there's nothing more to the story.

as for your free advice, let me reciprocate: complex problems don't solve themselves. code solving complex problems doesn't write itself. people writing code solving complex problems are few and far between, as you will find out in short order. such people (domain experts) don't work for free, with few exceptions like ourselves. biting the hand that feeds you will only end you up in hunger.

PS: since you are so sure about kernel developers' ability to reimplement our code, maybe look at what parallel features i still maintain in PaX despite vanilla having a 'totally-not-reinvented-here' implementation, and try to understand the reason. or just look at all the CVEs that affected, say, vanilla's ASLR but didn't affect mine.

PPS: Cassandra never wrote code, i do. criticizing the sorry state of kernel security is a side project for when i'm bored or just waiting for the next kernel to compile (i wish LTO was more efficient).
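
[For readers who want to check these competing claims themselves, here is a rough sketch of both ways of counting; the author pattern and the credit strings below are assumptions about how PaX/grsecurity contributions might appear in the log, not an authoritative list:

    # commits authored under a "PaX Team" identity (assumed author pattern)
    git log -i --author='pax.*team' --oneline | wc -l

    # commits that merely credit PaX/grsecurity in the commit message
    # (e.g. Reported-by/Suggested-by lines or a plain mention)
    git log -i --grep='pax team' --grep='grsecurity' --oneline | wc -l

The two numbers typically differ, which is the crux of the disagreement above: counting authorship alone misses fixes that went in because of someone's report or out-of-tree patch. -Ed.]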



Posted Nov 8, 2015 2:28 UTC (Sun) by jejb (subscriber, #6654) [Link]



In other words, you tried to define their process for them ... I can't think why that wouldn't work.

> "Lies, damned lies, and statistics".

The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. I posted a one line command anyone could run to get the number of patches you've authored in the kernel. Why don't you post an equivalent that gives figures you like better?

> i've never in my life tried to submit PaX upstream (for all the reasons discussed already).

So the master plan is to demonstrate your expertise by the number of patches you haven't submitted? Great plan, world domination beckons, sorry that one got away from you, but I'm sure you won't let it happen again.



Posted Nov 8, 2015 2:56 UTC (Sun) by PaXTeam (guest, #24616) [Link]



what? since when does asking a question define anything? isn't that how we find out what someone else thinks? isn't that what *they* have that webform (never mind the mailing lists) for as well? in other words, you admit that my question was not actually answered.

> The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument.

you didn't have an argument to begin with, that's what i explained in the part you carefully chose not to quote. i'm not here to defend myself against your clearly idiotic attempts at proving whatever you're trying to prove, as they say even in kernel circles: code speaks, bullshit walks. you can look at mine and decide what i can or cannot do (not that you have the knowledge to understand most of it, mind you). that said, there are clearly other, more capable people who have done so and decided that my/our work was worth something, else nobody would have been feeding off of it for the past 15 years and still counting. and as incredible as it may seem to you, life doesn't revolve around the vanilla kernel, not everyone's dying to get their code in there, especially when it means having to put up with the kind of silly hostility on lkml that you have now also demonstrated here (it's ironic how you came to the defense of josh, who specifically asked people not to bring that infamous lkml style here. nice job there, James.). as for world domination, there are many ways to achieve it and something tells me that you're clearly out of your league here, since PaX has already achieved that. you are running code that implements PaX features as we speak.



Posted Nov 8, 2015 16:52 UTC (Sun) by jejb (subscriber, #6654) [Link]



I posted the one line git script giving your authored patches in response to this original request by you (this one, just in case you've forgotten: http://lwn.net/Articles/663591/): > as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)? I take it, by the way you have shifted ground in the previous threads, that you wish to withdraw that request?



Posted Nov 8, 2015 19:31 UTC (Sun) by PaXTeam (guest, #24616) [Link]



Posted Nov 8, 2015 22:31 UTC (Sun) by pizza (subscriber, #46) [Link]



Please provide one that's not wrong, or less wrong. It should take less time than you've already wasted here.



Posted Nov 8, 2015 22:49 UTC (Sun) by PaXTeam (guest, #24616) [Link]



anyway, since it's you guys who have a bee in your bonnet, let's test your level of intelligence too. first find out my email address and project name, then try to find the commits that say they come from there (it brought back some memories from 2004 already, how time flies! i'm surprised i actually managed to accomplish this much while explicitly not trying, imagine if i did :). it's an incredibly complex task, so by accomplishing it you'll prove yourself to be the top dog here on lwn, whatever that's worth ;).



Posted Nov 8, 2015 23:25 UTC (Sun) by pizza (subscriber, #46) [Link]



*shrug* Or don't; you're only sullying your own reputation.



Posted Nov 9, 2015 7:08 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



Posted Nov 9, 2015 11:38 UTC (Mon) by hkario (subscriber, #94864) [Link]



I wouldn't either



Posted Nov 12, 2015 2:09 UTC (Thu) by jschrod (subscriber, #1646) [Link]



Posted Nov 12, 2015 8:50 UTC (Thu) by nwmcsween (guest, #62367) [Link]



Posted Nov 8, 2015 3:38 UTC (Sun) by PaXTeam (guest, #24616) [Link]



Posted Nov 12, 2015 13:47 UTC (Thu) by nix (subscriber, #2304) [Link]



Ah. I thought my memory wasn't failing me. Compare to PaXTeam's response to <http://lwn.net/Articles/663612/>. PaXTeam is not averse to outright lying if it means he gets to appear right, I see. Maybe PaXTeam's memory is failing, and this apparent contradiction is not a brazen lie, but given that the two posts were made within a day of each other I doubt it. (PaXTeam's complete unwillingness to assume good faith in others deserves some reflection. Yes, I *do* think he's lying by implication here, and doing so when there's almost nothing at stake. God alone knows what he's willing to stoop to when something *is* at stake. Gosh I wonder why his fixes aren't going upstream very fast.)



Posted Nov 12, 2015 14:11 UTC (Thu) by PaXTeam (guest, #24616) [Link]



> and that one commit you found that went in despite said ban

also, somebody's ban doesn't mean it will translate into someone else's enforcement of that ban, as is clear from the commit in question. it is somewhat sad that it takes a security fix to expose the fallacy of this policy though. the rest of your pithy ad hominem speaks for itself better than i ever could ;).



Posted Nov 12, 2015 15:58 UTC (Thu) by andreashappe (subscriber, #4810) [Link]



Posted Nov 7, 2015 19:01 UTC (Sat) by cwillu (guest, #67268) [Link]



I don't see this message in my mailbox, so presumably it got swallowed.



Posted Nov 7, 2015 22:33 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]



You're aware that it's entirely possible that everyone is wrong here, right? That the kernel maintainers need to focus more on security, that the article was biased, that you are irresponsible to decry the state of security and do nothing to help, and that your patchsets wouldn't help that much and are the wrong direction for the kernel? That just because the kernel maintainers aren't 100% right, it doesn't mean you are?



Posted Nov 9, 2015 9:50 UTC (Mon) by njd27 (guest, #5770) [Link]



I think you have him backwards there. Jon is comparing this to Mindcraft because he thinks that, despite being unpalatable to some of the community, the article might in fact contain a lot of truth.



Posted Nov 9, 2015 14:03 UTC (Mon) by corbet (editor, #1) [Link]



Posted Nov 9, 2015 15:13 UTC (Mon) by spender (guest, #23067) [Link]



"There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this might effectively be true" Simply as you criticized the article for mentioning Ashley Madison even though in the very first sentence of the following paragraph it mentions it didn't involve the Linux kernel, you can't give credence to conspiracy theories with out incurring the same criticism (in different words, you can't play the Glenn Beck "I'm just asking the questions here!" whose "questions" gas the conspiracy theories of others). Very similar to mentioning Ashley Madison as an example for non-technical readers concerning the prevalence of Linux on this planet, if you're criticizing the point out then should not likening a non-FUD article to a FUD article also deserve criticism, particularly given the rosy, self-congratulatory picture you painted of upstream Linux safety? As the PaX Team identified within the initial submit, the motivations aren't arduous to know -- you made no point out in any respect about it being the fifth in an extended-operating series following a pretty predictable time trajectory. No, we did not miss the overall analogy you had been attempting to make, we simply do not suppose you can have your cake and eat it too. -Brad



Posted Nov 9, 2015 15:18 UTC (Mon) by karath (subscriber, #19025) [Link]



Posted Nov 9, 2015 17:06 UTC (Mon) by k3ninho (subscriber, #50375) [Link]



It's gracious of you not to blame your readers. I figure they're a fair target: there's that line about those ignorant of history being condemned to re-implement Unix -- as your readers are! :-) K3n.



Posted Nov 9, 2015 18:43 UTC (Mon) by bojan (subscriber, #14302) [Link]



Unfortunately, I understand neither the "security" folks (PaXTeam/spender) nor the mainstream kernel folks in terms of their attitude. I confess I have absolutely no technical capability on any of these topics, but if they all decided to work together, instead of having endless and pointless flame wars and blame game exchanges, a lot of the stuff would have been done already. And all the while everyone involved could have made another big pile of money on the stuff. They all seem to want a better Linux kernel, so I've got no idea what the problem is. It seems that nobody is willing to yield any of their positions even a little bit. Instead, both sides seem bent on trying to insult their way into forcing the other side to give up. Which, of course, never works - it just causes more pushback. Perplexing stuff...



Posted Nov 9, 2015 19:00 UTC (Mon) by sfeam (subscriber, #2841) [Link]



Posted Nov 9, 2015 19:44 UTC (Mon) by bojan (subscriber, #14302) [Link]



Take a scientific computational cluster with an "air gap", for example. You'd probably want most of the security stuff turned off on it to achieve maximum performance, because you can trust all the users. Now take a few billion mobile phones that may be difficult or slow to patch. You'd probably want to kill most of the exploit classes there, if those devices can still run reasonably well with most security features turned on. So, it isn't either/or. It's probably "it depends". However, if the stuff isn't there for everyone to compile/use in the vanilla kernel, it will be harder to make it part of everyday decisions for distributors and users.
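
[As a small illustration of that last point, here is a hedged sketch of how one might check which hardening options a given kernel build actually has; the config symbols below are only examples (two mainline ones and the main grsecurity one), and the config file path varies by distribution:

    # list a few hardening-related options in the running kernel's build config
    grep -E 'CONFIG_RANDOMIZE_BASE|CONFIG_CC_STACKPROTECTOR|CONFIG_GRKERNSEC' \
        /boot/config-$(uname -r)

An option that is not in the vanilla source cannot show up here at all, which is exactly what makes it hard to fold into everyday choices by distributors and users. -Ed.]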



Posted Nov 6, 2015 22:20 UTC (Fri) by artem (subscriber, #51262) [Link]



How sad. This Dijkstra quote comes to mind immediately: Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot."



Posted Nov 7, 2015 0:35 UTC (Sat) by roc (subscriber, #30627) [Link]



I guess that truth was too unpleasant to fit into Dijkstra's world view.



Posted Nov 7, 2015 10:52 UTC (Sat) by ms (subscriber, #41272) [Link]



Indeed. And the interesting thing to me is that once I reach that point, tests are not enough - model checking at a minimum and really proofs are the only way forwards. I'm no security expert, my field is all distributed systems. I understand and have implemented Paxos and I believe I can explain how and why it works to anyone. But I'm currently working on some algorithms combining Paxos with a bunch of variations on VectorClocks and reasoning about causality and consensus. No test is sufficient because there are infinite interleavings of events, and my head just couldn't cope with working on this either at the computer or on paper - I found I couldn't intuitively reason about this stuff at all. So I started defining the properties I wanted and step by step proving why each of them holds. Without my notes and proofs I can't even explain to myself, let alone anyone else, why this thing works. I find this both completely obvious that this can happen and completely terrifying - the maintenance cost of these algorithms is now an order of magnitude higher.



Posted Nov 19, 2015 12:24 UTC (Thu) by Wol (subscriber, #4433) [Link]



> Indeed. And the interesting thing to me is that once I reach that point, tests are not enough - model checking at a minimum and really proofs are the only way forwards.

Or are you just using the wrong maths? Hobbyhorse time again :-) but to quote a fellow Pick developer ... "I often walk into a SQL development shop and see that wall - you know, the one with the huge SQL schema that no-one fully understands on it - and wonder how I can so easily hold all the schema for a Pick database of the same or greater complexity in my head".

But it's easy - by education I'm a Chemist, by interest a Physical Chemist (and by profession an unemployed programmer :-). And when I'm thinking about chemistry, I can ask myself "what is an atom made of" and think about things like the strong nuclear force. Next level up, how do atoms stick together and make molecules, and think about the electroweak force and electron orbitals, and how do chemical reactions happen. Then I think about how molecules stick together to make materials, and think about metals, and/or Van der Waals, and stuff.

Point is, you need to *layer* stuff, and look at things, and say "how can I split parts off into 'black boxes' so that at any one level I can assume the other levels 'just work'". For example, with Pick a FILE (table to you) stores a class - a collection of identical objects. One object per Record (row). And, same as relational, one attribute per Field (column). Can you map your relational tables to reality so easily? :-)

Going back THIRTY years, I remember a story about a guy who built little computer crabs that could quite happily scuttle around in the surf zone. Because he didn't try to work out how to solve all the problems at once - each of his (incredibly puny by today's standards - that's the 8080/Z80 era!) processors was set to just process a little bit of the problem and there was no central "brain". But it worked ...

Maybe you should just write a bunch of small modules to solve each individual problem, and let the final answer "just happen".

Cheers,
Wol



Posted Nov 19, 2015 19:28 UTC (Thu) by ksandstr (guest, #60862) [Link]



To my understanding, this is exactly what a mathematical abstraction does. For example, in Z notation we'd build schemas for the various modifying ("delta") operations on the base schema, and then argue about preservation of formal invariants, properties of the result, and transitivity of the operation when chained with itself or with the preceding aggregate schema composed of schemas A through O (for which these have already been argued). The result is a set of operations that, executed in arbitrary order, result in a set of properties holding for the result and outputs. Thus proving the formal design correct (with caveat lector regarding scope, correspondence with its implementation [though that can be proven as well], and read-only ["xi"] operations).
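
[As a minimal sketch of the proof obligation described above (notation simplified well below full Z, with illustrative names only): each operation must preserve the invariant,

\[ \mathit{Inv}(s) \land \mathit{Op}_i(s, s') \;\Rightarrow\; \mathit{Inv}(s') \quad \text{for every operation } \mathit{Op}_i, \]

so that any finite chain of operations applied in arbitrary order keeps it:

\[ \mathit{Inv}(s_0) \land \mathit{Op}_{i_1}(s_0, s_1) \land \cdots \land \mathit{Op}_{i_n}(s_{n-1}, s_n) \;\Rightarrow\; \mathit{Inv}(s_n). \]
-Ed.]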



Posted Nov 20, 2015 11:23 UTC (Fri) by Wol (subscriber, #4433) [Link]



Looking through the history of computing (and probably plenty of other fields too), you'll most likely find that people "can't see the wood for the trees" more often than not. They dive into the detail and completely miss the big picture. (Medicine, an interest of mine, suffers from that too - I remember somebody talking about the consultant wanting to amputate a gangrenous leg to save someone's life - oblivious to the fact that the patient was dying of cancer.) Cheers, Wol



Posted Nov 7, 2015 6:35 UTC (Sat) by dgc (subscriber, #6611) [Link]



https://www.youtube.com/watch?v=VpuVDfSXs-g (LCA 2015 - "Programming Considered Harmful") FWIW, I think that this talk is very relevant to why writing secure software is so hard.. -Dave.



Posted Nov 7, 2015 5:49 UTC (Sat) by kunitz (subscriber, #3965) [Link]



While we're spending millions on a multitude of security problems, kernel issues are not on our top-priority list. Honestly, I remember only once having discussed a kernel vulnerability. The result of the analysis was that all our systems were running kernels that were older than the kernel that had the vulnerability. But "patch management" is a real concern for us. Software must continue to work if we install security patches or update to new releases because of the end-of-life policy of a vendor. The revenue of the company depends on the IT systems running. So "not breaking user space" is a security feature for us, because a breakage of one component of our several tens of thousands of Linux systems will stop the roll-out of the security update. Another problem is embedded software or firmware. These days almost all hardware systems include an operating system, often some Linux version, providing a full network stack embedded to support remote management. Frequently those systems do not survive our mandatory security scan, because vendors still haven't updated the embedded openssl. The real challenge is to provide a software stack that can be operated in the hostile environment of the Internet, maintaining full system integrity for ten years or even longer without any customer maintenance. The current state of software engineering will require support for an automated update process, but vendors must understand that their business model must be able to finance the resources providing the updates. Overall I'm optimistic: networked software is not the first technology used by mankind to cause problems that were addressed later. Steam engine use could result in boiler explosions, but the "engineers" were able to reduce this risk significantly over a few decades.



Posted Nov 7, 2015 10:29 UTC (Sat) by ms (subscriber, #41272) [Link]



The following is all guesswork; I'd be keen to know if others have evidence one way or the other on this: The people who learn how to hack into these systems via kernel vulnerabilities know that the skills they've learnt have a market. Thus they don't tend to hack in order to wreak havoc - indeed, on the whole, where data has been stolen in order to release and embarrass people, it _appears_ as though those hacks are through much simpler vectors. I.e. lesser skilled hackers find there is a whole load of low-hanging fruit which they can get at. They're not being paid ahead of time for the data, so they turn to extortion instead. They don't cover their tracks, and they can often be found and charged with criminal offences. So if your security meets a certain basic level of proficiency and/or your company isn't doing anything that puts it near the top of "companies we'd like to embarrass" (I suspect the latter is far more effective at keeping systems "safe" than the former), then the hackers that get into your system are likely to be skilled, paid, and probably not going to do much damage - they're stealing data for a competitor / state. So that doesn't hurt your bottom line - at least not in a way which your shareholders will be aware of. So why fund security?



Posted Nov 7, 2015 17:02 UTC (Sat) by citypw (guest, #82661) [Link]



On the other hand, some effective mitigation at the kernel level would be very useful to crush cybercriminal/skiddie attempts. If one of your customers running a futures trading platform exposes some open API to their clients, and the server has some memory corruption bugs that can be exploited remotely, then you know there are known attack techniques (such as offset2lib) that can make the weaponized exploit a lot easier for the attacker. Will you explain the failosophy "a bug is a bug" to your customer and tell them it will be ok? Btw, offset2lib is ineffective against PaX/Grsecurity's ASLR implementation. For most commercial uses, more security mitigation within the software won't cost you extra budget. You'll still have to do the regression testing for every upgrade.



Posted Nov 12, 2015 16:14 UTC (Thu) by andreashappe (subscriber, #4810) [Link]



Keep in mind that I focus on external web-based penetration tests and that in-house tests (local LAN) will likely yield different results.



Posted Nov 7, 2015 20:33 UTC (Sat) by mattdm (subscriber, #18) [Link]



I keep reading this headline as "a new Minecraft moment", and thinking that maybe they've decided to follow up the .NET thing by open-sourcing Minecraft. Oh well. I mean, security is good too, I guess.



Posted Nov 7, 2015 22:24 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]



Posted Nov 12, 2015 17:29 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link]



Posted Nov 8, 2015 10:34 UTC (Sun) by jcm (subscriber, #18262) [Link]



Posted Nov 9, 2015 7:15 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



Posted Nov 9, 2015 15:53 UTC (Mon) by neiljerram (subscriber, #12005) [Link]



(Oh, and I was also still wondering how Minecraft had taught us about Linux performance - so thanks to the other comment thread that pointed out the 'd', not 'e'.)



Posted Nov 9, 2015 11:31 UTC (Mon) by ortalo (guest, #4654) [Link]



I would just like to add that in my opinion there is a general problem with the economics of computer security, which is especially visible at the moment. Two problems, even, possibly.

First, the money spent on computer security is often diverted towards the so-called security "circus": quick, easy solutions that are mainly selected just in order to "do something" and get better press. It took me a long time - maybe decades - to say that no security mechanism at all is better than a bad mechanism. But now I firmly believe in this attitude and would rather take the risk knowingly (provided that I can save the money/resources for myself) than take a bad approach at fixing it (and have no money/resources left when I realize I should have done something else). And I find there are many bad or incomplete approaches currently available in the computer security field. Those spilling our rare money/resources on ready-made ineffective tools should get the bad press they deserve. And we really have to enlighten the press on that, because it is not so easy to understand the effectiveness of security mechanisms (which, by definition, should prevent things from happening).

Second, and this may be more recent and more worrying: the flow of money/resources is oriented towards attack tools and vulnerability discovery much more than towards new protection mechanisms. This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Moreover, bad ineffective weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness. However, all the resources go to those grown-up teenagers playing white hat hackers with not-so-difficult programming tricks or network monitoring or WWI-level cryptanalysis. And now also to the cyberwarriors and cyberspies who have yet to prove their usefulness entirely (especially for peace protection...). Personally, I would happily leave them all the hype; but I will forcefully claim that they have no right whatsoever to any of the budget allocation decisions. Only people working on security should. And yep, it means we should decide where to put those resources. We have to claim the exclusive lock for ourselves this time. (and I guess the PaX team could be among the first to benefit from such a change). While thinking about it, I would not even leave the white-hat or cyber-guys any hype in the end. That's more publicity than they deserve.

I crave for the day I will read in the newspaper: "Another of these ill-advised debutant programmer hooligans that pretend to be cyber-pirates/warriors modified some well-known virus program code exploiting a programmer mistake and nevertheless managed to bring one of those unfinished and bad quality programs, X, that we are all obliged to use, to its knees, annoying millions of regular users with his unfortunate cyber-vandalism. All the security experts unanimously recommend that, once again, the budget of the cyber-command be retargeted, or at least leveled off, in order to bring more security engineer positions into the academic field or civilian industry. And that X's producer, XY Inc., be held liable for the potential losses if proved to have been unprofessional in this affair."



Hmmm - cyber-hooligans - I like the label. Though it does not apply well to the battlefield-oriented variant.



Posted Nov 9, 2015 14:28 UTC (Mon) by drag (guest, #31333) [Link]



The state of the 'software security industry' is a f-ng disaster. Failure of the highest order. There are massive amounts of money going into 'cyber security', but it is mostly spent on government compliance and audit efforts. This means that instead of actually putting effort into correcting issues and mitigating future problems, the majority of the effort goes into taking existing applications and making them conform to committee-driven guidelines with the minimum amount of effort and changes.

Some level of regulation and standardization is absolutely needed, but lay people are clueless and are completely unable to discern the difference between somebody who has useful experience and some firm that has spent millions on slick marketing and 'native advertising' on big websites and computer magazines. The people with the money unfortunately only have their own judgment to rely on when buying into 'cyber security'.

> Those spilling our rare money/resources on ready-made ineffective tools should get the bad press they deserve.

There is no such thing as 'our rare money/resources'. You have your money, I have mine. Money being spent by some company like Red Hat is their money. Money being spent by governments is the government's money. (you, really, have far more control over how Walmart spends its money than over what your government does with theirs)

> This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Moreover, bad ineffective weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness.

Having secure software with strong encryption mechanisms in the hands of the public runs counter to the interests of most major governments. Governments, like any other for-profit organization, are primarily interested in self-preservation. Money spent on drone projects or banking auditing/oversight regulation compliance is FAR more valuable to them than trying to help the public have a secure mechanism for making phone calls. Especially when those secure mechanisms interfere with data collection efforts.

Unfortunately you/I/we can't depend on some magical benefactor with deep pockets to sweep in and make Linux better. It's just not going to happen. Companies like Red Hat have been massively helpful in spending resources to make the Linux kernel more capable... however they are driven by the need to turn a profit, which means they need to cater directly to the kind of requirements established by their customer base. Customers for EL tend to be far more focused on reducing costs associated with management and software development than on security at the low-level OS. Enterprise Linux customers tend to rely on physical, human-policy, and network security to protect their 'soft' interiors from being exposed to external threats... assuming (rightly) that there is very little they can do to actually harden their systems. In fact, when the choice comes down to security vs convenience, I'm sure that most customers will happily defeat or strip out any security mechanisms introduced into Linux.

On top of that, most Enterprise software is extremely bad. So much so that 10 hours spent on improving a web front-end will yield more real-world security benefits than 1000 hours spent on Linux kernel bugs for most businesses. Even for 'normal' Linux users, a security bug in their Firefox NPAPI flash plugin is far more devastating and poses a massively higher risk than an obscure Linux kernel buffer overflow problem. It's just not really essential for attackers to get 'root' to get access to the important data... usually all of which is contained in a single user account.

Ultimately it's up to people like you and myself to put the effort and money into improving Linux security. For both ourselves and other people.



Posted Nov 10, 2015 11:05 UTC (Tue) by ortalo (guest, #4654) [Link]



Spilling has always been the case, but now, to me and in computer security, most of the money seems spilled due to bad faith. And this is often your money or mine: either tax-fueled governmental resources or corporate costs that are directly reimputed into the price of products/software we are told we are *obliged* to buy. (Look at corporate firewalls, home alarms or antivirus software marketing discourse.) I think it is time to point out that there are several "malicious malefactors" around and that there is a real need to identify and sanction them and confiscate the resources they have somehow managed to monopolize. And I do *not* think Linus is among such culprits, by the way. But I think he may be among those hiding their heads in the sand about the aforementioned evil actors, while he probably has more leverage to counteract them, or oblige them to reveal themselves, than many of us. I find that to be of brown-paper-bag level (though head-in-the-sand is somehow a new interpretation). Ultimately, I think you are right to say that currently it is only up to us individuals to try really hard to do something to improve Linux or computer security. However, I still think that I am right to say that this is not normal; especially while some very serious people get very serious salaries to distribute, rather randomly, some hard-to-evaluate budgets. [1] A paradoxical situation when you think about it: in a domain where you are first and foremost preoccupied by malicious people, everyone should have factual, transparent and honest conduct as the first priority of their mind.



Posted Nov 9, 2015 15:47 UTC (Mon) by MarcB (subscriber, #101804) [Link]



It even has a nice, seven-line Basic pseudo-code that describes the current situation and clearly shows that we are stuck in an endless loop. It does not answer the big question, though: how to write better software. The sad thing is that this is from 2005 and all the things that were obviously stupid ideas 10 years ago have proliferated even more.



Posted Nov 10, 2015 11:20 UTC (Tue) by ortalo (guest, #4654) [Link]



Note: IMHO, we should investigate further why these dumb things proliferate and get so much support. If it is only human psychology, well, let's fight it: e.g. Mozilla has shown us that they can do wonderful things given the right message. If we are facing active people exploiting public credulity: let's identify and fight them. But, more importantly, let's capitalize on this knowledge and secure *our* systems, to show off at a minimum (and more later on, of course). Your reference's conclusion is especially nice to me. "challenge [...] the conventional wisdom and the status quo": that job I would happily accept.



Posted Nov 30, 2015 9:39 UTC (Mon) by paulj (subscriber, #341) [Link]



That rant is itself a bunch of "empty calories". The converse of the things it rants about, which it is suggesting at some level, would be as bad or worse, and indicative of the worst kind of security thinking that has put a lot of people off. Alternatively, it's just a rant that adds little of value. Personally, I think there is no magic bullet. Security is, and always has been in human history, an arms race between defenders and attackers, and one that is inherently a trade-off between usability, risks and costs. If there are mistakes being made, it's that we should probably spend more resources on defences that could block entire classes of attacks. E.g., why is the GRSec kernel hardening stuff so hard to apply to regular distros (e.g. there's no reliable source of a GRSec kernel for Fedora or RHEL, is there?). Why does the entire Linux kernel run in one security context? Why are we still writing lots of software in C/C++, often without any basic security-checking abstractions (e.g. basic bounds-checking layers between I/O and parsing layers, say)? Can hardware do more to provide security with speed? No doubt there are lots of people working on "block classes of attacks" stuff; the question is, why aren't there more resources directed there?



Posted Nov 10, 2015 2:06 UTC (Tue) by timrichardson (subscriber, #72836) [Link]



> There are many reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.

This seems like a reason which is really worth exploring. Why is it so? I think it's not obvious why this doesn't get more attention. Is it possible that the people with the money are right not to prioritise this more highly? After all, what interest do they have in an insecure, exploitable kernel? Where there is common cause, linux development gets resourced. It has been this way for many years. If filesystems qualify for common interest, surely security does. So there doesn't seem to be any obvious reason why this issue doesn't get more mainstream attention, except that it really already gets enough. You could say that disaster has not struck yet, that the iceberg has not been hit. But it doesn't look like the linux development process is overly reactive elsewhere.



Posted Nov 10, 2015 15:53 UTC (Tue) by raven667 (subscriber, #5198) [Link]



That is an interesting question; certainly that is what they actually believe, no matter what they publicly say about their commitment to security technologies. What is the actually demonstrated downside for kernel developers and the organizations that pay them? As far as I can tell there isn't enough consequence for the lack of security to drive more investment, so we are left begging and cajoling unconvincingly.



Posted Nov 12, 2015 14:37 UTC (Thu) by ortalo (guest, #4654) [Link]



The key concern with this domain is that it relates to malicious faults. So, when consequences manifest themselves, it is too late to act. And if the current commitment to a lack of voluntary strategy persists, we are going to oscillate between phases of relaxed unconsciousness and anxious paranoia. Admittedly, kernel developers seem pretty resistant to paranoia. That is a good thing. But I am waiting for the day when armed land-drones patrol US streets in the vicinity of their children's schools for them to get the feeling. The days are not so distant when innocent lives will unconsciously depend on the security of (linux-based) computer systems; under water, that's already the case if I remember my last dive correctly, as well as in several recent cars according to some reports.



Posted Nov 12, 2015 14:32 UTC (Thu) by MarcB (subscriber, #101804) [Link]



Classic hosting companies that use Linux as an exposed front-end system are retreating from development, while HPC, mobile and "generic enterprise", i.e. RHEL/SLES, are pushing the kernel in their directions. This is really not that surprising: for hosting needs the kernel has been "done" for quite some time now. Apart from support for current hardware, there is not much use for newer kernels. Linux 3.2, or even older, works just fine. Hosting does not need scalability to hundreds or thousands of CPU cores (one uses commodity hardware), complex instrumentation like perf or tracing (systems are locked down as much as possible) or advanced power management (if the system does not have constant high load, it is not making enough money). So why should hosting companies still make strong investments in kernel development? Even if they had something to contribute, the hurdles for contribution have become higher and higher. For their security needs, hosting companies already use Grsecurity. I have no numbers, but some experience suggests that Grsecurity is basically a fixed requirement for shared hosting. On the other hand, kernel security is quite irrelevant on the nodes of a supercomputer or on a system running large business databases that are wrapped in layers of middleware. And mobile vendors simply do not care.



Posted Nov 10, 2015 4:18 UTC (Tue) by bronson (subscriber, #4806) [Link]



Linking



Posted Nov 10, 2015 13:15 UTC (Tue) by corbet (editor, #1) [Link]



Posted Nov 11, 2015 22:38 UTC (Wed) by rickmoen (subscriber, #6943) [Link]



The assembled likely recall that in August 2011, kernel.org was root compromised. I am sure the system's hard drives were sent off for forensic examination, and we have all been waiting patiently for the answer to the most important question: What was the compromise vector? From shortly after the compromise was discovered on August 28, 2011, right through April 1st, 2013, kernel.org included this note at the top of the site News: 'Thanks to all for your patience and understanding during our outage and please bear with us as we bring up the different kernel.org systems over the next few weeks. We will be writing up a report on the incident in the future.' (Emphasis added.) That comment was removed (along with the rest of the site News) during a May 2013 edit, and there hasn't been -- to my knowledge -- a peep about any report on the incident since then. This has been disappointing. When the Debian Project discovered unexpected compromise of several of its servers in 2007, Wichert Akkerman wrote and posted an excellent public report on exactly what happened. Likewise, the Apache Foundation did the right thing with good public autopsies of the 2010 Web site breaches. Ars Technica's Dan Goodin was still trying to follow up on the lack of an autopsy on the kernel.org meltdown -- in 2013. Two years ago. He wrote: Linux developer and maintainer Greg Kroah-Hartman told Ars that the investigation has yet to be completed and gave no timetable for when a report might be released. [...] Kroah-Hartman also told Ars kernel.org systems were rebuilt from scratch following the attack. Officials have developed new tools and procedures since then, but he declined to say what they are. "There will be a report later this year about site [sic] has been engineered, but don't quote me on when it will be released as I am not responsible for it," he wrote. Who is responsible, then? Is anyone? Anyone? Bueller? Or is it a state secret, or what? Two years since Greg K-H said there would be a report 'later this year', and four years since the meltdown, nothing yet. How about some information? Rick Moen [email protected]



Posted Nov 12, 2015 14:19 UTC (Thu) by ortalo (guest, #4654) [Link]



Less seriously, note that if even the Linux mafia does not know, it must be the Venusians; they are notoriously stealthy in their invasions.



Posted Nov 14, 2015 12:46 UTC (Sat) by error27 (subscriber, #8346) [Link]



I know the kernel.org admins have given talks about some of the new protections that have been put into place. There are no more shell logins; instead everything goes through gitolite. The different services are on different hosts. There are more kernel.org staff now. People are using two-factor authentication. Some other stuff. Do a search for Konstantin Ryabitsev.



Posted Nov 14, 2015 15:58 UTC (Sat) by rickmoen (subscriber, #6943) [Link]



I beg your pardon if I was somehow unclear: that was said to have been the path of entry to the machine (and I can readily believe that, as it was also the exact path of entry into shells.sourceforge.net, many years prior, around 2002, and into many other shared Internet hosts for many years). But that is not what is of primary interest, and is not what the long-promised forensic study would primarily concern: how did intruders escalate to root? To quote the kernel.org administrator in the August 2011 Dan Goodin article you cited: 'How they managed to use that to root access is currently unknown and is being investigated'. OK, folks, you have now had four years of investigation. What was the path of escalation to root? (Also, other details that would logically be covered by a forensic study, such as: whose key was stolen? Who stole the key?) That is the kind of autopsy that was promised prominently on the front page of kernel.org, to reporters, and elsewhere for a long time (and then summarily removed as a promise from the front page of kernel.org, without comment, along with the rest of the site News section, and apparently dropped). It still would be appropriate to know and share that information. Especially the datum of whether the path to root privilege was or was not a kernel bug (and, if not, what it was). Rick Moen [email protected]



Posted Nov 22, 2015 12:42 UTC (Sun) by rickmoen (subscriber, #6943) [Link]



I have done a closer review of the revelations that came out soon after the break-in, and think I have found the answer, via a leaked copy of kernel.org chief sysadmin John H. 'Warthog9' Hawley's Aug. 29, 2011 e-mail to shell users (two days before the public was informed), plus Aug. 31st comments to The Register's Dan Goodin by 'two security researchers who were briefed on the breach': root escalation was via exploit of a Linux kernel security hole. Per the two security researchers, it was one both extremely embarrassing (wide-open access to /dev/mem contents, including the running kernel's image in RAM, in 2.6 kernels of that day) and known-exploitable for the prior six years by canned 'sploits, one of which (Phalanx) was run by some script kiddie after entry using stolen dev credentials. Other tidbits:

- Site admins left the root-compromised Web servers running with all services still lit up, for several days.
- Site admins and the Linux Foundation sat on the information and failed to inform the public for those same several days.
- Site admins and the Linux Foundation have never revealed whether trojaned Linux source tarballs were posted in the http/ftp tree for the 19+ days before they took the site down. (Yes, git checkout was fine, but what about the thousands of tarball downloads?)
- After promising a report for several years and then quietly removing that promise from the front page of kernel.org, the Linux Foundation now stonewalls press queries.

I posted my best attempt at reconstructing the story, absent an actual report from insiders, to SVLUG's main mailing list yesterday. (Necessarily, there are surmises. If the people with the facts were more forthcoming, we would know what happened for certain.) I do have to wonder: if there is another embarrassing screwup, will we even be told about it at all? Rick Moen [email protected]
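
(For readers unfamiliar with the hole described above, here is a minimal sketch of what "wide-open /dev/mem" means in practice. The program and the 1 MiB offset are my own illustrative assumptions, not anything taken from the kernel.org forensics: on 2.6-era x86 kernels built without CONFIG_STRICT_DEVMEM, any root process could seek to an arbitrary physical address and read it, running kernel image included, which is exactly the kind of primitive a memory-patching rootkit such as Phalanx builds on.)

/* devmem_peek.c - minimal sketch, not from the kernel.org investigation.
 * Shows that /dev/mem exposes raw physical memory to root.
 * On kernels with CONFIG_STRICT_DEVMEM, reads of ordinary RAM above the
 * low 1 MiB are refused; on the wide-open 2.6 kernels of 2011 they worked. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[64];
    off_t phys_addr = 0x100000;          /* arbitrary physical address (1 MiB) */

    int fd = open("/dev/mem", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }
    if (lseek(fd, phys_addr, SEEK_SET) < 0 ||
        read(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
        perror("read /dev/mem");         /* a refusal here means STRICT_DEVMEM did its job */
        close(fd);
        return 1;
    }
    printf("read %zu bytes of physical memory at 0x%lx\n",
           sizeof buf, (long)phys_addr);
    close(fd);
    return 0;
}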



Posted Nov 22, 2015 14:25 UTC (Sun) by spender (guest, #23067) [Link]



Additionally, it is preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on. -Brad



How about the long-overdue autopsy on the August 2011 kernel.org compromise?



Posted Nov 22, 2015 16:28 UTC (Sun) by rickmoen (subscriber, #6943) [Link]



Thank you for your comments, Brad. I had been relying on Dan Goodin's claim of Phalanx being what was used to gain root, in the bit where he cited 'two security researchers who were briefed on the breach' to that effect. Goodin also elaborated: 'Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines.' This was the first time I had heard of a rootkit being claimed to be bundled with an attack tool, and I noted that oddity in my posting to SVLUG. That having been said, yeah, the Phalanx README doesn't specifically claim this, so then maybe Goodin and his several 'security researcher' sources blew that detail, and nobody but kernel.org insiders yet knows the escalation path used to gain root.

> Additionally, it's preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.

Arguable, but a tradeoff; you can poke the compromised live system for state information, but with the downside of leaving your system running under hostile control. I was always taught that, on balance, it's better to pull power to end the intrusion. Rick Moen [email protected]
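
(For the curious: "live memory acquisition" roughly means copying out kernel memory through an interface such as /proc/kcore before power is pulled. The sketch below is only my own illustration of the idea, not a tool anyone used on kernel.org; it merely lists the loadable segments that a real imager, LiME for instance, would actually dump and hash as evidence.)

/* kcore_segments.c - illustrative sketch only.
 * Enumerates the memory segments that /proc/kcore (an ELF "core file" view
 * of live kernel memory) exposes; a real acquisition tool would copy each
 * PT_LOAD segment out to evidence storage before the machine is powered off.
 * Needs root, and a kernel that has not locked /proc/kcore down. */
#define _GNU_SOURCE
#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/proc/kcore", O_RDONLY);
    if (fd < 0) {
        perror("open /proc/kcore");
        return 1;
    }

    Elf64_Ehdr eh;
    if (read(fd, &eh, sizeof eh) != (ssize_t)sizeof eh) {
        perror("read ELF header");
        close(fd);
        return 1;
    }

    for (int i = 0; i < eh.e_phnum; i++) {
        Elf64_Phdr ph;
        off_t off = eh.e_phoff + (off_t)i * eh.e_phentsize;
        if (pread(fd, &ph, sizeof ph, off) != (ssize_t)sizeof ph)
            break;
        if (ph.p_type == PT_LOAD)        /* a chunk of mapped kernel memory */
            printf("segment %2d: vaddr 0x%016llx  size %llu MiB\n", i,
                   (unsigned long long)ph.p_vaddr,
                   (unsigned long long)(ph.p_memsz >> 20));
    }
    close(fd);
    return 0;
}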



Posted Nov 20, 2015 8:23 UTC (Fri) by toyotabedzrock (guest, #88005) [Link]



Posted Nov 20, 2015 9:31 UTC (Fri) by gioele (subscriber, #61675) [Link]



With "something" you mean those who produce these closed supply drivers, proper? If the "client product firms" just stuck to using parts with mainlined open supply drivers, then updating their merchandise can be much simpler.



A new Mindcraft moment?



Posted Nov 20, 2015 11:29 UTC (Fri) by Wol (subscriber, #4433) [Link]



They have ring zero privilege, can access protected memory directly, and cannot be audited. Trick a kernel into loading a compromised module and it is game over. Even tickle a bug in a "good" module, and it is probably game over - in this case quite literally, as such modules are typically video drivers optimised for games ...
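
(To make that concrete, here is a minimal, generic loadable-module skeleton of my own; it is not any real driver. Everything in it runs in ring 0 with the same privileges as the rest of the kernel, so once such a module is loaded nothing prevents its code from reading or patching arbitrary kernel memory, and no userspace audit can see what it does.)

/* ringzero_demo.c - generic illustration, not a real driver.
 * Build with the usual obj-m kernel Makefile against the running kernel's
 * headers. Once insmod'ed, this code executes in ring 0: it could
 * dereference any kernel address, rewrite credentials, or hook syscalls,
 * and nothing in userspace could audit it. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

static int __init ringzero_demo_init(void)
{
    pr_info("ringzero_demo: loaded with full kernel privileges\n");
    return 0;
}

static void __exit ringzero_demo_exit(void)
{
    pr_info("ringzero_demo: unloaded\n");
}

module_init(ringzero_demo_init);
module_exit(ringzero_demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal sketch showing that modules run in ring 0");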