I think laptops are only one tiny part of why we should be excited about it.
Imagine the carbon footprint difference on a server farm...
Hopefully your server farm isn't idling enough to make much of a difference.
There are lots of reasons that might happen. Lots of businesses are tied to peaks at specific times of day or to seasonality, and aren't yet sophisticated enough to build in elasticity.
Even AWS has no reasonable automated way to be elastic in a vertical direction, like auto-changing instance size. And some apps can't scale well horizontally.
My database servers, for example, are built for "Easter Sunday Attendance" and underutilized the rest of the time. We do better with things like app and web servers, but there are inefficiencies.
The linked article mentions (on page 3) that, on servers, the gain was seen when not idling: "On this Tyan server, the idle power usage ended up being the same across these three most recent kernel branches. However, the power usage under load was found to have some nice improvements."
Many if not most servers are more idle than not in my experience. I haven’t seen many servers that utilize > 50% CPU on average.
Also, if you're designing for high availability, you're going to overprovision by definition, otherwise the loss of a server or datacenter is going to cause a cascading failure.
It's not just the power used while idling. It's the power usage in general.
>This does not seem to be shaping up to be a particularly big release, and there seems to be nothing particularly special about it.
>10-20% less power usage
Is this one of those "not big and professional like GNU" Linus moments?
That's insane. Does this apply to all laptops running Linux?
Seems like so. From Phoronix: "I began the weekend work with trying out a Lenovo ThinkPad X1 Carbon with an Intel Core i7 5600U processor... It's a mature Broadwell platform that has been working well under Linux for years. But not exactly a system I would expect years later to have a significant power boost from."
> while idle
Which never happens on laptops because of the web browser /s
I wonder if the time has come to add user-configurable resource consumption throttles to browsers, eg. settings for max CPU, max FPS (foreground and background) for developer-triggered redraws, etc. On my laptops I’d have it ratcheted all the way down, because not needing to plug in and my lap not being cooked is more important than whatever frills the websites I visit have. And if some site can’t run properly while throttled, well, there’s probably a more lightweight alternative I should be using instead.
You can do that with ulimit. But it affects the whole browser, so it is not very granular.
Tab-specific and/or domain-specific resource management would be more useful.
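For what it's worth, ulimit is per-process because it's a thin shell wrapper around the POSIX rlimit API, so there is nothing to hang a per-tab limit on. A minimal sketch of that API using Python's stdlib `resource` module (POSIX-only; the 3600-second cap is an arbitrary example value):

```python
import resource

# Read the current CPU-time limit for this process (what `ulimit -t` shows).
soft, hard = resource.getrlimit(resource.RLIMIT_CPU)

# Pick an example soft limit of 3600 CPU-seconds, clamped to the hard limit.
new_soft = 3600 if hard == resource.RLIM_INFINITY else min(3600, hard)

# The kernel delivers SIGXCPU once the *whole process* has burned this much
# CPU time; the limit cannot be scoped to one thread, let alone one tab.
resource.setrlimit(resource.RLIMIT_CPU, (new_soft, hard))

print(resource.getrlimit(resource.RLIMIT_CPU)[0])
```

Per-tab or per-site budgets would have to live inside the browser itself, or in cgroups, which can at least scope limits to a process tree rather than a single process.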
If using Firefox, I'm very happy using "Auto Tab Discard". It does what you want.
Firefox Quantum seems to beat Chrome in this area.
Cheers for the repo reminder, installing this now. Really hoping this will make my Air last longer.
FYI link to packages: http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.17-rc1/
This should be in the title! What an improvement!
This sounds amazing and I'm looking forward to testing it. I'm thinking of finally pulling the trigger and getting my first non-Mac laptop in over half a decade. I've been rather impressed with how well Linux behaves on laptops nowadays. This is the cherry on top.
Well now is a good time. The next Ubuntu LTS is about to appear; 18.04 will be released on April 26 and is supported until April 2023.
Do you know what is the other thing that 4.17 also does? Optimizing idle loops so your laptop can run cooler for longer.
Phoronix's initial testing suggests 10-20% less power usage while idle, which is fantastic news for those who own a Linux laptop.
Since 4.0 or so, Linux has been rock solid for me, so I no longer anticipate new kernel releases, but this is one of those times I'll run the cutting-edge kernel because it is so cool. If you run a relatively recent version of Ubuntu, you can also test it by googling for the kernel PPA. It's three clicks away at most.
> I often wish I could be paid just to refactor and rewrite existing code bases to make them more maintainable.
Yeah, me too. I'm very good at it and it's quite satisfying. Unfortunately it's very hard to communicate its business value (although the business value is huge in some cases).
Give me a ring if you all ever find a place that lets us use our powers to their fullest. I feel like realizing that removing more code than you put back is usually a good thing is a tipping point in a coder's career.
Ironically, programmers should strive to write as little code as possible, while reading as much code as possible.
If we can solve a problem without writing any code at all, then that's the most maintainable and bug free solution.
See "The Best Code is No Code At All": https://blog.codinghorror.com/the-best-code-is-no-code-at-al... .
Quoted for emphasis:
> Every new line of code you willingly bring into the world is code that has to be debugged, code that has to be read and understood, code that has to be supported. Every time you write new code, you should do so reluctantly, under duress, because you completely exhausted all your other options.
It's worth noting: code you import counts toward your total line count. Don't think that because someone else who doesn't work with you wrote the code, it doesn't count. In some ways, that's worse. I've spent all day today debugging a no-longer-maintained library which is used in a legacy codebase I'm maintaining.
See also smr.c from http://www.ioccc.org/years.html#1994 (the world's shortest quine!).
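(Spoiler for anyone who doesn't want to dig through the archive: smr.c is a zero-byte file, which trivially reproduces itself. A non-empty quine takes slightly more effort; here is the classic two-line construction, sketched in Python rather than C:)

```python
# The two lines below print their own source exactly: %r re-inserts the
# string with quotes and escapes intact, and %% collapses to a single %.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running those two lines and diffing the output against them shows they match byte for byte.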
That may be one of the best things I've seen all week.
This is commoditization. People "tried and reinvented" the wheel many times back in the day, out of lack of time, knowledge, legal/licensing issues, missing essential features, trying to be smart, or many other reasons. And now there are libraries that solve the same things far better, which automatically makes all the old ones legacy. It's like inflation: why take money away from people when you can just print more?
Today, while I love the simplicity of Go, I shudder to fathom how many copy-pasted lines of Go code I wrote will be commoditized in the next year or two, automatically creating legacy. And there will be nobody to give a ring to except my past self.
I spot a pattern with these replies. Perhaps they could be grouped together in their... ha, yep, I'm another one - put me on that there list too eh.
If we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent" - Dijkstra
I've sometimes thought about setting up a consultancy that would do just this.
Was your code written by amateurs? Are you running your 50-person office off excel? Give us a call.
Clean up of anything, be it your code base or home, always feels good. Reminds me of my favorite quote:
One of my most productive days was throwing away 1,000 lines of code. -Ken Thompson
> we actually removed more lines than we added
My best year, I removed 10,000 lines of code more than I added without removing any functionality from the project (and in fact adding some new features). It isn't always a good thing to do this- but when it is, you know it as soon as you start reading the code.
I often wish I could be paid just to refactor and rewrite existing code bases to make them more maintainable.
I understood that SPDX can easily be used in new projects, for example.
But when an earlier project has a license that explicitly states something like "The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.", can you replace it by a single SPDX link?
I think that technology lets you easily link to something on the internet instead of providing an offline text file, but is it lawful? And is it justifiable, since having _n_ text files saying the same thing compresses easily anyway?
Got me thinking about it: does removing lines of not-code really make the file smaller, or just a little?
For my own code I will use spdx references in the source code files but still have a LICENSE file in the repo root like I already do with the full text of the license.
IANAL so I can’t say what you legally can and cannot do with other people’s code but I would think that placing a copy of each of the licenses used by others in your repo and name them as LICENSE_THIRDPARTY_MIT, LICENSE_THIRDPARTY_GPL, etc and mention in your README that these correspond to the spdx references found in source files would mean that you were still in compliance.
As for the question “does removing lines of not-code make the file smaller or just a little”; it does not reduce the SLOC count of course. Personally I just find the idea of spdx appealing because it means that when I open files to edit them I don’t have to scroll past a lot of license text first. Additionally scanning source files for what license is being used will be simplified.
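For anyone who hasn't seen one, the tag is a single machine-readable comment at the top of the file; a sketch in Python (the author name and license choice here are just placeholders):

```python
# SPDX-License-Identifier: MIT
# Copyright (c) 2018 Jane Example  (placeholder; keep your real notice line)

def slug(title):
    """Trivial module body: the one-line SPDX tag above stands in for the
    ~20-line MIT boilerplate block that would otherwise sit up here."""
    return "-".join(title.lower().split())

print(slug("Linux 4.17 rc1"))  # prints linux-4.17-rc1
```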
Not a complete answer to your question, but at least if you hold the copyright to the source code (including all modifications), you're of course free to re-license it.
As someone that has a full copy of the license in each of his source files, I found this the most interesting:
> we also got rid of some copyright language boiler-plate in favor of just the spdx lines
So I looked up spdx.
I’m going to start using this too.
"The architectures that are gone are blackfin, cris, frv, m32r, metag, mn10300, score, and tile. And the new architecture is the nds32 (Andes Technology 32-bit RISC architecture)."
Here is the list of survivors: after 4.17-rc1, Linux still supports 23 archs.
The problem is that often enough it is not "known working good code", because nobody has been compiling and/or running and/or stress-testing this code for a long time. Given the constant churn in the kernel, refactoring of APIs, etc., it's quite likely that subtle (or not so subtle) breakages creep into that code over time. (Even if the people who perform this refactoring try to update these archs too; or perhaps I should say "especially if", because you usually can't test them, after all.)
Leaving it in gives the wrong impression that this is "known working good code", which is one reason to remove it. Another is that removing them makes it easier to be agile and refactor things, because you don't have to worry about these archs anymore.
> because usually you can't test them
We should be able to test that code. We could if, in order for an architecture or piece of hardware to be considered for inclusion, there had to be a software-based emulation tool that could be used to run a full set of tests against a kernel build.
It's a lot of work to produce that and to write such a test suite, but it can grow incrementally. We have very good (and reasonably precise) emulators for a lot of the PC and mobile space and having those running would give us a lot of confidence in what we are doing.
So you want to write a new emulator each time Intel releases a new chip? I think you're vastly underestimating the scale of this task.
Video game emulators are possible because there is only one (or a very small number of) hardware configuration(s) for each console. Emulators for mobile development are possible because they simulate only the APIs, not the actual physical hardware, down to the level of interrupts and registers.
I don't think x86 would be a particularly attractive target for emulation in this case - x86 hardware is readily available and testing is much easier than, say, SuperH or H8.
Intel probably has internal tools that (somewhat) precisely emulate their chips and it'd probably be very hard to persuade them to share, but they seem committed to make sure Linux runs well on their gear, so it's probably not a huge problem.
I think of this as a way to keep the supported architectures as supported as possible even when actual hardware is not readily/easily (or yet) available for testing. One day, when x86 is not as easy to come by as today, it could prove useful.
It's good to keep the software running on more than one platform, as it exposes some bugs that can easily elude us. Also, emulators offer the best possible observability for debugging. If they are cycle-accurate then, it's a dream come true.
We are able to create tests with emulated hardware from a specification, but writing emulation that takes into account quirks, edge cases, and speculative behaviour would be a great amount of work, even for the simplest devices.
I'd recommend reading the Dolphin emulator release notes for a reference on how much work is required to properly emulate hardware such that actual software runs without glitches and errors, even for (AFAIK) only 3 hardware sets.
> writing emulation taking into account quirks
I believe quirks would be added as they are uncovered. It'd also become a form of documentation. Emulation doesn't need to be precise to be valuable - if we can test it against an emulator on a commodity machine before testing it on metal on a somewhat busy machine, it's still a win - we don't need to waste time on a real multimillion zSeries or Cray if it crashes the emulator.
That way we could reach 100% integration-test coverage while still being able to panic on real hardware, until all the messy behaviour is implemented. It's like writing unit-test-style integration tests which then fail when provisioned :)
Passing tests and then failing when provisioned is still better than just failing when provisioned. At least you know that it should work (and maybe does, on different variations of the hardware).
> We have very good (and reasonably precise) emulators for a lot of the PC and mobile space
If that was true, services such as AWS Device Farm wouldn't exist: https://aws.amazon.com/device-farm/
I don't think covering all devices available is necessary before this approach brings some benefit to kernel developers. Also, this service would not be necessary if all phones ran the same version of Android and had displays with the same number of pixels, none of which is that much relevant for kernel developers.
Code that isn't executed and doesn't have anyone to care for it will often 'rot' inside of a larger codebase in my experience. When that happens, it adds mental overhead whenever anything related is refactored or changed, and can sometimes do nothing but create barriers for further improvements.
In this case, it looks like it was a number of unused architectures that were being put to pasture - anyone who is interested can look through commit history and pull back what they need if they're invested enough.
> m32r sound mixer
Wrong google hit. It's a minor 32-bit architecture from Renesas that seems to be targeted at ECUs. They're still on sale but I doubt there's much consumer demand to run Linux on them. They have fairly limited Flash.
Heh woops, thanks for the correction.
This is a case where a difference in degree makes a difference in kind.
Some of these haven't been sold for over 10 years and no one knows who still has one or where to get a compiler for them. Some of them are only a few years old, but have never run an upstream linux kernel (they always ran their original hacked-up/ported kernel), and again you can't find a C compiler from the last 5 years that supports them.
Linux does not drop support for architectures lightly, it was hanging on to most of these for years when they were clearly un-used un-tested zombies. And, FWIW, sound-blaster sound cards from the 90s are still supported ;)
I'm unfamiliar with kernel development cycles, but there is likely some amount of maintenance needed each release to ensure changes work on the various supported architectures, in which case leaving them in without updating them would result in an insecure, increasingly buggy mess.
It _potentially_ paves the way for a net increase in functionality. If you can make changes without worrying about if it breaks some obscure architecture that you know no one is using, that _could_ make the process smoother and thus lead to the easier inclusion of functionality that will actually be used.
So I understand the whole "SLOC doesn't always need to be positive to have made useful contributions to a codebase" thing, I do it fairly frequently. However, is the argument for removing support ("known working good code"), aka functionality, a good thing? Do people now need to draw a line in the sand and say: If I want to put linux on my m32r sound mixer, I can't use a kernel newer than 4.16?
I'm not really sure I'm arguing for or against the dropping of support, more just curious about others thoughts.
sure, Linus announces more lines of code were removed than added and the crowd goes wild, but when I do it people are all "who broke the build" and "how is the repository totally empty again."
http://lkml.iu.edu/hypermail/linux/kernel/1801.1/04345.html has more detail on the dropping of Blackfin support in particular.
Interesting. I'm a bit removed from DSPs for some time, I wonder which multi-core processors AD are talking about. Are these just common ARM and the like or do they have some new architecture in the pipeline I haven't heard about?
I am mildly surprised that Blackfin support has been dropped. A few years ago, many digital cameras had a Blackfin for both GUI and DSP. I remember working with SHARC (a Harvard-architecture CPU from Analog Devices) and seeing the main advantage of Blackfin as being that it could run Linux and was supposedly better equipped to run a complete GUI with several applications. I expected AD to take advantage of this and support Linux for some time, but instead it rotted. Good thing I did not go down this route.
Also, I guess the architecture pruning was one reason he decided not to go with the 5.0 version. It would give even more meaning to the version number.
To be fair to Linux, it does support an ever-growing list of hardware. It’s not exactly lean, but continually increasing in size is hard to avoid in its case...
That could have been avoided if they hadn't been so insistent on all drivers being part of the kernel.
Then again, for as much trouble as that policy causes it is difficult to argue with its success, since Linux supports more hardware than any other open source OS and most proprietary ones too (or all, depending on how you count ancient hardware).
More source code doesn't necessarily result in a bigger compiled kernel. Your distro isn't enabling every single option at compile time.
Oh, and you can compile your own kernel if you like - it's a really simple process. :)
Linux is kind of unusual as far as large software projects go (IMO.) It has very few build dependencies and the quality of the code is usually higher than average.
Linux surely ain't getting any slimmer. We're up to how many packet filters by now? Four? Five? How many X509 parsers are in the kernel?
well, at least you can choose from text or graphical user interfaces to browse and select the options needed to build a kernel!
I haven't compiled a kernel for over a decade, however back then it was
* make config
* make menuconfig
* make xconfig
Only one of those was of any use, the others were dumb.
Same here; I only used menuconfig, since both config and xconfig were impractical. I'd be curious to know how many people still compile their own kernel today for normal use (i.e. not for testing or kernel development). Back in the day I did, because a new card often meant the driver (if any) was available as source only, while today nearly everything is already built as a module and can be installed as such.
I compile my own ... for fun mostly. There are performance gains in selecting the correct CPU architecture for your machine. I have some old machines and it really makes a difference vs. distro stock kernels.
The modern procedure (for upgrading) is pretty straightforward:
make oldconfig; make; make modules_install; make install; reboot
Even modules that aren't distributed with the kernel are compiled via dkms, which apparently came out in 2003.
It means when I install a new kernel (as a deb), proprietary drivers like the blackmagic decklink ones are automatically recompiled.
It's very different now than in 2000.
Most of the removed stuff is unused architectures, those did not get compiled into your x86-64 kernel anyway.
Unfortunately, feature-cram and bloat seem to be the mostly inescapable reality of software, even in many open source projects where you don’t have someone from sales or marketing breathing down your neck for moar moar moar. It is nice when project maintainers can spend time on bug fixes, performance improvements, memory optimization, etc. Even better when a project can agree that they are “done” with features and all additional changes are maintenance.
I think it is one of open source's biggest failings that it succumbs to feature bloat at least as easily as proprietary software does, just for different reasons. The problem is that open source people generally work on what interests them, and simple solutions are boring.
I wish it said that kind of thing in the changelogs more often (in general). I would be much more eager to download updates if I knew the app was getting slimmer, and not fatter.
On a side note, I'm annoyed Skype now forcibly updates itself on app launch. I have no idea how big that app is getting but I just assume it's slowly getting bigger and bigger.
To quote de Saint Exupéry "...perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away..."
Deletion is the only perfect code change.
I have no idea how you read that tone from the post. It was as matter of fact as you can get save for a bit of excitement at the end about the total LOC count going down.
Which part ? For the guy it reads like an honeymoon post card.
Wow Linus sounds like an ahole
At my old company we would have a small ceremony when someone left. Part of this was their team-lead listing some of their biggest contributions to the firm.
The one that always stuck with me was one engineer who was praised for having a negative line contribution count in the thousands over his time at the company!