I agree with you. I've been using Linux since the day Linus announced it on minix-list, and for the last 12 years Ubuntu has been my daily-driver system .. and it is amazing. I'm a musician, so when people see that I'm using Ubuntu Studio for my DAW, I usually hear a few chuckles .. until I fire up Ardour, show them my plugin list, the standard suite of synths and effects I use, and so on .. that usually shuts them up at least. But what gets them really curious is when I show them how easy it is to break out the source code for any of these powerful tools, make modifications, re-install and maintain my system with very, very powerful techniques.
There simply isn't any operating system as conducive to creative tinkering and progressive enhancement of key software as Ubuntu Studio. It's way, way ahead of the pack in this regard, and I think anyone who scoffs at the idea needs to be taught the lesson that Ubuntu - and of course, Linux and the ecosystem it promotes - is really worth the effort to know, learn and understand.
Ardour is pretty great, but I'm not so sure the state of audio in general on Linux is. Granted, the last time I struggled with it (as in a couple of days of trying out different things with god knows how many different audio subsystems there were at the time) was about 5 years ago, but it still seemed impossible to get a configuration with really low I/O latency (say <10 ms), nor did it 'just work' as on the other two main operating systems. Is this better with Ubuntu Studio? Does it work nicely with e.g. RME cards?
For almost a decade now, I have had rock-solid, high-performance audio on my multi-channel DAW running Ubuntu Studio.
Rock solid. Better than Pro Tools.
I chose my hardware wisely. In my case, I've got two Presonus Firepod FP-10 interfaces, which work out of the box with JACK. Truly superlative FireWire audio interfaces, and I chalk that up to the fact that the BridgeCo chipset used in these devices was developed in a pro-Linux environment.
Of course, ymmv .. especially if you fall into the PulseAudio trap, which is designed - imho - to make audio suck on Linux as much as possible. Not so with JACK and FireWire audio - in this case, it's the best possible combination of audio hardware and software components, giving superlative latency and multi-channel I/O capabilities ..
Do you have a link to a blog post about your setup? If not, would you consider writing about it?
I am also interested in this!
It would also be great to see some audio latency numbers for ubuntu vs the rest on common hardware
IIRC Ubuntu Studio (and most (all?) professional Linux audio systems) uses JACK rather than ALSA/Pulse like most other versions of Ubuntu.
JACK has a reputation for being fiddly to set up, but puts great emphasis on low-latency/real-time and other professional use-cases. I'd expect it to be competitive, but don't really have the knowledge to make the comparisons myself.
I did find this unfinished review of latency in JACK from a few years ago: http://apps.linuxaudio.org/wiki/jack_latency_tests#does_late...
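For a rough sense of scale, JACK's buffer-induced latency is just arithmetic over the three numbers you pass to jackd. A quick sketch (the settings below are illustrative defaults, not a benchmark of any particular card, and real round-trip latency also includes converter and driver delays):

```python
# Sketch: nominal one-way buffer latency for a JACK/ALSA setup.
# These values are illustrative; they correspond to jackd's
# -r (rate), -p (frames/period) and -n (periods/buffer) flags.
sample_rate = 48000        # Hz
frames_per_period = 64
periods_per_buffer = 2     # typical for the ALSA backend

latency_ms = frames_per_period * periods_per_buffer / sample_rate * 1000
print(f"{latency_ms:.2f} ms")  # 2.67 ms, comfortably under 10 ms
```

Doubling the period size roughly doubles the figure, which is why the fiddly part of JACK setup is finding the smallest period your hardware can sustain without xruns.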
I also would like to know your setup! I'd be very interested!
I was looking into whether I should purchase Windows or buy a Mac to run a DAW, but if you're saying that Ardour is good enough, then I can just install that this weekend!
Just do it! BUT! Commit to using JACK for everything, and get rid of PulseAudio.
Ubuntu Studio isn't managed by Canonical (though I think they do receive financial contributions?) so that's a little different from the Canonical-led variants that the GP was discussing. However, I do agree with you that Ubuntu Studio is a great product. I remember using it back when it was relatively new, and even then I could see real potential in the making.
Looks like you got a lot of feedback from your comment, but if you have the time I'd like to ask a question too. I'm still getting my feet wet with the more advanced aspects of Linux, primarily Ubuntu on my Chromebook. I always hear about people tweaking programs, but it has never dawned on me what kind of tweaks are possible or even useful. I'd be interested in a couple of use-cases you've come across, like problems you wanted to solve with a program and a brief overview of how you altered the source to do so. Maybe it's embarrassing to admit, but I've never tweaked anything outside of VS Code or Atom.
First, it is definitely not embarrassing to admit, and the answer may be overwhelming, but you can customize nearly everything.
One of the simpler things is to customize the look and feel. E.g. you could try out different desktop experiences and WMs (window managers). Most of them are just an `apt-get` away and when you logout and login again you can choose and try out a different desktop.
The only thing I could contribute to the conversation so far, with regard to setting up Linux (Ubuntu Studio) as a high-performance DAW, would be this: NEVER use PulseAudio, and commit to using JACK for all your audio I/O needs.
This is really the great thing about Linux: we've got tons of options when it comes to audio I/O routing .. but it's a liability too. My advice: commit to setting up JACK and learning it - definitely the best thing about Linux audio.
I'm not sure if this is the kind of thing you're looking for or not, but here's a situation I encountered in just the last couple of days.
I have a new workstation PC I just recently put together. It's currently running Ubuntu 16.04 LTS but I plan to switch over to Arch Linux in the near future. Because I want to use LUKS (a.k.a. full disk encryption) and ZFS (and multiple ZFS pools) on it, I decided to work through the installation procedure on a laptop first (just to make sure I've got the steps down right -- an installation w/ the root filesystem on ZFS is "non-standard", and LUKS throws additional issues into the mix) before I wipe out my primary work machine.
First, by default, the LUKS setup only supports "unlocking" a single encrypted drive at boot. Since the laptop has a pair of SSDs (in a ZFS mirror), I need to unlock both of them (and my workstation has an NVMe, two SSDs, and two HDDs, so that'll be an even bigger problem!). I had to "hack on" the initramfs stuff so that I could unlock both drives when the system boots up.
In addition, I have several Yubikeys and wanted to use them with LUKS. I set them up in Challenge-Response mode, programmed them with a "secret key" that I generated, and then came up with a (relatively) short(er) passphrase that I can use to unlock the disks. I also had to create a custom (initramfs) "hook" to prompt for my passphrase, detect if a Yubikey is attached and, if so, send the "challenge" to the Yubikey, receive the response from it, and use that to unlock the disks. If a Yubikey isn't attached, then I have to provide a (much longer, more secure) "backup" passphrase to unlock (each of) the disks.
On the laptop, this means I can enter in my (relatively) short(er) passphrase once and unlock both disks. On my workstation, it means I'll enter it once to unlock all five disks. The alternative would be to type in my much longer, more secure passphrase FIVE separate times every time I boot the system up.
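Conceptually the challenge-response step in that hook is small. Here's a sketch in Python of the derivation (the slot secret and passphrase below are made up, and the real hook is shell inside the initramfs, but HMAC-SHA1 is the computation a Yubikey in challenge-response mode performs in hardware):

```python
import hmac
import hashlib

# Illustrative values only: the real slot secret is generated once,
# programmed into the Yubikey, and never touches the disk afterwards.
slot_secret = bytes.fromhex("a3" * 20)      # 20-byte HMAC-SHA1 key
short_passphrase = b"my short passphrase"   # typed at boot, sent as challenge

# The Yubikey returns HMAC-SHA1(secret, challenge); the hook then feeds
# this 20-byte response to cryptsetup as the LUKS key material.
response = hmac.new(slot_secret, short_passphrase, hashlib.sha1).digest()
assert len(response) == 20
```

So the secret never has to be typed: the short passphrase is useless without the physical key, and the long backup passphrase remains as a fallback LUKS keyslot.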
None of this was very hard or time consuming, but it was necessary in order to meet my exact goals and I'm glad that Linux allows me the opportunity to do exactly this type of non-standard stuff.
I own a Toshiba Chromebook 2; there are things that don't work out of the box (sleep mode, laptop mode), so I wrote some scripts to fix those issues. Also, I wanted to have the same keyboard experience that I had on Chrome OS, so I tweaked the keyboard (upper row media keys, shift+backspace=del, etc.)
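For anyone wanting to do the same, the shift+backspace=del part is a couple of lines in an ~/.Xmodmap (the keycode here is typical for backspace under X, but check yours with `xev`):

```
! ~/.Xmodmap sketch: the second keysym is what the key produces with Shift held
keycode 22 = BackSpace Delete
```

Load it with `xmodmap ~/.Xmodmap` from your session startup.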
I have no doubt that you can work efficiently in such an environment, and I firmly believe that audio-related work these days is rarely limited by the available tools, but by knowing how to use them in depth.
However you're giving up a lot of third party instrument and effect options, aren't you? Again, those are not essential in any way, but there are really worthwhile options I wouldn't want to miss. Do you rely on internal plugins only?
And have you tried Bitwig yet? With its similarities to Ableton Live, I'm sure it has the potential to push audio production on Linux to some degree.
>However you're giving up a lot of third party instrument and effect options, aren't you?
You'd be surprised just how many VSTs work smoothly and reliably under Linux.
Tried it, but not a fan - I have too many external hardware sequencers to be bothered with setting it up with the rest of my system. (Hardware sequencers run forever!) I do, however, have Ardour set up for perfect multi-channel recording, so every instrument has its own digital tracking, and this has been wonderful ..
A few years back Linux Voice reviewed Bitwig and they seemed to like it a lot:
As plugins go, many good ones are available in LV2 or similar formats so no problem on that front.
>I'm a musician, so when people see that I'm using Ubuntu Studio for my DAW, I usually hear a few chuckles .. until I fire up Ardour, show them my plugin list, the standard suite of synths and effects I use, and so on .. that usually shuts them up at least.
Well, it's not that impressive compared to something like Live or Cubase.
Everything the Beatles, Sex Pistols, Sinatra, Louis Armstrong, Lou Reed, Abba, Dylan, Led Zeppelin, The White Stripes, and Miles Davis used to record with was infinitely less powerful than Ardour. Yet somehow they were able to cough up some acceptable music.
Yeah, and Bach wrote on mere sheets of paper.
Your point being? "A subpar DAW should be enough for everybody in 2017 as long as it's better than 70s era studio technology"?
Not to mention that you're comparing apples and oranges. These people did rock, punk and/or jazz, not modern R&B, pop or electronic music, which tend to be more demanding of a DAW, for one.
Second they had tens of thousands and sometimes millions of dollars as a budget for a single record, great facilities with top-notch acoustics, multiple producers, access to orchestras and top-notch musicians, and crazily expensive consoles with people to operate them for them.
Third, one doesn't compete against Louis Armstrong-era sound but against one's own generation's sound and production techniques.
The point is about diminishing returns. There's nothing you can do in one modern DAW that you can't with the others. The enhancements you get year over year are minimal and the amount of users who are pushing those tools to their limits is negligible.
Heck, the more advanced users end up going with a lot of hardware-based solutions and analog equipment.
There is nothing sub-par about Ardour as a DAW. It ranks up there with the other tools you mention in terms of capabilities and features. I simply don't need to use anything else.
Less powerful but more usable.
Can you modify Live or Cubase?
Or an even better question: can you plug Live into Cubase? Audio on Linux is not just playing catch-up; the whole jackd system, which lets you take the output from any program supporting JACK (most programs for musicians) and use it as an input for another one, is unparalleled, afaik.
For those who don't know, JACK lets you plug any software, virtual instrument or sound card in/out into any other one almost as simply as you would on a physical mixer. This is also the meaning of Freedom: you are not condemned to follow the vendor's choices in terms of possibilities.
>This is also the meaning of Freedom: you are not condemned to follow the editor choices in term of possibilities.
There's also another meaning of freedom: freedom from having to care or worry about the platform, and being free to create, because you have a thing that "just works".
For musicians that's more important than "route everything to everything" (for which there are solutions even for commercial DAWs, e.g. Rewire for those that do care for it).
It's all about tradeoffs, but Jack and "freedom to modify DAW code" is pretty down on the list of priorities for 99% of musicians.
Why do you think the free/open source stuff doesn't "just work"?
Also, a lot of the time using FOSS is a choice we make not because of the price or principles but for convenience. I simply don't want to juggle licenses and dongles between home and studio computers, or for that matter be forced to install licensing software that is more or less a rootkit...
Live, for one, comes with a full blown programming environment called Max for Live, where you can create all kinds of things on par with native plugins and controls.
But the more essential question is: does it matter?
If one is an experimental electronic musician / sound artist, it might be worth it.
But the average musician wants a good environment with the features they need built-in (and easy access to quality third party plugins etc for FX etc), not to tinker (and even less so, in C/C++) with their DAW.
"modify" is a very flexible term, but at least Live offers lots of customization options, particularly when you consider Max for Live: https://www.ableton.com/en/live/max-for-live/
Can you modify Max?
Do you know Max? It's basically a toolkit for building audio, video and other applications/plugins. Creating and modifying is its main purpose.
Then do it with Pure Data!
Why, so we can score some freedom points?
If Pure Data was as mature and polished as Max (which it is not) we just as well would.
If not, those things matter more than openness when you have music to record and work to do.
Does it matter at all when one doesn't want to?
This is a thread about a Free / Open Source operating system announcement. Why are Cubase, Live and Max even mentioned here?
The problem for me is not superior VS inferior or impressive VS unimpressive software but free software VS closed software.
>The problem for me is not superior VS inferior or impressive VS unimpressive software but free software VS closed software.
The problem for most people is good vs inferior, or "getting my work done with this" vs "this is lacking".
Free vs closed doesn't even come second to most people's concerns, and I'm not even sure it should.
My main problem has recently been "why can't I open these songs anymore" vs "can still edit old productions anywhere, any OS, no need for intrusive DRM that takes over my entire laptop just because Steinberg"
I think that the most charitable answer is that Ubuntu changed its basic premise, and some people don't like the newer approach.
Back when Ubuntu started, it was explicitly a variation of Debian with a GNOME desktop, plus some custom parts to make a "Linux for Human Beings", such as an easy-to-use installation process. Mark Shuttleworth had been a Debian developer, Canonical hired its technical people from Debian contributors, and everybody was sensitive to the need for Ubuntu to work with the upstream projects (and vice versa).
Linux is a complicated system of components, and desktops are far more complex than servers, so it's essential for developers with different employers to cooperate to get things done. Important decisions require developers from multiple organizations to reach a consensus. It can be a slow and frustrating process, and it's easy for awkward people to cause a lot of hassle for everyone else.
In practice, Canonical always struggled to work well with others, and eventually they switched to developing their own convergence stack (to span desktops, smart TVs and mobile) that happens to use Linux components but shares increasingly less common ground with the rest of the community: using their own graphics system, desktop environment, and their own software packaging system. In other words, Ubuntu has been morphing from a community-friendly Debian variant into an Android-style single-vendor system.
Google can do this with Android and not take the same level of flak because Android has always been a commercial product that happens to have FOSS components, and they seem to cooperate reasonably well with the rest of the community in areas of shared interest.
I've stopped letting these people lead me by the nose. For the community support and staying compatible I'm currently sticking with an Ubuntu upgrade path, but I've long left the default user interfaces behind. For a window manager I use Openbox with tint2. I don't find much utility in Debian/Ubuntu-derived distros like Linux Mint, jumping from being based on Ubuntu, to Debian, then back to Ubuntu, and breaking compatibility along the way.
(IMHO, of course)
I think a big part of the problem is Red Hat. Not that they're malicious or doing something wrong or anything of the sort; just that they're overrepresented in key projects that determine the direction of Linux. With enough people in key positions, it is much easier for others to write off working upstream, because they have different business interests and feel like contributing is working for Red Hat rather than with them. Here LWN, with their stats on contributor affiliation for kernel releases, can help (since those show that the contributors are not even mostly one company), as does having people employed by the Linux Foundation. This doesn't occur as much for non-kernel software, though.
I'm not so sure kernel contribution is really such a big deal in day-to-day Linux usage.
Sure, everyone loves it if stuff just works - but if I look back at my first years of Linux (1998-2007) versus the last (nearly) decade - I haven't actively cared what kind of kernel, what version of kernel or whatever I am running on my laptops or servers. It just works[tm]. I'm really not trying to downplay the kernel developers' work - but if you're not using the latest hardware or don't need the last bit of performance.. many people could live with security fixes alone (hey, surprise, all the kernels in LTS distros only get security backports).
(For me at least) in the last 5+ years the real development was in userland (for better or worse) - but I've stopped caring for anything besides "works fine" on a kernel level. Sad? Maybe.
Right; I just mentioned the kernel because it's the only thing I know of with consistent stats on what companies are contributing (and I mostly skip over those anyway). I just don't know anybody who publishes the same for, say, GNOME. It probably doesn't help that the boundaries between the various projects needed for a desktop Linux are blurrier.
And let's not forget the Torvalds policy of "we do not break user space". This makes it hard to get new stuff into the kernel, because any interface between the kernel and userspace that the new stuff exposes will from that day on be set in stone.
The desktop devs, on the other hand, seem all too willing to break existing interfaces, treating them more like internal code than something exposed to third parties.
So is anyone giving Red Hat grief for, in effect, forcing everyone to use systemd? Or is it ok for them to do it because they "won"?
I don't think Ubuntu deserves flak just for forging their own path at times, even if it creates a bit of a fork in some stacks.
Is anybody giving Red Hat grief? Are you joking?
People bitch constantly about evil Red Hat. Any article about systemd, flatpak and so on.
> Or is it ok for them to do it because they "won"?
It's Ok because systemd is actually pretty great.
You're saying the ends justify the means when it comes to "going it alone" in the Linux ecosystem, and I don't think that's clear.
No, I am saying that sometimes it is necessary to seriously modernise the 50+ year old UNIX architecture, as great as it is.
Of the people who complain about systemd, I have heard of very few actually managing dozens of servers with it daily. However, I have heard tons of praise from sysadmins, because it is a lot saner writing and managing systemd unit files than a bunch of hacky shell scripts; so much so that FreeBSD, the "*nix way or the highway" OS, wants a clone of systemd for themselves.
Also, there are non-systemd options, look at Gentoo or Void Linux.
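To make the unit-file point concrete, here is roughly what a complete service definition looks like under systemd. The service name and binary path are invented for illustration; the sysvinit equivalent would be a page of pidfile-juggling shell:

```
# /etc/systemd/system/example.service (hypothetical)
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/exampled
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now example`; restart-on-crash, ordering, and logging come for free.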
Thing is that the init side of systemd is the least of the problems.
Right now the Freedesktop approved way of handling suspend and hibernate on a Linux laptop is via systemd-logind, the systemd session and seat manager!
What used to handle it, powerkit/powerd, is now pretty much just a wrapper around the power management parts of logind.
Never mind that logind itself up and replaced ConsoleKit, which could be used independently.
Or that udev, a project for managing the contents of /dev on Linux installs, which existed on its own for almost a decade, is these days part of the systemd source tarball. On paper it can be used independently, but in practice the procedure for extracting udev from systemd changes at random intervals.
And where did you get the idea that FreeBSD wants to clone systemd?! Best I recall, the developer of launchd (the OS X/macOS inspiration for parts of systemd) was lobbying for FreeBSD to adopt launchd. But he was largely rebuffed and has since opted to develop his own FreeBSD fork instead.
There is some effort underway to clone the external systemd APIs, but the last time I read anything about it they had gotten hung up on the ever-morphing nature of logind.
And more recently there has been effort spent towards developing a BSD DE that does not depend on anything Freedesktop-derived, systemd included. Because the major reason for having anything systemd-related on the BSDs was to support the major DEs, GNOME in particular.
So no, the BSDs are in no way "envious" of the systemd shoggoth. And why should they be? Their own init scripts are a haven of sanity compared to the sysv derivative that RH and Debian/Ubuntu clung to for so long. Heck, even venerable Slackware adopted a variant of the BSD-style init, and they seem uninterested in replacing it any time soon.
I think that building a good UI for the desktop is only possible if UI developers agree on a "look and feel", and that may be simpler for Canonical than for a community like Debian, so they came to the conclusion that building their own was just the way to go. AFAIK their home-grown desktop is FOSS, so it's not like you are buying into a proprietary system, and this is tremendous.
When I started with Linux, I used a distribution like Mandrake with a KDE desktop, and by default you got several text editors, browsers and other tools (the drop-down menu was huge). As a starter (I did not know any programming then and just wanted to play with a different OS, coming from a Microsoft stack) I was overwhelmed, and although I could appreciate the customization, it was hard to get started as there were too many options.
Ubuntu changed that and they really thought a lot about how to improve the Linux Desktop experience and IMHO they did very, very well. E.g. I use Ubuntu as you can find lots of documentation, it is supported well in cloud infrastructure and they (re)distribute a lot of packages.
In particular, I do not get the criticism of the desktop environment. In no way did Ubuntu make your desktop less customizable; I use e.g. xmonad as my WM, which is totally straightforward and just works. My mom uses the default desktop and is pretty happy with it.
That may be controversial, but I think there are often just 2 types of users: those who want it to just work without any configuration, and those who want to customize everything. I think Ubuntu is doing pretty well (currently; I hope that never changes) in both camps.
> I think that building a good UI for the desktop is only possible if UI developers agree on "look-and-feel" and that may be simpler for Canonical than for a community like Debian and they came to the conclusion that building their own is just the way to go.
Yes, and it seems like a number of other distribution developers have come to the same conclusion as well. We really need some thoughtful analysis about this trend.
> AFAIK their home-grown desktop is FOSS, so it's not like you are buying into a proprietary system and this is tremendous.
Source availability is part of enabling a broader developer community, but there's a huge amount of other work that is needed, as well. I haven't looked at the current state of Unity development, but the reputation of the project is that it is built for Ubuntu.
> Ubuntu changed that and they really thought a lot about how to improve the Linux Desktop experience and IMHO they did very, very well.
I totally agree: Ubuntu really was revolutionary when it started. It's kind of amazing how many innovative things Mark Shuttleworth and his team did right at the start.
> That may be controversial, but I think there are often just 2 types of users; those that want it to just-work without any configuration and those who want to customize everything. I think Ubuntu is doing pretty well (currently, I hope that never changes) in both camps.
There is also a third audience for any piece of software that is large enough to be programmable: developers. For desktops, you have third-party theme authors as well as application developers, people that want to work on the desktop software itself, maintainers of other Linux OS components, and folks that want to use the source code to build their own custom projects.
> developers. For desktops, you have third-party theme authors as well as application developers, people that want to work on the desktop software itself, maintainers of other Linux OS components, and folks that want to use the source code to build their own custom projects.
I think they are part of the "possibility to customize-everything" crowd. The most important point for me in a distribution is its security record and a good package system that supports all common use-cases. The default desktop should totally be targeted at casual users, as experienced users will not agree with your base configuration anyway (I remember YaST from SUSE), so I appreciate strong opinions there; in particular, less is more (the original Ubuntu approach compared to other distributions at the time).
> I think they are part of the "possibility to customize-everything" crowd.
Sometimes, yes: I know developers that do customize heavily, but personally the furthest that I go is changing a desktop wallpaper :) (DevOps: I switch between systems all of the time).
The sentence wasn't very clear, but I was really talking more about APIs and developer experience: one of the reasons why projects switch from GTK/GNOME tech to Qt is that it's apparently much easier for them to work with.
> I don't understand why Ubuntu is receiving so much bad comments from Linux community.
That's a complex topic, but over the years, a couple of reasons come to mind: outdated packages and broken PPAs giving a bad impression to new users; slow and bloated in many respects; aggressive community behavior; CLAs; passive-aggressive blog posts and stances by the project leads towards alternative distros like Mint, or towards any criticism whatsoever; Mir when everyone else was standardizing on Wayland; Unity being very hard to get working properly on non-Ubuntu distros; an aggressive push for Snaps; reluctance to adopt systemd; not really being part of the community unless they want to push their own tech (a bit like Apple); lack of kernel contributions compared to e.g. Red Hat; distancing themselves from the term Linux, using only "Ubuntu" as much as possible.
...reluctant to adopt systemd...
Ubuntu weren't alone in that.
No and I did not say they were, just that it was one of the things that some people took issue with. There are others I didn't mention, like the opt-out Amazon Lens integration some time back for example, the point is that they have done some questionable things over the years that left the wider community with a sour taste towards Canonical.
> No and I did not say they were, just that it was one of the things that some people took issue with
Who were those people taking issue with the lack of eagerness on adopting systemd and were they in Raleigh? Systemd is a rather radical shift from the "*nix philosophy".
I don't have patience for RH "UX contributions don't count" saltiness when they intentionally abandoned desktop Linux in favor of chasing the enterprise market (successfully!). No, I'm not bitter about RHEL at all.
1. Tongue firmly in cheek
2. I do know Fedora exists, I read the very 1st announcement. It's not the same
...were they in Raleigh?
Ha! Obviously RH and Ubuntu have different points of emphasis, so any particular person might prefer one over the other. ISTM most of the complaints about Ubuntu, however, aren't about their focus on the GUI user, but rather on their occasionally doing their own thing. As if RH don't do the same, except more!
Just sharing some of my own views on this:
1. systemd got adopted by Arch way before Fedora.
2. Most people who actually use it daily think that it's a good change. Yes, it may not be 100% "do one thing", but it "does it well", and the fact is, booting up a modern computer involves more than one thing anyway. I think it's better to have it this way; writing systemd unit files is much saner than crazy shell scripts. Opinions may differ, of course.
The 50+ years old design needs upgrading from time to time.
C is also not perfect etc.
3. systemd is easy to get going on any distro, not just RH, unlike many of Canonical's creations.
4. Unlike Canonical, RH employs many people who benefit the wider community* as a whole, e.g. GNOME and kernel devs.
* e.g. Arch Linux is independent of Red Hat, yet we (its users) greatly benefit from their work, not so much from Canonical's.
> 3. systemd is easy to get going on any distro, not just RH, unlike many of Canonical's creations.
I don't know where you're going with this, but didn't "Canonical's creation" upstart ship with RHEL6?
> 4. Unlike Canonical, RH employs many people that benefit the wider community* as a whole, ie GNOME, kernel devs etc.
No shit. RH has 14 times as many employees as Canonical (Google suggests 9,870 vs 700). I didn't bother to check revenue figures, but it's obvious that Canonical has to choose its battles.
In some cases, I find the nebulous benefits to the "community" dubious: GNOME 3 and systemd seem to have been conceived fully formed in the minds of their RH-employed leads, who are pretty headstrong and won't easily accept criticism of their vision by said community. I'm sure there are enough rants online about features dropped from GNOME 3 for no real reason.
The late Pieter Hintjens had something to say about who benefited as Red Hat interacted with the AMQP community.
In my opinion, this falls under the old adage of "There are two kinds of [Linux distros]: the ones everyone complains about, and the ones that nobody uses."
As someone who likes his free time for doing things other than fiddling with configuration files, Ubuntu is quite nice. It's not perfect, it's strayed from its original vision; but it's still my Linux desktop distro of choice.
There are many factors, IMO (probably all wrong).
First, Ubuntu was initially seen as "noob's Linux", with Debian users especially not taking the fork well, nor the fact that the number of Linux users was rising in some kind of OS version of Eternal September.
Then there was the fact that Ubuntu was OK with mixing proprietary code into their repos, like proprietary drivers. It was (and is) a big fight for Debian to sacrifice ease of use in order to keep proprietary software out.
Third, there was the massive success of Ubuntu, making all other distros the challengers. This always tends to attract criticism.
And finally, there was the perception of Canonical pushing their agenda on their users, leaving them no choice, like when Ubuntu migrated to Unity, or with the whole Amazon lens debate.
The mix of all of this means that the Linux distro with the easiest setup and the most compatibility/support gets looked at with disdain, which is a shame, really.
Yeah, but that's the game, isn't it? A free, open source license means the authors gave them permission to do it. Complaining about something you authorized is not fair play.
And it's not like they became Apple or Microsoft. Their mistakes are minor at worst compared to other competitors. What they brought to Linux, however, is huge.
Because there is a loudmouth subset of the community that thinks a single look and feel for the DE would bring about "the year of the desktop". This while they keep CADTing the APIs and ABIs, thus making third parties wary of developing for said desktop.
>>"debian users especially not taking the fork well"
This never sits well with me, especially when OSS groups are involved. Isn't forking fundamentally the whole point of the software freedom movement?
Amazing how it works, isn't it?
no Linux distribution is popular
All Linux users: "everyone should be on Linux! Linux is amazing! Yay Linux!"
ubuntu becomes very popular
All Linux users: "everyone should be on Linux! Ehhh... but maybe not that Linux..."
So true. The Linux community is full of idealists who are unable to compromise. They wouldn't be able to live up to their own standards anyway if they had to both ship software that newcomers use and respect their ideals.
Weren't the Free Software Movement and the GNU Project (which are the foundation for this whole GNU/Linux distros thing) born because of a certain idealist who was unable to compromise?
Ahh, RMS. I listened to a recording of one of his recent talks (the Grand Rapids one, IIRC).
I regard Stallman the way Randall Munroe regards Ayn Rand: "I found myself enthusiastically agreeing with the first 90% of every sentence, but getting lost at 'therefore, be a huge asshole to everyone.'"
You've got a point :) But none of GNU is what I would call user-friendly or mass-market ready.
As a professional, I love it.
But without Canonical and its compromises, my mother wouldn't be able to use it today.
I'm glad I have a solid base, but creating something less radical on top of it will not destroy it.
I wouldn't call it "unable to compromise". There was some version of Ubuntu that wasn't any more stable than Windows XP for me - that's when I stopped using it.
I've also never tried to discourage anyone from using Ubuntu - but I noticed it's not for me. Apart from a tiny fraction of "this is a little nicer for the desktop user", I am losing a lot versus plain Debian - and I wouldn't call that idealistic.
On the other hand... maybe there would've been a year of Linux on the desktop if a good number of the distros had folded and people had joined forces. Who knows?
Human nature. You can be popular, just don't be too popular. You can be rich, just don't be too rich. You can be pretty, just don't be too pretty. You can be... just don't be too...
This would be really funny if it weren't so depressing.. :'(
CLAs and a history of going their own way, I suspect.
Then again the "majority" way is largely dictated by a few big projects in and around Fedora, with developers largely on Red Hat payroll.
And the shit slinging didn't really take off until they up and created Unity after a spat with Gnome over the latter's future course (afaik). Closely followed by Canonical starting Mir after misrepresenting/misunderstanding where Wayland was going.
So who really knows whats going on...
> So who really knows whats going on...
Pretty much all of the conversations and disputes have happened on the Internet, so if you are interested you can read the mailing lists, Google+, blog posts etc. in each case. There's no reason why anyone should unless they are interested, but everybody really is free to do so, and then draw their own conclusions.
And you know what, Unity is great.
I like it too. And the funny thing is, all the beginners I gave Unity to liked it better than Gnome. Only power users complain about it, which is lame given they are precisely the ones able to install another desktop whenever they want.
I'm pretty happy with it as well... although I swear several times a year I have to fiddle with stuff after updates. Even now, my ~/.cache/upstart/unity.7.log file fills up the disk every few days, and I can't catch what's doing it before it happens... so I just rm -rf ~/.cache every now and then, since I'm unable to tail it and don't have a thumb drive that fits a 90GB log file. And it seems to happen when the machine is asleep (not off) and I turn off the TV/AVR.
I'm actually considering switching my HTPC back to Windows, or trying Debian proper. I don't spend much time in Unity there, mostly Kodi and sometimes Chrome, so the DE matters little to me. I run Windows, Mac, and Linux (Ubuntu Unity) regularly.
Sorry for veering off into a rant... All around I actually do like Unity though.
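For anyone else fighting a runaway log like that, here's a stopgap sketch (the path is the one from the comment above; everything else is generic). Truncating in place avoids nuking all of ~/.cache:

```shell
# Truncate the runaway log in place instead of rm -rf'ing ~/.cache.
# The writing process keeps its file descriptor, so nothing needs a
# restart, and reading just the tail works however big the file is.
LOG=${LOG:-$HOME/.cache/upstart/unity.7.log}
mkdir -p "$(dirname "$LOG")"   # only so this sketch runs anywhere
: > "$LOG"                     # truncate to zero bytes
tail -n 20 "$LOG"              # peek at the end, even on a 90GB file
```

Caveat: the writer keeps appending at its old offset (the file goes sparse), so this buys time while you hunt the culprit; it's not a fix.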
I had the same problem, with some Kodi addons being the culprits. I updated python-openssl and python-cryptography to the packages from the Debian repo, disabled a theme/addon helper that was running in the addon's settings, and the unity7.log stopped filling my disk. Let me see if I can find the link to the bug on Launchpad.
I find it completely unusable. The first thing I do after installing Ubuntu is install XFCE or Cinnamon. Just the fact that you had to sacrifice a newborn to start two instances of the same application, an extremely common user action (I don't know if that's still the case), shows that whoever designed it is incompetent.
Unity from a user's point of view is great. Too bad it requires patched upstream libraries, so it does not run on other distributions.
Do wonder why those patches are not accepted upstream...
I agree with you that Ubuntu had a really positive impact on Linux adoption by providing a polished operating system.
That being said, I think they somewhat deserve the bad opinion the Linux community has of them: Canonical decided to play solo on several critical subjects instead of cooperating with others; in particular I'm thinking of upstart (a competitor of systemd) and Mir (a competitor of Wayland).
Their marketing is also a pain point, because they brand everything as Ubuntu and don't refer to Linux at all in many of their statements (for instance, you cannot find a single occurrence of the word «Linux» on their landing page).
Upstart not only came before systemd but is the init system of the still supported RHEL 6. I am not qualified to judge which one is better but let's agree on the facts.
As far as the marketing goes, it is a larger trend. See Fedora Workstation/Server, elementaryOS. In fact, on the CentOS homepage there is nary a mention of Linux.
Not only that, but Lennart was working on Upstart and committed to working on some items. Then he came back and said, "Surprise, I've made systemd instead", which is a bit of a kick when you are depending on him to do the features he said he'd do.
When you put it like that, it sort of reminds me of how Mir came about.
Maybe that's unfair.
I know what you're trying to say, but to the average user (which Ubuntu probably targets), does it matter?
If you look at the whole product, there's much more to it than Linux. Linux is 'just' the kernel, I doubt the average Joe cares what kernel his device runs. Most probably don't know that their Android phone is powered by Linux, too. Also, this leads to the old "GNU/Linux" discussion. Where would Canonical stop acknowledging important parts of the OS? Ubuntu GNU/Linux/systemd/libinput/Mesa/Qt...
I'd also love to see Canonical mention that Ubuntu uses Linux, but I understand that for their product and their target group, it doesn't really matter (and it may seem more important to push forward the brand "Ubuntu").
Yes, the same kinds of people who laughed at Stallman's "GNU/Linux" nomenclature are now apoplectic about Ubuntu's branding.
We know that Ubuntu is "Ubuntu Linux" and Linux is "GNU/Linux".
> I know what you're trying to say, but to the average user (which Ubuntu probably targets), does it matter?
I'm not trying to make a point here, I just wanted to give some context to someone asking a question. I'm not an Ubuntu user, but I'm not an Ubuntu hater either: the distribution I use (Linux Mint) wouldn't even work without Ubuntu.
That's why it's hated by the "Linux community". Casual Linux users like you mentioned (me included) don't care much about this behind-the-scenes stuff.
Mops will be mops...
Actually upstart came before systemd.
I think it's Ubuntu the corporation that gets criticized more than Ubuntu the OS.
They get compared to Red Hat, which from a hacker or open source point of view virtually always takes the high road and does the 'right thing'. They open source everything, they track down licenses, they sponsor the community, they insist on the purity of their own products, they're seen as being very co-operative when joining a project, et cetera. It's a high bar and Ubuntu doesn't quite reach it.
They're angels compared to pretty much any corporation but Red Hat, but it's Red Hat we compare them to.
The Linux community is giving more shit to Canonical than the Mac community is giving to Apple.
The worst things Apple does? Delegating slavery, blatant monopoly, consumer lock-in, patent trolling, killing small businesses for their profits, etc.
The worst things Canonical does? Making some technical mistakes, making controversial design choices, spending fewer resources than some people would like on helping FOSS in specific ways. Yeah, they totally deserve the shitstorm.
Different values. That comparison is pointless.
It's not. The point is, the FOSS community values fairness a lot, but they are not fair at all with their champions.
I'm not sure if RH should really be considered an angel as such. It is just they are the grand old man of the Linux world and a massive employer of project developers.
Ubuntu is a distro I always know will just work for the most part on my system, I usually go for Kubuntu, though I may try Ubuntu Budgie once it's officially released alongside the other official flavors. The only other distro I have tried that I've enjoyed anywhere near Ubuntu was openSUSE, but I couldn't get my D compiler to cooperate for whatever reason.
I generally like Ubuntu, but it has its share of problems. For example, they will prefer to keep a package broken rather than fix it if fixing it means a version bump. This can be really frustrating for long-term releases where things are broken in really obvious ways. For instance, gmplayer core-dumps instantly in Ubuntu 14.04, a known issue that requires the user to either hand-tweak some files or just not use it. zsh users get no manual pages due to a slight flaw in the package, which won't be fixed.
Some other things are made much harder than they need to be. Back in the old days, making a network-bootable image involved compiling a custom kernel and setting up a DHCP server and NFS. These days it seems to require sacrificing a flock of chickens. Hint: you need to pass a boot option that is entirely undocumented, except down on page 15 of a discussion topic somewhere on the internet. The README is wrong/obsolete.
Other annoyances come from the system trying to be "smart", like when you try to dump a bootable image onto a USB stick with dd, only to have the operation killed shortly after starting because the OS detected a new bootable image on the stick and tried to mount it partway through the write, changing out the file descriptors from under dd.
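For what it's worth, the automount race can be sidestepped. A hedged sketch follows: IMG and DEV are harmless stand-ins so nothing destructive runs as-is (point them at your real .iso and the whole device), and the gsettings key is the GNOME/Unity automount switch:

```shell
# Keep the desktop from automounting the stick mid-write, then copy.
IMG=${IMG:-/dev/zero}             # stand-in; use your .iso here
DEV=${DEV:-/tmp/fake-stick.img}   # stand-in; use e.g. /dev/sdc (whole device)

# Ask GNOME/Unity not to automount new media while we write.
command -v gsettings >/dev/null && \
  gsettings set org.gnome.desktop.media-handling automount false

umount "$DEV"?* 2>/dev/null || true   # detach any already-mounted partitions
dd if="$IMG" of="$DEV" bs=4M count=1 conv=fsync status=progress

command -v gsettings >/dev/null && \
  gsettings set org.gnome.desktop.media-handling automount true
sync
```

With a real device, drop `count=1` (that's only here so the stand-in copy stays small), run dd as root, and remember to flip automount back on.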
Or when you're trying to diagnose a network problem by upping an interface and putting an IP on it, only to have NetworkManager go LOLNOPE and kick you straight in the balls.
Or when the system fails to boot because some message wasn't passed from some startup script somewhere and good luck tracking that down. That's nearly impossible to debug.
Heaven forbid you select the nVidia binary blob driver for your video card and then let Ubuntu install a new kernel. Ironically the only time the kernel upgrade goes smoothly is when I tell Ubuntu to leave it alone and install the driver directly from nVidia. This is extra fun when Ubuntu is deciding to upgrade the kernel twice a week. Even more fun when you've let it partition the disk for you and it creates a 256MB boot partition that fills up after 3 kernels.
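If anyone else hits the full /boot problem, the usual escape hatch is purging old kernels. A sketch, with the actual purge left commented out so nothing destructive runs unreviewed:

```shell
# See which kernels are installed vs. running before cleaning /boot.
uname -r                                             # the running kernel
dpkg -l 'linux-image-*' 2>/dev/null \
  | awk '/^ii/ {print $2}' || true                   # installed kernel packages
# Then let apt drop the old automatically-installed ones (review its plan!):
#   sudo apt-get autoremove --purge
```

Never remove the running kernel or the newest one; `autoremove` is supposed to keep both, but with a 256MB /boot it's worth eyeballing the list first.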
Overall Ubuntu is easier to use than the old systems, but when it breaks it takes 10 times longer to fix it.
For servers, their choices are often bad. I recall how we had database failures in the middle of the night on the first of each month, until we figured out that a monthly cron job was running slocate to index all files and IO jumped to 100% under load. I have many anecdotes like this. Also, they don't seem to understand that when updating a server, rebooting is the last resort, unlike on a laptop. The server distribution patch instructions often say "reboot your computer", whereas one can just restart the services, as in the latest OpenSSL security update.
locate has been a standard feature on Unix boxes for decades – it's certainly not specific to Ubuntu, and the only way it causes I/O problems is either if the system is already close to breaking but nobody had been paying attention, or if the local configuration has been customized to do something like crawl NFS mounts.
> In the server distribution patch instructions, it often says "reboot your computer", whereas one can just restart the services like in the latest openssl security update.
It seems like a bad idea to criticize them for taking the safest tack in generic documentation intended for a wide audience. Some Ubuntu servers are run by veteran sysadmins but others run by people who are learning, primarily working on other things, etc.
Restarting processes requires a decision per-patch to understand all of the affected components and safe restart strategies for all of them – e.g. in the case of OpenSSL, the library will be loaded not only by services but also other long-running jobs – cron tasks, anything a user has been running, etc. Yes, you can script looking for open file-handles and try to restart everything but if that goes wrong in any way, you're running with a known security hole which people will incorrectly think has been patched and may even claim that a scanner must be reporting a false-positive (I've personally seen that).
If I was writing documentation to give to non-experts for operations, I'd make the same choice every time because it's simple and fails safely. Experts probably aren't going to read that documentation anyway and have enough knowledge to understand when they can make optimizations based on local knowledge.
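The "script looking for open file-handles" approach mentioned above is roughly this; a sketch only, not a substitute for tools like checkrestart or needrestart, which handle far more edge cases:

```shell
# List processes that still map a shared object deleted on disk,
# i.e. a library replaced by a package upgrade after the process started.
for pid in /proc/[0-9]*/; do
  if grep -qs '\.so[^ ]* (deleted)$' "${pid}maps"; then
    printf '%s\t%s\n' "$(basename "$pid")" \
           "$(tr '\0' ' ' < "${pid}cmdline" 2>/dev/null)"
  fi
done
true   # the loop's status is otherwise whatever the last grep returned
```

Even this illustrates the parent's point: it only finds long-running mappers, not the cron job that will load the old library path tomorrow, which is exactly why "reboot" is the safe generic advice.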
I am all for having the mlocate program, but imposing a cron schedule on a server that causes heavy IO is stupid. Interestingly, they have understood this and disabled the mlocate cron job in the server versions since at least 14.04. Have you administered a server with, say, 10m small files? It runs with negligible IO under normal traffic since recently accessed files are cached in memory, but running locate will kill its disks (sometimes physically).
There were other features in Unix for "decades" too, such as the Bash Shellshock bug, for example.
Having a cron job which provides something useful out of the box is not stupid. It's useful for most people, and any sysadmin who needs to customize what is indexed can easily do so if there's a path they want to exclude that isn't already in the default list.
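For reference, on Ubuntu that customization is a one-file change; a sketch (the `/srv/bigdata` path is an invented example, not a default):

```shell
# /etc/updatedb.conf (mlocate) -- add the heavy tree to PRUNEPATHS
# so the nightly index skips it:
#   PRUNEPATHS="/tmp /var/spool /media /srv/bigdata"
#
# Or skip the nightly run entirely by disabling the cron job:
#   sudo chmod -x /etc/cron.daily/mlocate
#
# (Iirc the stock cron job already wraps updatedb in nice/ionice,
# which is usually enough to keep it from starving a busy database.)
```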
10M files isn't a huge amount by post-90s standards, and talking about disk access destroying hardware is pure hyperbole. If you have disks failing that frequently, something is wrong with your server room, cooling, etc. A daily cron task is not a make-or-break problem there: 15 years ago I had that many small files backing a departmental email setup (maildir, so one file per message). If you think locate is a problem, consider the I/O characteristics of hundreds of IMAP sessions, and remember that RAM was a lot more expensive back then and many individual users had mailboxes larger than the aggregate server memory. Drives failed, yes, but usually after years of heavy service.
You are erroneously conflating disc failures with database failures, which is what was actually written.
It's generally considered poor form around here to accuse people of not reading correctly with no further discussion. Perhaps you could enlighten us by explaining how the quote I was responding to, “will kill its disks (sometimes physically)”, is so clearly about databases rather than disks?
Are you sure that was specific to Ubuntu and not general to Debian?
Isn't it just a very common and boring case of "Ubuntu has gotten too mainstream"? People like to be niche.
Kinda. Devs are hipsters that lack fashion sense, after all.
And sense of humour apparently.
I'm a dev, guys.
Fully agree with you.
My first UNIX was Xenix and I got introduced to GNU/Linux with Slackware 2.0.
Ubuntu is the only reason I still have a netbook with GNU/Linux. All my other computers at home run Windows nowadays.
At the office our computers are a mix of Windows and Mac OS X, GNU/Linux installations only exist as VM instances.
I love Ubuntu, but I always view an upgrade with extreme trepidation. v14 broke VMware, and it took so long to find a solution that I removed it and simply ran Ubuntu from within a VM.
I'd love to try it again, I really would, as my host OS. But I just can't bring myself to do it again... yet.
From my experience, VMware is what breaks VMware under Linux.
So upgrading your OS breaks your applications? Kind of my point.
Yes, if your application was coded to only work with a specific version of the kernel.
Works fine with 16.04.2 LTS so far though.
What about host shares in the guest OS? That stopped working at 14, and I haven't seen it work since.
Host shares should work, but might not if you use open-vm-tools. At least in Ubuntu 14.04 it got broken, and while upstream fixed it, Ubuntu did not care to fix it in their repo.
My main problem is that Ubuntu is not as reliable a desktop distribution as it was a few years ago. I still use it, but it has bugs. I would so love for them to just stick with Gnome and focus their efforts on that :(
I totally agree. I recommend KDE Neon for newbies (up-to-date KDE on top of stable Ubuntu core) or Xubuntu if their computer is older.
A lot of it is historical. In the beginning, they ignored patented stuff, compared to Fedora. Then they had various issues with their upstream at Debian. Then they used upstart in 14.04 at a time when it was clear this was a dead end and everyone was moving to systemd (as 16.04 did).
> I don't understand why Ubuntu is receiving so much bad comments from Linux community
12.04: System V
16.04: System D
Gahh...stick with something. I'm tired of learning an entirely new init system every couple years.
To be fair, this is a problem with Linux stuff in general; I just wish Ubuntu could lead the pack in picking something and sticking to it.
Here is the corrected table:
4.10: System V (in 2004), first release, adopted from Debian
6.10: Upstart (in 2006), see http://upstart.ubuntu.com/index.html
16.04: systemd (in 2016) and it was gradual.
This is part of the natural evolution of software.
There are new requirements and new software is needed to implement them.
Well, technically, systemd was 15.04.
> Gahh...stick with something. I'm tired of learning an entirely new init system every couple years.
I had this problem too - I gave up and moved to runit. It's basically an improved version of DJB's daemontools, you write very simple scripts, it monitors their output, handles logging, and service management and that's it. Very minimalist, but quite capable.
And, quite importantly, it's capable of running on top of another init system, it doesn't need to be PID 1, so I'm able to run it on FreeBSD with rc, Linux with sysvinit and Linux with systemd with the same script.
It makes it so easy to write service scripts that I went from running my personal stuff inside a tmux session to runit in about an hour.
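For anyone curious, a runit service really is just a directory containing an executable `run` script; a minimal sketch (the service name, user, and binary are made up):

```shell
#!/bin/sh
# /etc/sv/myapp/run -- hypothetical service, supervised by runsvdir.
exec 2>&1                        # fold stderr into the logged stdout
cd /srv/myapp || exit 1
exec chpst -u myapp ./server     # stay in the foreground; never daemonize
```

Drop an executable `log/run` next to it (typically `exec svlogd -tt ./main`) and runit handles log rotation too; then `sv up myapp` / `sv status myapp` manage the service.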
I develop lots of side projects that I always host on Ubuntu. They are web projects that require knowledge of configuring, stopping, and starting services on the box with ease. I am fully comfortable with upstart - but learning systemd adds a huge amount of overhead for someone like me with little time who just wants to enjoy coding and shipping fun projects. I don't want to be a full-time sysadmin just so I can launch a demo or game that nobody will ever use!
Actually the move to systemd is quite easy. There are cheatsheets available.
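The common cases really do map almost one to one. A cheatsheet sketch using a hypothetical `demo` service (job name, unit name, and binary path are all invented):

```shell
# Upstart job /etc/init/demo.conf          ->  systemd unit /etc/systemd/system/demo.service
#   description "demo service"             ->  [Unit]    Description=demo service
#   start on runlevel [2345]               ->  [Install] WantedBy=multi-user.target
#   respawn                                ->  [Service] Restart=always
#   exec /usr/local/bin/demo               ->  [Service] ExecStart=/usr/local/bin/demo
#
# Day-to-day commands:
#   sudo start demo                        ->  sudo systemctl start demo
#   sudo stop demo                         ->  sudo systemctl stop demo
#   sudo status demo                       ->  sudo systemctl status demo
#   tail /var/log/upstart/demo.log         ->  sudo journalctl -u demo
```

The one habit change: after editing a unit file, run `sudo systemctl daemon-reload` before restarting the service.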
What's the chart look like on RHEL?
I suspect they would still be on upstart if logind had not been hogtied to the rest of the systemd shoggoth.
It's the use of the CADT development model: https://www.jwz.org/doc/cadt.html
> Canonical is doing great job of putting, fairly reliable system on massive number of devices, something that other distributions can just dream about
I'm getting lots of downvotes from people talking about off-the-shelf laptops and other generic x86-based projects. Let's be clear that the following post is in the context of other CPU architectures and platforms a little more exotic than your typical PC or laptop.
I'm not here to bash Ubuntu, as I couldn't care less what platform people choose to run - even if that's Windows - just so long as I can run whatever I choose to run. However, with that said, I still have to disagree with your statement above (that other distros can only dream of supporting a massive number of devices). Ubuntu supports less hardware than its originating platform, Debian. Less than Suse, Red Hat, and derivatives. Even Slackware and Arch support a considerable number of alternative architectures through 3rd-party ports. And stepping away from GNU/Linux for a moment, FreeBSD, OpenBSD, and NetBSD all officially support more platforms than Ubuntu too.
Support for multiple devices and architectures isn't something unique to Ubuntu - it's pretty typical in the FOSS community. In fact, back in the 90s and early 00s there used to be a running joke about people installing Linux on a whole plethora of odd devices just for kicks; talking kettles, toasters, stuffed animals(!!!), all sorts of things (bearing in mind this was before the IoT revolution).
I get lots of second-hand hardware in the projects I do. Mainline Ubuntu works on all of it. The other Linux distros are more inconsistent, and fixing their issues via Google is less straightforward too, if we're talking non-technical people. I've seen them do it with Ubuntu, especially where it takes an apt-get or something.
Any of them MIPS or SPARC? Or how about something a little more exotic?
How is it everyone is overlooking that I'm talking about CPU architectures?
Seriously, what's with people these days just playing on a few x86 PCs and maybe a Raspberry Pi and then making bold claims that their distro runs on everything? Let's talk about some real niche platforms that are non-trivial to port software to, please.
"Any of them MIPS or SPARC? Or how about something a little more exotic? How is it everyone is overlooking that I'm talking about CPU architectures?"
We're talking about a system primarily targeted toward x86 machines (esp desktops or laptops) common for consumers. So, everyone assumes you mean x86 machines with various configurations of hardware. That was my assumption, too. You assessing its hardware support by focusing on architectures virtually no Ubuntu or Linux user is using or wants to use is strange. I'll go further to say it's immaterial to an assessment of Ubuntu's hardware support given its target market. There's other distro's that focus on MIPS, SPARC, and "exotic" hardware.
> You assessing its hardware support by focusing on architectures virtually no Ubuntu or Linux user is using or wants to use is strange
I wasn't doing that though. I was literally _only_ stating that the GP's comment, "Canonical is doing great job of putting, fairly reliable system on massive number of devices, something that other distributions can just dream about.", is factually incorrect since "other distributions" can and do support not only many of the same platforms that Ubuntu supports but also a whole plethora of platforms that Ubuntu does not. I didn't disagree with the GP's other compliments towards Ubuntu and I even stated that I wasn't here to criticise Ubuntu. I even went on to say that multi-platform support in Linux isn't a new thing to reinforce the point that I'm grouping Ubuntu and Debian and all other Linux distros together with regards to their ability to run on an interesting array of hardware. I just disagree that Ubuntu is the only Linux distro that runs on multiple platforms - because, simply, it's not the only distro that does that.
However, what _is_ strange about my post is how a great many people have taken my comments as a personal attack against Ubuntu, or said "those platforms don't matter" just because they personally don't use them. It's pretty nuts, because the fact that I'm aware of unsupported platforms means I've run into situations where Ubuntu didn't run, which means those platforms do matter to some people - like myself. The fact we are having this discussion is proof that your argument of irrelevance is itself irrelevant.
I'll probably get downvoted for saying this, but reading the rest of this thread and the replies in my own branch, it's sounding a bit like a pro-Ubuntu echo chamber where like-minded enthusiasts are all reconfirming their pre-existing biases and branding opposing experiences as lies (just as mine were by another HNer). This is a shame, because there is a lot to praise Ubuntu for without needing to invent additional achievements. What's worse is downvoting members for the serious crime of actually managing to run non-Ubuntu distros on new hardware when other enthusiasts could not. It's all a little absurd, and it ruins the discussion for everyone, as people like myself won't bother contributing to another Ubuntu-themed thread on HN again.
It's BS; it supports more hardware since the kernel is more recent. Good luck running Debian / *BSD on modern laptops.
> it supports more hardware since the kernel is more recent
Kernel ABIs are pretty static so drivers can be backported for earlier kernels more easily than recompiling the entirety of Ubuntu Server for an unsupported CPU architecture - such as SPARC.
That's what I meant when I said Debian supports more architectures.
However, going back to your previous point about the age of Debian, you don't have to run the default repositories. If you run "testing" or "sid" then you can be just as up-to-date as Ubuntu, or even more bleeding edge. In fact, it wasn't that many years ago that Ubuntu was effectively just a reskinned Debian + testing repositories (I'm talking pre-Unity, upstart, etc). But at the end of the day, it's all FOSS, so anything Ubuntu runs can also be run on Debian, Arch, etc. It's just that Debian already ships compiled binaries for more alternative CPU architectures than Canonical does with Ubuntu. Which is why I said supporting different platforms isn't anything new to Linux, nor unique to Ubuntu.
> good luck running Debian / *BSD on modern laptops.
I have, and they worked fine. In fact, FreeBSD was my primary OS for a period of time, and Debian has always been my primary "Debian-like" platform for everything bar media centres (which do run vanilla Ubuntu). I have also run officially supported variations of Ubuntu as my primary OS for short periods. I tried them all before I finally found the OS that felt right for me.
So I do have considerable experience backing up my claims :)
First "no one" cares about SPARC; I'm glad the focus is on platforms that people use like x86_64 and ARM. Secondly, *BSD and Debian don't run well on recent hardware; it's a lie to say otherwise. At best you will have terrible battery life, wifi/bluetooth that doesn't work properly, a half-broken touchpad, etc... BSD support for modern hardware is worse than Linux's.
> If you run "testing" or "sid"
So you drop your stability to use beta/alpha packages that are unstable, whereas in Ubuntu LTS you have all the recent packages/drivers, and they have been tested. I mean, with that statement you can run anything if you can compile it.
One can complain about many things for Ubuntu but not hardware support.
> First "no one" cares about SPARC
I've had to build stuff for SPARC recently. I definitely care. And frankly your opinion of SPARC does not alter the point one bit.
> I'm glad the focus is on platforms that people use like x86_64 and ARM
So am I. Reread my post: I wasn't criticising Ubuntu one bit. I was disagreeing with the GP's comment that only Ubuntu supports other platforms.
> secondly *BSD and Debian don't run well on recent hardware
"Some recent hardware". Which is a tiny subset of the amount of hardware out there that all the aforementioned platforms do support. BSD and Debian aren't even remotely as bad at supporting hardware as people make out.
> it's a lie to say otherwise
That's not a valid counterargument. That's just offensive to anyone who has achieved what you claimed wasn't possible.
> at best you will have terrible battery life, wifi / bluetooth that doesn't work properly, touchpad half broken etc...
Ubuntu still runs the same kernel ABIs as Debian, Arch Linux, CentOS, Slackware, etc. It's all just x86 Linux. Veteran users like myself can get Linux running on most platforms without too much difficulty. So what you're really trying to say is "Ubuntu is easier to get running on newer hardware", which is the crux of Ubuntu - it's designed to make Linux easier. However, to say "non-Ubuntu distros cannot run on stuff Ubuntu can, and anyone who says otherwise is a liar" is insulting to all those who do successfully run non-Ubuntu platforms, and to those distros' maintainers.
> So you drop your stability to use beta / alpha packages that are unstable where in Ubuntu LTS you have all the recent packages / drivers that have been tested
Debian Testing isn't unstable. As already stated, Ubuntu used to be based on Debian Testing.
> One can complain about many things for Ubuntu but not hardware support.
Again, I wasn't making a complaint. I was making an observation that Ubuntu wasn't the only distro that supports multiple platforms (which the GP claimed) and used CPU architecture as an example to my point. I'm sorry you've had bad experiences with other Linux distros and with BSD but my point still stands.
For what it's worth, on my last laptop I got better battery life out of ArchLinux than I did Ubuntu on the same laptop. All the other hardware was supported the same. And to follow on from my previous point: if I could have been bothered I know I could have improved the battery life in Ubuntu to match Arch. However at the time I needed the convenience more than the battery life (which was why I installed Ubuntu in the first place) so left the system unaltered until I had the time to distro-hop again.
I'm rather doubtful this is true. Any sources to back up your claim?
Just personal experience playing with unusual hardware platforms over the last couple of decades. Usually the download pages for the respective distros list all the CPU architectures they support; Ubuntu needs a little more digging to locate their non-x86 downloads (which I believe are only ARM and POWER8). There used to be some ropy support for SPARC, but that was nearly a decade ago.
These are exotic platforms, therefore go for a distribution that supports them.
I did. Again I feel the need to reiterate that my point was Ubuntu isn't the only multi-platform distro (despite the GP's claim). The rest of my post was exampling that point.
I don't understand why Ubuntu receives so many bad comments from the Linux community. Canonical is doing a great job of putting a fairly reliable system on a massive number of devices, something other distributions can just dream about. In my opinion, Ubuntu is currently the best general-purpose Linux distribution for new and semi-advanced users.
One key item to note is that switching to the upgraded kernel path breaks live kernel patching at this time.
I was considering switching until I saw this caveat...
"For clarity, the Canonical Livepatch Service is only available and supported against the generic and lowlatency GA kernel flavours for 64-bit Intel/AMD (aka, x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) release. HWE kernels are not supported at this time."
Also, it's not clear if there is a different kernel/command for upgraded kernels on a server.
EDIT: looks like it's going to be "linux-generic-hwe-16.04"
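The LTSEnablementStack wiki's commands suggest exactly that: on a headless server you skip the X stack and install only the kernel meta-package. A sketch (package name taken from the wiki; run at your own risk, since it switches you onto the rolling HWE kernel path):

```shell
# Kernel-only HWE upgrade on a 16.04 server; no xserver-xorg-hwe-16.04
# needed because there is no X stack to keep in sync with the kernel.
sudo apt-get install --install-recommends linux-generic-hwe-16.04
```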
Do people see evidence of the live patching system doing something?
For me, canonical-livepatch status --verbose has never shown me any fixes, running linux-image-generic on 16.04.
Not really so far. I enabled it a couple months ago but have not seen any changes.
It's not clear to me if upgrading to HWE would correctly disable the livepatching.
Just to clarify, that is how Ubuntu's LTS [Long Term Support] releases are intended to work. The 'point one' release fixes bugs in the initial release and keeps the same kernel. It winds up being the actual release that is supported 'Long Term'. Releases 'point two' and later get updated kernels...and potentially new bugs to go with new features.
I won't say that the End of Life illustration is easy to interpret, but it shows how Ubuntu releases work:
Hey that's great. My biggest complaint from running 14.04 LTS for a couple years was the lack of kernel upgrades. Fortunately it's not that hard to install a kernel package from a more recent Ubuntu on it, but I had to find out how to do it and it's more manual than I was expecting for a LTS release.
The https://wiki.ubuntu.com/Kernel/LTSEnablementStack page has the appropriate upgrade command in order to upgrade the kernel to the newer and supported version.
According to https://wiki.ubuntu.com/Kernel/LTSEnablementStack#Kernel.2FS...
you are now on Linux kernel 4.4, and it will remain the same until the EOL of Ubuntu 14.04 (in 2019).
Thank you for providing this summary. I assume this means that if one has a fleet of servers running 16.04 that one keeps up-to-date, but chooses not to update to .02, one would have to use install media for 16.04 (sans .02) when installing new/replacement servers to fit in with the existing fleet?
It's a little bit surprising coming from Debian stable releases, but makes sense.
No, just doing updates gets you all the benefits of the point release, as you would expect. If you want a newer kernel, install the HWE kernel when it appears; it will roll until the next LTS and then stabilise in line with 18.04 (it is basically later release kernels built on 16.04).
Here is a summary of what is said in https://wiki.ubuntu.com/Kernel/LTSEnablementStack regarding the 16.04.x versions
1. If you are happy with how Ubuntu 16.04 works for you, you get to keep it and you receive support until 2021.
2. With Ubuntu 16.04.2, you get the option to switch to a new path of updated Linux kernels. If you do so, your Linux kernel will get updated every six months, until 2021.
For the first update, with Ubuntu 16.04.2 you can opt in to the 4.8 kernel that was used/tested in the development of Ubuntu 16.10.
In the subsequent update with Ubuntu 16.04.3 (around July 2017), you will be updated to the Linux kernel that was used/tested in Ubuntu 17.04 (to be released in April 2017).
And so on.
The command to switch you to the new path of updated kernels (updated every six months), is
sudo apt-get install --install-recommends xserver-xorg-hwe-16.04
HWE kernel: http://askubuntu.com/a/248936
Here is HWE: https://wiki.ubuntu.com/Kernel/LTSEnablementStack. There was a recent change in the policy.
In a nutshell,
1. if you are happy with the current kernel in Ubuntu 16.04, then you can stay with it (it's version 4.4) and it is supported until 2021.
2. if you want to jump to the new supported and tested (tested in 16.10) 4.8 Linux kernel, then there is a command described in https://wiki.ubuntu.com/Kernel/LTSEnablementStack that helps you upgrade.
However, when you upgrade the kernel (and the X server stack, which are linked together), your Linux kernel will be upgraded every six months from now on, until 2021. The next kernel version update will be in July, and it will be whatever Linux kernel is released in Ubuntu 17.04.
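For a desktop system, the wiki's upgrade command pulls both halves at once, so the kernel and X stack stay in lockstep. A sketch (package names per the LTSEnablementStack page at the time of writing):

```shell
# Desktop 16.04: install the rolling HWE kernel together with the
# matching X server stack, so the two are upgraded as a pair.
sudo apt-get install --install-recommends linux-generic-hwe-16.04 xserver-xorg-hwe-16.04
```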
Does that mean you can have fairly recent kernels with LTS releases? If so, amazing. That was my biggest complaint about Ubuntu on servers.
This is the first time I'm hearing about this, but the link would suggest that yes, this is what it means. Kind of a side thought, but I wonder if I can get an easily administrable WRT distribution (like Tomato is) on my DSL router that has the kernel's bufferbloat fixes.
What do you mean by standard feature? HWE kernels were already introduced after each Ubuntu release.
Ubuntu used to add a kernel to the LTS from each subsequent release, but they were separate packages. Now there will be just one HWE package that will roll until the subsequent LTS.
Unfortunately, Alternate release images are not published after 16.04.1.
Everything in 16.04.1 comes from the package archives. It's what you get by default from a particular image that changes. You can still get to whatever state you want regardless of which variant and point release installation image of 16.04 you use. However, doing things manually is of course not the same as being automatic.
So are you saying they won't make any more alternate releases for future releases?
Yes. It was that way for the 14.04 LTS, too.
I think the coolest thing introduced here is that the HWE kernel is going to become a standard feature of LTS releases going forwards.
What that means is that all of the build scripts in Ubuntu 16.04 have been upgraded to Python 3 and building no longer has a dependency on Python 2. One way of looking at it is that Python 2 is not included with the current release of Ubuntu for the same reasons that MIT Scheme and PHP and Forth are not. The system does not require them.
Well, yes, but the interesting bit is that some notable organization just finished their move from 2 to 3.
Some notable organizations are planning not to move and more or less forking: https://opensource.googleblog.com/2017/01/grumpy-go-running-...
Python 2 will be with us for a long long time.
Any other links, bar this one (which isn't really much more than a proof of concept)?
Interesting bit from the release notes:
Python 3: Python2 is not installed anymore by default on the server, cloud and the touch images, long live Python3! Python3 itself has been upgraded to the 3.5 series.
You can run the Software Updater any time you want, including setting it to run automatically.
I actually run that every morning. So good to know :)
No need. Just keep installing the regular updates to your OS.
These minor releases are just new installer images so that new users don't immediately have to download a huge chunk of data.
Your Ubuntu will be automatically updated to 16.04.2 when the next package update kicks in. You probably had some updates around Tuesday which upgraded you to status "16.04.2" (run "lsb_release -a" in a terminal to verify). Yesterday it was only the ISO images that were released.
The important change in 16.04.2 is that you can now easily choose to upgrade the Linux kernel from the original 4.4 version to the new 4.8 version. This 4.8 kernel was released in Ubuntu 16.10 (Oct 2016) and has been promoted to the new kernel for 16.04.2.
May sound complex :-). There is a nice graphic in https://wiki.ubuntu.com/Kernel/LTSEnablementStack that explains it well.
If you can ssh into your machine, it will tell you in the initial welcome message. I've been running it on my server since soon after release (I've only ever updated), and I've just now logged into it:
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-62-generic x86_64)
`sudo apt update` and `sudo apt upgrade` will do it for you. To check which version you are on, simply run `cat /etc/os-release`.
Or, for a more cross-distro approach, `lsb_release -a`, although it requires the `lsb-release` package to be installed.
Truly cross-distro is `cat /etc/*release`.
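Worth noting that `/etc/os-release` is itself just shell-sourceable key=value pairs, so you can read specific fields without any extra packages. A self-contained sketch (it writes a sample file so it runs anywhere; on a real box, source `/etc/os-release` directly):

```shell
# /etc/os-release uses plain KEY=VALUE shell syntax, so sourcing it
# exposes NAME, VERSION_ID, etc. as variables. Sample file for demo:
cat > /tmp/os-release.sample <<'EOF'
NAME="Ubuntu"
VERSION="16.04.2 LTS (Xenial Xerus)"
ID=ubuntu
VERSION_ID="16.04"
EOF

# Source it and use the fields directly:
. /tmp/os-release.sample
echo "$NAME $VERSION_ID"   # prints: Ubuntu 16.04
```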
Newbie Ubuntu user here. Currently I have 16.04 LTS (I don't know if mine is 16.04.1, but I downloaded and installed it the day 16.04 LTS was officially released in April 2016). Should I upgrade to 16.04.2? If so, how? I mean, do I have to download the 16.04.2 installer, or is there an update command?
I ran with BTRFS for a while. It was pretty nice overall, but you have to be aware of which features are production-ready and which ones aren't. And the wiki isn't up to date either. I had to ask questions in the IRC channel because I didn't feel like wading through mailing list archives which is apparently the only place up-to-date info gets written down.
Edit: This is new. https://btrfs.wiki.kernel.org/index.php/Status I was hoping that it would grow into a more stable, user-friendly project. But RAID1 was broken for months, it would let you create a RAID5/6 volume even though the feature wasn't even finished yet, and I personally ended up with a filesystem that will crash the kernel when I try to read certain files. I recovered most of the data using a virtual machine that I could reboot quickly. Maybe I'll look at it again in a couple of years and see if someone is taking the project seriously.
I've been using ZFS on all my desktops and my personal NAS for, I can't remember, probably 4-5 years.
Including ZFS root filesystem, and swap on a zvol.
I have to say, I really like it, and I'll use it again if I have to redo a machine.
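Swap on a zvol needs a few extra properties to behave well. A sketch with illustrative names ("rpool/swap"), using options along the lines the ZFS on Linux wiki recommends:

```shell
# Create a 4 GiB zvol for swap; volblocksize matched to the 4 KiB page
# size, sync writes forced, and caching kept minimal so swap pressure
# doesn't fight the ARC. Names and sizes here are illustrative.
zfs create -V 4G -b 4096 \
    -o logbias=throughput -o sync=always \
    -o primarycache=metadata rpool/swap
mkswap -f /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap
```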
Installing Ubuntu on a ZFS root filesystem is much more involved than merely running the installer. If you have never done it before, and are appropriately cautious, it'll take you half a day to follow the (very detailed and helpful) Wiki page. I can do it in less than an hour now.
So far, only one problem (and not a bug, more of a misfeature): when one of the disks in the NAS died, I couldn't replace it with a new one, because the ZFS mirroring was using its default 2kb blocks (I forget the correct term). It can only do this on 2kb/sector HDDs. My new HDD had 4kb/sector.
I was forced to recreate the entire filesystem using a larger blocksize (ashift=12).
Luckily this worked without a hitch, thanks also to zfs send | zfs receive, but it still pissed me off.
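The recreate-and-migrate dance described above can be sketched like this (pool and device names are illustrative; `ashift=12` forces 4 KiB sectors so newer 4Kn disks can join the mirror later):

```shell
# Build the replacement pool with 4 KiB sectors baked in:
zpool create -o ashift=12 newtank mirror /dev/sdc /dev/sdd

# Snapshot everything recursively, then replicate the whole pool,
# properties and all, into the new one:
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F newtank
```

Note that `ashift` is fixed at vdev creation time, which is exactly why the original pool had to be recreated rather than fixed in place.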
If you use LXD in Ubuntu, you can (and should) have your containers stored in the ZFS filesystem.
I have been doing that, and did not notice any problems.
About LXD: https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012...
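If I remember right, LXD as shipped with 16.04 can be pointed at an existing pool during `lxd init`; a sketch, with "tank/lxd" as an illustrative dataset name:

```shell
# Non-interactive init of LXD 2.x with ZFS as the storage backend,
# backed by an existing dataset. Containers then get near-instant
# snapshot/clone operations via ZFS.
sudo lxd init --auto --storage-backend zfs --storage-pool tank/lxd
```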
I've been using btrfs in non-redundant and "raid1" (really just chunk duplication) setups for a while now. I've not had any major problems in managing the data pools or catastrophic data loss. In fact, btrfs checksums detected corruption that I isolated to a bad ram module on one of my machines that went undetected on ext4.
At this point I'm sticking with btrfs instead of going with zfs because of the flexibility for growing/shrinking volumes and adding/removing devices in a non-destructive manner.
Yep, been using "ZFS on root" (with multiple pools) for a couple of months now on a new workstation.
No major issues to report and no minor issues that I can recall.
I've been using ZFS on Linux for the past year on a home server. It has worked beautifully and is very simple to use. More importantly it is considered stable. I would certainly not trust btrfs with my data, it simply isn't finished and at this point may never be.
ZFS? Yes, many people. I'm sure you can read lots about it. I've been using it for over 6 months with no problem.
Anyone using ZFS with Ubuntu? Any problems? Anyone tried btrfs?