[–] mmjaa link

I agree with you. I've been using Linux since the day Linus announced it on the Minix mailing list, and for the last 12 years Ubuntu has been my daily-driver system .. and it is amazing. I'm a musician, so when people see that I'm using Ubuntu Studio for my DAW, I usually hear a few chuckles .. until I fire up Ardour, show them my plugin list, the standard suite of synths and effects I use, and so on .. that usually shuts them up at least. But what gets them really curious is when I show them how easy it is to break out the source code for any of these powerful tools, make modifications, re-install, and maintain my system with very, very powerful techniques.

There simply isn't any operating system as conducive to creative tinkering and progressive enhancement of key software as Ubuntu Studio. It's way, way ahead of the pack in this regard, and I think anyone who scoffs at the idea needs to be taught the lesson that Ubuntu - and of course, Linux and the ecosystem it promotes - is really worth the effort to know, learn and understand.

reply

[–] stinos link

Ardour is pretty great, but I'm not so sure the state of audio in general on Linux is. Granted, the last time I struggled with it (as in a couple of days of trying out different things with god knows how many different audio subsystems there were at the time) was about 5 years ago, but it still seemed impossible to get a configuration with really low I/O latency (say <10 ms), nor did it 'just work' as on the other two main operating systems. Is this better with Ubuntu Studio? Does it work nicely with e.g. RME cards?

reply

[–] mmjaa link

For almost a decade now, I have had rock solid, high-performance audio on my multi-channel DAW running Ubuntu Studio.

Rock solid. Better than Pro Tools.

How?

I chose my hardware wisely. In my case, I've got two Presonus Firepod FP-10 interfaces, which work out of the box with JACK. Truly superlative Firewire audio interfaces, and I chalk that up to the fact that the Bridgeco chipset used in these devices was developed in a pro-Linux environment.

Of course, ymmv .. especially if you fall into the PulseAudio trap, which is designed - imho - to make audio suck on Linux as much as possible. Not so with JACK and FireWire audio - in this case, it's the best possible combination of audio hardware and software components, giving superlative latency and multi-channel I/O capabilities ..
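
For the curious, the kind of invocation involved looks something like this (a sketch only - the backend and buffer sizes depend on your hardware):

    # FFADO backend for FireWire interfaces; 64 frames x 3 periods
    # at 48 kHz works out to roughly 4 ms of buffering
    jackd -R -d firewire -r 48000 -p 64 -n 3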

reply

[–] tehwalrus link

Do you have a link to a blog post about your setup? If not, would you consider writing about it?

reply

[–] pawadu link

I am also interested in this!

It would also be great to see some audio latency numbers for ubuntu vs the rest on common hardware

reply

[–] Crespyl link

IIRC Ubuntu Studio (and most (all?) professional Linux audio systems) uses JACK rather than ALSA/Pulse like most other versions of Ubuntu.

JACK has a reputation for being fiddly to set up, but puts great emphasis on low-latency/real-time and other professional use-cases. I'd expect it to be competitive, but don't really have the knowledge to make the comparisons myself.
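
For anyone who wants numbers of their own, JACK ships a round-trip measurement tool. A rough sketch of its use, assuming a physical loopback cable and that the client registers as jack_delay (which I believe is the default):

    jack_iodelay &
    # route its signal out of the interface and back in via the loopback
    jack_connect jack_delay:out system:playback_1
    jack_connect system:capture_1 jack_delay:in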

I did find this unfinished review of latency in JACK from a few years ago: http://apps.linuxaudio.org/wiki/jack_latency_tests#does_late...

reply

[–] Cephlin link

I also would like to know your setup! I'd be very interested!

I was looking into whether I should purchase Windows or buy a Mac to run a DAW, but if you're saying that Ardour is good enough, then I can just install that this weekend!

reply

[–] laumars link

Ubuntu Studio isn't managed by Canonical (though I think they do receive financial contributions?) so that's a little different from the Canonical-led variations that the GP was discussing. However, I do agree with you that Ubuntu Studio is a great product. I remember using it back when it was relatively new, and even then I could see real potential in the making.

reply

[–] Pigo link

Looks like you got a lot of feedback from your comment, but if you have the time I'd like to ask a question too. I'm still getting my feet wet with the more advanced aspects of Linux, primarily Ubuntu on my Chromebook. I always hear about people tweaking programs, but it never dawns on me what kind of tweaks are possible or even useful. I'd be interested in a couple of use-cases you've come across, like problems you wanted to solve with a program and a brief overview of how you altered the source to do so. Maybe it's embarrassing to admit, but I've never tweaked anything outside of VS Code or Atom.

reply

[–] zeptomu link

First, it is definitely not embarrassing to admit, and the answer may be overwhelming, but you can customize nearly everything.

One of the simpler things is to customize the look and feel. E.g. you could try out different desktop experiences and WMs (window managers). Most of them are just an `apt-get` away, and when you log out and log in again you can choose and try out a different desktop.
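
For example (xubuntu-desktop and i3 are just two of many candidates):

    sudo apt-get install xubuntu-desktop   # a whole alternative desktop
    sudo apt-get install i3                # or just a different window manager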

reply

[–] mmjaa link

The only thing I could contribute to the conversation so far would be to NEVER use PulseAudio, and to commit to using JACK for all your audio I/O needs, with regards to setting up Linux (Ubuntu Studio) as a high-performance DAW.

This is really the great thing about Linux: we've got tons of options when it comes to audio I/O routing .. but it's a liability too. My advice: commit to setting up JACK and learning it - definitely the best thing about Linux audio.

reply

[–] jlgaddis link

I'm not sure if this is the kind of thing you're looking for or not, but here's a situation I encountered in just the last couple of days.

I have a new workstation PC I just recently put together. It's currently running Ubuntu 16.04 LTS but I plan to switch over to Arch Linux in the near future. Because I want to use LUKS (a.k.a. full disk encryption) and ZFS (and multiple ZFS pools) on it, I decided to work through the installation procedure on a laptop first (just to make sure I've got the steps down right -- an installation w/ the root filesystem on ZFS is "non-standard", and LUKS throws additional issues into the mix) before I wipe out my primary work machine.

First, by default, the LUKS setup only supports "unlocking" a single encrypted drive at boot. Since the laptop has a pair of SSDs (in a ZFS mirror), I need to unlock both of them (and my workstation has an NVMe, two SSDs, and two HDDs, so that'll be an even bigger problem!). I had to "hack on" the initramfs stuff so that I could unlock both drives when the system boots up.
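
For reference, stock /etc/crypttab can share one passphrase across drives via the decrypt_keyctl keyscript - a sketch with made-up UUIDs, and not necessarily what the setup above does:

    # the third field is just a label; entries sharing it share the
    # cached passphrase, so it is asked for only once at boot
    crypt_ssd0  UUID=1111-placeholder  pw_cache  luks,keyscript=decrypt_keyctl
    crypt_ssd1  UUID=2222-placeholder  pw_cache  luks,keyscript=decrypt_keyctl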

In addition, I have several Yubikeys and wanted to use them with LUKS. I set them up in Challenge-Response mode, programmed them with a "secret key" that I generated, and then came up with a (relatively) short(er) passphrase that I can use to unlock the disks. I also had to create a custom (initramfs) "hook" to prompt for my passphrase, detect if a Yubikey is attached and, if so, send the "challenge" to the Yubikey, receive the response from it, and use that to unlock the disks. If a Yubikey isn't attached, then I have to provide a (much longer, more secure) "backup" passphrase to unlock (each of) the disks.
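
The core logic of such a hook is surprisingly small. A hand-wavy sketch (device name, mapping name and slot are placeholders, and a real initramfs hook has to be rather more careful):

    #!/bin/sh
    # sketch: unlock one LUKS volume via Yubikey challenge-response
    printf "Passphrase: " >&2
    read -r pw
    if ykinfo -v >/dev/null 2>&1; then
        # Yubikey present: the short passphrase is the challenge and the
        # HMAC-SHA1 response (slot 2 here) becomes the actual LUKS key
        ykchalresp -2 "$pw" | cryptsetup open /dev/sda2 crypt_root --key-file=-
    else
        # no Yubikey: fall back to the long backup passphrase
        cryptsetup open /dev/sda2 crypt_root
    fi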

On the laptop, this means I can enter in my (relatively) short(er) passphrase once and unlock both disks. On my workstation, it means I'll enter it once to unlock all five disks. The alternative would be to type in my much longer, more secure passphrase FIVE separate times every time I boot the system up.

None of this was very hard or time consuming, but it was necessary in order to meet my exact goals and I'm glad that Linux allows me the opportunity to do exactly this type of non-standard stuff.

reply

[–] xyos link

I own a Toshiba Chromebook 2; there are things that don't work out of the box (sleep mode, laptop mode) so I wrote some scripts to fix those issues. Also, I wanted to have the same keyboard experience that I had on Chrome OS, so I tweaked the keyboard (upper row media keys, shift+backspace=del, etc...)
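
The shift+backspace remap, for instance, can be a one-liner (assuming keycode 22 is BackSpace, as on most layouts - xev will tell you):

    # the second keysym is what the key produces when shifted
    xmodmap -e "keycode 22 = BackSpace Delete"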

reply

[–] thirdsun link

I have no doubt that you can work efficiently in such an environment, and I firmly believe that audio-related work these days is rarely limited by the available tools, but by knowing how to use them in depth. However, you're giving up a lot of third-party instrument and effect options, aren't you? Again, those are not essential in any way, but there are really worthwhile options I wouldn't want to miss. Do you rely on internal plugins only?

And have you tried Bitwig yet? With its similarities to Ableton Live, I'm sure it has the potential to push audio production on Linux to some degree.

reply

[–] mmjaa link

>However, you're giving up a lot of third-party instrument and effect options, aren't you?

You'd be surprised just how many VSTs work smoothly and reliably under Linux.

>Bitwig

Tried it, but not a fan - I have too many external hardware sequencers to be bothered with setting it up with the rest of my system. (Hardware sequencers run forever!) I do, however, have Ardour set up for perfect multi-channel recording, so every instrument has its own digital tracking, and this has been wonderful ..

reply

[–] pawadu link

A few years back Linux Voice reviewed Bitwig and they seemed to like it a lot:

https://www.linuxvoice.com/issues/003/LV3bitwig.pdf

As plugins go, many good ones are available in LV2 or similar formats so no problem on that front.

reply

[–] coldtea link

>I'm a musician, so when people see that I'm using Ubuntu Studio for my DAW, I usually hear a few chuckles .. until I fire up Ardour, show them my plugin list, the standard suite of synths and effects I use, and so on .. that usually shuts them up at least.

Well, it's not that impressive compared to something like Live or Cubase.

reply

[–] tomcam link

Everything the Beatles, Sex Pistols, Sinatra, Louis Armstrong, Lou Reed, Abba, Dylan, Led Zeppelin, The White Stripes, and Miles Davis used to record with was infinitely less powerful than Ardour. Yet somehow they were able to cough up some acceptable music.

reply

[–] coldtea link

Yeah, and Bach wrote on mere sheets of paper.

Your point being? "A subpar DAW should be enough for everybody in 2017 as long as it's better than 70s era studio technology"?

Not to mention that you're comparing apples and oranges. These people did rock, punk and/or jazz, not modern r&b, pop or electronic music, which are usually more demanding of a DAW, for one.

Second, they had tens of thousands and sometimes millions of dollars as a budget for a single record, great facilities with top-notch acoustics, multiple producers, access to orchestras and top-notch musicians, and crazily expensive consoles with people to operate them for them.

Third, one doesn't compete against Louis Armstrong-era sound but against one's own generation's sound and production techniques.

reply

[–] 4ad link

Less powerful but more usable.

reply

[–] funnyfacts365 link

Can you modify Live or Cubase?

reply

[–] oelmekki link

Or an even better question: can you plug Live into Cubase? Audio on Linux is not just playing catch-up; the whole jackd system, allowing you to take the output from any program supporting JACK (most programs for musicians) and use it as an input for another one, is unparalleled, afaik.
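
To give an idea, wiring two JACK clients together is just this (the port names here are illustrative - jack_lsp shows the real ones):

    jack_lsp                                    # list every port JACK knows about
    jack_connect system:capture_1 ardour:audio_in_1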

reply

[–] coldtea link

Live, for one, comes with a full blown programming environment called Max for Live, where you can create all kinds of things on par with native plugins and controls.

But the more essential question is: does it matter?

If one is an experimental electronic musician / sound artist, it might be worth it.

But the average musician wants a good environment with the features they need built-in (and easy access to quality third-party plugins for FX, etc.), not to tinker with their DAW (and even less so in C/C++).

reply

[–] thirdsun link

"modify" is a very flexible term, but at least Live offers lots of customization options, particularly when you consider Max for Live: https://www.ableton.com/en/live/max-for-live/

reply

[–] sjellis link

I think that the most charitable answer is that Ubuntu changed its basic premise, and some people don't like the newer approach.

Back when Ubuntu started, it was explicitly a variation of Debian with a GNOME desktop, plus some custom parts to make a "Linux for Human Beings", such as an easy-to-use installation process. Mark Shuttleworth had been a Debian developer, Canonical hired its technical people from Debian contributors, and everybody was sensitive to the need for Ubuntu to work with the upstream projects (and vice versa).

Linux is a complicated system of components, and desktops are far more complex than servers, so it's essential for developers with different employers to cooperate to get things done. Important decisions require developers from multiple organizations to reach a consensus. It can be a slow and frustrating process, and it's easy for awkward people to cause a lot of hassle for everyone else.

In practice, Canonical always struggled to work well with others, and eventually they switched to developing their own convergence stack (to span desktops, smart TVs and mobile) that happens to use Linux components but shares increasingly less common ground with the rest of the community: using their own graphics system, desktop environment, and software packaging systems. In other words, Ubuntu has been morphing from a community-friendly Debian variant to an Android-style single-vendor system.

Google can do this with Android and not take the same level of flak because Android has always been a commercial product that happens to have FOSS components, and they seem to cooperate reasonably well with the rest of the community in areas of shared interest.

(All IMO).

reply

[–] moo link

I've stopped letting these people lead me by the nose. For the community support and staying compatible I'm currently sticking with an Ubuntu upgrade path, but I've long left the default user interfaces behind. For a window manager I use Openbox with tint2. I don't find much utility in the Debian/Ubuntu-derived distros like Linux Mint, jumping from being based on Ubuntu, to Debian, then back to Ubuntu, and breaking compatibility along the way.

reply

[–] mook link

(IMHO, of course)

I think a big part of the problem is Red Hat. Not that they're malicious or doing something wrong or anything of the sort; just that they're overrepresented in key projects that determine the direction of Linux. With enough people in key positions, it is much easier for others to write off working upstream, because they have different business interests and feel like contributing is working for you rather than with you. Here LWN, with their stats on contributor affiliation for kernel releases, can help (since they show that the contributors are not even mostly one company), as does having people employed by the Linux Foundation. This doesn't occur as much for non-kernel software though.

reply

[–] wink link

I'm not so sure kernel contribution is really such a big deal in day-to-day Linux usage.

Sure, everyone loves it when stuff just works - but if I look back at my first years of Linux (1998-2007) and the last (nearly) decade, I haven't actively cared what kind of kernel, what version of kernel or whatever I am running on my laptops or servers. It just works[tm]. I'm really not trying to downplay the kernel developers' work - but unless you're using the latest hardware or need the last bit of performance.. many people could live with security fixes alone (hey, surprise, all the kernels in LTS distros only get security backports anyway).

(For me at least) in the last 5+ years the real development was in userland (for better or worse) - but I've stopped caring for anything besides "works fine" on a kernel level. Sad? Maybe.

reply

[–] mook link

Right; I just mentioned the kernel because it's the only thing I know of with consistent stats on which companies are contributing (and I mostly skip over those anyway). I just don't know anybody who publishes the same for, say, GNOME. It probably doesn't help that the boundaries between the various projects needed for a desktop Linux are blurrier.

reply

[–] unethical_ban link

So is anyone giving Red Hat grief for, in effect, forcing everyone to use systemd? Or is it ok for them to do it because they "won"?

I don't think Ubuntu deserves flak just for forging their own path at times, even if it creates a bit of a fork in some stacks.

reply

[–] nickik link

Is anybody giving Red Hat grief? Are you joking?

People bitch constantly about evil Red Hat. Any article about systemd, flatpak and so on.

reply

[–] AsyncAwait link

> Or is it ok for them to do it because they "won"?

It's OK because systemd is actually pretty great.

reply

[–] unethical_ban link

You're saying the ends justify the means when it comes to "going it alone" in the Linux ecosystem, and I don't think that's clear.

reply

[–] zeptomu link

I think that building a good UI for the desktop is only possible if UI developers agree on a "look and feel"; that may be simpler for Canonical than for a community like Debian, and they came to the conclusion that building their own is just the way to go. AFAIK their home-grown desktop is FOSS, so it's not like you are buying into a proprietary system, and this is tremendous.

When I started with Linux I used a distribution like Mandrake with a KDE desktop, and by default you got several text editors, browsers and other tools (the drop-down menu was huge). As a starter (I did not know any programming then and just wanted to play with a different OS, coming from a Microsoft stack) I was overwhelmed, and although I could appreciate the customization, it was hard to get started as there were too many options.

Ubuntu changed that and they really thought a lot about how to improve the Linux Desktop experience and IMHO they did very, very well. E.g. I use Ubuntu as you can find lots of documentation, it is supported well in cloud infrastructure and they (re)distribute a lot of packages.

In particular, I do not get the criticism about the desktop environment. In no way did Ubuntu make your desktop less customizable; I use e.g. xmonad as my WM, which is totally straightforward and just works. My mom uses the default desktop and is pretty happy with it.

That may be controversial, but I think there are often just 2 types of users: those who want it to just work without any configuration, and those who want to customize everything. I think Ubuntu is doing pretty well (currently, I hope that never changes) in both camps.

reply

[–] sjellis link

> I think that building a good UI for the desktop is only possible if UI developers agree on a "look and feel"; that may be simpler for Canonical than for a community like Debian, and they came to the conclusion that building their own is just the way to go.

Yes, and it seems like a number of other distribution developers have come to the same conclusion as well. We really need some thoughtful analysis about this trend.

> AFAIK their home-grown desktop is FOSS, so it's not like you are buying into a proprietary system and this is tremendous.

Source availability is part of enabling a broader developer community, but there's a huge amount of other work that is needed, as well. I haven't looked at the current state of Unity development, but the reputation of the project is that it is built for Ubuntu.

> Ubuntu changed that and they really thought a lot about how to improve the Linux Desktop experience and IMHO they did very, very well.

I totally agree: Ubuntu really was revolutionary when it started. It's kind of amazing how many innovative things Mark Shuttleworth and his team did right at the start.

> That may be controversial, but I think there are often just 2 types of users: those who want it to just work without any configuration, and those who want to customize everything. I think Ubuntu is doing pretty well (currently, I hope that never changes) in both camps.

There is also a third audience for any piece of software that is large enough to be programmable: developers. For desktops, you have third-party theme authors as well as application developers, people that want to work on the desktop software itself, maintainers of other Linux OS components, and folks that want to use the source code to build their own custom projects.

reply

[–] zeptomu link

> developers. For desktops, you have third-party theme authors as well as application developers, people that want to work on the desktop software itself, maintainers of other Linux OS components, and folks that want to use the source code to build their own custom projects.

I think they are part of the "possibility to customize-everything" crowd. The most important point for me in a distribution is its security record and a good package system to support all common use-cases. The default desktop should totally be targeted to casual users, as experienced users will not agree with your base-configuration system anyway (I remember YAST from SUSE), so I appreciate strong opinions there, in particular less is more (the original Ubuntu approach compared to other distributions at the time).

reply

[–] AsyncAwait link

> I don't understand why Ubuntu receives so many bad comments from the Linux community.

That's a complex topic, but over the years a couple of reasons come to mind: outdated packages and broken PPAs giving a bad impression to new users; slow and bloated in many respects; aggressive community behavior; CLAs; passive-aggressive blog posts and stances by the project leads towards alternative distros like Mint, or towards any criticism whatsoever; Mir when everyone was standardizing on Wayland; Unity being very hard to get working properly on non-Ubuntu distros; an aggressive push for Snaps; reluctance to adopt systemd; not really being part of the community unless they want to push their own tech (a bit like Apple); a lack of kernel contributions compared to e.g. Red Hat; and distancing themselves from the term Linux, using only "Ubuntu" as much as possible.

reply

[–] jessaustin link

...reluctant to adopt systemd...

Ubuntu weren't alone in that.

reply

[–] AsyncAwait link

No, and I did not say they were, just that it was one of the things that some people took issue with. There are others I didn't mention, like the opt-out Amazon lens integration some time back, for example. The point is that they have done some questionable things over the years that left the wider community with a sour taste towards Canonical.

reply

[–] sangnoir link

> No, and I did not say they were, just that it was one of the things that some people took issue with

Who were those people taking issue with the lack of eagerness in adopting systemd, and were they in Raleigh?[1] systemd is a rather radical shift from the "*nix philosophy".

I don't have patience for RH "UX contributions don't count" saltiness when they intentionally abandoned desktop Linux in favor of chasing the enterprise market (successfully!). No, I'm not bitter about RHEL[2] at all.

1. Tongue firmly in cheek

2. I do know Fedora exists, I read the very 1st announcement. It's not the same

reply

[–] falcolas link

In my opinion, this falls under the old adage of "There are two kinds of [Linux distros]: the ones everyone complains about, and the ones that nobody uses."

As someone who likes his free time for doing things other than fiddling with configuration files, Ubuntu is quite nice. It's not perfect, and it's strayed from its original vision, but it's still my Linux desktop distro of choice.

reply

[–] oelmekki link

There are many factors, IMO (probably all wrong).

First, Ubuntu was initially seen as "noob's Linux", with Debian users especially not taking the fork well, nor the fact that the number of Linux users was rising in some kind of OS version of Eternal September.

Then there was the fact that Ubuntu was OK with mixing proprietary code into their repos, like proprietary drivers. It was (it is) a big fight for Debian to sacrifice ease of use to enforce non-proprietary software.

Third, there was the massive success of Ubuntu, making all other distros the challengers. This always tends to attract criticism.

And finally, there was the perception of Canonical pushing their agenda on their users, leaving them no choice, like when Ubuntu migrated to Unity, or with the whole Amazon lens debate.

The mix of all of this means the Linux distro with the easiest setup and the most compatibility/support gets looked at with disdain, which is a shame, really.

reply

[–] sametmax link

Yeah, but that's the game, isn't it? A free, open source licence means the authors gave them permission to do it. Complaining about something you authorized is not fair play.

And it's not like they became Apple or Microsoft. Their mistakes are minor at worst compared to other competitors. What they brought to Linux, however, is huge.

reply

[–] digi_owl link

Because there is a loudmouth subset of the community that thinks a single look and feel for the DE would bring about "the year of the desktop". This while they keep CADTing the APIs and ABIs, thus making third parties wary of developing for said desktop.

reply

[–] sbuk link

>>"debian users especially not taking the fork well"

This never sits well with me, especially when OSS groups are involved. Is forking not fundamentally the whole point of the software freedom movement?

reply

[–] johnfn link

Amazing how it works, isn't it?

no Linux distribution is popular

All Linux users: "everyone should be on Linux! Linux is amazing! Yay Linux!"

Ubuntu becomes very popular

All Linux users: "everyone should be on Linux! Ehhh... but maybe not that Linux..."

reply

[–] sametmax link

So true. The Linux community is full of idealists who are unable to compromise. They would not be able to live up to their own standards anyway if they had to both ship software that newcomers use and respect their ideals.

reply

[–] drdaeman link

Weren't the Free Software Movement and the GNU Project (which are the foundation for this whole GNU/Linux distros thing) born because of a certain idealist who was unable to compromise?

reply

[–] Qub3d link

Ahh, RMS. I listened to a recording of one of his recent talks (the Grand Rapids one, IIRC).

I regard Stallman the way Randall Munroe regards Ayn Rand: "I found myself enthusiastically agreeing with the first 90% of every sentence, but getting lost at 'therefore, be a huge asshole to everyone.'"

reply

[–] sametmax link

You got a point :) But none of GNU is what I would call user-friendly or mass-market ready.

As a professional, I love it.

But without Canonical and its compromises, my mother wouldn't be able to use it today.

I'm glad I have a solid base, but creating something less radical on top of it will not destroy it.

reply

[–] wink link

I wouldn't call it "unable to compromise". There was some version of Ubuntu that wasn't any more stable than Windows XP for me - that's when I stopped using it.

I've also never tried to discourage anyone from using Ubuntu - but I noticed it's not for me. Apart from a tiny fraction of "this is a little nicer for the desktop user", I am losing a lot versus plain Debian - and I wouldn't call that idealistic.

On the other hand... maybe there would've been a year of Linux on the desktop if a good part of the distros had folded and people had joined forces. Who knows?

reply

[–] JustSomeNobody link

Human nature. You can be popular, just don't be too popular. You can be rich, just don't be too rich. You can be pretty, just don't be too pretty. You can be... just don't be too...

reply

[–] simosx link

This would be really funny if it was not that depressing.. :'(

reply

[–] digi_owl link

CLAs and a history of going their own way, I suspect.

Then again, the "majority" way is largely dictated by a few big projects in and around Fedora, with developers largely on the Red Hat payroll.

And the shit-slinging didn't really take off until they up and created Unity after a spat with GNOME over the latter's future course (afaik), closely followed by Canonical starting Mir after misrepresenting/misunderstanding where Wayland was going.

So who really knows what's going on...

reply

[–] sjellis link

> So who really knows what's going on...

Pretty much all of the conversations and disputes have happened on the Internet, so if you are interested you can read the mailing lists, Google+, blog posts etc. in each case. There's no reason why anyone should unless they are interested, but everybody really is free to do so, and then draw their own conclusions.

reply

[–] funnyfacts365 link

And you know what, Unity is great.

reply

[–] sametmax link

I like it too. And the funny thing is, all the beginners I gave Unity to liked it better than GNOME. Only power users complain about it, which is lame given they are precisely the ones able to install another desktop whenever they want.

reply

[–] tracker1 link

I'm pretty happy with it as well... although I swear several times a year I have to fiddle with stuff after updates. Even now, my ~/.cache/upstart/unity7.log file fills up the disk every few days, and I can't catch what's doing it before it happens... so I just rm -rf ~/.cache every now and then, since I'm unable to tail it and don't have a thumb drive that fits a 90GB log file. And it seems to happen if the machine is asleep (not off) and I turn off the TV/AVR.

I'm actually considering switching my HTPC back to Windows, or trying Debian proper. I don't spend much time in Unity there, mostly Kodi and sometimes Chrome, and the DE matters little to me. I run Windows, Mac, and Linux (Ubuntu Unity) regularly.

Sorry for veering off into a rant... All around I actually do like Unity though.

reply

[–] funnyfacts365 link

I had the same problem, with some Kodi addons being the culprits. I updated python-openssl and python-cryptography to the packages from the Debian repo, disabled some theme/addon helper that was running in the addon's settings, and the unity7.log stopped filling my disk. Let me see if I can find the link to the bug on Launchpad.

Edit:

https://answers.launchpad.net/ubuntu/+question/447828

https://bugs.launchpad.net/ubuntu/+source/xorg/+bug/1636573

reply

[–] Grue3 link

I find it completely unusable. The first thing I do after installing Ubuntu is install XFCE or Cinnamon. Just the fact that you had to sacrifice a newborn to start two instances of the same application, an extremely common user action (I don't know if it's still the case now), shows that whoever designed it is incompetent.

reply

[–] vetinari link

Unity from a user's point of view is great. Too bad it requires patched upstream libraries, so it does not run on other distributions.

reply

[–] stymaar link

I agree with you that Ubuntu had a really positive impact on Linux adoption by providing a polished operating system.

That being said, I think they somewhat deserve the bad opinion the Linux community has of them: Canonical decided to play solo on several critical subjects instead of cooperating with others; in particular I'm thinking about Upstart (a competitor to systemd) and Mir (a competitor to Wayland).

Their marketing is also a pain point, because they brand everything as Ubuntu and don't refer to Linux at all in many of their statements (for instance, you cannot find a single occurrence of the word «Linux» on their landing page[1]).

[1] https://www.ubuntu.com/

reply

[–] antnisp link

Upstart not only came before systemd but is the init system of the still-supported RHEL 6. I am not qualified to judge which one is better, but let's agree on the facts.

As far as the marketing goes, it is a larger trend. See Fedora Workstation/Server, elementaryOS. In fact, on the CentOS homepage there is nary a mention of Linux.

reply

[–] Daviey link

Not only that.. but Lennart was working on Upstart.. committed to working on some items.. came back, and said - "Surprise, I've made systemd instead".. which is a bit of a kick when you are depending on him to do the features he said he'd do.

reply

[–] Longhanks link

I know what you're trying to say, but to the average user (which Ubuntu probably targets), does it matter?

If you look at the whole product, there's much more to it than Linux. Linux is 'just' the kernel, I doubt the average Joe cares what kernel his device runs. Most probably don't know that their Android phone is powered by Linux, too. Also, this leads to the old "GNU/Linux" discussion. Where would Canonical stop acknowledging important parts of the OS? Ubuntu GNU/Linux/systemd/libinput/Mesa/Qt...

I'd also love to see Canonical mention that Ubuntu uses Linux, but I understand that for their product and their target group, it doesn't really matter (and it may seem more important to push forward the brand "Ubuntu").

reply

[–] LordKano link

Yes, the same kinds of people who laughed at Stallman's "GNU/Linux" nomenclature are now apoplectic about Ubuntu's branding.

We know that Ubuntu is "Ubuntu Linux" and Linux is "GNU/Linux".

reply

[–] stymaar link

> I know what you're trying to say, but to the average user (which Ubuntu probably targets), does it matter?

I'm not trying to make a point here, I just wanted to give some context to someone asking a question. I'm not an Ubuntu user, but I'm not an Ubuntu hater either: the distribution I use (Linux Mint) wouldn't even work without Ubuntu.

reply

[–] abdulmuhaimin link

That's why it's hated by the "Linux community". Casual Linux users like you mentioned (me included) don't care much about this behind-the-scenes stuff.

reply

[–] digi_owl link

Actually upstart came before systemd.

reply

[–] bryanlarsen link

I think it's Ubuntu the corporation that gets criticized more than Ubuntu the OS.

They get compared to Red Hat, which from a hacker or open source point of view virtually always takes the high road and does the 'right thing'. They open source everything, they track down licenses, they sponsor the community, they insist on the purity of their own products, they're seen as being very co-operative when joining a project, et cetera. It's a high bar and Ubuntu doesn't quite reach it.

They're angels compared to pretty much any corporation but Red Hat, but it's Red Hat we compare them to.

reply

[–] sametmax link

The Linux community is giving more shit to Canonical than the Mac community is giving to Apple.

The worst things Apple does? Delegating slavery, blatant monopoly, consumer lock-in, patent trolling, killing small businesses for their profits, etc.

The worst things Canonical does? Making some technical mistakes, making controversial design choices, spending fewer resources than some people would like on helping FOSS in specific ways. Yeah, they totally deserve the shitstorm.

reply

[–] nickik link

Different values. That comparison is pointless.

reply

[–] digi_owl link

I'm not sure if RH should really be considered an angel as such. It's just that they are the grand old man of the Linux world and a massive employer of project developers.

reply

[–] marcoms link

*Canonical

reply

[–] giancarlostoro link

Ubuntu is a distro I always know will just work, for the most part, on my system. I usually go for Kubuntu, though I may try Ubuntu Budgie once it's officially released alongside the other official flavors. The only other distro I have tried that I've enjoyed anywhere near as much as Ubuntu was openSUSE, but I couldn't get my D compiler to cooperate for whatever reason.

reply

[–] jandrese link

I generally like Ubuntu, but it has its share of problems. For example, they will prefer to keep a package broken rather than fix it, if fixing it means a version bump. This can be really frustrating for long-term releases where things are broken in really obvious ways. An example: gmplayer core-dumps instantly in Ubuntu 14, a known issue that requires the user to either hand-tweak some files or just not use it. zsh users get no manual pages due to a slight flaw in the package, which won't be fixed.

Some other things are made much harder than they need to be. Back in the old days, making a network-bootable image involved compiling a custom kernel and setting up a DHCP server and NFS. These days it seems to require sacrificing a flock of chickens. Hint: you need to pass a boot option that is entirely undocumented, except for down on page 15 of a discussion topic somewhere on the internet. The README is wrong/obsolete.

Other annoyances come from the system trying to be "smart", like when you try to dump a bootable image onto a USB stick with dd, only to have the operation killed shortly after it starts, because the OS detected a new bootable image on the stick and tried to mount it partway through the write, changing out the file descriptors from under dd.
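
On a GNOME/Unity desktop the workaround I know of is to switch the "smart" behaviour off before running dd (assuming the usual media-handling schema):

    gsettings set org.gnome.desktop.media-handling automount false
    gsettings set org.gnome.desktop.media-handling automount-open false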

Or when you're trying to diagnose a network problem by upping an interface and putting an IP on it, only to have NetworkManager go LOLNOPE and kick you straight in the balls.
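
The escape hatch, for what it's worth (interface name and address are placeholders):

    nmcli dev set eth0 managed no    # tell NetworkManager to keep its hands off
    ip addr add 192.168.1.10/24 dev eth0
    ip link set eth0 up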

Or when the system fails to boot because some message wasn't passed from some startup script somewhere and good luck tracking that down. That's nearly impossible to debug.

Heaven forbid you select the nVidia binary blob driver for your video card and then let Ubuntu install a new kernel. Ironically the only time the kernel upgrade goes smoothly is when I tell Ubuntu to leave it alone and install the driver directly from nVidia. This is extra fun when Ubuntu is deciding to upgrade the kernel twice a week. Even more fun when you've let it partition the disk for you and it creates a 256MB boot partition that fills up after 3 kernels.
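
The only reliable relief I know of for the tiny /boot partition is pruning superseded kernels before each upgrade:

    sudo apt-get autoremove --purge   # removes old linux-image-* packages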

Overall Ubuntu is easier to use than the old systems, but when it breaks it takes 10 times longer to fix it.

reply

[–] cft link

For servers, their choices are often bad. I recall how we had database failures in the middle of the night on the first of each month, until we figured out that a monthly cron job was running slocate, indexing all files, and I/O jumped to 100% under load. I have many anecdotes like this. Also, they don't seem to understand that during an update, rebooting a server is the last resort, unlike a laptop. In the server distribution patch instructions, it often says "reboot your computer", whereas one can just restart the services, as in the latest OpenSSL security update.
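
A blunt fix, for anyone hitting the same thing (the path is as on recent Debian/Ubuntu, if memory serves): keep locate installed but stop the scheduled indexing.

    sudo chmod -x /etc/cron.daily/mlocate   # updatedb no longer runs on schedule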

reply

[–] acdha link

locate has been a standard feature on Unix boxes for decades – it's certainly not specific to Ubuntu, and the only way it causes I/O problems is if the system is already close to breaking but nobody had been paying attention, or the local configuration has been customized to do something like crawl NFS mounts.

> In the server distribution patch instructions, it often says "reboot your computer", whereas one can just restart the services, as in the latest OpenSSL security update.

It seems like a bad idea to criticize them for taking the safest tack in generic documentation intended for a wide audience. Some Ubuntu servers are run by veteran sysadmins but others run by people who are learning, primarily working on other things, etc.

Restarting processes requires a decision per-patch to understand all of the affected components and safe restart strategies for all of them – e.g. in the case of OpenSSL, the library will be loaded not only by services but also other long-running jobs – cron tasks, anything a user has been running, etc. Yes, you can script looking for open file-handles and try to restart everything but if that goes wrong in any way, you're running with a known security hole which people will incorrectly think has been patched and may even claim that a scanner must be reporting a false-positive (I've personally seen that).
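
For the record, the usual tooling for that audit on Debian/Ubuntu is checkrestart from debian-goodies, or roughly the same thing by hand with lsof:

    sudo apt-get install debian-goodies
    sudo checkrestart                          # processes still mapping deleted libraries
    sudo lsof -n | grep -w DEL | grep libssl   # the manual approximation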

If I was writing documentation to give to non-experts for operations, I'd make the same choice every time because it's simple and fails safely. Experts probably aren't going to read that documentation anyway and have enough knowledge to understand when they can make optimizations based on local knowledge.

reply

[–] cft link

I am all for having the mlocate program, but imposing a cron schedule on a server that causes heavy I/O is stupid. Interestingly, they have understood this and disabled the mlocate cron job in the server versions since at least 14.04. Have you administered a server with, say, 10m small files? It runs with negligible I/O under normal traffic, since recently accessed files are cached in memory, but the locate indexing run will kill its disks (sometimes physically).

There were other features in Unix for "decades" too, such as the Bash Shellshock bug, for example.

reply

[–] mwpmaybe link

Are you sure that was specific to Ubuntu and not general to Debian?

reply

[–] VeejayRampay link

Isn't it just a very common and boring case of "Ubuntu has gotten too mainstream"? People like to be niche.

reply

[–] sametmax link

Kinda. Devs are hipsters that lack fashion sense, after all.

reply

[–] sametmax link

And sense of humour apparently.

I'm a dev, guys.

reply

[–] pjmlp link

Fully agree with you.

My first UNIX was Xenix and I got introduced to GNU/Linux with Slackware 2.0.

Ubuntu is the only reason I still have a netbook with GNU/Linux. All my other computers at home run Windows nowadays.

At the office our computers are a mix of Windows and Mac OS X, GNU/Linux installations only exist as VM instances.

reply

[–] Frogolocalypse link

I love Ubuntu, but I always view an upgrade with extreme trepidation. v14 broke VMware, and it took so long to find a solution that I removed it and simply ran Ubuntu from within a VM.

I'd love to try it again, I really would, as my host OS. But I just can't bring myself to do it again... yet.

reply

[–] pawadu link

From my experience, VMware is what breaks VMware under Linux.

reply

[–] Frogolocalypse link

So upgrading your OS breaks your applications? Kind of my point.

reply

[–] wila link

Works fine with 16.04.2 LTS so far though.

reply

[–] ergo14 link

My main problem is that Ubuntu is not as reliable a desktop distribution as it was a few years ago. I still use it, but it has bugs. I would so love for them to just stick with GNOME and focus their efforts on that :(

reply

[–] rocky1138 link

I totally agree. I recommend KDE Neon for newbies (up-to-date KDE on top of stable Ubuntu core) or Xubuntu if their computer is older.

reply

[–] nailer link

A lot of it is historical. In the beginning, they ignored patented stuff compared to Fedora. Then they had various issues with their upstream at Debian. Then they used Upstart in 14.04 at a time when it was clear this was a dead end and everyone was moving to systemd (as 16.04 did).

reply

[–] paulddraper link

> I don't understand why Ubuntu receives so many bad comments from the Linux community

12.04: System V

14.04: Upstart

16.04: System D

18.04: ???

Gahh...stick with something. I'm tired of learning an entirely new init system every couple years.

To be fair, this is a problem with Linux stuff in general; I just wish Ubuntu could lead the pack in picking something and sticking to it.

reply

[–] simosx link

Here is the corrected table:

4.10: System V (in 2004), first release, adopted from Debian

6.10: Upstart (in 2006), see http://upstart.ubuntu.com/index.html

16.04: systemd (in 2016) and it was gradual.

This is part of the natural evolution of software. There are new requirements and new software is needed to implement them.

reply

[–] paulddraper link

Well, technically, systemd was 15.04.

reply

[–] problems link

> Gahh...stick with something. I'm tired of learning an entirely new init system every couple years.

I had this problem too - I gave up and moved to runit. It's basically an improved version of DJB's daemontools: you write very simple scripts; it monitors their output and handles logging and service management, and that's it. Very minimalist, but quite capable.

And, quite importantly, it's capable of running on top of another init system, it doesn't need to be PID 1, so I'm able to run it on FreeBSD with rc, Linux with sysvinit and Linux with systemd with the same script.

It makes it so easy to write service scripts I went from running my personal stuff inside a tmux script to runit in about an hour.
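
To illustrate just how small a runit service is, here is a complete one (all names made up):

    #!/bin/sh
    # ~/service/myapp/run -- runit supervises whatever this execs
    exec 2>&1                                  # send stderr to the logger too
    exec chpst -u myuser /usr/local/bin/myapp --foreground

Drop that in the directory runsvdir watches, chmod +x, and that's the whole service.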

reply

[–] binarymax link

Yes this!!!!

I develop lots of side projects that I always host on Ubuntu. They are web projects that require knowledge of configuring, stopping, and starting services on the box with ease. I am fully comfortable with Upstart - but learning systemd adds a huge amount of overhead for someone like me with little time who just wants to enjoy coding and shipping fun projects. I don't want to be a full-time sysadmin just so I can launch a demo or game that nobody will ever use!

reply

[–] slitaz link

Actually the move to systemd is quite easy. There are cheatsheets available.

reply

[–] jldugger link

What's the chart look like on RHEL?

reply

[–] digi_owl link

I suspect they would still be on Upstart if logind weren't hogtied to the rest of the systemd shoggoth.

reply

[–] syntheticnature link

To be fair, this is a problem with Linux stuff in general; I just wish Ubuntu could lead the pack in picking something and sticking to it.

It's the use of the CADT development model: https://www.jwz.org/doc/cadt.html

reply

[–] laumars link

> Canonical is doing a great job of putting a fairly reliable system on a massive number of devices, something that other distributions can just dream about

[edit]

I'm getting lots of downvotes from people talking about off-the-shelf laptops and other generic x86-based projects. Let's be clear that the following post is in the context of other CPU architectures and platforms a little more exotic than your typical PC or laptop.

[original post]

I'm not here to bash Ubuntu, as I couldn't care less what platform people choose to run - even if that's Windows - just so long as I can run whatever I choose to run. However, with that said, I still have to disagree with your statement above (re: other distros can only dream of supporting a massive number of devices). Ubuntu supports less hardware than its originating platform, Debian. Less than Suse, Red Hat, and derivatives. Even Slackware and Arch support a considerable number of alternative architectures through 3rd-party ports. And stepping away from GNU/Linux for a moment, FreeBSD, OpenBSD and NetBSD all officially support more platforms than Ubuntu, too.

Support for multiple devices and architectures isn't something unique to Ubuntu - it's pretty typical in the FOSS community. In fact, back in the 90s and early 00s there used to be a running joke about people installing Linux on a whole plethora of odd devices just for kicks: talking kettles, toasters, stuffed animals(!!!), all sorts of things (bearing in mind this was before the IoT revolution).

reply

[–] nickpsecurity link

I get lots of second-hand hardware in the projects I do. Mainline Ubuntu works on all of them. The other Linuxes are more inconsistent. Fixing their issues with Google is less straightforward, too, if we're talking non-technical people. I've seen them do it with Ubuntu, especially where it takes an apt-get or something.

reply

[–] laumars link

Any of them MIPS or SPARC? Or how about something a little more exotic?

How is it everyone is overlooking that I'm talking about CPU architectures?

Seriously, what's with people these days playing on a few x86 PCs and maybe a Raspberry Pi and then making bold claims that their distro runs on everything? Let's talk about some real niche platforms that are non-trivial to port software to, please.

reply

[–] Thaxll link

It's bs, it supports more hardware since the kernel is more recent; good luck running Debian / *BSD on modern laptops.

reply

[–] laumars link

> it supports more hardware since the kernel is more recent

Kernel ABIs are pretty static so drivers can be backported for earlier kernels more easily than recompiling the entirety of Ubuntu Server for an unsupported CPU architecture - such as SPARC.

That's what I meant when I said Debian supports more architectures.

However, going back to your previous point about the age of Debian, you don't have to run the default repositories. If you run "testing" or "sid" then you can be just as up-to-date as Ubuntu, or even more bleeding-edge. In fact, it wasn't that many years ago that Ubuntu was effectively just a reskinned Debian + testing repositories (I'm talking pre-Unity, Upstart, etc). But at the end of the day, it's all FOSS, so anything Ubuntu runs can also be run on Debian, Arch, etc. It's just that Debian already ships compiled binaries for more alternative CPU architectures than Canonical does with Ubuntu. Which is why I said supporting different platforms isn't anything new to Linux nor unique to Ubuntu.

> good luck running Debian / *BSD on modern laptops

I have done. They worked fine. In fact, FreeBSD was my primary OS for a period of time, and Debian has always been my primary "debian-like" platform for all bar media centres (which do run vanilla Ubuntu). I have also run officially supported variations of Ubuntu as my primary OS for short periods of time as well. I've tried them all before I finally found the OS that felt right for me.

So I do have considerable experience backing up my claims :)

reply

[–] dopu link

I'm rather doubtful this is true. Any sources to back up your claim?

reply

[–] laumars link

Just personal experience playing with unusual hardware platforms over the last couple of decades. Usually the download pages for the respective distros list all the CPU architectures they support; Ubuntu needs a little more digging to locate their non-x86 downloads (which I believe are only ARM and POWER8). There used to be some ropy support for SPARC, but that was nearly a decade ago.

reply

[–] 8draco8 link

I don't understand why Ubuntu receives so many bad comments from the Linux community. Canonical is doing a great job of putting a fairly reliable system on a massive number of devices, something that other distributions can just dream about. In my opinion, Ubuntu is currently the best general-purpose Linux distribution for new and semi-advanced users.

reply

[–] newman314 link

One key item to note is that switching to the upgraded kernel path breaks live kernel patching at this time.

I was considering switching until I saw this caveat...

"For clarity, the Canonical Livepatch Service is only available and supported against the generic and lowlatency GA kernel flavours for 64-bit Intel/AMD (aka, x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) release. HWE kernels are not supported at this time."

https://wiki.ubuntu.com/Kernel/RollingLTSEnablementStack

Also, it's not clear if there is a different kernel/command for upgraded kernels on a server.

EDIT: looks like it's going to be "linux-generic-hwe-16.04"

reply

[–] fulafel link

Do people see evidence of the live patching system doing something?

For me, canonical-livepatch status --verbose has never shown me any fixes, running linux-image-generic on 16.04.

reply

[–] newman314 link

Not really so far. I enabled it a couple months ago but have not seen any changes.

It's not clear to me if upgrading to HWE would correctly disable the livepatching.

reply

[–] brudgers link

Just to clarify, that is how Ubuntu's LTS [Long Term Support] releases are intended to work. The 'point one' release fixes bugs in the initial release and keeps the same kernel. It winds up being the actual release that is supported 'Long Term'. Releases 'point two' and later get updated kernels...and potentially new bugs to go with new features.

I won't say that the End of Life illustration is easy to interpret, but it shows how Ubuntu releases work:

https://www.ubuntu.com/info/release-end-of-life

reply

[–] sp332 link

Hey, that's great. My biggest complaint from running 14.04 LTS for a couple of years was the lack of kernel upgrades. Fortunately it's not that hard to install a kernel package from a more recent Ubuntu on it, but I had to find out how to do it, and it's more manual than I was expecting for an LTS release.

reply

[–] simosx link

The https://wiki.ubuntu.com/Kernel/LTSEnablementStack page has the appropriate upgrade command in order to upgrade the kernel to the newer and supported version.

According to https://wiki.ubuntu.com/Kernel/LTSEnablementStack#Kernel.2FS... you are now at Linux kernel 4.4, and it will remain the same until the EOL of Ubuntu 14.04 (in 2019).

reply

[–] e12e link

Thank you for providing this summary. I assume this means that if one has a fleet of servers running 16.04 that one keeps up-to-date, but chooses not to update to .02, one would have to use install media for 16.04 (sans .02) when installing new/replacement servers to fit in with the existing fleet?

It's a little bit surprising coming from Debian stable releases, but makes sense.

reply

[–] seenitall link

No, just doing updates gets you all the benefits of the point release, as you would expect. If you want a newer kernel, install the hwe kernel when it appears, it will roll until the next LTS and stabilise then in line with 18.04 (it is basically later release kernels built on 16.04).

reply

[–] simosx link

Here is a summary of what is said in https://wiki.ubuntu.com/Kernel/LTSEnablementStack regarding the 16.04.x versions

1. If you are happy with how Ubuntu 16.04 works for you, you get to keep it and you receive support until 2021.

2. With Ubuntu 16.04.2, you get the option to switch to a new path of updated Linux kernels. If you do so, your Linux kernel will get updated every six months, until 2021.

For the first update, with Ubuntu 16.04.2, you can opt in to the 4.8 kernel that was used/tested in the development of Ubuntu 16.10.

In the subsequent update, with Ubuntu 16.04.3 (around July 2017), you will be updated to the Linux kernel that was used/tested in Ubuntu 17.04 (to be released in April 2017). And so on.

The command to switch you to the new path of updated kernels (updated every six months), is

sudo apt-get install --install-recommends xserver-xorg-hwe-16.04
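
If you also want the kernel side of the stack (and not just X), I believe the wiki pairs it with the matching kernel package:

    sudo apt-get install --install-recommends linux-generic-hwe-16.04 xserver-xorg-hwe-16.04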

reply

[–] simosx link

Here is HWE: https://wiki.ubuntu.com/Kernel/LTSEnablementStack - there was a change in the policy recently.

In a nutshell,

1. if you are happy with the current kernel in Ubuntu 16.04, then you can stay with this kernel (it's version 4.4) and it gets supported until 2021.

2. if you want to jump to the newly supported and tested (tested in 16.10) 4.8 Linux kernel, then there is a command described in https://wiki.ubuntu.com/Kernel/LTSEnablementStack that helps you upgrade. However, once you upgrade the kernel (and the X server stack, which are linked together), your Linux kernel will be upgraded every six months from now on, until 2021. The next kernel version update will be in July, and it will be whatever Linux kernel was released in Ubuntu 17.04.

reply

[–] fsaintjacques link

Does that mean you can have fairly recent kernels with LTS releases? If so, amazing. That was my biggest complaint about Ubuntu on servers.

reply

[–] StavrosK link

This is the first time I'm hearing about this, but the link would suggest that yes, this is what it means. Kind of a side thought, but I wonder if I can get an easily administrable WRT distribution (like Tomato is) on my DSL router that has the kernel's bufferbloat fixes.

reply

[–] kasabali link

What do you mean by standard feature? HWE kernels were already introduced after each Ubuntu release.

reply

[–] seenitall link

Ubuntu used to add a kernel to the LTS from each subsequent release, but they were separate packages. Now there will be just one HWE package that will roll until the subsequent LTS.

reply

[–] listic link

Unfortunately, Alternate release images are not published after 16.04.1.

reply

[–] rlpb link

Everything in 16.04.1 comes from the package archives. It's what you get by default from a particular image that changes. You can still get to whatever state you want regardless of which variant and point release installation image of 16.04 you use. However, doing things manually is of course not the same as being automatic.

reply

[–] compuguy link

So are you saying they won't make any more alternate releases for future releases?

reply

[–] listic link

Yes. It was that way for the 14.04 LTS, too.

reply

[–] hd4 link

I think the coolest thing introduced here is that the HWE kernel is going to become a standard feature of LTS releases going forward.

reply

[–] brudgers link

What that means is that all of the build scripts in Ubuntu 16.04 have been upgraded to Python 3 and building no longer has a dependency on Python 2. One way of looking at it is that Python 2 is not included with the current release of Ubuntu for the same reasons that MIT Scheme and PHP and Forth are not. The system does not require them.

reply

[–] tyingq link

Well, yes, but the interesting bit is that some notable organization just finished their move from 2 to 3.

reply

[–] brudgers link

Some notable organizations are planning not to move and more or less forking: https://opensource.googleblog.com/2017/01/grumpy-go-running-...

Python 2 will be with us for a long long time.

reply

[–] tyingq link

Interesting bit from the release notes:

Python 3: Python2 is not installed anymore by default on the server, cloud and the touch images, long live Python3! Python3 itself has been upgraded to the 3.5 series.

reply

[–] imglorp link

You can run the Sofware Updater any time you want, including setting it to run automatically.

https://www.ubuntu.com/download/desktop/upgrade

reply

[–] reefwalkcuts link

I actually run that every morning. So good to know :)

reply

[–] pilif link

No need. Just keep installing the regular updates to your OS.

These minor releases are just new installer images so that new users don't immediately have to download a huge chunk of data.

reply

[–] simosx link

Your Ubuntu will be automatically updated to 16.04.2 when the next package update kicks in. You probably had some updates around Tuesday which upgraded you to status "16.04.2" (run "lsb_release -a" in a terminal to verify). Yesterday it was only the ISO images that were released.

The important thing about 16.04.2 is that you can now easily decide to upgrade the Linux kernel from the original 4.4 version to the new 4.8 version. This 4.8 Linux kernel version was released in Ubuntu 16.10 (Oct 2016) and has been promoted to the new kernel for 16.04.2.

May sound complex :-). There is a nice graphic in https://wiki.ubuntu.com/Kernel/LTSEnablementStack that explains it well.

reply

[–] theandrewbailey link

If you can ssh into your machine, it will tell you in the initial welcome message. I've been running it on my server since soon after release (I've only ever updated), and I've just now logged into it:

Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-62-generic x86_64)

reply

[–] 8draco8 link

sudo apt update and sudo apt upgrade will do it for you. To check what version you are on, simply do cat /etc/os-release

reply

[–] runejuhl link

Or, for a more cross-distro approach, `lsb_release -a`, although it requires the `lsb-release` package installed.

reply

[–] 8draco8 link

Truly cross-distro is cat /etc/*release

reply

[–] reefwalkcuts link

Newbie Ubuntu user here. Currently I have 16.04 LTS (I don't know if mine is 16.04.1, but I downloaded and installed it the day 16.04 LTS officially released in April 2016). Should I upgrade to 16.04.2? If so, how? I mean, do I have to download the 16.04.2 installer, or is there an update command?

reply

[–] sp332 link

I ran with BTRFS for a while. It was pretty nice overall, but you have to be aware of which features are production-ready and which ones aren't. And the wiki isn't up to date either. I had to ask questions in the IRC channel because I didn't feel like wading through mailing list archives which is apparently the only place up-to-date info gets written down.

Edit: This is new. https://btrfs.wiki.kernel.org/index.php/Status I was hoping that it would grow into a more stable, user-friendly project. But RAID1 was broken for months, it would let you create a RAID5/6 volume even though the feature wasn't even finished yet, and I personally ended up with a filesystem that will crash the kernel when I try to read certain files. I recovered most of the data using a virtual machine that I could reboot quickly. Maybe I'll look at it again in a couple of years, see if someone is taking the project seriously.

reply

[–] luca_ing link

I've been using ZFS on all my desktops and my personal NAS for, I can't remember, probably 4-5 years.

Including ZFS root filesystem, and swap on a zvol.

I have to say, I really like it, and I'll use it again if I have to redo a machine.

Installing Ubuntu on a ZFS root filesystem is much more involved than merely running the installer. If you have never done it before, and are appropriately cautious, it'll take you half a day to follow the (very detailed and helpful) Wiki page. I can do it in less than an hour now.

-----

So far, only one problem (and not a bug, more of a misfeature): when one of the disks in the NAS died, I couldn't replace it with a new one, because the ZFS mirroring was using its default 2kb blocks (I forget the correct term). It can only do this on 2kb/sector HDDs. My new HDD had 4kb/sector.

I was forced to recreate the entire filesystem using a larger blocksize (ashift=12).

Luckily this worked without a hitch, thanks also to zfs send | zfs receive, but it still pissed me off.
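
For anyone facing the same migration, it boils down to something like this (pool and device names are placeholders):

    zpool create -o ashift=12 newtank mirror /dev/sdc /dev/sdd   # ashift=12 = 4K sectors
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs receive -F newtank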

reply

[–] simosx link

If you use LXD in Ubuntu, you can (and should) have your containers stored in the ZFS filesystem.

I have been doing that and did not notice any problems.

About LXD: https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012...
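
Setting that up is a one-liner, if I remember the LXD 2.0 flags right (pool name and loop-file size are arbitrary):

    sudo lxd init --auto --storage-backend zfs \
        --storage-create-loop 20 --storage-pool lxd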

reply

[–] alyandon link

I've been using btrfs in non-redundant and "raid1" (really just chunk duplication) setups for a while now. I've not had any major problems managing the data pools, nor catastrophic data loss. In fact, btrfs checksums detected corruption that I isolated to a bad RAM module on one of my machines, corruption that had gone undetected on ext4.

At this point I'm sticking with btrfs instead of going with zfs because of the flexibility for growing/shrinking volumes and adding/removing devices in a non-destructive manner.
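
That flexibility looks like this in practice (mount point and devices are placeholders):

    btrfs device add /dev/sdc /mnt/pool      # grow: a new disk joins the pool live
    btrfs balance start /mnt/pool            # spread existing chunks across disks
    btrfs device remove /dev/sdb /mnt/pool   # shrink: chunks migrate off first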

reply

[–] jlgaddis link

Yep, been using "ZFS on root" (with multiple pools) for a couple of months now on a new workstation.

No major issues to report and no minor issues that I can recall.

reply

[–] willtim link

I've been using ZFS on Linux for the past year on a home server. It has worked beautifully and is very simple to use. More importantly, it is considered stable. I would certainly not trust btrfs with my data; it simply isn't finished and at this point may never be.

reply

[–] acranox link

ZFS? Yes, many people. I'm sure you can read lots about it. I've been using it for over 6 months with no problem.

reply

[–] Siecje link

Anyone using ZFS with Ubuntu? Any problems? Anyone tried btrfs?

reply