A listing of random software, tips, tweaks, hacks, and tutorials I made for Ubuntu

My thoughts on Mir

EDIT 27/08/15: I apologize for the harshness of this post. I was quite upset about this move when I wrote this.

If you aren’t aware, Canonical is planning to write a new display server, competing with both X11 and Wayland, named Mir.

I’ll state my opinion right now: I really do NOT like this move. The rest of the post is the “why” of that opinion. I have not written all of the “why”, because some of the reasons I thought of lacked enough proof to back them up (such as Canonical becoming another Microsoft or Apple, trying to take over the Linux world ;)).

First, the segregation. Let’s assume that Mir is, as planned, “A … that is extremely well-defined, well-tested and portable.” (This is very hard to achieve; in fact, only a handful of software projects and libraries are that good.) This would cause a horrible segregation problem: application developers who write applications for Mir will exclude people who do not have Mir (a rather obvious issue, though). Most application developers use a toolkit, such as Qt or GTK+, which provides an abstraction layer that lets applications run on any display server (this does, of course, require patching the toolkits to support the new display server, but Canonical, IIRC, has promised to do this, at least for Qt), so this is less of an issue.

The bigger issue is with third-party graphics drivers. Both major GPU manufacturers (Nvidia and ATI) already have trouble supporting X11 in their drivers (though Nvidia considerably less so than ATI). Then along comes Wayland, an alternative to X11. Back in 2010, Nvidia clearly stated that they had no plans to support Wayland (they seem to have since changed their mind, or at least considered it), and ATI does not plan on supporting Wayland anytime soon. This is reasonable for the companies and for Linux: as long as X11 remains supported until both companies officially support Wayland, everything should be somewhat okay. But then along comes Mir, a company-led alternative to both Wayland and X11, for their operating system. Three different display servers for Linux, and both major GPU manufacturers already have trouble with one. This is just ridiculous. And anyway, just think of the users and distros trying to figure out which one they should use.

As I said before, I was assuming Mir is exactly what was planned (i.e. the perfect-world scenario). I already stated that it is extremely hard to actually meet the requirements they want, and Canonical is not exactly known for amazing abilities at efficient coding. Just look at Unity. Even GNOME 3 and KDE are faster than that! And GNOME 3 uses Javascript extensively (a language whose slowness I think we can all agree on). Unity is written in C, C++, and Vala, three languages that are quite fast (though mixing many languages can slow an application down). Please tell me: why is Unity so much slower than GNOME or KDE, which are at least as complex, if not much more so? (EDIT: Some people have said part of the slowness is due to Compiz and Nux. Nux, IIRC, is also developed by Canonical.) Now look at the Ubuntu Software Center and at Ubiquity, two other applications written by Canonical. The USC took ~10 seconds to load, compared to most other applications loading nearly instantly. Of course, they are written in Python, which, after programming in it for quite a long time (and with a lot of experience in other languages), I think I can say is not only a slow language but also a rather badly designed one. Canonical did not, of course, make Python; I just use it as an example of their poor decisions. But enough hitting on Canonical; let’s assume they are great coders now. How long do you think it would take to write a complete display server that is “extremely well-defined, well-tested, and portable”? Canonical is a rather small company; how do you think they will write an application (or library) that is “well-tested”? Or maybe we should ask: what defines “extremely well-defined” and “well-tested” (and “portable”)?

Lastly, let’s say they are able to accomplish their goal. Why do they need a new display server in the first place? Why can’t they just use Wayland? There is a section on the wiki about this, which I tried to read, but it was quite vague. All I could understand is that they wanted support for 3D input devices. So why don’t they just talk to the Wayland developers about this, and maybe help them implement it if progress isn’t going fast enough? Or, if they don’t want it in Wayland, just fork it; don’t start writing your own. “In summary, we have not chosen Wayland/Weston as our basis for delivering a next-generation user experience as it does not fulfill our requirements completely”. Oh, come on. Wayland is open source; you can change it if you need to, you know. Or, if they don’t want the changes, you can just fork it. I know I’m repeating my last sentence, but this is just ridiculous.

So to summarize, I’m not that crazy about Mir :P

I know I said things rather bluntly, and I’m expecting most of the reactions to this to be rather harsh, but I feel it was important to write it. I don’t write this because I hate Ubuntu; I really like the initiative, just not the execution (which is why I write these kinds of posts… maybe if enough people show their disapproval of their methods, they might change their minds ^_^). Also, if you think any of the claims I made are false, let me know; I’m not that closed-minded about it ;)

39 responses to “My thoughts on Mir”

  1. Alan Bell (@alanbell_libsol) April 17, 2013 at 10:16 pm

    There are no slow languages, there are no fast languages. That is a complete myth. You are never CPU-bound. There are programs that make too many disk accesses and are IO-bound. It is never ever ever the fault of the language; that just doesn’t happen. There are programs that inefficiently draw stuff to the screen, and you can do that in any language.
    Graphics cards are *good* at 3D processing now. Really good. Using a 2D representation of the desktop is just silly in terms of the amount of wasted silicon we are not exercising. Getting the OpenGL stuff to throw pixels around is logical; it can do it masses better than the CPU can. Graphics stuff should be shoved to the GPU, which is very fast for that. Application logic should be on the CPU and in memory as much as possible, which is very fast, and spinning-rust disks should be replaced by SSDs, which are fast for desktop use.
    Using Mir instead of Wayland is a pretty bad idea, but not for most of the reasons you have mentioned. It is a staggering duplication of effort, and stuff like colour calibration and text tracking zoom is already in Wayland and isn’t on the published roadmap for Mir even though the developers seem to think it is a good idea to do one day.

    • Anonymous Meerkat April 17, 2013 at 10:34 pm

      You are totally right about the languages, but, what I mean by a “fast language”, is one that _can_ be faster than another (for example, compare C and python, if you wrote extremely optimized code in python, it would still be slower than extremely optimized code in C).
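A rough sketch of the gap being described, using only the standard library (a hypothetical micro-benchmark, not a rigorous one; absolute timings vary by machine and Python version): an explicit Python-level loop versus the same reduction done by `sum()`, whose loop runs in C inside CPython.

```python
import timeit

# Summing 0..9999 with an explicit Python-level loop: every iteration
# is dispatched through the bytecode interpreter.
py_time = timeit.timeit(
    "total = 0\nfor i in range(10000): total += i", number=200)

# The same reduction done by the C-implemented builtin.
c_time = timeit.timeit("sum(range(10000))", number=200)

# The builtin is typically several times faster on CPython.
print(f"python loop: {py_time:.4f}s, builtin sum: {c_time:.4f}s")
```

Both versions run the "same algorithm"; the difference is how much of the work happens inside the interpreter loop versus compiled C.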

      Yeah, of course (assuming you want to exclude computers with bad graphics cards, or people who can’t get theirs working). Though sometimes (if you don’t have the best graphics card), it can be slower if you use weird algorithms (such as loading a texture for each pixel, or doing per-pixel calculations without shaders; that duplicates work: not only does the CPU compute the image, the GPU recomputes the same image, and yes, IIRC, I have seen applications that do this).

      I totally agree about the duplication of effort (I did write about that in the last section of the post)

      • bochecha April 18, 2013 at 11:27 am

        > “for example, compare C and python, if you wrote extremely optimized code in python, it would still be slower than extremely optimized code in C”

        As Alan said, this is completely wrong.

        Otherwise, how would PyPy (a Python interpreter written in Python) be faster than CPython (the reference Python interpreter, written in C) ?

        Languages are not inherently faster or slower than others. Programmers make mistakes, though, in every language.

      • Anonymous Meerkat April 18, 2013 at 5:09 pm

        “Otherwise, how would PyPy (a Python interpreter written in Python) be faster than CPython (the reference Python interpreter, written in C) ?”

        Maybe these could explain? From the second link:
        “Current PyPy versions are translated from RPython to C code and compiled”
        That should explain the speed, right? :)

      • Alan Bell (@alanbell_libsol) April 18, 2013 at 9:46 pm

        If you wrote the same code doing the same exact stuff in terms of disk and internet access in two different languages then you would not be able to perceive the speed difference. It would be there, in the order of milliseconds or microseconds. It might be the case that C programmers tend to write more efficient algorithms than python programmers, I don’t know if that is true but it sounds plausible. There are some things that are best done in a low level language, things that are done very very often, widget toolkits perhaps. You might want to optimise the heck out of drawing rounded corners on buttons, but the act of deciding what buttons to put where is suitable for any language.

  2. Fitoschido April 17, 2013 at 10:33 pm

    My policy is: not talking any bullshit about Mir until I try it.

  3. Roger April 17, 2013 at 10:50 pm

    You haven’t even covered how well Canonical plays with others. They use a dvcs no one else uses and a bug tracker no one else uses. They have a history of decision making behind closed doors. They require a CLA that is one sided – they can take your code proprietary but you can’t do the same to theirs. Even requiring a CLA is extremely harmful.

    Essentially if you could contribute and care about freedom, then Canonical are a very suboptimal choice to spend your time and effort with.

    • Jeremy Bicha April 17, 2013 at 11:34 pm

      Every distro uses a bug tracker no one else uses.

      Do you also condemn the Free Software Foundation for requiring copyright assignment? They use the same license: GPLv3 (or LGPLv3 in some cases).

      • Roger April 18, 2013 at 3:04 am

        I don’t mean the instance – I mean the bug tracker software. Nobody else uses Launchpad while many use bugzilla for example.

        There is a big difference with the FSF. They guarantee that should they change the license it will always be to a free (as in freedom) license. For example this was needed to move from GPL2 to GPL3 since GPL3 is not compatible with GPL2. Canonical makes no freedom guarantees. I don’t think the FSF should *require* a CLA, but should make it optionally available for people who want to do so. About a month ago LWN had articles and commentary on FSF projects, CLA etc.

      • bochecha April 18, 2013 at 11:40 am

        In addition to Roger’s reply, there is a huge difference between the FSF and Canonical: the former is a non-profit, the latter a for-profit corporation.

        Even if we assume we can trust Canonical to be excellent FOSS citizens and never do anything bad with the code (like make it non-free), they can be bought by a much worse company who could be less honest.

        The FSF can not be bought.

      • Jeremy Bicha April 18, 2013 at 5:39 pm

        Yes, Canonical is a company, but they give away almost everything they make as 100% open source. They have never made a profit. Since Canonical gives away more than they make, I see them as being more philanthropic than, for instance, Red Hat, which makes millions in profit every year. (Obviously both companies are valuable for the open-source community, and Red Hat operates on a different scale.)

      • Jeremy Bicha April 18, 2013 at 5:52 pm

        Roger, Debian uses its own custom bug tracker. Arch Linux is the only distro I know that uses Flyspray.

        Linux Mint uses Launchpad, and quite a few other projects do as well.

        It almost sounds like you’re saying that a true open source project would use Bugzilla for tracking bugs which is a bit absurd. While git is a great choice for a new open source project in the 2010s, git didn’t even exist when Ubuntu was established and many popular open source projects have not converted to it (Mozilla and WordPress are just two examples).

        GNOME has a history of making decisions with outsiders having almost no influence. Who decided on GNOME Shell’s design, or Nautilus 3.6’s UI redesign, or that the time was right this winter for a GNOME Classic mode to be built on top of GNOME Shell? But I’m not condemning GNOME; all organizations make their own decisions while considering the best interests of their customers.

  4. David Sugar April 17, 2013 at 11:00 pm

    Actually Python has some very specific issues. I have noticed that while other scripting languages (Perl, for one example) seem to degrade slowly with CPU performance, Python often falls off a cliff. Hence, for example, a yum update that downloads and processes a dependency list in a few seconds on a P4 1.8GHz system takes 20 minutes or more to do the exact same task on a C3, where nothing else (including complex Perl scripts) slowed down on nearly such a scale. I think this specific collapse has to do with the way Python scatters its objects into hashes; especially on chips with very small caches (like the C3), it will never find adjacent elements in the same memory page, and hence becomes really slow in any kind of iteration. So yes, clearly, sometimes design decisions do create unexpectedly CPU-bound languages.
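The per-object overhead behind this comment can be glimpsed in CPython itself; a small sketch (exact sizes are an assumption that varies by interpreter version and platform):

```python
import sys

# In C, an int is typically 4 bytes stored inline. In CPython, every
# integer is a full heap-allocated object (refcount + type pointer +
# payload), reached through a pointer indirection.
print(sys.getsizeof(5))         # roughly 28 bytes on 64-bit CPython 3.x

# Python ints are arbitrary-precision, so large values grow further.
print(sys.getsizeof(10 ** 50))  # larger still
```

Each boxed object can live anywhere on the heap, which is what makes tight iteration cache-unfriendly compared to a contiguous C array of machine integers.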

    In respect to their specific choice of Python, it must be remembered that, before Ubuntu, Canonical was principally known as a Python shop. They also focused very heavily on test-driven programming methodologies in Python.

    In respect to Mir, I do not expect much in the way of “portability”, at least as most might see it, other than that it will of course build on Ubuntu ARM as well as Ubuntu x86. My impression is hence that, like Unity, it will remain a Canonical/Ubuntu-only thing, and it seems much more about trying to enclose the (Ubuntu) platform with applications written specifically for it. Some of this can also be seen in their other initiatives, such as Quickly, and of course Unity.

  5. Ian NIcholson April 17, 2013 at 11:15 pm

    I hope you’re aware that since Mir uses Android drivers, your concern about driver support is totally invalid. You might as well complain that Google’s fragmented the linux driver ecosystem.

    • Anonymous Meerkat April 17, 2013 at 11:42 pm

      I’m talking about the desktop version of Mir :)

    • zeebra May 26, 2013 at 4:00 am

      Google is contributing far less than they are taking. Ubuntu is the dream princess compared to that! So yes, Google actually literally split Linux.

      The main problem of course is that Google does not use GNU, which just shows that Linux without GNU doesn’t have any freedoms in it and not much open source / transparency.

      I like Google though, but perhaps I overestimated their good intentions.
      I don’t trust Ubuntu though. They may have some bad intentions.

      And as to the slowness mentioned above, both Ubuntu (and Mint) and Fedora are slow for some reason. If anyone knows why that is, please shout it!

  6. Ian NIcholson April 17, 2013 at 11:19 pm

    Second thought: Mir exists because forking Wayland would have been too much work. If Canonical had forked Wayland, the fork would have quickly diverged from the main branch, and development would have been just as difficult. Your concern about application portability is totally unfounded as well, since no developer is going to code specifically for Mir, they’ll use QML which is supported on Windows, OSX, and even eventually wayland-based distros.

    • Anonymous Meerkat April 17, 2013 at 11:45 pm

      So you’re saying that writing your own application instead of learning how the other works and doing minor patches is easier??

      Well, yeah, I wrote that it was a small issue, (to which I would probably be the only one concerned :P) and that most people would use a toolkit (such as Qt).

      • Ian NIcholson April 18, 2013 at 12:05 am

        You’re incorrect that it would require “minor” patching. There are several significant portions of Wayland that would have to be rewritten, basically transforming it so much that the costs of forking outweigh the benefits. I personally wish they would have forked it, but I can see why they didn’t. :)

    • Jef Spaleta April 17, 2013 at 11:54 pm

      The question becomes… how big will the vendor specific patchsets be at the toolkit level that Canonical needs to carry forward to support mir? It’s not clear that upstream qt is going to be interested in carrying mir or Ubuntu SDK performance enhancing specific patchsets. Nor even if such patchsets are going to be submitted to upstream. Has anything mir specific with regard to qt support been submitted to upstream yet for review and potential integration into upstream qt codebase?

      • Anonymous Meerkat April 18, 2013 at 12:01 am

        Right… Depends on the toolkit, I presume. I’d think that there’d be some kind of API to do this, so the patches wouldn’t need to completely rewrite the codebase. About Qt, I have no idea if it’s going upstream, but canonical is making their own extension to Qt (QtMir), which allows it to use Mir instead of X11. I’m not sure if this requires patching or not though.

  7. trampster April 18, 2013 at 12:51 am

    When you write M$ you automatically change from someone who is worth listening to into a raving fanboy.

    Language choice != speed, at least not when it comes to application development.

    Compiz is slow and buggy, and Ubuntu will be much better off without it; if that means creating Mir, then I’m all for it.

    • Anonymous Meerkat October 16, 2013 at 5:18 am

      Yes, my bad about M$ (though, to be honest, that part was intended to be a joke, but I see that it could be interpreted that way), I’ll change it, thanks for noting :)

      But I’d have to disagree with your statement. Yes, it can (and will) impact speed. Let me give you an example:

      C:
      int xyz = 5;
      // some random code here

      Some interpreted language:
      xyz = 5
      // some random code here... let's say it's the same as in C

      Looks similar, right? So they should have very similar speeds, right? Wrong. C will be rather fast (push 4 bytes onto the stack, load it, add 5, do the random stuff, load xyz again, add 1).
      The interpreted language, however, will probably do something along these lines. It adds a new language-specific object called xyz to a hashtable (or worse, a plain unhashed map). Notice that this object can be rather large, as it usually has to be able to hold any kind of value; note also that the name “xyz” itself is stored, so a very long variable name could hurt speed further, depending on whether the table is hashed. It then assigns the numerical value 5 to it; this isn’t a simple assembly-level store, since the value has to be converted into that object first, so even this step is slower. After the random code, it has to look up the xyz object again (unless the interpreter is well optimized, in which case this step is skipped), then add the numerical value 5 to it; and if it’s like Python, that isn’t a simple, fast hardware-based assembly operation but a rather complex software one, as numbers can theoretically be infinite in Python.

      I know this post is way too long, but I just wanted to show why interpreted languages are slower (and always slower) than compiled languages. Sure, if Python were compiled to machine code (as PyPy does), it’d be much faster (and could possibly match the speed of plain C), but then it wouldn’t count as “interpreted”.
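For what it's worth, the name-lookup behaviour described above can be inspected with CPython's `dis` module; a sketch (exact opcode names differ slightly between Python versions):

```python
import dis

# Compile the interpreted-language snippet at module level, where names
# live in a namespace dict, and list the bytecode CPython would execute.
code = compile("xyz = 5\nxyz = xyz + 5", "<example>", "exec")
for instr in dis.get_instructions(code):
    print(instr.opname, instr.argrepr)
# STORE_NAME / LOAD_NAME are dict operations keyed by the string "xyz",
# and the addition dispatches through the generic object protocol rather
# than compiling down to a single hardware add.
```

Inside a function, CPython instead uses LOAD_FAST/STORE_FAST array slots, which is one of the optimizations the comment alludes to.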

      I think I figured out what my next post will be about…. :P

  8. lxsameer April 18, 2013 at 5:09 am

    As much as I agree with you on Mir, I totally disagree with you on languages; comparing languages is not only about performance. Yeah, Python is way slower than C, because unlike C it uses an interpreter to execute its code, so comparing a compiler-based language with an interpreter-based language in a performance context is not a good idea. Compare Python with PHP, Perl, Ruby, etc., and then you can talk about performance. Also bear in mind that each language has its own goals, and performance is not the main goal of Python.

    Anyway, I think the Mir idea was purely a strategy for Canonical to advertise itself and be more like MS and Apple. These days every company wants to be unique. I’m sure that Mir will end up just like Unity (a piece of useless software). It’s obvious that Canonical has no technical perspective on what makes Mir special; “A … that is extremely well-defined, well-tested and portable.” is bullshit and has no technical value. (sorry for bad English)

    • Cynic April 18, 2013 at 1:41 pm

      Languages that run on a VM and are 1 or 2 orders of magnitude faster than Python: Java, C#, Javascript, Clojure, F#, Scala…
      Python isn’t a good fit for desktop applications.

  9. Chris Halse Rogers April 18, 2013 at 6:49 am

    See also my post on this and its three (so far) follow-ups.

    While that doesn’t cover the “I don’t trust Canonical to produce good code” aspect, I’m pretty sure it covers all your other concerns.

  10. Bartosz Zasieczny April 18, 2013 at 8:41 am

    Well, with Wayland they have no control over design, features, and bugs. Wayland is not testable at all, and Wayland development is extremely slow.
    Slow Unity? Right – because it is “glue” between Compiz and GNOME. They took over Compiz, which is not the best piece of software, as it turned out. They were only reusers of the code, and they were blamed for that. Now they are trying to create their own OS for people, with their own tools and ideas. It’s a 180° change – now they are designing things from scratch.
    Remember – they want to make something that is usable for most people. That involves responsible, fast development and a lot of design – things that lots of open-source projects lack (e.g. GTK, which sucks, even in version 3.x, when it comes to app development). That’s why Android uses only the Linux kernel…

  11. Simion Ploscariu April 18, 2013 at 9:04 am

    Hi, I recommend reading this; it is about GTK3/GNOME 3 vs other DMs and Red Hat. About Unity and Mir, I think we should wait and see.

  12. Andrew April 18, 2013 at 2:37 pm

    What does “3D input device” even mean? I honestly don’t see the point of Mir. I’ve tried thinking about it, and I can’t find one reason it should exist.

    • Anonymous Meerkat April 18, 2013 at 5:32 pm

      I’m not exactly sure, but I think (judging from the name) it would capture input in 3D instead of 2D – such as the Xbox Kinect. And anyway, I can’t see why Wayland would not want 3D input support, as it’s really the future of input (in other words, I agree with you: Mir shouldn’t exist).

  13. Fazil Abdul Lathif April 18, 2013 at 6:26 pm

    I don’t get the part where Canonical becomes like Apple or Microsoft. Canonical releases stuff in the open, although they develop it in a closed environment. If Mir is good enough, why can’t others use it? I am not saying they will be able to complete it on time, and I am not saying that Mir will solve all the graphics issues in Linux distros in general. But the fact that the drivers of Android devices will be compatible with Mir is a big advantage. Open-source development is chaos. No one should predict what is right and wrong – there is no high-level design. It is people’s choices that will decide what stays relevant.

    Think about the Linux kernel (the biggest open-source project in the world). It became so popular because it had the GPL; otherwise it would have remained irrelevant and probably would only have supported the 386 machine that Linus owned. Everyone is constantly experimenting with Linux nowadays, and the best things make it into the kernel maintained by Linus. Choices are good in open source. Fragmentation is good in open source. It isn’t the most efficient development cycle, but it does create wonderful things.

    My favourite open-source application is Blender 3D. It began as a closed application, but now it has become one of the most creative applications in open source, all thanks to the great community and wonderful developers.

    • Anonymous Meerkat April 18, 2013 at 7:37 pm

      Right, that part was more or less a joke, but the point was this (it’s quite flawed, of course, which is why I didn’t include it in the post):

      Since Ubuntu is going to use Mir, and Ubuntu is the most popular Linux distro out there, there is no question that many Linux users will use Mir. Now, Canonical said they had contacted the major GPU companies (e.g. NVidia & ATI) so that they would make drivers for Mir. Say the companies were overloaded and just thought: “okay, let’s just make drivers for Mir, since barely anyone uses X or Wayland, and it costs too much to support them all” (which could happen). With that, X and Wayland would be left out, making them secondary for everything (since they couldn’t support the major GPUs well enough… honestly, just look at the state of the open-source drivers compared to the proprietary ones). Distros would then be pretty much forced to change to Mir, and eventually Canonical would sit at the top of most distros, with the power to do whatever they want (if they forbade other projects from using Mir, most distros would need to either remove every single version using Mir and restart, or just shut down, making Ubuntu the complete leader), creating a disaster. Microsoft & Apple did very similar things (and continue doing them).

      This is why I don’t believe fragmentation is good in this regard, as it pretty much NEEDS closed-source technologies (i.e. 3rd party drivers) to really succeed. Unless, of course, you don’t need any kind of fancy graphics card stuff.

  14. Fazil Abdul Lathif April 18, 2013 at 6:37 pm

    One more point… I had always dreamt of a time when a distro would adopt Qt and follow the design guidelines of GNOME. I mean, for me, Qt apps have always been responsive. Ubuntu is following that path, and it is great.

    • Anonymous Meerkat April 18, 2013 at 7:41 pm

      I would agree, except that Ubuntu is going the QML route, and since that’s just an extended version of Javascript, I’m not sure it’s going to improve responsiveness; it might even make things slower.

      • Jef Spaleta April 18, 2013 at 7:54 pm

        I’d caution you about trying to make responsiveness arguments across UI re-implementations that jump across toolkit boundaries. You can certainly make the comparisons, and even generate repeatable numbers that say one version of the application is less responsive than the other, but you cannot necessarily draw conclusions about toolkit performance specifically versus any other potential slowdown. For complex, featureful applications, relative responsiveness is going to be a difficult comparison to make and lay claim to a deficiency in one toolkit’s design relative to another.

        Unow and Unext, while both named “Unity”, are going to be some finite distance apart with regard to featuresets and functionality… it won’t just be a toolkit change. It won’t be a clean comparison between toolkit responsiveness no matter how it stacks up. Unext is also going to be functionally different in its internal plumbing to some degree beyond just switching out GTK for QML, is it not? I fully expect Unext, as a young codebase, to have significant optimization opportunities in it after its first full public release in the 13.10 timeframe.

  15. jeff April 19, 2013 at 7:56 pm

    There are no such things as inherently fast and inherently slow languages. Only smart implementations vs bad programming practices. If what you said about Python was true, the current development version of Pitivi wouldn’t start in *2 seconds flat*. And let me tell you that Pitivi does a sh!tload more stuff than your average application. A simple Python + GTK3 application I made to test drawing some CSS-themed GTK widgets starts in less than 1/10th of a second.

  16. Zachary Bittner April 21, 2013 at 4:53 am

    1. x11 is broken
    2. Wayland will likely not be supported by nvidia and will not be supported by ati
    So, you have three options.
    1. Try to force nvidia and ati to support Wayland
    2. Keep using broken x11
    3. Write your own new display server.

    According to Canonical employees, Nvidia is working with Canonical on Mir, and it will be supported.

    that is all.

  17. Pingback: About the “Mir hate-fest” | lkubuntu
