lkubuntu

A listing of random software, tips, tweaks, hacks, and tutorials I made for Ubuntu

Openlux 0.2 beta – Animations, iOS port

I wrote openlux around 2 and a half weeks ago, as a simple, libre alternative to f.lux that addresses a few issues I’ve encountered with it. I’ve since used it every day, and I’ve actually noticed an improvement in my sleep!

However, my iPad still uses f.lux (or it did, until today, at least). No, in this case, I’m not worried about the fact that f.lux is proprietary (it’s an iPad), but earlier, when my sleep was really messed up (and by messed up, I mean I was going to sleep at 7-8am), f.lux would automatically switch to 3400K (instead of 2300K), which definitely didn’t have a positive impact on my sleep. Also, it only goes down to 2300K, doesn’t allow much customizability, doesn’t always work how I want it to, etc.

So after spending quite a long time (basically ever since I released the first version of openlux) working on the port, it finally works!!! It doesn’t work as well as I wanted it to (multiple colors output the same value, compressing the color range … I tried lerping values, but it ended up giving garbage), but at least it works!

Animations literally took about the last hour of developing this version (in other words, barely any time at all, compared to the time needed to develop the iOS port), since, luckily, I only encountered one bug while making it. The point of animations is not for visual bling, but rather to make it easier on the eyes if it’s run automatically (e.g. via cron).
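For example, a crontab along these lines (the times, binary path, and temperatures are just placeholders, and you may need to set DISPLAY so openlux can reach your session) would ease the screen down in the evening and reset it in the morning:

# Hypothetical crontab entries -- adjust times, path, and temperatures to taste
30 21 * * * DISPLAY=:0 /usr/local/bin/openlux -k 2300 -a 60000 # fade to 2300K over one minute at 21:30
30 7  * * * DISPLAY=:0 /usr/local/bin/openlux -i               # reset in the morning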

Other than those, there are a few minor features, such as optional relative adjustment of colors (“-b 10” will set the blue channel to 10, “-b +10” will add 10 to the blue channel, and “-b -10” will subtract 10), and saving/resetting gamma values (mainly just a by-product of working on the iOS port).

If anyone would be interested in testing this on their iDevices, I would really appreciate it ^^ Though it works fine on my 1st generation iPad, I don’t know if it will work on other devices too. I wrote instructions on how to compile and run it here: https://github.com/AnonymousMeerkat/openlux/wiki/Compiling-for-iOS :) I’m not aware of this being able to cause any permanent damage to your device (my device works fine now, even after the display being severely messed up multiple times), but if you’re scared, stick with f.lux for now. Quick note: it doesn’t work on iOS <4, since it needs to retrieve the gamma table (which iOS versions <4 don’t support).

To wrap up, here are a few examples of the new features that come with openlux 0.2:

openlux -k 1000 -a 10000         # Animates to 1000K in 10 seconds (10000 milliseconds)
openlux -k 1000 -a 100000 -d 100 # Animates to 1000K in 100 seconds, with a delay of 100 milliseconds per "frame" (less CPU usage)
openlux -k 1000 -g +10           # Sets the color temperature to 1000K, but adds 10 to the green channel
openlux -R                       # Resets to the last saved gamma table (openlux automatically saves the gamma table the first time it's run per boot)
openlux -s                       # Saves the gamma table

Follow up on the non-windowing display server idea

Note: I’m sorry, this post is a bit of a mess.

I wrote a post 2 days ago, outlining an idea for a non-windowing display server — a layer that wayland compositors (or other programs) could be built upon. It got quite a bit more attention than I expected, and there were many responses to the idea.

Before I go on, I wish to address a few things that weren’t clear in the original post:

The first is that I am not an ubuntu developer, and am in no way associated with Canonical. I am only an ubuntu member :) Even though I don’t use ubuntu personally, I wish to improve the user experience of those who do.

Second is a point that I did not address clearly in the original post: One of the main reasons for this idea is to enable users to modify the video resolution, gamma ramp, orientation, brightness, etc. DRM provides an API for doing these operations, however, AFAIK, you cannot run modesetting operations on a virtual terminal that is already running an application that has called video modesetting operations. In other words, you cannot run a DRM-based application on an already-running wayland server in order to run a modesetting operation. So, AFAIK, the only way to enable an application to do this is to write a sort of “proxy” server that handles requests, and then runs the video modesetting operations.
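To make the constraint concrete, here’s a minimal C sketch (the device path is an assumption, and it needs libdrm to build): while a compositor already holds DRM master on a device, a second process simply cannot become master, so it has no way of issuing modesetting calls such as setting the gamma ramp on its own.

/* build: gcc drm_master_check.c $(pkg-config --cflags --libs libdrm) */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    /* /dev/dri/card0 is just the usual path on a single-GPU machine */
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Modesetting ioctls (mode, gamma ramp, ...) require DRM master
     * status, and only one process per device can be master at a time.
     * If a compositor is already running on this device, this fails. */
    if (drmSetMaster(fd) != 0)
        fprintf(stderr, "drmSetMaster failed: %s\n", strerror(errno));
    else
        printf("we are DRM master, modesetting would be allowed\n");

    close(fd);
    return 0;
}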

Since I am currently confusing myself re-reading this, I’ll try to provide a diagram in order to explain what I mean.

If you want to change the gamma ramp, for example, this is impossible:

[Diagram: a DRM client alongside a running wayland compositor]

So with the display server acting as a proxy of sorts, it becomes possible:

[Diagram: a DRM client going through the display server, alongside the wayland compositor]

This is also why I believe that having a server rather than a shared library is crucial. A shared library would allow for abstraction over multiple backends, however, it doesn’t allow communication with more than one application. A wayland compositor can access all of the functions, yes, but wayland clients cannot.

The third clarification is that this is not only meant for wayland. Though this is the main “client” I have in mind for this server, it isn’t restricted to only wayland. The idea is that it could be used by anything, for example, as one response pointed out, xen virtualization. Or, in my case, I actually want to write clients that use this server directly, without even using a windowing server like wayland (yes, I actually have a good reason for wanting this XD ). In other words, though I believe that the group that would use this the most would be wayland users (hence why I wrote the original post tailored towards this), it isn’t only meant for wayland.

There were a few responses saying that wayland intentionally doesn’t support this, not because of the reason I originally suspected (it being “only” a windowing protocol), but because one of wayland’s main goals is to let the compositor have full control over the display, and make sure that there are no flickers or tearing etc., which changing the video resolution (or some other modesetting operations) would undoubtedly cause. I understand and respect this, however, I still want to be able to change the resolution or gamma ramp (etc.) myself, and suffer the consequences of the momentary flickering or whatever else. Again though, I respect wayland’s decision in this aspect, so my proposal is now this: make this an optional backend for wayland compositors. Instead of my original proposal, which was to build wayland compositors on top of this (in order to help simplify the stack), have this as an option, so that if users wish to have the video modesetting (etc.) capabilities, they can use this backend instead.

A pretty large concern that many people (including myself) have is performance. Having an extra server on the stack would definitely have an impact on performance, but the question is how much.

So with this being said, going forwards, I am currently working on implementing a proof-of-concept prototype in order to have a better sense of what it entails, especially in regards to performance. The prototype will be anything but production-ready, but hopefully will at least work … maybe XD .

Idea: Non-windowing display server

For the TL;DR folk who are concerned with the title: It’s not an alternative to wayland or X11. It’s a layer that wayland compositors (or other programs) can use.

As a quick foreword: I’m still a newbie in this field. While I try my best to avoid inaccuracies, there might be a few things I state here that are wrong, so feel free to correct me!

Wayland is mainly a windowing protocol. It allows clients to draw windows (or, as the wayland documentation puts it, “surfaces”), and receive input from those surfaces. A wayland server (or “compositor”) has the task of drawing these surfaces, and providing the input to the clients. That is the specification.

However, where does a compositor draw these surfaces to? How does the compositor receive input? It has to provide many backends for various methods of drawing the composited surface. For example, the weston compositor has support for drawing the composited surface using 7 different backends (DRM, Linux Framebuffer, Headless [a fake rendering device], RDP, Raspberry Pi, Wayland, and X11). The amount of work put into making these backends work must be incredible, which is exactly where the problem lies: it’s arguably too much work for a developer to put in if they want to make a new compositor.

That’s not the only issue though. Another big problem is that there is then no standard way to configure the display. Say you wanted a wayland compositor to change the video resolution to 800×600. The only way to do that is to use a compositor-specific extension to the protocol, since the protocol, AFAIK, has no method for changing the video resolution — and rightfully so. Wayland is a windowing protocol, not a display protocol.

My idea is to create a display server that doesn’t handle windowing. It handles display-related things, such as drawing pixels on the screen, changing video mode, etc… Wayland compositors and other programs that require direct access to the screen could then use this server and trust that the server will take care of everything display-related for them.

I believe that this would allow for much simpler code, and add a good deal more power and flexibility.
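To make the idea a little more concrete, here is a purely hypothetical sketch of what a couple of requests in such a protocol could look like (nothing here exists; every name and field is made up, it only illustrates that clients would send display-level requests rather than windowing requests):

/* Purely hypothetical wire structures for a non-windowing display server. */
#include <stdint.h>

enum ds_request_type {
    DS_REQ_PRESENT_BUFFER, /* "draw these pixels on the screen" */
    DS_REQ_SET_MODE,       /* "change the video mode" */
    DS_REQ_SET_GAMMA       /* "change the gamma ramp" */
};

struct ds_set_mode {
    uint32_t output_id;    /* which monitor */
    uint32_t width, height;
    uint32_t refresh_mhz;  /* refresh rate in millihertz */
};

struct ds_set_gamma {
    uint32_t output_id;
    uint16_t red[256], green[256], blue[256];
};

struct ds_request {
    uint32_t type;         /* one of ds_request_type */
    union {
        struct ds_set_mode  mode;
        struct ds_set_gamma gamma;
    } u;
};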

To give a more graphic description (forgive my horrible diagramming skills):

Current Stack:

[Diagram: the current wayland stack]

Proposed Stack:

 

[Diagram: the proposed stack, with the display server in between]

I didn’t talk about the input server, but it’s the same idea as the display server: have a server dedicated to providing input. Of course, if the display server uses something like SDL as the backend, it may have to also provide the input server, since the SDL library, AFAIK, doesn’t allow one program to access another program’s input.

This is an idea I have toyed around with for some time now (ever since I tried writing my own wayland compositor, in fact! XD), so I’m curious as to what people think of it. I would be more than happy to work with others to implement this.

Using Openlux to help your sleep and/or relax your eyes

If you are familiar with research suggesting that blue light affects your sleep, you might also be familiar with a (free!) program named f.lux. I use it on my iDevices (I used to use it on my computers too), and it works great … except for a few issues.

The first is CPU consumption. Seriously, this software takes up a lot of CPU. That was the main reason behind ditching xflux (the X11 edition of the software). It also doesn’t entirely block out blue light, even at the lowest color temperature it allows (this is true for the iOS version too). There were a number of other issues that became annoying over time (forced very long animations, a daemon that rarely ever works as intended, sometimes the software doesn’t even work at all, mouse cursor being left entirely out of the picture, etc.). These would (probably) all be simple to fix …. however, it’s free as in price, not as in freedom. The software is closed-source.

Openlux is a very simple open-source MIT-licensed clone I wrote that tries to address these issues (minus the mouse cursor issue, that one is a bit more complex). For now, it doesn’t contain as many features as xflux does, but it is only a first release. Animations and the lot will come later :)

I haven’t worked on packaging yet (if anyone wishes to spend some time doing this, that would be greatly appreciated!!), but for now, visit https://github.com/AnonymousMeerkat/openlux for download and compilation information (sorry for the mess in main.c, I will get to that later!).

Here are a few usage examples:

openlux                      # Sets the screen color temperature to 3400K (the default)
openlux -k 1000              # Sets the color temperature to 1000K
openlux -k 2000 -b 0         # Sets color temperature to 2000K, but removes all blue light
openlux -k 2000 -b 255       # Ditto, but blue is set to 255 (maximum value, gives the screen a magenta-ish tone)
openlux -r 130 -g 150 -b 100 # Gives the screen a dark swamp green tint (Kelvin value is ignored)
openlux -k 40000             # Sets the screen color temperature to 40000K
openlux -i                   # Resets the screen color temperature

I personally like using openlux -k 10000 during the day (very relaxing for the eyes!), and openlux -k 2300 -b 40 during the night.

I hope this can be useful for you!! If you have any issues, suggestions, feedback, etc. (even if you just want to say thank-you — those are always appreciated ^^), feel free to write a comment or send me an email!

The importance of freedom in software

Software license agreements (EULAs) are generally considered little more than a confirmation of whether or not the user really wants to install said software. Heck, for all that most users care, it could read “Do you wish to install this software?” and their overall reaction would be approximately the same. In fact, I often catch myself using the “I decline” button when I realize that the software is indeed useless.

Of course, in the back of our minds, we know that we really should read it …. but, come on, we have a life to live. We can’t spend it reading license agreements! YOLO.

Many software developers know this fact, and capitalize on it. One good example would be a company named after a fruit that develops smartphones. Have any of you ever read the 60-page license agreement on a tiny screen, just to install the next Flappy Bird?

I’m no different. I’ve probably only read 3 (proprietary) license agreements in my entire life… and I’ve installed hundreds of proprietary programs.

I’ve also found myself accustomed to thinking it’s illegal to share software with my friends. The idea of inspecting or modifying how a proprietary program works (through reverse engineering) feels very risky and only borderline legal. And, actually, both are true in most cases.

For many users, this doesn’t seem like an issue. Most users, and in fact a lot of programmers too, wouldn’t check the source code of a program they are running. And, to be honest, most users would rather just link to the website of the software anyways, even if the software would allow itself to be shared.

However, just because these freedoms are rarely used, it doesn’t mean that they are useless. Think of a self defense class. Unless you’re in a more violent neighborhood, chances are that you will very rarely need to use it. But when you do, you will be really happy that you did invest the time to learn it. After the Snowden leaks, many people started accusing software of sending data to the NSA. Is this true? I don’t know. And that’s the issue: We are not legally allowed to know. We cannot inspect or modify the software in any way. We blindly trust what the developers say about their products.

Of course, there are also more everyday uses of being able to inspect, modify, or share. I’ll use Studio One as an example. It’s proprietary software. Its bugs have led me to immense data losses (due to a really badly functioning “Undo” button that can occasionally screw up the entire project file). If I had the source code handy, it would be possible to fix this (probably a bit difficult, yes, but possible). But I can’t fix it, because the EULA doesn’t allow me to inspect and modify.

What about sharing software? Because I cannot share the software I use with others, it is entirely impossible for me to create truly “open source” music (I’m not sure if the term applies to music, but I think you get the idea). I make breakdown videos, where I show how I made the music, but as far as I know, I cannot legally go any further than that.

This is not because these software developers are evil. They do this to maximize their profits, and that’s understandable. However, the cost of this is our freedom.


Now that I’ve spent some time criticizing proprietary software, I’ll take a bit of time promoting free (as in freedom) software.

First, the term “Free Software”. “Free” has multiple meanings (the coincidentally named thefreedictionary.com lists 38 different meanings for the word “free”), but there are 2 major ones: free as in no price (gratis), and free as in freedom (libre). In order to distinguish between them, I’ll use “gratis” and “libre” instead.

Both gratis and libre can be used to describe software. Hence, using the term “free” can be very ambiguous: “does this specific software respect my freedom? or is it just that my wallet is unnecessary?”. In many software circles, “free software” simply means gratis. In these circles, Skype could be considered free software (even though it doesn’t respect your freedom, among other issues). However, in other circles (generally among libre software developers), “free software” means “libre”, not “gratis” (and therefore, Skype would not be considered free software).

So what is the purpose of free software? Basically, depending on the license, it enables you to do what proprietary software forbids you from doing. You can share the software with anyone, you can inspect how the program works, you can modify it, and you can redistribute the modified versions too! It allows for an incredible ecosystem in which programmers around the world can create new features, fix bugs and security leaks, then submit them back to the project leader for integration with the software. Or, if someone has a wildly different goal than the team who develops the project, they can fork it and create a new project, using a modified codebase of the original!

What does this mean for users who don’t know how to program? Well, okay, sure, it’s not as beneficial to them. However, practically speaking, since an unlimited number of programmers can get involved, libre software (especially larger projects) has a much lower chance of containing bugs, security leaks, viruses, or spyware. It can also include many more features than proprietary software does. Libre software is also often updated much more frequently than proprietary software, since any developer can contribute.

It is also possible for users to hire a programmer to make a change for them, in the same way that home owners may hire a plumber to fix a leak (except that, generally speaking, programmers would probably take more time to make the change than a plumber would to fix the leak).


Since the first part talked about the idea of proprietary software, and the second about free/libre software, the third will look at practical usage: How to switch over to libre software.

It can be difficult to switch to libre software, especially when you have proprietary software that you use a lot and/or really like. For example, if you use Skype, it may be difficult to ask your Skype contacts to switch over to Ekiga or some other libre VoIP software. In my case, a surprising number of my contacts were thankfully flexible enough to switch over to some other communication method. However, everyone is different, and your friends might find it difficult to migrate (even after you explain why not to use Skype).

However, luckily, most proprietary software has libre equivalents. It is beyond the scope of this post to list these, but, with a bit of research, you can find some online (I would link a list, however, I can’t find any lists that only include truly libre software). I would be happy to help find an alternative if you want, too! (just leave a comment or send me an email)

Sometimes though, there are no alternatives. This is especially relevant in the field of modern video games, or music production. It is also relevant with drivers for parts of your system for which no libre driver has been written. So what do you do? This is really up to you. Are you okay with using proprietary software for this one purpose? Should you avoid using it, period?

For me, I use proprietary software for both music production, and a few video games. I don’t like the fact that I’m using either, but I currently value the features that it provides over what it can control (when using proprietary software, I ensure that internet is turned off, and I don’t have any other software open). Later, once I find OSS alternatives for the music software I’m using, and when I detach myself from video games (I only really play Deus Ex Human Revolution …. it’s a good game, with an amazing soundtrack xD), I will probably finally use 100% libre software (minus the BIOS) on all of my machines.


Lastly, I would like to address the fact that libre software is only one part of the issue in having control over your computer. While it is possible to have full freedom in every single way for software, there are two other major issues: Hardware, and Internet.

Hardware is very difficult, since you can’t easily change the hardware. And, in fact, even if you knew the source code (HDL) of the hardware, it would be very, very difficult to reverse engineer it in order to make sure that the hardware is indeed following the source code. There are even theories that Intel and AMD CPUs are sending information to the NSA (evasively worded responses from the companies give credence to this theory). Whether or not this is true is outside the scope of this article, but the point is, hardware is a very big issue, and I think the only true answer that would guarantee that the hardware really matches its source code would be to create your own hardware. It goes without saying that this would be very, very difficult. Maybe with the rise of 3D printers this will someday change … who knows!

Internet is the other issue. The internet is, essentially, a way of talking to other people’s computers. Unless you own the remote computer, there is no way of guaranteeing that your data will be safe with it. They can do anything they want with the data you send. Getting away from services that are known to spy on you and otherwise harm you (such as Facebook) can be a difficult task, depending on how connected you are with the service. In Facebook’s case, everyone is on Facebook, because everyone is on Facebook. Leaving it can be difficult, since you sometimes have to migrate family members and friends to other websites (same point as I made with Skype).


I hope that you found this post useful! I’m sure a few points in here may be wrong (please correct me!!), but I have tried my best to make this as informative and accurate as possible for those who are new to the concept of software freedom. I know I have missed a lot of other important points, but I’m not sure where, or if, they should be mentioned, so I will link articles covering them below.

If you have any questions, comments, corrections, or anything else (as long as it is constructive, of course!), please feel free to leave a comment or send me an email!


Further reading:

http://www.gnu.org/philosophy/free-sw.en.html (a very good explanation on what the Free Software Foundation considers libre software)
https://www.youtube.com/watch?v=Ag1AKIl_2GM (a talk by Richard Stallman, founder of the GNU project, about software freedom)
http://www.gnu.org/distros/free-distros.en.html (a list of completely libre GNU/Linux distributions)
https://libreplanet.org/wiki/List_of_software_that_does_not_respect_the_Free_System_Distribution_Guidelines (a list of software that is free and open-source, but not libre … yes, Linux contains non-free code!)

I’m quitting relinux

I will start this off by saying: I’m very (and honestly) sorry for, well, everything.

To give a bit of history, I started relinux as a side-project for my CosmOS project (a cloud-based distribution … which failed), in order to build the ISOs. The only reasonable alternative at the time was remastersys, and I realized I would have to patch it anyways, so I thought that I might as well make a reusable tool for other distributions to use too.

Then came a rather large amount of friction between me and the author of remastersys, which I will not go into any detail about. I acted very immaturely then, and wronged him several times. I defamed him, made quite a few people very angry at him, and even managed to turn some of his supporters against him. True, age and maturity had something to do with it (I was 12 at the time), but that still doesn’t excuse my actions at all.

So my first apology is to Tony Brijeski, the author of remastersys, for all the trouble and possible pain I had put him through. I’m truly sorry for all of this.

However, though the dynamics with Tony and remastersys are definitely a large part of why I’m quitting relinux, that is not all. The main reason, actually, is lack of interest. I have rewritten relinux a total of 7 times (including the original fork of remastersys), and I really hate the debugging process (it takes 15-20 minutes to create an ISO just so I can debug it). I have also lost interest in creating linux distributions, so not only am I very tired of working on it, I also don’t really care about what it does.

On this note, my second apologies (and thanks) have to go to those who have helped me so much through the process, especially those who have tried to encourage me to finish relinux. Those listed are in no particular order, and if I forgot you, then let me know (and I apologize for that!):

  • Ko Ko Ye
  • Raja Genupula
  • Navdeep Sidhu
  • Members of the TSS Web Dev Club
  • Ali Hallahi
  • Gert van Spijker
  • Aritra Das
  • Diptarka Das
  • Alejandro Fernandez
  • Kendall Weaver

Thank you very much for everything you’ve done!

Lastly, I would like to explain my plans for it, in case anyone wants to continue it (by no means do I want to enforce these, these are just ideas).

My plan for the next release of relinux was to actually make a very generic and scriptable CLI ISO creation tool, and then make relinux a specific set of “profiles” for that tool (plus an interface). The tool would basically contain a few libraries for the chosen scripting language, for things like storing the filesystem (SquashFS or other), ISO creation, and general utilities for editing files while keeping permissions, multi-threading/processing, etc… The “profiles” would then copy, edit, and delete files as needed, set up the tool wanted for running the live system (in ubuntu’s case, this’d be casper), set up the installer/bootloader, and such.
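Purely as an illustration of that idea (none of this code exists, and the hook names are invented; only mksquashfs and xorriso are real tools), a “profile” might have boiled down to something like:

# Hypothetical relinux "profile" sketch -- hook names are made up
profile_prepare() {
    rm -rf "$WORK/rootfs/home"/*                    # strip user data from the copied system
    cp overlays/casper.conf "$WORK/rootfs/etc/"     # set up the live-boot tool (casper on ubuntu)
}

profile_build() {
    mksquashfs "$WORK/rootfs" "$WORK/iso/casper/filesystem.squashfs"
    # bootloader/installer setup omitted for brevity
    xorriso -as mkisofs -r -J -o "$OUT/custom.iso" "$WORK/iso"
}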

I would like to apologize to you all, the people who have used relinux and have waited for a stable version for 3 years, for not doing this. Thank you very much for your support, and I’m very sorry for having constantly pushed releases back and having never made a stable or well working version of relinux. Though I do have some excuses as to why the releases didn’t work, or why I didn’t test them well enough, none of them can cover why I didn’t fix them or work on it more. And for that, I am very sorry.

I know that this is a very large post for something so simple, but I feel that it would not be right if I didn’t apologize to those I have done wrong to, and thanked those who have helped me along the way.

So to summarize, thank you, sorry, and relinux is now dead.

– Anonymous Meerkat

About the Orchestral Tutorial Series, and other things

Okay, first, I’m very sorry. I promised a person I’d finish it, and I didn’t.

Why? Well, the explanation is rather complicated, but simply put, I had issues with making universal instructions for installing each software, and then life got in the way, and it got forgotten (or rather “Oh, I have to do it sometime… I’m sure that publishing it tomorrow won’t hurt” XD).

I was also slightly hesitant, because I had figured out a potentially much better way of making music (no DAW, just JACK routing), but I had issues with that too.

So yeah, I’m really sorry about this. However, I’m also happy I waited, because I have learned much more about orchestral music production since (my new methods are completely different from my old ones).

I am not planning to finish it anytime soon though, because, as I said in the first post of the redux, I am working on my own DAW. But it’s much more (it’s a complete operating system …… that includes a custom-made kernel). I will not post any details about it, but as you can tell, this is a huge project.

I’m not working on it straight away though, because I need some more experience. The first step is to create our own programming language. The language we have planned is theoretically possible to implement, but would be significantly harder to write a compiler for than C or pretty much any other language. So yeah, I do need more experience XD

Anyways, for now, I’m working on a new game engine to gently get myself back into programming (I was working solely on music for a while), and I’m also working on a soundtrack for a friend’s animation (one of the musical ideas is the first one from here, if anyone’s interested: https://www.youtube.com/watch?v=FCJJjJcRIh8 ).

And to finish off this post, I just want to show a little bit of code I’m somewhat proud of (it was originally supposed to go in the game engine, but I have a feeling that this is not a good idea anymore XD). Made this today in about 15 mins :D  (took me a while to debug it…. rather obviously lol)

void rsc_ls_free(char**a){for(long j,i=0;!(((!(j=(long)a[i]))||(realloc((char*)j,0)))&&(!realloc(a,0)));i++);}
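For anyone squinting at that: it walks a NULL-terminated array of strings, freeing each one and finally the array itself, abusing realloc(ptr, 0) as free(). A readable (and more portable) sketch of the same thing would be roughly:

#include <stdlib.h>

/* Readable equivalent of the one-liner above.  The original relies on
 * realloc(ptr, 0) acting like free() and returning NULL, which is
 * implementation-defined in standard C. */
void rsc_ls_free_readable(char **a)
{
    for (long i = 0; a[i] != NULL; i++)
        free(a[i]);   /* free every string in the NULL-terminated array */
    free(a);          /* then free the array itself */
}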

How to set up WineASIO

Step 1: Install WineASIO

If you use ubuntu, run this in a terminal:

sudo apt-get install software-properties-common wget
sudo add-apt-repository ppa:kxstudio-debian/kxstudio
sudo apt-get update
sudo apt-get install kxstudio-repos
sudo apt-get update
sudo apt-get install wineasio

If you use Arch Linux:

Add the Arch Audio repository, then run in a terminal:

sudo pacman -Sy wineasio

Step 2: Register WineASIO

If you have a 32-bit WINE prefix, or you have a 64-bit one and you want to run a 32-bit ASIO application (e.g. a DAW), run this:

regsvr32 wineasio

If you have a 64-bit WINE prefix, and you want to run a 64-bit ASIO application:

wine64 regsvr32 wineasio
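(If you keep your DAW in its own WINE prefix, point WINEPREFIX at it first; the path below is just an example.)

WINEPREFIX="$HOME/.wine-daw" wine regsvr32 wineasio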

If everything went smoothly, you should see a message similar to:

Successfully registered DLL wineasio.dll

However, you may receive:

Failed to load DLL wineasio.dll

In my case, the reason this message occurred is that wineasio.dll was installed to the wrong location. I had 2 problems, actually. It was first installed to /usr/lib/wine, not /usr/local/lib/wine (I have a custom-built version of WINE), and second, even if it had been installed to /usr/local/lib/wine, it wouldn’t have worked, because, in my case, WINE loaded 64-bit libraries only from /usr/local/lib64/wine, and 32-bit libraries only from /usr/local/lib/wine. The package had installed the 32-bit version of wineasio to /usr/lib32/wine, and the 64-bit version to /usr/lib/wine.

Try moving the wineasio .so’s to these places:

  • 64-bit wineasio .so: /usr/lib64/wine
  • 32-bit wineasio .so: /usr/lib/wine
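In my case that meant something along these lines (the source paths, and even the exact filename, depend on your distro and package, so check where your package actually put the files first):

sudo mv /usr/lib/wine/wineasio.dll.so /usr/lib64/wine/    # 64-bit build
sudo mv /usr/lib32/wine/wineasio.dll.so /usr/lib/wine/    # 32-bit build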

Then try again. If you still have problems, leave a comment below, and I’ll try my best to help =)

Step 3: Setup JACK

WineASIO uses JACK as the backend for the audio, so, not surprisingly, JACK has to be set up correctly for WineASIO to function correctly. I wrote an article a while back about how to do this.
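(As a bare-bones example, something like the following starts JACK against the first ALSA device; the device name, sample rate, and buffer sizes are just placeholders, and a GUI like QjackCtl works too.)

jackd -d alsa -d hw:0 -r 44100 -p 256 -n 2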

Step 4: Profit!

It’s that simple! Now all you have to do is to load up the application you want, and set the ASIO driver to WineASIO =)

Creating an Orchestral track under Ubuntu REDUX – Part 1: Choosing a DAW

So, I originally thought this series was useless, and, well, since I didn’t cover some of the more important sections, it pretty much was =P

But one person asked me to finish it, which was the first time I saw that it was useful to at least someone, so I decided it’d maybe be a better idea to make a redux of it, because the first one had many issues (and I’ve learned a lot since then).

One of the issues was that it took LMMS as the base DAW (Digital Audio Workstation), which, as I have learned since, is definitely not the best DAW for orchestral music production (IMHO). Since I have tried a couple of DAWs, I’ll share my thoughts on each one =) Next part will focus on setting them up.

  • LMMS:
    • Pros:
      • It’s somewhat easy to install (you might need to compile it though)
      • Linux-native
      • Very intuitive at first, and good for beginners
    • Cons:
      • Very buggy (minor bugs, but still annoying)
      • I personally hate the automation
      • Multiple MIDI inputs for a VSTi is very hard (I haven’t managed to ever make it work)
      • VSTi’s take a loooong time to load (though this is most likely an issue with having a linux-native DAW using windows VSTi’s)… especially Kontakt, which is probably the most important VSTi you’ll need for orchestral music production
    • Conclusion: Good for beginners, not good for orchestral music production
  • QTractor:
    • Pros:
      • Simple
      • Minimalistic
      • Logical
      • Intuitive
      • Consistent
      • Fast (the program is fast)
    • Cons:
      • More work to install and setup than LMMS (especially with setting up windows VST support)
      • Buggy
      • Crashes a lot
      • Piano Roll is pretty bad (IMO)
      • Not as pretty as most others (though, tbh, that isn’t too important XD)
      • Though the workflow is very consistent and intuitive, the word “fast” would definitely not be the best to describe it
      • I have never been able to successfully load a windows VST on it yet (when I was actually able to _find_ the VST, it crashed while loading it)
    • Conclusion: I like this one a lot, but its cons make it only really useful at a conceptual stage (IMHO, at least)
  • I will skip a lot of other Linux-native DAWs, because I haven’t had enough time with them to give a somewhat decent Pro/Con list to. However, I find that OpenOctaveMidi – though it never worked for me – seems to be (from the features list) the most promising linux DAW so far (sadly, it hasn’t been updated in 2 years).
  • REAPER:
    • Pros:
      • Free (kinda … the trial never really ends)
      • Well maintained (updates pretty much every week or 2)
      • Works almost flawlessly under linux
      • Very customizable
      • Piano roll has a _really_ useful time-stretching feature (when multiple notes are selected, CTRL+Drag on the edge of any of the selected notes, and it will time stretch it)…. something I really miss with other DAWs
    • Cons:
      • It isn’t actually free… but you can keep on using it as long as you like for free (the trial isn’t enforced)
      • (I’ll have to update this later… I know I’m missing a few, but I haven’t used it for so long that I forget >.<)
      • It might have frozen a lot, that may be why I don’t use it anymore (as I said, I forget)
    • Conclusion: It’s great, but I forget what I didn’t like about it…  TODO: FIX THIS!!
  • Ableton Live:
    • Pros:
      • As its name suggests (“Live”, not “Ableton” =P), it’s great for live performances, due to its really neat session view (basically, you can put a lot of 1 bar patterns in it, then play them at different times)
      • Its macro feature is _really_ useful, as it is basically (AFAICS, I’ve never used it, but I’ve seen people use it) an automation that automates multiple other automations. Though its use in orchestral music is not that prominent, it’s very useful in electronic music (and since my style usually has a mix of both electronic and orchestral, I would use this a lot, if I still used Live).
      • Automations are really well made
      • CTRL+Drag. Seriously, it’s probably one of my favourite features from it… so simple, but so powerful (while dragging moves a clip or note, CTRL+Drag will duplicate it and move the duplicate… very useful!!)
      • Close integration with Max, a tool that kind of lets you create your own synths or effects
    • Cons:
      • Midi CC automations are terrible, and sometimes don’t even work! This is the main reason why I don’t use it, as in orchestral music, Midi CC automations are pretty much one of the most important things you’ll use.
      • The display is very buggy under linux
      • The Midi editor needs work (its workflow is rather slow)
      • It doesn’t bridge VSTs. So if you’re using the 64-bit version, you can’t use 32-bit VSTs.
      • It crashes a lot
    • Conclusion: Though it’s really great for electronic music, it’s not so great for orchestral music
  • Studio One:
    • Pros:
      • Best DAW for Midi CC automation that I’ve used so far (it works both on clips, and on the timeline!)
      • Automation is pretty good (you can create square, triangle, and sine waves really easily on it)
      • Very intuitive (I picked it up really quickly, compared to nearly all other DAWs I’ve used so far)
      • Its plugin browser is also really neat (you can organize it by vendor, category, folder, or just flat)… best one I’ve seen so far
      • Close integration with Melodyne, an apparently really cool audio editor (I still haven’t figured it out though XD)
    • Cons:
      • The display is very buggy under linux (sometimes the timeline time vertical bar indicator [for lack of a better word] doesn’t even show! Also, the rectangle selection doesn’t show either)
      • It’s buggy all-around (I don’t think this is linux-related)
    • Conclusion: Best DAW I’ve used so far for orchestral music production, but it’s very buggy!

I would have included the setup part in this one, but I realized that it would have probably taken 2 more articles (plus this one), so I decided to just give a quicker article at first, to kick off the new tutorial “series” =)

Oh, and, if I may add… I’m working on my own DAW right now, which is fully modular, so if there is something that isn’t quite right, then it’s easy to change it =) It’s kind of a precursor to SythOS (same concept…. 3D virtual environment, network-enabled, fully modular, timelines, timeline branches, etc…), but it’s much simpler (since it’s only an audio workstation). I’m planning on releasing it sometime by the end of this year =)

Some updates

I thought it might be fun/interesting/possibly useful/useless/whatever to create a post with updates on different projects I’m working on that are related to (or have been posted on) this blog.

Orchestral Tutorial Series

I haven’t posted anything in this tutorial for a really long time. Why? A couple of reasons. First, I needed a break, second, JACK stopped working (once I fix it, I’ll update the first part =P ), and third (also the main reason), is that there didn’t seem to be a great interest in the tutorial series (very few hits per day, if any). Which makes total sense (I’m definitely NOT the best person in this field at all).

So does anyone want me to finish it (the next part is going to be about how to, well, create a track using LMMS… like LMMS basics, that kind of stuff)? If anyone does, I’d be happy to do so. But if there isn’t really any interest (which I would totally understand XD), I probably won’t finish it.

Relinux

Relinux 0.4 was a disaster. I think that nearly everyone who used it can agree with that. So, instead of trying to fix all of the issues, and constantly fix the architecture, etc… , I’ll rewrite it! Again! This will be the 7th time I’m rewriting it (yes, I kept count =P)! I’m not kidding.

I’m kind of designing it off-and-on (my main priority is SythOS), but it’s definitely going to be better than 0.4!

Some quick notes: I’m debating whether it’d be a good idea to call it something else, since I’m not really sure that any product is still to be considered the same product after its 7th rewrite… and because I’m not sure I’ll want to support just linux (I really want to make it work on BSD-based distros!).

I’m also not sure if I’m going to be using C, C++, or SyC (see the SythOS section). If I complete SyC before I start working on relinux, it’ll definitely use SyC, however, I’m not sure if I’m going to wait for that long. I know that if it uses C++, it’ll most probably use Qt.

SythOS

Since I wrote that post on SythOS, I’ve been constantly improving the concept. I’m not going to reveal too much (I’ve had enough of people stealing my ideas… and code), but it’s basically now a fully 3D environment, and everything is editable (without a separate “mode”… if you edit an object, you’ll edit it in real-time). I’ve already figured out how exactly one could create an audio track inside it, same for video, image editing, texturing, gaming (duh), and also, how using SythOS could be much more efficient than using, erm, “normal” solutions. I’ve also figured out most of the “how” of SythOS (i.e. how it’s going to be built, how everything is going to work, etc…).

SythOS is actually going to use a custom programming language, not because it’s impossible to create it using already existing languages (I was almost going to use C++ for it), but because it’d be much faster and easier to use a different language (that I’m designing right now).

The language (named SyC … SythOS C … okay, it’s not a brilliant name … neither is SythOS, for that matter XD) is, well, based on C, but is designed to be more consistent (I got slightly annoyed by the minor inconsistencies with C =P), and, if used correctly, faster. What? Faster? How? Well, it has an extremely powerful pre-processor… which is literally the language itself! Okay, let me rephrase: Preprocessor instructions are normal code. So it’s theoretically possible to run an entire software inside the preprocessor (why? good question!). But yeah, it’s extremely useful for optimizing code that could be run at compile-time. Also, because of this, it’s theoretically possible to extend the language itself (or even, write code in a separate language, which will compile to SyC) using the pre-processor!

Since it’s very possible that I might have confused you (I’m terrible at explaining things, if you haven’t already noticed =P), I’ll give a code example of what I mean (I haven’t finished designing SyC, so the syntax you see here may very well be changed once SyC is done):

@int foo = (@int var) { // Built-ins are namespaced using "@"
    @return = var + 10; // Return is no longer a keyword, it's a variable!
};

// If you use # in a variable name, it's now _forced_ to be used as a pre-processor instruction
@CODE #loop = (@int times, @CODE code) {
    @return = "";
    for (int i = 0; i < times; i++) {
        @return += code + ";"; // NOTE: The "+=" and "+" will probably NOT be used in SyC!
                           // This is just an example for the pre-processor, so please ignore that
    }
}

@char * #error = "ERROR"; // Think of it kind of like:
                          // #define error "ERROR"
                          // If you wanted #define error ERROR , you'd change @char* to be @CODE

@int main = () {
    @return = 0;
    @int var = 10; // NOTE: This'll probably have to be constant... SyC's design is not complete yet!
    @int var2 = foo(var); // This will compile normally... it'll do a function call
    @int var3 = #foo(var); // This will compile as: @int var3 = 20;
    loop(20, // Notice that it is not invoked as #loop ... this is because loop is already marked as a pre-processor instruction!
        var3++;
        var2 += var3;
    ); // This will compile as:
       // var3++;var2 += var3; var3++;var2 += var3; var3++;var2 += var3; etc...

    if (var3 > var2) { // Let's just say this is an error
        @return = 2;
    }
}

So, back to SythOS (instead of talking about SyC), I am not going to ask for people to help on this project (in contrast to what I did with CosmOS). The reason why is that the last couple of times I did that, it turned out to be a complete disaster. So I won’t do that again! However, I’m definitely not closed to help. It’s just that I won’t be “requesting” help, per se (I would appreciate it though =P).

I haven’t done any code-work on SythOS, as I’m still trying to finalize the design (especially of SyC, as I’ll need to make a compiler for that before I can actually start working on SythOS itself).

This blog

To be honest, I’m not exactly sure what to do about this blog. I’m definitely not going to delete it, but since I’m not using ubuntu anymore, very few things I do on here relate to ubuntu anymore. Sure, I make some tutorials which talk about how to do stuff on linux, but nothing specific to ubuntu.

I’m not sure whether I should continue doing these kinds of posts here (which are not totally related to ubuntu), or not… Though I guess it isn’t too important an issue, I’m just wondering: would people mind if I kept doing these (since this blog is promoted on planet ubuntu and various other ubuntu-related websites… and I definitely don’t want to lose them!)?