
Welcome to my personal web log.

This is a place where I rant about my experiences with some of the technologies I encounter.

Atom Feed

C++ symbols in debian/symbols files - symbol export maps


When developing a C++ library that we later intend to provide by means of a Debian package, certain things make it really complicated and hard to maintain. Everyone who has had to deal with debian/symbols in a C++ library knows how troublesome it is. The biggest problem, besides name mangling, is symbol leakage: by default the GNU ELF linker exports everything it sees, leading to maintenance hell. Sadly, this has to be dealt with at the source level. The best way? Symbol export maps.

None of this is new, of course. We keep running into this problem because symbol export maps are not popular enough. At least most of the C++ projects I have had contact with packaging-wise were either not shipping any debian/symbols files or were exporting, well, everything. dpkg-gensymbols usually generates enormous symbols files, which are then used as the symbols list on the packaging side - after demangling the names and preparing them for C++ (see here for details on name demangling).
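
As a quick illustration of demangling itself, c++filt from binutils turns a mangled name back into its C++ form (the symbol below is a made-up example, mangled by hand following the Itanium C++ ABI):

```shell
# Demangle the (hypothetical) mangled name of 'int api::entry()':
echo '_ZN3api5entryEv' | c++filt
# prints: api::entry()
```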

As mentioned before - by default the linker on GNU/Linux systems exports all symbols automatically, which does not happen in the case of Windows DLLs. We can stop that from happening in various ways: by using gcc-specific source-code stanzas or by using symbol export maps. The second approach sounds a bit better, as the source code is not bloated with any additional stanzas. What can a symbol export map look like? Let me provide a real-world example, coming from Thomas Voss and his dbus-cpp project:

     {
     global:
         extern "C++" {
             core::dbus::*;
             typeinfo?for?core::dbus::*;
             vtable?for?core::dbus::*;
             std::hash*;
         };
     local:
         extern "C++" {
             *;
         };
     };

Symbol export maps are called LD Version Scripts in the gnulib documentation and can be used for both C and C++, as well as for ABI versioning. The latter is also very important, but let's concentrate only on symbol export in this post.

First of all, we need to warn the linker that the names will be mangled as for C++ - we do that with the extern "C++" stanza. Then, everything we put in the global: scope will be exported, while everything matched in the local: scope will be hidden. The rest is simply using wildcards to decide which symbols fall into which scope. In our example the local scope has *, meaning: "everything that's not in global should be hidden". And all the other stuff? Anyone who has maintained any C++ debian/symbols file will recognize the naming. Besides exporting the actual types and symbols from our namespaces, we also remember about their typeinfo and vtables. Some STL types that are part of the public interface should also be considered for exporting.

What we should do next is simply pass our map file to the linker, e.g. with -Wl,--version-script=symbols.map on the compiler driver command line. And then we can finally have a maintainable, sane C++ symbols file in our packaging. It still requires some time and thought, but at least we won't have to export all the STL symbols that we actually only use internally. Phew.
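
To see the whole mechanism end to end, here is a small self-contained experiment (all names are hypothetical; it assumes a GNU toolchain with g++ and nm available): we build a throwaway library with one public and one internal function, and a map that exports only the api namespace.

```shell
# A tiny library: api::entry() is public, detail::helper() is internal.
cat > mylib.cpp <<'EOF'
namespace detail { int helper() { return 42; } }
namespace api    { int entry()  { return detail::helper(); } }
EOF

# The export map: only the api namespace goes global, the rest is hidden.
cat > symbols.map <<'EOF'
{
global:
    extern "C++" {
        api::*;
    };
local:
    *;
};
EOF

g++ -shared -fPIC -Wl,--version-script=symbols.map -o libmylib.so mylib.cpp

# Only api::entry() should remain visible as a dynamic symbol:
nm -D -C --defined-only libmylib.so
```

The nm output should list api::entry() but not detail::helper() - exactly the behavior we want reflected in our debian/symbols files.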

Appmenu Qt5: patches and the release candidate


Happy new year! I hope this post will be the last one in the 'Appmenu for Qt5' series. The holidays have passed and all my required patches have now been successfully merged into the upstream Qt repositories. This means that, according to our current policy, we can cherry-pick those patches into the Qt5 versions used in Ubuntu. Therefore, the Ubuntu Qt 5.2 packages that are still being prepared already ship my changes as quilt patches, enabling proper appmenu-qt5 support. All is ready for testing.

This was my first time contributing to the Qt codebase. The process of submitting patches itself is not complicated, but it is full of small caveats to remember. There were two changes I needed to upstream for everything to work.

One thing to remember after a change has been submitted: once the code gets reviewed and approved (note: approval is the small check sign in a review - a +1 is not enough), your code does not automagically get merged into trunk. The author of the patch still needs to press the Merge button visible above the comments. Only then does the change get staged, tested and released.

With the changes that have been submitted, everyone can now explicitly select the platform theme via an environment variable - or, if preferred, via a command line argument:

 QT_QPA_PLATFORMTHEME=appmenu-qt5 qtcreator
 # or...
 qtcreator -platformtheme appmenu-qt5

For anyone who wants to test how the new QPA-based global application menu for Qt5 works, there is a PPA created to test-build the new package.

  • Launchpad: Appmenu Qt5
  • Bzr branch: lp:appmenu-qt5
  • Test PPA: ppa:sil2100/qt (remember that Qt5 packages from ppa:canonical-qt5-edgers/qt5-beta2 are required)
  • Package name: appmenu-qt5

After installing the provided packages from the PPA and upgrading your Qt5 to 5.2 from the qt5-beta2 archive, appmenu-qt5 should be enabled by default after a session restart. Currently the package provides a profile.d/ script that sets the QT_QPA_PLATFORMTHEME environment variable, enabling appmenu-qt5 by default. The environment change is reverted once the package is uninstalled.
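
The profile.d/ mechanism boils down to a one-line snippet along these lines (a sketch - the actual file name and contents shipped in the package may differ):

```shell
# e.g. /etc/profile.d/appmenu-qt5.sh (hypothetical path)
export QT_QPA_PLATFORMTHEME=appmenu-qt5
```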

Feel free to test and submit bug reports on the provided Launchpad page. Once Qt 5.2 is officially released and/or a decision is made to backport appmenu-qt5 support to 5.0.2, appmenu-qt5 will be added to our Ubuntu daily-release system - and then migrated to trusty. The daily-release process I have been partially maintaining has changed a lot recently, with some management decisions slowing the whole process down, but it's still a very convenient way of release management for a project.

Appmenu Qt5: through a bumpy road, but working!


Some time ago I mentioned working on the global application menu for Qt5 - the so-called appmenu-qt5 for Ubuntu and its derivatives. After a longer while, I finally found the time and occasion to resume my work - and, after a really bumpy ride, end up with a working solution. Some hacks had to be made, some Qt5 design decisions worked around - but the end result is here: a working appmenu-qt5 QPA platform theme plugin. In this post I would like to give an overview of the implementation of the currently proposed appmenu-qt5. Read on if you're interested in some of the Qt5 internals, the workings of QPlatformTheme plugins and the confusing elements of the Qt Platform Abstraction overall.

The first thing I would like to mention is the lack of any solid documentation regarding QPA and the platform theming features in Qt5. The most useful 'source' of knowledge was, well, the source code itself.

appmenu-qt5 implementation

How does the current implementation work? It's a bit shameful, but I actually worked around the whole core design of platform menu bars that the Qt5 developers invented. Last time I described how, in our case, we would have to duplicate menu information by providing QPlatformMenu* equivalents for every QMenu* element. As mentioned, I ditched that, instead listening only for QPlatformMenuBar creation to try and export the related QMenuBar to DBus through DBusMenuExporter. The problem is: when we get the createPlatformMenuBar() call from QPA, we get NO information on which QMenuBar we want to export! A bit too abstract for my taste. So what I do is 'hack around' it and fetch the QWidget that is the parent of the menu bar needing export.

Diagram showing the appmenu-qt5 inner logic

So, in the end, after getting the QMenuBar of the currently running application, I cast it to QMenu and push this very object to DBus for export. This way DBusMenuExporter itself handles all the cases of menus/items being added and removed, without any additional code paths.

The trickiest part was the QWidget acquisition, which I briefly mentioned before. The handleReparent() method only accepts a QWindow argument, and the QWindow -> QWidget translation is not directly documented anywhere. By browsing through the available methods, I noticed that we can use the trick visible below. It does not protect us against the QMenuBar being switched in a given QWidget, but that happens really rarely. It's still something to remember for the future though.

 // How to fetch the QWidget paired with a given QWindow
 // ('window' is a pointer to a QWindow)
 QWidget *widget = QWidget::find(window->winId());

The other interesting part worth mentioning is how QPlatformTheme plugins are written and how they are loaded. Let's have a separate section for it:


The final decision I made was to use the QPlatformThemePlugin approach. Qt5 allows placing platform theme plugins in the platformthemes/ plugin directory; those are looked for and loaded first by QGuiApplication. The catch, though, is that Qt5 does not look into the directory and try all the available theme plugins there. It tries to load theme plugins based on what the themeNames() method of the currently selected QPlatformIntegration returns. I started wondering if there is a way of overriding which theme names are returned.

Sadly, in the current Qt5 implementation there is no way. For the xcb QPA plugin, which is used by default in Ubuntu right now, QGenericUnixTheme::themeNames() fetches the DESKTOP_SESSION environment variable and adds it to the theme names - but at the end of the list. Since theme plugins are checked one by one from first to last, on a normally working system it's impossible to get our plugin running. For testing, I distro-patched my Qt5 to allow overriding QGenericUnixTheme's theme name through the DESKTOP_SESSION variable (patch available here), but I'm still looking for a better solution.

The other quirk: when using QPlatformThemePlugin and QPA overall, it's not possible to just say "override only the platform menu, use everything else you were using up until now". The effect is that appmenu-qt5 has to detect whether we're running something GNOME-like or KDE-like and then theme itself accordingly - basically deciding whether to use QGnomeTheme or QKdeTheme as the base class (instead of the clean, themeless QPlatformTheme). Those two classes are part of QGenericUnixThemes, covering the most popular GNU/Linux desktops, and without them we would get ugly X-themed windows with no good-looking pieces.

What I consider a 'bug' is that Qt5 does not allow deriving from the QKdeTheme class, since its constructor is private. This is troublesome, as Qt5 only offers the static createKdeTheme() method for creation - but there is no real reason to disallow doing it manually, since all createKdeTheme() does is fill in the two constructor parameters for the programmer.
I will try upstreaming a patch to fix this, as I think this design is wrong. The one-liner patch I made can be found here for now: patch.

But work is still ongoing. I will push the modified Qt5 and appmenu-qt5 packages to a PPA in the coming days/weeks so that anyone who is interested can participate in testing. For now I got preempted by other tasks.

UPDATE: I forgot to mention this earlier. The current solution is hosted on Launchpad; you can find everything under the following links:
Project webpage:
Qt5 direct export approach branch: lp:~sil2100/appmenu-qt/appmenu-qt5-directexport
Feature bug: LP: #1157213

QMake - forcing no install target AKA target flags


I ran into an annoying problem today that I finally found a workaround for. Let's assume we're using qmake and the 'testcase' CONFIG option for a given target. Strangely, whenever the testcase configuration is used in a project, all test targets are generated with 'make install' rules in the Makefile (if the Makefile generator is used, of course) - even when we're not adding the target to INSTALLS. But what if we don't want to install the given testcase anywhere? The Qt4 and Qt5 docs say nothing. But the qmake source code says it all.

Normally, if you have a qmake project and don't want the install routine to be generated for a given target, you simply do not add it to the INSTALLS variable. But the CONFIG += testcase option forces the install routines on you - I don't know exactly why. It's annoying, because the default install path for those is something like /usr/tests, which is a non-standard directory. Besides, when building Debian packages, you only need to run unit tests during the build process - there is no need for them to be lying around a user's filesystem later.
So, how to force "I do not want to install this target" in qmake? Let's see the example:

 CONFIG += testcase
 TARGET = foo

 SOURCES = foo.c

 # Now, the solution:
 # Let's inform qmake that we don't want a default install target AT ALL,
 # even if we somehow said otherwise before
 target.CONFIG += no_default_install

All we need to do is set the no_default_install flag in the target's CONFIG. This forces qmake not to create an install rule for the target at all.

There are other interesting target flags I found as well, such as dummy_install, no_path and no_check_exist. Too bad there seems to be no documentation for any of them. I hate it when functionality is used but not explicitly visible otherwise. To solve my problem, I actually had to dig into the source code and look for the feature I needed - which usually means the documentation is lacking.

Appmenu Qt5: starting work - QPA, QPlatformThemeFactory


Things have been very busy lately, as managing the daily-release process and tools for Ubuntu proved to be more challenging than we expected. In the meantime, though, I am also working on proper Ubuntu appmenu support for Qt5 using the Qt Platform Abstraction (QPA) API. Not a hard task, but without proper time resources even this can be a bit troublesome. I would like to use this post to note down some of the things I have learned about the Qt5 QPA, as well as mention some plans, concepts and remarks regarding proper Ubuntu appmenu support for Qt5, as designed by me.

First of all: yes, Ubuntu currently already has appmenu support for Qt5. The problem is that, due to time constraints, I had to forward-port the old Qt4 patches directly onto the Qt5 sources, ignoring all the Qt5 infrastructure that was specifically created for these purposes. I'm not proud of it, yes. That is why I am trying to correct my mistake by using a QPA platform theme in a new appmenu-qt5.

Launchpad feature bug: LP #1157213
Launchpad blueprint: qt5-qpa-appmenu

A quick overview: QPA is a platform abstraction layer for Qt5. This basically means that through QPA we can modify some of Qt5's behavior, look and usability to fit a given platform's requirements. There might be a QPA plugin for a phone using Qt5, or for a different system that has specific requirements for the menu, etc. It enables making Qt5 work on different platforms and form factors.
Even though QPA is a great idea, it sadly faces some problems that I found a bit irritating. First of all, there is not much documentation - most things you have to get directly from the source, which is not very well commented either. The other thing, more related to my usage of QPA, is the framework for supporting 'native menus'. It has been designed in a way that requires a lot of duplication, tailored to what was needed to support the global menu in Cocoa. It's not flexible enough in this regard and forces specific rules to be followed, which are not optimal in our case.

Platform menus are part of the QPlatformTheme functionality. A given QPlatformTheme should implement the createPlatformMenuBar(), createPlatformMenu() and createPlatformMenuItem() methods. QMenuBar calls the theme's createPlatformMenuBar() during initialization (QMenuBarPrivate::init()).

Diagram showing the platform menu framework in Qt5

As seen in the diagram, for every element in the menu of a given application, QPA requires an 'abstract' equivalent for the given platform. Those abstract menu structures have no kinship with their Qt5 equivalents - QPlatformMenuBar, QPlatformMenu and QPlatformMenuItem are interfaces declared in the Qt5 code. This gives a lot of flexibility, but it also means that if a given QPlatformTheme supports a native menu, every menu item has to duplicate its 'contents' into its QPlatform* equivalent. And in our case? Not really what we would like, as our DBusMenuExporter for Qt uses the QMenu object as the base for exporting.

For our Ubuntu appmenu, we need a QMenu to export to DBus when using libdbusmenu-qt (which is the normal path to take). We can theoretically get it implicitly during QPlatformMenuBar::handleReparent() from the QWindow parameter, by using QWidget::find() - but still, as all later addition, removal and modification of menus and menu items happens through the QPlatformMenu* equivalents, we would need to keep a copy of the whole QMenu tree.
My current approach: for DBus exporting, we'll try to use the window's QMenu directly. I will have to check whether that is indeed possible, or whether there are strict requirements on the QMenu structure published by DBusMenuExporter. If there are none, this will make the design much simpler, as all additions and removals could be handled 'virtually'.

An important thing to note: to make this work, we do not have to create a new QPA platform. If we're interested only in the platform theme, we can use QPlatformThemeFactory for this purpose. As Samuel Rødal explained to me, QGuiApplication uses this class during init_platform() to load the platform theme to be used in the application. Once it finds a suitable one (this can be influenced by environment variables), it uses it instead of the one defined by the currently used platform. Too bad there is no documentation for those parts though.

Let's look into the details later on.

Updates: Article and interview


Just a quick self-advertising update. Recently I have been involved in two fun activities: first, I wrote an article about Ubuntu Autopilot for the Ubuntu User magazine. It has been published in the 16th issue of Ubuntu User. It's loosely based on the Autopilot for Unity blog post I wrote some time ago, just this time targeting not only the Unity system but also normal Qt/GTK+ applications. I have also been asked to give a short interview about display servers and the recent announcement of Mir, for a Polish computer-oriented blog - Morfiblog. It's Polish-only, sadly.

Just wanted to wave my hands on that. Read up and have fun!

A tale of IBus, GIR and queries


Recently I have been working a lot on making the IBus autopilot tests more reliable. The design was simple: fetch the expected result characters from IBus and compare them with what actually got written into the Unity Dash/HUD text entry. Simple, right? Wrong. IBus doesn't really let you just ask it: "hey, when I give you the string "abc", what output string will I get for the current engine?". You can do it if you explicitly use an input context, ok. Hell starts when you want to use Python and the gi.repository IBus bindings (not the old python-ibus ones). Why? Because someone marked some essential methods Introspectable="0" for no reason. But let's see how we can do without them.
This post is a quick report from my battle with the gobject-introspection bindings of IBus in Python - with all my trials and approaches. A lot of those!

Some context: IBus is an input method framework, commonly used for Asian languages. To test IBus in Unity, we enable IBus, write input strings into the search entries and check whether what we get is what IBus should return. Normally, we had hardcoded 'result' values for every input, but this is not the right way to go, since IBus won't always return the same thing - the most probable result depends on previous selection choices. History could be removed before test execution, yes, but then the ibus-daemon would have to be restarted; otherwise the history will not be flushed.
The current best solution: querying IBus ourselves, outside of Unity, and checking the lookup table for a given input string. But that's easier said than done.

For this purpose I wrote a quick Python application that does a direct query to IBus using a separate input context. I registered the project on Launchpad and integrated it into the autopilot Unity integration tests.
Project page: ibus-query
Bazaar branch: lp:ibus-query

Diagram showing an overview of how ibus-query works

The first thing I tried was hooking into the existing input context of the Dash and listening in on the communication between IBus and nux. Sadly this did not work, for reasons unknown to me. Besides, there's still the problem of deciding when the actual input session is finished - i.e. which commit-text signal is the final one - since we need to know the results while the input context is still focused.
So what do we do right now? We create a new InputContext, hook up to some important context signals (such as commit-text, disabled, etc.), push the input query as a series of context.process_key_event() calls, run a GLib.MainLoop and return the result collected by the previously connected signal callbacks.

But this all brings us to the topic of this post: gobject-introspection. Since the code was supposed to be used in Autopilot, I could not use the python-ibus bindings. Those use the old glib Python bindings, while the current Autopilot uses GLib from GIR. The rule is: the old glib bindings cannot be mixed with the new gobject-introspection repository bindings, as that results in undefined behavior (i.e. it does not work).
For those who don't know: GIR is a way of exposing existing, non-Python GObject code to Python users. Projects that want their code exposed generate a .gir file with definitions of which GObject functions should be introspectable. This file is usually auto-generated.

So I had to use the GIR bindings. But, as mentioned earlier, I had to work around the lack of one of the important functions for my use case - just because it wasn't marked as introspectable in the source code. Instead of using create_input_context() (documentation), I looked up what theoretically happens during that call and performed those operations myself. It's less portable, but it works. Consider the following code:

  # Standard thing - create the IBusBus object
  self._bus = IBus.Bus()

  # This part here is basically what self._bus.create_input_context() would do:
  # create a DBus connection with the ibus-daemon
  try:
      self._dbusconn = dbus.connection.Connection(IBus.get_address())
  except dbus.exceptions.DBusException:
      print "Error! The ibus-daemon doesn't seem to be running."
      raise

  # Connect to the IBus object and interface
  ibus_obj = self._dbusconn.get_object(IBus.SERVICE_IBUS, IBus.PATH_IBUS)
  self._iface = dbus.Interface(ibus_obj, dbus_interface="org.freedesktop.IBus")

  # Now, this is the simple part - the DBus IBus interface simply exports a method
  # that does all that create_input_context() should do: allocate and return
  # the bus connection path for our new input context
  path = self._iface.CreateInputContext("IBusPoll")

  # All that is left is to create the new InputContext object.
  # Here we also work around some GIR quirks by directly calling the new()
  # method of InputContext
  self._context = IBus.InputContext.new(path, self._bus.get_connection(), None)

For this to be possible, I had to browse a lot of IBus source code. But before I decided to work around the missing method, I wanted to use another method which, strangely, is introspectable: ibus_bus_create_input_context_async(). Now what sense does that make? The synchronous method is not available, but the async one is fine. Of course, to add to the craziness, ibus_bus_create_input_context_async_finish() is not introspectable either. But I thought I could use the asynchronous version of create_input_context() and be done with it.

So, the async method more or less uses something from GLib called a GAsyncReadyCallback. A callback like this is given a GAsyncResult object as its argument; this object includes the result of the operation that finished. It's available in Python too, of course - no big deal. The only problem is that when using the SimpleAsyncResult version in Python, you can only retrieve the actual result as a boolean (gboolean) or a number (gssize). So in my case it's not useful at all, especially in Python.
There's no easy-to-find documentation on what is really available from those functions in the newer Python GIR GLib bindings, but I decided not to dig further and searched for a solution elsewhere.

But back to my final solution used in ibus-query. For every input character we emit a process_key_event signal, which sends the character to the engine. We do that without a main loop running - the signals are emitted synchronously and sent further to ibus-daemon through DBus. To receive the results from the daemon, we need to start a GLib.MainLoop, receiving the commit-text, update-preedit-text and disabled signals back and acting accordingly (signal descriptions). We use the disabled signal to our advantage, as the mark of when we have finished processing all interesting events and can quit() the MainLoop.
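
Stripped of all the IBus and GLib specifics, the pattern boils down to the following sketch (plain Python with made-up stand-in classes - the real code uses IBus.InputContext, DBus signals and a GLib.MainLoop instead):

```python
import queue

class FakeContext:
    """Stand-in for an IBus input context: signals arrive asynchronously."""
    def __init__(self):
        self.callbacks = {}
        self.events = queue.Queue()

    def connect(self, signal, cb):
        self.callbacks[signal] = cb

    def process_key_event(self, char):
        # The real daemon answers over DBus later; here we just queue a
        # pretend 'commit-text' response for every key event.
        self.events.put(("commit-text", char.upper()))

    def finish(self):
        # Equivalent of the 'disabled' signal marking the end of the session.
        self.events.put(("disabled", None))

def run_query(context, text):
    result = []
    done = {"flag": False}
    context.connect("commit-text", result.append)
    context.connect("disabled", lambda _: done.update(flag=True))
    # Push all key events first, without a loop running...
    for ch in text:
        context.process_key_event(ch)
    context.finish()
    # ...then 'run the main loop': drain responses until 'disabled' arrives.
    while not done["flag"]:
        signal, payload = context.events.get()
        context.callbacks[signal](payload)
    return "".join(result)
```

The important property mirrored here is that process_key_event() only queues work, while the results are collected only once the 'main loop' drains the responses, quitting when the disabled equivalent arrives.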

After a safe testing period and fixes, the ibus-query code was also merged into the Unity autopilot test suite for testing IBus (merge request). The method is still not perfect - it currently does not work for the Korean hangul input engine because of some engine implementation differences. I hope to fix that pretty soon as well.

From other news, did you hear about our Ubuntu Tablet? It's a very neat addition to our family of products - let's see where this road will take us.

Pandaboard EA3


ARM platforms have always been important for the Ubuntu ecosystem - even more so since the start of the 13.04 cycle. For testing Unity on ARM I'm using an OMAP Pandaboard from the EA3 series. It's a very interesting platform to play with - quite powerful, high-level and good-looking! It's by no means a novelty, just something I have only recently got my hands on. Here's a really quick look at the Panda in my possession.

Pandaboard EA3 board photo

When releasing new Unity versions into Ubuntu, we usually want to make sure everything works both on the standard x86-related architectures and on ARM. Since 13.04 we're targeting mobile as a future form factor. The Google Nexus 7 is the platform of preference, of course, but a Pandaboard does fine as well.

First some basic info: the board offers two HDMI ports, 2x USB 2.0, 1x USB 2.0 OTG (UC-E6 slot), 10/100 Ethernet, a wireless chipset (with Bluetooth), RS-232, audio input/output, an SD card slot and a 5V power supply connector. Besides that, there are also expansion, LCD and camera connectors - and a standard JTAG interface.
The insides: an OMAP4430 SoC - a 1GHz (up to 1.2GHz) dual-core Cortex-A9 MPCore with hardware-accelerated 3D graphics (POWERVR SGX540) - 1GB of DDR2 DRAM (Elpida B8064B2PB-8D-F8) and a TiWi-01 wireless module (with the WL1271). A rather solid piece of hardware.
One instantly visible problem: Ethernet and USB are provided by the same controller. Too bad.

Of course, there is no preinstalled system on board. We have to use external storage (SD card, USB) to run a system of our choice. The two most popular ones pre-built for this platform are Android and Ubuntu. I only tested Ubuntu - the system runs fine, and Unity is usable when using the pvr-omap4 drivers that are installed by default. Running from a standard USB 2.0 pendrive is a bit of a bottleneck, even with swap disabled and some other optimizations performed. I think there might also be something wrong with the quantal Unity stack for ARM, but I'm not sure - I would probably expect better performance from this hardware. I wonder.
There are two custom LEDs on board that are used by the system. Ubuntu, by default, assigns the STATUS1 LED as the heartbeat diode, while STATUS2 is triggered by SD card activity. These assignments are made by the kernel.

The Pandaboard is a very fun device. Another good thing about it, besides being usable as a normal dual-core ARM system (with all the Unity and autopilot madness running there), is that it can be extended with the Trainer-xM board for experiments. That one includes a nice prototyping area, along with a nice ATmega328 processor and other goodies.

The pricing is nice. The old Pandaboard can be bought for $174, while the more powerful, newer Pandaboard ES (with an OMAP4460 inside) goes for around $182. That's certainly a bit more than a Raspberry Pi, but of course the use cases for the two are different.

A small note to everyone: the Pandaboard is powered by a 5V power supply. The 'original' (recommended) one provides up to 4 amps. The board will also run with anything that provides at least 3 amps - just be sure not to use a weaker one, since the board will simply not start. If you see the over-voltage diode light up (D3, the small red diode near the power connector), it can mean either that the wrong voltage is supplied or that not enough current is provided - so simply check your power supply once again.

From other news, we recently announced the Ubuntu Phone. Fun thing - I have some things to say about that, but let's leave that for another post in the nearest future.

Autopilot for Unity


Unity currently uses a very interesting tool for functional testing: Autopilot. It's a custom solution created with Unity in mind, but it can be used for other systems as well. We have recently decided to start getting rid of all manual testing in Unity and related components, meaning there is no more running away from automated testing. This is a quick look at how we can use Autopilot to perform automated testing of the Unity stack. All of this relates to Ubuntu 12.10 and later, of course.

The current policy in Unity-related development is that every fix to be included is required to have automated tests, where technically possible. This means that every merge request needs to include either unit tests or automated functional tests - no manual tests are accepted any more. Code that is obviously not testable - like visual changes or some difficult crashers - can be included as an exception. But the overall rule is: all code needs to be tested.

Unity uses Google Test and Google Mock for unit and mock testing; for functional testing, Autopilot is used. There's a lot about gtest around, but it's harder to find anything regarding Autopilot - especially since it is still under constant development.

The following post relates to Ubuntu 12.10 Quantal Quetzal and Autopilot from the project's PPA (ppa:autopilot/ppa) - a version from the 1.2 series (for instance, 1.2+bzr88+pkg56~quantal1). Using the latest Autopilot is always recommended with the latest Unity stack.
So, things that we will need for learning and using Autopilot with Unity:

  • An Ubuntu based system
  • Unity 6 and the related stack installed and running
  • python-autopilot package installed (version from the 1.2 series)
  • Unity source downloaded (either through bzr, or any other means)

I have provided an example test suite for Unity, packed up nicely in a stand-alone archive. The relevant sources are commented for better readability. All that is needed to run it is extracting the archive and running the ./ script. I will try to overview some of the specifics demonstrated below.

Get the example source archive for this post HERE.

How to run autopilot tests?

This depends on the version of Autopilot we want to use, but it's usually a similar set of steps. For 1.2, running AP tests is really easy - installing the tests is not necessary, as they can be run from any path. First, we change into the autopilot test directory in the Unity source (./tests/autopilot/). Then we can list the available tests and run either all of them or just the ones we're really interested in. We can be as specific as we want.

 cd ./tests/autopilot
 autopilot list unity # List all available unity autopilot tests
 autopilot run unity  # Run all available unity autopilot tests (takes A LOT of time!)
 autopilot list unity.tests.test_switcher # List all tests from the test_switcher suite
 # And we can run either one specific test (like below) or run just one test suite
 # We can be as specific as we want!
 autopilot run unity.tests.test_switcher.SwitcherTests.test_switcher_move_next

A small note to remember: when running autopilot tests, some tests tend to 'hang' and sit idle for some time. This seems to happen due to the current multi-threaded design of Autopilot. Just know that the test will resume its normal behavior in time, so some patience might be needed.

How to write autopilot tests?

Writing autopilot tests for Unity is a rather easy task, as Unity provides really useful tools for specific actions one would want to perform during functional testing. The demo I have included shows most of the basic use-cases, but I'll also try to cover some of them here.

AP tests are grouped into test files, each containing one or more test-cases, while every test-case can have one or more tests. In the case of Unity, every new test-case needs to inherit from the UnityTestCase class (defined in unity.tests), which includes all the helpers necessary for testing the shell. Each such test-case can then define the tests to be performed as methods of that class, each named with a test_ prefix, e.g. test_if_foo_is_bar(self).

If a given test-case needs some specific setup routines performed before running its tests, we can override the setUp() method (just remember to call the parent's version at the beginning). We can put any setup code we need inside.

 from unity.tests import UnityTestCase

 class SomeTestCase(UnityTestCase):
   def setUp(self):
     super(SomeTestCase, self).setUp()

     # We can include any setup here - for instance cleanups, which I'll mention
     # in a moment

   def test_check_if_oven_is_turned_off(self):
     """Check that the oven is turned off."""
     # Do the check here
     # (...)

Autopilot uses so-called emulators for Unity-specific interaction. Some of the Unity emulators can be found in the ./unity/emulators/ path in the Unity autopilot directory. These emulators provide methods for more complex interaction with the system, e.g. revealing lenses, switching applications, or querying the state of components (visibility, results) etc. So, for instance, to force the dash to be visible, we can simply call self.dash.ensure_visible() in the body of our test method. There are, of course, other - more manual - ways of performing the very same task.

Autopilot also allows us to move the mouse cursor at will, as well as provide keyboard input as needed. So we can open the dash by, for instance, moving the mouse to the position of the dash icon with self.mouse.move(10, 30) and performing a click, or simply by using self.keyboard.press_and_release("Super") to simulate a Super-key tap. Of course, there are more ways of doing the same, for instance using self.keybinding_tap("dash/reveal") to trigger a keybinding. All these actions ultimately lead to the dash being opened. Besides that, AP also allows us to start new applications using self.start_app_window() - and many, many more.

The best way of getting to know all the available functions is looking up emulator code or other existing tests. But here's a short list of those most obvious tools:

  • self.mouse.move(x, y),
  • self.keyboard.type(text), self.keyboard.press_and_release(key_combination, time)
  • self.keybinding(keybinding_name, time), self.keybinding_hold(keybinding_name), self.keybinding_release(keybinding_name), self.keybinding_tap(keybinding_name)
  • self.start_app_window(application_name)

But simply being able to perform actions is not enough for functional testing. We also need to assert that our actions produced the correct result and that the behavior is as expected. Autopilot provides some interesting constructs for this as well.

Let's take our earlier case - we're opening the dash in Unity. So let's say we actually want to make sure the dash has been opened after we performed the mouse action or the keyboard press. For this, we can make use of the self.assertThat() method of the Autopilot test-case. Consider the following quick example:

 from autopilot.matchers import Eventually
 from testtools.matchers import Equals
 from unity.tests import UnityTestCase

 class SomeTestCase(UnityTestCase):
   # (...)

   def test_if_dash_opens_on_super(self):
     """Check that pressing Super opens the dash when it is not visible."""
     # First, let's assert that the dash is not visible
     self.assertThat(self.dash.visible, Equals(False))

     # Press the Super key...
     self.keyboard.press_and_release("Super")

     # ...and make sure that after some time, the dash finally becomes visible
     self.assertThat(self.dash.visible, Eventually(Equals(True)))

   # (...)

The Eventually() wrapper gives the given assertion a short time period in which to become true (around 10 seconds currently, I believe). If the assertion still fails after that time, the test fails. There are many matchers we can use in assertions, mostly: Equals(), NotEquals(), GreaterThan(), LessThan() etc. - all at our disposal. At the end of a run we get a summary of how many tests actually succeeded.
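Under the hood, this kind of matcher boils down to polling with a timeout. The following is not Autopilot's actual implementation - just a minimal plain-Python sketch of the idea:

```python
import time

def eventually(predicate, timeout=10.0, interval=0.1):
    """Poll `predicate` until it returns True or `timeout` seconds pass."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    # One last check after the deadline, mirroring a final assertion attempt
    return predicate()

# Example: a value that becomes "visible" only after a few polls,
# the way the dash becomes visible a moment after the key press.
state = {"visible": False, "checks": 0}

def dash_becomes_visible():
    state["checks"] += 1
    if state["checks"] >= 3:   # becomes true on the third poll
        state["visible"] = True
    return state["visible"]

assert eventually(dash_becomes_visible, timeout=2.0, interval=0.01)
```

The real matcher wraps this pattern so it composes with testtools matchers like Equals(), but the retry-until-deadline behavior is the essence of it.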

One more thing worth noting - remember the cleanups I mentioned earlier? With self.addCleanup() we can register actions we want the test to perform when it ends (normally or due to a failure) - like, for instance, hiding the dash. Whatever we pass as an argument will actually be called at the end of the given test or test-case. Unity's suite does of course check by itself at the end of every test that the test was well-behaved, closing the dash and applications when necessary, but it's always better to deal with it ourselves.

And that's more or less it. This is by no means a complete tutorial, but more of an overview of how Autopilot can be used with Unity. A real tutorial will probably come pretty soon, but it's not really needed to start writing useful tests already. Much of the magic can also be easily picked up from existing test-cases. Be sure to include testing (unit tests or AP tests) with every merge request proposed for Unity - otherwise your merge will probably be rejected or bounced back for fixing.

For the demo code included in the archive (it's here if you missed it), I have also used the Python bindings for Notify OSD in Ubuntu. There's no point in explaining how it works here, as there is already a really good tutorial available - so just check it out.

Android app: state persistency on orientation change


Parts of my latest interests lean towards Android - both application and system development. There is one thing I noticed lately: whenever the orientation changes, the activity gets restarted and might lose its state. This happens because Android has a strange policy of forcing the recreation of the activity whenever the configuration changes - such as orientation, language etc. So if we don't want our application starting off clean on every device tilt, well, we need to prepare it for this evil.

For starters, as you probably know, an Android activity follows the following 'very simplified' lifecycle:

  1. onCreate()
  2. onStart()/onRestart()
  3. onResume()
  4. Application running
  5. onPause()
  6. onStop()
  7. onDestroy()

When the configuration changes, the activity simply gets killed, so we reach the actual end of the lifecycle - i.e. onDestroy(). But right before onDestroy() is called, we are given a chance to save the current state of the application with onSaveInstanceState(), a useful overridable method for our cause. By overriding this method, we can store the state temporarily in the Bundle outState object that is passed as an argument. A Bundle is a very practical type - a ready-made Parcelable that we can use to store data and/or other Parcelables associated with string keys.

So, we actually are given a multi-type hash-table-like object to store our settings. On each onCreate() we are also being passed a Bundle, which in case of a configuration change restart is actually the outState one we fill by ourselves. So, for example:

 public class FooActivity extends Activity {
   // The usual convention is to put creation methods first, but here we'll make an
   // exception for readability's sake

   // End of application
   @Override
   public void onSaveInstanceState(Bundle outState) {
     super.onSaveInstanceState(outState);

     // We can insert simple types right away, such as ints, floats and even Strings
     outState.putInt("important_integer", importantInteger);
     outState.putFloat("ble", shyFloat);

     // But we can also include Parcelables, e.g. other Bundles
     // Let's imagine a childClass object that has a method that saves its internal state
     // by creating a new Bundle, filling it in and returning it as the result

     // We can take that and just slam it into the state bundle
     outState.putParcelable("child", childClass.saveYourInstance());
   }

   // (...) Rest of code here (...)

   // Start of application
   @Override
   public void onCreate(Bundle savedInstanceState) {
     super.onCreate(savedInstanceState);

     if (savedInstanceState != null && savedInstanceState.size() > 0) {
       // Seems like we have a state we can start from
       importantInteger = savedInstanceState.getInt("important_integer");
       shyFloat = savedInstanceState.getFloat("ble");

       // Our smart SomeSmartCustomChildClass has a custom constructor that can restore
       // its state from a bundle, so we have it nicely divided
       childClass = new SomeSmartCustomChildClass(savedInstanceState.getParcelable("child"));

       // (...)
     } else {
       // Normal Activity initialization from zero

       // (...)
     }
   }

   private int importantInteger;
   private float shyFloat;
   private SomeSmartCustomChildClass childClass;
 }

We're missing some checks and bits-and-pieces here, but it's just an exemplary piece of code to demonstrate how saving and restoring the application state can be realized.
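Since android.os.Bundle only exists on-device, here is the same save/restore round-trip sketched in plain Java, with a Map standing in for the Bundle - the class, keys and field names below are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of the save/restore pattern from the Activity above.
public class StateDemo {
    private int importantInteger;
    private float shyFloat;

    // Analogous to filling outState in onSaveInstanceState()
    Map<String, Object> saveInstanceState() {
        Map<String, Object> outState = new HashMap<>();
        outState.put("important_integer", importantInteger);
        outState.put("ble", shyFloat);
        return outState;
    }

    // Analogous to reading savedInstanceState in onCreate()
    void restoreInstanceState(Map<String, Object> saved) {
        if (saved != null && !saved.isEmpty()) {
            importantInteger = (Integer) saved.get("important_integer");
            shyFloat = (Float) saved.get("ble");
        }
    }

    public static void main(String[] args) {
        StateDemo before = new StateDemo();
        before.importantInteger = 42;
        before.shyFloat = 3.14f;

        // Simulate the configuration-change restart: save, "kill", recreate.
        Map<String, Object> state = before.saveInstanceState();
        StateDemo after = new StateDemo();
        after.restoreInstanceState(state);

        if (after.importantInteger != 42) throw new AssertionError();
        if (after.shyFloat != 3.14f) throw new AssertionError();
        System.out.println("state restored");
    }
}
```

The round-trip is exactly what Android performs between onSaveInstanceState() and the next onCreate() - only the container type differs.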

A tip: when you start designing any application, start off from the very beginning by defining what will compose the state of your Activity. What are the defining variables? What are the elements that define the current user state? Once you have this planned, start implementing the save (onSaveInstanceState) and restore state chunks of code from the very beginning. Whenever you have a new variable defining your state, add it to save/restore right away - later on you're more likely to forget, make a mistake or lose interest and motivation to do so. It's not really motivating to suddenly write Parcelable code for 30 variables and 10 lists.
At least not for me.

A bit unrelated - I'm in Copenhagen right now and will be around for UDS (Ubuntu Developer Summit), so you can always poke me if you're attending it as well. Till later!

Custom library search paths


A beginner's post today. The idea for this article came from my recent battles with packages and a question asked by a friend of mine - "how can I tell the system where to find some private libraries of mine on GNU/Linux for my application?". There are a few methods of doing this: LD_LIBRARY_PATH, the cache, and RPATH. A word of warning though - it is usually not wise to make your application depend on libraries in private, normally inaccessible paths. But when you are forced to, here's what you can do.

I'm pretty sure all of you know about the LD_LIBRARY_PATH environment variable. If you need to quickly point to an untracked directory with libraries when running an application (or want to override some libraries with other copies), all that is needed is setting LD_LIBRARY_PATH=/path/to/libraries before starting the target program.

 # Run foo, searching for libraries in the /usr/lib/baz folder besides the default 
 # library locations
 LD_LIBRARY_PATH=/usr/lib/baz ./foo

The other way, probably known to almost everyone, is using the cache. /etc/ is a special file in which we can include custom search paths for libraries for the whole system. After adding entries to this file, we always need to rebuild the cache by calling ldconfig - it's good to remember that.
In Ubuntu, we shouldn't edit /etc/ directly, since there is a special directory, /etc/, where we can put .conf files with paths, one per line, at our convenience. We can also add directories through the command line, without editing or adding any files - using ldconfig -n.

 # We can either edit /etc/ (or put a new .conf file into /etc/
 # and rebuild the cache by calling ldconfig, or simply run ldconfig adding the path
 # from the command line
 ldconfig -n /usr/lib/baz

Now, if we do not want all the applications in our system to use the private path but just a selected one - and we do not want to bother by meddling with the environment variables every time - there is something called RPATH which we can use.

RPATH is an optional entry in the .dynamic section of any ELF object - both executables and shared objects. It is an embedded list of run-time search paths for the given object. By default, when looking for a library, the dynamic linker first looks through all directories listed in the binary's RPATH - if none are specified or the given library is not found there, it searches the rest (LD_LIBRARY_PATH, the ldconfig cache, default paths etc.). So it's an easy way of adding search paths to a selected object.

We can add entries to RPATH in various ways. If we are using ld directly, we can use the -rpath=path parameter. When using gcc/g++ for linking, we pass it through the -Wl option, which forwards flags directly to the linker. A few examples:

 ld foo.o -o foo -rpath="/usr/lib/baz"  # When calling ld directly
 gcc foo.c -o foo -Wl,-rpath="/usr/lib/baz"  # When using gcc/g++
 gcc foo.c -o foo -Wl,-R"/usr/lib/baz"  # Using the shorter -R linker alias for -rpath

After adding the path, we can confirm that RPATH has been set correctly by calling objdump -x application | grep RPATH or using any other similar tool (e.g. readelf -d).
For convenience, CMake provides some easy-to-use properties for forcing RPATH, both for the whole project and for a single target. All information regarding that can be found on the CMake wiki. Really useful when using CMake.

WARNING! Just to be clear: as I already mentioned at the beginning, in most cases we should avoid using RPATH or any other non-standard library path meddling. A detailed study on why it is a problem can be found here. But anyway, it's always good to know how to deal with situations like these. Just remember - when building Debian packages from binaries that have custom RPATHs, get used to lintian making a fuss about it. And it probably won't be accepted into any official Debian repository either. No one likes RPATH, it seems...

Contributing to Ubuntu on Launchpad


It's been really busy on my side lately, especially since I got assigned to take care of some release integration work. That's why this time I'd like to write a slightly less technical post. I would like to share my experiences contributing to some of the official Ubuntu projects hosted on Launchpad. Ubuntu being first of all an open source, community-based system, most of its development happens through contributions. It's really easy to get started! Consider this a pseudo-tutorial combined with a bit of soap opera.


As most of you know, almost all Ubuntu-related development happens on Launchpad. In combination with bazaar (bzr), Launchpad provides a very useful interface for submitting user contributions to any hosted code. Even though I'm more of a git fan, bazaar is rather OK for contributing fixes - although some things got me irritated at first. Most projects accept contributions on a merge-request-and-review basis. Just find the LP page of a project you are interested in, target an appropriate bug, assign it to yourself, create a bzr branch, request a merge and wait for reviews. It's actually much easier than it sounds.

My first real contributions since I started working for Canonical revolved around the compiz, Unity and Unity 2D projects. Because of their importance to the user experience, there are actually quite a lot of people involved in their development - so it's rather easy to start. The first thing I had to do was get in touch with the people responsible for the management of these projects. Entering the actual community of a given project can be very helpful, as other developers can provide insight and advice regarding troublesome issues in the project.
The best channel for communication: IRC. Most contributors have time-zone information given, along with the IRC nickname used and IRC server (although Freenode is probably where you should look first). This way we can identify people we want to poke for more information or advice and when to do it. If no maintainers are available on IRC, fall-back to e-mail.

I had some bugs in need of fixing. I assigned them to myself and started my work. Usually it is wise to ask the existing maintainers and developers about the planned fix before actually preparing it. It's good to use the experience of others - and it also saves time on rewriting the fix after it gets rejected. And do prepare yourself for getting rejected or having to modify your code A LOT - this happens very frequently.
I remember struggling with the lack of documentation for compiz. There are many projects like this, sadly. In such cases, the best way is simply asking others for information when the source code itself is not enough. It's good to remember that asking is nothing to be ashamed of.

Then, after preparing the fix, comes the time for submitting a merge request. First of all, we branch the project's trunk and commit our fix to that branch. We then push it to Launchpad, to our own branch. Usually it's best to push it to lp:~launchpadid/project/name_of_the_branch, where launchpadid is your LP name, project is the name of the project we're submitting the fix to (e.g. compiz) and name_of_the_branch is any more-or-less informative and fitting name for our fix.
After this is done, we can go to that branch and request a merge through the web interface. When requesting a merge, we are asked to provide a description of the fix and other information, if needed. Remember that, for Ubuntu projects, writing a description and a commit message is mandatory. It's best if we follow this structure when writing the description:

 - Problem description
 [Input here the detailed description of the problem being fixed]

 - Fix description
 [Describe here how the code you are submitting fixes the problem]

 - Test coverage
 [How to test that the fix is correct? How are you ensuring it's fixed?]

After submitting a merge request, we wait until someone (a maintainer or another developer) reviews our code. If it gets approved, someone will begin the process of merging it into trunk sooner or later - usually sooner. How this is done varies from project to project - some bigger Ubuntu projects, like Unity, have it done automatically by a bot (e.g. jenkins), while others need the maintainer to merge the change manually.

It's really satisfying to see your first fix getting merged upstream - I remember being very happy when my first merge request finally made it in. When you're new to the code, it can be hard to fit the code-style too. But usually, after our fix is merged, the project maintainers or bug supervisors will take care of closing the bug as needed, so here our work ends.
Sometimes maintainers might ask us to resubmit the same fix for an older branch, especially if the fix is important and fits the SRU profile (Stable Release Update - read more about it here). But this is usually trivial to do - typically just a bzr merge -c revnum to cherry-pick the change from your branch into the other one.

As you can see, the process of contributing to Ubuntu projects is very simple and fast. Some small advice to remember:

  • When assigning yourself to a bug, remember to set it to In Progress while working on it - otherwise someone else might also fix the bug, and one of you will have wasted time for nothing
  • Try not to modify code unnecessarily, i.e. only change the parts of the code that are required; do not refactor existing code when not needed. You can always submit a separate merge request for refactoring
  • Keep timezone differences in mind! Sometimes you'll have to take them into consideration to get in touch with someone
  • Always test the fix before submitting a merge request - best on a clean system (chroots and VMs are very useful here)

That being said, I have really good memories of my first contributions to Ubuntu. Since the current stable version, 12.04 Precise Pangolin, is an LTS (Long Term Support) release, contributing as many fixes as possible is crucial. Because this release is supposed to stay stable and precise for a long period of time, I encourage everyone to try their hand at making Ubuntu rock-solid. For beginners, I would recommend looking at Unity 2D and probably Unity, but there are also many, many more upstream Ubuntu projects needing attention.

Hope to see you on Launchpad soon! I might even be the one to review your code..!

Ubuntu - toolchains, switching-the-hard-way and packaging


Sometimes, for whatever reason, you might want or need to use an older cross-compilation toolchain for a given architecture - especially when you notice something wrong with the more recent versions. Ubuntu usually provides at least the two most recent toolchain versions in its repository. But switching to an older one is not that trivial, since a separate 'meta-package' handles the creation of all the useful symlinks. Here's a hacky and blunt way of switching from one version to another.

Let's take precise (12.04) and gcc for ARM as an example. The most recent Linaro toolchain is gcc-4.6-arm-linux-gnueabi - this is also the one installed when gcc-arm-linux-gnueabi is selected. In all cases, the gcc-[version]-arm-linux-gnueabi package installs all the binaries, while the gcc-arm-linux-gnueabi 'meta-package' creates all the symlinks, essentially binding that version as the default.
But since gcc-arm-linux-gnueabi is fixed to depend only on the latest toolchain, there is no clean way of creating the symlinks for earlier versions. We can always do it the hard way, though - rebuilding the package.

It's quite easy. Just perform the following steps:

 mkdir pkg
 cd pkg
 apt-get source gcc-arm-linux-gnueabi
 cd gcc-defaults-armel-cross-1.8 # Or any other name of the extracted sources
 vim debian/rules # Here, modify the rules file and change GCC_VER to the version you want
 dch -i # This is an additional step, if you wish to change the version of the package
 debuild -us -uc # Rebuild the package, skipping the signing step

We can then install the package and make use of the older cross-compiling gcc through the standard arm-linux-gnueabi-gcc command.

This actually shows how easy it is to rebuild a Debian package (and, therefore, any Ubuntu package). The debuild command will inform you of any missing build dependencies; if something is missing, we can install those packages with apt-get and re-run the command.
Of course, cross-building packages for architectures different from ours is a bit more tricky. But we'll get back to that some other time, I hope.

As a side note - since this is also related to cross-compiling: some of you might be using qemu-arm for cross-compiling as well (for instance, in a chroot with qemu-arm-static). I noticed that the precise qemu-arm emulator is probably 'bugged' - certain native code results in a segfault. I noticed it while trying to run the lupdate-qt4 binary - it crashes with a segmentation fault. The problem seems to lie somewhere in the qemu-arm package from precise. From what I know, some people are already looking into it. Just be advised.

The fglrx bug mystery solved


In yesterday's post, I gave an overview of a fglrx-related bug in compiz that I have been working on recently. Today, after consulting the developers from ATI and a joint bug-hunt with Sam Spilsbury, we were finally able to find the root cause of the issue - resulting in a one-liner fix for a bug in compiz. So why did this bug only happen with fglrx? Easy: due to implementation differences between the drivers.

Quoting the specification [here]:

    If dpy and draw are the display and drawable for the calling thread's
    current context, glXBindTexImageEXT performs an implicit glFlush.

    The contents of the texture after the drawable has been bound are defined
    as the result of all rendering that has completed before the call to
    glXBindTexImageEXT.  In other words, the results of any operation which
    has caused damage on the drawable prior to the glXBindTexImageEXT call
    will be represented in the texture.

William (from ATI) pointed out to me that I should call glXBindTexImageEXT and glXReleaseTexImageEXT around any drawing in my test code. The reason, as we read in the spec, is that the bind call actually flushes all Pixmap modifications to the texture. These modifications are normally buffered and flushed whenever the driver feels like it. That's also why the Pixmap size mattered - the bigger it was, the bigger the chance that the changes would get flushed automatically.

In compiz, due to a really, really old mistake in someone's code, the decoration damage events were not setting a flag (damaged = true) that was required to rebind the texture on enable() calls. So the whole glX(Bind/Release)TexImageEXT step was missing.

This, however, was not a problem for other drivers at all. It seems completely implementation-dependent when the modifications are flushed to the target texture - fglrx seems to be more economical here, not flushing needlessly, which is what made this bug surface. Crazy stuff.

Big thanks to Sam Spilsbury for spotting the 'early return' in the decor plugin code. Also, big thanks to the developers from ATI (AMD) for their help and patience!

Hunting for a fglrx bug - X programming


I wanted to share a short story about an irritating bug I have been trying to fix in Ubuntu's compiz, related to ATI's proprietary closed-source driver, fglrx. I had many context switches during the process, so it took longer than I expected. This post might shed some light on a strangely specific problem that I encountered.

The bug in question can be seen on Launchpad as LP: #770283 in compiz. Window decorations (title bar, window buttons etc.) did not update during normal usage when the fglrx driver was in use. The only way of updating the decorations was to resize the window, upon which they magically got redrawn the way they should. But focus changes, mouse hovering, title changes - none of these resulted in a decoration update. If you ask me, not being able to see which window is currently focused is a VERY irritating thing.

At first I thought that maybe compiz was at fault, that maybe it's doing something in a way that confuses the fglrx driver. For that, I had to understand how the decoration drawing mechanism works. The following diagram can briefly sum it up:

Diagram showing the internals of decoration update in compiz

This is a very simplified description, but it more or less shows how it's done. Whenever a window changes its decoration in any way, the decorator draws the modification to the window's decoration Pixmap, and the driver ensures that the change gets propagated to the right GLXPixmap. Knowing this, there is not much philosophy involved - modifying the original Pixmap in gtk-window-decorator (or any other decorator) should result in the texture being automatically updated. Damage events (XDamageNotify), telling which parts of the screen should be redrawn, were flowing correctly as well, so theoretically nothing was wrong.
So what is the cause of the problem?

It seems fglrx has a problem with propagating changes made to a Pixmap to its corresponding GLXPixmap. So, as a temporary workaround, I proposed hooking into the compiz decor plugin's damage event handler and rebinding the texture whenever it changes. This way, on every decorator Pixmap modification, compiz destroys and creates a new GLXPixmap and the corresponding texture. A dirty workaround, but it works with an unnoticeable performance footprint.

But I wanted to prepare a test application that could isolate the problem, making sure it's not compiz-specific. This was not an easy task, since I had only recently started working on X-related code, so my knowledge was almost zero. Also, my OpenGL skills haven't aged too well either (it's been a while!). But by looking into the compiz code and reading the GLX_EXT_texture_from_pixmap OpenGL specification, I was somehow able to reproduce the bug with my simple application.
Anyone interested can download the source here: [fglrx-test.c]. Not much to see though, since it's just a trivial application made specifically to test whether the bug can be reproduced. Big thanks to Gord Allott for clearing some things up for me!

A summary of some random things that I learned, which might be useful for others:

  • After creating a GLXPixmap using glXCreatePixmap from a Pixmap, modifications to the original Pixmap are passed on to the GLXPixmap.
  • Some GL methods (from extensions) are not normally available to use. Some functions need to be fetched first with the glXGetProcAddress call - which, if the given extension (and procedure) is available, returns a pointer to the procedure of interest (more info here).
  • The XDamage extension can be used for notifying which areas of the screen need redrawing (specification here) - simply call XDamageCreate on a given drawable (e.g. a Pixmap).
  • X is VERY confusing.

As of now, my ugly workaround has been merged into the compiz package and it'll probably be available to all Ubuntu precise (12.04) users with the nearest SRU (Stable Release Update). After a while, I hope we'll be able to fix the real root cause of the bug. The guys from ATI (AMD) are really helpful, so it might even happen pretty soon!

IMPORTANT UPDATE! Please read my next post regarding the bug mentioned here!

Deb-triggers and Plymouth - notes


Today just a short post, glued together from a few small things that I found useful - more specifically, regarding deb-triggers and some old Plymouth bits. Anyway, geh, it's so busy lately...

First of all, some loose notes about Debian packages. Lately I have had a lot of contact with package creation and management. Debian packages are a powerful tool - actually, even basing a build system for cross-compiled embedded systems on .deb wouldn't be such a bad idea.
Recently I learned about the existence of deb-triggers - a very useful mechanism for registering common actions that should be performed in response to a specified action by other packages. For example, a package can register a trigger to be run whenever any Debian package installs a file into a given directory. This can be useful when, for instance, we want to update some cache file whenever new entries are installed by a different package. Examples: texinfo documentation, gtk2/3 IM cache entries.

When a trigger is 'triggered', the postinst script of the trigger-registering package is called with some additional arguments. Besides triggering on directory modification, triggers can also be fired explicitly by packages using dpkg-trigger. A very useful thing. You can read more about it in man deb-triggers, man dpkg-trigger and in this nicely written howto.
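As a concrete illustration (all names below are hypothetical), a package maintaining a cache over files dropped into /usr/share/mycache/entries could declare its interest in that directory in its debian/triggers file:

```
interest /usr/share/mycache/entries
```

Its postinst then handles the extra argument dpkg passes when the trigger fires:

```
case "$1" in
    triggered)
        # "$2" holds the space-separated list of activated trigger names;
        # mycache-rebuild is a made-up cache-rebuilding command
        mycache-rebuild
        ;;
esac
```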

Now a small thing about Plymouth. Here's a useful batch of Plymouth commands that can be used in your system init shell script (in the initramfs). It's nothing new, but useful to have a list of:

  • plymouth pause-progress - to make the progress bar (dots in the Ubuntu theme) stop moving.
  • plymouth unpause-progress - to make the progress bar move again.
  • plymouth watch-keystroke --keys=KEY --command=COMMAND - makes Plymouth (and the init script) wait for the user to press the KEY key (e.g. a character key), after which it executes COMMAND and resumes. Useful when run in the background.
  • plymouth --ignore-keystroke=KEY - disables a previous binding through watch-keystroke.
  • plymouth message --text="TEXT" - if the given Plymouth theme allows it, this displays the given message on the bootsplash.

That's all for today. In the nearest future I'm planning on writing a bit about Maliit plugin development and Compiz plugin development. I just need to find a bit more free time, since I actually picked up too many things lately. Stay tuned!
And by the way - new Unity (5.6.0) has been released! Testing is welcome!

(Haiku Blog-O-Sphere) Bits and Pieces: The Small BCardLayout


A short post about something that's not really documented. When working on a communication application for Haiku, I needed to create a typical configuration wizard window. I required a few views to be present in one spot, with only one shown at a time - and the ability to switch between them when the user presses the Next/Prev buttons. Since Haiku exports a neat layout API, I wanted to use it if possible. And then I found the BCardLayout.
Come visit my Haiku Blog-O-Sphere page and read my new blog-entry - Bits and Pieces: The Small BCardLayout.

Really short post, but still it's something that's not really mentioned anywhere. Have a read if you're interested in Haiku operating system programming. Enjoy!

Plymouth bits


Quite recently I had the need and 'pleasure' of playing around with the Plymouth bootsplash. For those that don't know, Plymouth is an application which runs very early during the boot process and displays either textual or graphical boot animation, hiding the actual boot process in the background. There isn't much documentation available on the configuration and installation process - usually this is done by system distributors, not users themselves. As noted on the homepage, Plymouth isn't really designed to be built from source by end users. You can find some basic howto's around the internet, but today I would like to concentrate on the few bits that are harder to find.

For Plymouth to work correctly, it needs to be included in the initial ramdisk of the kernel (initrd/initramfs) - since we want the bootsplash to appear even before the actual filesystem is available. After building/cross-compiling Plymouth to the platform of interest, all that is left is installing it to the initial filesystem.

The source package for Plymouth has an INSTALL file with some basic information on how to configure and build Plymouth, how to prepare the initramfs, and how to proceed with running plymouth from the init scripts during the boot process.

In Ubuntu-based systems, the plymouth and initramfs-tools packages provide some interesting tools for this in the /usr/share/initramfs-tools directory. The hooks/plymouth script can be used for installing the necessary plymouth files into the selected initramfs directory from the currently working system (this can be done, for instance, with a DESTDIR="/initramfs" hooks/plymouth call). This hook copies all libraries for both the text and graphical themes that are currently selected in the system. The script is also a useful hint as to which files are needed.
On the other hand, the scripts/ subdirectory includes some Plymouth initialization scripts used by the Ubuntu initramfs. Look into the scripts/init-top/plymouth, scripts/init-bottom/plymouth and scripts/panic/plymouth files for more information.
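
As a rough sketch of the packing step that follows (paths are hypothetical; the staging directory would be the one populated by the hook above):

```shell
# pack a prepared initramfs staging directory into the gzipped
# newc-format cpio archive that the kernel expects
pack_initramfs() {
    dir="$1"; out="$2"
    (cd "$dir" && find . | cpio -o -H newc) | gzip > "$out"
}
```

For example: pack_initramfs /tmp/initramfs /boot/initrd.img-custom, after which the bootloader can be pointed at the new image.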

After preparing the initramfs, there are a few things that are useful to know (thanks Steve Langasek for clearing up things for me!):

  • For the splash-screen to be visible, the kernel needs to be given the splash boot parameter. Otherwise, no splash screen will be shown.
  • When the graphical theme cannot be used, Plymouth falls back to the default text theme available.
  • Adding a console= command line parameter might confuse Plymouth ('might' is the keyword here, since it's only something I've been told by someone)
  • For the graphical theme - not every library installed by the hooks/plymouth script is needed for Plymouth to work. Most of these libraries are needed for the label plugin to work, not the actual splash screen.

The last point is important when for some reason we cannot fit all these libraries in our initramfs. For instance, with the ubuntu-logo theme available in the Ubuntu repositories - besides the actual graphical theme, many X and font libraries are installed as well, all of which weigh additional megabytes. These are required by the label plugin, which is responsible for displaying text messages during the boot phase. It is used when an error appears and the user needs to be notified or prompted for interaction. If such features are not required, the label plugin, the fonts and their dependencies (most of which are installed in /usr/lib and /usr/share in the initramfs directory) can be omitted. All that is required are the core graphics libraries and renderers.
For instance, in the case of ubuntu-logo, the required files would be: ubuntu-logo/*, renderers/*, ubuntu-logo.png and the like.

I think I'll play around with creating custom themes for Plymouth in my free time, since it seems quite easy. For now, I still have some work to do in other projects. I seem to be so busy lately - and this flu certainly isn't helping either. There might be a short Haiku-related post coming up in the next few days. Hope you'll enjoy it!

Maliit Input Method


Recently, I did some experimenting with the available OSKs (on-screen keyboards) around, ultimately focusing my attention on Maliit. Maliit is an OSK project mainly known for its use on the MeeGo mobile platform - but in reality it can also be used as an input method for both Qt and GTK+ standard applications on any Linux-based operating system. Since the project is being actively developed and changes are made quite rapidly, a bit of work was needed to make it work for all possible IM cases. Nothing too complicated though. Let me help you dive into the world of Maliit.
Big thanks to all Maliit developers for their swift and professional help!

Maliit is certainly a great OSK framework. Its design and implementation allow for much customization, with the default environment providing 'all that is needed'. Let's proceed with a short guide on how to get started.

To get everything working for Qt, GTK-2.0 and GTK-3.0 altogether, we will need the most recent version from the maliit-framework git repository (available here). The latest changes include the merging of meego-inputmethodbridges for GTK+ IM support, as well as new fixes for making GTK+ support work once again. Big thanks to Jon Nordby for finding and fixing the root cause of this issue. Cooperation regarding this bug was magnificent! After fetching, compiling and installing the framework, we will also need the maliit-plugins package installed (most recent sources - here). Detailed instructions on how to build the sources can be found on the Maliit webpage, but in short - basically it's nothing more than just: qmake; make; sudo make install.

Maliit consists of the framework (API and the OSK server etc.) and plugins. Actually, a plugin is the keyboard that we see being displayed on screen. By default, a QML based plugin is built and used, but anyone is free to create their own plugins and use them instead.

How to get Maliit up and running? As noted in the running Maliit section of their web page - after installing both packages, we need to make the application of interest use Maliit as the current input method. Of course, we can do it manually through the "Input method" context menu. Another way is just setting the QT_IM_MODULE (Qt) and GTK_IM_MODULE (GTK+) environment variables. The web page is a bit outdated here, as currently we should set both of them to Maliit. If needed, we can use pam_env to set the environment variables for all applications on the system (e.g. through the /etc/environment file).
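
With the pam_env approach, the two lines in /etc/environment would simply be:

  QT_IM_MODULE=Maliit
  GTK_IM_MODULE=Maliit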

Now we need to get the Maliit OSK server running. Just running maliit-server somewhere in the background is enough, as long as it is started as part of the current session. It is important for the server to be running all the time (for now, because of small problems with GTK+ support - those will be fixed soon), otherwise GTK+ applications will crash (bug report 23949). We can use XDG autostart for starting maliit-server on startup. An exemplary /etc/xdg/autostart/maliit-server.desktop entry could look like this:

  [Desktop Entry]
  Type=Application
  Name=Maliit OSK
  Exec=/usr/bin/maliit-server -bypass-wm-hint

For enabling GTK+ support, sadly, we have to perform two additional steps - updating the GTK+ input method module caches, one for GTK-2.0 and one for GTK-3.0. Currently the installation script does not perform this for us, so we have to do it manually. On an Ubuntu-based system (11.10 in my case), this would look like this:

  eval `dpkg-architecture -s` # This is needed to get $DEB_HOST_MULTIARCH
  /usr/bin/gtk-query-immodules-3.0 >/usr/lib/gtk-3.0/3.0.0/immodules.cache
  /usr/bin/gtk-query-immodules-2.0 >/usr/lib/$DEB_HOST_MULTIARCH/gtk-2.0/2.10.0/gtk.immodules

Now, Maliit should be enabled for all GTK+ and Qt applications.

Maliit modified plugin working on Ubuntu 11.10 Unity-2D

Maliit supports keyboard rotation (landscape, portrait) - although it's not really useful for desktop use. You can see how it works through the maliit-exampleapp-plainqt example application. Writing custom plugins for the framework is also very pleasant (I might return to this a bit later). I personally use the QML-based quick plugin with a few smaller modifications.

As far as on-screen keyboards go, I prefer Maliit over Onboard or Florence, so I certainly recommend giving it a try. Currently, the GTK+ input method has a bug that might pop up sometimes, making the plugin invisible until the server is restarted. This should be fixed soon enough, since jonnor and mikhas from Maliit already mentioned a way to hack-fix it. Based on their suggestions, I prepared a small patch as a fix - it's available here. If you encounter a problem with the OSK not re-appearing, apply this patch with patch -p1, rebuild and reinstall. Then, re-run the server with maliit-server -force-show -bypass-wm-hint and it should work. Have fun!
UPDATE! It seems Jon put up a merge request for an almost identical fix to the mainstream! Meaning soon no patching will be required. You can check out the merge request here. Thanks!
UPDATE! Changes adding automatic IM cache updates have been added! The Ubuntu cache update during make install patch that I prepared has been merged into the official tree. Jon added Fedora support as well. Magnificent!

Basic kernel debugging


A modified kernel, a custom system - this can lead to the kernel not being able to boot properly. What to do in such a case? Usually we try to gather as much information as possible to locate the underlying problem, and some quite basic techniques are enough to achieve that goal.

When working with a system unprepared for sophisticated debugging, it's best to just see what the kernel says and deduce which part causes the system to halt. In most cases, if a display device is present and configured, we should be able to see the kernel messages on it - if, of course, the respective kernel config variables are set (in this case, CONFIG_VGA_CONSOLE or CONFIG_FRAMEBUFFER_CONSOLE).
The case is different when a display is not present. We can use either a serial console or a net-console here, whichever is available. The easiest approach is a serial console. We just need to make sure that our kernel configuration includes all the necessary entries, such as CONFIG_SERIAL_CORE, CONFIG_SERIAL_CORE_CONSOLE and the respective serial drivers (e.g. CONFIG_SERIAL_8250 and CONFIG_SERIAL_8250_CONSOLE in the case of an 8250 UART chip). We then just append the console=ttyS[console number],[baud rate] parameter to the CONFIG_CMDLINE configuration and we're ready to go.
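
For an 8250-based board, the relevant configuration fragment could look like this (the console number and baud rate are, of course, board- and setup-specific):

  CONFIG_SERIAL_CORE=y
  CONFIG_SERIAL_CORE_CONSOLE=y
  CONFIG_SERIAL_8250=y
  CONFIG_SERIAL_8250_CONSOLE=y
  CONFIG_CMDLINE="console=ttyS0,115200"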

In some cases, however, the kernel halts even before we can see any actual output - for instance, before the console driver or the video device is set up. In this case, we might get lucky with the so-called earlyprintk mechanism. The Linux kernel has a feature allowing it to write messages directly to the serial console or VGA buffer even before the real console code is initialized. This feature can be enabled by setting the CONFIG_EARLY_PRINTK variable in the kernel config and additionally providing an earlyprintk= boot argument. It can be either vga or ttyS0/ttyS1 (with the baud rate added as necessary). After the real console is initialized, the earlyprintk console is disabled by default - but if you want, you can keep it running by appending a ,keep argument to the earlyprintk parameter. Most of the time, though, it is not needed.
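
An exemplary combination could look like this (ttyS0 and the baud rate are just assumptions for illustration) - in the kernel config:

  CONFIG_EARLY_PRINTK=y

and in the boot arguments:

  earlyprintk=serial,ttyS0,115200,keep console=ttyS0,115200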

This can give us a good overview of where the problem lies. There are also some flags and kernel command-line parameters which can aid us in debugging certain features, e.g. initcall_debug for making initcall execution more verbose. This can help when the kernel hangs and we have trouble locating the source of the problem.
More useful parameters can be found in Documentation/kernel-parameters.txt in the kernel source.

My common way of quickly localizing a problem is the usual "print it!" debugging - scattering printk()'s around suspicious kernel areas. Early printks help with this as well.

If we know that the kernel itself has no problems, but problems probably appear during or right after rootfs setup, we can also try preparing a small initramfs to include in our image instead. An initramfs is a file-system image that resides directly in the kernel image and is loaded to RAM during boot. We can then, with the available tools, try hacking the real rootfs manually. Busybox is a good choice for a fast, lightweight and working environment for the RAM file-system. To include an initramfs in our image, we need to set the CONFIG_BLK_DEV_INITRD config option and point CONFIG_INITRAMFS_SOURCE either to the directory to be included or to a .cpio archive with our prepared RAM rootfs. We also need to specify whether the initramfs should be compressed, setting the necessary flags as needed. CONFIG_INITRAMFS_SOURCE also accepts files containing specifications of directories and device nodes to be created while building the kernel image. More about this can be found in Documentation/filesystems/ramfs-rootfs-initramfs.txt.
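
A corresponding config fragment might look like this (the cpio path is just an example, and the compression flag name is taken from recent 2.6 kernels):

  CONFIG_BLK_DEV_INITRD=y
  CONFIG_INITRAMFS_SOURCE="usr/rootfs.cpio"
  CONFIG_INITRAMFS_COMPRESSION_GZIP=y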

Another useful tool is the SysRq magic key. If we configure our kernel with a CONFIG_MAGIC_SYSRQ option, we can use the specified key combination to command the kernel regardless of what it currently does (most of the time). The key combination varies from architecture to architecture, but from experience I know that usually it's the same ALT + SysRq + [command key] set. The SysRq key is also known on some keyboards as the Print Screen button. If you're working on a serial console, you can try sending the combination through the terminal, raw.

Most useful commands:

  • b - reboot the system immediately
  • k - kill all programs on the current console - this might be useful if something holds up your system
  • m - dump current memory info, useful for debugging memory issues

The SysRq mechanisms are very well documented in the Documentation/sysrq.txt file.
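
On a related note - when the keyboard combination is unavailable, the same command keys can also be sent from userspace (as root) through /proc/sysrq-trigger. A minimal sketch, with the target path parameterized only so the helper is easy to try out on a regular file:

```shell
# write a single SysRq command key to the trigger file
# (the real /proc/sysrq-trigger requires root)
sysrq() {
    printf '%s' "$1" > "${2:-/proc/sysrq-trigger}"
}
```

Running sysrq m would then dump the memory info to the kernel log, just like the keyboard combination.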

If we encounter a kernel Oops or, even worse, a kernel panic - it is also nice to know how to dig as much information as possible out of such a crash. When working on a remote device, it is wise to include the panic=[timeout] kernel parameter in our arguments. This way, when the kernel panics, the device will try to restart itself after the set timeout period, allowing us access to the bootloader without performing a hard-reset. We can set it to a bigger value to still be able to analyze the crash-log.
As for handling the actual crash, the kernel documentation again has a very nicely written guide to Oops handling. Check it out in Documentation/oops-tracing.txt in the source code.

There are cases in which all these methods are useless, and we need something more sophisticated and/or low-level. When such a need arises, we can try our chances with either kgdb-gdb debugging or Linux/gdb-aware JTAG hardware. But I will try to cover these some other time.

Most of the time printk's (and early printk's) will help in finding the problem. Sometimes some disassembly is necessary - for instance a closer look at some parts of the vmlinux image or, specifically, particular object files composing the bootable image. Kernel debugging is usually like crime-solving. It takes much effort, clue-searching, time and thinking. And, as it is also with crime-solving - sometimes you might simply fail. But one must try not to demotivate oneself. If all ideas have been already used up - take your time, switch context, and return with a fresh mindset after a while. This helps.

Gone Canonical


Today's post is more private-life related than the others, but still in some ways technical. I am proud to inform that I have officially joined the Canonical team as a Software Engineer! From now on, I will help enhance the overall Ubuntu experience, mostly working on their flagship Unity environment.


I intend to contribute to the Ubuntu community as much as possible, earnestly carrying out my new responsibilities. Time will tell how well I do. But I hope for the best! Right now I'm preparing everything that is needed, since my work will commence on the 19th of September.

Taking the occasion, I would like to thank the whole ASN team for everything up to now. It was great working with you people - these few years were really magical. Anyway, these guys can do real magic with technology! So remember - if you're looking for someone to code something for you, ASN is probably the best choice there is. There is no challenge too big for this team.
I'll miss working with you! But no worries, I'll be around. Watching!

A little bit of profiling


Code profiling is a very important aspect of computer programming - almost every software engineer knows that well. It helps find bottlenecks in your code, showing which parts need improvement, which cause trouble, etc. I'm sure everyone knows this already. There are many tools for this purpose available around the internet. This short post lists a few of them, along with a brief introduction to a really simple and naive solution I made in the past.

Here are some tools helpful in application profiling:

  • Valgrind - an excellent set of tools for profiling and memory-leak testing - especially callgrind and memcheck
  • gprof - the GNU profiler, one of the most essential code profilers available, part of GNU Binutils
  • QTestLib - for Qt applications, the QTest unit-testing framework also provides benchmarking functionality

In the past, while working on the Flatconf project for ASN, I needed to do some fast but non-complicated time-profiling for my application. Since I was mostly interested in just knowing the average time spent in selected functions, and I somehow couldn't make the existing tools do what I wanted - I wrote a really simple library for the time profiling I needed.

The timeprof profiler uses the instrumentation callbacks offered by GCC. The library itself isn't very interesting, since it was written in a short period of time just to count the time usage of called functions, but it nicely shows the use of the -finstrument-functions flag. You can find the library in its respective ASN Labs git repository here.
The instrumentation handlers can be used for any specific analysis of function behaviour in a program. All that is needed is specifying the flag during compilation and providing the __cyg_profile_func_enter() and __cyg_profile_func_exit() callbacks definitions as needed. We can inform the compiler which functions we do not wish to analyse by declaring them with the __attribute__((no_instrument_function)) attribute.
The instrument-functions mechanism is rather well documented, so there's no use in duplicating information. But it's really useful to know about its existence when specific analysis or debugging is required. Consult the timeprof source code to see an example of its usage.

On a side note - lately many different, interesting things have happened, which is why today's post is a rather short one. But I'll return soon and hopefully explain the reasons why. Thanks!

(Haiku Blog-O-Sphere) Bits and Pieces: Notifications and Menu Builders


During the weekends, I'm working on enhancing a very old BeOS application long lost in time. While browsing the Haiku kit and application source tree, I sometimes stumble upon small elements - new (at least to me) and interesting - that the Haiku operating system added to its API during development. I like to try these elements out. Most of these API additions might change or even disappear in the nearest future, since I understand their development process is not yet finished, but they're interesting to know nevertheless.
Come visit my Haiku Blog-O-Sphere page and read my new blog-entry - Bits and Pieces: Notifications and Menu Builders.

I finally added a new post to my long forgotten Blog-O-Sphere. Oh, and go give Haiku a try while you're at it. It's worth it. Thank you and stay tuned!

Flatconf 1.5 concept?


The Flatconf project has been part of my interests for a long time - from the very first moment I started cooperating with the ASN Labs team. It is not a very well known project - from what I know, it was only used in the old Lintrack distribution. It's more of an interesting experimental concept than an innovative solution. Flatconf is an attempt at creating a universal configuration system based on the idea of 'flat files' and the usage of the file-system as a natural database. Currently two versions of the specification exist, with the newer one (2.0 draft) using a new concept for holding variable meta-data - not entirely a good one though. But, just now, I thought about the original idea and came up with some thoughts for modifying it, making it a little bit more feasible. Flatconf 1.5 anyone?

The original concept of Flatconf 1.0 had uber-flatness in mind. To understand the motivation, first we need to define what a flat file means in our context. A flat file is a file whose contents are not structured in any additional way besides holding data. This means that getting the actual 'data' from a flat file is just a matter of reading its contents - because its contents are plain data with no unnecessary structural meta-data. A structured (not flat) file would be any file which would have to be parsed first to retrieve the data of interest.

Flatconf used flat files for holding configuration variables, i.e. a separate file for every variable. Structurization of data is achieved by using the file-system directory hierarchy. Each variable can have meta-data associated with it - such as information about variable type, variable user-readable description and many more. The main problem and difference between versions 1.0 and 2.0 is the way these meta-data are stored.

  • In 1.0, meta-data were flat and also stored as separate files on the file-system. Each configuration sub-directory had an .fc hidden directory in it which held meta-data for all respective variables residing in the given directory.
  • In 2.0, meta-data were held in structured files in a different directory tree, formatted using the custom-made FCML (Flatconf Meta Language).

The 1.0 approach had the problem of bloating the whole data directory tree with unnecessary files and directories at every hierarchy level. The 2.0 approach, well, wasn't flat anymore, and since it used a newly designed meta language - it created many, many problems and had many drawbacks. It removed the ease of variable manipulation, forcing the system to parse meta-data first and adding complexity.

But what if we kept the 1.0 flat concept, only slightly modified? What if we just moved all the meta-data from the data tree to a separate directory like in 2.0, but left them as flat files? This requires the overall specification to be changed, since now we have a bit more freedom.
This is best explained with examples of Flatconf configuration trees.

data/                # Main fc data directory
 |- hostname         # - \
 |- welcome_text     # - Text variables
 \- net/             # A directory variable
     |- ip
     |- gw
     \- ifaces/      # A list variable, i.e. a directory with user element-addition possibilities
         |- eth0/    # eth0 interface element added by user
         \- wlan0/   # wlan0 interface element added by user
                     # (both are directories holding more variables inside, like MAC address etc.)

metadata/               # Main fc directory for holding respective meta-data
 |- hostname/           # Every variable has a directory named the same way holding its meta-data
 |   |- .type           # Every meta-data is a separate file, starting with a '.'
 |   |- .descr          # e.g. this is the variable human-readable description
 |   \- .onapply        # And this is a script/application that is fired on variable modification
 |- welcome_text/
 |   \- (...)
 \- net/
     |- .type           # e.g. directories have "dir" in their .type contents
     |- .descr-en
     |- .descr-pl       # Descriptions can be locale-specific
     |- ip/
     |- gw/
     \- ifaces/
         |- .type
         |- .skel/      # The skeleton directory, holding the base meta-data for every new list element
         |   |- .type   # Like for instance a .type data with contents "dir"
         |   \- (...)
         |- eth0/       # Every new list element has a named copy of .skel created
         |   |- .type   # Same contents as in .skel/
         |   \- (...)
         \- wlan0/

This example can be better understood if one already knows the basics behind the Flatconf concept.

In this approach, fetching the description string meta-data for the net/ip variable is as simple as issuing cat $FC_METATREE/net/ip/.descr in bash. The same goes for fetching the variable's data: cat $FC_DATATREE/net/ip.
Of course, in this approach, the meta-tree is not read-only anymore, as addition/deletion of list elements involves creating/removing directories from the list's meta-data directory. But for this concept it is not of much concern.
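
These access patterns could be wrapped in two tiny (hypothetical) shell helpers, with FC_DATATREE and FC_METATREE pointing at the two trees:

```shell
# read a variable's data from the flat data tree
fc_get() {
    cat "$FC_DATATREE/$1"
}

# read one flat meta-data file of a variable (e.g. descr, type)
fc_getmeta() {
    cat "$FC_METATREE/$1/.$2"
}
```

With these, fc_getmeta net/ip descr prints the description of net/ip - no parsing involved, which is the whole point of the flat approach.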

Why do meta-data variables start with a '.' (dot) character? To remove ambiguity. Thanks to this small modification, we can easily create directories holding new element meta-data named the same way as the element being added, without having to worry that someone could add a descr list element, bringing confusion to the system (confusion like: "is this the descr meta-data variable, or a list element called descr?").
As for list ordering and visibility - we could add a trivially structured .listorder meta-data variable listing the element order or even list element visibility.
Well, all this is just an idea.

As stated before, Flatconf is an experiment. Using the file-system as a natural database for structuring and holding variables is an interesting idea, similar to the GNU/Linux concept of 'everything is a file'. Actually, Flatconf is very similar - in visualization - to the /proc/sys sysctl interface. But is it efficient? When writing bash scripts, it might indeed be much easier to just browse the file-system and read file contents instead of parsing them by hand. But when writing C/C++ code - the costs might be higher. One of these days I will do some benchmarks to check the performance.

I noticed that many of my posts are about less-known, niche topics. This does not mean the unfamiliar cannot be interesting. There is always much to learn from the non-mainstream!

Haiku, the HaikuAPI and the menu


Most of you have probably at least heard about the Haiku operating system. For those who didn't, or just know the name: Haiku is the open source recreation and spiritual successor of BeOS - an alternative, multimedia-oriented operating system discontinued some time ago. Today's post will be a short collection of brief, random notes regarding its application programming interface (known as the HaikuAPI) - an API that I consider very consistent and intuitive to use.

It's just an overview of the interesting bits though, since sadly I have many work-related things on my mind right now. Let me put it this way - wish me luck! But now back to the topic at hand.

In the past, BeOS was relatively popular in the personal computer world. Even long after it was no longer developed, I used it quite frequently (during my university days), writing small BeOS applications and learning the insides of the BeAPI in the process. History aside, the operating system lives on to this day in the form of the still-developed Haiku operating system.
The API stays more or less consistent with the original, with a few very interesting additions. It would be useless writing an introductory tutorial right now, since there are many other places where the basics of the Haiku API can be learned from. For instance, Learning to Program with Haiku by DarkWyrm, The Haiku Book and the good old - but still very informative - legacy Be Book.

What does the HaikuAPI have to offer? A short overview of its structure: the API is divided into a variety of so called kits - sets of classes and object types for given purposes. For instance, the Interface Kit contains all the widgets and view specific classes, while the Storage Kit defines file system access primitives.
The Haiku modifications of the BeAPI include the addition of the so called Layout API. This was the one thing missing in the old BeOS system, as all layout work had to be done manually by the programmer - which was, as everywhere, very tedious. The Layout API is still in development, so it might change in the nearest future, but here is a very good article written by my past GSoC mentor Ryan Leavengood explaining its basic usage - Laying It All Out, Part 1, available on the Haiku Blog-O-Sphere.

But quite recently I found another interesting feature in the Haiku API. The BLayoutBuilder class also offers the functionality of easily building menu contents! Similarly to building layouts, we can create a menu with all its respective BMenuItem's in a fast and easy way. Consider the following BPopUpMenu example:

	// This piece of code is actually part of the new Toku Toku development source
	BPopUpMenu *menu = new BPopUpMenu("contact_menu", false, false);

	BLayoutBuilder::Menu<>(menu)
		.AddItem("Informacje", CONTACTMENU_INFO)       // "Information"
		.AddItem("Rozmowa", BEGG_PERSON_ACTION)        // "Conversation"
		.AddItem("Dziennik rozmów", CONTACTMENU_LOGS)  // "Conversation log"
		.AddItem("Usuń z listy", CONTACTMENU_REMOVE)   // "Remove from list"
		.AddItem("Ignoruj", CONTACTMENU_IGNORE);       // "Ignore"

The newly created BPopUpMenu, a pop-up context menu displayed after a right-click mouse action, is filled with menu items - each posting a message with a different identifier (like CONTACTMENU_INFO, a constant defined elsewhere). Every item is enabled by default, but we can easily change that on the fly using the SetEnabled() method during menu construction. We can add separating elements and also sub-menus - everything in one sequence. Check here for more details.

What do I like about the HaikuAPI? It's consistent. Its internal data structures are easy to use, without the complexity of the usual high-shelf C++ code. It uses classes and object-oriented design, but at the same time it still feels like writing standard C code - just with classes. The messaging is well designed, and the API allows for much freedom. It's nicely bonded with the operating system.

With all the various cross-platform application frameworks around, like Qt, GTK+ or XULRunner, it's sad that no one has succeeded in nicely detaching the Haiku API or BeAPI into an external toolkit usable on other, more popular systems. I knew of some initiatives with similar ideas in the past, but from what I know, none of them survived. Or maybe some did, and I just don't know about it? Nevertheless, as an old BeAPI fan, I would certainly like to write a few multi-platform applications using the Haiku API!

I might also write a new post on the Haiku Blog-O-Sphere. It's been a while!

Ubiquiti RSPro RedBoot, OpenWRT and the exec


Not sure if this is a common bug for everyone using a hand-built OpenWRT on the Ubiquiti RouterStation Pro platform, but I see it on all the boards in my possession. When booting the OpenWRT kernel and watching the printk() output, rubbish data can be seen among the kernel command line parameters. Usually this does not break anything, but as we know, RedBoot on Ubiquiti boards passes board-specific parameters to the kernel command line, with information such as board type, ethernet MAC address etc. Sometimes those parameters are not passed and parsed correctly because of this. I did a small investigation into why this happens.

First of all, it is best to understand how the Linux kernel fetches the command line on MIPS AR71xx boards. In the OpenWRT 2.6.37 kernel, all boot-specific parameters are passed through the a0, a1, a2, a3 processor registers during start-up, and are then copied to the respective kernel variables fw_arg0 up to fw_arg3 - and used in this form later on. We are interested in the first three variables. Their meaning is similar to how normal GNU/Linux applications fetch information from the system:

  • fw_arg0 - the argc equivalent, tells the kernel how many command-line arguments have been passed.
  • fw_arg1 - the argv equivalent, a pointer to an array containing the arguments.
  • fw_arg2 - the environment table equivalent, an array of pointers used to pass the environment settings.

When in RedBoot, a call to exec executes the loaded program, setting up the environment and allowing additional kernel arguments to be passed to the cmdline (through the -c option). We need to remember that the RedBoot go command only executes the kernel, while exec also sets all the platform-specific parameters in the environment beforehand. Information such as the aforementioned board type, ethernet address etc. is passed through the environment, so in our case through fw_arg2. The variables fw_arg0 and fw_arg1 are used only when passing additional arguments through exec -c. The problem lies in how these two variables are used by the bootloader. First the command line arguments are passed to the kernel, and then the environment parameters are concatenated onto them:

cmdline=[User command line arguments] [Environment variables]

It seems the Ubiquiti-modified RedBoot, whose sources aren't available - but should be, always passes a constant value of 2 as the number of cmdline arguments. fw_arg1[0] is always an empty, NULL-terminated string, and all the actual arguments are squeezed into fw_arg1[1] as a single string. Besides being strange and illogical, up to this point there is no real problem. But sadly, Ubiquiti's RedBoot (at least version 0.9.00318M.0905121200 - built 12:01:38, May 12 2009, which is on all my RouterStation Pro boards) seems not to initialize the fw_arg1[1] memory area during the boot sequence. It only does so when we explicitly pass some additional kernel arguments, like exec -c "blah" or even exec -c "" (for no parameters). Until then, there's nothing but chaos in our cmdline.

Some might be thinking: "But wait! If this is how it is, why does the firmware still work correctly most of the time? Shouldn't rubbish in the cmdline make the environment unreadable as well?". True. Since the first part of the cmdline passed to the kernel is made out of fw_arg1, with the environment glued onto it, the important parameters should be drowned out by all the invalid data. But since the memory area is uninitialized, there is a fair chance that somewhere in that big command line buffer a 0 byte will appear amid the rubbish, terminating it early - so the environment still gets appended and remains visible. The kernel ignores most of the chaos, since it is unable to parse it, and simply moves on to the environment variables. Or at least that's how I explain it.

At least that seems to be the case on my boards. The fastest way to deal with this problem is either using exec -c "" instead of exec in your bootscripts, or modifying the kernel source to ignore the user-given command line arguments (or adding a magic-sequence mechanism perhaps?).

Lately I'm a bit busy with different work and research stuff, so I don't have much time to spend on my hobby projects. But expect some Haiku OS related posts soon, since I intend to get back to Toku Toku as soon as possible.

Stackguard in gcc


While programming in C at work yesterday, I ran into a small issue I did not expect. A piece of code that I thought would work (and it did, but in other circumstances) this time did not. I wanted to understand the problem better, and in the end learned a bit about the Stackguard in gcc. Some of you have probably heard about it already. Consider the following piece of code:

#include <stdio.h>
#include <string.h>

struct data {
	char stuff[2];
	char more[510];
};

int main(void)
{
	struct data foo;

	strcpy((char *)foo.stuff, "Something bigger than 2 bytes long");

	return 0;
}
When compiled with the standard gcc test.c -o test line, the program will most likely work normally. Yes, stuff is only 2 bytes long, but right after it we still have 510 more bytes available, so nothing visibly breaks - there's plenty of space. We can even add __attribute__ ((__packed__)) to make sure of that.
But then, let's try compiling the same code with optimization enabled. With my compiler version (4.4.3-4ubuntu5), building with -O1 or higher suddenly made this code report a buffer overflow both at compile time and at runtime - which is understandable, of course. But why only now? The warning mentions a __builtin___strcpy_chk() being called. In the disassembly we can see that, instead of the standard strcpy() we wanted, a different, safe version of the function is indeed called. In libssp/strcpy-chk.c of the gcc 4.3.3 source code we can see the internals of __strcpy_chk(). This version is called when a statically known-sized buffer is used. It first checks whether the copied string can fit in the destination buffer, and bails out when it cannot. Such mechanisms are part of the GNU C compiler's Stackguard - you can read a bit more about it here.

Why only with optimization? It seems that in the Ubuntu builds of gcc, even the basic optimization levels have the _FORTIFY_SOURCE define set to 2 by default. This can be disabled by adding -D_FORTIFY_SOURCE=0 to our gcc flags when needed. Then we can finally do whatever we want with our static buffers. At least almost!

Atom feed added


Quite recently, I was asked by a colleague to include an RSS feed on my web page. I thought it might not be a bad idea, considering that I do not update too frequently. So I added an Atom feed for the development part of my web page. Since I am not using any CMS here - only some minor PHP scripting - and having little time, I decided to write a quick Python app converting my HTML content into an Atom web feed XML file.

Putting aside the question of why I do not want to use a CMS for my web content, the Python script I prepared is really, really badly written. I probably shouldn't even post it here, but since it works, maybe at least one person will find it a bit useful. For parsing, I used the HTMLParser module (known as html.parser in Python 3). The script works as a very naive finite state machine, looking for the particular divs and other HTML tags I use, and fetches their contents respectively. One day I might clean it up and make it better. For now, something that just works is fine.
You can check out this ugly piece of Python code [here].
UPDATE! It seems the server had some problems with serving files with the .py extension. Now the download should work correctly. I apologise for the problem.

In the nearest update, I will also add an Atom feed for the art section of my web log. Stay tuned.

UniConf: part I


In this post, I will try to write about how to use the basic UniConf API. This is an unofficial guide to UniConf, so be advised. I will concentrate on the native C++ version of the API. This can be thought of as something like a small UniConf tutorial.

If you read my previous UniConf post, you probably have an overview of how simple and straightforward it can be. Everything starts off with the creation of a UniConfRoot object defining our UniConf configuration root. If we interpret the configuration tree as a hierarchical tree similar to those in file systems, the UniConfRoot is the variable tree mounted at the root path ("/"). Its constructor, in one of its most commonly used forms, accepts a moniker string as the argument. Monikers are strings used to represent generators available in a UniConf system. I already mentioned some of them previously. Generators allow using different configuration backends and modifiers, and monikers can be mixed together.

Some of the available monikers:

  • ini: a standard .ini file-like parser
  • temp: a writeable hierarchy stored only in volatile memory, used for temporary values
  • unix: a UNIX domain socket communication with a uniconfd daemon
  • tcp: a TCP socket communication with a uniconfd daemon
  • ssl: an SSL-encrypted TCP socket communication with a uniconfd daemon
  • readonly: make the tree read-only
  • cache: use cache to make variable-access faster
  • list: makes UniConf browse through the list of other generators in order to find the variable

When defining the root UniConf entry, we need to decide what generators we want to use. Let us consider two .ini files we would like to use as part of our configuration: foo.ini and bar.ini.

foo.ini:

hostname = ala.lan
ip_address =
gateway =

bar.ini:

welcome_text = Hello world!

[terminal]
width = 90
height = 40
As stated above, we first start by defining the UniConfRoot. We will use both files at once. Consider the following code:

#include <iostream>
#include <wvstreams/uniconfroot.h>

using namespace std;

int main(void)
{
	UniConfRoot root("cache:list:ini:foo.ini ini:bar.ini");

	UniConf alias(root["/terminal"]);

	cout << "hostname = " << root["hostname"].getme().cstr() << endl;
	cout << "/terminal/height = " << alias["height"].getme().cstr() << endl;

	root["hostname"].setme("yuki.lan");
	root.commit();

	cout << "hostname (new) = " << root["hostname"].getme().cstr() << endl;

	return 0;
}

We defined the UniConfRoot to include caching of values for faster access, and told UniConf to use a list containing two ini: generators for fetching the configuration variables. As a result, the contents of our two .ini files are mounted on top of the UniConf tree at once. When we want to access a variable, UniConf will browse the whole list looking for the variable definition.

Almost every object in a UniConf tree is of the UniConf type. This is quite intuitive, because if we consider the configuration tree as a directory hierarchy, even the root of the tree is in fact just another directory. The UniConf type (and, since UniConfRoot derives from it, that type as well) provides a handy [] operator by which the programmer can easily access variables with a given path relative to that variable. As an argument, this operator takes a UniConfKey - a very useful type for UniConf path storage. But since a UniConfKey can be constructed from standard C strings, we can pass a string as the variable path to the [] operator. The resulting UniConf object, representing the given variable/object relative to the queried object, is returned.

In our example above, the code root["hostname"] will return the object corresponding to foo.ini's hostname variable, whose VFS-like path is /hostname (because we mounted the configuration file at the root). .ini file sections act as directories holding section variables. After defining the root, we define an 'alias' to the terminal section present in the bar.ini file. What I called an 'alias' is nothing more than the UniConf object for the terminal object (section). We can now use the alias object to access variables in the terminal .ini section. So, instead of writing root["terminal/height"], we can use alias["height"].

Now for fetching variable values. Every UniConf object exports a getme() method that can be used for this purpose. The method returns a WvString, which is the wvstreams library string format. If you do not want to use any other elements of the wvstreams library besides the UniConf part, WvStrings provide a cstr() method that returns the standard C char string equivalent.
Setting variables works analogously: every UniConf object has a setme() method. As an argument it requires a WvStringParm (aka WvFastString) object, which is nothing more than a faster WvString intended for passing function parameters. It is faster and more memory-efficient because these objects are created only from const char *'s - it does not allocate its own memory for holding and copying the string, but uses the const char * directly. It is not advisable to use them for other purposes. We can therefore use root["hostname"].setme("yuki.lan"); to modify the contents of the variable hostname to yuki.lan.

After we set the new value of the variable, we want to make sure the change has been propagated to the given configuration subsystem. That is why we commit the changes with a root.commit() call. After changes are committed, the new variable values will be saved to their corresponding .ini files. We can then check if the change really happened by reading the value again and checking the configuration files later.

But now, let us suppose that we have a new configuration tree we want to attach to our configuration system. We could of course define a new, detached UniConfRoot for this purpose, but let us suppose we want it present in the tree we already have. Consider adding the following code.

	root["/tcp"].mount("tcp:localhost:4111");

	cout << "From uniconfd = " << root["/tcp/test"].getme().cstr() << endl;

The mount() method mounts a given generator at the selected UniConf key. In this case, we try to mount the tree exported by a uniconfd server running on localhost's port 4111 at the key path /tcp. This way we can extend our UniConf configuration tree dynamically, using a scheme similar to the one present on Unix filesystems.
For this example to work, one needs a running uniconfd server on localhost. The simplest way is running uniconfd like this:

# As for uniconfd version 4.6.1
uniconfd -l tcp:4111 /=temp:

This way we will have an uniconfd server running an empty in-memory configuration - consult the uniconfd manual and help text for more details. Since we use the temp: generator, the /tcp configuration tree is empty by default. So to have any variables to access, we have to create them by ourselves first.

One last thing I will explain during this post are iterators. UniConf provides us with a handful of different iterators that can iterate through our variable tree. The most basic one is UniConf::Iter, which simply iterates through all immediate children of an UniConf node. This means we can use this iterator to browse through variables on one level, not looking into sub-branches of the tree (like iterating through files in a given directory, not including sub-directories). If we want a depth-first recursive search of a given branch, we can use the UniConf::RecursiveIter iterator. There are also sorted iterator versions of the two - the UniConf::SortedIter and UniConf::SortedRecursiveIter - which traverse the variables in alphabetic order, by full path.
Consider the following piece of code:

	UniConf::RecursiveIter i(root);
	for (i.rewind(); i.next(); )
		cout << i.ptr()->fullkey().cstr() << " = " << i.ptr()->getme().cstr() << endl;

We create a recursive iterator to browse through our whole tree. We first call rewind() to position the iterator at the beginning of the branch in question, and then move the iterator through consecutive next() calls at every iteration. To get the object currently pointed at by the iterator, all we need is a ptr() call. fullkey() is another UniConf method that returns the UniConfKey object of the given variable, containing its full path. We print it along with the variable's value.

This is all for today's post. In the nearest future I will try to explain some other, maybe a little less basic aspects of UniConf, such as notifications, copying and many others. But as you have probably noticed by now, UniConf is simple. Very simple. And that makes it so interesting.

You can download the source code used in this post (with a small edition and a Makefile) [here].
Remember to have UniConf development libraries and headers installed beforehand - and uniconfd, if you want to check how fetching through the tcp: generator works.

UniConf: introduction


While writing my thesis some time ago, I did a small review of existing configuration systems in use. One of them especially caught my eye - UniConf. The authors once called it the "One True Configuration System", which might sound a bit provocative, but has a seed of truth in it.

In short - UniConf aims to meld many configuration systems together. This means UniConf tries to understand most existing configuration systems, written in various file formats, and exports them in a unified way. UniConf, on its own, does not really define one particular way of storing variables, but provides an API for reading most common configurations (through the use of so-called generators) and adds some features that can be useful when designing/using/implementing a configuration system.

UniConf architecture

In the graph above, temp: cache: unix: ssl: are some of the possible generators that can be used by UniConf. For instance, the temp: moniker makes the variables of the given UniConf tree be saved only in memory, while unix: makes use of UNIX sockets to read the configuration variables from a listening UniConf server (uniconfd - we'll get to that). As another example, an ini: generator can be used to read from and store variables in the standard .ini configuration file format.
UniConf provides a C++ API with respective wrappers for the C language. Designing a simple UniConf application is very easy and rather convenient.

#include <iostream>
#include <wvstreams/uniconfroot.h>

using namespace std;

int main(void)
{
	// Define the root of the configuration.
	// Use an .ini file stored in the current working directory.
	// Since this is our first UniConf tree, we mount it as "/"
	UniConfRoot root("ini:tmp.conf");

	// Reading the value of a variable
	cout << "Value before: " << root["ala"].getme().cstr() << endl;

	// Setting the value of a variable
	root["ala"].setme("is the best");

	// We can also ask for the virtual name and 'pathname' of the variable
	cout << "Path: " << root["ala"].fullkey().cstr() << endl;

	// In this choice of monikers, if we want to save the changes, we
	// need to commit them
	root.commit();

	return 0;
}

Besides the basic get/set variable functionality, UniConf also offers features such as variable on-change notification (using the add_callback() method) or mounting multiple configurations into different mount points of a single UniConf tree. But since this is only an introductory post, we will get to these features later.
But configurations do not only need to be accessed locally. UniConf includes a configuration serving daemon, uniconfd, which can be used either to export variables outside of the local space or to act as a local server, binding configuration systems together and making them available in one consistent form. The uniconfd server can either listen on a given TCP port or use a UNIX socket for communication with clients. Useful, although from what I know some features still seem to be missing.

I am rather fond of UniConf, since the idea of a hybrid system that binds configurations together reminds me of Flatconf. One problem might be the lack of beginner-targeted documentation, since almost the only programming-related help available is in the source code or the doxygen documentation. But I'll try to shed some more light on the programming aspects of this system in my future UniConf posts. Stay tuned.

Haiku optional packages


Another short post. I've seen some people having trouble with properly including development tools in the Haiku image. The process is very simple, but indeed there is not much information about it. All the development tools (like gcc, ld, autotools, perl and more) are packed into so-called Haiku Optional Packages. You can browse the list of available optional software in the build/jam/OptionalPackages file in the Haiku source tree. Some of them can also be installed with the in-system installoptionalpackages application, but that's another story.

As for our dev tools, there are 3 packages available for this purpose: Development, DevelopmentMin and DevelopmentBase, each adding a certain set of tools. To add them to the image at build time, the user-friendly build/jam/UserBuildConfig script can be used. What the configuration file should look like can be seen in the UserBuildConfig.sample or UserBuildConfig.ReadMe files in the same directory - but right now we're only interested in the AddOptionalHaikuImagePackages command. A package set with this command is downloaded from the internet, prepared and included in the image. Easy.

AddOptionalHaikuImagePackages Development ;
AddOptionalHaikuImagePackages DevelopmentBase ;
AddOptionalHaikuImagePackages DevelopmentMin ;

With this, the development tools are added to the Haiku image during the build process. Note the HAIKU_IMAGE_SIZE variable - without increasing the size of the image, the additional packages might not fit.
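A minimal UserBuildConfig fragment could then look something like this (the size value below is just an example, pick what fits your packages):

HAIKU_IMAGE_SIZE = 800 ; # in MB
AddOptionalHaikuImagePackages DevelopmentBase ;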
Now you can experiment with the Haiku API in real-time.

Slow git status response


This is a trick that my colleague Michał Wróbel showed me when I had problems with slow git response on my local repository. Sometimes, strangely, git status becomes really slow, taking even a few minutes to complete - on a big, multi-submodule repository in this case. This can be really annoying. Quoting my colleague:

"Under some (still not fully known) circumstances git starts to work in a strange mode scanning the contents of the files instead of just stat()ing them:"


firmware/code/linux$ strace git status
lstat(".mailmap", {st_mode=S_IFREG|0664, st_size=4021, ...}) = 0
lstat("COPYING", {st_mode=S_IFREG|0664, st_size=18693, ...}) = 0
lstat("CREDITS", {st_mode=S_IFREG|0664, st_size=94027, ...}) = 0


firmware/code/linux$ strace git status
lstat(".mailmap", {st_mode=S_IFREG|0664, st_size=4021, ...}) = 0
open(".mailmap", O_RDONLY)              = 3
read(3, "#\n# This list is used by git-sho"..., 4021) = 4021
close(3)                                = 0
lstat("COPYING", {st_mode=S_IFREG|0664, st_size=18693, ...}) = 0
open("COPYING", O_RDONLY)               = 3
read(3, "\n   NOTE! This copyright does *n"..., 18693) = 18693
close(3)                                = 0
lstat("CREDITS", {st_mode=S_IFREG|0664, st_size=94027, ...}) = 0
open("CREDITS", O_RDONLY)               = 3
mmap(NULL, 94027, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f52ad1fb000
munmap(0x7f52ad1fb000, 94027)           = 0
close(3)                                = 0

The trick that worked with me was doing git checkout <your_branch_here> and git update-index --refresh on the main repository and all its submodules, with an additional git submodule update at the end, just in case.

Kernel sysctl


More basics. The Linux kernel offers an interface for browsing and modifying system parameters, mostly kernel related. This interface is called sysctl. In Linux, sysctl variables are available to the user as normal, editable files through a virtual filesystem browsable in the /proc/sys directory - or through the sysctl application. In today's post I would like to concentrate on how to create sysctl configurations in kernel code.

As with most Linux kernel related topics, the internal mechanisms and definitions vary from version to version. Most of my knowledge comes from the one used in the 2.6.31 series kernels, but I also have some experience with the more recent 2.6.34 series - and there were some changes made somewhere in-between those two releases.

In short - kernel code and modules can export internal variables and data through the sysctl or procfs mechanisms. Usually the proc interface is used for read-only variables and internally structured data, whereas sysctl is typically used for read/write operations on short, non-structured data. Both have a hierarchical tree-like structure, in which directories can be used to organize variables.
sysctl variables are defined in code by ctl_table structures. Each sysctl variable (both file and directory) has its own ctl_table object representing the variable. The ctl_table structures need to be grouped into arrays representing parts of a given level in the directory tree (e.g. the contents of the /proc/sys/dev directory). Such arrays need to be terminated by a NULL ctl_table entry - i.e. a variable with the name (procname) and ID (ctl_name, but only in 2.6.31 and similar) equal to NULL.

/* 2.6.31 */
struct ctl_table {
	int ctl_name;			/* Binary ID, not present in later versions */
	const char *procname;		/* Text ID for /proc/sys, or zero */
	void *data;
	int maxlen;
	mode_t mode;
	struct ctl_table *child;
	struct ctl_table *parent;	/* Automatically set */
	proc_handler *proc_handler;	/* Callback for text formatting */
	ctl_handler *strategy;		/* Callback function for all r/w, not present in later versions */
	void *extra1;
	void *extra2;
};
The procname field defines the name of the variable and ctl_name the ID of the variable in the current directory. The data and maxlen fields can be used by some handling functions, while mode defines the basic access rights to the variable. There are also the extra1 and extra2 fields - reserved for any extra data you may need. The important fields are child and proc_handler. When the child pointer is different from NULL, the variable in question is a directory and proc_handler is ignored. The child pointer then points to another ctl_table array defining the contents of the subdirectory. A directory of a given path can be defined by more than one ctl_table array.
The proc_handler function pointer is the function that is called during I/O operations performed on the sysctl variable.

typedef int proc_handler (struct ctl_table *ctl, int write, struct file * filp,
			  void __user *buffer, size_t *lenp, loff_t *ppos);

There is a set of predefined routines for basic operations on variables that can be used as a proc_handler. These include proc_dostring() for reading/writing a string and proc_dointvec() for reading/writing one or more integers - as well as a few other variants of the latter function. When using these functions, the data and maxlen fields are used: data points to the buffer holding the variable in the kernel, and maxlen gives the length of that buffer.
A kernel programmer can also define his/her own proc_handler function. In this case, the write parameter shows whether the operation was a read (write == 0) or a write (write == 1). The buffer pointer points to the buffer with the data being read/written. lenp is a pointer to the size of the user buffer holding (or to be used for holding) the data, and ppos is the offset from the beginning of the variable's sysctl file during the I/O operation. These two are pointers so that they can be modified during handling.

So, what did change between 2.6.31 and 2.6.34? As noted in the comments, the ctl_name and strategy fields have been removed. I never used the strategy field before, but it seems it was used to optionally initialize and format data before display or storage. The proc_handler functions no longer take the filp parameter either. No big changes, really. At least I didn't notice anything else of interest.
The ctl_name field had indeed been useless for a long time. Most variables used CTL_UNNUMBERED as the ctl_name since they did not care about a unique ID. There were times it was useful, though - for instance, when writing one proc_handler for many sysctl variables, later distinguished by ctl_name - but the 'extra' fields can be used for that now. Or even a strcmp on the procname field.

But how are these variables positioned in the sysctl tree? The register_sysctl_table() function needs to be called with the main ctl_table array. The root of all sysctl variables is /proc/sys. From there, you need to provide the ctl_tables of all directories in the path. For example, if we want a variable accessible at /proc/sys/dev/ala0/name, the ctl_table arrays for dev and ala0 need to be created and linked with each other using the child fields. If no other kernel code has already defined a given directory, it is created in the virtual filesystem.

The sysctl interface is one of the recommended ways of exchanging data between the user and the kernel. Just remember to always use copy_from_user()/copy_to_user() when writing your own proc_handler functions! I must say that I like the idea of how sysctl configurations are created, accessed and exported - reminds me of Flatconf somehow... But more about this in the nearest future.

g++ and C++ - class method definitions and shared objects


While working on some things regarding my master's thesis, I stumbled upon something I did not know about before. I'm not much of a reverse engineer, and I don't really have the time to look into it more closely.

I wanted to create a shared object file using a class of my creation. I wanted to export an already defined object of that class to be accessible through the applications dynamically linking the object. Consider the following example:

#include <iostream>

class ala {
public:
	int test(void) {
		return 1;
	}
};

extern "C" {
	ala test;
}
Those who have had to deal with .so files on Linux systems already know that there is a difference in the way symbols are stored between C and C++. C++ names are mangled to support function overloading, so to be able to access a variable correctly we need to use the extern "C" qualifier.
Our example should work perfectly fine. We can now dlopen() the object file and dlsym() the symbol test. Everything is as it should be. Virtual functions are also fine, as long as there are no loose ends on either side. What should be remembered: you cannot create class objects from a shared object using the default new operator. Creation of new objects needs to be done in shared object code, e.g. using wrapper functions. But this you probably already know from other articles on the internet.

I don't use C++ too much. What I did not know is that there is a slight difference between defining a method inside the class body and defining it outside, leaving just the declaration inside. In the first case, when the g++ compiler doesn't see the method being used anywhere, the method doesn't seem to be included in the binary at all.

Interesting. But maybe it's just a coincidence? I would have to check gcc source code to be sure.

Kernel writing to file


Today a quick post about something obvious - file reading/writing in kernel space. First of all, I'm obliged to inform you that this practice is very bad and shouldn't be done for purposes other than e.g. temporary debugging. You can find out why it's bad and how to do it properly here.

But there are times when you want to dump something (e.g. binary data) to a file on a filesystem fast, just once and just for debugging.

struct file *file;
loff_t pos = 0;
mm_segment_t old_fs = get_fs();

/* allow vfs_write() to accept a buffer from kernel address space */
set_fs(KERNEL_DS);

file = filp_open("/tmp/dump", O_WRONLY | O_CREAT, 0644);
if (!IS_ERR(file)) {
	vfs_write(file, mem_addr, mem_length, &pos);
	filp_close(file, NULL);
}

set_fs(old_fs);

Reading can be done analogously, using a different flag (O_RDONLY or O_RDWR) and vfs_read() instead.



Today will be a small advertisement-like post. Cross compilation has always been a bothersome process. Right now it's not as bad as it once was - yet it can still be time consuming, especially if you want to create your own GNU/Linux system for a different platform. For proper cross-compilation, we first need a specific toolchain that is able to generate code for our architecture of interest (we can use crosstool or OpenWrt here, for instance). Usually this can be troublesome as well, even more so if the platform we're interested in is relatively unpopular - but we will skip this case for now and return to it in a later post.

There are many ways of building applications for non-local architectures. With a ready toolchain, it's really just a matter of using its binaries instead of the native ones. OpenWrt, for instance, uses its so-called buildroot - a cross compilation system utilizing Makefiles. Most of the cross-magic is done there thanks to the power of configure and Makefile scripts, allowing different binaries from the toolchain to be used instead of the local ones. Not bad.

In one of my first posts I wrote that I prefer Fakebox - a cross compilation toolkit developed some time ago by ASN (advertisement!). Fakebox, similarly to Scratchbox, attempts to emulate a Linux machine on a Linux machine. I never used Scratchbox before, but from what I heard from my colleagues, even though it really does everything you need and does it really well, it can be a troublesome and complex beast. That is why they created Fakebox, a much simpler toolkit for the same purpose. It's really interesting how such a simple thing can work well for most uses.

Fakebox is nothing more than a batch of shell scripts and configuration files. To get it working, all you need is the toolchain and qemu installed. How does it work? The Fakebox website actually explains it nicely. What Fakebox does is:

  • change $PATH so shell chooses development tools from Fakebox wrappers instead of /usr/bin, etc.
  • most of these wrappers are symbolic links to one simple shell script which basically just adds toolchain prefix
  • replaces uname with trivial shell script which returns contents of $FB_UNAME
  • registers a binfmt_misc wrapper so binaries compiled for the target CPU architecture are executed via qemu emulation, in the target root filesystem

After this is done, we can run and compile programs for the specified architecture as we please. Since we are using binfmt_misc, all executables are run using qemu emulation (e.g. qemu-mips or qemu-arm), so we can use them later on in the build process. Fakebox has its problems, but for what it is, it's still sufficient.

For instance, if we would like to set up an environment for a MIPS architecture in Fakebox (latest version from the git repositories), our fakebox.conf could look like this:


FB_UNAME="Linux amatsu 2.6.31 #1 Mon May 7 20:21:51 CEST 2007 mips unknown"
FB_FTPAGENT="wget --continue --passive-ftp --tries=3 --waitretry=3 --timeout=10"


After putting this configuration file in the mips/ directory, we can run our new environment by typing "fakebox mips/". Fakebox then sets the $PATH variable to include our custom paths (FB_PATH), our toolchain path, our fakebox wrappers and so on, registers binfmt_misc for our architecture (if supported) and runs a new shell. Besides emulation, Fakebox also offers an integrated simple package manager with building capabilities (pkgtools) with overall functionality similar to the ones seen in OpenWrt (with the exception of being implemented in bash instead).
This is a very nice feature of Fakebox as well. You can organize your applications/libraries as packages, and download and build them easily when needed. After a package gets built, you can install/uninstall it on the target root file system with one command. It saves some time and makes development cleaner. Since it's an important feature, I will return to it in the next post about Fakebox. Stay tuned.

In-kernel module unloading and the usermode helper


It seems that safely unloading a Linux kernel module from kernel space is not a very straightforward task, but once you see how it's done, it seems awfully trivial - and strange. Those of you with some kernel programming experience probably know about the request_module() function and its non-blocking version request_module_nowait(), both declared in include/linux/kmod.h. Just provide the name of the module you want to load as the parameter and the function takes care of the rest. Sadly, there is no such function for removing a loaded module.

There are many tools that could be used for this purpose in the Linux kernel, but none of them are explicitly exported for us to use.

But if we look into the __request_module() function internals (the base for both request_module functions) in kernel/kmod.c, we can see that it actually doesn't do anything with the module at all. All it does is run modprobe (yes, the one from userspace) to do the module loading instead - at least in versions 2.6.28 and .31. It does this using a usermode helper, through the call_usermodehelper() function. Its signature is similar to that of the user-space execve() function - requiring the binary path, a NULL-terminated string array of arguments, a NULL-terminated string array of environment variables and a flag indicating the wait policy. The wait policy defines whether the caller should wait for the process to finish (UMH_WAIT_PROC), just wait for the exec call to finish (UMH_WAIT_EXEC) or not wait at all (UMH_NO_WAIT).

Now that we know how request_module() works, we can do the same thing when we need to remove a module by name. We can either do a modprobe -r or an rmmod call. This way, we're not unloading modules entirely by force, and it's safe for the operating system. This can be done, for instance, like this:

static char *argv[] = { "/sbin/rmmod", "my_module", NULL }, 
            *env[] = { "HOME=/", "PATH=/sbin:/usr/sbin:/bin:/usr/bin", NULL };
if (call_usermodehelper(argv[0], argv, env, UMH_WAIT_PROC))
	PRINTD(KERN_WARNING "Failed unloading the module\n");

Just remember that this alone cannot actually tell us whether the module has been unloaded or not - that requires additional checking on our side later on.
You can of course use usermode helpers to call any other user mode applications when needed. Just use them with care. It's best if the kernel doesn't ask for help from user space. Kernel is strong.

Mikrotik RouterBoard 433AH


Some time ago, I was given the opportunity to work on the Mikrotik RouterBoard 433AH platform. The RB433AH is mostly the same as the RB433, just a bit more powerful. It has the standard Atheros AR7130 chipset clocked at 640 MHz (some weaker versions come with a 300 MHz processor), 128 MB of RAM, three 10/100 Ethernet ports, three MiniPCI slots and a microSD card reader. A more detailed hardware specification can be found on the RouterBoard page here. Pretty interesting piece of hardware. One thing we need to prepare for is the serial port, since this platform requires a null modem cable for the connection.

RB433AH board photo

By default, it comes with a pre-installed RouterBOOT bootloader and a RouterOS Level5 system. The best thing is that the RB433AH chipset is supported by OpenWRT out of the box. Building the system for this platform is quite straightforward, since it only requires choosing the AR71xx target system during configuration - but the OpenWRT team provided some additional information on their old wiki page here if necessary. Thanks to this, we have an open path for lightweight hacking.

RouterBOOT uses the 'kernel' partition to load the kernel during system boot from NAND flash memory. The partition needs to contain a yaffs2 filesystem with the kernel as an ELF executable of the same name, placed in the root of the partition's directory tree.

The toolchain built during OpenWRT compilation can later be used for building your own system using any cross-compilation toolkit available, like Scratchbox or Fakebox (or even no toolkit at all). My choice is, of course, Fakebox - as to why, I will get back to that in a different post. I also had to hack the toolchain a little, since I needed a full libiberty library with all the extra object files. But having done this, it all just compiles. Building the kernel requires us to apply the AR71xx-specific patches from OpenWRT first. Now - the interesting part. Based on the OpenWRT support patches, the flash partition table is defined statically at compile time in drivers/mtd/nand/rb4xx_nand.c. It cannot be modified with an mtdparts cmdline parameter for the kernel. The code is quite obvious. Why not mtdparts? Maybe because the RB4xx boards are MIPS based and use the RouterBOOT bootloader, which seems to have no means of passing additional command line arguments? Not sure.

static struct mtd_partition rb4xx_nand_partitions[] = {
	{
		.name	= "booter",
		.offset	= 0,
		.size	= (256 * 1024),
		.mask_flags = MTD_WRITEABLE,
	}, {
		.name	= "kernel",
		.offset	= (256 * 1024),
		.size	= (4 * 1024 * 1024) - (256 * 1024),
	}, {
		.name	= "rootfs",
	},
};

After examining the kernel output we can notice that the RB433 has some interesting cmdline arguments passed to it during boot time, such as board, boot, HZ, console, etc. - parameters that are not added to CONFIG_CMDLINE in the .config file. How do those parameters get passed to the kernel?
This is another important thing worth noting about the RB4xx boards - the Atheros SoC has an internal PROM with some board-specific data included. The MIPS-configured kernel reads the PROM during boot time and appends the parameters. All the code regarding this procedure can be found in arch/mips/ar71xx/prom.c.

With all that, the Mikrotik RouterBoard 433 platform is a relatively powerful device for experimenting. Also worth noting is the fact that the smaller RB411 uses the very same chipset, with almost no notable differences (besides lacking a few features). It can run the same firmware with no problems. Killing two birds with one stone.

EDIT: After more experiments and gaining experience, I know now that not everything is as I thought during writing this post.

A welcoming post


With this first post, I would like to welcome everyone to my personal web log. I intend to post here about my experiences with technologies that I encounter at work and during private research. Since my interests include embedded systems, kernel hacking, GNU/Linux programming and the Haiku operating system, you can expect a bit of those in the near future.
Since I am also an amateur artist, you can find some of my artworks in the Art section.

Just a reminder - this page is still under construction. I want to keep it as simple as possible, but there still is much to do. Also, I am not a web designer - keep that in mind when browsing through.