Working on QML Book

Do you remember QML Book? It started as a project between Jürgen Bocklage-Ryannel and me, where we tried to fix the problem that there was no QML book out there.

Back in the Qt 5.2 days, we spent about a year writing. Unfortunately, the project has mainly been sitting idle since then. I’ve poked at issues every now and then, and Jürgen has done various fixes as well.

Thanks to The Qt Company, this is changing. This autumn, they are sponsoring me to work on the project. The current plan is to add a chapter on Qt Quick Controls 2, and to update the entire contents to Qt 5.12 and Qt Creator 4.8. By doing so, many of the remaining bug reports will be resolved.

Other things in the backlog are getting the CI system back into shape and having a native speaker edit the language. All in all, this will result in an up-to-date book on QML. If you want to help out, just reach out to me or send me your pull requests. All help is welcome!

QML Weather

I recently took some time to develop a photo frame style home automation control panel. The idea is to control some common tasks of my home assistant setup from a panel instead of having to rely on the phone. To hide its purpose, the panel currently acts as a photo frame until it is touched.

The build is based on a Raspberry Pi 2 with the official touch screen attached and a USB wifi dongle. Nothing fancy, but still good enough.

One of the features that I wanted was a weather forecast, so I decided to use Yr’s XML weather data as the basis for this. The result is the YrWeatherModel QML item.

The weather forecast overlay.

The presentation side of things is the fairly straightforward piece of QML shown below, resulting in the overlay shown above.

Row {
    anchors.bottom: dateText.bottom
    anchors.right: parent.right
    anchors.rightMargin: 40

    spacing: 20
    Repeater {
        delegate: Column {
            spacing: 2
            Text {
                anchors.horizontalCenter: parent.horizontalCenter
                color: "white"
                font.pixelSize: 16
                font.bold: true
                text: {
                    switch (period) {
                    case 0:
                        "00 - 06"
                        break;
                    case 1:
                        "06 - 12"
                        break;
                    case 2:
                        "12 - 18"
                        break;
                    case 3:
                    default:
                        "18 - 00"
                        break;
                    }
                }
            }
            Image {
                anchors.horizontalCenter: parent.horizontalCenter
                source: symbolSource
            }
            Text {
                anchors.horizontalCenter: parent.horizontalCenter
                color: "white"
                font.pixelSize: 16
                font.bold: true
                text: precipitation + "mm"
            }
            Text {
                anchors.horizontalCenter: parent.horizontalCenter
                color: "white"
                font.pixelSize: 16
                font.bold: true
                text: temperature + "°C"
            }
        }

        model: weatherModel.model
    }
}

This is followed by the model itself, and a small notice crediting the data source.

YrWeatherModel {
    id: weatherModel
    place: "Sweden/V%C3%A4stra_G%C3%B6taland/Alings%C3%A5s"
}

Text {
    anchors.bottom: parent.bottom
    anchors.right: parent.right
    anchors.bottomMargin: 5
    anchors.rightMargin: 40
    text: weatherModel.dataSourceNotice
    color: "white"
    font.pixelSize: 16
    font.italic: true
}

Diving into the model itself, we hit the interesting parts. The structure looks like this:

Item {
    id: root

    property alias model: weatherModel
    property int refreshHour: 1     // How often is the model refreshed (in hours)
    property int dataPoints: 6      // How many data points (max) are expected (in 6h periods)
    property string place           // Place, URL encoded and according to Yr web site, e.g. Sweden/V%C3%A4stra_G%C3%B6taland/Alings%C3%A5s
    readonly property string dataSourceNotice: "Data from MET Norway"

    ListModel {
        id: weatherModel
    }

    Timer {
        interval: 3600000 * root.refreshHour
        running: true
        repeat: true
        onTriggered: {
            _innerModel.reload();
        }
    }

    XmlListModel {
        id: _innerModel

        query: "/weatherdata/forecast/tabular/time"

        source: (place.length === 0) ? "" : ("https://www.yr.no/place/" + root.place + "/forecast.xml")

        XmlRole { name: "period"; query: "string(@period)" }
        XmlRole { name: "symbol"; query: "symbol/string(@number)"; }
        XmlRole { name: "temperature"; query: "temperature/string(@value)"; }
        XmlRole { name: "precipitation"; query: "precipitation/string(@value)"; }

        onStatusChanged: {
            // ...
        }
    }
}

As you can see, the model consists of an inner model of the type XmlListModel. This model is refreshed by a timer (don’t refresh too often – you will most likely be auto-banned by Yr). At the top, there is also a ListModel that is the actual model used by the user interface.

The reason for the ListModel to exist is that I wanted to be able to limit how many data points I show. Each data point represents a six hour window, and I’d like 6 of them, i.e. one and a half days of forecasting.

The onStatusChanged handler in the XmlListModel takes care of this in the following for loop:

onStatusChanged: {
    if (status === XmlListModel.Ready)
    {
        for (var i = 0; i < root.dataPoints && i < count; ++i)
        {
            var symbol = get(i).symbol;
            var period = parseInt(get(i).period);
            var is_night = 0;

            if (period === 3 || period === 0)
                is_night = 1;

            weatherModel.set(i, {
                "period": period,
                "symbol": symbol,
                "symbolSource": "https://api.met.no/weatherapi/weathericon/1.1/?symbol=" + symbol + "&is_night=" + is_night + "&content_type=image/png",
                "temperature": get(i).temperature,
                "precipitation": get(i).precipitation
            });
        }
    }
    else if (status === XmlListModel.Error)
    {
        console.warn("Weather error")
        console.warn(errorString());
    }
}

As you can tell, this code has *very* limited error handling. It is almost as if it has been designed to break, but it works. The code also shows how convenient it is to connect to online services via QML to build simple, reusable models that can be turned into beautiful user interfaces.

Next time I have some free time, I’ll look at interfacing with the public transport APIs. Then I will have to deal with JSON data and make explicit XMLHttpRequest calls.

Five days left

I usually joke that the last week before foss-north is the worst – everything is done, all that is left is the stress.

This year, we have the broadest program yet: 25 speakers talking about everything from community policies, GPU isolation, blockchain, historical KDE software, retro computers, IoT, Android, SailfishOS, bug triaging, crowd funding, software updates, Yocto, home automation and design to sub-atomic particles.

You can still get a ticket (and make sure to bring a friend) at foss-north. Welcome!

foss-north – the count down

At this year’s foss-north event, the FSFE will revive the Nordic Free Software Award and the conference will host the prize ceremony. Get your tickets for a great opportunity to meet with the FOSS community, learn new things and visit Gothenburg.

There are just 9 more days left of the Call for Papers. With the help of our great sponsors, we have the opportunity to transport you to our conference if you are selected to speak. Make sure to send in your submission before March 11 and you are in the race.

foss-north – the count down

We are approaching the count down to foss-north 2018 – at least from an organizer perspective. This year we will be at Chalmers Conference Centre, in the centre of Gothenburg – the world’s most sociable, friendliest city. So, save the date – April 23 – and make sure to drop by.

The reason why it feels like the count down has started is that there are just 10 more days left of the Call for Papers. With the help of our great sponsors, we have the opportunity to transport you to our conference if you are selected to speak. Make sure to send in your submission before March 11 and you are in the race.

When moving to Chalmers we ended up with a larger venue than last year so make sure to get your ticket – and bring your friends. The saying “the more the merrier” definitely applies to FOSS conferences!

Building Qt on Debian

I recently followed the advice of @sehurlburt to offer help to other developers. As I work with Qt and embedded Linux on a daily basis, I offered to help. (You should do the same!)

As it is easy to run out of words on Twitter, here comes a slightly more lengthy explanation of how I build the latest and greatest Qt for my Debian machine. Notice that there are easier ways to get Qt – you can install it from packages, or use the installer provided by The Qt Company. But if you want to build it yourself for whatever reason, this is how I do it.

The first step is to get the build dependencies onto your system. This might feel tricky, but apt-get can help you here. To get the dependencies for Qt 5, simply run sudo apt-get build-dep libqt5core5a and you are set.

The next step is to get the Qt source tarball. You get it by going to https://www.qt.io/download/, selecting the open source version (unless you hold a commercial license) and then clicking the tiny View All Downloads link under the large Your download section. There you can find source packages for both Qt and Qt Creator.

Having downloaded and extracted the Qt tarball, enter the directory and configure the build. I usually do something like ./configure -prefix /home/e8johan/work/qt/5.9.0/inst -nomake examples -nomake tests. That builds everything but skips examples and tests (you can build these later if you want to). The prefix should point to someplace in your home directory. The prefix handling has had some peculiar behaviour earlier, so I make sure not to have a trailing slash after the path. When the configuration has run, you can look at the config.summary file (or scroll up a bit in the console output) to see a nice summary of what you are about to build. If this list looks odd, you need to look into the dependencies manually. Once you are happy, simply run make. If you want to speed things up, use the -j option with the highest number you dare (usually the number of CPU cores plus one) to parallelize the build.

Once the build is done (this takes a lot of time – expect at least 45 minutes on a decent machine), you need to install Qt. Run make install to do so. As you are installing Qt to someplace in your home directory, you do not need to use sudo.
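
To sum up, the whole build boils down to something like the sketch below. The tarball name, the prefix and the -j value are just examples – adjust them to the version you downloaded and the machine you are on.

sudo apt-get build-dep libqt5core5a                  # build dependencies for Qt 5

tar xf qt-everywhere-opensource-src-5.9.0.tar.xz     # example tarball name
cd qt-everywhere-opensource-src-5.9.0

./configure -prefix /home/e8johan/work/qt/5.9.0/inst -nomake examples -nomake tests
make -j5                                             # e.g. number of cores plus one
make install                                         # no sudo needed for a prefix in $HOME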

The entry point to all of Qt is the qmake tool produced by your build (i.e. prefix/bin/qmake). If you run qmake -query you can see that it knows its version and installation point. This is why you cannot move a Qt installation around to random locations without hacking qmake. I tend to create a link (using ln -s) from somewhere in my path to this binary, so that I can run qmake-5.9.0 or qmake-5.6.1 or whatnot to invoke a specific qmake version (caveat: when changing the Qt version in a build tree, run the versioned qmake with -recursive from the project root to regenerate all Makefiles for the correct Qt version, otherwise you will get very “interesting” results).
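
For example, assuming ~/bin is somewhere in your PATH (just an example location), the link and the query look like this:

ln -s /home/e8johan/work/qt/5.9.0/inst/bin/qmake ~/bin/qmake-5.9.0
qmake-5.9.0 -query        # prints the version and installation paths of this build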

Armed with this knowledge, we can go ahead and build Qt Creator. It should be a matter of extracting the tarball, running the appropriate qmake in the root of the extracted code, followed by make. Qt Creator does not have to be installed; instead, just create a link to the qtcreator binary in the bin/ sub directory.
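
In practice, that means something along these lines – treat the directory name as a placeholder for whatever your Qt Creator tarball extracts to:

cd qt-creator-opensource-src-4.3.1                   # placeholder directory name
qmake-5.9.0                                          # generate Makefiles with the Qt you just built
make -j5
ln -s $PWD/bin/qtcreator ~/bin/qtcreator             # no make install needed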

Running Qt Creator, you can add Qt builds under Tools -> Options… -> Build & Run. Here, add a version by pointing at its qmake binary, e.g. the qmake-5.9.0 link you just created. Then it is just a matter of picking a Qt version for your project and building away.

Disclaimer! This is how I do things, but it might not be the recommended or even the right way to do it.

foss-north speaker line-up

I am extremely pleased to have confirmed the entire speaker line-up for foss-north 2017. This will be a really good year!

Trying to put together something like this is really hard – you want the best speakers, but you also want a mix of local and international, various technologies, various viewpoints and much, much more. For 2017 we will have open hardware and open software, KDE and Gnome, web and embedded, tech talks and processes, and so on.

The foss north conference is a great excuse to come visit Gothenburg in the spring. Apparently, Sweden’s wildest city!

Just three days left…

The call for papers for foss-north 2017 ends on Sunday. That means that you only have three days to…

  • … get a chance to visit Gothenburg, Sweden, the most sociable city in the world!
  • … speak in front of a great audience of 220 people (if we sell all the tickets – get yours here).
  • … listen to other awesome speakers. Right now we’ve confirmed Lydia Pintscher, Lennart Poettering, Knut Yrvin and Jos Poortvliet. (There will be more awesome speakers announced when the call for papers is over).

So what are you waiting for – submit your talk proposal and join us at foss-north 2017!

foss-north 2017: Call for Papers

The Call for Papers for foss-north is open for another week (until the 12th). This gives you an opportunity to speak in front of a great crowd. Looking at the results from last year’s questionnaire, more than 90% are users of open source software and more than 50% are contributors. One thing that surprised me is that more people actually contribute as part of their profession than as hobbyists. Looking at the professional vs hobbyist proportions, 45% of the visitors stated that they had their ticket paid by their employer/school, while 42% paid for it out of their own pocket.

The topic of the conference is free and open source – so anything related is most welcome. We do not even limit ourselves to software – hardware, patents, community and much more are also appreciated topics. Last year we had speakers talking about timing synchronization over vast networks, patent issues, working as a designer, linguistics and much more.

As always with these things, crowd dynamics means that I, as an organizer, have to work on my stress management abilities. Almost 30% of the tickets to last year’s event were sold in the last two days before the event. The same goes for the Call for Papers – nobody registers a talk in good time before the deadline. So if you want to help an ageing developer keep his pulse under control – submit your talk proposal now! ;-)

kdenlive, audacity and lessons in audio sync

During the last foss-gbg meeting I tried filming the entire event. The idea is to produce videos of each talk and publish them on YouTube. Since I’m lazy, I simply put a camera on a tripod and recorded the whole event – some 3 hours, 16 minutes and a few seconds. A few seconds that would cause me quite some pain, it turns out.

It all started with me realizing that I could hear the humming sound of the AC system in the video. No problem – simply use ffmpeg to separate the audio from the video and use the noise reduction filter in Audacity. However, when putting it all together I noticed an audio sync drift (after 6h+ of rendering videos, that is).
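
For reference, the separation step can look something like this – the file names are just placeholders, and decoding to WAV keeps things simple for Audacity:

ffmpeg -i recording.mp4 -vn audio.wav    # -vn drops the video stream, the audio is decoded to WAV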

ffprobe told me that the video is 03:16:07.58 long, while the extracted audio is 03:16:04.03. This means that the video of the last speaker drifts more than 3s – unwatchable. So, googling for a solution, I realized that I would have to stretch the audio to the same duration as the video. Audacity has a tempo effect to do this, but I could not get the UI to accept my very small adjustment in tempo (or my insane number of seconds in the clip). Instead, I had to turn to ffmpeg and the atempo filter.

ffmpeg -i filtered.ac3 -filter:a "atempo=0.9996983236995207" -vn slower.ac3

This resulted in an audio clip of the correct length. (By the way, the factor is the audio duration divided by the video duration – 11764.03s / 11767.58s in this case.)
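
To compute that factor you need both durations in seconds, and ffprobe can print them directly (same placeholder file names as above):

ffprobe -v error -show_entries format=duration -of csv=p=0 recording.mp4    # video duration in seconds
ffprobe -v error -show_entries format=duration -of csv=p=0 audio.wav        # audio duration in seconds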

Back to kdenlive – I imported the video clip, put it on the timeline, separated the audio and video (just a right click away), ungrouped them, removed the audio, added the filtered, slowed-down audio, grouped it with the video, and everything seems nice. In about 1h43m I will know, when the first clip has been properly rendered :-)