An <adjective of your choice> day for freedom

I don’t want to criticize the outcome of the UK’s EU referendum — first of all I’m not wiser than everyone else, and second, in a democracy you always have the right to decide either way. Freedom absolutely includes the freedom to hurt yourself and make bad decisions (note, I’m explicitly not saying — or even knowing! — which is which!).

What concerns me, though, is how the course of political debates at large, and of this referendum in particular, has been going. Real political debates and consensus finding are the essence of democracy, but in the US they essentially stopped many years ago: the two major parties just talk/swear about each other, but no longer with each other, and every little proposal gets ridiculously blown up into a crusade. The EU is of course not exempt from that in general, although for most day-to-day political work it’s much more moderate, as most member states have proportional instead of majority voting, which enforces coalitions and thus compromises on an institutional level. But the very same bad style of dispute immediately came to the surface with the Brexit referendum — the arguments have been highly emotional, misleading, populist, and often outright lies, like £50M a day (it’s just a third of that, and the ROI is enormous!), or the visa issue for Turkey. This causes voting to be based on stirred emotions, false information, whoever shouts the loudest, and which politician of the day you really want to give a slap in the face, instead of voting rationally on the actual matter at hand and what the best long-term path is.

But we have a saying in Germany: “Nichts wird so heiß gegessen wie es gekocht wird”, which translates as “Nothing gets eaten as hot as it gets cooked”. In the end, the EU treaties are all just paper, and as long as enough people agree, the rules have been, and will be, bent/ignored/adjusted. And dear UK, you of all people should know this ☺ (SCNR). So while today emotions are high, bank charts look crazy, some colleagues are worrying about their employment in the UK, etc., there’s nothing more reliable than human nature — all of this will eventually be watered down, procrastinated, and re-negotiated during the next two (haha, maybe 10) years.

If this has taught us anything though: this looks like yet another example of a bad application of direct democracy. In my opinion representative democracy is the better structure for such utterly complex and rather abstract topics that we can’t in good faith expect the general populace to understand. This isn’t meant to sound derogatory — it’s just a consequence of a highly developed world with an extreme degree of division of labour. You don’t propose (I hope!) a referendum about how to build a bridge, airplane turbine, pacemaker, or OS kernel; we educate, train, and pay specialists for that. But for the exact same reason we have professional politicians who have the time to think about/negotiate/understand complex issues like EU treaties, and what their benefits and costs are. That said, direct democracy certainly has its place for issues on which you can expect the general populace to have a qualified opinion: Should we rather build a highway or 10 kindergartens? Do you want both for 3% more taxes? Should smoking be prohibited in public places? So the tricky question is how to tell these apart and who decides that.


autopkgtest 4.0: Simplified CLI, deprecating “adt”

Historically, the “adt-run” command line has allowed multiple tests; as a consequence, arguments like --binary or --override-control were position dependent, which confused users a lot (#795274, #785068, LP #1453509). On the other hand I don’t know anyone or any CI system which actually makes use of the “multiple tests on a single command line” feature.

The command line also was a bit confusing in other ways, like the explicit --built-tree vs. --unbuilt-tree and the magic / vs. // suffixes, or option vs. positional arguments to specify tests.

The other long-standing confusion is the pervasive “adt” acronym, which is still from the very early times when “autopkgtest” was called “autodebtest” (this was changed one month after autodebtest’s inception, in 2006!).

Thus in some recent night/weekend hack sessions I’ve worked on a new command line interface and consistent naming. This is now available in autopkgtest 4.0 in Debian unstable and Ubuntu Yakkety. You can download and use the deb package on Debian jessie and Ubuntu ≥ 14.04 LTS as well. (I will provide official backports after the first bug fix release after this got some field testing.)

New “autopkgtest” command

The adt-run program is now superseded by autopkgtest:

  • It accepts only exactly one tested source package, and gives a proper error if none or more than one (often unintended) is given. Binaries to be tested, --override-control, etc. can now be specified in any order, making the arguments position independent. So you can now do things like:
    autopkgtest *.dsc *.deb [...]

    Before, *.deb only applied to the following test.

  • The explicit --source, --click-source etc. options are gone; the type of tested source/binary packages, including built vs. unbuilt tree, is detected automatically. Tests are now only specified with positional arguments, without the need (or possibility) to explicitly specify their type. The one exception is --installed-click com.example.myapp, as possible names are the same as apt source package names.
    # Old:
    adt-run --unbuilt-tree pkgs/foo-2 [...]
    # or equivalently:
    adt-run pkgs/foo-2// [...]
    # New:
    autopkgtest pkgs/foo-2
    # Old:
    adt-run --git-source [...]
    # New:
    autopkgtest [...]
  • The virtualization server is now separated with a double instead of a triple dash, as the former is standard Unix syntax.
  • It defaults to the current directory if that is a Debian source package. This makes the command line particularly simple for the common case of wanting to run tests in the package you are just changing:
    autopkgtest -- schroot sid

    Assuming the current directory is an unbuilt Debian package, this will build the package, and run the tests in ./debian/tests against the built binaries.

  • The virtualization server must be specified with its “short” name only, e. g. “ssh” instead of “adt-virt-ssh”. They also don’t get installed into $PATH any more, as it’s hardly useful to call them directly.

README.running-tests got updated to the new CLI; as usual, you can also read the HTML online.

The old adt-run CLI is still available with unchanged behaviour, so it is safe to upgrade existing CI systems to that version.

Image build tools

All adt-build* tools got renamed to autopkgtest-build*, and now build images prefixed with “autopkgtest” instead of “adt”. For example, autopkgtest-build-lxc ubuntu xenial now produces an autopkgtest-xenial container instead of adt-xenial.

In order not to break existing CI systems, the new autopkgtest package contains symlinks to the old adt-build* commands; when invoked through those, the tools still produce images with the old “adt-” prefix.
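
As a quick illustration of both behaviours (container names exactly as described above):

  autopkgtest-build-lxc ubuntu xenial   # builds an autopkgtest-xenial container
  adt-build-lxc ubuntu xenial           # via the compatibility symlink, still builds adt-xenial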

Environment variables in tests

Finally, there is a set of environment variables that autopkgtest exports for use in tests and image customization tools; these got renamed from ADT_* to AUTOPKGTEST_*.


As these are being used in existing tests and tools, autopkgtest also exports/checks those under their old ADT_* name. So tests can be converted gradually over time (this might take several years).
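
For example, a test that wants to work on both old and new runners during the transition could fall back like this (assuming ADT_ARTIFACTS/AUTOPKGTEST_ARTIFACTS is among the renamed variables; the local variable name and file are arbitrary):

  # prefer the new variable name, fall back to the old one on older runners
  ARTIFACTS="${AUTOPKGTEST_ARTIFACTS:-$ADT_ARTIFACTS}"
  cp test-output.log "$ARTIFACTS/"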


As usual, if you find a bug or have a suggestion how to improve the CLI, please file a bug in Debian or in Launchpad. The new CLI is recent enough that we still have some liberty to change it.

Happy testing!


Results from proposed-migration virtual sprint

This week from Tuesday to Thursday four Canonical Foundations team members held a virtual sprint about the proposed-migration infrastructure. It’s been a loooong three days and nightshifts, but it was absolutely worth it. Thanks to Brian, Barry, and Robert for your great work!

I started the sprint on Tuesday with a presentation (slides) about the design and some details about the involved components, and showed how to deploy the whole thing locally in juju-local. I also prepared a handful of bite-size improvements which were good finger-exercises for getting familiar with the infrastructure and testing changes. I’m happy to report that all of those got implemented and are running in production!

The big piece of work which we all collaborated on was providing a web-based test retry for all Ubuntu developers. Right now this is limited to a handful of Canonical employees, but we want Ubuntu developers to be able to retry autopkgtest regressions (which stop their package from landing in Ubuntu) by themselves. I don’t know the first thing about web applications and OpenID, so I’m really glad that Barry and Robert came up with a “hello world” kind of Flask webapp which uses Ubuntu SSO authentication to verify that the requester is an Ubuntu Developer. I implemented the input variable validation and sending the actual test requests over AMQP.

Now we have a nice autopkgtest-retrier git with the required functionality and 100% (yes, complete!) test coverage. With that, requesting tests in a local deployment works! So what’s left to do for me now is to turn this into a CGI script, configure Apache for it, enable SSL on it, and update the charms to set this all up automatically. So in these three days this moved from “ugh, I don’t know where to start” to “should land next week”!

We are going to have similar sprints for Brian’s error tracker, Robert’s CI train, and Barry’s system-image builder in the next weeks. Let’s increase all those bus factors from the current “1” to at least “4” ☺ . Looking forward to these!


What’s new in autopkgtest: LXD, MaaS, apt pinning, and more

The last two major autopkgtest releases (3.18 from November, and 3.19 fresh from yesterday) bring some new features that are worth spreading.

New LXD virtualization backend

3.19 debuts the new adt-virt-lxd virtualization backend. In case you missed it, LXD is an API/CLI layer on top of LXC which introduces proper image management, lets you seamlessly use images and containers on remote locations while intelligently caching them locally, automatically configures performant storage backends like zfs or btrfs, and just generally feels much cleaner and simpler to use than “classic” LXC.

Setting it up is not complicated at all. Install the lxd package (possibly from the backports PPA if you are on 14.04 LTS), and add your user to the lxd group. Then you can add the standard LXD image server with

  lxc remote add lco

and use the image to run e. g. the libpng test from the archive:

  adt-run libpng --- lxd lco:ubuntu/trusty/i386
  adt-run libpng --- lxd lco:debian/sid/amd64

The adt-virt-lxd.1 manpage explains this in more detail, including how to use this to run tests in a container on a remote host (how cool is that!), and how to build local images with the usual autopkgtest customizations/optimizations using adt-build-lxd.

I have btrfs running on my laptop, and LXD/autopkgtest automatically use that, so the performance really rocks. Kudos to Stéphane, Serge, Tycho, and the other LXD authors!

The motivation for writing this was to make it possible to move our armhf testing into the cloud (which for $REASONS requires remote containers), but I now have a feeling that this will soon completely replace the existing adt-virt-lxc virt backend, as it’s much nicer to use.

It is covered by the same regression tests as the LXC runner, and from the perspective of the package tests that you run in it, it should behave very similarly to LXC. The one problem I’m aware of is that autopkgtest-reboot-prepare is broken, but hardly anything is using that yet. This is a bit complicated to fix, but I expect it will be done in the next few weeks.

MaaS setup script

While most tests are not particularly sensitive about which kind of hardware/platform they run on, low-level software like the Linux kernel, GL libraries, drivers, or Mir very much are. There is a plan for extending our automatic tests to real hardware for these packages, and being able to run autopkgtests on real iron is one important piece of that puzzle.

MaaS (Metal as a Service) provides just that — it manages a set of machines and provides an API for installing, talking to, and releasing them. The new maas autopkgtest ssh setup script (for the adt-virt-ssh backend) brings together autopkgtest and real hardware. Once you have a MaaS setup, get your API key from the web UI, then you can run a test like this:

  adt-run libpng --- ssh -s maas -- \
     --acquire "arch=amd64 tags=touchscreen" -r wily \
     http://my.maas.server/MAAS 123DEADBEEF:APIkey

The required arguments are the MaaS URL and the API key. Without any further options you will get any available machine installed with the default release. But usually you want to select a particular one by architecture and/or tags, and install a particular distro release, which you can do with the -r/--release and --acquire options.

Note that this is not wired into Ubuntu’s production CI environment, but it will be.

Selectively using packages from -proposed

Up until a few weeks ago, autopkgtest runs in the CI environment always saw/used the entirety of -proposed. This often led to lockups where an application foo and one of its dependencies libbar got a new version in -proposed at the same time, and on test regressions it was not clear at all whose fault it was. As a result, perfectly good packages were often stuck in -proposed for a long time, and a lot of manual investigation of root causes was needed.


These days we are using a more fine-grained approach: A test run is now specific for a “trigger”, that is, the new package in -proposed (e. g. a new version of libbar) that caused the test (e. g. for “foo”) to run. autopkgtest sets up apt pinning so that only the binary packages for the trigger come from -proposed, the rest from -release. This provides much better isolation between the mush of often hundreds of packages that get synced or uploaded every day.

This new behaviour is controlled by an extension of the --apt-pocket option. So you can say

  adt-run --apt-pocket=proposed=src:foo,libbar1,libbar-data ...

and then only the binaries from the foo source, libbar1, and libbar-data will come from -proposed, everything else from -release.
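
For illustration, the effect is roughly that of an apt preferences snippet along these lines (a sketch only: the pinning adt-run actually writes is an internal detail, and the release series “xenial” and the priority numbers here are made up):

  Explanation: everything in -proposed is deprioritized below the -release default of 500
  Package: *
  Pin: release a=xenial-proposed
  Pin-Priority: 100

  Explanation: except the trigger's binaries, which are forced to come from -proposed
  Package: libbar1 libbar-data
  Pin: release a=xenial-proposed
  Pin-Priority: 990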

Caveat: Unfortunately apt’s pinning is rather limited. As soon as any of the explicitly listed packages depends on a package or version that is only available in -proposed, apt falls over and refuses the installation instead of taking the required dependencies from -proposed as well. In that case, adt-run falls back to the previous behaviour of using no pinning at all. (This unfortunately got worse with apt 1.1; bug report to be done.) But it’s still helpful in many cases that don’t involve library transitions or other package sets that need to land in lockstep.

Unified testbed setup script

There are a number of changes that need to be made to testbeds so that tests can run with maximum performance (like running dpkg through eatmydata, disabling apt translations, or automatically using the host’s apt-cacher-ng), with reliable apt sources, and in a minimal environment (to detect missing dependencies and avoid interference from unrelated services — these days the standard cloud images have a lot of unnecessary fat). There is also a choice whether to apply these only once (every day) to an autopkgtest-specific base image, or on the fly to the current ephemeral testbed for every test run (via --setup-commands). Over time this led to quite a lot of code duplication between adt-setup-vm, adt-build-lxc, the new adt-build-lxd, cloud-vm-setup, and create-nova-image-new-release.

I cleaned this up, and there is now just a single setup-commands/setup-testbed script which works for all kinds of testbeds (LXC, LXD, QEMU images, cloud instances), both for preparing an image with adt-buildvm-ubuntu-cloud, adt-build-lx[cd] or nova, and for preparing just the current ephemeral testbed via --setup-commands.

While this is mostly an internal refactorization, it does impact users who previously used the adt-setup-vm script for e. g. building Debian images with vmdebootstrap. This script is now gone, and the generic setup-testbed entirely replaces it.
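
For illustration, usage could look roughly like this (the installed path of the script and the exact vmdebootstrap invocation are assumptions here, check the shipped documentation):

  # apply the tweaks on the fly to the ephemeral testbed of a single run:
  adt-run foo_1.0-1.dsc \
      --setup-commands=/usr/share/autopkgtest/setup-commands/setup-testbed \
      --- qemu adt-sid.img

  # or bake them into a Debian image when building it with vmdebootstrap
  # (plus whatever other vmdebootstrap options you normally use):
  vmdebootstrap --distribution=sid --image=adt-sid.img \
      --customize=/usr/share/autopkgtest/setup-commands/setup-testbed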


Aside from the above, every new version has a handful of bug fixes and minor improvements, see the git log for details. As always, if you are interested in helping out or contributing a new feature, don’t hesitate to contact me or file a bug report.


autopkgtest 3.14 “now twice as rebooty”

Almost every new autopkgtest release brings some small improvements, but 3.14 got some reboot-related changes worth pointing out.

First of all, I simplified and unified the implementation of rebooting across all runners that support it (ssh, lxc, and qemu). If you use a custom setup script for adt-virt-ssh you might have to update it: previously, the setup script needed to respond to a reboot function which triggered the reboot, waited for the testbed to go down, and waited for it to come back up. This got split up: adt-run now issues the actual reboot system command directly on the testbed itself, and the setup script is only responsible for the “wait for it to go down and come back up” part. The latter now has a sensible default implementation: it simply waits for the ssh port to become unavailable, and then waits for ssh to respond again; most testbeds should be fine with that. You only need to provide the new wait-reboot function in your ssh setup script if you need to do anything else (such as re-enabling ssh after reboot). Please consult the manpage and the updated SKELETON for details.
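
Purely as a hypothetical sketch of where such an override would live (the exact calling convention and hook name are documented in the manpage and SKELETON, not here):

  # excerpt from a custom adt-virt-ssh setup script; only needed if the
  # default "poll the ssh port" behaviour is not sufficient for your testbed
  case "$1" in
      wait-reboot)
          my-provider-cli reattach-ssh "$instance"   # hypothetical helper to re-enable ssh
          ;;
  esac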

The ssh runner gained a new --reboot option to indicate that the remote testbed can be rebooted. This will automatically declare the reboot testbed capability and thus you can now run rebooting tests without having to use a setup script. This is very useful for running tests on real iron.
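
For example, a run against a real machine could then look roughly like this (the host/login option names are from memory, check adt-virt-ssh(1), and the address is made up):

  adt-run systemd --- ssh -H 192.168.1.42 -l ubuntu --reboot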

Finally, in testbeds which support rebooting your tests will now find a new /tmp/autopkgtest-reboot-prepare command. Like /tmp/autopkgtest-reboot it takes an arbitrary “marker”, saves the current state, restores it after reboot, and re-starts your test with the marker; however, it does not trigger the actual reboot but expects the test to do that. This is useful if you want to test a piece of software which does a reboot as part of its operation, such as a system-image upgrade. Another use case is testing kernel crashes, kexec, or another “nonstandard” way of rebooting the testbed. README.package-tests shows an example of how this looks.
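
A rough sketch of such a test, assuming a hypothetical upgrade tool that reboots the machine on its own (README.package-tests has the authoritative example):

  case "$ADT_REBOOT_MARK" in
      "")
          /tmp/autopkgtest-reboot-prepare post-upgrade  # save state, but do not reboot yet
          my-upgrade-tool --apply-and-reboot            # hypothetical: this ends in a reboot
          ;;
      post-upgrade)
          verify-upgrade-result                         # hypothetical check after the reboot
          ;;
  esac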

3.14 is now available in Debian unstable and Ubuntu wily. As usual, for older releases you can just grab the deb and install it; it works on all supported Debian and Ubuntu releases.

Enjoy, and let me know if you run into troubles or have questions!


Northern lights trip to Lapland

We are back from our wonderful winter holiday! It took us to Lapland, in northern Finland and northern Norway. The whole photo album is also there to browse.

On Monday, 16 March, we start our journey to Lapland, from Munich airport via Helsinki to Ivalo. The flight already gives a good impression of Finland: apart from the southern coastal areas the country is very sparsely populated, and the whole north is little more than a patchwork of forests, lakes, and rivers. Except for the very largest rivers almost everything is frozen over, and you can clearly make out the tracks of the trucks and snowmobiles on the lakes, which can be used as roads here throughout the winter.

From the airport in Ivalo, with its tiny terminal (built with lots of wood, of course!), it is another half hour to Inari, a small village at the south-western corner of the huge Lake Inari. This is the homeland of the Sámi, but year round it also hosts many tourists from Finland, Europe, and even Asia who want to experience the northern lights, skiing, or endless summer nights.

On Tuesday we learn how people get around the winter landscape here: we visit a husky farm. As soon as we get out of the car we are greeted by the excited barking of about a hundred animals that can hardly wait to be allowed to run. Harnessing the sledges takes a while though, so until then we keep the huskies busy with lots of petting and playing. They are very curious and affectionate animals; even I, who am famously not much of a dog person, get along with them splendidly. And then we are finally off! Five dogs pull each sledge; Annett lies in it, takes photos, and cheers the huskies on, while I stand behind, steer, and, most importantly, brake. The huskies are enormously strong and know only one speed: fast. Everything else is handled with the foot brake, a metal bar with two prongs that dig into the snow. We race through the sunny winter landscape like this for about an hour, then help with unharnessing and thank the animals with a few more cuddles.

Before dinner Joachim, our tour guide, explains the basics of the northern lights to us, so we at least roughly know how they form, how they are predicted, and which shapes they can take. The anticipation is huge, because conditions are currently almost perfect: high solar wind activity, the Earth is in a favourable region of the Sun's magnetic field, and the weather forecast promises a clear sky for the night.

In the evening the great spectacle awaits us: right at ten o'clock the clouds that had rolled in a few hours earlier clear away again and reveal a gigantic starry sky. For three hours we marvel at northern lights in bands, streaks, coronas, and all kinds of other shapes. Some persist for minutes, others move very quickly, and we can hardly keep up with the gazing and photographing.

Up here there is hardly any light pollution, so in between we get wonderful views through the binoculars of Jupiter and the Galilean moons, the Andromeda galaxy, the Pleiades and Hyades, and various satellites.

On Wednesday we sleep in and then visit the local reindeer farm. Here we learn a lot about the life in the wild, the keeping, and the use of these semi-wild animals, including feeding them by hand and a ride around the block on a reindeer sledge. That ride is much calmer than with the huskies; reindeer are more the steady, enduring draught animals. The whole farm is run by a Sámi family which has been settled here for several generations. Over tea and pastries in a cosy heated “kota” (hut) we learn a lot about the history and present-day culture of the Sámi peoples.

In the afternoon we take a nice walk along the river, and now and then across it. As central Europeans we feel a bit queasy doing that, but everyone does it here, and 80 cm of ice with 60 cm of snow on top can take a lot more than that.

After dinner another talk had actually been planned, but shortly after dessert, at nine, there is another aurora alert 🙂 Tonight they show different shapes and behaviours and sweep across the entire sky in long bands from north to south. We keep watching until one in the morning.

On Thursday we get some motorized transport for a change: we go on a snowmobile tour! For the locals these are the vehicle of choice during the roughly seven months of snow. The scooter tracks criss-cross the Lapland forests and lakes, are well marked with poles, and there are even stop signs and signposts. And they really move! Annett takes the first leg from Inari across the lake to the “sacred mountain”, a large rocky island in the middle of Lake Inari. I then drive through the forest to the “wooden church”, which has stood there since 1647 and is a popular destination. There we have a proper Finnish-style picnic: a wood fire, sooty kettles for tea, thick sausages, and toast. Annett then drives us back to Inari. These machines are tremendous fun and are also super simple to operate (continuously variable automatic transmission, so apart from the throttle there is nothing to do).

In the evening we see northern lights again, this time very long-lived shapes. They give us plenty of time to experiment with exposure times, flash strength, and locations, so we end up with quite decent souvenir photos of everyone with the aurora, and Venus thrown in as a bonus. A talk by Joachim about the history of aurora research rounds off the evening.

On Friday the next event in the sky is already due: a partial solar eclipse which reaches about 91% coverage here at 12:13. We gather in front of the hotel with protective glasses and an H-alpha telescope (through which you can see flares on the Sun) and also shoot a series of photos. It is very cold and my hands freeze, but it was easily worth it.

In the afternoon we visit the Sámi museum just around the corner. It is nicely done: a large hall in which each wall is dedicated to one season, showing the animal and plant life in each month. We also learn a lot about the history and way of life of the Sámi.

On Saturday we walk another round across Lake Inari before our bus leaves at 13:00 for the second part of the trip. We soon cross the Norwegian border and find ourselves in a completely different landscape: suddenly there are mountains, the forest becomes smaller and sparser and consists almost entirely of birches, and for the first time in a long while we see liquid water again, in the fjords. We make a small detour to Kirkenes, Norway's second most northern town (after Hammerfest), which lives mainly from iron ore mining and shipping. The well-known Hurtigruten voyage also starts here.

In the early evening we arrive in Svanvik, at “Svanhovd”, a former farm which is now a nature conservation and education centre as well as a hotel. After a more than ample dinner buffet we are again treated to a gigantic aurora show, with running lights and rapidly changing bright green bands.

Sunday is for hiking. The weather is still nice and sunny, though cold; our destination is a lookout tower about 8 km away. Unfortunately the last kilometre is barely passable because of the deep snow, so we rather walk a bit further along the road instead and have a small picnic at a forestry work site.

In the evening Joachim uses his home-built armillary sphere to explain and demonstrate a few lessons of celestial mechanics. Such a device is simply brilliant: from the point of view of an Earth-bound observer you can show and understand all kinds of phenomena, such as summer and winter day lengths, solar and lunar eclipses, polar day and night, planetary motions, the long-term shift of the ecliptic, and so on. At night it is unfortunately cloudy, so for once we go to bed early.

Monday brings our last excursion: we visit the snow hotel in Kirkenes! It is built anew every September from large balloons and snow cannons, and then every room and the bar get ice and snow sculptures, made by Chinese artists who are flown in especially for this. In the afternoon we do a bit of solar spectroscopy and then enjoy a few rounds of sauna, including rolling around in the snow.

We spend the last Tuesday quite leisurely with a hike, a sauna, and of course northern lights again in the evening.


Snappy package for Robot Operating System tutorial

ROS what?

Robot Operating System (ROS) is a set of libraries, services, protocols, conventions, and tools to write robot software. It’s about seven years old now, is free software, and has a growing community, bringing Linux into the interesting field of robotics. They primarily target/support running on Ubuntu (the current Indigo ROS release runs on 14.04 LTS on x86), but they also have some other experimental platforms like Ubuntu ARM and OS X.

ROS, meet Snappy

It appears that their use cases match Ubuntu Snappy’s vision really well: ROS apps usually target single-function devices which require absolutely robust deployments and upgrades, and while they of course require a solid operating system core they mostly implement their own build system and libraries, so they don’t make too many assumptions about the underlying OS layer.

So I went ahead and created a snap package for the Turtle ROS tutorial, which automates all the setup and building. As this is a relatively complex and big project, it helped to uncover quite a number of bugs, the most important of which have been fixed by now. So while building the snap still needs quite a number of workarounds, installing and running it is now reasonably clean.

Enough talk, how can I get it?

If you are interested in ROS, you can check out bzr branch lp:~snappy-dev/snappy-hub/ros-tutorials. This contains documentation and a script which builds the snap package in a clean Ubuntu Vivid environment. I recommend a schroot for this so that you can simply do e. g.

  $ schroot -c vivid ./

This will produce a /tmp/ros/ros-tutorial_0.2_<arch>.snap package. You can download a built amd64 snap if you don’t want to build it yourself.

Installing and running

Then you can install this on your Snappy QEMU image or other installation and run the tutorial (again, see the documentation in the branch for details):

  yourhost$ ssh -o UserKnownHostsFile=/dev/null -p 8022 -R 6010:/tmp/.X11-unix/X0 ubuntu@localhost
  snappy$ scp <yourhostuser>@
  snappy$ sudo snappy install ros-tutorial_0.2_amd64.snap

You need to adjust <yourhostuser> accordingly; if you didn’t build yourself but downloaded the image, you might also need to adjust the host path where you put the .snap.

Finally, run it:

  snappy$ ros-tutorial.rossnap roscore &
  snappy$ DISPLAY=localhost:10.0 ros-tutorial.rossnap rosrun turtlesim turtlesim_node &
  snappy$ ros-tutorial.rossnap rosrun turtlesim turtle_teleop_key

You might prefer ssh’ing in three times and running the commands in separate shells. Only turtlesim_node needs $DISPLAY (and is quite an exception — a usual robotics app of course wouldn’t!). Also, note that this requires ssh from at least Ubuntu 14.10 – if you are on 14.04 LTS, see



Ramblings from LinuxCon/Plumbers 2014

I’m on my way home from Düsseldorf where I attended the LinuxCon Europe and Linux Plumbers conferences. I was quite surprised how huge LinuxCon was: there were about 1,500 people there! Certainly much more than last year in New Orleans.

Containers (in both LXC and docker flavors) are the Big Thing everybody talks about and works with these days; there was hardly a presentation where they weren’t mentioned, and (what felt like) half of the presentations were either about how to improve these technologies or how to use them to solve problems. For example, some people/companies really take LXC to the max and try to do everything in it, including tasks which in the past you had only considered full VMs for, like untrusted third-party tenants. There was an interesting talk about how to secure networking for containers, and pretty much everyone uses docker or LXC now to deploy workloads and run CI tests. There are projects like “fleet” which manage systemd jobs across an entire cluster of containers (a distributed task scheduler) or like which auto-build packages from each commit of projects.

Another common topic is the trend towards building/shipping complete (r/o) system images, atomic updates, and all that goodness. The central thing here was certainly “Stateless systems, factory reset, and golden images”, which analyzed the common requirements and proposed how to implement this with various package systems and scenarios. In my opinion this is certainly the way to go, as our current solution on Ubuntu Touch (i. e. Ubuntu’s system-image) is still far too limited and static; it doesn’t extend to desktops/servers/cloud workloads at all. It’s also a lot of work to implement this properly, so it’s certainly understandable that we took that shortcut for prototyping and the relatively limited Touch phone environment.

At Plumbers my main occupations were the highly interesting LXC track, to see what’s coming in the container world, and the systemd hackfest. At the latter I was again mostly listening (after all, I’m still learning most of the internals there…) and was able to work on some cleanups and improvements like getting rid of some of Debian’s patches and properly running the test suite. It was also great to sync up again with David Zeuthen about the future of udisks and some particular proposed new features. Looks like I’m the de-facto maintainer now, so I’ll need to spend some time soon to review/include/clean up some much-requested little features and some fixes.

All in all a great week to meet some fellows of the FOSS world again, get to know a lot of new interesting people and projects, and re-learn to drink beer in the evening (I hardly drink any at home :-P).

If you are interested you can also see my raw notes, but beware that they are mostly just scribbles.

Now, off to next week’s Canonical meeting in Washington, DC!


Running autopkgtests in the cloud

It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~ 4000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runners are currently static, manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.

This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power!. This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.

I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.

The most important preparation step to do for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: You need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored the entire knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, then turn this into a new adt-specific “daily” image with nova image-create. I wrote a little script to demonstrate and automate this; the only parameter that it gets is the name of the pristine image to base on. This was tested on Canonical’s Bootstack cloud, so it might need some adjustments on other clouds.
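
Under the hood, the script's steps boil down to something like this (instance/image names, flavor, and the ssh user are assumptions for illustration only):

  nova boot --image ubuntu-utopic-pristine --flavor m1.small --poll adt-image-prep
  ssh ubuntu@$INSTANCE_IP sudo /usr/share/autopkgtest/adt-setup-vm
  nova image-create --poll adt-image-prep adt-utopic-amd64
  nova delete adt-image-prep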

Thus something like this should be run daily (pick the base images from nova image-list):

  $ ./ ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
  $ ./ ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.

Now I picked 34 packages that have the “most demanding” tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

  $ cat pkglist

Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an “always need this constant to select a non-default network” option that is specific to Canonical Bootstack.

Finally, let’s run the packages from above using ten VMs in parallel:

  parallel -j 10 ./adt-run-nova -- $(< pkglist)

After a few iterations of bug fixing there are now only two failures left, which are due to flaky tests; the infrastructure now seems to hold up fairly well.

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!


autopkgtest 3.5: Reboot support, Perl/Ruby implicit tests

Last week’s autopkgtest 3.5 release (in Debian sid and Ubuntu Utopic) brings several new features which I’d like to announce.

Tests that reboot

For testing low-level packages like init or the kernel it is sometimes desirable to reboot the testbed in the middle of a test. For example, I added a new boot_and_services systemd autopkgtest which configures grub to boot with systemd as pid 1, reboots, and then checks that the most important services like lightdm, D-BUS, NetworkManager, and cron come up as expected. (This test will be expanded a lot in the future to cover other areas like the journal, logind, etc.)

In a testbed which supports rebooting (currently only QEMU) your test will now find an “autopkgtest-reboot” command which the test calls with an arbitrary “marker” string. autopkgtest will then reboot the testbed, save/restore any files it needs to (like the test’s file tree or previously created artifacts), and then re-run the test with ADT_REBOOT_MARK=mymarker.
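
As a rough illustration (not the actual systemd test), such a test can be structured roughly like this:

  if [ "$ADT_REBOOT_MARK" != "booted-with-systemd" ]; then
      # ... configure grub to boot with systemd as pid 1 ...
      autopkgtest-reboot booted-with-systemd   # reboots the testbed and re-runs this test
  fi
  # after the reboot we continue here with ADT_REBOOT_MARK=booted-with-systemd
  systemctl is-active lightdm.service NetworkManager.service cron.service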

The new “Reboot during a test” section in README.package-tests explains this in detail with an example.

Implicit test metadata for similar packages

The Debian pkg-perl team recently discussed how to add package tests to the ~ 3,000 Perl packages. For most of these the test metadata looks pretty much the same, so they created a new pkg-perl-autopkgtest package which centralizes the logic. autopkgtest 3.5 now supports an implicit debian/tests/control file to avoid having to modify several thousand packages with exactly the same file.

An initial run already looked quite promising: 65% of the packages pass their tests. There will now be a few iterations to identify common failures and fix those in pkg-perl-autopkgtest and autopkgtest itself.

There is still some discussion about how implicit test control files go together with the DEP-8 specification, as other runners like sadt do not support them yet. Most probably we’ll declare those packages XS-Testsuite: autopkgtest-pkg-perl instead of the usual autopkgtest.
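
For illustration, a pkg-perl package opting into the implicit tests would then carry a line like this in its debian/control source stanza (the package name here is hypothetical):

  Source: libfoo-bar-perl
  XS-Testsuite: autopkgtest-pkg-perl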

In the same vein, Debian’s Ruby maintainer (Antonio Terceiro) added implicit test control support for Ruby packages. We haven’t done a mass test run with those yet, but their structure will probably look very similar.
