Articles tagged with ubuntu

Snappy package for Robot Operating System tutorial

ROS what?

Robot Operating System (ROS) is a set of libraries, services, protocols, conventions, and tools for writing robot software. It’s about seven years old now, it is free software, and it has a growing community, bringing Linux into the interesting field of robotics. It primarily targets and supports running on Ubuntu (the current Indigo ROS release runs on 14.04 LTS on x86), but there are also other experimental platforms such as Ubuntu ARM and OS X.

ROS, meet Snappy

It appears that their use cases match Ubuntu Snappy’s vision really well: ROS apps usually target single-function devices which require absolutely robust deployments and upgrades, and while they of course require a solid operating system core they mostly implement their own build system and libraries, so they don’t make too many assumptions about the underlying OS layer.

So I went ahead and created a snap package for the Turtle ROS tutorial, which automates all the setup and building. As this is a relatively complex and big project, it helped to uncover quite a number of bugs, the most important of which have now been fixed. So while building the snap still requires quite a number of workarounds, installing and running it is now reasonably clean.

Enough talk, how can I get it?

If you are interested in ROS, you can look at bzr branch lp:~snappy-dev/snappy-hub/ros-tutorials. This contains documentation and a script which builds the snap package in a clean Ubuntu Vivid environment. I recommend using a schroot for this, so that you can simply do e. g.

  $ schroot -c vivid ./

This will produce a /tmp/ros/ros-tutorial_0.2_<arch>.snap package. You can download a built amd64 snap if you don’t want to build it yourself.

Installing and running

Then you can install this on your Snappy QEMU image or other installation and run the tutorial (again, see the documentation in the branch for details):

  yourhost$ ssh -o UserKnownHostsFile=/dev/null -p 8022 -R 6010:/tmp/.X11-unix/X0 ubuntu@localhost
  snappy$ scp <yourhostuser>@
  snappy$ sudo snappy install ros-tutorial_0.2_amd64.snap

You need to adjust <yourhostuser> accordingly; if you didn’t build the snap yourself but downloaded it, you might also need to adjust the host path where you put the .snap.

Finally, run it:

  snappy$ ros-tutorial.rossnap roscore &
  snappy$ DISPLAY=localhost:10.0 ros-tutorial.rossnap rosrun turtlesim turtlesim_node &
  snappy$ ros-tutorial.rossnap rosrun turtlesim turtle_teleop_key

You might prefer ssh’ing in three times and running the commands in separate shells. Only turtlesim_node needs $DISPLAY (and is quite an exception; a typical robotics app of course wouldn’t!). Also, note that this requires ssh from at least Ubuntu 14.10; if you are on 14.04 LTS, see



Creating a local swift server on Ubuntu for testing

Our current autopkgtest machinery uses Jenkins (a private and a public one) and lots of “rsync state files between hosts”, both of which have reached a state where they fall over far too often. It’s flakey, hard to maintain, and hard to extend with new test execution slaves (e. g. for new architectures, or using different test runners). So I’m looking into what it would take to replace this with something robust, modern, and more lightweight.

In our new Continuous Integration world the preferred technologies are RabbitMQ for doing the job distribution (which is delightfully simple to install and use from Python) and OpenStack’s swift for distributed data storage. We have a properly configured swift in our data center, but for local development and experimentation I really just want a dead simple throw-away VM or container which gives me the swift API. swift is quite a bit more complex, and it took me several hours of reading, working through various tutorials, debugging connection problems, and searching Stack Exchange to set it up. But now it’s working, and I have condensed the whole setup into a single shell script.
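
As an aside on the RabbitMQ part, here is a minimal sketch of what job distribution from Python can look like with the pika library; the queue name and payload are made up purely for illustration:

import pika

# connect to a local RabbitMQ broker (default port and credentials)
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# declare a queue for test requests; safe to call from producer and consumer
channel.queue_declare(queue='autopkgtest-requests')

# enqueue a job; the payload format is just an illustrative example
channel.basic_publish(exchange='',
                      routing_key='autopkgtest-requests',
                      body='{"package": "hello", "release": "trusty"}')

connection.close()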

Back to swift: you can run the setup script in a standard Ubuntu container or VM as root:

sudo apt-get install lxc
sudo lxc-create -n swift -t ubuntu -- -r trusty
sudo lxc-start -n swift
# log in as ubuntu/ubuntu, and wget or scp
sudo ./

Then get swift’s IP from sudo lxc-ls --fancy, install the swift client locally, and talk to it:

$ sudo apt-get install python-swiftclient
$ swift -A -U testproj:testuser -K testpwd stat
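
The same can be done from Python with python-swiftclient. A minimal sketch, assuming the tempauth credentials from above and a made-up auth URL (substitute your container’s IP and the auth port/path that the setup script configures):

import swiftclient

# the auth URL is an assumption for illustration; adjust to your container's IP
conn = swiftclient.client.Connection(
    authurl='http://<swift-ip>:8080/auth/v1.0',
    user='testproj:testuser',
    key='testpwd')

# create a container and upload a small object
conn.put_container('experiments')
conn.put_object('experiments', 'hello.txt', contents=b'hello swift')

# read it back
headers, body = conn.get_object('experiments', 'hello.txt')
print(body)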

Caveat: Don’t use this for any production machine! It’s configured to maximum insecurity, with static passwords and everything.

I realize this is just poor man’s juju, but juju-local is currently not working for me (I only just analyzed that). There is a charm for swift as well, but I haven’t tried that yet. In any case, it’s dead simple now, and maybe useful for someone else.


Running autopilot tests in autopkgtest

I recently created a test for digicam photo import for Shotwell (using autopilot and umockdev), and made that run as an autopkgtest. It occurred to me that this might be interesting for other desktop applications as well.

The community QA team has written some autopkgtests for desktop applications such as evince, nautilus, or Firefox. We run them regularly in Jenkins on real hardware in a full desktop environment, so that they can use the full desktop integration (3D, indicators, D-BUS services, etc). But of course for those the application already needs to be in Ubuntu.

If you only want to test functionality from the application itself and don’t need 3D, a proper window manager, etc., you can also call your autopilot tests from autopkgtest with a wrapper script like this:

#!/bin/sh
set -e

# start X
(Xvfb :5 >/dev/null 2>&1 &)
export DISPLAY=:5

# start local session D-BUS
eval `dbus-launch`
export XAUTHORITY=/dev/null

# change to the directory where your autopilot tests live, and run them
cd `dirname $0`
autopilot run autopilot_tests

This will set up the bare minimum, Xvfb and a session D-BUS, and then run your autopilot tests. Your debian/tests/control should have Depends: yourapp, xvfb, dbus-x11, autopilot-desktop, libautopilot-gtk for this to work; a full example stanza is shown below. (Note: I didn’t manage to get this running with xvfb-run; any hints on how to simplify this are appreciated, but please test that it actually works.)
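
For reference, a matching debian/tests/control stanza could look like this; the test name “autopilot” and the package name “yourapp” are placeholders, and the wrapper script above would then live in debian/tests/autopilot:

Tests: autopilot
Depends: yourapp, xvfb, dbus-x11, autopilot-desktop, libautopilot-gtk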

Please note that this does not replace the “run in full desktop session” tests I mentioned earlier, but it’s a nice addition to check that your package has correct dependencies and to automatically block new libraries/dependencies which break your package from entering Ubuntu.


Ubuntu Saucy translations are now open

You can now start translating Ubuntu Saucy on Launchpad.


PostgreSQL 9.2 final available for Debian and Ubuntu

PostgreSQL 9.2 has just been released, after a series of betas and a release candidate. See for yourself what’s new, and try it out!

Packages are available in Debian experimental as well as my PostgreSQL backports PPA for Ubuntu 10.04 to 12.10, as usual.

Please note that 9.2 will not land any more in the feature frozen Debian Wheezy and Ubuntu Quantal (12.10) releases, as none of the server-side extensions are packaged for 9.2 yet.


Apport 2.5: Better support for third-party and PPA packages

I just released Apport 2.5 with a bunch of new features and some bug fixes.

By default you cannot report bugs and crashes against packages from PPAs, as they are not Ubuntu packages. Some packages like Unity or UbuntuOne define their own crash database which reports bugs against the upstream project instead. This has been a bit cumbersome in the past, as these packages needed to ship a /etc/apport/crashdb.conf.d/ snippet. This has now become much easier: package hooks can define a new crash database directly (#551330):

def add_info(report, ui):
    # decide whether this report should go to the upstream project
    # (placeholder condition; a real hook would check the package origin etc.)
    if determine_whether_to_report_to_upstream:
        report['CrashDB'] = '{ "impl": "launchpad", "project": "picsaw" }'

(Documented in package-hooks.txt)

Apport now also looks for package hooks in /opt (#1020503) if the executable path or a file in the package is somewhere below /opt (it tries all intermediate directories).
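
To illustrate the “all intermediate directories” part, here is a rough sketch of the idea (illustrative only, not Apport’s actual code; where exactly hooks live below each prefix is an assumption):

import os

def candidate_hook_dirs(executable_path):
    '''Yield the /opt prefixes of executable_path, longest first.

    For /opt/extras.ubuntu.com/picsaw/bin/picsaw this yields .../picsaw/bin,
    .../picsaw, /opt/extras.ubuntu.com, and /opt; a real implementation would
    then look for hook files below each of these prefixes.
    '''
    path = os.path.dirname(executable_path)
    while path.startswith('/opt'):
        yield path
        path = os.path.dirname(path)

for d in candidate_hook_dirs('/opt/extras.ubuntu.com/picsaw/bin/picsaw'):
    print(d)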

With these two, we should have much better support for filing bugs against ARB packages.

This version also finally drops the usage of gksu and moves to PolicyKit. Now we only have one package left in the default install (update-notifier) which uses it. Almost there!


New PostgreSQL microreleases with two security fixes

New PostgreSQL microreleases with two security fixes and several bug fixes were just announced publicly.

I spent the morning with the packaging orgy for Debian unstable and experimental (now uploaded), Debian Wheezy (update sent to security team), Ubuntu hardy, lucid, natty, oneiric, precise (LP #1008317) and my backports PPA.

I tested these fairly thoroughly, but please let me know if you encounter any problems with them.


QA changes for Ubuntu 12.04

Half a year ago I blogged about the changed expectations and processes for improving the quality of the development release which we discussed at UDS in Orlando: a promise that we don’t break the development version, that regressions are not to be tolerated, and acceptance criteria for Canonical upstreams. For that we introduced the Stable+1 team, actually reverted some broken packages, our QA team set up rigorous daily installation image and upgrade tests, and the development process for Unity and related projects was changed to enforce buildability and passing automated tests for each and every change to trunk.

To be honest, I was still a tad sceptical back then when this was planned. These were a lot of changes for one cycle, the stable+1 team was a considerable resource investment (starting with three people full-time in the first few months), and not least our friends in the DX team felt thwarted because they had to spend a long time developing tests and then change their development habits and practices.

So was all that effort worth it?


Just a random sample of goodness that this brought:

  • It was nice not to have to sit down for an hour every couple of days to figure out how to get my desktop back after the daily dist-upgrade bricked it.
  • Unity, compiz, and friends were remarkably stable. I still remember the previous cycles where every new version got crashy in different ways, broke virtual workspaces, and what not. The worst thing that happened this cycle was eternally breaking key bindings (or shuffling them around), but at least those usually had obvious workarounds.
  • As a result, I think we had at least one, maybe two orders of magnitude more testers of the daily development release than in previous cycles. So we got a lot of good bug reports and also patch contributions for smaller issues in Precise which we otherwise would not have discovered.
  • The daily dist-upgrade tests tremendously helped to uncover packaging problems which would break real-world upgrades out there by the dozens. It took months to fix the hardest one: upgrading 10.04 LTS to 12.04 LTS with all universe packages offered in software-center. This beast takes 13 hours to run, so nobody really did manual tests like that in the past cycles.
  • Due to the daily automatic CD image builds we dramatically reduced both the cost of fixing regressions and the emergency hackathons during milestone preparations. It is a lot easier to unbreak e. g. LVM setup or OEM install modes on our images when the regression happened just a day before than to discover it two days before a milestone is due, as again nobody tests these less common modes very often.
  • As a result, I really think the investments in QA and the stable+1 team have already paid off twofold by giving us more time to work on the less critical fixes, avoiding lots of user frustration about broken upgrades, and generally making daily development a lot more enjoyable. Or, as Rick Spencer puts it: Velocity, velocity, velocity!

Despite these improvements, there are still some things I’m looking forward to in the next cycles: thanks to Colin Watson we can now use -proposed as a proper staging area, and we have used this feature rather extensively in the past month. From my point of view, 90% of the remaining daily dist-upgrade failures were due to packages building on different architectures at vastly different times, or failing on some, but not all, architectures (“arch skew”). This is something you cannot really predict or guard against as a developer when you upload large and potentially harmful packages directly to the development release, so uploading them to the staging area and letting everything build there first will reduce the breakage to zero. This was successfully demonstrated with Unity, GTK, and other packages where arch skew pretty much always causes people to hose their desktop and daily CD images to stop working.

I’m also looking forward to combining the staging area with lots of automatic tests against reverse dependencies (e. g. testing the installer against a new GTK or pygobject before it lands), something we have just barely dipped our toes into.

I can’t imagine how we were ever able to develop our new releases the old way. :-)

Precise Pangolin^W^WUbuntu 12.04, I’m proud of you! Go out and amaze people!


Precise’s QA improvements for Alpha-1

I’m the release engineer in charge of Precise Alpha 1, which is currently being prepared. I must say, this has been a real joy! The fruits of the new QA paradigm and strategy and of the new Stable+1 maintenance team have already produced remarkable results:

  • The archive consistency reports like component-mismatches, uninstallability, etc. now appear about 20 minutes earlier than in oneiric.
  • CD image builds can now start 30 minutes earlier after the publisher run, and are much quicker now due to moving to newer machines. We can now build an i386 or amd64 CD image in 8 minutes! Currently they still need to wait for the slow powerpc buildd, but moving to a faster machine there is in progress. These improvements lead to much faster image rebuild turnarounds.
  • Candidate CDs now get automatically posted to the new ISO tracker as soon as they appear.
  • Whenever a new Ubuntu image is built (daily or candidate), they automatically get smoke-tested, so we know that the installer works under some standard scenarios and produces an install which actually boots.
  • Due to the new discipline and the stable+1 team, we had working daily ISOs pretty much every day. In previous alphas, the release engineer(s) pretty much had to work full-time for a day or two to fix the worst uninstallability issues and the like; all of this has now gone away.

All this meant that, as a release engineer, almost all of the hectic and rather dull work, like watching for finished ISO builds and posting them or getting the archive into a releasable state, went away. We only had to decide when it was a good time to build a set of candidate images and trigger them, which is just copy&pasting some standard commands.

So I could fully concentrate on the interesting bits like actually investigating and debugging bug reports and regressions. As the Law of Conservation of Breakage dictates, taking away work from the button pushing side just caused the actual bugs to be much harder and earned us e. g. this little gem which took Jean-Baptiste, Andy, and me days to even reproduce properly, and will take much more to debug and fix.

In summary, I want to say a huge “Thank you!” to the Canonical QA team, in particular Jean-Baptiste Lallement for setting up the auto-testing and Jenkins integration, and the stable+1 team (Colin Watson, Mike Terry, and Mathieu Trudel-Lapierre in November) for keeping the archive in such excellent shape and improving our tools!


Apport 1.90: Client-side duplicate checking

Apport and the retracer bot in the Canonical data center have provided server-side automatic closing of duplicate crash report bugs for quite a long time. As we have only kept Apport crash detection enabled in the development release, we got away with this, as bugs usually did not get so many duplicates that they became unmanageable. Also, the number of duplicates provided a nice hint of how urgent and widespread a crash actually was.

However, it’s time to end that era and provide something better now:

  • Server-side duplicate handling probably caused a lot of frustration when the reporter of a crash spent time, bandwidth, and creativity to upload the crash data and write a description for it, only to find the report closed as a duplicate 20 minutes later.
  • Some highly visible crashes sometimes generated up to a hundred duplicates in Launchpad, which was prone to timeouts and caused needless catch-up work for the retracers.
  • We plan to have a real crash database soon, and eventually want to keep Apport enabled in stable releases. This will raise the number of duplicates that we get by several magnitudes.
  • For common crashes we had to write manual bug patterns to avoid getting even more duplicates.

So with the just-released Apport 1.90 we introduce client-side duplicate checking. From now on, when you report a crash, you are likely to see “We already know about this” right away, without having to upload or type anything, and you will be directed to the bug page. You should mark yourself as affected and/or subscribe to the bug, both to get a notification when it gets fixed and to properly raise the “hotness” of the bug so that it bubbles up to developer attention.

For the technically interested, this is how we detect duplicates for the “signal” crashes like SIGSEGV (as opposed to e. g. Python crashes, where we always have a fully symbolic stack trace):
As we cannot rely on symbolic stack traces, and do not want to force every user to download tons of debug symbols, Apport now falls back to generating a “crash address signature”, which combines the absolute addresses of the (non-symbolic) stack trace and the /proc/pid/maps mapping into a stack of libraries and the relative offsets within those, which is stable under ASLR for a given set of dependency versions. As the offsets are specific to the architecture, we form the signature as a combination of the executable name, the signal number, the architecture, and the offset list. For example, the i386 signature of bug looks like this:


As library dependencies can change, we have more than one architecture, and the faulty function can be called from different entry points, there can be many address signatures for a bug, so the database maintains an N:1 mapping. In its current form the signatures are taken as-is, which is much more strict than it needs to be. Once this works in principle, we can refine the matching to also detect duplicates from different entry points, by reducing the part that needs to match to the common prefix of several signatures which were proven to be duplicates by the retracer (which gets a fully symbolic stack trace).
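
As a rough illustration of how such a signature could be assembled (a sketch only; the exact format and field separators are assumptions, not Apport’s actual implementation):

def address_signature(executable, signal_number, architecture, stack):
    '''Build an illustrative crash address signature.

    stack is a list of (library, relative_offset) pairs derived from the
    non-symbolic stack trace and /proc/pid/maps; the offsets are stable
    under ASLR for a given set of library versions.
    '''
    offsets = ':'.join('%s+0x%x' % (lib, off) for lib, off in stack)
    return '%s:%s:%s:%s' % (executable, signal_number, architecture, offsets)

# hypothetical example
print(address_signature('/usr/bin/gedit', 11, 'i386',
                        [('libgtk-3.so.0', 0x1a2b3), ('libc.so.6', 0x4d5e6)]))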

The retracer bots now export the current duplicate/address signature database in an indexed text format from which Apport clients can quickly check whether a bug is known.

For the Launchpad crash database implementation we actually check whether the bug is readable by the reporter, i. e. it is private and the reporter is in a subscribed team, or the bug is public; if not, we let them report the bug anyway and duplicate it later through the existing server-side retracer, so that the reporter has a chance of getting subscribed to the bug. We also let the bug be filed if the currently existing symbolic stack trace is bad (tagged as apport-failed-retrace) or if a developer wants a new symbolic stack trace with the current libraries (tagged as apport-request-retrace).

As this is a major new feature, I decided that it’s time to call this Apport 2.0. This is the first public beta towards it, thus called 1.90. With Apport’s test-driven and agile development, the version numbers do not mean much anyway (the retracer bots in the data center always just run trunk, for example), so this is as good a time as any to reset the rather large “.26” minor version that we are at right now.
