simple email setup

categories: config, debian

I was unable to find a good resource describing how to create a simple self-hosted email setup. The most surprising discovery was how much already works after:

apt-get install postfix dovecot-imapd

Right after finishing the installation I was able to receive email (but only in /var/mail in mbox format) and send email (but not from any other host). So while I expected a pretty complex setup, it turned out to boil down to just adjusting some configuration parameters.


The two interesting files to configure postfix are /etc/postfix/ and /etc/postfix/ A commented version of the former exists in /usr/share/postfix/ Alternatively, there is the roughly 600k-word man page postconf(5). The latter file is documented in master(5).


I changed the following in my configuration:

@@ -37,3 +37,9 @@
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
+home_mailbox = Mail/
+smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination permit_sasl_authenticated
+smtpd_sasl_type = dovecot
+smtpd_sasl_path = private/auth
+smtp_helo_name =

At this point, also make sure that the parameters smtpd_tls_cert_file and smtpd_tls_key_file point to the right certificate and private key file. So either change these values or replace the content of /etc/ssl/certs/ssl-cert-snakeoil.pem and /etc/ssl/private/ssl-cert-snakeoil.key.

The home_mailbox parameter sets the default path for incoming mail. Since there is no leading slash, this puts mail into $HOME/Mail for each user. The trailing slash is important as it specifies "qmail-style delivery", which means maildir.
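The maildir layout is easy to inspect with Python's stdlib mailbox module: one file per message inside cur/, new/ and tmp/ subdirectories. A small self-contained sketch (using a temporary directory rather than a real mailbox):

```python
# demonstrate the maildir format that the trailing slash selects
import mailbox
import os
import tempfile

base = tempfile.mkdtemp()
md = mailbox.Maildir(os.path.join(base, "Mail"))  # creates cur/, new/ and tmp/
key = md.add("From: alice@example.com\nSubject: hello\n\nbody\n")
print(sorted(os.listdir(os.path.join(base, "Mail"))))  # -> ['cur', 'new', 'tmp']
print(md[key]["Subject"])                              # -> hello
```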

The default of the smtpd_recipient_restrictions parameter is permit_mynetworks reject_unauth_destination, so this just adds the permit_sasl_authenticated option. It is needed to allow users to send email once they have successfully verified their login through dovecot. The dovecot login verification is activated through the smtpd_sasl_type and smtpd_sasl_path parameters.

I found it necessary to set the smtp_helo_name parameter to the reverse DNS of my server because many other email servers only accept email from a server with a valid reverse DNS entry. My hosting provider charges USD 7.50 per month to change the default reverse DNS name, so the easy solution is to instead adjust the name announced in the SMTP HELO.


The second configuration file is used to enable the submission service. The following diff just removes the comment characters from the appropriate section.

@@ -13,12 +13,12 @@
#smtpd pass - - - - - smtpd
#dnsblog unix - - - - 0 dnsblog
#tlsproxy unix - - - - 0 tlsproxy
-#submission inet n - - - - smtpd
-# -o syslog_name=postfix/submission
-# -o smtpd_tls_security_level=encrypt
-# -o smtpd_sasl_auth_enable=yes
-# -o smtpd_client_restrictions=permit_sasl_authenticated,reject
-# -o milter_macro_daemon_name=ORIGINATING
+submission inet n - - - - smtpd
+ -o syslog_name=postfix/submission
+ -o smtpd_tls_security_level=encrypt
+ -o smtpd_sasl_auth_enable=yes
+ -o smtpd_client_restrictions=permit_sasl_authenticated,reject
+ -o milter_macro_daemon_name=ORIGINATING
#smtps inet n - - - - smtpd
# -o syslog_name=postfix/smtps
# -o smtpd_tls_wrappermode=yes


Since the above configuration changes made postfix store email in a different location and format than the default, dovecot has to be informed about these changes as well. This is done in /etc/dovecot/conf.d/10-mail.conf. The second configuration change, in /etc/dovecot/conf.d/10-master.conf, enables postfix to authenticate users through dovecot. For SSL one should look into /etc/dovecot/conf.d/10-ssl.conf and either adapt the parameters ssl_cert and ssl_key or store the correct certificate and private key in /etc/dovecot/dovecot.pem and /etc/dovecot/private/dovecot.pem, respectively.

The dovecot-core package (which dovecot-imapd depends on) ships tons of documentation. The file /usr/share/doc/dovecot-core/dovecot/documentation.txt.gz gives an overview of what resources are available. The path /usr/share/doc/dovecot-core/dovecot/wiki contains a snapshot of the dovecot wiki. The example configurations seem to be the same files as in /etc/, which are already well commented.


The following diff changes the default email location in /var/mail to a maildir in ~/Mail as configured for postfix above.

@@ -27,7 +27,7 @@
# <doc/wiki/MailLocation.txt>
-mail_location = mbox:~/mail:INBOX=/var/mail/%u
+mail_location = maildir:~/Mail

# If you need to set multiple mailbox locations or want to change default
# namespace settings, you can do it by defining namespace sections.


And this enables the authentication socket for postfix:

@@ -93,9 +93,11 @@

# Postfix smtp-auth
- #unix_listener /var/spool/postfix/private/auth {
- # mode = 0666
- #}
+ unix_listener /var/spool/postfix/private/auth {
+ mode = 0660
+ user = postfix
+ group = postfix
+ }

# Auth process is run as this user.
#user = $default_internal_user


Now email will automatically be put into the '~/Mail' directory of the receiver. So a user has to be created for whom one wants to receive mail...

$ adduser josch

...and any aliases for it to be configured in /etc/aliases.

@@ -1,2 +1,4 @@
-# See man 5 aliases for format
-postmaster: root
+root: josch
+postmaster: josch
+hostmaster: josch
+webmaster: josch

After editing /etc/aliases, the command

$ newaliases

has to be run. More can be read in the aliases(5) man page.
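The expansion that aliases(5) performs can be illustrated with a short sketch. The alias table mirrors the diff above; the resolver itself is my own toy, not the actual postfix implementation:

```python
# follow name -> target until a name with no alias entry (a real user) remains
def resolve(aliases, name, seen=()):
    if name in seen:            # guard against alias loops
        raise ValueError("alias loop: " + name)
    if name not in aliases:
        return name             # no entry: deliver to the local user
    return resolve(aliases, aliases[name], seen + (name,))

aliases = {"root": "josch", "postmaster": "josch", "webmaster": "josch"}
print(resolve(aliases, "postmaster"))   # -> josch
```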

Finishing up

Everything is done and now postfix and dovecot have to be informed about the changes. There are many ways to do that: either restart the services, reboot, or just run:

$ postfix reload
$ doveadm reload


To also check incoming email against the sender's SPF record, a policy daemon can be installed:

$ apt-get install postfix-policyd-spf-python


The time limit for the policy service is set in the postfix configuration:

policy-spf_time_limit = 3600s


The policy service itself is defined as follows:

policy-spf unix - n n - - spawn user=nobody argv=/usr/bin/policyd-spf

To let other servers verify email sent from this host, the domain needs a DNS TXT record with value:

v=spf1 ip4: -all


The policy daemon's own configuration file then gets the following settings:

debugLevel = 1
defaultSeedOnly = 1

HELO_reject = SPF_Not_Pass
Mail_From_reject = Fail

PermError_reject = False
TempError_Defer = False

skip_addresses =,::ffff:,::1//128

FIXME: the skip_addresses field should also list all hosts that I get email forwarded from. For example if I get my email forwarded to this server, then I should list the mail relay servers. A list of these can be found by doing:

ldapsearch -x -LLL -b dc=debian,dc=org -h 'purpose=mail relay' ipHostNumber

Otherwise, senders whose SPF record lists only their own IP and ends in -all will see their mail rejected by this server: the email arrives from the forwarding relay, whose IP is not in their SPF record.


$ apt-get install opendkim opendkim-tools
$ mkdir /etc/mail
$ cd /etc/mail
$ opendkim-genkey -t -s mail -d
$ cat mail.txt


These parameters are set in /etc/opendkim.conf:

Domain
KeyFile /etc/mail/mail.private
Selector mail
Canonicalization relaxed/relaxed




And postfix is configured to pass email through the milter:

milter_default_action = accept
milter_protocol = 2
smtpd_milters = inet:localhost:8891
non_smtpd_milters = inet:localhost:8891

$ service opendkim restart
$ service postfix restart

automatically suspending cpu hungry applications

categories: config

TLDR: Using the awesome window manager to automatically send SIGSTOP and SIGCONT to application windows when they get unfocused or focused, respectively, so that applications do not waste CPU cycles while not in use.

I don't require any fancy looking GUI, so my desktop runs no full-blown desktop environment like Gnome or KDE but only awesome as a light-weight window manager. Usually, the only application windows I have open are rxvt-unicode as my terminal emulator and firefox/iceweasel with the pentadactyl extension as my browser. Thus, I would expect the CPU usage of my idle system to be pretty much zero, but instead firefox decides to constantly eat 10-15%. Probably to update some GIF animations or JavaScript (or nowadays even HTML5 video animations). But I don't need it to do that when I'm not currently looking at my browser window. Disabling all JavaScript is not an option because some websites that I need for uni or work are completely broken without JavaScript, so I have to enable it for those websites.

Solution: send SIGSTOP when my firefox window loses focus and SIGCONT once it gains focus again.

The following addition to my /etc/xdg/awesome/rc.lua does the trick:

local capi = { timer = timer }
client.add_signal("focus", function(c)
    if c.class == "Iceweasel" then
        awful.util.spawn("kill -CONT " .. c.pid)
    end
end)
client.add_signal("unfocus", function(c)
    if c.class == "Iceweasel" then
        local timer_stop = capi.timer { timeout = 10 }
        local send_sigstop = function ()
            timer_stop:stop()
            -- only stop the process if none of its windows regained focus
            if client.focus == nil or client.focus.pid ~= c.pid then
                awful.util.spawn("kill -STOP " .. c.pid)
            end
        end
        timer_stop:add_signal("timeout", send_sigstop)
        timer_stop:start()
    end
end)

Since I'm running Debian, the class is "Iceweasel" and not "Firefox". When the window gains focus, a SIGCONT is sent immediately. I'm executing kill because I don't know how to send UNIX signals from lua directly.
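For comparison, in a language with direct access to UNIX signals the same suspend/resume idea is a one-liner per signal. A minimal Python sketch, using a sleep process as a stand-in for the browser:

```python
# suspend and resume a process by pid without spawning /bin/kill
import os
import signal
import subprocess
import time

proc = subprocess.Popen(["sleep", "60"])  # stand-in for the browser process
os.kill(proc.pid, signal.SIGSTOP)         # suspend: no more CPU cycles
time.sleep(0.1)                           # ... window is unfocused ...
os.kill(proc.pid, signal.SIGCONT)         # resume on focus
proc.terminate()
proc.wait()
```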

When the window loses focus, the SIGSTOP signal is only sent after a 10 second timeout. This is done for several reasons:

  • I don't want firefox to stop in cases where I'm just quickly switching back and forth between it and other application windows.
  • When firefox starts, it doesn't have a window for a short time. Without a timeout, the process would start but immediately get stopped as there is no window to have focus.
  • When using the X paste buffer, the application behind the source window must not be stopped while pasting content from it. I assume that I will not spend more than 10 seconds between marking a string in firefox and pasting it into another window.

With this change, when I now open htop, the process consuming most CPU resources is htop itself. Success!

Another cool advantage is that firefox can now be moved completely into swap space in case I run otherwise memory hungry applications, without it ever requiring any memory from swap until I really use it again.

I haven't encountered any disadvantages of this setup yet. If 10 seconds prove to be too short to copy and paste I can easily extend this delay. Even clicking on links in my terminal works flawlessly - the new tab will just only load once firefox gets focused again.

EDIT: thanks to Helmut Grohne for suggesting to compare the pid instead of the raw client instance to prevent misbehaviour when firefox opens additional windows like the preferences dialog.

temporarily not updated

categories: debian

I'll be moving places twice within the next month and, as I'm hosting the machine that generates the data, I'll temporarily suspend the service until maybe around September. Until then, the service will not be updated and will retain the status as of 2014-07-28. Sorry if that causes any inconvenience. You can write to me if you need help with manually generating the data it provides.


botch updates

categories: debian

My last update about ongoing development of botch, the bootstrap/build ordering tool chain, was four months ago, and just like that one, this post covers several incremental updates. The most interesting news is probably the additional data that the service now provides; this is listed in the next section. All subsequent sections then list the changes under the hood that made these additions possible.

The service used to have botch as a git submodule but now runs botch from its Debian package. This at least proves that the botch Debian package is mature enough to do useful stuff with. In addition to the bootstrapping results by architecture, the service now also hosts several additional services.

Further improvements concern how dependency cycles are now presented in the html overviews. While before, vertices in a cycle were separated by commas as if they were simple package lists, they are now connected by unicode arrows. Dashed arrows indicate build dependencies while solid arrows indicate builds-from relationships. For what it's worth, installation set vertices now contain their installation set in their title attribute.

Debian package

Botch has long depended on features of an unreleased version of dose3, which in turn depended on an unreleased version of libcudf. Both projects have recently made new releases, so I was now able to drop the dose3 git submodule and rely on the host system's dose3 version instead. This also made it possible to create a Debian package of botch which currently sits at Debian mentors. Writing the package also finally made me create a usable install target in the Makefile as well as add stubs for the manpages of the 44 applications that botch currently ships. The actual content of these manpages still has to be written. The only documentation botch currently ships in the botch-doc package is an offline version of the wiki on gitorious. The new page ExamplesGraphs even includes pictures.


By default, botch analyzes the native bootstrapping phase. That is, assume that the initial set of Essential:yes and build-essential packages magically exists and find out how to bootstrap the rest from there through native compilation. But part of the bootstrapping problem is also to create the set of Essential:yes and build-essential packages from nothing via cross compilation. Botch is unable to analyze the cross phase because too many packages cannot satisfy their crossbuild dependencies due to multiarch conflicts. This problem is only about the dependency metadata and not about whether a given source package actually crosscompiles fine in practice.

Helmut Grohne has done great work with rebootstrap, which is run regularly. He convinced me that we need an overview of which packages are blocking the analysis of the cross case, and that it is useful to have a crossbuild order even if it is a fake order, just to get a rough overview of the current situation in Debian Sid.

I wrote a couple of scripts which would run dose-builddebcheck on a repository, analyze which packages fail to satisfy their crossbuild dependencies and why, fix those cases by adjusting package metadata accordingly and repeat until all relevant source packages satisfy their crossbuild dependencies. The result of this can then be used to identify the packages that need to be modified as well as to generate a crossbuild order.

The fixes to the metadata are done in an automatic fashion and do not necessarily reflect the real fix that would solve the problem. Nevertheless, I ended up agreeing that it is better to have a slightly wrong overview than no overview at all.

Minimizing the dependency graph size

Installation sets in the dependency graph are calculated independently of each other. If two binary packages provide A, then dependencies on A in different installation sets might choose different binary packages as providers of A. The same holds for disjunctive dependencies: if a package depends on A | C and another package depends on C | A, then there is no coordination to choose C so as to minimize the overall number of vertices in the graph. I implemented two methods to minimize the impact of cases where the dependency solver has multiple options to satisfy a dependency through Provides and dependency disjunctions.

The first method is inspired by Helmut Grohne. An algorithm goes through all disjunctive binary dependencies and removes all virtual packages, leaving only real packages. Of the remaining real packages, the first one is selected. For build dependencies, the algorithm drops all but the first package in every disjunction. This is also what sbuild does. Unfortunately this solution produces an unsatisfiable dependency situation in most cases. This is because oftentimes it is necessary to select the virtual disjunctive dependency because of a conflict relationship introduced by another package.
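The first method can be sketched in a few lines. The package names below are made up, and real botch of course operates on parsed dependency metadata rather than plain strings:

```python
def flatten_disjunction(alternatives, real_packages):
    """Drop virtual packages from a dependency disjunction and keep
    only the first remaining real package (sbuild-style)."""
    real = [p for p in alternatives if p in real_packages]
    return real[:1]

real_packages = {"gawk", "mawk", "original-awk"}
print(flatten_disjunction(["virtual-awk", "mawk", "gawk"], real_packages))  # -> ['mawk']
```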

The second method involves aspcud, a cudf solver which can optimize a solution by a criteria. This solution is based on an idea by Pietro Abate who implemented the basis for this idea back in 2012. In contrast to a usual cudf problem, binary packages now also depend on the source packages they build from. If we now ask aspcud to find an installation set for one of the base source packages (I chose src:build-essential) then it will return an installation set that includes source packages. As an optimization criteria the number of source packages in the installation set is minimized. This solution would be flawless if there were no conflicts between binary packages. Due to conflicts not all binary packages that must be coinstallable for this strategy to work can be coinstalled. The quick and dirty solution is to remove all conflicts before passing the cudf universe to aspcud. But this also means that the solution does sometimes not work in practice.

Test cases

Botch now finally has a test target in its Makefile. The test target exercises two code paths of each of the two main scripts, and running these covers most parts of botch. Given that I did lots of refactoring in the past weeks, the test cases greatly helped to assure that I didn't break anything in the process.

I also added autopkgtests to the Debian packaging which test the same things as the test target but naturally run the installed version of botch instead. The autopkgtests were a great help in weeding out some last bugs which made botch depend on being executed from its source directory.

Python 3

Reading the suggestions in the Debian python policy, I evaluated the possibility of using Python 3 for the Python scripts in botch. While I was at it, I added transparent decompression for gzip, bz2 and xz based on the file magic, replaced python-apt with python-debian because of bug#748922 and added argparse argument parsing to all scripts.

Unfortunately I had to find out that Python 3 support does not yet seem to be possible for botch for the following reasons:

  • no soap module for Python 3 in Debian (needed for bts access)
  • hash randomization is turned on by default in Python 3 and therefore the graph output of networkx is not deterministic anymore (bug#749710)

Thus I settled for changing the code such that it would be compatible with Python 2 as well as with Python 3. Because of the changed string handling and sys.stdout properties in Python 3 this proved to be tricky. On the other hand this showed me bugs in my code where I was wrongly relying on deterministic dictionary key traversal.
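The fix for nondeterministic dictionary traversal boils down to sorting keys before iterating. A simplified sketch (not the actual botch code):

```python
# emit graph edges in a deterministic order regardless of hash randomization
graph = {"b": {"c"}, "a": {"c", "b"}}
lines = []
for node in sorted(graph):          # sorted() fixes the traversal order
    for dep in sorted(graph[node]):
        lines.append(node + " -> " + dep)
print("\n".join(lines))
```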


mapbender - maps for long-distance travels

categories: openstreetmap

Back in 2007 I stumbled over the "Plus Fours Routefinder", an invention of the 1920s. It's worn on the wrist and allows the user to scroll through a map of the route they planned to take, rolled up on little wooden rollers.

At that point I thought: that's awesome for long trips where you either don't want to take electronics with you or where you are without any electricity for a long time. And creating such rollable custom maps of your route automatically from openstreetmap data should be a breeze! Nevertheless it seems nobody picked up the idea.

Years passed and in a few weeks I'll go on a biking trip along the Weser, a river in northern Germany. For my last multi-day trip (which was through the Odenwald, an area in southern Germany) I printed a big map from openstreetmap data which contained the whole route. Openstreetmap data is fantastic for this because in contrast to commercial maps it not only allows you to print just the area you need but also lets you highlight your planned route and objects you would probably not find in most commercial maps, like supermarkets to stock up on supplies or bicycle repair shops.

Unfortunately such big maps have the disadvantage that to show everything in the amount of detail that you want along your route, they have to be pretty huge and thus easily become an inconvenience because the local plotter can't handle paper as large as DIN A0 or because it's a pain to repeatedly fold and unfold the whole thing every time you want to look at it. Strong winds are also no fun with a huge sheet of paper in your hands. One solution would be to print DIN A4 sized map regions in the desired scale. But that has the disadvantage that either you find yourself going back and forth between subsequent pages because you happen to be right at the border between two pages or you have to print sufficiently large overlaps, resulting in many duplicate map pieces and more pages of paper than you would like to carry with you.

It was then that I remembered the "Plus Fours Routefinder" concept. Given a predefined route it only shows what's important to you: all things close to the route you plan to travel along. Since it's a long continuous roll of paper there is no problem with folding because as you travel along the route you unroll one end and roll up the other. And because it's a long continuous map there is also no need for flipping pages or large overlap regions. There is not even the problem of not finding a big enough sheet of paper because multiple DIN A4 sheets can easily be glued together at their ends to form a long roll.

On the left you see the route we want to take: the bicycle route along the Weser river. If I wanted to print that map on a scale that allows me to see objects in sufficient detail along our route, then I would also see objects in Hamburg (upper right corner) in the same amount of detail. Clearly a waste of ink and paper as the route is never even close to Hamburg.

As the first step, a smooth approximation of the route has to be found. It seems that the best way to do that is to calculate a B-Spline curve approximating the input data with a given smoothness. On the right you can see the approximated curve with a smoothing value of 6. The curve is sampled into 20 linear segments. I calculated the B-Spline using the FITPACK library to which scipy offers a Python binding.
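The spline fitting itself I do through scipy's FITPACK binding; as a dependency-free illustration of the sampling step, here is a sketch that resamples a polyline into a fixed number of equal-length linear segments (my own stand-in, not the actual code):

```python
import math

def resample(points, n_segments):
    """Resample a polyline into n_segments equal-length linear segments."""
    # cumulative arc length at each input vertex
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    out = []
    for i in range(n_segments + 1):
        target = total * i / n_segments
        # find the input segment containing this arc length
        j = 1
        while j < len(dists) - 1 and dists[j] < target:
            j += 1
        f = (target - dists[j - 1]) / (dists[j] - dists[j - 1])
        (x0, y0), (x1, y1) = points[j - 1], points[j]
        out.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
    return out

print(resample([(0, 0), (1, 0), (1, 1)], 4))
```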

The next step is to expand each of the line segments into quadrilaterals. The distance between the vertices of the quadrilaterals and the ends of the line segment they belong to is the same along the whole path and obviously has to be big enough such that every point along the route falls into one quadrilateral. In this example, I draw only 20 quadrilaterals for visual clarity. In practice one wants many more for a smoother approximation.

Using a simple transform, each point of the original map and the original path in each quadrilateral is then mapped to a point inside the corresponding "straight" rectangle. Each target rectangle has the height of the line segment it corresponds to. It can be seen that while the large scale curvature of the path is lost in the result, fine details remain perfectly visible. The assumption here is, that while travelling a path several hundred kilometers long, it does not matter that large scale curvature that one is not able to perceive anyways is not preserved.
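The per-segment part of that transform can be sketched as follows. This is my own simplification: the real code maps whole quadrilaterals of the Mercator-projected map, but the core is projecting a point into the straightened frame of one line segment:

```python
import math

def straighten(p, a, b):
    """Map point p into the frame of segment a->b: returns (t, d) where
    t is the distance travelled along the segment and d is the signed
    perpendicular offset, i.e. the coordinates on the straightened strip."""
    ux, uy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(ux, uy)
    ux, uy = ux / length, uy / length   # unit vector along the segment
    wx, wy = p[0] - a[0], p[1] - a[1]
    t = wx * ux + wy * uy               # along-track coordinate
    d = -wx * uy + wy * ux              # cross-track coordinate
    return t, d

print(straighten((1.0, 1.0), (0.0, 0.0), (2.0, 0.0)))  # -> (1.0, 1.0)
```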

The transformation is done on a Mercator projection of the map itself as well as on the data of the path. Therefore, this method probably doesn't work if you plan to travel to one of the poles.

Currently I transform openstreetmap bitmap data. This is not quite optimal as it leads to text on the map being distorted. It would be just as easy to apply the necessary transformations to raw openstreetmap XML data, but unfortunately I didn't find a way to render the resulting transformed map data as a raster image without setting up a database. I would've thought that it would be possible to have a standalone program reading openstreetmap XML and dumping out raster or svg images without a round trip through a database. Furthermore, tilemill, one of the programs that seems to be the least hassle to set up and produce raster images with, is stuck in an ITP, and the existing packaging attempt fails to produce a non-empty binary package. Since I have no clue about nodejs packaging, I wrote about this to the pkg-javascript-devel list. Maybe I can find a kind soul to help me with it.

Here is a side by side overview that doesn't include the underlying map data but only the path. It shows how small details are preserved in the transformation.

The code that produced the images in this post is very crude, unoptimized and kinda messy. If you don't care, then it can be accessed here.
