$ sudo mmdebstrap unstable ./unstable-chroot
And you'll get a Debian unstable chroot just as debootstrap would create it. It
also supports the --variant option with the minbase and buildd values, which
install the same package sets as debootstrap does.
A list of advantages in contrast to debootstrap:
- it installs only a minimal package set (Essential:yes packages and apt)
- its output is bit-by-bit reproducible (if $SOURCE_DATE_EPOCH is set)
- it can create foreign architecture chroots (without --second-stage)
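The reproducibility point can be illustrated in miniature: a tarball becomes bit-by-bit reproducible once all varying metadata (timestamps, ownership, member order) is pinned down. Here is a hedged sketch of the idea in Python; the file contents and epoch value are invented, and mmdebstrap's actual implementation differs:

```python
import hashlib, io, os, tarfile

def make_tarball(files):
    """Pack {name: bytes} into an in-memory tar with all varying
    metadata (mtime, uid/gid, member order) pinned down."""
    epoch = int(os.environ.get("SOURCE_DATE_EPOCH", "0"))
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in sorted(files.items()):  # deterministic order
            info = tarfile.TarInfo(name)
            info.size = len(data)
            info.mtime = epoch            # clamp timestamps
            info.uid = info.gid = 0       # fixed ownership
            info.uname = info.gname = "root"
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

os.environ["SOURCE_DATE_EPOCH"] = "1545080400"
run1 = hashlib.sha256(make_tarball({"etc/hostname": b"debian\n"})).hexdigest()
run2 = hashlib.sha256(make_tarball({"etc/hostname": b"debian\n"})).hexdigest()
assert run1 == run2  # bit-by-bit identical
```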
You can find the code here:
I thought this was a pretty simple task to solve but I am unable to find any software that fits the above criteria.
The table below shows the result of my research into what's currently available. The columns mark whether the respective software fulfills one of the six criteria from above.
Software | 1 | 2 | 3 | 4 | 5 | 6 |
---|---|---|---|---|---|---|
owncloud | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ |
sparkleshare | ✔ | ✘ | ✘ | ✘ | ✘ | ✔ |
dvcs-autosync | ✔ | ✘ | ✘ | ✘ | ✘ | ✔ |
git annex assistant | ✔ | ✘ | ✘ | ✘ | ✘ | ✔ |
syncthing | ✔ | ✘ | ✘ | ✘ | ✘ | ✔ |
pydio | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ |
seafile | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ |
sandstorm.io | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ |
ipfs | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
bozon | ✔ | ✔ | ✔ | ✘ | ✘ | ✔ |
droppy | ✔ | ✔ | ✔ | ✘ | ✘ | ✔ |
Pydio, Seafile and Sandstorm.io look promising, but they seem to be beasts similar in complexity to ownCloud: they bring features like version tracking, office integration, wikis, synchronization across multiple devices or online editing of files, which are features that I do not need.
I would already be very happy if there was a script which made it easy to create a hard-to-guess symlink to a directory with data tracked by git annex under my www-root and then generated some static HTML to provide a thumbnail view or a photo gallery. Unfortunately, even that solution would not be sufficient, as it would still disallow public upload by anybody to whom I gave the link...
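The hard-to-guess part of such a script is the easy bit; Python's secrets module generates a suitable token. The directory layout below is made up purely for illustration:

```python
import secrets

# 16 random bytes, URL-safe encoded: ~128 bits, infeasible to guess
token = secrets.token_urlsafe(16)

# hypothetical location under the www-root to symlink the annex to
link = "/var/www/share/%s" % token
```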
If you know some software that meets my criteria or would like to submit corrections to the above table, please shoot an email to josch@debian.org. Thanks!
EDIT2: I created this post when Let's Encrypt was still in beta. For a recipe of how to use letsencrypt with pound and without superuser privileges, read the very last section at the bottom.
I just tried out the letsencrypt client Debian packages prepared by Harlan Lieberman-Berg which can be found here:
My server setup uses Pound as a reverse proxy in front of a number of LXC-based containers running the actual services. Furthermore, letsencrypt only supports Nginx and Apache for now, so I had to set things up manually anyway. Here is how.
After installing the Debian packages I built from the above git repositories, I ran the following commands:
$ mkdir -p letsencrypt/etc letsencrypt/lib letsencrypt/log
$ letsencrypt certonly --authenticator manual --agree-dev-preview \
--server https://acme-v01.api.letsencrypt.org/directory --text \
--config-dir letsencrypt/etc --logs-dir letsencrypt/log \
--work-dir letsencrypt/lib --email josch@mister-muffin.de \
--domains mister-muffin.de --domains blog.mister-muffin.de \
--domains [...]
I created the letsencrypt directory structure to be able to run letsencrypt
as a normal user. Otherwise, running this command would require access to
/etc/letsencrypt and others. Having to set this up and pass all these
parameters is a bit bothersome, but there is an upstream issue about making
this easier when using the "certonly" option, which in principle should not
require superuser privileges.
The --server
option is necessary for now because "Let's Encrypt" is still in
beta and one needs to register for
it.
Without the --server
option one will get an untrusted certificate from the
"happy hacker fake CA".
The letsencrypt program will then ask for my agreement to the Terms of
Service and then, for each domain I specified with the --domains option,
present me with the token content and the location under the domain where it
expects to find this content. Each time, this looks like this:
-------------------------------------------------------------------------------
NOTE: The IP of this machine will be publicly logged as having requested this
certificate. If you're running letsencrypt in manual mode on a machine that is
not your server, please ensure you're okay with that.
Are you OK with your IP being logged?
-------------------------------------------------------------------------------
(Y)es/(N)o: Y
Make sure your web server displays the following content at
http://mister-muffin.de/.well-known/acme-challenge/XXXX before continuing:
{"header": {"alg": "RS256", "jwk": {"e": "AQAB", "kty": "RSA", "n": "YYYY"}}, "payload": "ZZZZ", "signature": "QQQQ"}
Content-Type header MUST be set to application/jose+json.
If you don't have HTTP server configured, you can run the following
command on the target server (as root):
mkdir -p /tmp/letsencrypt/public_html/.well-known/acme-challenge
cd /tmp/letsencrypt/public_html
echo -n '{"header": {"alg": "RS256", "jwk": {"e": "AQAB", "kty": "RSA", "n": "YYYY"}}, "payload": "ZZZZ", "signature": "QQQQ"}' > .well-known/acme-challenge/XXXX
# run only once per server:
$(command -v python2 || command -v python2.7 || command -v python2.6) -c \
"import BaseHTTPServer, SimpleHTTPServer; \
SimpleHTTPServer.SimpleHTTPRequestHandler.extensions_map = {'': 'application/jose+json'}; \
s = BaseHTTPServer.HTTPServer(('', 80), SimpleHTTPServer.SimpleHTTPRequestHandler); \
s.serve_forever()"
Press ENTER to continue
For brevity I replaced any large base64 encoded chunks of the messages with
YYYY
, ZZZZ
and QQQQ
. The token location is abbreviated with XXXX
.
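The suggested one-liner is Python 2; on systems that only ship Python 3 the same trick works with http.server. The following is my hedged adaptation, not something the letsencrypt client prints; binding port 80 still requires root, which is why nothing is started automatically here:

```python
import http.server

class JoseHandler(http.server.SimpleHTTPRequestHandler):
    # serve extension-less token files with the Content-Type the
    # validation server requires
    extensions_map = {"": "application/jose+json"}

def serve(port=80):
    # run this from the public_html directory; port 80 needs root
    http.server.HTTPServer(("", port), JoseHandler).serve_forever()
```

Calling serve() from /tmp/letsencrypt/public_html would replace the Python 2 invocation.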
After temporarily stopping Pound on my webserver I created the directory
/tmp/letsencrypt/public_html/.well-known/acme-challenge
and then opened two
shells on my server, both at /tmp/letsencrypt/public_html
. In one, I kept a tiny HTTP server running (like the suggested Python
SimpleHTTPServer, which works as long as Python is installed). In the other I
copy-pasted the echo line that the letsencrypt program suggested I run.
I had to copy-paste that echo command for each domain I wanted to verify. This
could easily be automated, so I filed an issue about this with upstream.
It seems that the letsencrypt servers query each of these tokens twice: once right after hitting enter on the message above, and a second time once all tokens are in place.
At the end of this ordeal I get:
2015-11-04 11:12:18,409:WARNING:letsencrypt.client:Non-standard path(s), might not work with crontab installed by your operating system package manager
IMPORTANT NOTES:
- If you lose your account credentials, you can recover through
e-mails sent to josch@mister-muffin.de.
- Congratulations! Your certificate and chain have been saved at
letsencrypt/etc/live/mister-muffin.de/fullchain.pem. Your cert will
expire on 2016-02-02. To obtain a new version of the certificate in
the future, simply run Let's Encrypt again.
- Your account credentials have been saved in your Let's Encrypt
configuration directory at letsencrypt/etc. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Let's
Encrypt so making regular backups of this folder is ideal.
I can now scp the content of letsencrypt/etc/live/mister-muffin.de/*
to my
server. Unfortunately, Pound (and also my ejabberd XMPP server) requires the
private key to be in the same file as the certificate and the chain, so on the
server I also had to do:
cat /etc/ssl/private/privkey.pem /etc/ssl/private/fullchain.pem > /etc/ssl/private/private_fullchain.pem
And edit the Pound config to use /etc/ssl/private/private_fullchain.pem
. But
that's all, folks!
EDIT
It seems that manually copying over the echo commands as I described above is
not necessary. Instead of using the certonly
plugin, I can use the webroot
plugin. That plugin takes the --webroot-path
option and will copy the tokens
to there. Since my webroot is on a remote machine, I could just mount it
locally via sshfs and pass the mountpoint as --webroot-path
.
That I didn't realize that the webroot plugin (and not the certonly plugin) does what I want is easily explained: the only documentation of the webroot plugin in the help output, and in the man page generated from it, is the phrase "Webroot Authenticator", which is not very helpful.
Another user seems to have run into similar problems. Better documenting the plugins so that these situations can be prevented in the future is tracked in this upstream bug.
EDIT2
Now that letsencrypt is out for everybody, let's update the instructions with
what I learned. Firstly, since we don't want a long downtime, we add the
following section to /etc/pound/pound.cfg:
Service
    URL "^/.well-known/acme-challenge/"
    BackEnd
        Address 127.0.0.1
        Port 8000
    End
End
This will make sure that all requests to /.well-known/acme-challenge/
and
below are redirected to a server running on port 8000. That service will be a
temporary webserver which we will only switch on for the purpose of retrieving
new certificates. So on my server I run:
$ mkdir ~/letsencrypt
$ (cd ~/letsencrypt && python3 -m http.server 8000)
Now on my laptop I mount that directory via sshfs locally:
$ sshfs fulda:/root/letsencrypt ~/letsencrypt/fulda
And finally I use the webroot
authenticator to automatically retrieve and
validate all my certificates. No manual intervention needed anymore:
$ letsencrypt certonly --authenticator webroot --text \
--config-dir letsencrypt/etc --logs-dir letsencrypt/log \
--work-dir letsencrypt/lib --email josch@mister-muffin.de \
--webroot-path ~/letsencrypt/fulda --domains mister-muffin.de \
--domains [...]
Now I can quit the python webserver running on my server and copy the generated certificates into their right locations.
newuidmap and newgidmap are used). I wrote a Perl script which documents how this is done in practice.
This script is nearly equivalent to using the existing commands lxc-usernsexec
[opts] -- unshare [opts] -- COMMAND
except that these two together cannot be
used to mount a new proc. Apart from this problem, this Perl script might also
be useful by itself because it is architecture independent and easily
inspectable for the curious mind without resorting to sources.debian.net (it is
heavily documented at nearly 2 lines of comments per line of code on average).
It can be retrieved here at
https://gitlab.mister-muffin.de/josch/user-unshare/blob/master/user-unshare
Long story: Nearly two years after my last rant about everything needing
superuser privileges in Linux,
I'm still interested in techniques that let me do more things without becoming
root. Helmut Grohne had told me for a while about unshare(), or user namespaces
as the right way to have things like chroot without root. There are also
reports of LXC containers working without root privileges but they are hard to
come by. A couple of days ago I had some time again, so Helmut helped me to get
through the major blockers that were so far stopping me from using unshare in a
meaningful way without executing everything with sudo
.
My main motivation at that point was to have dpkg-buildpackage, when executed
by sbuild, run with an unshared network namespace and thus without network
access (except for the loopback interface), because like pbuilder I wanted
sbuild to enforce the rule not to access any remote resources during the build.
After several evenings of investigating and doctoring at the Perl script I
mentioned initially, I came to the conclusion that the only place that can
unshare the network namespace without disrupting anything is schroot itself.
This is because unsharing inside the chroot will fail because
dpkg-buildpackage is run with non-root privileges and thus the user namespace
has to be unshared. But this then will destroy all ownership information. But
even if that wasn't the case, the chroot itself is unlikely to have (and also
should not) tools like ip
or newuidmap
and newgidmap
installed. Unsharing
the schroot call itself also will not work. Again we first need to unshare the
user namespace and then schroot will complain about wrong ownership of its
configuration file /etc/schroot/schroot.conf
. Luckily, when contacting Roger
Leigh about this wishlist feature in
bug#802849 I was told that this was already
implemented in its git master \o/. So this particular problem seems to be taken
care of and once the next schroot release happens, sbuild will make use of it
and have unshare --net
capabilities just like pbuilder
already had since
last year.
With the sbuild case taken care of, the rest of this post will introduce the
Perl script I wrote.
The name user-unshare
is really arbitrary. I just needed some identifier for
the git repository and a filename.
The most important discovery I made was that Debian disables unprivileged user
namespaces by default with the patch
add-sysctl-to-disallow-unprivileged-CLONE_NEWUSER-by-default.patch
to the
Linux kernel. To enable it, one has to first either do
echo 1 | sudo tee /proc/sys/kernel/unprivileged_userns_clone > /dev/null
or
sudo sysctl -w kernel.unprivileged_userns_clone=1
The tool tries to be like unshare(1) but with the power of lxc-usernsexec(1) to
map more than one id into the new user namespace by using the programs
newgidmap
and newuidmap
. Or in other words: This tool tries to be like
lxc-usernsexec(1) but with the power of unshare(1) to unshare more than just
the user and mount namespaces. It is nearly equal to calling:
lxc-usernsexec [opts] -- unshare [opts] -- COMMAND
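Both tools ultimately write the same kernel interface: /proc/PID/uid_map and /proc/PID/gid_map (see user_namespaces(7)). A sketch of the mapping format follows; the id ranges are examples, and writing more than the single line mapping one's own id is exactly what the setuid helpers newuidmap and newgidmap are needed for:

```python
def map_lines(mappings):
    """Format (id-inside-ns, id-outside-ns, count) triples the way
    /proc/PID/uid_map and /proc/PID/gid_map expect them."""
    return "".join("%d %d %d\n" % m for m in mappings)

# roughly what `lxc-usernsexec -m b:0:1000:1 -m b:1:558752:65536` sets up
text = map_lines([(0, 1000, 1), (1, 558752, 65536)])
```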
Its main reasons of existence are:
- I hoped that systemd-nspawn could do what I wanted, but it seems that its requirement for being run as root will not change any time soon.
- Another tool in Debian that offers to do chroot without superuser privileges is linux-user-chroot, but that one cheats by being suid root.
Had I found lxc-usernsexec
earlier I would've probably not written this. But
after I found it I happily used it to get an even better understanding of the
matter and further improve the comments in my code. I started writing my own
tool in Perl because that's the language sbuild was written in and as mentioned
initially, I intended to use this script with sbuild. Now that the sbuild
problem is taken care of, this is not so important anymore, but I like it when
I can read the code of simple programs I run directly from /usr/bin without
having to retrieve the source code first or use sources.debian.net.
The only thing I wasn't able to figure out is how to properly mount proc into
my new mount namespace. I found a workaround that works by first mounting a new
proc to /proc
and then bind-mounting /proc
to whatever new location for
proc is requested. I didn't figure out how to do this without mounting to
/proc
first partly also because this doesn't work at all when using
lxc-usernsexec
and unshare
together. In this respect, this Perl script is a
bit more powerful than those two tools together. I suppose that the reason is
that unshare
wasn't written with being called without superuser
privileges in mind. If you have an idea what could be wrong, the code has a big
FIXME
about this issue.
Finally, here a demonstration of what my script can do. Because of the /proc
bug, lxc-usernsexec
and unshare
together are not able to do this but it
might also be that I'm just not using these tools in the right way. The
following will give you an interactive shell in an environment created from one
of my sbuild chroot tarballs:
$ mkdir -p /tmp/buildroot/proc
$ ./user-unshare --mount-proc=/tmp/buildroot/proc --ipc --pid --net \
--uts --mount --fork -- sh -c 'ip link set lo up && ip addr && \
hostname hoothoot-chroot && \
tar -C /tmp/buildroot -xf /srv/chroot/unstable-amd64.tar.gz; \
/usr/sbin/chroot /tmp/buildroot /sbin/runuser -s /bin/bash - josch && \
umount /tmp/buildroot/proc && rm -rf /tmp/buildroot'
(unstable-amd64-sbuild)josch@hoothoot-chroot:/$ whoami
josch
(unstable-amd64-sbuild)josch@hoothoot-chroot:/$ hostname
hoothoot-chroot
(unstable-amd64-sbuild)josch@hoothoot-chroot:/$ ls -lha /proc | head
total 0
dr-xr-xr-x 218 nobody nogroup 0 Oct 25 19:06 .
drwxr-xr-x 22 root root 440 Oct 1 08:42 ..
dr-xr-xr-x 9 root root 0 Oct 25 19:06 1
dr-xr-xr-x 9 josch josch 0 Oct 25 19:06 15
dr-xr-xr-x 9 josch josch 0 Oct 25 19:06 16
dr-xr-xr-x 9 root root 0 Oct 25 19:06 7
dr-xr-xr-x 9 josch josch 0 Oct 25 19:06 8
dr-xr-xr-x 4 nobody nogroup 0 Oct 25 19:06 acpi
dr-xr-xr-x 6 nobody nogroup 0 Oct 25 19:06 asound
Of course instead of running this long command we can also write a small shell script and execute that instead. The following does the same things as the long command above but adds some comments for further explanation:
#!/bin/sh
set -exu
# I'm using /tmp because I have it mounted as a tmpfs
rootdir="/tmp/buildroot"
# bring the loopback interface up
ip link set lo up
# show that the loopback interface is really up
ip addr
# make use of the UTS namespace being unshared
hostname hoothoot-chroot
# extract the chroot tarball. This must be done inside the user namespace for
# the file permissions to be correct.
#
# tar will fail to call mknod and to change the permissions of /proc but we are
# ignoring that
tar -C "$rootdir" -xf /srv/chroot/unstable-amd64.tar.gz || true
# run chroot and inside, immediately drop permissions to the user "josch" and
# start an interactive shell
/usr/sbin/chroot "$rootdir" /sbin/runuser -s /bin/bash - josch
# unmount /proc and remove the temporary directory
umount "$rootdir/proc"
rm -rf "$rootdir"
and then:
$ mkdir -p /tmp/buildroot/proc
$ ./user-unshare --mount-proc=/tmp/buildroot/proc --ipc --pid --net --uts --mount --fork -- ./chroot.sh
As mentioned in the beginning, the tool is nearly equivalent to calling
lxc-usernsexec [opts] -- unshare [opts] -- COMMAND
but because of the problem
with mounting proc (mentioned earlier), lxc-usernsexec
and unshare
cannot
be used with the above example. If one tries anyway, one will only get:
$ lxc-usernsexec -m b:0:1000:1 -m b:1:558752:1 -- unshare --mount-proc=/tmp/buildroot/proc --ipc --pid --net --uts --mount --fork -- ./chroot.sh
unshare: mount /tmp/buildroot/proc failed: Invalid argument
I'd be interested in finding out why that is and how to fix it.
And a super big thank you to Roger Leigh who, despite having resigned from Debian, was always available to give extremely helpful hints, tips, opinions and guidance with respect to sbuild development. Thank you!
Here is a list of the major changes since the last release:
- --arch-all-only to build arch:all packages
- SBUILD_CONFIG allows to specify a custom configuration file
- --build-path to set a deterministic build path
- --extra-repository-key for extra apt keys
- --build-dep-resolver=aspcud for an aspcud based resolver
- %SBUILD_SHELL produces an interactive shell
- --build-deps-failed-commands, --build-failed-commands and --anything-failed-commands for more hooks

Thanks to akira for the confetti to celebrate the occasion!
$ apt-get install postfix dovecot-imapd
Right after having finished the installation I was able to receive email (but
only in /var/mail in mbox format) and send email (but not from any other
host). So while I expected a pretty complex setup, it turned out to boil down
to just adjusting some configuration parameters.
The two interesting files to configure postfix are /etc/postfix/main.cf
and
/etc/postfix/master.cf
. A commented version of the former exists in
/usr/share/postfix/main.cf.dist
. Alternatively, there is the ~600k word
strong man page postconf(5). The latter file is documented in master(5).
I changed the following in my main.cf
@@ -37,3 +37,9 @@
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
+
+home_mailbox = Mail/
+smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination permit_sasl_authenticated
+smtpd_sasl_type = dovecot
+smtpd_sasl_path = private/auth
+smtp_helo_name = my.reverse.dns.name.com
At this point, also make sure that the parameters smtpd_tls_cert_file
and
smtpd_tls_key_file
point to the right certificate and private key file. So
either change these values or replace the content of
/etc/ssl/certs/ssl-cert-snakeoil.pem
and
/etc/ssl/private/ssl-cert-snakeoil.key
.
The home_mailbox parameter sets the default path for incoming mail. Since
there is no leading slash, this puts mail into $HOME/Mail for each user. The
trailing slash is important as it specifies "qmail-style delivery", which
means maildir.
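Maildir is just a directory layout: one file per message, sorted into tmp/, new/ and cur/ subdirectories. Python's stdlib can create and fill one, which makes the format easy to inspect; the path below is a throwaway temporary directory, not the real $HOME/Mail:

```python
import mailbox, os, tempfile

root = tempfile.mkdtemp()
path = os.path.join(root, "Mail")

# create the maildir and deliver one message into it
md = mailbox.Maildir(path, create=True)
md.add(b"Subject: hello\n\nbody\n")

# the standard three subdirectories exist, one file per message
assert {"cur", "new", "tmp"} <= set(os.listdir(path))
assert len(md) == 1
```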
The default of the smtpd_recipient_restrictions
parameter is
permit_mynetworks reject_unauth_destination
so this just adds the
permit_sasl_authenticated
option. This is necessary to allow users to send
email when they successfully verified their login through dovecot. The dovecot
login verification is activated through the smtpd_sasl_type
and
smtpd_sasl_path
parameters.
I found it necessary to set the smtp_helo_name
parameter to the reverse DNS
of my server. This was necessary because many other email servers would only
accept email from a server with a valid reverse DNS entry. My hosting provider
charges USD 7.50 per month to change the default reverse DNS name, so the easy
solution is to instead just adjust the name announced in the SMTP helo.
The file master.cf
is used to enable the submission
service. The following
diff just removes the comment character from the appropriate section.
@@ -13,12 +13,12 @@
#smtpd pass - - - - - smtpd
#dnsblog unix - - - - 0 dnsblog
#tlsproxy unix - - - - 0 tlsproxy
-#submission inet n - - - - smtpd
-# -o syslog_name=postfix/submission
-# -o smtpd_tls_security_level=encrypt
-# -o smtpd_sasl_auth_enable=yes
-# -o smtpd_client_restrictions=permit_sasl_authenticated,reject
-# -o milter_macro_daemon_name=ORIGINATING
+submission inet n - - - - smtpd
+ -o syslog_name=postfix/submission
+ -o smtpd_tls_security_level=encrypt
+ -o smtpd_sasl_auth_enable=yes
+ -o smtpd_client_restrictions=permit_sasl_authenticated,reject
+ -o milter_macro_daemon_name=ORIGINATING
#smtps inet n - - - - smtpd
# -o syslog_name=postfix/smtps
# -o smtpd_tls_wrappermode=yes
Since the above configuration changes made postfix store email in a different
location and format than the default, dovecot has to be informed about these
changes as well. This is done in /etc/dovecot/conf.d/10-mail.conf
. The second
configuration change enables postfix to authenticate users through dovecot in
/etc/dovecot/conf.d/10-master.conf
. For SSL one should look into
/etc/dovecot/conf.d/10-ssl.conf
and either adapt the parameters ssl_cert
and ssl_key
or store the correct certificate and private key in
/etc/dovecot/dovecot.pem
and /etc/dovecot/private/dovecot.pem
,
respectively.
The dovecot-core
package (which dovecot-imapd
depends on) ships tons of
documentation. The file
/usr/share/doc/dovecot-core/dovecot/documentation.txt.gz
gives an overview of
what resources are available. The path
/usr/share/doc/dovecot-core/dovecot/wiki
contains a snapshot of the dovecot
wiki at http://wiki2.dovecot.org/. The example configurations seem to be the
same files as in /etc/
which are already well commented.
The following diff changes the default email location in /var/mail
to a
maildir in ~/Mail
as configured for postfix above.
@@ -27,7 +27,7 @@
#
# <doc/wiki/MailLocation.txt>
#
-mail_location = mbox:~/mail:INBOX=/var/mail/%u
+mail_location = maildir:~/Mail
# If you need to set multiple mailbox locations or want to change default
# namespace settings, you can do it by defining namespace sections.
And this enables the authentication socket for postfix:
@@ -93,9 +93,11 @@
}
# Postfix smtp-auth
- #unix_listener /var/spool/postfix/private/auth {
- # mode = 0666
- #}
+ unix_listener /var/spool/postfix/private/auth {
+ mode = 0660
+ user = postfix
+ group = postfix
+ }
# Auth process is run as this user.
#user = $default_internal_user
Now email will automatically be put into the ~/Mail directory of the receiver. So a user has to be created for whom one wants to receive mail...
$ adduser josch
...and any aliases for it have to be configured in /etc/aliases.
@@ -1,2 +1,4 @@
-# See man 5 aliases for format
-postmaster: root
+root: josch
+postmaster: josch
+hostmaster: josch
+webmaster: josch
After editing /etc/aliases
, the command
$ newaliases
has to be run. More can be read in the aliases(5) man page.
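The aliases(5) format is simple enough that a toy parser fits in a few lines. This sketch ignores continuation lines and :include: targets, so it only illustrates the name-to-target mapping:

```python
def parse_aliases(text):
    """Map alias names to their comma-separated targets,
    skipping comments and blank lines."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        name, _, targets = line.partition(":")
        table[name.strip()] = [t.strip() for t in targets.split(",")]
    return table

aliases = parse_aliases("root: josch\npostmaster: josch\n# comment\n")
```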
Everything is done and now postfix and dovecot have to be informed about the changes. There are many ways to do that: either restart the services, reboot, or just do:
$ postfix reload
$ doveadm reload
$ apt-get install postfix-policyd-spf-python
The following goes into /etc/postfix/main.cf:
policy-spf_time_limit = 3600s
And this service entry into /etc/postfix/master.cf:
policy-spf unix - n n - - spawn user=nobody argv=/usr/bin/policyd-spf
Then publish a DNS TXT record for the domain with the value:
v=spf1 ip4:62.75.219.19 -all
The configuration in /etc/postfix-policyd-spf-python/policyd-spf.conf:
debugLevel = 1
defaultSeedOnly = 1
HELO_reject = SPF_Not_Pass
Mail_From_reject = Fail
PermError_reject = False
TempError_Defer = False
skip_addresses = 127.0.0.0/8,::ffff:127.0.0.0//104,::1//128
FIXME: the skip_addresses
field should also list all hosts that I get email
forwarded from. For example if I get my josch@debian.org email forwarded to
this server, then I should list the debian.org mail relay servers. A list of
these can be found by doing:
ldapsearch -x -LLL -b dc=debian,dc=org -h db.debian.org 'purpose=mail relay' ipHostNumber
Otherwise, senders with an SPF record with only their own IP and a final -all
will see their mail rejected by the server. This is because the email was
forwarded by the debian.org relay but that IP was not in their SPF record.
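The failure mode is easy to see with a toy evaluation of an ip4-only record like the one above: the forwarding relay's address is simply not in the sender's list. This sketch handles only ip4 mechanisms; real policyd-spf implements the full RFC 7208 set (a, mx, include, redirect, ...), and the relay IP below is invented:

```python
import ipaddress

def spf_ip4_pass(record, sender_ip):
    """Return True if sender_ip matches an ip4: mechanism of the record."""
    for term in record.split():
        if term.startswith("ip4:"):
            net = ipaddress.ip_network(term[4:], strict=False)
            if ipaddress.ip_address(sender_ip) in net:
                return True
    return False

record = "v=spf1 ip4:62.75.219.19 -all"
assert spf_ip4_pass(record, "62.75.219.19")     # direct delivery: pass
assert not spf_ip4_pass(record, "203.0.113.7")  # a forwarding relay: -all rejects
```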
$ apt-get install opendkim opendkim-tools
$ mkdir /etc/mail
$ cd /etc/mail
$ opendkim-genkey -t -s mail -d mister-muffin.de
$ cat mail.txt
The following goes into /etc/opendkim.conf:
Domain mister-muffin.de
KeyFile /etc/mail/mail.private
Selector mail
Canonicalization relaxed/relaxed
And in /etc/default/opendkim:
SOCKET="inet:8891@localhost"
Finally, in /etc/postfix/main.cf:
milter_default_action = accept
milter_protocol = 2
smtpd_milters = inet:localhost:8891
non_smtpd_milters = inet:localhost:8891
$ service opendkim restart
$ service postfix restart
The idea: send SIGSTOP and SIGCONT to application windows when they get
unfocused or focused, respectively, to let the application not waste CPU
cycles when not in use.
I don't require any fancy looking GUI, so my desktop runs no full-blown desktop environment like Gnome or KDE but instead only awesome as a light-weight window manager. Usually, the only application windows I have open are rxvt-unicode as my terminal emulator and firefox/iceweasel with the pentadactyl extension as my browser. Thus, I would expect that CPU usage of my idle system would be pretty much zero but instead firefox decides to constantly eat 10-15%. Probably to update some GIF animations or JavaScript (or nowadays even HTML5 video animations). But I don't need it to do that when I'm not currently looking at my browser window. Disabling all JavaScript is no option because some websites that I need for uni or work are just completely broken without JavaScript, so I have to enable it for those websites.
Solution: send SIGSTOP
when my firefox window looses focus and send SIGCONT
once it gains focus again.
The following addition to my /etc/xdg/awesome/rc.lua
does the trick:
local capi = { timer = timer }
client.add_signal("focus", function(c)
if c.class == "Iceweasel" then
awful.util.spawn("kill -CONT " .. c.pid)
end
end)
client.add_signal("unfocus", function(c)
if c.class == "Iceweasel" then
local timer_stop = capi.timer { timeout = 10 }
local send_sigstop = function ()
timer_stop:stop()
if client.focus.pid ~= c.pid then
awful.util.spawn("kill -STOP " .. c.pid)
end
end
timer_stop:add_signal("timeout", send_sigstop)
timer_stop:start()
end
end)
Since I'm running Debian, the class is "Iceweasel" and not "Firefox". When the
window gains focus, a SIGCONT
is sent immediately. I'm executing kill
because I don't know how to send UNIX signals from lua directly.
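What the kill calls do can be reproduced outside of awesome with any throwaway process; this is only a demonstration of the signal semantics, not part of the rc.lua setup. On Linux, state T in /proc/PID/stat means the process is stopped:

```python
import os, signal, subprocess, time

def proc_state(pid):
    # the field after the "(comm)" part of /proc/PID/stat is the state letter
    with open("/proc/%d/stat" % pid) as f:
        return f.read().rsplit(")", 1)[1].split()[0]

child = subprocess.Popen(["sleep", "30"])
os.kill(child.pid, signal.SIGSTOP)   # what happens on unfocus
time.sleep(0.1)
stopped = proc_state(child.pid)      # "T": stopped, consumes no CPU
os.kill(child.pid, signal.SIGCONT)   # what happens on focus
time.sleep(0.1)
running = proc_state(child.pid)      # back to sleeping in sleep(1)
child.terminate()
child.wait()
```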
When the window loses focus, the SIGSTOP signal is only sent after a 10
second timeout. This is done for several reasons:
With this change, when I now open htop
, the process consuming most CPU
resources is htop itself. Success!
Another cool advantage is that firefox can now be moved completely into swap space in case I run otherwise memory-hungry applications, without it requiring any memory from swap until I really use it again.
I haven't encountered any disadvantages of this setup yet. If 10 seconds prove to be too short to copy and paste I can easily extend this delay. Even clicking on links in my terminal works flawlessly - the new tab will just only load once firefox gets focused again.
EDIT: thanks to Helmut Grohne for suggesting to compare the pid instead of the raw client instance to prevent misbehaviour when firefox opens additional windows like the preferences dialog.
The bootstrap.debian.net service used to have botch as a git submodule but now runs botch from its Debian package. This at least proves that the botch Debian package is mature enough to do useful stuff with it. In addition to the bootstrapping results by architecture, bootstrap.debian.net now also hosts the following additional services:
Further improvements concern how dependency cycles are now presented in the
HTML overviews. While before, vertices in a cycle were separated by commas as
if they were simple package lists, vertices are now connected by unicode
arrows. Dashed arrows indicate build dependencies while solid arrows indicate
builds-from relationships. For what it's worth, installation set vertices now
contain their installation set in their title attribute.
Botch has long depended on features of an unreleased version of dose3
which
in turn depended on an unreleased version of libcudf. Both projects have
recently made new releases so that I was now able to drop the dose3
git
submodule and rely on the host system's dose3
version instead. This also made
it possible to create a Debian package of botch which currently sits at Debian
mentors. Writing the package also finally made me create a usable
install
target in the Makefile
as well as adding stubs for the manpages of
the 44 applications that botch currently ships. The actual content of these
manpages still has to be written. The only documentation botch currently ships
in the botch-doc
package is an offline version of the wiki on gitorious.
The new page ExamplesGraphs even includes pictures.
By default, botch analyzes the native bootstrapping phase. That is, assume that
the initial set of Essential:yes
and build-essential
packages magically
exists and find out how to bootstrap the rest from there through native
compilation. But part of the bootstrapping problem is also to create the set of
Essential:yes
and build-essential
packages from nothing via cross
compilation. Botch is unable to analyze the cross phase because too many
packages cannot satisfy their crossbuild dependencies due to multiarch
conflicts. This problem is only about the dependency metadata and not about
whether a given source package actually crosscompiles fine in practice.
Helmut Grohne has done great work with rebootstrap which is regularly run by jenkins.debian.net. He convinced me that we need an overview of what packages are blocking the analysis of the cross case and that it was useful to have a crossbuild order even if that was a fake order just to have a rough overview of the current situation in Debian Sid.
I wrote a couple of scripts which would run dose-builddebcheck
on a
repository, analyze which packages fail to satisfy their crossbuild
dependencies and why, fix those cases by adjusting package metadata accordingly
and repeat until all relevant source packages satisfy their crossbuild
dependencies. The result of this can then be used to identify the packages that
need to be modified as well as to generate a crossbuild order.
The fixes to the metadata are done in an automatic fashion and do not necessarily reflect the real fix that would solve the problem. Nevertheless, I ended up agreeing that it is better to have a slightly wrong overview than no overview at all.
Installation sets in the dependency graph are calculated independent from each
other. If two binary packages provide A
, then dependencies on A
in
different installation sets might choose different binary packages as providers
of A
. The same holds for disjunctive dependencies. If a package depends on A
| C
and another package depends on C | A
then there is no coordination to
choose C
so to minimize the overall amount of vertices in the graph. I
implemented two methods to minimize the impact of cases where the dependency
solver has multiple options to satisfy a dependency through Provides
and
dependency disjunctions.
The first method is inspired by Helmut Grohne. An algorithm goes through all disjunctive binary dependencies and removes all virtual packages, leaving only real packages. Of the remaining real packages, the first one is selected. For build dependencies, the algorithm drops all but the first package in every disjunction. This is also what sbuild does. Unfortunately this solution produces an unsatisfiable dependency situation in most cases. This is because oftentimes it is necessary to select the virtual disjunctive dependency because of a conflict relationship introduced by another package.
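The first method can be sketched as a tiny filter; the package names below are invented, and real botch operates on parsed dependency graphs rather than plain lists:

```python
def first_real_alternative(disjunction, real_packages):
    """Drop virtual packages from a dependency disjunction and
    keep the first remaining real package (None if none is left)."""
    reals = [pkg for pkg in disjunction if pkg in real_packages]
    return reals[0] if reals else None

real_packages = {"mawk", "gawk", "original-awk"}
# "awk" is only a virtual package here, so it is dropped
chosen = first_real_alternative(["awk", "mawk", "gawk"], real_packages)
```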
The second method involves aspcud
, a cudf solver which can optimize a
solution by a criteria. This solution is based on an idea by Pietro Abate who
implemented the basis for this idea back in 2012. In contrast to a usual cudf
problem, binary packages now also depend on the source packages they build
from. If we now ask aspcud
to find an installation set for one of the base
source packages (I chose src:build-essential
) then it will return an
installation set that includes source packages. As an optimization criteria the
number of source packages in the installation set is minimized. This solution
would be flawless if there were no conflicts between binary packages. Due to
conflicts not all binary packages that must be coinstallable for this strategy
to work can be coinstalled. The quick and dirty solution is to remove all
conflicts before passing the cudf universe to aspcud
. But this also means
that the solution does sometimes not work in practice.
Botch now finally has a test
target in its Makefile
. The test
target
tests two code paths of the native.sh
script and the cross.sh
script.
Running these two scripts covers testing most parts of botch. Given that I did
lots of refactoring in the past weeks, the test cases greatly helped to assure
that I didn't break anything in the process.
I also added autopkgtests to the Debian packaging which test the same
things as the test target but naturally run the installed version of botch
instead. The autopkgtests were a great help in weeding out some last bugs
which made botch depend on being executed from its source directory.
Reading the suggestions in the Debian python policy I evaluated the
possibility to use Python 3 for the Python scripts in botch. While I was at it
I added transparent decompression for gzip, bz2 and xz based on the file magic,
replaced python-apt with python-debian because of bug#748922 and added
argparse
argument parsing to all scripts.
Unfortunately I had to find out that Python 3 support does not yet seem to be possible for botch for the following reasons:
Thus I settled for changing the code such that it would be compatible with
Python 2 as well as with Python 3. Because of the changed string handling and
sys.stdout
properties in Python 3 this proved to be tricky. On the other
hand this showed me bugs in my code where I was wrongly relying on
deterministic dictionary key traversal.
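The dictionary issue deserves a tiny illustration: any code that serializes a dict by straight iteration inherits the interpreter's key order, while sorting the keys makes the output reproducible across Python 2 and 3 (package names invented):

```python
pkgs = {"zsh": ["libc6"], "awk": ["libc6"], "gcc": ["cpp", "libc6"]}

# iteration order of pkgs itself varies between interpreter versions;
# sorted() pins the traversal down
lines = ["%s: %s" % (name, " ".join(pkgs[name])) for name in sorted(pkgs)]
output = "\n".join(lines)
```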