Looking for self-hosted filesharing software

categories: blog, debian

The owncloud package was removed from Debian unstable and testing. I am thus now looking for an alternative. Unfortunately, finding such a replacement seems to be harder than I initially thought, even though I only use a very small subset of what owncloud provides. What I require is software which allows me to:

  1. upload a directory of files of any type to my server (no "distributed" filesharing where I have to stay online with my laptop)
  2. share the content of that directory via HTTP (no requirement to install any additional software other than a web browser)
  3. let the share-links be private (no possibility to infer the location of other shares)
  4. allow users to browse that directory (image thumbnails or a photo gallery would be nice)
  5. allow me to allow anonymous users to upload their own content into that directory (also only requiring their web browser)
  6. already in Debian or easy to package and maintain due to low complexity (I don't have enough time to become the next "owncloud maintainer")

I thought this was a pretty simple problem to solve, but I am unable to find any software that fits the above criteria.

The table below shows the results of my research into what is currently available. The columns mark whether the respective software fulfills each of the six criteria above.

Software               1  2  3  4  5  6
owncloud
sparkleshare
dvcs-autosync
git annex assistant
syncthing
pydio
seafile
sandstorm.io
ipfs
bozon
droppy

Pydio, seafile and sandstorm.io look promising, but they seem to be beasts similar in complexity to owncloud, as they bring features I do not need, like version tracking, office integration, wikis, synchronization across multiple devices, or online editing of files.

I would already be very happy if there were a script which made it easy to create a hard-to-guess symlink under my www-root to a directory with data tracked by git annex, and then generated some static HTML to provide a thumbnail view or a photo gallery. Unfortunately, even that solution would not be sufficient, as it would still not allow public uploads by anybody to whom I give the link...
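
Such a script could look roughly like the following; this is a minimal sketch under assumptions of my own (the function name, the paths and the bare-bones HTML are made up, and thumbnail generation is left out entirely):

```shell
# Hypothetical sketch, not an existing tool: expose a data directory
# under a hard-to-guess path inside the www-root and generate a
# static index.html listing its files.
publish_share() {
    datadir="$1"
    wwwroot="$2"
    # 128 bits of randomness make the link practically unguessable
    token=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
    ln -s "$datadir" "$wwwroot/$token"
    # build a minimal browsable listing of the shared files
    listing=""
    for f in "$datadir"/*; do
        name=$(basename "$f")
        listing="$listing<li><a href=\"$name\">$name</a></li>"
    done
    printf '<html><body><ul>%s</ul></body></html>\n' "$listing" \
        > "$wwwroot/$token/index.html"
    # print the secret path component, to be appended to the site URL
    printf '%s\n' "$token"
}
```

Anonymous upload (criterion 5) is exactly the part such a static approach cannot cover, which is the problem described above.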

If you know some software that meets my criteria, or would like to submit corrections to the above table, please shoot an email to josch@debian.org. Thanks!


Let's Encrypt with Pound on Debian

categories: debian

TLDR: mister-muffin.de (and all its subdomains), bootstrap.debian.net and binarycontrol.debian.net are now finally signed by "Let's Encrypt Authority X1" \o/

EDIT2: I created this post when Let's Encrypt was still in beta. For a recipe showing how to use letsencrypt with Pound and without superuser privileges, read the very last section at the bottom.

I just tried out the letsencrypt client Debian packages prepared by Harlan Lieberman-Berg which can be found here:

  • python-acme git ITP
  • python-letsencrypt (needs python-acme) git ITP

My server setup uses Pound as a reverse proxy in front of a number of LXC based containers running the actual services. Furthermore, letsencrypt only supports Nginx and Apache for now, so I had to set things up manually anyway. Here is how.

After installing the Debian packages I built from above git repositories, I ran the following commands:

$ mkdir -p letsencrypt/etc letsencrypt/lib letsencrypt/log
$ letsencrypt certonly --authenticator manual --agree-dev-preview \
    --server https://acme-v01.api.letsencrypt.org/directory --text \
    --config-dir letsencrypt/etc --logs-dir letsencrypt/log \
    --work-dir letsencrypt/lib --email josch@mister-muffin.de \
    --domains mister-muffin.de --domains blog.mister-muffin.de \
    --domains [...]

I created the letsencrypt directory structure to be able to run letsencrypt as a normal user. Otherwise, running this command would require access to /etc/letsencrypt and other paths. Having to set this up and pass all these parameters is a bit bothersome, but there is an upstream issue about making this easier when using the "certonly" option, which in principle should not require superuser privileges.

The --server option is necessary for now because "Let's Encrypt" is still in beta and one needs to register for it. Without the --server option one will get an untrusted certificate from the "happy hacker fake CA".

The letsencrypt program then asks for my agreement to the Terms of Service and, for each domain I specified with the --domains option, presents the token content and the location under that domain where it expects to find the content. Each time, this looks like this:

-------------------------------------------------------------------------------
NOTE: The IP of this machine will be publicly logged as having requested this
certificate. If you're running letsencrypt in manual mode on a machine that is
not your server, please ensure you're okay with that.

Are you OK with your IP being logged?
-------------------------------------------------------------------------------
(Y)es/(N)o: Y
Make sure your web server displays the following content at
http://mister-muffin.de/.well-known/acme-challenge/XXXX before continuing:

{"header": {"alg": "RS256", "jwk": {"e": "AQAB", "kty": "RSA", "n": "YYYY"}}, "payload": "ZZZZ", "signature": "QQQQ"}

Content-Type header MUST be set to application/jose+json.

If you don't have HTTP server configured, you can run the following
command on the target server (as root):

mkdir -p /tmp/letsencrypt/public_html/.well-known/acme-challenge
cd /tmp/letsencrypt/public_html
echo -n '{"header": {"alg": "RS256", "jwk": {"e": "AQAB", "kty": "RSA", "n": "YYYY"}}, "payload": "ZZZZ", "signature": "QQQQ"}' > .well-known/acme-challenge/XXXX
# run only once per server:
$(command -v python2 || command -v python2.7 || command -v python2.6) -c \
"import BaseHTTPServer, SimpleHTTPServer; \
SimpleHTTPServer.SimpleHTTPRequestHandler.extensions_map = {'': 'application/jose+json'}; \
s = BaseHTTPServer.HTTPServer(('', 80), SimpleHTTPServer.SimpleHTTPRequestHandler); \
s.serve_forever()" 
Press ENTER to continue

For brevity I replaced any large base64 encoded chunks of the messages with YYYY, ZZZZ and QQQQ. The token location is abbreviated with XXXX.

After temporarily stopping Pound on my webserver, I created the directory /tmp/letsencrypt/public_html/.well-known/acme-challenge and then opened two shells on my server, both at /tmp/letsencrypt/public_html. In one, I kept a tiny HTTP server running (like the suggested Python SimpleHTTPServer, which works wherever Python is installed). In the other, I pasted the echo line that the letsencrypt program suggested I run.

I had to copy and paste that echo command for each domain I wanted to verify. This could easily be automated, so I filed an issue about it with upstream.

It seems that the letsencrypt servers query each of these tokens twice: once directly after hitting enter at each of the messages above, and a second time once all tokens are in place.

At the end of this ordeal I get:

2015-11-04 11:12:18,409:WARNING:letsencrypt.client:Non-standard path(s), might not work with crontab installed by your operating system package manager

IMPORTANT NOTES:
 - If you lose your account credentials, you can recover through
   e-mails sent to josch@mister-muffin.de.
 - Congratulations! Your certificate and chain have been saved at
   letsencrypt/etc/live/mister-muffin.de/fullchain.pem. Your cert will
   expire on 2016-02-02. To obtain a new version of the certificate in
   the future, simply run Let's Encrypt again.
 - Your account credentials have been saved in your Let's Encrypt
   configuration directory at letsencrypt/etc. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Let's
   Encrypt so making regular backups of this folder is ideal.

I can now scp the content of letsencrypt/etc/live/mister-muffin.de/* to my server. Unfortunately, Pound (and also my ejabberd XMPP server) requires the private key to be in the same file as the certificate and the chain, so on the server I also had to do:

cat /etc/ssl/private/privkey.pem /etc/ssl/private/fullchain.pem > /etc/ssl/private/private_fullchain.pem
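
As an aside, since the combined file contains the private key, it should not end up world-readable. Here is the same concatenation step as a small helper, a sketch of my own (the function name is invented):

```shell
# combine private key and certificate chain into one file, as pound
# and ejabberd expect it; the subshell's umask 077 ensures that a
# newly created output file is readable only by its owner, since it
# contains the private key
make_combined_pem() {
    ( umask 077; cat "$1" "$2" > "$3" )
}
```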

And edit the Pound config to use /etc/ssl/private/private_fullchain.pem. But that's all, folks!
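
For reference, the relevant part of a Pound HTTPS listener could then look roughly like this (the address is illustrative; only the Cert line matters here):

```
ListenHTTPS
        Address 0.0.0.0
        Port    443
        Cert    "/etc/ssl/private/private_fullchain.pem"
        Service
                BackEnd
                        Address 127.0.0.1
                        Port    80
                End
        End
End
```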

EDIT

It seems that manually copying over the echo commands as I described above is not necessary. Instead of using the certonly plugin, I can use the webroot plugin. That plugin takes the --webroot-path option and copies the tokens there. Since my webroot is on a remote machine, I could just mount it locally via sshfs and pass the mountpoint as --webroot-path.

That I didn't realize the webroot plugin (and not the certonly plugin) does what I want is easily explained: the only documentation of the webroot plugin in the help output, and in the man page generated from it, is the string "Webroot Authenticator", which is not very helpful.

Another user seems to have run into similar problems. Better documenting the plugins so that these situations can be prevented in the future is tracked in this upstream bug.

EDIT2

Now that letsencrypt is out for everybody, let's update the instructions with what I learned. Firstly, since we don't want a long downtime, we add the following section to /etc/pound/pound.cfg:

Service
        URL "^/.well-known/acme-challenge/"
        BackEnd
                Address 127.0.0.1
                Port 8000
        End
End

This makes sure that all requests to /.well-known/acme-challenge/ and below are forwarded to a service listening on port 8000. That service will be a temporary webserver which we only switch on for the purpose of retrieving new certificates. So on my server I run:

$ mkdir ~/letsencrypt
$ (cd ~/letsencrypt && python3 -m http.server 8000)

Now on my laptop I mount that directory via sshfs locally:

$ sshfs fulda:/root/letsencrypt ~/letsencrypt/fulda

And finally I use the webroot authenticator to automatically retrieve and validate all my certificates. No manual intervention needed anymore:

$ letsencrypt certonly --authenticator webroot --text \
    --config-dir letsencrypt/etc --logs-dir letsencrypt/log \
    --work-dir letsencrypt/lib --email josch@mister-muffin.de \
    --webroot-path ~/letsencrypt/fulda --domains mister-muffin.de \
    --domains [...]

Now I can quit the python webserver running on my server and copy the generated certificates into their right locations.


unshare without superuser privileges

categories: code, debian, linux

TLDR: With the help of Helmut Grohne I finally figured out most of the bits necessary to unshare everything without becoming root (though one might say that this is still cheating because the suid root tools newuidmap and newgidmap are used). I wrote a Perl script which documents how this is done in practice. The script is nearly equivalent to using the existing commands lxc-usernsexec [opts] -- unshare [opts] -- COMMAND, except that those two together cannot be used to mount a new proc. Apart from this problem, the Perl script might also be useful by itself because it is architecture independent and easily inspectable for the curious mind without resorting to sources.debian.net (it is heavily documented, with nearly two lines of comments per line of code on average). It can be retrieved from https://gitlab.mister-muffin.de/josch/user-unshare/blob/master/user-unshare

Long story: Nearly two years after my last rant about everything needing superuser privileges in Linux, I'm still interested in techniques that let me do more things without becoming root. Helmut Grohne had been telling me for a while about unshare(), or user namespaces, as the right way to have things like chroot without root. There are also reports of LXC containers working without root privileges, but they are hard to come by. A couple of days ago I had some time again, so Helmut helped me get through the major blockers that had so far stopped me from using unshare in a meaningful way without executing everything with sudo.

My main motivation at that point was to let dpkg-buildpackage, when executed by sbuild, run with an unshared network namespace and thus without network access (except for the loopback interface), because like pbuilder I wanted sbuild to enforce the rule of not accessing any remote resources during the build. After several evenings of investigating and tinkering with the Perl script I mentioned initially, I came to the conclusion that the only place that can unshare the network namespace without disrupting anything is schroot itself. This is because unsharing inside the chroot will fail: dpkg-buildpackage is run with non-root privileges, so the user namespace has to be unshared, but that destroys all ownership information. Even if that weren't the case, the chroot itself is unlikely to have (and also should not have) tools like ip or newuidmap and newgidmap installed. Unsharing the schroot call itself will not work either: again we first need to unshare the user namespace, and then schroot will complain about wrong ownership of its configuration file /etc/schroot/schroot.conf. Luckily, when contacting Roger Leigh about this wishlist feature in bug#802849, I was told that this was already implemented in its git master \o/. So this particular problem seems to be taken care of, and once the next schroot release happens, sbuild will make use of it and have unshare --net capabilities just like pbuilder has had since last year.

With the sbuild case taken care of, the rest of this post will introduce the Perl script I wrote. The name user-unshare is really arbitrary. I just needed some identifier for the git repository and a filename.

The most important discovery I made was that Debian disables unprivileged user namespaces by default with the patch add-sysctl-to-disallow-unprivileged-CLONE_NEWUSER-by-default.patch to the Linux kernel. To enable them, one first has to either run

echo 1 | sudo tee /proc/sys/kernel/unprivileged_userns_clone > /dev/null

or

sudo sysctl -w kernel.unprivileged_userns_clone=1

The tool tries to be like unshare(1) but with the power of lxc-usernsexec(1) to map more than one id into the new user namespace by using the programs newgidmap and newuidmap. Or in other words: This tool tries to be like lxc-usernsexec(1) but with the power of unshare(1) to unshare more than just the user and mount namespaces. It is nearly equal to calling:

lxc-usernsexec [opts] -- unshare [opts] -- COMMAND

Its main reasons of existence are:

  • as a project for me to learn how unprivileged namespaces work
  • written in Perl which means:
    • architecture independent (same executable on any architecture)
    • easily inspectable by other curious minds
  • tons of code comments to let others understand how things work
  • no need to install the lxc package in a minimal environment (perl itself might not be called minimal either but is present in every Debian installation)
  • not suffering from being unable to mount proc

I hoped that systemd-nspawn could do what I wanted, but it seems that its requirement to be run as root will not change any time soon.

Another tool in Debian that offers to do chroot without superuser privileges is linux-user-chroot but that one cheats by being suid root.

Had I found lxc-usernsexec earlier, I would probably not have written this. But after I found it, I happily used it to get an even better understanding of the matter and to further improve the comments in my code. I started writing my own tool in Perl because that's the language sbuild is written in and, as mentioned initially, I intended to use this script with sbuild. Now that the sbuild problem is taken care of, this is not so important anymore, but I like it if I can read the code of simple programs I run directly from /usr/bin without having to retrieve the source code first or use sources.debian.net.

The only thing I wasn't able to figure out is how to properly mount proc into my new mount namespace. I found a workaround that works by first mounting a new proc to /proc and then bind-mounting /proc to whatever new location for proc is requested. I didn't figure out how to do this without mounting to /proc first, partly because this doesn't work at all when using lxc-usernsexec and unshare together. In this respect, the Perl script is a bit more powerful than those two tools combined. I suppose the reason is that unshare wasn't written with being called without superuser privileges in mind. If you have an idea what could be wrong, the code has a big FIXME about this issue.
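
In code, the workaround amounts to the following two mount calls. This is only a sketch of the idea, not something runnable outside freshly unshared user and mount namespaces, and the target path is illustrative:

```shell
# to be run inside the new user+mount+pid namespaces
target=/tmp/buildroot/proc
# mounting a fresh proc instance works at /proc itself ...
mount -t proc proc /proc
# ... and from there it can be bind-mounted to the requested target;
# mounting proc directly onto $target fails here (see the FIXME in
# the script)
mount --bind /proc "$target"
```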

Finally, here is a demonstration of what my script can do. Because of the /proc bug, lxc-usernsexec and unshare together are not able to do this, but it might also be that I'm just not using those tools in the right way. The following will give you an interactive shell in an environment created from one of my sbuild chroot tarballs:

$ mkdir -p /tmp/buildroot/proc
$ ./user-unshare --mount-proc=/tmp/buildroot/proc --ipc --pid --net \
    --uts --mount --fork -- sh -c 'ip link set lo up && ip addr && \
    hostname hoothoot-chroot && \
    tar -C /tmp/buildroot -xf /srv/chroot/unstable-amd64.tar.gz; \
    /usr/sbin/chroot /tmp/buildroot /sbin/runuser -s /bin/bash - josch && \
    umount /tmp/buildroot/proc && rm -rf /tmp/buildroot'
(unstable-amd64-sbuild)josch@hoothoot-chroot:/$ whoami
josch
(unstable-amd64-sbuild)josch@hoothoot-chroot:/$ hostname
hoothoot-chroot
(unstable-amd64-sbuild)josch@hoothoot-chroot:/$ ls -lha /proc | head
total 0
dr-xr-xr-x 218 nobody nogroup    0 Oct 25 19:06 .
drwxr-xr-x  22 root   root     440 Oct  1 08:42 ..
dr-xr-xr-x   9 root   root       0 Oct 25 19:06 1
dr-xr-xr-x   9 josch  josch      0 Oct 25 19:06 15
dr-xr-xr-x   9 josch  josch      0 Oct 25 19:06 16
dr-xr-xr-x   9 root   root       0 Oct 25 19:06 7
dr-xr-xr-x   9 josch  josch      0 Oct 25 19:06 8
dr-xr-xr-x   4 nobody nogroup    0 Oct 25 19:06 acpi
dr-xr-xr-x   6 nobody nogroup    0 Oct 25 19:06 asound

Of course, instead of running this long command, we can also write a small shell script and execute that instead. The following does the same as the long command above but adds some comments for further explanation:

#!/bin/sh

set -exu

# I'm using /tmp because I have it mounted as a tmpfs
rootdir="/tmp/buildroot"

# bring the loopback interface up
ip link set lo up

# show that the loopback interface is really up
ip addr

# make use of the UTS namespace being unshared
hostname hoothoot-chroot

# extract the chroot tarball. This must be done inside the user namespace for
# the file permissions to be correct.
#
# tar will fail to call mknod and to change the permissions of /proc but we are
# ignoring that
tar -C "$rootdir" -xf /srv/chroot/unstable-amd64.tar.gz || true

# run chroot and inside, immediately drop permissions to the user "josch" and
# start an interactive shell
/usr/sbin/chroot "$rootdir" /sbin/runuser -s /bin/bash - josch

# unmount /proc and remove the temporary directory
umount "$rootdir/proc"
rm -rf "$rootdir"

and then:

$ mkdir -p /tmp/buildroot/proc
$ ./user-unshare --mount-proc=/tmp/buildroot/proc --ipc --pid --net --uts --mount --fork -- ./chroot.sh

As mentioned in the beginning, the tool is nearly equivalent to calling lxc-usernsexec [opts] -- unshare [opts] -- COMMAND, but because of the problem with mounting proc (mentioned earlier), lxc-usernsexec and unshare cannot be used with the above example. If one tries anyway, one will only get:

$ lxc-usernsexec -m b:0:1000:1 -m b:1:558752:1 -- unshare --mount-proc=/tmp/buildroot/proc --ipc --pid --net --uts --mount --fork -- ./chroot.sh
unshare: mount /tmp/buildroot/proc failed: Invalid argument

I'd be interested in finding out why that is and how to fix it.


new sbuild release 0.66.0

categories: debian

I just released sbuild 0.66.0-1 into unstable. It fixes a whopping 30 bugs! Thus, I'd like to use this platform to:

  • kindly ask all sbuild users to report any new bugs introduced with this release
  • give a big thank you to everybody who supplied the patches that made fixing this many bugs possible (in alphabetical order): Aurelien Jarno, Christian Kastner, Christoph Egger, Colin Watson, Dima Kogan, Guillem Jover, Luca Falavigna, Maria Valentina Marin Rordrigues, Miguel A. Colón Vélez, Paul Tagliamonte

And a super big thank you to Roger Leigh who, despite having resigned from Debian, was always available to give extremely helpful hints, tips, opinion and guidance with respect to sbuild development. Thank you!

Here is a list of the major changes since the last release:

  • add option --arch-all-only to build arch:all packages
  • the environment variable SBUILD_CONFIG allows specifying a custom configuration file
  • add option --build-path to set a deterministic build path
  • fix crossbuild dependency resolution
  • add option --extra-repository-key for extra apt keys
  • add option --build-dep-resolver=aspcud for aspcud based resolver
  • allow complex commands as sbuild hooks
  • a new external command %SBUILD_SHELL produces an interactive shell
  • add options --build-deps-failed-commands, --build-failed-commands and --anything-failed-commands for more hooks
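
As an illustration of some of the new options combined, a made-up invocation (the package name and paths are invented; see the sbuild man page for authoritative syntax):

```shell
# --arch-all-only, --build-path and --extra-repository-key are among
# the options new in 0.66.0
sbuild --arch-all-only \
    --build-path=/build/foo-1.0 \
    --extra-repository-key=./my-repo.asc \
    foo_1.0-1.dsc

# a custom configuration file can now be selected via SBUILD_CONFIG
SBUILD_CONFIG=~/.sbuildrc-experiments sbuild foo_1.0-1.dsc
```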

I became a Debian Developer

categories: blog, debian

Thanks to akira for the confetti to celebrate the occasion!
