SSH reverse tunnel

With autossh you can establish an SSH reverse tunnel from a given system, provided it can reach some other machine outside its own network via SSH. You can also do that with ssh alone, but autossh comes with added features that are worth exploring and using.

Ubuntu 14.04 LTS and older (Upstart)

To use autossh with Upstart, you need two files: /etc/init/autossh.conf and /etc/init/autossh.override. The former is the main Upstart script, the latter is a file providing customizable settings for the former.


description "Establish persistent SSH tunnel"
start on local-filesystems and net-device-up IFACE=eth0 and started ssh
stop on runlevel [016]

respawn
# respawn max 5 times in 60 seconds
respawn limit 5 60

script
    # exec 2>>/tmp/autossh.log
    # set -x
    export AUTOSSH_POLL AUTOSSH_LOGFILE
    sleep 5
    exec autossh -M $AUTOSSH_MONIPORT -- \
        -o 'StrictHostKeyChecking=no' \
        -o 'UserKnownHostsFile=/dev/null' \
        -o 'PasswordAuthentication=no' \
        -o 'PubkeyAuthentication=yes' \
        -o 'ServerAliveInterval 60' \
        -o 'ServerAliveCountMax 3' \
        -o 'BatchMode=yes' \
        -N -i "$SSH_IDENTITY" \
        $SSH_OPTIONS \
        "$SSH_CONNECTION_HOST"
end script

If you ever run into trouble, uncomment the two commented-out lines in the script and have a look at /tmp/autossh.log afterward.


setuid user
setgid usergroup
env AUTOSSH_MONIPORT=10023
env SSH_CONNECTION_HOST=user@host.domain.tld
env SSH_IDENTITY=/home/user/.ssh/id_rsa
env SSH_OPTIONS="-R 10022:localhost:22"
env AUTOSSH_LOGFILE=/var/log/autossh.log

Some remarks:

  • SSH_CONNECTION_HOST is the host on the outside to which you want to connect.
  • SSH_OPTIONS gives the arguments for the ssh started by autossh; here we forward port 22 of localhost (the machine running the Upstart script) to port 10022 on host.domain.tld.
  • AUTOSSH_LOGFILE names the autossh log file; make sure it is writable by user or usergroup.
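Once the tunnel is up, connecting back works from the connection host itself; a minimal sketch, assuming the user and forwarded port from the override above:

```shell
# On the connection host (host.domain.tld): the reverse forwarding makes
# the tunneled machine's sshd reachable on local port 10022.
ssh -p 10022 user@localhost
```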

Use start autossh, stop autossh and restart autossh to control this Upstart service. If you decided to name your file differently, say filename.conf, you need to pass filename as the service name.

Ubuntu 16.04 LTS and newer (systemd)

With the introduction of systemd to Ubuntu, we need to provide a unit file on these newer Ubuntu versions.


[Unit]
Description=Establish persistent SSH tunnel
After=network.target ssh.service

[Service]
User=user
ExecStart=/usr/bin/autossh -M 10023 -- -4Nngi /home/user/.ssh/id_rsa -R 10022:localhost:22 -o 'StrictHostKeyChecking=no' -o 'UserKnownHostsFile=/dev/null' -o 'PasswordAuthentication=no' -o 'PubkeyAuthentication=yes' -o 'ServerAliveInterval 60' -o 'ServerAliveCountMax 3' -o 'BatchMode=yes' user@host.domain.tld
Restart=always

[Install]
WantedBy=multi-user.target


This unit file combines the settings from what was the .override in Upstart directly into the unit. If you wanted to separate most of the settings out, you could use the EnvironmentFile stanza with the respective file containing variable assignments.
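If you separate the settings out, the environment file might look like this; the path and variable names are assumptions mirroring the Upstart .override, and the unit would then carry EnvironmentFile=/etc/default/autossh in [Service] and reference the variables as ${AUTOSSH_MONIPORT} and so on in ExecStart:

```shell
# /etc/default/autossh -- assumed path, referenced via EnvironmentFile=
# Plain KEY=VALUE lines work both as a systemd environment file and in shell.
SSH_CONNECTION_HOST=user@host.domain.tld
SSH_IDENTITY=/home/user/.ssh/id_rsa
SSH_OPTIONS="-R 10022:localhost:22"
AUTOSSH_MONIPORT=10023
```

Note that in ExecStart, systemd expands ${SSH_OPTIONS} as a single argument, while an unbraced $SSH_OPTIONS is split on whitespace; the latter is what you want for an option string like this.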

To have systemd re-read its unit files, run systemctl daemon-reload. To verify the status (also after starting), run systemctl status autossh.service. To start or restart the service, run systemctl restart autossh.service. And last but not least to enable the service to start at boot time, run systemctl enable autossh.service.
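Put together, and assuming the unit file sits next to you as autossh.service, the deployment steps look like this:

```shell
# Install the unit, reload systemd, then enable and start the tunnel.
sudo cp autossh.service /etc/systemd/system/autossh.service
sudo systemctl daemon-reload
sudo systemctl enable autossh.service
sudo systemctl restart autossh.service
sudo systemctl status autossh.service
```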

// Oliver

PS: beware of the -f switch of autossh. Neither Upstart nor systemd likes it particularly. If you decide to use it, you need to let the init system know how many times fork() happens, so that it can figure out the PID of the resulting daemon process.

This entry was posted in Administration, EN, Linux, Software, Unix and unixoid.

2 Responses to SSH reverse tunnel

  1. Shippy says:

    Great to see clear definitions (instead of confusing remote, host, client, local terminology), and both the rc.local and systemd setup!

    I want to do a reverse SSH setup for various home computers at friends and family (for repair/updates), and want the least intervention/typing by their users!

    I am assuming that to keep the reverse SSH automated as much as possible for *multiple* Linux systems behind firewalls (remote servers) communicating with a *single* connection host with a public IP/domain address (middleman machine), you need:

    A. Generate key pairs on the connection host, then copy the public key to all remote servers.

    B. Make sure different remote servers can be identified on the connection host when the client (admin) logs into the connection host to SSH into the various remote servers.

    Can this be simply accomplished by assigning different ports on the connection host to remote servers, e.g., Port 20022 to Remote Server1 (RS1), Port 20023 to RS2, with some kind of simple algorithm to match connection host ports to RSs?

    The idea here is to have some variable string as placeholder instead of actual port number, to be determined automatically at boot time of RSs or login time of the client admin into RSs.
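One way to sketch such a port assignment, assuming a base port of 20000 and a hypothetical helper name: derive the port deterministically from the remote server's hostname, so each RS computes its own port at boot without a central registry. A hash can collide, so a static hostname-to-port table is the safer variant; the hash is just the zero-maintenance option.

```shell
#!/bin/sh
# Hypothetical sketch: map a remote server's hostname to a stable port on
# the connection host. cksum is deterministic, so every boot of the same
# host picks the same port without any coordination.
tunnel_port() {
    sum=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
    echo $((20000 + sum % 1000))   # ports 20000-20999
}

tunnel_port rs1
tunnel_port rs2
```

The computed port would then be spliced into SSH_OPTIONS, e.g. -R "$(tunnel_port "$(hostname)")":localhost:22.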

    C. What kind of minimal script needs to be run on the connection host, or none?

  2. Oliver says:

    Hey, thanks for the praise. Let me go through your questions one by one.

    A. All of the connecting machines will require a public/private key pair. The connection host needs their public keys and of course the connection host — running an SSH daemon — will also have its own public/private key pair (several, even, for DSA, RSA and so on, the ones in /etc/ssh). The real problem is how to make sure the initial connection can be made. Typically you’d use ssh-copy-id and supply a password, but that may not be an option, or not easy enough. Once trust is established (i.e. the connection host has the public keys of all machines connecting to it) things are easy. To dodge the bullet but have a slight chance of compromise you might want to look into ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ...
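    A one-time bootstrap from each connecting machine might look like this; paths match the examples in the post, and ssh-copy-id assumes password authentication is still enabled on the connection host at this point:

```shell
# On each remote server: create an unattended (passphrase-less) key pair,
# then push the public key to the connection host once.
ssh-keygen -t rsa -f /home/user/.ssh/id_rsa -N ''
ssh-copy-id -i /home/user/.ssh/id_rsa.pub user@host.domain.tld
```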

    B. Yes, that can be done. In fact that’s what I have been doing myself. After all you’ll want to be able to connect back to the connected hosts 😉
    That said, I also jailed each of the connecting hosts and gave them an individual system user. This way you can easily see the usernames with, say, lsof -i :22. The jailing was done by bootstrapping a very simple system into a folder which I named /chroots/.basedir, which I mounted as the lowerdir of an overlay with a separate upperdir and workdir per connecting host. Then I bind-mounted the home folder of the system user inside of said chroot and added each of these system users to a group named ssh-forwarders.
    After that I needed to add this to /etc/ssh/sshd_config:

    Match Group ssh-forwarders
        Banner none
        ChrootDirectory /chroots/%u
        AllowTcpForwarding yes
        PermitOpen any
        PasswordAuthentication no

    … and I was set. These days I’d do it with ephemeral LXD containers, though, to add another layer of security (the kernel namespaces), i.e. in fact that’s what I am going to do for the next setup.
    Either way, you may not want or need this kind of paranoid setup. But I like to have an extra barrier in case something gets compromised. And a chroot is nice in this case, as it provides what seems to be a full-fledged system but in fact cordons off the user in his/her own realm which they can decide to screw with.
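    The overlay-plus-bind-mount arrangement described above could be sketched like this for one connecting host; all paths besides /chroots/.basedir are assumptions, and the commands need root:

```shell
# Per-host writable overlay on top of the shared read-only base system.
mkdir -p /chroots/rs1 /chroots/.upper/rs1 /chroots/.work/rs1
mount -t overlay overlay \
    -o lowerdir=/chroots/.basedir,upperdir=/chroots/.upper/rs1,workdir=/chroots/.work/rs1 \
    /chroots/rs1

# Bind the system user's home into the chroot (matching ChrootDirectory
# /chroots/%u) and put the user into the Match group from sshd_config.
mkdir -p /chroots/rs1/home/rs1
mount --bind /home/rs1 /chroots/rs1/home/rs1
usermod -aG ssh-forwarders rs1
```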

    As for C., that’s a tough one. I don’t know anything about your setup other than what you describe. But I imagine the minimum is to make the public keys of the connecting hosts known to the connection host. Also, you need to script the autossh setup on the remote servers in some way.
