# mumble-server-vultr-terraform
This is based on [vultr-terraform-example](https://github.com/Psmths/vultr-terraform-example).

Tested with Terraform v0.13.6.

This will deploy a mumble server (or murmur as it may be) onto a Vultr vc2-1c-1gb instance in the Sydney region, running Debian 11 x64.

Upon deployment, the public IPv4 address of the instance that gets spun up will be inserted into a DNS record in a Cloudflare-hosted DNS zone. The use case: users joining the server can simply remember or bookmark a specific hostname in their Mumble client, while the server itself can be deployed and destroyed in an ephemeral manner, existing only when it's actually going to be used. That way you only incur instance charges while the Vultr instance is running, and there is no need for a static/reserved IP address for subsequent deployments. At present, Cloudflare does not charge for DNS hosting, so you simply need a domain name set up in Cloudflare, and you can forgo the cost of a reserved IP address (or a DNS zone) at the Vultr end.
### Startup Script
The startup script is located in two places for convenience. Because Vultr expects the script to be passed Base64-encoded, we can use Terraform's `filebase64` function to encode the file automatically and pass it to the instance, like so:

```
resource "vultr_startup_script" "standup" {
    name = "mumble-debian11"
    script = filebase64("startup.sh")
    type = "boot"
}
```

The startup script is applied to the instance (referenced by id) with this line in the main instance resource:
```
script_id = vultr_startup_script.standup.id
```
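For context, here is a minimal sketch of what the main instance resource might look like with `script_id` in place. The resource type and attribute names follow the 2.x Vultr provider (older 1.x versions used `vultr_server`), and the resource name `mumble` is an assumption, not necessarily what this repository uses:

```
# Hypothetical sketch; names and attributes are assumptions based on the 2.x Vultr provider.
resource "vultr_instance" "mumble" {
    plan     = var.plan
    region   = var.region
    os_id    = var.os
    label    = var.label
    hostname = var.hostname

    # Attach the Base64-encoded startup script by id.
    script_id = vultr_startup_script.standup.id
}
```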

### SSH Keys
***This is commented out/disabled by default***

This Terraform deployment will also add an authorized SSH key to the root account. The relevant resource is as follows, and is self-explanatory:

```
resource "vultr_ssh_key" "my_user" {
  name = "Root SSH key"
  ssh_key = file("sshkey.pub")
}
```

The SSH key is applied to the instance in the main instance resource as follows:

```
ssh_key_ids = [vultr_ssh_key.my_user.id]
```

### tfvars
The file `terraform.tfvars` contains all of the variable assignments declared in `variable.tf`. To obtain valid values for these, use `vultr-cli`, which can be found [here](https://github.com/vultr/vultr-cli), or check the Vultr API docs [here](https://www.vultr.com/api/v1/). These values are applied to the main instance resource as shown below:

```
plan = var.plan
region = var.region
os_id = var.os
label = var.label
hostname = var.hostname
```
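For illustration, a `terraform.tfvars` might look like the following. The region and OS IDs here are examples, not values from this repository, so confirm the real IDs via `vultr-cli` or the API before using them:

```
# Example values only; confirm plan/region/OS IDs via vultr-cli or the Vultr API.
plan     = "vc2-1c-1gb"    # 1 vCPU / 1 GB instance
region   = "syd"           # Sydney (example region code; verify)
os       = 477             # Debian 11 x64 (example OS ID; verify)
label    = "mumble-server"
hostname = "mumble"
```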

### Firewall
***The inbound SSH rule is commented out/disabled by default***

The firewall rules added are for the default Mumble/Murmur port: TCP and UDP inbound IPv4 traffic is allowed from anywhere to port 64738.

This deployment creates a firewall group, adds rules to this group, and assigns the group to the instance. It first creates the group as follows:

```
resource "vultr_firewall_group" "my_firewall_grp" {
    description = "mumble-fw-deployed-by-terraform"
}
```
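The rule resources themselves are not shown above; as a hedged sketch, the two port-64738 rules might look like the following with the 2.x Vultr provider (field names may differ on older provider versions, and the resource names are assumptions):

```
# Hypothetical sketch of the inbound rules; field names follow the 2.x Vultr provider.
resource "vultr_firewall_rule" "mumble_tcp" {
    firewall_group_id = vultr_firewall_group.my_firewall_grp.id
    protocol          = "tcp"
    ip_type           = "v4"
    subnet            = "0.0.0.0"   # allow from anywhere
    subnet_size       = 0
    port              = "64738"
}

resource "vultr_firewall_rule" "mumble_udp" {
    firewall_group_id = vultr_firewall_group.my_firewall_grp.id
    protocol          = "udp"
    ip_type           = "v4"
    subnet            = "0.0.0.0"   # allow from anywhere
    subnet_size       = 0
    port              = "64738"
}
```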

This group is then applied to the main instance:
```
firewall_group_id = vultr_firewall_group.my_firewall_grp.id
```

### Output
The output defined in `output.tf` simply prints the instance's final IP address after the deployment is complete.

So once Terraform has completed an apply, it will output the public IP address of the instance as **$instance_ip**.
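As a sketch, an output like this could be as simple as the following; the instance resource name and the `main_ip` attribute here are assumptions based on the 2.x Vultr provider, not necessarily the actual contents of `output.tf`:

```
# Hypothetical sketch; resource name and attribute are assumptions.
output "instance_ip" {
    value = vultr_instance.mumble.main_ip
}
```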

### Updating Cloudflare DNS record to instance public IP address
`cloudflare-set-dns-record.sh` has to be run separately from Terraform after a successful apply (just tack it onto a one-liner when you do the `terraform apply`). This will read the **$instance_ip** value and use it to update a DNS record inside a Cloudflare DNS zone. It's best to have this run locally rather than as part of the startup script that runs on the instance, so that Cloudflare API keys are never transported unnecessarily.
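As an illustration of the shape such a script might take (this is not the actual contents of `cloudflare-set-dns-record.sh` — the environment variable names, record name, and TTL are all assumptions), the update payload for the Cloudflare v4 API could be built roughly like this:

```shell
#!/bin/sh
# Hypothetical sketch of a Cloudflare DNS update; env var names are assumptions.
set -eu

# Build the JSON body for a Cloudflare v4 API DNS record update.
build_payload() {
    printf '{"type":"A","name":"%s","content":"%s","ttl":120}' \
        "${RECORD_NAME:-mumble.example.com}" "$1"
}

# In the real flow you would read the Terraform output and PUT the record,
# e.g. (CF_ZONE_ID, CF_RECORD_ID, CF_API_TOKEN are assumed to be set):
#   instance_ip="$(terraform output instance_ip)"
#   curl -X PUT "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records/${CF_RECORD_ID}" \
#       -H "Authorization: Bearer ${CF_API_TOKEN}" \
#       -H "Content-Type: application/json" \
#       --data "$(build_payload "$instance_ip")"
```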

### Deploying
To deploy this instance, simply issue the following commands:
```
terraform init
terraform plan
terraform apply && ./cloudflare-set-dns-record.sh
```

### Destroying and clearing Cloudflare DNS record
To destroy the environment, simply issue:
```
terraform destroy
```

During this process, Terraform will trigger the `cloudflare-clear-dns-record.sh` script, which simply sets the DNS record at Cloudflare back to 127.0.0.1.
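One common way to wire a script into destruction (a sketch only — this assumes a `null_resource` with a destroy-time provisioner, which may not be how this repository does it) is:

```
# Hypothetical sketch: run the cleanup script when the resource is destroyed.
resource "null_resource" "cloudflare_dns_cleanup" {
    provisioner "local-exec" {
        when    = destroy
        command = "./cloudflare-clear-dns-record.sh"
    }
}
```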