# icecast-video-streaming-server-vultr-terraform
This is based on [vultr-terraform-example](https://github.com/Psmths/vultr-terraform-example).

Essentially, this is a Terraform deployment of the setup described in
[Icecast video streaming with OBS](https://git.jreed.cc/James/icecast-video-streaming-with-obs).

Tested with Terraform v0.13.6.

This will deploy an Icecast server configured for video streaming, onto a Vultr vc2-1c-1gb instance in the Sydney region, running Fedora 36 x64.

Upon deployment, the public IPv4 address of the instance that gets spun up is inserted into a DNS record in a Cloudflare-hosted DNS zone. The use case: users joining the server can simply remember or bookmark a specific hostname, while the server itself can be deployed and destroyed in an ephemeral manner, only for when it's actually going to be used. You therefore only incur instance charges while the Vultr instance is running, and there's no need for a static/reserved IP address between deployments. At present, Cloudflare do not charge for DNS hosting, so you simply need a domain name set up inside Cloudflare, and you can forgo the cost of a reserved IP address at Vultr, or of using a DNS zone at the Vultr end.

### Startup Script
The startup script is located in two places for convenience. Because Vultr expects the script to be passed in Base64 encoding, we can use Terraform's `filebase64` functionality to automatically encode a file in base64 and pass it to this instance, like so:

```
resource "vultr_startup_script" "standup" {
    name = "icecast-video-fedora36"
    script = filebase64("startup.sh")
    type = "boot"
}
```
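`filebase64` produces the same result as encoding the file yourself, so you can preview exactly what Vultr will receive (this assumes GNU coreutils `base64`; the `-w0` flag disables line wrapping to match Terraform's single-line output):

```
# Create a throwaway file to demonstrate (startup.sh itself works the same way)
printf '#!/bin/sh\n' > /tmp/demo.sh
# Encode it exactly as filebase64("startup.sh") would
base64 -w0 /tmp/demo.sh
```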

The startup script is applied to the instance (referenced by id) with this line in the main instance resource:
```
script_id = vultr_startup_script.standup.id
```

### SSH Keys
***This is commented out/disabled by default***

This Terraform deployment will also add an authorized SSH key to the root account. The relevant resource is as follows:

```
resource "vultr_ssh_key" "my_user" {
  name = "Root SSH key"
  ssh_key = file("sshkey.pub")
}
```

The SSH key is applied to the instance in the main instance resource as follows:

```
ssh_key_ids = [vultr_ssh_key.my_user.id]
```
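The resource above expects the public key at `sshkey.pub` in the working directory. If you don't already have a key pair, one way to generate a matching one (assuming OpenSSH is installed):

```
# Generates sshkey (private) and sshkey.pub (public) with no passphrase
ssh-keygen -t ed25519 -f sshkey -N "" -q
```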

### tfvars
The file `terraform.tfvars` contains assignments for all of the variables declared in `variable.tf`. To obtain valid values, use [`vultr-cli`](https://github.com/vultr/vultr-cli); you can also check the Vultr API docs [here](https://www.vultr.com/api/v1/). These values are applied to the main instance resource as shown below:

```
plan = var.plan
region = var.region
os_id = var.os
label = var.label
hostname = var.hostname
```
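A `terraform.tfvars` matching the deployment described above might look like the following. The `plan` and `region` slugs correspond to the vc2-1c-1gb/Sydney setup mentioned earlier; the `os` ID here is a placeholder — look up the real ID for Fedora 36 x64 via `vultr-cli` or the API:

```
plan     = "vc2-1c-1gb"
region   = "syd"
os       = 0            # placeholder - substitute the Fedora 36 x64 OS ID
label    = "icecast-video"
hostname = "stream.example.com"
```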

### Firewall
***The inbound SSH rule is commented out/disabled by default***

The firewall rules added allow inbound TCP 443 (the default SSL/TLS port) over IPv4 from any IP address.

This deployment creates a firewall group, adds rules to this group, and assigns the group to the instance. It first creates the group as follows:

```
resource "vultr_firewall_group" "my_firewall_grp" {
    description = "icecast-https-fw-deployed-by-terraform"
}
```
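The 443 rule itself would be expressed roughly as follows — attribute names here are from the Vultr provider's v2 schema, so treat this as a sketch and check against the provider version actually in use:

```
resource "vultr_firewall_rule" "https_inbound" {
    firewall_group_id = vultr_firewall_group.my_firewall_grp.id
    protocol          = "tcp"
    ip_type           = "v4"
    subnet            = "0.0.0.0"
    subnet_size       = 0
    port              = "443"
}
```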

This group is then applied to the main instance:
```
firewall_group_id = vultr_firewall_group.my_firewall_grp.id
```

### Output
The output block in `output.tf` simply prints the instance's final IP address after the deployment is complete.

So once Terraform has completed an apply, it'll expose the public IP address of the instance as **$instance_ip**.
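Assuming the instance resource is named `vultr_instance.my_instance` (a hypothetical name — match it to the actual resource in the deployment), the output block would be along these lines, with `main_ip` being the instance's public IPv4 attribute:

```
output "instance_ip" {
    value = vultr_instance.my_instance.main_ip
}
```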

### Updating Cloudflare DNS record to instance public IP address
`cloudflare-set-dns-record.sh` has to be run separately from Terraform after a successful apply (just tack it onto a one-liner when you do the Terraform apply). It reads the **$instance_ip** value and uses it to update a DNS record inside a Cloudflare DNS zone. It's ideal to have this run locally rather than as part of the startup script that runs on the instance, so that Cloudflare API keys are never transported unnecessarily.
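A minimal sketch of what such a script can do, using the Cloudflare v4 API. The `ZONE_ID`, `RECORD_ID`, and `CF_API_TOKEN` environment variables and the `stream.example.com` hostname are assumptions for illustration; the real script may be structured differently:

```
#!/bin/sh
# build_payload NAME IP -> JSON body for a Cloudflare A-record update
build_payload() {
    printf '{"type":"A","name":"%s","content":"%s","ttl":120,"proxied":false}' "$1" "$2"
}

# Only attempt the update when the (assumed) credentials are present
if [ -n "${CF_API_TOKEN}" ] && [ -n "${ZONE_ID}" ] && [ -n "${RECORD_ID}" ]; then
    # Read the IP from Terraform's output (Terraform 0.13 prints the bare value)
    instance_ip="$(terraform output instance_ip)"
    curl -s -X PUT \
        "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
        -H "Authorization: Bearer ${CF_API_TOKEN}" \
        -H "Content-Type: application/json" \
        --data "$(build_payload "stream.example.com" "$instance_ip")"
fi
```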

### Deploying
To deploy this instance, simply issue the following commands:
```
terraform init
terraform plan
terraform apply && ./cloudflare-set-dns-record.sh
```

### Destroying and clearing Cloudflare DNS record
To destroy the environment, simply issue:
```
terraform destroy
```

During this process, Terraform will trigger the `cloudflare-clear-dns-record.sh` script. This will simply set the DNS record over at Cloudflare to 127.0.0.1.
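Hooking a script into the destroy phase is typically done with a destroy-time `local-exec` provisioner; a sketch of how this can be expressed (the `null_resource` name is hypothetical):

```
resource "null_resource" "clear_dns" {
    provisioner "local-exec" {
        when    = destroy
        command = "./cloudflare-clear-dns-record.sh"
    }
}
```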