
Deploying K3S on Oracle Cloud Infrastructure – part 1

“Installing your own Kubernetes is hard!” – Everybody

“Hold my kubectl!” – Rancher

Who is this article for?

This is written for those of us who are interested in Kubernetes and would like to install and run it easily and cheaply, anywhere. You may already know that installing and running a “full” Kubernetes all by yourself can be rather difficult – the learning curve is really steep.

There are many alternatives out there – one answer to the complexity is subscribing to one of the managed Kubernetes offerings in the cloud, for example Oracle OKE. Rancher Labs came up with another interesting solution: modifying and considerably slimming down the standard distribution.

K3S is a lightweight Kubernetes distribution made by Rancher Labs, targeted at edge computing, IoT devices and ephemeral infrastructure deployments (for example, those created in CI/CD pipelines). It runs on both Intel and ARM processors.

I would say it’s a great fit for learning and development environments too. It even runs on Raspberry Pi! Should you run your critical production workloads on K3S? I’m not sure that’s a good idea – Rancher certainly doesn’t position it that way.

In this article I’m going to describe the steps needed to install K3S on OCI in an Oracle Linux VM, but pretty much any cloud or local install will be very similar. Was it as easy as advertised? Keep reading 🙂

0. Prerequisites

  • A subscription to OCI. If you don’t have one yet, please go and grab it – you’ll get $300 of credits you can spend on any IaaS or PaaS service during one month, and when the month (or the credits) expires, you will still have access to “Always Free” resources. More details here: https://www.oracle.com/cloud/free/
    Everything described here will work with an Always Free subscription; none of the services used requires upgrading to a paid subscription.
  • You already have basic knowledge of OCI – you know how to navigate the OCI console and how to create and run instances. This set of videos can help you get up to speed.
  • Basic Linux shell skills (understanding how to navigate a file system, create directories and files).
  • I’m working from WSL’s Linux shell as my admin workstation; more on my setup in my previous article.

1. Create a compute VM in OCI

(If you already have a VM or know how to create and access one, skip to section 2.)

  • Create the necessary directories and ssh keys.

mkdir -p ~/tutorials/k3s/.ssh
chmod 700 ~/tutorials/k3s/.ssh
cd ~/tutorials/k3s/.ssh
ssh-keygen -f ./id_rsa

The keygen utility will ask you for a passphrase; feel free to set one to your liking (or leave it empty).

Verify that the keys were created:

ls -la
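If you set a passphrase and don’t want to retype it on every connection, you can optionally load the key into ssh-agent for the current shell session (a small convenience, not required for the rest of the tutorial):

# start an agent for this shell and add the new key to it
eval $(ssh-agent -s)
ssh-add ~/tutorials/k3s/.ssh/id_rsa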

  • Create the OCI prerequisites (compartment, virtual network)

Log into your OCI console.

Choose an existing compartment or create a new one (Menu > Governance and Administration > Identity > Compartments).
I’ve created a compartment “k3s” where I’ll place all the resources I’ll be creating.

Go to Menu > Core Infrastructure > Networking, select “k3s” compartment. If you’ve just created the compartment and it doesn’t show up in the dropdown list yet, press Ctrl-F5 in the browser to refresh its cache. Then press the “Networking Quickstart” button:

Follow the Quickstart wizard to create a “VCN with Internet Connectivity”

I’ll name mine “k3s_vcn” and select VCN CIDR = “10.0.0.0/16”, PUBLIC SUBNET CIDR = “10.0.0.0/24” and PRIVATE SUBNET CIDR = “10.0.1.0/24”. I keep the default values for everything else.

The subnets that will be created are of “Regional” type – they span multiple Availability domains in the regions that have more than one AD.

Press “Next” and then “Create” – you should see the “Virtual Cloud Network creation complete” message almost immediately, together with the list of VCN components that were created.
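As a side note, if you prefer the command line over the console, roughly the same resources can be created with the OCI CLI. A minimal sketch, assuming the CLI is installed and configured on your workstation (the compartment OCID is a placeholder, and the internet gateway, route table and security lists that the wizard also creates are omitted here):

# create the VCN in the k3s compartment
oci network vcn create --compartment-id ocid1.compartment.oc1..xxxx --cidr-block 10.0.0.0/16 --display-name k3s_vcn
# create the public subnet inside it (the VCN OCID comes from the previous command's output)
oci network subnet create --compartment-id ocid1.compartment.oc1..xxxx --vcn-id ocid1.vcn.oc1..xxxx --cidr-block 10.0.0.0/24 --display-name public-subnet-k3s_vcn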

  • Create a compute instance

I’ll create a VM instance that will host the K3S server. For simplicity’s sake I’ll place it in the public subnet (10.0.0.0/24), but it will work fine in the private subnet too; you’ll just need to create an additional bastion host to be able to access it.
The public subnet created by the quickstart wizard already has SSH ingress authorized, so we have nothing to modify there.

Go to Menu > Core Infrastructure > Compute > Instances and press the “Create Instance” button

On the “Create Compute Instance” page you need to:
  • give the instance a name (mine will be named “k3s_server”),
  • select an image source (OL 7.7),
  • select an Availability domain (AD3),
  • select the instance type (VM),
  • select the instance shape (I feel adventurous and will try the free-tier-eligible VM.Standard.E2.1.Micro). Will it be sufficient? We’ll see – maybe 🙂

At the moment of writing, the free-tier instances are only available in one of the three ADs, which is why I’ve selected AD3. In your tenancy it may be another AD – you’ll need to check and see which one; a quick way to do that with the OCI CLI is sketched below.
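A minimal sketch of that check, assuming the OCI CLI is installed and configured (the compartment OCID is a placeholder):

# list the availability domains in your tenancy
oci iam availability-domain list
# list the shapes offered in a given AD - look for VM.Standard.E2.1.Micro
oci compute shape list --compartment-id ocid1.compartment.oc1..xxxx --availability-domain <AD name from the previous command>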

Select our newly created k3s_vcn and its public subnet, and don’t forget to assign a Public IP:

Now go back to the WSL console and display the contents of the public key we created earlier, then select it and copy it to the clipboard:

cat id_rsa.pub

Back in the OCI console, paste the public key into the “SSH Key” field and then press the “Create” button:

About one minute later, the instance will be provisioned. Once it’s finished, go into the instance details and note its Public IP.

Back to WSL again, connect to the instance’s public IP using our private key as the identity:

ssh -i ~/tutorials/k3s/.ssh/id_rsa opc@130.61.249.207

Confirm the authenticity of the host,

enter the passphrase if you defined one when the key was created,

and finally you’re here, at the k3s_server bash prompt:
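To avoid typing the -i flag and the IP address every time, you can optionally add a Host entry to ~/.ssh/config on your workstation (the IP below is just my example – use your own):

Host k3s-server
    HostName 130.61.249.207
    User opc
    IdentityFile ~/tutorials/k3s/.ssh/id_rsa

After that, a simple ssh k3s-server is enough to connect.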

2. Install k3s server

(Main document listing available installation options: https://rancher.com/docs/k3s/latest/en/installation/install-options/)

  • First, install all the latest Linux updates on the k3s_server:

sudo yum update -y

  • Run the “quick installation” script: it will install the latest k3s version using all default settings

curl -sfL https://get.k3s.io | sh -
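The script registers k3s as a systemd service named k3s; before going further you can check that it is up and running:

sudo systemctl status k3s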

  • Try to access the cluster

Wait – that was so simple – is this all we need? Let’s try to query for Kubernetes nodes, for example. Pretty much everything we need can be done using the kubectl command line tool:

kubectl get nodes

Damn, it didn’t work! What’s up with this “permission denied” error? Let’s look at the permissions of the /etc/rancher/k3s/k3s.yaml file (this file is used by kubectl to connect to the Kubernetes cluster and authenticate as the cluster admin):

ls -lah /etc/rancher/k3s/k3s.yaml

OK, there is “rw” in the owner permission bits and indeed nothing in the group and other bits (only “-” characters are there). Meaning the file is only readable by root, and I’m connected as the “opc” user – this is why my kubectl is unable to read the file.

One could think that modifying the permissions of k3s.yaml is all you need to fix this (for example, running chmod 644 /etc/rancher/k3s/k3s.yaml), but it isn’t so simple – this file is rewritten every time the k3s server starts, so making it readable this way won’t survive a service restart. But what should I do? The answer is “read the whole error message first” 🙂 Let’s try again:

It’s saying there is a --write-kubeconfig-mode flag, and after some searching it turns out you only need to add it to the quick installation command line like so (run the uninstall script first to clean up the first attempt if needed):

/usr/local/bin/k3s-uninstall.sh

curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644

instead of the

curl -sfL https://get.k3s.io | sh -

suggested by the documentation.

But wait – surely, if I don’t do this reinstall and instead just run the command with sudo, it will work? OK, let’s try running sudo kubectl get nodes:

Oh no, this doesn’t work either – what gives? kubectl is in my path, and sudo is supposed to use the paths available to my current unprivileged user “opc”, right?

Well, after some searching it turned out I didn’t know one interesting detail about sudo (all the veteran Linux admins are already screaming at me) – it will only use paths that are declared as “secure” in the sudoers config file. Let’s run the sudo visudo command to see the config and search for the “secure_path” variable:
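On OL7 the relevant line looks roughly like this (the exact list may differ slightly on your system):

Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin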

Hey, /usr/local/bin isn’t there! And this is how RHEL 7, CentOS 7 and OL7 are set up by default. For comparison, this is the same config on Debian or Ubuntu:
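There, the line looks something like this (again, the exact list varies by version):

Defaults    secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"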

Aha, now it’s there! What does this mean? If you do your k3s install using the “quick installation” script from https://rancher.com/docs/k3s/latest/en/installation/install-options/ without any modifications or optional variables, you’ll be able to run kubectl commands with sudo on Debian-like systems, but not on RHEL-like ones.

  • Choosing the right way

OK, I have two options:

  1. Make k3s write its k3s.yaml file as readable by everybody (install using
    curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644 ), so that kubectl can be used without sudo. This doesn’t seem super secure – not only “opc”, but also any other user on this machine will be able to read the k3s.yaml, and this file contains a plain-text admin password…
  2. Modify the sudoers config file (run sudo visudo, then add the /usr/local/bin path to the secure_path variable, similar to the setup on Debian). This way only users authorized to sudo will be able to read the file, but you’ll have to prefix every kubectl call with sudo. And maybe your security rules forbid adding additional paths to secure_path – what then?

Which one is best? That’s up to you to decide: the second one has better security but is slightly more annoying – you need to type sudo all the time. Think about your use case and choose whichever suits it better. I’m going forward with the second one – adding /usr/local/bin to the sudoers file.

I ran sudo visudo and added /usr/local/bin to the secure_path variable:
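For reference, the resulting line in my sudoers file now looks like this (the first four paths are the OL7 defaults):

Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin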

Now, sudo kubectl get nodes works just fine!

3. Now what?

The cluster is installed and is running on one node – what can I do with it now?

I’ll reserve another article for that – stay tuned!

4 replies on “Deploying K3S on Oracle Cloud Infrastructure – part 1”

Hello,
Thank you for this tutorial. Just a remark: I managed to get k3s working only by disabling firewalld.

I fully agree with you… I forgot to mention installing iptables-services to replace firewalld 🙂
BTW, I’m running a 2-node k3s cluster on OCI Always Free services. Did you manage to get the OCI load balancer working together with k3s and oci-cloud-controller-manager?

No, I didn’t have time to try this, unfortunately… I was pulled in to work on other urgent things. I will resume the blog posts as soon as I have time!
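For readers hitting the same firewalld issue on OL7, the workaround mentioned in this thread would look roughly like this (a sketch – adapt it to your own security requirements):

# stop and disable firewalld, then switch to plain iptables
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo yum install -y iptables-services
sudo systemctl enable iptables
sudo systemctl start iptables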
