
Notes from an attempt to build a k8s-managed VPN.

- vpn-instances
- luma-planet

These provision the instances on Vultr. The example below uses 2 masters + 2 workers, with an EC2 Amazon Linux instance as the working host. (Note: Vultr's environment keeps changing, and this no longer works as written.)
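
The provisioning itself lives in the repos above; as a rough sketch of what creating a single instance looks like against the Vultr v2 API (the region, plan, and os_id values here are illustrative, and as noted above, the API surface may have drifted since):

```shell
# create one instance via the Vultr v2 API; list valid os_id values with GET /v2/os
curl -s -X POST "https://api.vultr.com/v2/instances" \
  -H "Authorization: Bearer ${VULTR_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"region": "nrt", "plan": "vc2-1c-1gb", "os_id": 387, "label": "master0"}'
```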

```shell
# amazon-linux
# find the exact topic name ("ansible2" on Amazon Linux 2)
amazon-linux-extras | grep ansible
sudo amazon-linux-extras enable ansible2
sudo yum install -y ansible
```

Manage the kubespray source with something like a git submodule and copy it onto the working instance.
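
A minimal sketch of the submodule approach (the upstream repository URL is the real one; the tag shown is only an example, pin whichever release you actually tested against):

```shell
git submodule add https://github.com/kubernetes-sigs/kubespray.git kubespray
# pin to a known-good release tag instead of tracking master
cd kubespray && git checkout v2.18.0
```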

```shell
# amazon-linux
# in kubespray/

python3 -m venv ../venv
source ../venv/bin/activate

pip3 install -r requirements.txt
cp -rfp inventory/sample inventory/mycluster

# replace with the IPs of your own nodes
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
```

Edit the generated inventory so it looks like this:

```yaml
all:
  hosts:
    master0:
      ansible_host: 45.76.215.176
      ip: 45.76.215.176
      access_ip: 45.76.215.176
      ansible_ssh_private_key_file: ~/.ssh/id_ed25519
      ansible_user: root
    master1:
      ansible_host: 167.179.98.152
      ip: 167.179.98.152
      access_ip: 167.179.98.152
      ansible_ssh_private_key_file: ~/.ssh/id_ed25519
      ansible_user: root
    worker0:
      ansible_host: 108.160.130.242
      ip: 108.160.130.242
      access_ip: 108.160.130.242
      ansible_ssh_private_key_file: ~/.ssh/id_ed25519
      ansible_user: root
    worker1:
      ansible_host: 167.179.101.158
      ip: 167.179.101.158
      access_ip: 167.179.101.158
      ansible_ssh_private_key_file: ~/.ssh/id_ed25519
      ansible_user: root
  children:
    kube_control_plane:
      hosts:
        master0:
        master1:
    kube_node:
      hosts:
        master0:
        master1:
        worker0:
        worker1:
    etcd:
      hosts:
        master0:
        master1:
        worker0:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
```
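
Before deploying, it is worth confirming that Ansible can actually reach every host with this inventory; a quick check using Ansible's standard ad-hoc ping module:

```shell
# from kubespray/, with the venv activated
ansible -i inventory/mycluster/hosts.yaml all -m ping
```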

Append the following to group_vars/all/all.yml. This relaxes kubespray's minimum-memory preinstall checks so that small instances pass:

```yaml
minimal_node_memory_mb: 512
minimal_master_memory_mb: 512
```
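
The notes stop here, but the actual deploy from this point is kubespray's standard cluster.yml playbook, usually invoked like this:

```shell
# from kubespray/, with the venv activated
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```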
