
unable to upscale (an older) cluster (add new node) #80

Closed
Privatecoder opened this issue Jun 9, 2022 · 5 comments

@Privatecoder

Hi @vitobotta

I just tried to "upscale" a cluster by increasing the instance_count of a worker_node_pool and re-running the script (v0.5.7 via Docker).

The script creates the new instance and also logs ...server hetzner-cpx21-pool-data-worker3 is now up. However, it does not seem to recognize that the existing masters and worker nodes are up:

Waiting for server hetzner-cx21-master1 to be up...
Waiting for server hetzner-cx21-master2 to be up...
Waiting for server hetzner-cx21-master3 to be up...
...server hetzner-cpx21-pool-data-worker3 is now up.
Waiting for server hetzner-cpx21-pool-data-worker1 to be up...
Waiting for server hetzner-cpx21-pool-data-worker2 to be up...
Waiting for server hetzner-cpx21-pool-tools-worker1 to be up...
Waiting for server hetzner-cpx21-pool-tools-worker2 to be up...

and therefore it never runs the k3s install on the newly added node, nor does it continue with the firewall configuration, etc.

Any idea?

Best
Max

@vitobotta
Owner

Looks like I forgot to document this in the release notes. In the last update I made it possible to configure commands to run on servers after they are created, for example to upgrade OS packages. So the latest version assumes a server is up when it finds a file at /etc/ready with 'true' as its content. This file is created automatically once the user-defined commands have finished and the server has been rebooted.
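
Conceptually, the readiness check amounts to something like the following (a sketch only, not the tool's actual implementation; the server name is one of the examples from this thread, and root SSH access is assumed):

# A server counts as "up" once /etc/ready exists and contains exactly "true".
ssh root@hetzner-cx21-master1 'cat /etc/ready 2>/dev/null' | grep -qx 'true' \
  && echo "server is ready" \
  || echo "still waiting"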

Of course this file doesn't exist on servers created with previous versions. All you need to do is SSH into the existing servers and create the file /etc/ready with the word 'true' in it, then rerun the create command. Hope it helps!
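
If you have several servers to patch, a small loop does it in one go (a sketch; the server names are just the ones from this thread, and root SSH access is assumed):

# Mark every pre-existing server as ready so the new version picks them up.
for server in hetzner-cx21-master1 hetzner-cx21-master2 hetzner-cx21-master3 \
    hetzner-cpx21-pool-data-worker1 hetzner-cpx21-pool-data-worker2 \
    hetzner-cpx21-pool-tools-worker1 hetzner-cpx21-pool-tools-worker2; do
  ssh root@"$server" "echo true > /etc/ready"
done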

@Privatecoder
Author

awesome Vito!

Indeed, touch /etc/ready && echo "true" > /etc/ready works, so this can be closed :)

Thank you!

@vitobotta
Owner

Awesome :)

Privatecoder reopened this Jun 12, 2022
@Privatecoder
Author

Privatecoder commented Jun 12, 2022

@vitobotta I'm thinking:

Maybe you could write all additional packages installed through your script as stringified JSON to /etc/ready instead of just true, and then parse it back into an array when running the script again (to check whether it includes all of the to-be-installed packages)?

E.g.:

Write ["wireguard","fail2ban"] to /etc/ready.

On the next script run: check whether /etc/ready exists > parse the content back into an array and check whether all of the current additional_packages as well as your static packages (i.e. wireguard and fail2ban) are included > if not, install them.
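
On the server side, that check could look roughly like this (a sketch of the proposal only; as the next comment notes, it was not implemented. It assumes jq is available and uses the package names from the example above plus one hypothetical extra):

# Initial run: record the installed packages as a JSON array.
echo '["wireguard","fail2ban"]' > /etc/ready

# Subsequent runs: install anything not yet recorded, then update the list.
for pkg in wireguard fail2ban extra-package; do  # extra-package is hypothetical
  if ! jq -e --arg p "$pkg" 'index($p) != null' /etc/ready >/dev/null; then
    apt-get install -y "$pkg"
    jq --arg p "$pkg" '. + [$p] | unique' /etc/ready > /etc/ready.tmp \
      && mv /etc/ready.tmp /etc/ready
  fi
done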

@vitobotta
Owner

Hi, sorry for the delay. I just now have some time to work a bit on this project, so I am making updates, but I don't think I want to spend time changing this, since it would only benefit clusters created prior to that version and the fix is easy, albeit manual. I am focusing on more meaningful changes :)
