linux (containers) web services
MIT License
lws is a Command-Line Interface (CLI) tool designed to streamline the management of Proxmox Virtual Environments (VE), LXC containers, and Docker services through a unified, efficient interface.
[!WARNING] Unstable, untested release under active development :)
lws was created to simplify and unify the management of Proxmox VE, LXC containers, and Docker services using a single command-line tool. Although powerful, lws is still in its early stages and was developed primarily for learning and exploration. It should be used with caution, especially in production environments.
Before using lws, ensure you have a working Python 3 environment with pip, and SSH access to the Proxmox VE hosts you want to manage.
1. Clone the repository:

   ```shell
   git clone https://github.com/fabriziosalmi/lws.git
   cd lws
   ```

2. Install dependencies:

   ```shell
   pip install -r requirements.txt
   ```

3. Make lws executable:

   ```shell
   chmod +x lws.py
   ```

4. Create a shortcut (add the alias to your shell profile to keep it across sessions):

   ```shell
   alias lws='./lws.py'
   ```

5. Verify the installation:

   ```shell
   lws --help
   ```
lws is configured using a `config.yaml` file. This file defines your environment settings, including regions, availability zones (AZs), instance sizes, network settings, security credentials, and scaling parameters.
Example `config.yaml`:

```yaml
use_local_only: false
start_vmid: 10000
default_storage: local-lvm
default_network: vmbr0
minimum_resources:
  cores: 1
  memory_mb: 512
regions:
  eu-south-1:
    availability_zones:
      az1:
        host: proxmox1.public-fqdn.com  # example: public FQDN (access must be secured)
        user: root
        ssh_password: password
      az2:
        host: 172.23.0.2  # example: VPN address
        user: root
        ssh_password: password
      az3:
        host: proxmox3.local  # example: local network
        user: root
        ssh_password: password
      az4:
        host: 192.168.0.4  # example: LAN address
        user: root
        ssh_password: password
  eu-central-1:
    availability_zones:
      pve-rhine:
        host: pve-rhine.mydomain.com
        user: root
        ssh_password: password
      pve-alps:
        host: pve-alps.mydomain.com
        user: root
        ssh_password: password
```
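To make the region/AZ lookup concrete, here is a minimal sketch of how a `--region`/`--az` pair maps to an SSH endpoint in this layout; `resolve_endpoint` is a hypothetical helper for illustration, not part of lws:

```python
# The nested dict mirrors the config.yaml layout above (abbreviated).
config = {
    "regions": {
        "eu-south-1": {
            "availability_zones": {
                "az1": {"host": "proxmox1.public-fqdn.com", "user": "root"},
                "az2": {"host": "172.23.0.2", "user": "root"},
            }
        }
    }
}

def resolve_endpoint(config, region, az):
    """Return (host, user) for a region/AZ pair, with a readable error."""
    try:
        zone = config["regions"][region]["availability_zones"][az]
    except KeyError as exc:
        raise KeyError(f"unknown region/AZ: {region}/{az}") from exc
    return zone["host"], zone["user"]

print(resolve_endpoint(config, "eu-south-1", "az2"))  # ('172.23.0.2', 'root')
```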
[!IMPORTANT] Secure your `config.yaml` file to prevent exposure of sensitive information. Consider using tools like `ansible-vault` or environment variables to manage sensitive data securely.
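For example, a hypothetical environment-variable override for the plaintext `ssh_password` could look like the sketch below (`LWS_SSH_PASSWORD` is an invented name, not a variable lws currently reads):

```python
import os

# Prefer a password from the environment; fall back to config.yaml.
# LWS_SSH_PASSWORD is a hypothetical variable name for illustration.
def ssh_password_for(az_config, env=os.environ):
    return env.get("LWS_SSH_PASSWORD") or az_config.get("ssh_password")

az = {"host": "proxmox3.local", "user": "root", "ssh_password": "password"}
print(ssh_password_for(az, env={}))                              # falls back to config
print(ssh_password_for(az, env={"LWS_SSH_PASSWORD": "s3cret"}))  # env wins
```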
lws offers various commands for managing Proxmox VE, LXC containers, and Docker services. Below are detailed examples for each command set.
Manage your Proxmox hosts and clusters with these commands. Use the `--region` and `--az` options to target specific regions and availability zones.
```shell
lws px list
```
[!TIP] Use the `list` command to quickly verify which Proxmox hosts are available for management in specific regions.
```shell
lws px backup --region eu-south-1
lws px backup-lxc 101 --region eu-south-1 --az az1
```
```shell
lws px cluster-restart --region eu-central-1 --az pve-rhine
lws px image-add 101 --region eu-central-1 --az pve-alps --template-name "my-template"
lws px image-rm --region eu-central-1 --az pve-rhine --template-name "my-template"
```
[!WARNING] Be careful when deleting images, as this action is irreversible and can result in data loss.
```shell
lws px status --region eu-south-1 --az az3
lws px reboot --region eu-south-1 --az az2
```
[!TIP] Rebooting a Proxmox host will temporarily affect all services running on it. Ensure you plan this operation during maintenance windows.
Manage LXC containers with these versatile commands. Just specify the container ID and, optionally, the region and AZ.
```shell
lws lxc start 101 --region eu-central-1 --az pve-alps
lws lxc stop 101 --region eu-central-1 --az pve-alps
lws lxc reboot 101 --region eu-central-1 --az pve-alps
lws lxc terminate 101 --region eu-central-1 --az pve-alps
```
[!WARNING] The `terminate` command permanently deletes the LXC container. Use it with caution.
```shell
lws lxc clone 101 102 --region eu-central-1 --az pve-alps
lws lxc migrate 101 --region eu-central-1 --source-az pve-rhine --target-az pve-alps
lws lxc exec 101 --region eu-central-1 --az pve-alps --command "apt-get update"
lws lxc scale 101 --region eu-central-1 --az pve-alps --cpu 4 --memory 8192
```
[!TIP] Scaling resources can help optimize performance but may also increase resource consumption on your host.
```shell
lws lxc snapshot-add 101 --region eu-central-1 --az pve-alps --name "pre-update"
lws lxc show-snapshots 101 --region eu-central-1 --az pve-alps
lws lxc volume-attach 101 --region eu-central-1 --az pve-alps --volume "my-volume"
lws lxc show-public-ip 101 --region eu-central-1 --az pve-alps
```
Manage Docker services within LXC containers using these commands. Specify the container ID along with the region and AZ.
```shell
lws app setup 101 --region eu-south-1 --az az1
lws app deploy 101 --region eu-south-1 --az az1 --compose-file docker-compose.yml
```
[!TIP] Ensure your `docker-compose.yml` file is correctly configured before deployment to avoid runtime issues.
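As a rough illustration, a pre-deploy sanity check might verify that the file at least declares a top-level `services:` section (for full validation, `docker compose config` is the authoritative tool). The helper below is an assumption for illustration, not part of lws:

```python
# Minimal structural check that avoids a YAML-parser dependency:
# a Compose file must define a top-level services: mapping.
def looks_like_compose(text):
    return any(line.rstrip() == "services:" for line in text.splitlines())

good = "services:\n  web:\n    image: nginx:alpine\n"
print(looks_like_compose(good))        # True
print(looks_like_compose("image: x"))  # False
```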
```shell
lws app list 101 --region eu-south-1 --az az1
lws app logs 101 --region eu-south-1 --az az1 --container "my-container"
lws app update 101 --region eu-south-1 --az az1
```
[!TIP] Regularly updating your Docker containers ensures they are running the latest versions with security patches.
Manage your lws client configurations with these commands.
```shell
lws conf backup --output backup-config.yaml
lws conf show
lws conf validate
```
[!IMPORTANT] Always validate your configuration after making changes to avoid runtime errors.
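A validation pass along these lines could, for example, check that every AZ entry carries the keys the configuration examples above use; the rules and function below are illustrative sketches, not lws's actual checks:

```python
# Hypothetical validation rule: each AZ must define host, user, and a credential.
REQUIRED_AZ_KEYS = {"host", "user", "ssh_password"}

def validate(config):
    errors = []
    for region, rdata in config.get("regions", {}).items():
        for az, zdata in rdata.get("availability_zones", {}).items():
            missing = REQUIRED_AZ_KEYS - zdata.keys()
            if missing:
                errors.append(f"{region}/{az}: missing {sorted(missing)}")
    return errors

cfg = {"regions": {"eu-south-1": {"availability_zones": {
    "az1": {"host": "proxmox1.public-fqdn.com", "user": "root"}}}}}
print(validate(cfg))  # ["eu-south-1/az1: missing ['ssh_password']"]
```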
Instance profiles define the resource allocations (memory, CPU, storage) for different types of workloads. These can be customized for specific applications, ranging from general-purpose to specialized setups.
```yaml
instance_sizes:
  micro:
    memory: 512
    cpulimit: 1
    storage: local-lvm:4
  small:
    memory: 1024
    cpulimit: 1
    storage: local-lvm:8
  mid:
    memory: 2048
    cpulimit: 2
    storage: local-lvm:16
  large:
    memory: 4096
    cpulimit: 2
    storage: local-lvm:32
  x-large:
    memory: 8192
    cpulimit: 4
    storage: local-lvm:64
  # Specialized instance profiles for specific applications
  lws-postgres:
    memory: 4096           # 4 GB
    cpulimit: 2            # 2 vCPUs
    storage: local-lvm:40  # 40 GB of storage
    # Example: PostgreSQL for a relational database.
  lws-redis:
    memory: 2048           # 2 GB
    cpulimit: 1            # 1 vCPU
    storage: local-lvm:10  # 10 GB of storage
    # Example: Redis for in-memory caching.
```
[!TIP] Customize instance profiles based on the specific requirements of your applications. For example, databases like PostgreSQL may need more memory and CPU, while caching solutions like Redis can operate efficiently with fewer resources.
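As a sketch of how such profiles might be matched to a workload, the hypothetical helper below picks the smallest standard profile that satisfies a memory/CPU requirement (figures copied from `instance_sizes` above; `pick_profile` is not an lws command):

```python
# Standard profiles from the instance_sizes example (memory in MB).
profiles = {
    "micro":   {"memory": 512,  "cpulimit": 1},
    "small":   {"memory": 1024, "cpulimit": 1},
    "mid":     {"memory": 2048, "cpulimit": 2},
    "large":   {"memory": 4096, "cpulimit": 2},
    "x-large": {"memory": 8192, "cpulimit": 4},
}

def pick_profile(need_mb, need_cpu):
    """Return the name of the smallest profile meeting both requirements."""
    fits = [(p["memory"], name) for name, p in profiles.items()
            if p["memory"] >= need_mb and p["cpulimit"] >= need_cpu]
    return min(fits)[1] if fits else None

print(pick_profile(1500, 2))  # 'mid'
```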
Security settings within lws control aspects like SSH timeouts, discovery methods, and the number of parallel workers. These settings help secure your environment while ensuring efficient operations.
```yaml
security:
  discovery:
    proxmox_timeout: 2           # Timeout in seconds for Proxmox host discovery
    lxc_timeout: 2               # Timeout in seconds for LXC container discovery
    discovery_methods: ['ping']  # Methods used for discovering resources
    max_parallel_workers: 10     # Maximum number of parallel workers during discovery
```
[!TIP] Adjust the `max_parallel_workers` setting to optimize discovery operations based on your infrastructure's size and complexity.
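To illustrate how `max_parallel_workers` bounds discovery fan-out, here is a minimal thread-pool sketch; `probe()` is a stand-in for the real ping/SSH check (which would also honor the configured timeouts), not lws code:

```python
from concurrent.futures import ThreadPoolExecutor

HOSTS = ["proxmox1.public-fqdn.com", "172.23.0.2",
         "proxmox3.local", "192.168.0.4"]

def probe(host):
    # Placeholder check: pretend only non-IP hostnames answer.
    return host, not host[0].isdigit()

# max_workers plays the role of max_parallel_workers from the config.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = dict(pool.map(probe, HOSTS))

print(results)
```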
Scaling thresholds and triggers allow lws to automatically adjust resources (CPU, memory, storage) for LXC containers based on defined conditions met on both the Proxmox host and the LXC container. This feature ensures optimal performance while preventing resource exhaustion.
```yaml
scaling:
  host_cpu:
    max_threshold: 80           # Maximum host CPU usage (%) before scaling down
    min_threshold: 30           # Minimum host CPU usage (%) before scaling up
    step: 1                     # Base increment or decrement of CPU cores on the host
    scale_up_multiplier: 1.5    # Multiplier applied to the step size when scaling up
    scale_down_multiplier: 0.5  # Multiplier applied to the step size when scaling down
  lxc_cpu:
    max_threshold: 80           # Maximum LXC CPU usage (%) before scaling down
    min_threshold: 30           # Minimum LXC CPU usage (%) before scaling up
    step: 1                     # Base increment or decrement of CPU cores in the LXC
    scale_up_multiplier: 1.5
    scale_down_multiplier: 0.5
  host_memory:
    max_threshold: 70           # Host memory usage (%) before scaling down
    min_threshold: 40           # Host memory usage (%) before scaling up
    step_mb: 256                # Base amount of memory in MB to increase or decrease
    scale_up_multiplier: 1.25
    scale_down_multiplier: 0.75
  lxc_memory:
    max_threshold: 70           # Maximum LXC memory usage (%) before scaling down
    min_threshold: 40           # Minimum LXC memory usage (%) before scaling up
    step_mb: 256
    scale_up_multiplier: 1.25
    scale_down_multiplier: 0.75
  host_storage:
    max_threshold: 85           # Maximum host storage usage (%) before scaling down
    min_threshold: 50           # Minimum host storage usage (%) before scaling up
    step_gb: 10                 # Base increment or decrement of storage in GB
    scale_up_multiplier: 1.5
    scale_down_multiplier: 0.5
  lxc_storage:
    max_threshold: 85           # Maximum LXC storage usage (%) before scaling down
    min_threshold: 50           # Minimum LXC storage usage (%) before scaling up
    step_gb: 10
    scale_up_multiplier: 1.5
    scale_down_multiplier: 0.5
  limits:
    min_memory_mb: 512          # Minimum allowed memory for any LXC container
    max_memory_mb: 32768        # Maximum allowed memory for any LXC container
    min_cpu_cores: 1            # Minimum allowed CPU cores for any LXC container
    max_cpu_cores: 16           # Maximum allowed CPU cores for any LXC container
    min_storage_gb: 10          # Minimum allowed storage for any LXC container
    max_storage_gb: 1024        # Maximum allowed storage for any LXC container
  general:
    scaling_interval: 5                   # Interval in minutes between resource checks
    notify_user: true                     # Notify via CLI output when adjustments are made
    dry_run: false                        # If true, simulate adjustments without applying them
    scaling_log_level: DEBUG              # Log level for scaling operations (DEBUG, INFO, WARN, ERROR)
    use_custom_scaling_algorithms: false  # Enable if custom scaling algorithms are implemented
```
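The step, multiplier, and limits arithmetic above can be sketched as follows. Which threshold triggers an adjustment, and in which direction, is lws's decision; the helper name `adjust_cores` and the ceiling rounding are assumptions for illustration:

```python
import math

# Values copied from the scaling example above.
cpu = {"step": 1, "scale_up_multiplier": 1.5, "scale_down_multiplier": 0.5}
limits = {"min_cpu_cores": 1, "max_cpu_cores": 16}

def adjust_cores(current, cfg, limits, direction):
    """Apply one scaling step in the given direction, clamped to the limits."""
    delta = math.ceil(cfg["step"] * cfg[f"scale_{direction}_multiplier"])
    proposed = current + delta if direction == "up" else current - delta
    return max(limits["min_cpu_cores"], min(limits["max_cpu_cores"], proposed))

print(adjust_cores(2, cpu, limits, "up"))    # 4 = 2 + ceil(1 * 1.5)
print(adjust_cores(1, cpu, limits, "down"))  # 1, clamped to min_cpu_cores
```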
[!TIP] Use `notify_user: true` to get immediate feedback on scaling adjustments, which is especially useful in dynamic environments.
[!WARNING] Be cautious when setting the `dry_run` option to `false`, as real scaling adjustments will be applied. Ensure your thresholds and multipliers are well tested before using them in production.
Given that lws involves sensitive operations and SSH connections, it's important to:

- Ensure your `config.yaml` file is secure.
- Remember that lws is safer to use in test or development environments.
[!WARNING] Misconfigured SSH or insecure usage can lead to unauthorized access to your systems. Always follow best practices for SSH security.
lws is an open-source project developed for fun and learning. Contributions are welcome! Feel free to submit issues, feature requests, or pull requests.
```shell
git checkout -b feature-branch
```
[!TIP] Include clear commit messages and documentation with your pull requests to make the review process smoother.
lws is still in its infancy, and further features and improvements are planned.
This project is licensed under the MIT License. See the LICENSE file for more details.
[!WARNING] Disclaimer: lws is a project created for fun and exploration. It is not intended for production use, and users should exercise caution when using it on live systems. Always test thoroughly in a non-production environment.