An intro to SaltStack
Software to automate [and document] the management and configuration of any infrastructure or application.
About Me
Nick Vissari
Security Guy & DevSecOps Enthusiast
Sysadmins have to configure systems
We spend hours upon hours reading the friendly manuals, watching YouTube videos, and working through quickstarts and tutorials.
A lot of what we owe our expertise to is the ability to google better than the next person.
So we end up with the responsibility to maintain and configure infrastructure.
Keeping track of things is hard
Which system is doing what.
What do we patch when.
Which stuff is on prem and what's in the cloud.
How do I join my linux systems to our windows domain again?
How do we keep track of all the things?
Documentation to the rescue!
Word docs
Text files
Uber 1337 h4x0r shell scripts
Talking to yourself on Slack
Bookmarks to Stack Overflow answers
Here's a list of effective methods of documenting.
Shout out to my org mode folks?
I have used them all, you may have your preferred method.
But then we are still left with some problems.
Documentation can drift as the system evolves over time.
Where do we put all this documentation so it's accessible to the team?
How do we make the time to document what we are changing?
How do we keep track of changes to documentation and what that means to systems that are on the old way of doing things vs the new way?
Personally, I hate documentation for the sake of documentation.
You can't document well if you don't know your audience.
Ideally the audience of my documentation is future me, and I don't know what future me is going to forget.
I know from present me that past me was an idiot and did things in very wonky ways.
So how can I keep all this documentation in a way that future me will thank past me instead of thinking I'm an idiot?
Infrastructure as code!
& track it all with git
wikipedia.org/wiki/Infrastructure_as_code
We can write our infrastructure as code and store everything in git.
Using git we can keep track of what changed and when, leaving descriptive commit messages for ourselves.
Just by looking at the source code we can answer the who, what, where, how, and why behind every infrastructure change.
And we aren't documenting for the sake of documentation.
We are describing, in code, the actual system level changes that will modify our configuration.
SaltStack is a client/server technology that, by default, runs as root/SYSTEM on the client.
You can use it to:
- run commands
- install software
- update software
- get information about systems
- store information about systems
- and declare all of this in code as "granular" as you like
Get used to the salt puns, there are more to come.
Notice the minions are calling into the master.
One of the minions is in the "cloud".
So what does a configuration change look like in SaltStack?
SaltStack enables us to deploy configurations to servers and document the configuration.
What do we need? xyz installed.
How are we installing it? Package manager or script.
We need to ensure it's installed (or not): the "state" of the installation.
Where are we installing xyz?
- Maybe specific servers A and C but not B.
- Or all servers that have a specific attribute like an IP address of 10.200.something.
Then deploy our state.
- This is where you would leave a helpful commit message for future you.
- Git log will show us when, why, and who.
Install salt master
salt-talk/lab$ vagrant ssh master
$ sudo apt install curl
$ curl -L https://bootstrap.saltstack.com -o install_salt.sh
$ sudo sh install_salt.sh -P -M
Start up our three systems: one master and two minions, all running Debian stretch.
Install curl on all three, and I'm going to install vim on the master since it's my editor of choice.
Install salt using the bootstrap script.
Install salt minions
salt-talk/lab$ vagrant ssh minion1
$ sudo apt install curl
$ curl -L https://bootstrap.saltstack.com -o install_salt.sh
$ sudo sh install_salt.sh -P
There's some magic happening in the background, the hostname "salt" resolves to the master.
I don't need to specify the salt master, but there is a parameter for that if you want to name the master something other than "salt" or run salt over the internet.
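If the hostname "salt" doesn't resolve, you can point the minion at the master explicitly in its config file. A minimal sketch, with a placeholder address:

```yaml
# /etc/salt/minion (or a drop-in file under /etc/salt/minion.d/)
# "salt.example.com" is a placeholder -- use your master's real hostname or IP
master: salt.example.com
```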
Accept the keys
The minions will accept the master's key by default. If you recreate the master without a backup of the master's key, the clients won't talk to the master anymore and you'll have to redeploy.
Now on the master we have to accept the minions' keys. There is mutual authentication occurring here, but the exchange is susceptible to MITM, so it should be done on a trusted network.
And if you are running on the internet you don't want to just accept all clients. I plan on putting together some material on evil minions. We'll talk about that later.
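On the master, key management is done with salt-key. A typical session might look like this (minion names are from this lab):

```shell
salt-key -L             # list accepted, rejected, and pending keys
salt-key -f minion1     # show minion1's fingerprint before trusting it
salt-key -a minion1     # accept a single minion's key
salt-key -A             # accept all pending keys (only on a trusted network!)
```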
Test connectivity
Run commands using cmd.run
Commands executed like this in SaltStack happen in parallel.
All machines connected to the salt master will independently run the command and return in whatever order they finish.
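Assuming the keys are accepted, a quick sanity check followed by an ad-hoc command might look like:

```shell
salt '*' test.ping          # every connected minion should answer True
salt '*' cmd.run 'uptime'   # run on all minions; results return as each finishes
```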
Errors are fun
And you'll be warned about minions that error.
At this point we have an extremely powerful tool. One console with a constant root shell to everything connected to it.
You might be tempted to stop here and that would be perfectly understandable but let's see what else we can do.
Grains
SaltStack collects grains: bits of information about a host.
You can get grains one at a time.
You can set grains and use them later. Grains are great for describing system roles or a system's makeup.
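For example (the grain names are standard, the role value is our own):

```shell
salt '*' grains.item os                    # get one grain at a time
salt '*' grains.items                      # everything salt knows about each host
salt 'minion2' grains.setval role worker   # set a custom grain for later targeting
```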
Targeting minions
'server*' = server1 server2 server3 server-fred
'server?' = server1 server2 server3
'server[1,3]' = server1 server3
'server[1-3]' = server1 server2 server3
-L 'server1,server2'
-G 'role:worker' = whoever has the grain role=worker
-S '192.168.0.0/16' = whoever is in the subnet
Without any switches, the default targeting uses shell style globbing.
It's customary to surround the target with single quotes so the command doesn't conflict with your shell.
We can also use lists, grains, subnets and many other methods for targeting.
Compound matchers
-C 'G@role:worker and
G@os_family:debian and
not *fred'
You can combine as many as you like in a compound matcher with boolean logic that supports parenthesis.
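Put on one line, the compound match from the slide would be run as:

```shell
salt -C 'G@role:worker and G@os_family:debian and not *fred' test.ping
```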
A state is two things in one: first, a check that the system has whatever you want it to have; second, whatever is required to take the system from its current state to the state you want it to be in.
S aL t S tate file
xyz.sls
xyz:
  pkg.installed

or

xyz:
  cmd.script:
    - source: salt://xyz-installed.sh
Here we have two different state files that demonstrate the same thing.
The first one is using SaltStack's built-in abstraction over the OS's package manager. On Debian that's apt; if this were CentOS it would be yum.
The second state file is more appropriate to use if the thing you want installed is not in the package manager.
Since we are writing scripts and naming things, it's a good idea at this point to start tracking changes.
This is where git comes in.
Keep track of what you do
cd /srv
git config --global user.name "John Doe"
git config --global user.email johndoe@example.com
git init
This isn't a talk on how to use git. We are just going to use what we need and if we screw it up, that's ok.
We'll delete it all and just start over with a fresh copy.
By default salt states are stored in the salt file server at /srv/salt. Let's initialize /srv/ and keep track of our new states.
/srv/salt/screenfetch/installed.sls
screenfetch:
  pkg.installed
For our first state file we'll do something simple: we'll use the OS's built-in package manager to deploy screenfetch.
Test it
...
Now we can test our state file by deploying it to one minion.
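A dry run first, then the real thing (minion name from this lab):

```shell
salt 'minion1' state.apply screenfetch.installed test=True   # show what would change
salt 'minion1' state.apply screenfetch.installed             # actually apply it
```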
It works, commit it!
Ok we have a working state file, what do we do with it?
Now we can describe where this state gets deployed.
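The commit itself is ordinary git. Here's a self-contained sketch that uses a throwaway directory in place of /srv (paths and the commit message are illustrative):

```shell
repo=$(mktemp -d) && cd "$repo"     # stand-in for /srv in this sketch
git init -q .
mkdir -p salt/screenfetch
printf 'screenfetch:\n  pkg.installed\n' > salt/screenfetch/installed.sls
git add salt/screenfetch/installed.sls
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "screenfetch: install via package manager"
git log --oneline   # future you: who did what, when
```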
/srv/salt/top.sls
base:
  'minion?':
    - screenfetch.installed
The top.sls file has special significance. You can think of it like main. All minions will look to the top.sls file when applying state; in salt world they call this the highstate.
I guess they ran out of puns. Or perhaps "seasoning" the systems was too much.
Now we deploy using state.apply to set however many minions we want to set their highstate.
...
Notice one of the minions failed hard. The reason is that we called state.apply and that minion doesn't have any states to apply.
You should always be able to call state.apply to get your systems in the appropriate state without fear of causing significant disruption.
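Setting the highstate across targets looks like:

```shell
salt 'minion*' state.apply   # each matched minion applies whatever top.sls assigns it
```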
/srv/salt/go/installed.sh
#!/bin/bash
# Download and extract go
if [ ! -f "/usr/local/go/bin/go" ]
then
  cd /root
  wget -q https://dl.google.com/go/go1.14.linux-amd64.tar.gz
  tar -C /usr/local -xzf /root/go1.14.linux-amd64.tar.gz
  rm /root/go1.14.linux-amd64.tar.gz
fi
# Add go bin path to global profile
if ! grep /usr/local/go/bin /etc/profile >/dev/null
then
  echo export PATH=\$PATH:/usr/local/go/bin >> /etc/profile
fi
Here's another, more complicated example: installing the latest version of go. The version in the package
manager differs from the latest version. Notice this script is separated into two parts: the first downloads
and extracts go, the second adds the go bin path to the profile. If we were to run this script twice, what
would happen the second time?
/srv/salt/go/installed.sls
go:
  cmd.script:
    - source: salt://go/installed.sh
Now we create a state file that will execute the script from the salt file server.
Here's what it looks like when we run this state a second time. Notice the state is "changed" even though
we know nothing actually happened.
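One way to make that report honest is cmd.script's unless argument, which skips the script (and reports no change) when the check command succeeds. A sketch of that variant:

```yaml
# /srv/salt/go/installed.sls -- variant with an idempotence check
go:
  cmd.script:
    - source: salt://go/installed.sh
    - unless: test -x /usr/local/go/bin/go   # skip if go is already in place
```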
/srv/salt/top.sls
base:
  'minion?':
    - screenfetch.installed
  'salt':
    - go.installed
Now we update our top file to describe where go will be installed.
/srv/salt/netdata/installed.sh
#!/bin/bash
if [ ! -f "/usr/sbin/netdata" ]
then
  bash <(curl -Ss https://my-netdata.io/kickstart.sh)
fi
Let's collect some telemetry using netdata. The kickstart script for netdata will complain about a bunch
of missing packages and fail, so we have to include some dependencies.
/srv/salt/netdata/deps.sls
autoconf:
  pkg.installed
autoconf-archive:
  pkg.installed
autogen:
  pkg.installed
automake:
  pkg.installed
cmake:
  pkg.installed
gcc:
  pkg.installed
git:
  pkg.installed
libjudy-dev:
  pkg.installed
liblz4-dev:
  pkg.installed
libmnl-dev:
  pkg.installed
libssl-dev:
  pkg.installed
libuv1-dev:
  pkg.installed
make:
  pkg.installed
pkg-config:
  pkg.installed
uuid-dev:
  pkg.installed
zlib1g-dev:
  pkg.installed
We can make a file that defines all the dependencies and ensure it's applied before we attempt to install netdata.
/srv/salt/netdata/installed.sls
include:
  - netdata.deps

netdata:
  cmd.script:
    - source: salt://netdata/installed.sh
Now we have a state file that includes another state file. These states are blocking and will execute in order.
/srv/salt/top.sls
base:
  'minion?':
    - screenfetch.installed
  'salt':
    - go.installed
  'G@role:worker':
    - match: compound
    - netdata.installed
Finally we update the top.sls file to describe where netdata will be installed. Now when we run state.apply
nothing will happen yet, since we are targeting systems with the grain role=worker and no minion has that grain.
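To actually roll it out, give a minion the grain and re-apply (minion name from this lab):

```shell
salt 'minion2' grains.setval role worker   # tag the minion as a worker
salt 'minion2' state.apply                 # now the top file matches and netdata installs
```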
/srv/pillar/top.sls
base:
  '*':
    - users
/srv/pillar/users.sls
users:
  nick: 30001
  tim: 30002
  jen: 30003
Pillars are tree-like structures of data defined on the Salt Master and passed through to minions.
They allow confidential, targeted data to be securely sent only to the relevant minion.
Think of what happens if a minion went evil. If you had to distribute secrets, you wouldn't want all minions knowing all secrets. If you put your secrets in a state file, that's what you are doing. All minions have access to all data in the salt file server. Not so with pillar data.
Here's an example that shows distributing a list of users and their ids to all minions.
/srv/salt/users/init.sls
{% for user, uid in pillar.get('users', {}).items() %}
{{user}}:
  user.present:
    - uid: {{uid}}
{% endfor %}
becomes
nick:
  user.present:
    - uid: 30001
tim:
  user.present:
    - uid: 30002
jen:
  user.present:
    - uid: 30003
Now we can create a state file that leverages the pillar. This looks a lot different from the others we've seen.
It's using a templating language called Jinja. Between the curly braces is a little Python-like expression syntax, and when this expands it makes more sense.
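You can inspect what a minion actually sees with the pillar module (keys from the example above):

```shell
salt '*' pillar.get users                # each minion shows only the data targeted at it
salt 'minion1' saltutil.refresh_pillar   # refresh after editing pillar files
```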
Summary
/srv/salt - what is deployed, how
/srv/salt/top.sls - where are states deployed
Git - who did what when
/srv/pillar - kinda secret store for data, great for complicated jinja stuff
/srv/pillar/top.sls - where the data goes
Should you keep pillar data in source control?
Why shouldn't you use grains to target minions with pillar data?
Extra bits
docs.saltstack.com
python, yaml, jinja are our friends
salt-ssh - clientless salt
salt mine - dynamic data from minions for minions
salt runners - convenient programs for the master
The salt mine is useful for exchanging data between minions, e.g. WireGuard public keys.
A salt runner could be used to interface with a DRAC or send messages to Slack.
Thanks
Go forth and DevOps!
...and have fun!