.bashrc series

One File, Whole Fleet

/etc/<org>/bash-env.sh + an Ansible role

May 24, 2026 ~7 min
bash · ansible · fleet · infra-as-code

Five articles into this series we've covered history, prompts, conditional aliases, and secrets. All of that lives in ~/.bashrc. So on a fleet of fifteen Linux boxes, with five users per box on average, you now have seventy-five bashrc files to keep in sync. Which is exactly how every team I've worked with arrives at the conclusion that managing the bash environment "doesn't scale."

It does. You just have to invert the architecture.

The pattern: one fleet file, slim per-user wrappers

/etc/<org>/bash-env.sh     # Fleet-wide. Managed by Ansible. One source of truth.
~/.bashrc                  # Per-user. Twelve lines. Sources the fleet file.
~/.bashrc.local            # Per-user overrides. Never touched by Ansible.

The fleet file contains everything we've discussed in the series: history settings, color prompt logic, conditional kubectl/helm/docker blocks, safe aliases, organization-specific helpers. Edit it once, push via Ansible, every shell on every box gets the update at next login.
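
To make that concrete, here's a sketch of what a few stanzas of the fleet file might look like, pulled together from the topics covered earlier in the series. The specific aliases and guard conditions below are illustrative, not a verbatim excerpt of the real mcaster1 file:

# /etc/mcaster1/bash-env.sh -- fleet-wide, managed by Ansible (illustrative excerpt)

# History settings
export HISTSIZE=100000
export HISTFILESIZE=200000
export HISTCONTROL=ignoreboth
shopt -s histappend

# Color prompt, only when the terminal can actually do color
if tput setaf 1 >/dev/null 2>&1; then
    PS1='\[\e[32m\]\u@\h\[\e[0m\]:\[\e[34m\]\w\[\e[0m\]\$ '
fi

# Conditional tool blocks: only define what exists on this box
if command -v kubectl >/dev/null 2>&1; then
    alias k='kubectl'
    source <(kubectl completion bash)
fi

# Safe aliases
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'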

The per-user .bashrc is intentionally tiny:

# ~/.bashrc — managed by Ansible role `bash-env`.
# Almost everything lives in /etc/mcaster1/bash-env.sh.

case $- in *i*) ;; *) return;; esac

[ -f /etc/bash.bashrc ] && . /etc/bash.bashrc
[ -f /etc/mcaster1/bash-env.sh ] && . /etc/mcaster1/bash-env.sh
[ -f "$HOME/.bashrc.local" ] && . "$HOME/.bashrc.local"

Twelve lines. Identical for every user on every box. The interesting stuff lives in the fleet file. Personal stuff goes in ~/.bashrc.local, which the role won't touch (that's the topic of the next article).
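
For a sense of scale, a personal ~/.bashrc.local might be nothing more than a couple of lines. This one is a hypothetical example, not anything shipped by the role:

# ~/.bashrc.local -- personal, never touched by Ansible (hypothetical example)
alias gs='git status'
export EDITOR=vim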

The Ansible role layout

roles/bash-env/
├── files/
│   ├── mcaster1-bash-env.sh         # The fleet file
│   ├── mcaster1-bashrc-template     # The slim ~/.bashrc
│   └── mcaster1-profile-template    # Standard Debian-style .profile
├── templates/
│   ├── mysql_env.sh.j2              # Per-user secrets (vaulted)
│   └── my.cnf.j2                    # MySQL CLI config
├── vars/main.yml                    # User mapping per host
└── tasks/main.yml                   # Deployment logic

The key insight is that the role iterates over users, but the users it acts on are discovered from the target host, not hard-coded. Pseudocode:

users_to_manage = ["root", "dstjohn", "bthlops", "mediacast1", "caster-yp", "caster-www"]
present_users = []
for u in users_to_manage:
    if getent passwd $u succeeds AND that user's home directory exists:
        present_users.append(u)
for u in present_users:
    deploy ~/.bashrc, ~/.profile, ~/.bashrc.local stub to $u's home

This is critical. On the Kubernetes worker nodes, only root and dstjohn exist. On the OVH web host, all six users exist. The role auto-detects per-host and skips users that aren't there. One playbook, every box, no per-host customization.
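
Here is a minimal Ansible sketch of that discovery step, assuming a users_to_manage list in vars/main.yml; the task names and registered variables are illustrative, not copied from the role:

- name: Look up each candidate user in the passwd database
  ansible.builtin.command: "getent passwd {{ item }}"
  register: passwd_lookup
  failed_when: false        # a missing user is not an error, just "not present"
  changed_when: false
  loop: "{{ users_to_manage }}"

- name: Check that each existing user's home directory is really there
  ansible.builtin.stat:
    path: "{{ item.stdout.split(':')[5] }}"   # field 6 of a passwd entry is the home dir
  register: home_check
  loop: "{{ passwd_lookup.results | selectattr('rc', 'equalto', 0) | list }}"
  loop_control:
    label: "{{ item.item }}"

- name: Build present_users from both checks
  ansible.builtin.set_fact:
    present_users: "{{ home_check.results | selectattr('stat.exists') | map(attribute='item.item') | list }}"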

The backup-on-first-deploy step

The first time you push this role across a fleet, you're overwriting bashrc files that may have customizations users care about. The role handles this with a one-time backup task:

- name: One-time backup of existing .bashrc (0600, owner-only)
  shell: |
    home="{{ user_home }}"
    if [ -f "$home/.bashrc" ] && [ ! -f "$home/.bashrc.preMc1.bak" ]; then
        cp -p "$home/.bashrc" "$home/.bashrc.preMc1.bak"
        chmod 0600 "$home/.bashrc.preMc1.bak"
        echo backed_up
    fi
  register: bashrc_backup
  changed_when: "'backed_up' in bashrc_backup.stdout"   # only the run that creates the backup reports a change

Three things to note.

  1. The backup is only made on the first run. Subsequent runs see the .preMc1.bak file and skip, so you can rerun the role daily without piling up backups.
  2. The backup is chmod 0600, because the file might contain inline secrets like our caster-yp case from the previous article, and we don't want the backup leaving the password world-readable.
  3. Users keep a path back to their original config if they hate the new one.

The verification step

The last task in the role is a paranoid check:

- name: Verify caster-yp bashrc no longer contains plaintext password
  shell: "grep -c 'PASSWORD_LITERAL' {{ user_home }}/.bashrc || true"
  register: result
  changed_when: false
  failed_when: result.stdout | int > 0
  when: "'caster-yp' in present_users"

This guarantees the role hasn't left the password in the world-readable file. If you accidentally leave the original cred-bearing bashrc in place, the playbook fails. You can't ship a half-applied fix.

What happens at update time

You want to add a new alias to the fleet. The flow is:

  1. Edit roles/bash-env/files/mcaster1-bash-env.sh — add your alias.
  2. Commit, push.
  3. Run ansible-playbook bash-env.yml.
  4. Every box gets the new /etc/mcaster1/bash-env.sh on the same run. New shells pick it up automatically at the next login; an already-open session can run source /etc/mcaster1/bash-env.sh to pick it up immediately.

The per-user .bashrc files never need to change. The personal .bashrc.local files never get touched.
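
For completeness, here is a hedged sketch of what the fleet-file deployment tasks could look like. The paths match the layout above; the task wording, ownership, and modes are assumptions rather than the role's exact tasks:

- name: Ensure the /etc/mcaster1 directory exists
  ansible.builtin.file:
    path: /etc/mcaster1
    state: directory
    owner: root
    group: root
    mode: "0755"

- name: Deploy the fleet-wide bash environment file
  ansible.builtin.copy:
    src: mcaster1-bash-env.sh
    dest: /etc/mcaster1/bash-env.sh
    owner: root
    group: root
    mode: "0644"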

Why this beats config-management tools that "own" the dotfile

Tools like Chef and Puppet often manage ~/.bashrc as a templated file directly — one template per user, rendered per host. That works, but it fights the engineer's instinct to customize their own bashrc. The moment a user edits their .bashrc, the next Chef run overwrites it. They learn either to disable the cookbook or to never edit dotfiles.

The fleet-file pattern preserves the engineer's autonomy. ~/.bashrc is "managed" in the sense that it sources the fleet file, but it's a stable, slim wrapper that doesn't change from run to run. The actual customization vector — ~/.bashrc.local — is theirs, and they never feel managed out of their own shell.

Per-user bashrc files don't scale. A fleet file with slim wrappers does. The architecture is what unlocks the rest of the series.

Last article in the series: .bashrc.local and the Discipline of Not Editing Managed Files.

All .bashrc articles